Across the blockchain projects I've worked on over the years, the hardest challenges weren't around smart contracts or wallet integrations; they were architectural. Specifically, how to store and serve fragmented data efficiently across multiple chains, APIs, and token types without overcomplicating the stack. As the number of data sources and blockchain standards grew, so did the need for a more flexible, scalable approach to data modeling. That's where MongoDB came in.
In early projects, my stack was heavily based on relational databases and GraphQL for querying. While these tools served many use cases well, they started to show serious limitations when dealing with multi-source, semi-structured data, such as on-chain events, public APIs, token metadata, wallet details, and user-specific configurations.
As more blockchains came into play, each with unique standards and formats, managing and normalizing the data became a heavy lift. It required schema migrations, added joins, and increasingly complex backend logic just to get usable data to the frontend.
I was first introduced to MongoDB while working as a Partner Consulting Engineer and going through the initial training. That’s when my perspective started to shift. Challenges I had previously faced, especially around data modeling and architectural complexity, suddenly had more effective, streamlined solutions. MongoDB’s flexibility opened up new ways to structure and serve data that aligned more naturally with the needs of modern applications. What stood out immediately was its dynamic schema design. I was no longer constrained by rigid, predefined structures or forced to write complex joins just to deliver usable data. Information that previously lived across several tables could now be embedded into a single document — shaped by how the frontend consumed it, not by how the database expected it.
Let me walk you through a practical example. In the relational model, a single token required multiple tables to represent its context and relationships:
Figure: Relational model (left) vs MongoDB document model (right), showing how token data can be unified depending on its type and usage pattern.
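To make the contrast concrete, here is a minimal sketch of what a single token document might look like once the joined tables are embedded. The field names and values are illustrative assumptions, not the original schema:

```python
# A hypothetical ERC20 token document, embedding data that the relational
# model spread across separate token, chain, and liquidity-pair tables.
# All field names here are illustrative, not the production schema.
erc20_token = {
    "address": "0x1234abcd",
    "type": "ERC20",
    "symbol": "DAI",
    "decimals": 18,
    "chain": {                      # formerly a joined chains table
        "name": "Ethereum",
        "chainId": 1,
    },
    "liquidityPairs": [             # formerly a joined pairs table
        {"dex": "Uniswap", "pairedWith": "WETH"},
    ],
}

# The frontend reads one document instead of assembling three joins.
print(erc20_token["chain"]["name"])  # -> Ethereum
```

Everything the UI needs to render this token now travels together in one read.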
Depending on the token type (ERC20 or NFT), I had to build distinct query chains and joins. For ERC20 tokens, this meant fetching chain details along with all associated liquidity pairs. NFTs, on the other hand, required joining collection data with individual token IDs, often pulling from external APIs. These operations were resource-intensive and led to fragmented logic across backend resolvers, frequently duplicating transformation steps already handled on the client side.
With MongoDB, I shifted from modeling tokens based on traditional database logic to structuring them around how they would actually be queried and used in the application.
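One way to picture this query-first shape: both token kinds can live in one collection, discriminated by a `type` field, and the read path branches on that field instead of on separate join chains. This is a sketch under assumed field names, not the original code:

```python
# Sketch: one collection holds both ERC20 tokens and NFTs, discriminated
# by a `type` field. Field names are assumptions for illustration.

def display_fields(token: dict) -> dict:
    """Shape a token document for the UI without any joins."""
    if token["type"] == "ERC20":
        return {"label": token["symbol"], "chain": token["chain"]["name"]}
    if token["type"] == "NFT":
        # Collection metadata is embedded rather than joined from another table.
        return {
            "label": f'{token["collection"]["name"]} #{token["tokenId"]}',
            "chain": token["chain"]["name"],
        }
    raise ValueError(f"unknown token type: {token['type']}")

nft = {
    "type": "NFT",
    "tokenId": 42,
    "chain": {"name": "Polygon"},
    "collection": {"name": "CoolCats"},
}
print(display_fields(nft))  # {'label': 'CoolCats #42', 'chain': 'Polygon'}
```

The branching that previously lived in backend resolvers collapses into one small shaping step over a self-contained document.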
This design introduced several key advantages: fewer joins and query chains, documents shaped by how the frontend consumes them, and a single structure that covers both fungible tokens and NFTs.
Another major advantage I found with MongoDB was the ability to seamlessly expand to new blockchains and token types. In a traditional relational setup, onboarding a new chain often required creating additional tables, modifying foreign key relationships, and updating resolvers or GraphQL schemas to support new data structures.
With MongoDB, I structured documents to tolerate partial data, which allowed me to insert or update information incrementally as each new data source became available. Whether a token originated from Ethereum, Polygon, or a lesser-known chain like Celo or Base, the core document structure didn't need to change.
This flexibility is critical because data reaches the application in different ways: some of it comes from public APIs, some from real-time reads against the chain itself, and each source arrives on its own schedule.
Because MongoDB allowed for schema flexibility, I could sync data as it became available, starting with a token’s basic metadata from an API, then enriching it later with real-time data pulled from the chain. I didn’t have to worry about schema migrations, backfilling new tables, or tightly coupling data-fetching logic with storage logic.
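The incremental-sync pattern described above can be sketched as an upsert where each source contributes only the fields it knows about. In MongoDB this would typically be an `update_one` with a `$set` and `upsert=True`; here it is simulated in memory so the merge semantics are visible, and all field names are assumptions:

```python
# Sketch of incremental enrichment: each data source contributes partial
# fields, and the document grows as data arrives. This mirrors a MongoDB
# upsert with $set, simulated here with an in-memory dict. Field names
# are illustrative assumptions.

def upsert(store: dict, address: str, partial: dict) -> dict:
    """Merge partial token data into the stored document, creating it if absent."""
    doc = store.setdefault(address, {"address": address})
    doc.update(partial)  # analogous to $set on top-level fields
    return doc

store: dict = {}

# Step 1: basic metadata arrives from a public API.
upsert(store, "0xabc", {"symbol": "CELO", "chain": {"name": "Celo"}})

# Step 2: later, real-time on-chain data enriches the same document.
upsert(store, "0xabc", {"totalSupply": "1000000", "lastSyncedBlock": 21000000})

print(store["0xabc"]["symbol"])         # -> CELO (the API sync survives)
print("totalSupply" in store["0xabc"])  # -> True (chain data added, no migration)
```

Because no field is mandatory up front, onboarding a new chain is a data event, not a schema event.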
By shifting to MongoDB, I was able to simplify the architecture, expand to new chains and token types without schema migrations, and deliver usable data to the frontend with far less backend logic.
More importantly, we began to treat tokens—fungible or not—as unified objects rather than fractured relationships. This change improved collaboration across engineering and analytics teams.
Design your data for how it’s used, not how it’s stored. That shift made all the difference in how we delivered blockchain applications.
Today, I help companies make similar transitions, simplifying architecture, improving scalability, and building products faster.
Curious how this could apply to your project? Let's chat. MongoDB is more than a database: it's a smarter way to build.