Background

Mafiree's MongoDB consulting team audited and optimized the query layer of a high-traffic e-commerce platform in India, bringing average API response times down from 340ms to 92ms, a 73% improvement. The work involved diagnosing slow queries, redesigning indexes, restructuring aggregation pipelines, and setting up ongoing monitoring — all without adding a single server.

Why MongoDB Queries Slow Down

MongoDB query optimization challenges typically emerge when datasets outgrow the assumptions made during initial development. The team identified four root causes:

Collection Scans (COLLSCAN): When no matching index exists, MongoDB scans every document in a collection. On a 50-million-document collection, this can take seconds instead of milliseconds.
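
To make this concrete, here is a minimal mongosh sketch of how a collection scan shows up in explain output. The products collection comes from this case study; the brand field and its value are hypothetical:

```javascript
// A filter on an unindexed field forces a COLLSCAN.
const plan = db.products.find({ brand: "Acme" }).explain("executionStats");

// totalDocsExamined approaches the full collection size, while
// nReturned may be only a handful of documents.
printjson({
  docsExamined: plan.executionStats.totalDocsExamined,
  returned: plan.executionStats.nReturned,
  millis: plan.executionStats.executionTimeMillis
});
```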

Inefficient Index Usage: Having indexes isn't enough — the wrong indexes can be just as harmful. A compound index built for one query pattern is useless for a different field combination, even if it looks similar.
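
A brief sketch of the mismatch, with hypothetical field names: a compound index can only be used by queries that constrain its leading field (the prefix rule), so an index built for category-plus-price does nothing for brand-plus-price:

```javascript
// Index designed for one query pattern...
db.products.createIndex({ category: 1, price: 1 });

// ...supports this query, since category is the index's leading field:
db.products.find({ category: "shoes", price: { $lt: 50 } });

// ...but not this similar-looking one. With no constraint on category,
// the prefix rule means the index cannot be used, and the query falls
// back to a collection scan.
db.products.find({ brand: "Acme", price: { $lt: 50 } });
```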

Unbounded Aggregation Pipelines: Stages like $lookup and $unwind that run without an early $match force MongoDB to process the entire collection before filtering, turning a 5ms operation into a 5-second one.
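
An illustrative version of the anti-pattern, using the orders and products collections from this engagement with assumed field names:

```javascript
// Anti-pattern: $lookup and $unwind run against every document in
// orders before any filtering happens, so the join cost scales with
// the whole collection rather than with the matching subset.
db.orders.aggregate([
  { $lookup: { from: "products", localField: "productId",
               foreignField: "_id", as: "product" } },
  { $unwind: "$product" },
  { $match: { sellerId: "S123", status: "delivered" } }  // far too late
]);
```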

Over-Fetching Documents: Returning entire documents when only a few fields are needed wastes network bandwidth, memory, and CPU on deserialization.
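
A small sketch of the difference, with hypothetical field names:

```javascript
// Over-fetching: pulls entire ~4KB product documents across the wire.
db.products.find({ category: "shoes" });

// Better: project only the fields the endpoint actually renders.
db.products.find(
  { category: "shoes" },
  { _id: 0, name: 1, price: 1, thumbnailUrl: 1 }
);
```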

The Client's Situation

The client operated a popular e-commerce marketplace built on MongoDB 6.0, running a 3-node replica set on AWS, serving 2 million daily active users, with a product catalog of 12 million documents and an orders collection exceeding 80 million documents.

Performance had degraded gradually over 18 months. Product search API responses averaged 340ms, the checkout flow experienced intermittent timeouts during flash sales, and the analytics dashboard for sellers took over 8 seconds to load. The engineering team had added indexes reactively over time, resulting in 23 indexes on the products collection alone — many redundant or unused. 

After optimization, the results were dramatic:

Product search latency: 340ms → 92ms
Checkout p99 latency: 1,200ms → 280ms
Seller dashboard load: 8.2s → 1.8s
Collection scans per hour: 4,200 → 12
Active indexes on products: 23 → 9
Monthly AWS spend: $4,800 → $3,200

The 3-Step Diagnostic Process

Step 1 — Profiler Analysis: The team enabled MongoDB's built-in profiler at level 1 (slow operations only) with a 100ms threshold. Within 24 hours it had surfaced 14 distinct query shapes responsible for 87% of all slow operations. The top three offenders were the product search query, the order history aggregation, and the inventory availability check.
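
In mongosh, enabling the profiler at that level looks like this; the 100ms threshold matches the case study, while the follow-up query on system.profile is an illustrative way to pull the worst offenders:

```javascript
// Level 1 records only operations slower than slowms (100ms here);
// level 0 is off, level 2 records everything.
db.setProfilingLevel(1, { slowms: 100 });

// After a representative traffic window, inspect the slowest
// recorded operations in the profiler's capped collection.
db.system.profile.find({ millis: { $gt: 100 } })
  .sort({ millis: -1 })
  .limit(10);
```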

Step 2 — Explain Plan Analysis: For each slow query, the team ran explain("executionStats") to examine execution details. The product search query was scanning nearly 4 million documents to return just 20 results. Despite the 23 indexes on the collection, none matched this specific query shape.
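
A sketch of that check; the filter and sort fields are assumptions, but the executionStats fields are standard explain output:

```javascript
const stats = db.products
  .find({ category: "shoes", inStock: true })
  .sort({ rating: -1 })
  .limit(20)
  .explain("executionStats")
  .executionStats;

// Healthy plans keep totalDocsExamined close to nReturned; the
// audited search was examining ~4 million documents for 20 results.
printjson({
  nReturned: stats.nReturned,
  totalKeysExamined: stats.totalKeysExamined,
  totalDocsExamined: stats.totalDocsExamined,
  executionTimeMillis: stats.executionTimeMillis
});
```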

Step 3 — Index Usage Audit: Using the $indexStats aggregation stage, the team evaluated every index. Fourteen of the 23 indexes on the products collection had seen zero or near-zero usage over the past 30 days. Unused indexes aren't harmless — each one adds overhead to every write operation.
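
A minimal version of that audit; note that $indexStats counters reset on mongod restart, so they should be read alongside server uptime:

```javascript
// $indexStats reports per-index access counters; anything with a
// near-zero ops count over a long window is a drop candidate.
db.products.aggregate([
  { $indexStats: {} },
  { $project: { name: 1, ops: "$accesses.ops", since: "$accesses.since" } },
  { $sort: { ops: 1 } }  // least-used indexes first
]);
```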


The Four Key Fixes

Fix 1 — ESR Rule for Compound Indexes: MongoDB's ESR (Equality, Sort, Range) rule is the foundation of effective compound index design. Fields used in equality filters come first, followed by sort fields, then range filters. Applying this rule to the product search query brought latency from 340ms to 92ms, and reduced documents examined from 3.8 million to just 847. 
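
The exact fields in the client's query aren't published, so the following is a sketch of an ESR-ordered index for a plausible search of that shape, with assumed field names:

```javascript
// Query shape: equality on category and inStock, a sort on rating,
// and a range filter on price.
db.products.find(
  { category: "shoes", inStock: true, price: { $lte: 100 } }
).sort({ rating: -1 });

// ESR ordering: Equality fields first, then the Sort field, then the
// Range field. The range field goes last because index entries after
// a range bound can no longer be read in sorted order.
db.products.createIndex({ category: 1, inStock: 1, rating: -1, price: 1 });
```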

Fix 2 — Aggregation Pipeline Refactoring: The seller dashboard pipeline was processing the entire orders collection before filtering. Moving the $match and $sort stages to the beginning let MongoDB leverage indexes early. Dashboard load time went from 8.2s to 1.8s, and the working document set shrank from 80 million to approximately 45,000 before the expensive $lookup ran.
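
A sketch of the refactored shape, again with assumed field names; the $group stage stands in for whatever metrics the dashboard computes:

```javascript
// Filter and sort first: an index such as { sellerId: 1, createdAt: -1 }
// can satisfy both stages, so the expensive $lookup only sees one
// seller's recent orders instead of all 80 million documents.
db.orders.aggregate([
  { $match: { sellerId: "S123",
              createdAt: { $gte: ISODate("2024-01-01") } } },
  { $sort: { createdAt: -1 } },
  { $lookup: { from: "products", localField: "productId",
               foreignField: "_id", as: "product" } },
  { $unwind: "$product" },
  { $group: { _id: "$product.category", revenue: { $sum: "$total" } } }
]);
```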

Fix 3 — Projections and Covered Queries: Several API endpoints were fetching entire 4KB product documents when only a few fields were needed. Adding projections, backed by covering indexes, eliminated full document fetches. Listing-page API latency dropped by 60% and network bandwidth usage fell by 45%.
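
A covered query is one the index alone can answer, so no documents are fetched at all. A sketch with hypothetical fields:

```javascript
// Covering index: contains every field the query filters on and returns.
db.products.createIndex({ category: 1, name: 1, price: 1 });

// Covered query: both filter and projection are satisfied by the index
// alone. _id must be excluded because it is not part of the index.
db.products.find(
  { category: "shoes" },
  { _id: 0, name: 1, price: 1 }
);
// explain("executionStats") reports totalDocsExamined: 0 for this query.
```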

Fix 4 — Dropping Unused Indexes: After confirming which indexes were safe to remove, 14 unused indexes were dropped. This freed approximately 2.8GB of RAM and noticeably improved write performance. Write latency improved by 18% and monthly AWS spend decreased by $1,600. 
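
The case study doesn't describe the rollout, but a cautious path on MongoDB 4.4+ (and therefore on this 6.0 cluster) is to hide an index before dropping it; the index name here is hypothetical:

```javascript
// Hide the index first: the planner stops using it, but MongoDB keeps
// maintaining it, so it can be restored instantly if latency regresses.
db.products.hideIndex("brand_1_color_1");

// db.products.unhideIndex("brand_1_color_1")  // rollback, if needed

// After a clean monitoring window, drop it permanently.
db.products.dropIndex("brand_1_color_1");
```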


Best Practices for Ongoing Performance

The blog concludes with five production best practices: always follow the ESR rule when designing compound indexes; run the slow query profiler continuously rather than only during incidents; audit index usage quarterly using $indexStats and drop what isn't used; always place $match first in aggregation pipelines; and use projections in every query to avoid fetching unnecessary data.

MongoDB query optimization is not something you do once and forget. Data grows, query patterns shift, and application features evolve. The lasting success of this engagement came from the monitoring framework put in place — continuous profiling, automated index audits, and real-time alerting on latency regressions — ensuring performance stays on track as the platform scales.