Building a Real-Time, Dynamic Seller Dashboard on MongoDB
Karthic Subramanian, Katya Kamenieva • 7 min read • Published Dec 21, 2022 • Updated Aug 05, 2024
One of the key aspects of being a successful merchant is knowing your market. Understanding your top-selling products, trending SKUs, and top customer locations helps you plan, market, and sell effectively. As a marketplace, providing this visibility and these insights to your sellers is crucial. For example, SHOPLINE has helped over 350,000 merchants reach more than 680 million customers via e-commerce, social commerce, and offline point-of-sale (POS) transactions. With key features such as inventory and sales management tools and data analytics, merchants have everything they need to build a successful online store.
In this article, we are going to look at how a single query on MongoDB can power a real-time view of top-selling products, along with a deep dive into the top-selling regions.
In the relational world, such a dashboard would require multiple joins across at least four distinct tables: seller details, product details, channel details, and transaction details.
This increases the complexity, data latency, and cost of providing insights on real-time operational data. Often, organizations pre-compute these tables with up to a 24-hour lag to ensure a better user experience.
With MongoDB and the Query API, we can deliver such dashboards in near real time, working directly on operational data. The required information for each sales transaction can be stored in a single collection.
Each document would look as follows:
1 { 2 "_id": { "$oid": "5bd761dcae323e45a93ccfed" }, 3 "saleDate": { "$date": {...} }, 4 "items": [ 5 { "name": "binder", 6 "tags": [ 7 "school", 8 "general"], 9 "price": { "$numberDecimal": "13.44" }, 10 "quantity": 8 11 }, 12 { "name": "binder", 13 "tags": [ 14 "general", 15 "organization" 16 ], 17 "price": { "$numberDecimal": "16.66" }, 18 "quantity": 10 19 } 20 ], 21 "storeLocation": "London", 22 "customer": { 23 "gender": "M", 24 "age": 44, 25 "email": "owtar@pu.cd", 26 "satisfaction": 2 27 }, 28 "couponUsed": false, 29 "purchaseMethod": "In store" 30 }
This specific document is from the "sales" collection within the "sample_supplies" database, available as sample data when you create an Atlas cluster. Start free on Atlas and try out this exercise yourself. MongoDB's flexible schema and versioning make it simple to add a "seller" field to this document, similar to the customer field, and manage it in your application. From a data modeling perspective, the polymorphic pattern is ideal for our current use case.
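As a sketch of how simple that change is, the following hypothetical mongosh update backfills a "seller" subdocument onto existing sales documents. The seller attributes used here (sellerId, name) are illustrative assumptions, not fields in the sample dataset:

// Hypothetical backfill: add a "seller" subdocument to existing sales.
// The sellerId/name values are made-up placeholders.
use sample_supplies

db.sales.updateMany(
  { storeLocation: "London" },  // scope the backfill however your data requires
  { $set: { seller: { sellerId: "S-001", name: "Example Seller Ltd." } } }
)

Because no schema migration is required, new documents can carry the seller field immediately while older documents are backfilled at your own pace.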
To build a dashboard showcasing the top five products sold over a specific period, we want to transform the documents into the following sorted array:
[
  { "total_volume": 1897, "item": "envelopes" },
  { "total_volume": 1844, "item": "binder" },
  { "total_volume": 1788, "item": "notepad" },
  { "total_volume": 1018, "item": "pens" },
  { "total_volume": 830, "item": "printer paper" }
]
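For reference, one way to produce exactly this shape, before we extend the pipeline below to also include regions, is a minimal aggregation like the following sketch (the $match window is assumed to be the same one used later in the article):

// Sketch: top five products by total quantity sold, shaped as above.
[
  { $match: { saleDate: {
      $gte: ISODate('2017-12-25T05:00:00.000Z'),
      $lt: ISODate('2017-12-30T05:00:00.000Z') } } },
  { $unwind: '$items' },
  { $group: { _id: '$items.name', total_volume: { $sum: '$items.quantity' } } },
  { $sort: { total_volume: -1 } },
  { $limit: 5 },
  { $project: { _id: 0, item: '$_id', total_volume: 1 } }
]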
With just the "item" and "total_volume" fields, we can build a chart of the top five products. If we wanted to deliver an improved seller experience, we could build a deep-dive chart, powered by the same single query, that provides the top five locations and the quantity sold in each.
The output for each item would look like this:
1 { 2 "_id": "binder", 3 "totalQuantity": 100, 4 "topFiveRegionsByQuantity": { 5 "Seattle": 41, 6 "Denver": 26, 7 "New York": 14, 8 "Austin": 10, 9 "London": 9 10 } 11 }
With the Query API, this transformation can be done in real time in the database with a single query. In this example, we go a step further and build another transformation on top to improve the user experience. On our Atlas developer data platform, this becomes significantly easier when you leverage Atlas Charts.
- Set up your Atlas cluster and load the sample data "sample_supplies."
(Tip: Click “Create new pipeline from text” to copy the code below and easily play with the pipeline.)
[{
  $match: {
    saleDate: {
      $gte: ISODate('2017-12-25T05:00:00.000Z'),
      $lt: ISODate('2017-12-30T05:00:00.000Z')
    }
  }
}, {
  $unwind: {
    path: '$items'
  }
}, {
  $group: {
    _id: {
      item: '$items.name',
      region: '$storeLocation'
    },
    quantity: {
      $sum: '$items.quantity'
    }
  }
}, {
  $addFields: {
    '_id.quantity': '$quantity'
  }
}, {
  $replaceRoot: {
    newRoot: '$_id'
  }
}, {
  $group: {
    _id: '$item',
    totalQuantity: {
      $sum: '$quantity'
    },
    topFiveRegionsByQuantity: {
      $topN: {
        output: {
          k: '$region',
          v: '$quantity'
        },
        sortBy: {
          quantity: -1
        },
        n: 5
      }
    }
  }
}, {
  $sort: {
    totalQuantity: -1
  }
}, {
  $limit: 5
}, {
  $set: {
    topFiveRegionsByQuantity: {
      $arrayToObject: '$topFiveRegionsByQuantity'
    }
  }
}]
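To try the pipeline outside the Atlas UI, a minimal mongosh session might look like the following sketch. The saleDate index and the date variables are suggested additions rather than part of the original pipeline:

// Sketch: create a supporting index and run the pipeline from mongosh.
use sample_supplies

// An index on saleDate lets the leading $match stage use an index scan
// instead of scanning the whole collection.
db.sales.createIndex({ saleDate: 1 })

// Parametrize the window so a dashboard can pass user-selected date ranges.
const from = ISODate('2017-12-25T05:00:00.000Z')
const to = ISODate('2017-12-30T05:00:00.000Z')

db.sales.aggregate([
  { $match: { saleDate: { $gte: from, $lt: to } } },
  // ...the remaining stages exactly as listed above...
])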
This short but powerful pipeline processes our data through the following stages:
- First, it filters our data to the specific subset we need: in this case, sales transactions from the specified dates. It's worth noting that you can parametrize the inputs to the $match stage to filter dynamically based on user choices, as sketched in the mongosh example above.
Note: Beginning our pipeline with this filter stage significantly improves processing times. With the right index (such as the saleDate index created in the mongosh sketch above), this entire operation can be extremely fast, and the filter reduces the number of documents to be processed in subsequent stages.
- To fully leverage the polymorphic pattern and the document model, we store the items bought in each order as an embedded array. The second stage, $unwind, deconstructs this array so the pipeline can work with each item as its own document. We then group the unwound documents by item and region and use $sum to calculate the total quantity sold.
- Ideally, at this stage, we want each document to carry three data points: the item, the region, and the quantity sold. However, at the end of the previous stage, the item and region sit in an embedded _id object, while quantity is a separate field. We use $addFields to move quantity into the embedded object, and then $replaceRoot to promote that embedded _id document to the root for subsequent stages. This quick maneuver gives us the transformed data we need as a single flat document (see the shape walkthrough after this list).
- Next, we group the items as per the view we want on our dashboard. In this example, we want the total volume of each product sold, and to make our dashboard more insightful, we also get the top five regions for each of these products. We use $group for this with two accumulators within it: $sum to calculate the total quantity sold per item, and $topN to collect the five regions with the highest quantity for each item.
- Finally, we use $set to convert the array of the top five regions per item into an embedded document in the format {region: quantity}, which is easier to work with as an object in code. This step is optional.
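To make the $addFields/$replaceRoot maneuver and the final $set concrete, here is how a single "binder" document evolves through these stages, using the quantities from the example output shown earlier:

// After the first $group, item and region sit inside _id, quantity outside:
{ "_id": { "item": "binder", "region": "London" }, "quantity": 9 }

// $addFields copies quantity into the _id subdocument:
{ "_id": { "item": "binder", "region": "London", "quantity": 9 }, "quantity": 9 }

// $replaceRoot promotes the _id subdocument to the root:
{ "item": "binder", "region": "London", "quantity": 9 }

// After the second $group, $topN emits an array of {k, v} pairs:
{ "_id": "binder", "totalQuantity": 100,
  "topFiveRegionsByQuantity": [
    { "k": "Seattle", "v": 41 }, { "k": "Denver", "v": 26 },
    { "k": "New York", "v": 14 }, { "k": "Austin", "v": 10 },
    { "k": "London", "v": 9 } ] }

// The final $set applies $arrayToObject, producing the dashboard-ready shape:
{ "_id": "binder", "totalQuantity": 100,
  "topFiveRegionsByQuantity": { "Seattle": 41, "Denver": 26,
    "New York": 14, "Austin": 10, "London": 9 } }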
Note: The $topN operator was introduced in MongoDB 5.2. To test this pipeline on Atlas, you would require a cluster running 5.2 or later, such as an M10 cluster. Alternatively, you can download the MongoDB Community edition and test the pipeline through Compass on your local machine.
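If you are stuck on a version older than 5.2, one possible workaround (a sketch, not part of the original tutorial) is to replace the $topN accumulator by sorting on quantity before the second $group, accumulating with $push, and trimming with $slice:

// Pre-5.2 alternative to $topN: $sort first, then $push and $slice.
// These stages replace the second $group in the pipeline above.
{ $sort: { quantity: -1 } },
{ $group: {
    _id: '$item',
    totalQuantity: { $sum: '$quantity' },
    topFiveRegionsByQuantity: { $push: { k: '$region', v: '$quantity' } }
} },
{ $set: {
    topFiveRegionsByQuantity: { $slice: ['$topFiveRegionsByQuantity', 5] }
} }

Because documents reach $group already sorted by quantity, $push accumulates the regions in descending order and $slice keeps the first five; the existing $arrayToObject step then works on the trimmed array unchanged.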
Adding visibility into the top five products and the top-selling regions is just one part of the dashboard. By leveraging MongoDB and the Query API, we deliver this near real-time visibility directly on live operational data.
In this article, we saw how to build a single query that can power multiple charts on a seller dashboard. What would you build into your dashboard views? Join our vibrant community forums to discuss more.
For reference, here’s what the code blocks look like in other languages.
# Import the necessary packages
from datetime import datetime
from pymongo import MongoClient

# Connect to the MongoDB server
client = MongoClient(URI)

# Get a reference to the sales collection in the sample_supplies database
db = client.sample_supplies
supplies = db.sales

# Build the pipeline stages
match_stage = {
    "$match": {
        "saleDate": {
            # Naive datetimes are interpreted as UTC by the driver
            "$gte": datetime(2017, 12, 25, 5, 0, 0),
            "$lt": datetime(2017, 12, 30, 5, 0, 0)
        }
    }
}

unwind_stage = {
    "$unwind": {
        "path": "$items"
    }
}

group_stage = {
    "$group": {
        "_id": {
            "item": "$items.name",
            "region": "$storeLocation"
        },
        "quantity": {
            "$sum": "$items.quantity"
        }
    }
}

addfields_stage = {
    "$addFields": {
        "_id.quantity": "$quantity"
    }
}

replaceroot_stage = {
    "$replaceRoot": {
        "newRoot": "$_id"
    }
}

group2_stage = {
    "$group": {
        "_id": "$item",
        "totalQuantity": {
            "$sum": "$quantity"
        },
        "topFiveRegionsByQuantity": {
            "$topN": {
                "output": {
                    "k": "$region",
                    "v": "$quantity"
                },
                "sortBy": {
                    "quantity": -1
                },
                "n": 5
            }
        }
    }
}

sort_stage = {
    "$sort": {
        "totalQuantity": -1
    }
}

limit_stage = {
    "$limit": 5
}

set_stage = {
    "$set": {
        "topFiveRegionsByQuantity": {
            "$arrayToObject": "$topFiveRegionsByQuantity"
        }
    }
}

pipeline = [match_stage, unwind_stage, group_stage,
            addfields_stage, replaceroot_stage, group2_stage,
            sort_stage, limit_stage, set_stage]

# Execute the aggregation pipeline
results = supplies.aggregate(pipeline)
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Accumulators;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Field;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Sorts;
import org.bson.Document;
import org.bson.conversions.Bson;

import java.time.Instant;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Date;
import java.util.List;

// Connect to MongoDB and get the collection
MongoClient mongoClient = MongoClients.create(URI);
MongoDatabase database = mongoClient.getDatabase("sample_supplies");
MongoCollection<Document> collection = database.getCollection("sales");

// Create the pipeline stages
Bson matchStage = Aggregates.match(Filters.and(
        Filters.gte("saleDate", Date.from(Instant.parse("2017-12-25T05:00:00.000Z"))),
        Filters.lt("saleDate", Date.from(Instant.parse("2017-12-30T05:00:00.000Z")))
));

Bson unwindStage = Aggregates.unwind("$items");

// Group by the compound (item, region) key
Bson groupStage = Aggregates.group(
        new Document("item", "$items.name").append("region", "$storeLocation"),
        Accumulators.sum("quantity", "$items.quantity")
);

Bson addFieldsStage = Aggregates.addFields(new Field<>("_id.quantity", "$quantity"));

Bson replaceRootStage = Aggregates.replaceRoot("$_id");

// Accumulators.topN requires Java driver 4.7+ (and MongoDB 5.2+)
Bson group2Stage = Aggregates.group("$item",
        Accumulators.sum("totalQuantity", "$quantity"),
        Accumulators.topN("topFiveRegionsByQuantity",
                Sorts.descending("quantity"),
                new Document("k", "$region").append("v", "$quantity"),
                5)
);

Bson sortStage = Aggregates.sort(Sorts.descending("totalQuantity"));

Bson limitStage = Aggregates.limit(5);

Bson setStage = Aggregates.set(new Field<>("topFiveRegionsByQuantity",
        new Document("$arrayToObject", "$topFiveRegionsByQuantity")));

// Execute the pipeline
List<Document> results = collection.aggregate(Arrays.asList(
        matchStage, unwindStage, groupStage, addFieldsStage, replaceRootStage,
        group2Stage, sortStage, limitStage, setStage)).into(new ArrayList<>());
const { MongoClient } = require('mongodb');
const assert = require('assert');

// Connection URL
const url = 'URI';

// Database name
const dbName = 'sample_supplies';

// Use connect method to connect to the server
MongoClient.connect(url, function(err, client) {
  assert.equal(null, err);
  console.log("Connected successfully to server");

  const db = client.db(dbName);

  // Create the pipeline stages
  const matchStage = {
    $match: {
      saleDate: {
        $gte: new Date('2017-12-25T05:00:00.000Z'),
        $lt: new Date('2017-12-30T05:00:00.000Z')
      }
    }
  };

  const unwindStage = {
    $unwind: {
      path: '$items'
    }
  };

  const groupStage = {
    $group: {
      _id: {
        item: '$items.name',
        region: '$storeLocation'
      },
      quantity: {
        $sum: '$items.quantity'
      }
    }
  };

  const addFieldsStage = {
    $addFields: {
      '_id.quantity': '$quantity'
    }
  };

  const replaceRootStage = {
    $replaceRoot: {
      newRoot: '$_id'
    }
  };

  const group2Stage = {
    $group: {
      _id: '$item',
      totalQuantity: {
        $sum: '$quantity'
      },
      topFiveRegionsByQuantity: {
        $topN: {
          output: {
            k: '$region',
            v: '$quantity'
          },
          sortBy: {
            quantity: -1
          },
          n: 5
        }
      }
    }
  };

  const sortStage = {
    $sort: {
      totalQuantity: -1
    }
  };

  const limitStage = {
    $limit: 5
  };

  const setStage = {
    $set: {
      topFiveRegionsByQuantity: {
        $arrayToObject: '$topFiveRegionsByQuantity'
      }
    }
  };

  const pipeline = [matchStage, unwindStage, groupStage,
    addFieldsStage, replaceRootStage, group2Stage,
    sortStage, limitStage, setStage];

  // Execute the pipeline
  db.collection('sales')
    .aggregate(pipeline)
    .toArray((err, results) => {
      assert.equal(null, err);
      console.log(results);

      client.close();
    });
});