Aug 2024

Hello,

I have a node.js application that uses mongodb library. I have the following script that establishes the connection to the database when the server spins up. I’m making the call to connectToDB in a singleton fashion before starting the node server. I haven’t mentioned any explicit maxPoolsize for the MongoDB driver. So, it should be using the default maxPoolsize of 100.

const { MongoClient } = require('mongodb');
const path = require('path');

const dbInstance = {};
const dbClient = {};
const dbConnectionCount = {};

const connectToDB = async (url, podName, eachDB) => {
  const podNameInUC = podName.toUpperCase();
  try {
    console.log(`Initiating a new connection to ${podNameInUC}_${eachDB} and in-memory connection has following keys ${Object.keys(dbInstance)}`);
    const client = new MongoClient(url);

    if (!dbConnectionCount[`${podNameInUC}_${eachDB}`]) {
      dbConnectionCount[`${podNameInUC}_${eachDB}`] = {};
    }
    if (!dbConnectionCount[`${podNameInUC}_${eachDB}`].connection) {
      dbConnectionCount[`${podNameInUC}_${eachDB}`] = { connection: 0 };
    }

    // Count pool activity via the driver's connection monitoring events.
    client.on('connectionCreated', () => {
      console.log('Connected....');
      dbConnectionCount[`${podNameInUC}_${eachDB}`].connection += 1;
      console.log(`Connection created. Total connections: on pod: ${podName} for DB: ${eachDB}`, dbConnectionCount[`${podNameInUC}_${eachDB}`].connection);
    });
    client.on('connectionClosed', () => {
      dbConnectionCount[`${podNameInUC}_${eachDB}`].connection -= 1;
      console.log(`Connection closed. Total connections: on pod: ${podName} for DB: ${eachDB}`, dbConnectionCount[`${podNameInUC}_${eachDB}`].connection);
    });

    dbInstance[`${podNameInUC}_${eachDB}`] = client.db(eachDB);
    dbClient[`${podNameInUC}_${eachDB}`] = client;
  } catch (err) {
    console.log('Error----> ', err);
  }
  console.log(`After: in-memory connection has following keys ${Object.keys(dbInstance)}`);
};

const getCollection = async (podName, dbName, collectionName) => {
  const podNameInUC = podName.toUpperCase();
  if (dbInstance[`${podNameInUC}_${dbName}`]) {
    const selectedDB = dbInstance[`${podNameInUC}_${dbName}`];
    return selectedDB.collection(collectionName);
  }
  console.log(`${podNameInUC}_${dbName}`, dbInstance);
  return null;
};

(async () => {
  await connectToDB('mongodb://localhost:27022/', 'podName', 'dbName');
  const collection = await getCollection('podName', 'dbName', 'collectionName');

  /* Making parallel calls. The driver creates 100 connections and re-uses them
     for all 8100 calls (in the development environment). */
  const repeatedArray = Array(8100).fill(1);
  await Promise.all(repeatedArray.map(async () => {
    const docs = await collection.find({}).toArray();
  }));
})();
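For reference, this is how a pool size could be passed to the client explicitly; I am not doing this anywhere, so the driver should fall back to its defaults (maxPoolSize 100, minPoolSize 0, if I read the docs correctly). The variable name below is only for illustration:

// Shown for reference only: pinning the pool size explicitly instead of
// relying on the defaults. My actual code above passes no options at all.
const clientWithExplicitPool = new MongoClient(url, {
  maxPoolSize: 100, // upper bound on pooled connections per MongoClient
  minPoolSize: 0,   // driver default: no connections are held open pre-emptively
});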

In the development environment, when I make numerous parallel calls to the database, I can see that the MongoDB driver is creating at most 100 connections.
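(For completeness, a rough sketch of how one could double-check that reuse in development; the counter variables are purely illustrative, `client` refers to the MongoClient created in the snippet above, and both event names are the driver's connection-monitoring events.)

// Sketch: distinguishing new sockets from reused ones.
// 'connectionCreated' fires only when a brand-new connection is opened,
// while 'connectionCheckedOut' fires every time an operation borrows one
// from the pool.
let created = 0;
let checkedOut = 0;
client.on('connectionCreated', () => { created += 1; });
client.on('connectionCheckedOut', () => { checkedOut += 1; });

// If maxPoolSize is honored, `created` should plateau at or below 100
// while `checkedOut` keeps growing with every query.
setInterval(() => console.log(`created=${created} checkedOut=${checkedOut}`), 5000);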

But in our production logs, we noticed that one server has 7000+ connections.

This is a log line from production:

Connection created. Total connections: on pod: podName for DB: dbName 6983

What could be the reason for that? Ideally, the maxPoolSize of 100 should be honored, right?

We have a multi-region setup with 8 regions and 3 instances per region. Every instance makes around 180 parallel calls. The Atlas dashboard shows around 4000 total connections for the database, which seems reasonable (8 * 3 * 180 = 4320).

But a single server receiving 7000+ connectionCreated events is what looks odd.
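One thought I want to sanity-check (and I may be off here): my counter is keyed only by pod/DB name, not by client, so if connectToDB somehow ran more than once for the same key, every extra MongoClient would bring its own pool of up to 100 connections and the same counter would keep adding them all up. Below is a sketch of the kind of guard I am considering to rule that out; it reuses the names from my snippet above but is not what is deployed today:

// Sketch of a guard: reuse an already-created client for a key instead of
// constructing another MongoClient. Each MongoClient keeps its own pool of
// up to maxPoolSize connections, so duplicate clients would multiply the
// socket count even though each individual pool respects its limit.
const connectToDBOnce = async (url, podName, eachDB) => {
  const key = `${podName.toUpperCase()}_${eachDB}`;
  if (dbClient[key]) {
    console.log(`Reusing existing client for ${key}`);
    return dbInstance[key];
  }
  await connectToDB(url, podName, eachDB);
  return dbInstance[key];
};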

Looking for any input on this.

Thanks for your time and support.