Getting Started With MongoDB Atlas Serverless, AWS CDK, and AWS Serverless Computing

Zuhair Ahmed, Pahud Hsieh • 17 min read • Published Aug 09, 2024 • Updated Aug 09, 2024
Serverless • AWS • JavaScript • Python • Atlas
Serverless development is a cloud computing execution model where cloud and SaaS providers dynamically manage the allocation and provisioning of servers on your behalf, dropping all the way to $0 cost when not in use. This approach allows developers to build and run applications and services without worrying about the underlying infrastructure, focusing primarily on writing code for their core product and associated business logic. Developers opt for serverless architectures to benefit from reduced operational overhead, cost efficiency through pay-per-use billing, and the ability to easily scale applications in response to real-time demand without manual intervention.
MongoDB Atlas serverless instances eliminate the cognitive load of sizing infrastructure and allow you to get started with minimal configuration, so you can focus on building your app. Simply choose a cloud region and then start building with documents that map directly to objects in your code. Your serverless database will automatically scale with your app's growth, charging only for the resources utilized. Whether you’re just getting started or already have users all over the world, Atlas provides the capabilities to power today's most innovative applications while meeting the most demanding requirements for resilience, scale, and data privacy.
In this tutorial, we will walk you through building and deploying a simple serverless app that aggregates sales data stored in a MongoDB Atlas serverless instance, using AWS Lambda as our compute engine and Amazon API Gateway as our fully managed service to create a RESTful API interface. Lastly, we will show you how easy this is using our recently published AWS CDK Level 3 constructs to better incorporate infrastructure as code (IaC) and DevOps best practices into your software development life cycle (SDLC).
We will be starting from an empty directory in an Ubuntu 20.04 LTS environment, but feel free to follow along in any supported OS that you prefer.
Let's get started!

Setup

  1. Create a MongoDB Atlas account. Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via the AWS Marketplace.
  2. Create a MongoDB Atlas programmatic API key (PAK)
  3. Install and configure the AWS CLI and Atlas CLI in your terminal if you don’t have them already.
  4. Install the latest versions of Node.js and npm.
  5. Lastly, the playground code running on the Lambda function is written in Python, so you will also need Python 3 and pip installed in your terminal (a quick version check follows this list).
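Before moving on, you can optionally confirm the tooling is in place. These version checks are just a quick sanity pass, and the exact versions reported will vary by install:
aws --version
atlas --version
node --version && npm --version
python3 --version && pip3 --version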

Step 1: install AWS CDK, Bootstrap, and Initialize

The AWS CDK is an open-source framework that lets you define and provision cloud infrastructure as code, which is deployed via AWS CloudFormation. It offers preconfigured components so you can build cloud applications without deep CloudFormation expertise. For more details, see the AWS CDK Getting Started guide.
You can install CDK using npm:
sudo npm install -g aws-cdk
Next, we need to “bootstrap” our AWS environment to create the resources the CDK needs to manage our apps (see the AWS docs for full details). Bootstrapping prepares an environment for deployment, and it is a one-time action that you must perform for every environment that you deploy resources into.
The cdk bootstrap command creates an Amazon S3 bucket for storing files, AWS IAM roles, and a CloudFormation stack to manage these scaffolding resources:
cdk bootstrap aws://ACCOUNT_NUMBER/REGION
Now, we can initialize a new CDK app using TypeScript. This is done using the cdk init command:
cdk init -l typescript
This command initializes a new CDK app in TypeScript. It creates a new directory with the files and folders a CDK app needs. When you initialize a new AWS CDK app, the CDK CLI sets up a project structure that organizes your application's code into a conventional layout (sketched after the list below). This layout includes bin and lib directories, among others, each serving a specific purpose in the context of a CDK app. Here's what each of these directories is for:
  • The bin directory contains the entry point of your CDK application. It's where you define which stacks from your application should be synthesized and deployed. Typically, this directory will have a <your_project_name>.ts file (with the same name as your project or another meaningful name you choose) that imports stacks from the lib directory and initializes them.
    The bin directory's script is the starting point that the CDK CLI executes to synthesize CloudFormation templates from your definitions. It acts as the orchestrator, telling the CDK which stacks to include in the synthesis process.
  • The lib directory is where the core of your application's cloud infrastructure code lives. It's intended for defining CDK stacks and constructs, which are the building blocks of your AWS infrastructure. Typically, this directory will have a <your_project_name-stack>.ts file (with the same name as your project or another meaningful name you choose).
    The lib directory contains the actual definitions of those stacks — what resources they include, how those resources are configured, and how they interact. You can define multiple stacks in the lib directory and selectively instantiate them in the bin directory as needed.
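For orientation, here is roughly what the generated layout looks like when the project directory is named cloudshell-user (the example name used throughout this tutorial); your file names will follow your own project name:
ls bin lib
# bin:
# cloudshell-user.ts
#
# lib:
# cloudshell-user-stack.ts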

Step 2: create and deploy the MongoDB Atlas Bootstrap Stack

The [atlas-cdk-bootstrap](https://github.com/mongodb/awscdk-resources-mongodbatlas/tree/main/src/l3-resources/atlas-bootstrap) CDK construct was designed to facilitate the smooth configuration and setup of the MongoDB Atlas CDK framework. This construct simplifies the process of preparing your environment to run the Atlas CDK by automating essential configurations and resource provisioning.
Key features:
  • User provisioning: The atlas-cdk-bootstrap construct creates a dedicated execution role within AWS Identity and Access Management (IAM) for executing CloudFormation Extension resources. This helps maintain security and isolation for Atlas CDK operations.
  • Programmatic API key management: It sets up an AWS Secrets Manager secret to securely store and manage the programmatic API keys required for interacting with Atlas services. This ensures sensitive credentials are protected and can be easily rotated.
  • CloudFormation Extensions activation: This construct streamlines the activation of the CloudFormation public extensions essential for the MongoDB Atlas CDK. It provides a seamless interface for users to specify which CloudFormation resources need to be deployed and configured.
With atlas-cdk-bootstrap, you can accelerate the onboarding process for Atlas CDK and reduce the complexity of environment setup. By automating user provisioning, credential management, and resource activation, this CDK construct empowers developers to focus on building and deploying applications using the MongoDB Atlas CDK without getting bogged down by manual configuration tasks.
To use the atlas-cdk-bootstrap construct, we will first need a specific CDK package called awscdk-resources-mongodbatlas (see more details on this package on our Construct Hub page). Let's install it:
npm install awscdk-resources-mongodbatlas
To confirm that this package was installed correctly and to find its version number, see the package.json file.
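Alternatively, you can list the installed construct package from the terminal; the reported version will vary with what npm resolved at install time:
npm ls awscdk-resources-mongodbatlas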
Next, in the <your_project_name>.ts file in the bin directory (typically the same name as your project, i.e., cloudshell-user.ts), delete the entire contents and update with:
#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from 'aws-cdk-lib';
import { AtlasBootstrapExample } from '../lib/cloudshell-user-stack'; // replace "cloudshell-user" with the name of the .ts file in the lib directory

const app = new cdk.App();
const env = { region: process.env.CDK_DEFAULT_REGION, account: process.env.CDK_DEFAULT_ACCOUNT };

new AtlasBootstrapExample(app, 'mongodb-atlas-bootstrap-stack', { env });
Next, in the <your_project_name-stack>.ts file in the lib directory (typically the same name as your project concatenated with “-stack”, i.e., cloudshell-user-stack.ts), delete the entire contents and update with:
import * as cdk from 'aws-cdk-lib'
import { Construct } from 'constructs'
import {
  MongoAtlasBootstrap,
  MongoAtlasBootstrapProps,
  AtlasBasicResources
} from 'awscdk-resources-mongodbatlas'

export class AtlasBootstrapExample extends cdk.Stack {
  constructor (scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props)

    const roleName = 'MongoDB-Atlas-CDK-Excecution'
    const mongoDBProfile = 'development'

    const bootstrapProperties: MongoAtlasBootstrapProps = {
      roleName, secretProfile: mongoDBProfile,
      typesToActivate: ['ServerlessInstance', ...AtlasBasicResources]
    }

    new MongoAtlasBootstrap(this, 'mongodb-atlas-bootstrap', bootstrapProperties)
  }
}
Lastly, you can check and deploy the atlas-cdk-bootstrap CDK construct with:
npx cdk diff mongodb-atlas-bootstrap-stack
npx cdk deploy mongodb-atlas-bootstrap-stack

Step 3: store MongoDB Atlas PAK as env variables and update AWS Secrets Manager

Now that the atlas-cdk-bootstrap CDK construct has been provisioned, we can store our previously created MongoDB Atlas programmatic API keys in AWS Secrets Manager. For more information on how to create a MongoDB Atlas PAK, refer to step 2 of our prerequisites setup.
This will allow the CloudFormation Extension execution role to provision key components including: MongoDB Atlas serverless instance, Atlas project, Atlas project IP access list, and database user.
First, we must store these secrets as environment variables:
export MONGO_ATLAS_PUBLIC_KEY='INPUT_YOUR_PUBLIC_KEY'
export MONGO_ATLAS_PRIVATE_KEY='INPUT_YOUR_PRIVATE_KEY'
Then, we can update AWS Secrets Manager with the following AWS CLI command:
aws secretsmanager update-secret --secret-id cfn/atlas/profile/development --secret-string "{\"PublicKey\":\"${MONGO_ATLAS_PUBLIC_KEY}\",\"PrivateKey\":\"${MONGO_ATLAS_PRIVATE_KEY}\"}"
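Optionally, you can confirm that the secret now holds your keys. Note that this prints the credentials to your terminal, so treat the output as sensitive:
aws secretsmanager get-secret-value --secret-id cfn/atlas/profile/development --query SecretString --output text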

Step 4: create and deploy the atlas-serverless-basic resource CDK L3 construct

The AWS CDK Level 3 (L3) constructs are high-level abstractions that encapsulate a set of related AWS resources and configuration logic into reusable components, allowing developers to define cloud infrastructure using familiar programming languages with less code. Developers use L3 constructs to streamline the process of setting up complex AWS and MongoDB Atlas services, ensuring best practices, reducing boilerplate code, and enhancing productivity through simplified syntax.
The MongoDB Atlas AWS CDK L3 construct for Atlas Serverless Basic provides developers with an easy and idiomatic way to deploy MongoDB Atlas serverless instances within AWS environments. Under the hood, this construct abstracts away the intricacies of configuring and deploying MongoDB Atlas serverless instances and related infrastructure on your behalf.
Next, we update our <your_project_name>.ts file in the bin directory to:
  • Add the AtlasServerlessBasicStack to the import statement.
  • Add the IP address of the NAT gateway, which we suggest be the only IP address on your Atlas serverless instance's IP access list.
#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from 'aws-cdk-lib';
import { AtlasBootstrapExample, AtlasServerlessBasicStack } from '../lib/cloudshell-user-stack'; // update "cloudshell-user" with your stack name

const app = new cdk.App();
const env = { region: process.env.CDK_DEFAULT_REGION, account: process.env.CDK_DEFAULT_ACCOUNT };

// the bootstrap stack
new AtlasBootstrapExample(app, 'mongodb-atlas-bootstrap-stack', { env });

type AccountConfig = {
  readonly orgId: string;
  readonly projectId?: string;
}

const MyAccount: AccountConfig = {
  orgId: '63234d3234ec0946eedcd7da', // update with your Atlas Org ID
};

const MONGODB_PROFILE_NAME = 'development';

// the serverless stack with the mongodb atlas serverless instance
const serverlessStack = new AtlasServerlessBasicStack(app, 'atlas-serverless-basic-stack', {
  env,
  ipAccessList: '46.137.146.59', // input your static IP address from the NAT gateway
  profile: MONGODB_PROFILE_NAME,
  ...MyAccount,
});
To leverage this, we can update our <your_project_name-stack>.ts file in the lib directory to:
  • Update import blocks for newly used resources.
  • Activate underlying CloudFormation resources on the third-party CloudFormation registry.
  • Create a database username and password and store them in AWS Secrets Manager.
  • Update output blocks to display the Atlas serverless instance connection string and project name.
import * as path from 'path';
import {
  App, Stack, StackProps,
  Duration,
  CfnOutput,
  SecretValue,
  aws_secretsmanager as secretsmanager,
} from 'aws-cdk-lib';
import * as cdk from 'aws-cdk-lib';
import { SubnetType } from 'aws-cdk-lib/aws-ec2';
import {
  MongoAtlasBootstrap,
  MongoAtlasBootstrapProps,
  AtlasBasicResources,
  AtlasServerlessBasic,
  ServerlessInstanceProviderSettingsProviderName,
} from 'awscdk-resources-mongodbatlas';
import { Construct } from 'constructs';

export class AtlasBootstrapExample extends cdk.Stack {
  constructor (scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props)

    const roleName = 'MongoDB-Atlas-CDK-Excecution'
    const mongoDBProfile = 'development'

    const bootstrapProperties: MongoAtlasBootstrapProps = {
      roleName: roleName,
      secretProfile: mongoDBProfile,
      typesToActivate: ['ServerlessInstance', ...AtlasBasicResources]
    }

    new MongoAtlasBootstrap(this, 'mongodb-atlascdk-bootstrap', bootstrapProperties)
  }
}

export interface AtlasServerlessBasicStackProps extends StackProps {
  readonly profile: string;
  readonly orgId: string;
  readonly ipAccessList: string;
}

export class AtlasServerlessBasicStack extends Stack {
  readonly dbUserSecret: secretsmanager.ISecret;
  readonly connectionString: string;
  constructor(scope: Construct, id: string, props: AtlasServerlessBasicStackProps) {
    super(scope, id, props);

    const stack = Stack.of(this);
    const projectName = `${stack.stackName}-proj`;

    const dbuserSecret = new secretsmanager.Secret(this, 'DatabaseUserSecret', {
      generateSecretString: {
        secretStringTemplate: JSON.stringify({ username: 'serverless-user' }),
        generateStringKey: 'password',
        excludeCharacters: '%+~`#$&*()|[]{}:;<>?!\'/@"\\=-.,',
      },
    });

    this.dbUserSecret = dbuserSecret;
    const ipAccessList = props.ipAccessList;

    // see https://github.com/mongodb/awscdk-resources-mongodbatlas/blob/main/examples/l3-resources/atlas-serverless-basic.ts#L22
    const basic = new AtlasServerlessBasic(this, 'serverless-basic', {
      serverlessProps: {
        profile: props.profile,
        providerSettings: {
          providerName: ServerlessInstanceProviderSettingsProviderName.SERVERLESS,
          regionName: 'EU_WEST_1',
        },
      },
      projectProps: {
        orgId: props.orgId,
        name: projectName,
      },
      dbUserProps: {
        username: 'serverless-user',
      },
      ipAccessListProps: {
        accessList: [
          { ipAddress: ipAccessList, comment: 'My first IP address' },
        ],
      },
      profile: props.profile,
    });

    this.connectionString = basic.mserverless.getAtt('ConnectionStrings.StandardSrv').toString();

    new CfnOutput(this, 'ProjectName', { value: projectName });
    new CfnOutput(this, 'ConnectionString', { value: this.connectionString });
  }
}
Lastly, you can check and deploy the atlas-serverless-basic CDK construct with:
npx cdk diff atlas-serverless-basic-stack
npx cdk deploy atlas-serverless-basic-stack
Verify in the Atlas UI, as well as the AWS Management Console, that all underlying MongoDB Atlas resources have been created. Note that the database username and password are stored as a new secret in AWS Secrets Manager, in the AWS region you specified above.
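You can also spot-check the new resources from the terminal with the Atlas CLI (assuming your CLI version includes these command groups; use the project ID shown in the deployment output):
atlas projects list
atlas serverless list --projectId YOUR_PROJECT_ID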

Step 5: copy the auto-generated database username and password created in AWS Secrets Manager secret into Atlas

When we initially created the Atlas database user credentials, we created a random password, and we can’t simply copy that into AWS Secrets Manager because this would expose our database password in our CloudFormation template.
To avoid this, we need to manually update the MongoDB Atlas database user password from the secret stored in AWS Secrets Manager so they will be in sync. The AWS Lambda function will then pick this password from AWS Secrets Manager to successfully authenticate to the Atlas serverless instance.
We can do this programmatically via the Atlas CLI. To get started, we first need to make sure the Atlas CLI is configured with the correct PAK that we created as part of our initial setup:
atlas config init
We then input the correct PAK and select the correct project ID. For example:
The terminal shows the Atlas CLI setup prompts: Atlas Public API Key, Atlas Private API Key, Atlas Project ID, and Output Format.
Next, we update our MongoDB Atlas database user's password with the value generated in AWS Secrets Manager, which we can copy from the AWS Management Console. This can be done with the command:
atlas dbusers update serverless-user --password INSERT_YOUR_AWS_SECRET_MANAGER_PASSWORD
You can verify this operation either by reviewing the command's response (“Successfully updated database user serverless-user”) or by checking the Database Access section in the Atlas UI.
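If you prefer not to copy the password by hand, one option is to read it straight from AWS Secrets Manager and pass it to the Atlas CLI in a single step. This sketch assumes jq is installed and uses a placeholder for the ARN of the secret created by the atlas-serverless-basic-stack:
SECRET_ARN=YOUR_DATABASE_USER_SECRET_ARN
PASSWORD=$(aws secretsmanager get-secret-value --secret-id "$SECRET_ARN" --query SecretString --output text | jq -r .password)
atlas dbusers update serverless-user --password "$PASSWORD"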

Step 6: create and deploy AWS Lambda, a Python-based Lambda function, and Amazon API Gateway CDK constructs

At this point, all your core MongoDB Atlas services should have been provisioned successfully. We next move on to provisioning the remaining pieces: the Python-based AWS Lambda function and the Amazon API Gateway CDK constructs.
Next, we update our <your_project_name>.ts file in the bin directory:
  • Reference your VPC ID created in your AWS account.
    • To retrieve the VPC IDs in your AWS region that you wish to deploy into, simply use the AWS CLI command:
aws ec2 describe-vpcs
  • Create the AWS Lambda resource and associate with Lambda function.
  • Create the API Gateway API with the Lambda handler.
#!/usr/bin/env node
import 'source-map-support/register';
import * as path from 'path';
import {
  aws_ec2 as ec2,
  aws_lambda as lambda,
  aws_apigateway as apigw,
  Duration,
} from 'aws-cdk-lib';
import * as cdk from 'aws-cdk-lib';
import { SubnetType } from 'aws-cdk-lib/aws-ec2';
import { AtlasBootstrapExample, AtlasServerlessBasicStack } from '../lib/cloudshell-user-stack'; // update "cloudshell-user" with your .ts filename

const app = new cdk.App();
const env = { region: process.env.CDK_DEFAULT_REGION, account: process.env.CDK_DEFAULT_ACCOUNT };

// the bootstrap stack
new AtlasBootstrapExample(app, 'mongodb-atlas-bootstrap-stack', { env });

type AccountConfig = {
  readonly orgId: string;
  readonly projectId?: string;
}

const MyAccount: AccountConfig = {
  orgId: '63234d3234ec0946eedcd7da', // update with your Atlas Org ID
};

const MONGODB_PROFILE_NAME = 'development';

// the serverless stack with the mongodb atlas serverless instance
const serverlessStack = new AtlasServerlessBasicStack(app, 'atlas-serverless-basic-stack', {
  env,
  ipAccessList: '46.137.146.59', // input your static IP address from the NAT gateway
  profile: MONGODB_PROFILE_NAME,
  ...MyAccount,
});

// Reference your VPC ID created in your AWS account
const vpc = ec2.Vpc.fromLookup(serverlessStack, 'VPC', {
  vpcId: 'vpc-0060b48b973dbe4a5', // Use your actual VPC ID here
});

// The demo lambda function.
const handler = new lambda.Function(serverlessStack, 'LambdaFunc', {
  code: lambda.Code.fromAsset(path.join(__dirname, '../lambda/playground')),
  runtime: lambda.Runtime.PYTHON_3_10,
  handler: 'index.handler',
  timeout: Duration.seconds(30),

  vpc,
  vpcSubnets: {
    subnetType: SubnetType.PRIVATE_WITH_EGRESS,
  },

  environment: {
    CONN_STRING_STANDARD: serverlessStack.connectionString,
    DB_USER_SECRET_ARN: serverlessStack.dbUserSecret.secretArn,
  },
});

// allow the handler to read the db user secret
serverlessStack.dbUserSecret.grantRead(handler);

// create the API Gateway REST API with the lambda handler.
new apigw.LambdaRestApi(serverlessStack, 'RestAPI', { handler });
Next, we create Lambda function directories:
mkdir -p lambda/playground
touch lambda/playground/index.py
The Python code below is a handler function for an AWS Lambda function that interacts with the MongoDB Atlas serverless instance via a public endpoint. It fetches database credentials from AWS Secrets Manager, constructs a MongoDB Atlas connection string using these credentials, and connects to the MongoDB Atlas serverless instance.
The function then generates and inserts 20 sample sales records with random data into a sales collection within the database. It also aggregates sales data for the year 2023, counting the number of sales and summing the total sales amount by item. Finally, it prints the count of sales in 2023 and the aggregation results, returning this information as a JSON response.
We therefore populate lambda/playground/index.py with:
from datetime import datetime, timedelta
from pymongo.mongo_client import MongoClient
from pymongo.server_api import ServerApi
import random, json, os, re, boto3

# Function to generate a random datetime between two dates
def random_date(start_date, end_date):
    time_delta = end_date - start_date
    random_days = random.randint(0, time_delta.days)
    return start_date + timedelta(days=random_days)

def get_private_endpoint_srv(mongodb_uri, username, password):
    """
    Get the private endpoint SRV address from the given MongoDB URI.
    e.g. `mongodb+srv://my-cluster.mzvjf.mongodb.net` will be converted to
    `mongodb+srv://<username>:<password>@my-cluster-pl-0.mzvjf.mongodb.net/?retryWrites=true&w=majority`
    """
    match = re.match(r"mongodb\+srv://(.+)\.(.+).mongodb.net", mongodb_uri)
    if match:
        return "mongodb+srv://{}:{}@{}-pl-0.{}.mongodb.net/?retryWrites=true&w=majority".format(username, password, match.group(1), match.group(2))
    else:
        raise ValueError("Invalid MongoDB URI: {}".format(mongodb_uri))

def get_public_endpoint_srv(mongodb_uri, username, password):
    """
    Get the public endpoint SRV address from the given MongoDB URI.
    e.g. `mongodb+srv://my-cluster.mzvjf.mongodb.net` will be converted to
    `mongodb+srv://<username>:<password>@my-cluster.mzvjf.mongodb.net/?retryWrites=true&w=majority`
    """
    match = re.match(r"mongodb\+srv://(.+)\.(.+).mongodb.net", mongodb_uri)
    if match:
        return "mongodb+srv://{}:{}@{}.{}.mongodb.net/?retryWrites=true&w=majority".format(username, password, match.group(1), match.group(2))
    else:
        raise ValueError("Invalid MongoDB URI: {}".format(mongodb_uri))


# Fetch the database credentials from AWS Secrets Manager at module load time.
client = boto3.client('secretsmanager')
conn_string_srv = os.environ.get('CONN_STRING_STANDARD')
secretId = os.environ.get('DB_USER_SECRET_ARN')
json_secret = json.loads(client.get_secret_value(SecretId=secretId).get('SecretString'))
username = json_secret.get('username')
password = json_secret.get('password')

def handler(event, context):
    # conn_string_private = get_private_endpoint_srv(conn_string_srv, username, password)
    conn_string = get_public_endpoint_srv(conn_string_srv, username, password)
    print('conn_string=', conn_string)

    client = MongoClient(conn_string, server_api=ServerApi('1'))

    # Select the database to use.
    db = client['mongodbVSCodePlaygroundDB']

    # Create 20 sample entries with dates spread between 2021 and 2023.
    entries = []

    for _ in range(20):
        item = random.choice(['abc', 'jkl', 'xyz', 'def'])
        price = random.randint(5, 30)
        quantity = random.randint(1, 20)
        date = random_date(datetime(2021, 1, 1), datetime(2023, 12, 31))
        entries.append({
            'item': item,
            'price': price,
            'quantity': quantity,
            'date': date
        })

    # Insert a few documents into the sales collection.
    sales_collection = db['sales']
    sales_collection.insert_many(entries)

    # Run a find command to view items sold in 2023.
    sales_2023 = sales_collection.count_documents({
        'date': {
            '$gte': datetime(2023, 1, 1),
            '$lt': datetime(2024, 1, 1)
        }
    })

    # Print a message to the output window.
    print(f"{sales_2023} sales occurred in 2023.")

    pipeline = [
        # Find all of the sales that occurred in 2023.
        { '$match': { 'date': { '$gte': datetime(2023, 1, 1), '$lt': datetime(2024, 1, 1) } } },
        # Group the total sales for each product.
        { '$group': { '_id': '$item', 'totalSaleAmount': { '$sum': { '$multiply': [ '$price', '$quantity' ] } } } }
    ]

    cursor = sales_collection.aggregate(pipeline)
    results = list(cursor)
    print(results)
    response = {
        'statusCode': 200,
        'headers': {
            'Content-Type': 'application/json'
        },
        'body': json.dumps({
            'sales_2023': sales_2023,
            'results': results
        })
    }

    return response
Lastly, we create a file that stores the requirements for the Python playground application:
touch lambda/playground/requirements.txt
Populate this file with:
pymongo
requests
boto3
testresources
urllib3==1.26
Then, install the dependencies listed in requirements.txt:
cd lambda/playground
pip install -r requirements.txt -t .
This installs all required Python packages into the playground directory, which the AWS CDK bundles into a zip file that you can see in the AWS Lambda console after deployment.
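As a quick, optional sanity check that the dependencies were vendored next to the handler, list the directory:
ls lambda/playground | grep -i pymongo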

Step 7: create suggested AWS networking infrastructure

AWS Lambda functions placed in public subnets do not automatically have internet access because Lambda functions do not have public IP addresses, and a public subnet routes traffic through an internet gateway (IGW). To access the internet, a Lambda function can be associated with a private subnet with a route to a NAT gateway.
First, ensure that you have a NAT gateway created in your public subnet. Then, create a route from a private subnet (where your AWS Lambda resource will live) to the NAT gateway, and route the public subnet to the IGW. The benefit of this networking approach is that we can associate a static IP with our NAT gateway, so this becomes our one and only Atlas project IP access list entry. All traffic still goes over the public internet through the NAT gateway and is TLS-encrypted; the access list allows only the NAT gateway's static public IP and nothing else.
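If you want to script the NAT gateway setup rather than use the console, the rough shape with the AWS CLI is sketched below; the subnet, allocation, route table, and NAT gateway IDs are placeholders you would substitute from your own VPC:
# Allocate a static Elastic IP; this becomes your single Atlas IP access list entry.
aws ec2 allocate-address --domain vpc
# Create the NAT gateway in a public subnet, using the AllocationId returned above.
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC_SUBNET_ID --allocation-id eipalloc-ALLOCATION_ID
# Route outbound traffic from the private subnet's route table through the NAT gateway.
aws ec2 create-route --route-table-id rtb-PRIVATE_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-NAT_GATEWAY_ID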
Alternatively, you can choose to build with AWS PrivateLink, which does carry additional costs but dramatically simplifies networking management by directly connecting AWS Lambda to a MongoDB Atlas serverless instance without the need to maintain subnets, IGWs, or NAT gateways. AWS PrivateLink also creates a private connection to AWS services, reducing the risk of exposing data to the public internet.
Select whichever networking approach best suits your organization’s needs.
The diagram below illustrates the NAT gateway approach for connecting the application to the internet and external services. The components of the architecture are contained within the AWS Cloud environment. Here's a description of the flow and components depicted:
  • An HTTP Client outside AWS Cloud sends a request to the Amazon API Gateway.
  • The Amazon API Gateway, represented by a pink icon, receives the HTTP request and passes it to AWS Lambda, symbolized by an orange icon. AWS Lambda is a serverless compute service that automatically manages the compute resources.
  • The AWS Cloud environment is divided into a Virtual Private Cloud (VPC), denoted by a blue border.
  • Inside the VPC, there are two types of subnets:
    • A Public subnet, which has direct access to the Internet via an Internet Gateway, depicted with a purple icon.
    • A Private subnet, which does not have direct access to the Internet.
  • To allow the AWS Lambda function within the Private subnet to access the Internet, a NAT (Network Address Translation) Gateway, shown with a purple NAT icon, is used. The NAT Gateway is located in the Public subnet.
  • Traffic from the AWS Lambda function is routed through the NAT Gateway to reach external services.
  • The external service that the AWS Lambda function is communicating with is a MongoDB Atlas Serverless Instance, indicated by a green icon with a database symbol. This instance is outside of the AWS Cloud and accessible over the internet.
The image is a schematic representation of the AWS PrivateLink Approach for securely connecting services within AWS Cloud.
The components of this architecture are as follows:
  • On the left side, an "HTTP Client" represented by a cloud icon, is initiating a request.
  • This request is sent to the "Amazon API Gateway," indicated by a pink icon with two brackets and a lightning symbol, suggesting it's an entry point for APIs.
  • Below the API Gateway, there's an "AWS Lambda" function, symbolized by an orange Lambda (λ) icon, which is a serverless computing service in AWS.
  • The Lambda function is within an "AWS Cloud" boundary, shown with a black outline.
  • Inside the AWS Cloud, there's a "Virtual private cloud (VPC)" depicted by a blue border, which is a segregated part of the AWS Cloud, isolated from other networks.
  • Within the VPC, there is a "VPC Endpoint" represented by a circular icon with a purple border and an inward-facing arrow, indicating it's a gateway for private connections.
  • The "AWS PrivateLink," shown with a purple cloud-like icon, facilitates private connectivity between services within AWS, bypassing the public internet.
  • On the far right, there's a "MongoDB Atlas Serverless Instance" depicted with a green database icon and a plug, indicating that it's an external service connected via PrivateLink.
Finally, we are ready to check and deploy the atlas-serverless-basic-stack (which now also contains the Lambda function and API Gateway) one more time:
npx cdk diff atlas-serverless-basic-stack
npx cdk deploy atlas-serverless-basic-stack

Step 8: review and test the RESTful API endpoint from the serverless application

Review the AWS Management Console under CloudFormation and Lambda:
The image is a screenshot of the AWS Management Console, specifically within the AWS CloudFormation service. The focus of the screenshot is on a particular CloudFormation stack named "atlas-serverless-basic-stack2".
Click “Rest API Endpoint” to see if the Lambda function returns a response:
The terminal output from the CloudFormation deployment lists the API Gateway URL under Outputs. Accessing that URL (via a web browser or a tool like curl) invokes the Lambda function, which processes the request and returns a response, confirming that the serverless application's endpoint is operational.
Test the Lambda function by entering the REST API endpoint URL into a web browser.
The browser displays the Lambda function's output in JSON format: a key named "sales_2023" with a value of 6, indicating there are six sales records for the year 2023, and, under the "results" key, multiple objects each with an "_id" and "totalSaleAmount". This structure implies the Lambda function is working correctly, as it's able to execute and return structured data. If you need to test this Lambda function further or integrate it with other services, you would typically do so by calling this REST API endpoint from your application with the appropriate HTTP method (GET, POST, etc.), headers, and any required parameters or body content.
Lastly, you can also curl this API endpoint:
curl -s https://[REDACTED].execute-api.eu-west-1.amazonaws.com/prod/ | jq -r .
The terminal screenshot shows the use of the curl command to make a request to the AWS Lambda function via the provided API endpoint. The -s flag runs curl in silent mode, so it won't show progress or error messages. The output of the curl command is piped into jq, a lightweight and flexible command-line JSON processor, with the -r flag to output raw strings rather than JSON-quoted strings.

Step 9 (optional): tear down infrastructure to prevent unwanted charges

npx cdk destroy atlas-serverless-basic-stack
And then finally:
npx cdk destroy mongodb-atlas-bootstrap-stack
Note: This order ensures the serverless stack is destroyed first. If you use the npx cdk destroy --all command instead, all stacks will be deleted in parallel because we did not specify a stack dependency, which may cause failures: the required bootstrap resources could be removed before all other resources are destroyed.

All done

Congratulations! You have just deployed your first serverless application with MongoDB Atlas serverless, AWS Lambda, and Amazon API Gateway with the AWS CDK.
Next, head to YouTube for a full step-by-step overview and walkthrough on a recent episode of MongoDB TV Cloud Connect (aired 15 Feb 2024). Also, see the GitHub repo with the full open-source code of materials used in this demo serverless application.
The MongoDB Atlas CDK resources are open-sourced under the Apache-2.0 license and we welcome community contributions. To learn more, see our contributing guidelines.
Get started quickly by creating a MongoDB Atlas account through the AWS Marketplace and start building with MongoDB Atlas and the AWS CDK today!
