The Atlas CLI, a unified command-line tool for creating and managing MongoDB Atlas deployments, now supports local development, including the ability to develop with Atlas Search and Atlas Vector Search locally. This makes it even easier to create full-text search or AI-powered applications, no matter your preferred environment for building with MongoDB.
Please note that the new local experience is intended only for development purposes and not for production use cases.
It only takes two commands to get started:
1. Download and install using the Homebrew package manager (more options): $ brew install mongodb-atlas
2. Set up your local development environment: $ atlas deployments setup
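Once setup completes, you can sanity-check the deployment from the same terminal. For example (a sketch: atlas deployments setup prints the actual connection string and port it chose, which may differ from the default 27017 shown here):

$ atlas deployments list
$ mongosh "mongodb://localhost:27017/?directConnection=true"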
Hi @Dan_Musorrafiti,
The local development experience for Atlas, Atlas Search, and Atlas Vector Search is designed and built to address the needs of local development and testing scenarios. To illustrate, local deployments operate as single-node replica sets and are accessible without requiring authentication.
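For example, with a deployment running on the default port, you can see the single-node topology from mongosh without any credentials (a quick sketch; the port may differ depending on your setup):

$ mongosh --quiet --eval 'rs.status().members.map(m => m.stateStr)'
[ 'PRIMARY' ]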
For running Atlas Search in a production environment, we recommend using Atlas deployments hosted in the cloud.
The Public Preview of the local development experience for Atlas does indeed have limited platform support. We're planning to expand it with Ubuntu and GitHub Actions support ahead of General Availability.
In the meantime, could you share more details about the errors you're getting on the GitHub Actions run?
Sorry for the late response, Jake; I was on leave.
Regarding the timelines: we're planning to look into Ubuntu and GitHub Actions support in the first half of next year, but we can't be more precise at this point.
Hi @Jakub_Lazinski,
I have been facing an issue with creating an Atlas local deployment through a Docker container on a macOS host. I am using the mongodb/atlas image from Docker Hub. When I try to create a local deployment, it exits with exit code 125. If you can point me to a reference for creating an Atlas deployment through a container, that would be really helpful.
Hey Yaj, I was actually playing with this today and had the same issue. I believe it is related to Docker not being able to spin up the Podman container required for a local deployment.
Yes, after researching a few blog posts I came to the same conclusion, but couldn't find a solution. I would appreciate any suggestions.
Thanks, @Yaj_Vikani and @Jake_Turner, for bringing this up, and apologies for the delayed response. Running the local Atlas development experience inside a container was not included in the Public Preview, as noted in the known limitations section of our documentation.
However, we've recently made progress on this much-anticipated feature. With the newly released Atlas CLI 1.14.0, we now support running the local experience from a container. We are still refining dedicated documentation, but for now, here's a quick guide to running the local Atlas experience directly from Docker or Docker Compose:
Docker
1. Fetch the latest mongodb/atlas Docker image: docker pull mongodb/atlas:latest
2. Start the Docker image in bash mode: docker run -p 27777:27017 --privileged -it mongodb/atlas bash (more options here).
3. Set up a local deployment: atlas deployments setup --bindIpAll --username root --password root --type local --force
4. To connect to the deployment from the host (outside the container), use: mongosh --port 27777 --username root --password root
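Note that the run command in step 2 maps the container's 27017 to 27777 on the host, which is why mongosh connects on port 27777 above. If you're connecting a driver rather than mongosh, you may also need directConnection=true, since the local deployment is a single-node replica set; for example (a sketch using the credentials from step 3):

mongodb://root:root@localhost:27777/?directConnection=true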
Docker Compose
1. Install docker-compose: brew install docker-compose
2. Navigate to your project folder and create a docker-compose.yml file with the content provided below.
Hello! First of all, thank you for the announcement of this exciting feature! It certainly makes developers' lives much easier and allows us to create reproducible environments.
Quote: " 2. Navigate to your project folder and create a docker-compose.yml file with the content provided below."
Where do we get the contents of docker-compose.yml from please?
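While the dedicated docs are being finalized, a minimal docker-compose.yml along these lines should work; the service name is arbitrary, and the credentials and port mapping simply mirror the Docker steps above:

services:
  mongo_atlas:
    image: mongodb/atlas
    privileged: true
    ports:
      - "27777:27017"
    command: /bin/bash -c "atlas deployments setup --bindIpAll --username root --password root --type local --force && tail -f /dev/null"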
Thank you, it seems to work just fine on docker-ce Linux!
One question regarding the compose configuration: how can I make sure that the DB data is persisted on the host, as in this example for a regular mongo image:
volumes:
- /data/db:/data/db
I'm not sure whether this is 100% relevant for the Atlas version.
The thing is that it currently creates a new cluster each time one runs docker compose, which is obviously not the desired behavior; the desired behavior is to create the cluster once and reuse it on subsequent runs.
There is also an issue at the moment: if one stops the Docker container and resumes it again, the cluster shows an IDLE state after resuming and is not responsive anymore. Stopping the cluster manually with atlas deployments stop before stopping the container and then resuming it with atlas deployments start once the container is resumed fixes the issue, but this seems fragile and not really a proper solution either.
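Concretely, the fragile workaround is (run inside the container):

atlas deployments stop    # before stopping the container
# ...stop and later restart the container...
atlas deployments start   # once the container is back up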
Thanks @Igor_Prokopenkov for raising these great questions.
Could you shed a bit more light on how you are using the docker-compose method here? Is it for a CI pipeline, a local dev environment, or something else?
In the meantime, I'll see how we can map a volume to the container's data directory without breaking the solution.
Re "if one stops the Docker container and resumes it again, the cluster shows an IDLE state after resuming and is not responsive anymore": can you share the steps you're taking here, to make sure we're on the same page?
Also, what happens when you try running atlas deployments start right after the container is restarted?
Hi @Jakub_Lazinski, I’ve had similar issues, and can share my details. I appreciate your engagement and trying to get this all to work in a local experience.
docker-compose. We want to use an Atlas Docker container both for local dev and a CI pipeline.
A. CI pipeline. The CI pipeline seems doable; the Docker container spins up, we can create a deployment with a command, and based on what we've seen, we should be able to connect, populate data, run tests on functions/endpoints accessing/manipulating that data, etc. Running this all once, in isolation, seems to work OK.
The one concern we have about CI is that there doesn't seem to be any caching of the MongoDB binaries downloaded in step 2 of cluster creation. Not being able to cache costs money in the form of additional pipeline minutes.
B. Local dev. Local dev is the real issue for us. We want to persist data, so that as devs start and stop containers as they need to, they don’t have to keep recreating the entire database (losing whatever objects they may have populated on their own by running a mongorestore on shared data or whatever).
Or, in other words, we want to be able to create a dev deployment once, and basically restart it and reconnect to it whenever we start the container, storing the data on a persisted volume. Ideally, we could do all of this via a single command: create the deployment if it doesn't exist, start it if it does.
Any input on how to do that would be wonderful. Here is what we have tried so far.
We have kinda been able to persist the data to a mapped volume, but have not been able to access the same deployment once a container is stopped. And although we can see the data on the mapped volume, we haven’t been able to access that data again via the deployment (more on that below).
We tried giving the deployment the name dev. We searched for the .wt files in the Docker container, which led us to adding this volume mapping:
… and I can connect using that connection string as well as see the data in my ./data/atlas folder.
If I then stop my container and re-up it using the same docker-compose, I have gotten a variety of responses.
Obviously, that docker-compose has a command of atlas deployments setup, which might well be inherently problematic to rerun.
In any event, I might be wrong, but there seems to be some relationship between the time elapsed since I stopped the container and the error that I get.
I seem to get this first if I immediately re-up:
mongo_atlas | Error: "dev" deployment already exists and is currently in "running" state
mongo_atlas exited with code 1
If I try to re-up rapidly after that, I get this:
mongo_atlas | 2/2: Creating your deployment dev...
mongo_atlas | Error: exit status 125: Error: network name mdb-local-dev already used: network already exists
If I get that “network already exists” error, it seems to repeat forever.
However, sometimes I have gotten this instead (which I think has happened when I have waited longer to re-up):
mongo_atlas | 3/3: Creating your deployment dev...
mongo_atlas | Error: failed to connect to mongodb server: connection() error occurred during connection handshake: auth error: sasl conversation error: unable to authenticate using mechanism "SCRAM-SHA-1": (AuthenticationFailed) Authentication failed.
mongo_atlas exited with code 1
When I get that error, I am eventually able to restart the deployment by upping that same docker-compose, but it seems to re-initialize things with default data, losing whatever I tried to persist.
Again, that docker-compose has a command of atlas deployments setup. If I try changing the docker-compose command to this:
… and then immediately re-up after stopping my container, I get this message:
mongo_atlas | Error: currently there are no deployments
Basically, no matter what I have tried, once the container has been stopped, I have not been able to subsequently access the same deployment I created the first time I ran the docker-compose and keep my data.
If there are different commands that I should be running, I'd be grateful to learn what they are, and some further documentation on maintaining local dev environments would be helpful.
Running the container and deploying manually. I can’t speak for @Igor_Prokopenkov, who may know more and have done more, but here is what I have done:
Execute docker run -p 27017:27017 --privileged -it mongodb/atlas bash
Set up a deployment via atlas deployments setup dev --type local --port 27017 --bindIpAll --username root --password root --mdbVersion 7.0 --force or similar
Running atlas deployments list shows me this:
sh-4.4# atlas deployments list
NAME TYPE MDB VER STATE
dev LOCAL 7.0.4 IDLE
And I can connect using the connection string.
If I then run atlas deployments pause dev before stopping the container, then atlas deployments list gives me this both before I stop the container and after I restart it:
sh-4.4# atlas deployments list
NAME TYPE MDB VER STATE
dev LOCAL 7.0.4 STOPPED
And after restarting, I can run atlas deployments start dev and reconnect.
However, if I simply stop the container without first running atlas deployments pause dev, then I get this when running atlas deployments list after re-upping:
sh-4.4# atlas deployments list
NAME TYPE MDB VER STATE
dev LOCAL 7.0.4 IDLE
… but I can’t connect using the connection string. I also can’t pause it:
sh-4.4# atlas deployments pause dev
Error: exit status 255: Error: OCI runtime error: runc: exec failed: cannot exec in a stopped container
Note that it still continues to show as IDLE when I run atlas deployments list.
And if I try to run atlas deployments start dev, it hangs.
So we could follow an OS-agnostic dev approach by using the Docker image to run a dev deployment in this way, but remembering to pause the deployment every time before stopping the container (or even being able to) doesn't seem feasible.
This is a very comprehensive description. Nothing to add really, just want to share my docker compose automation script to cover the manual steps mentioned above:
#!/bin/bash -x

# Function to stop the Atlas deployment gracefully
stop_atlas() {
  echo "Stopping Atlas deployment..."
  atlas deployments stop
}

# Function to start the Atlas deployment
start_atlas() {
  echo "Starting Atlas deployment..."
  atlas deployments start
}

# Trap SIGTERM/SIGINT and stop the deployment before the container exits
trap 'stop_atlas' SIGTERM SIGINT

# Check if the deployment exists and its state
deployment_status=$(atlas deployments list | grep 'LOCAL')

if [[ -z "$deployment_status" ]]; then
  echo "No local deployment found. Setting up..."
  atlas deployments setup --bindIpAll --username root --password root --type local --force
elif [[ $deployment_status == *"STOPPED"* ]]; then
  start_atlas
fi

# Keep the container alive while still reacting to signals. A plain
# `sleep 1000` doesn't work here because bash delays running a trap until
# the foreground command finishes; `wait` on a background tail is
# interruptible, so the trap fires immediately.
while true; do
  tail -f /dev/null &
  wait ${!}
done
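A script like this can be wired into the compose file by mounting it and using it as the container command, along these lines (atlas-entrypoint.sh is just a placeholder for wherever you save the script):

services:
  mongo_atlas:
    image: mongodb/atlas
    privileged: true
    ports:
      - "27777:27017"
    volumes:
      - ./atlas-entrypoint.sh:/usr/local/bin/atlas-entrypoint.sh:ro
    command: /bin/bash /usr/local/bin/atlas-entrypoint.sh

With the script as the container's main process, the SIGTERM sent by docker stop triggers the trap, so the deployment is stopped gracefully before the container exits.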