Oct 2023

Hello everyone!

Today, we are excited to announce the release of a new local experience with Atlas, Atlas Search, and Atlas Vector Search with the Atlas CLI.

The Atlas CLI, a unified command-line tool for creating and managing MongoDB Atlas deployments, now supports local development, including the ability to develop with Atlas Search and Atlas Vector Search locally. This makes it even easier to create full-text search or AI-powered applications, no matter your preferred environment for building with MongoDB.

Please note that the new local experience is intended only for development purposes and not for production use cases.

It only takes two commands to get started:

  1. Download and install using the Homebrew package manager (other installation options are available):
    $ brew install mongodb-atlas
  2. Set up your local development environment:
    $ atlas deployments setup
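Once setup completes, a typical first session looks roughly like this. This is a sketch: the `local_conn_string` helper is illustrative (not part of the CLI), and the `root`/`root` credentials and port 27017 are assumptions standing in for whatever you chose during setup (local deployments can also run without authentication).

```shell
#!/bin/sh
# Sketch of a first session after `atlas deployments setup`.
# `local_conn_string` is an illustrative helper, not part of the CLI;
# the root/root credentials and port 27017 are setup-time assumptions.
local_conn_string() {
  # $1 = username, $2 = password, $3 = port
  printf 'mongodb://%s:%s@localhost:%s/?directConnection=true' "$1" "$2" "$3"
}

# Guarded so the sketch is harmless where the tools are not installed.
if command -v atlas >/dev/null 2>&1; then
  atlas deployments list    # show local deployments and their state
fi
if command -v mongosh >/dev/null 2>&1; then
  mongosh "$(local_conn_string root root 27017)"
fi
```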

Try it today and let us know what you think. If you’re interested in sharing direct feedback, please send an email to local_dev_atlascli-eap@mongodb.com and we will get in touch with you.

Jakub Lazinski

9 days later

Can you explain what these limitations are? What mechanisms prevent use in production?

Hi @Dan_Musorrafiti ,
The local development experience for Atlas, Atlas Search, and Vector Search is designed and built to address the needs of local development and testing scenarios. To illustrate, local deployments operate as single-node replica sets and are accessible without requiring authentication.

For production use of Atlas Search, we recommend Atlas deployments hosted in the cloud.

12 days later

Hi Jake,

The Public Preview of the local development experience for Atlas indeed has limited platform support. We’re planning to add Ubuntu and GitHub Actions support as we move toward General Availability.

In the meantime, could you share more details about the errors you are getting on the GitHub Actions run?

Thanks,
Jakub

Is there any approximate timeline for GA or a reference to changes expected to be made?

Listed my problems on the relevant GH repos

Sorry Jake for the late response, I was on leave.
Regarding the timelines: we’re planning to look into Ubuntu and GitHub Actions support in the first half of next year but can’t say more precisely at this point.

1 month later

Hi @Jakub_Lazinski,
I have been facing an issue creating an Atlas local deployment through a Docker container on a macOS host. I am using the ‘mongodb/atlas’ image from Docker Hub to perform the task. Upon trying to create the local deployment, it exits with exit code 125. If you can give a reference for creating an Atlas deployment through a container, that’d be really helpful.

Hey Yaj, I was actually playing with this today and had the same issue. I believe it is related to Docker not being able to spin up the required Podman container needed for a local deployment.

Hello @Jake_Turner,

Yes, after researching a few blog posts, I came to the same conclusion but couldn’t find a solution. I would appreciate any suggestions for plausible fixes.

cc: @Jakub_Lazinski

Thanks, @Yaj_Vikani and @Jake_Turner, for bringing this up, and apologies for the delayed response. Running the local Atlas development experience inside a container was not included in the Public Preview, as noted in the known limitations section of our documentation.

However, some progress has since been made on this anticipated feature. With the newly released Atlas CLI 1.14.0, we now support running the local experience from a container. We are currently refining dedicated documentation, but for now, here’s a quick guide to running the local Atlas experience directly from Docker or Docker Compose:

Docker

  1. Fetch the latest mongodb/atlas Docker image with docker pull mongodb/atlas:latest
  2. Start the Docker image in bash mode using docker run -p 27777:27017 --privileged -it mongodb/atlas bash (more options are available).
  3. Set up a local deployment with atlas deployments setup --bindIpAll --username root --password root --type local --force
  4. To connect to the deployment from the host (outside the container), use: mongosh --port 27777 --username root --password root
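The steps above can be strung together non-interactively in one small script. This is a sketch, not an official workflow: the container name `atlas-local` is just an example, the `build_run_cmd` helper is illustrative, and it assumes the image tolerates the command override used to keep the container alive.

```shell
#!/bin/sh
# Sketch tying the Docker steps together non-interactively.
# `atlas-local` is an example container name; `build_run_cmd` is an
# illustrative helper that only assembles the `docker run` command.
build_run_cmd() {
  # $1 = host port to map to the container's 27017
  printf 'docker run -p %s:27017 --privileged -d --name atlas-local mongodb/atlas tail -f /dev/null' "$1"
}

# Guarded so the sketch is harmless where Docker is not installed.
if command -v docker >/dev/null 2>&1; then
  docker pull mongodb/atlas:latest
  eval "$(build_run_cmd 27777)"
  # Run the setup inside the container instead of an interactive bash:
  docker exec atlas-local atlas deployments setup \
    --bindIpAll --username root --password root --type local --force
  # From the host: mongosh --port 27777 --username root --password root
fi
```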

Docker-Compose

  1. Install docker-compose with brew install docker-compose
  2. Navigate to your project folder and create a docker-compose.yml file with the content provided below.
services:
  mongo:
    image: mongodb/atlas
    privileged: true
    command: |
      /bin/bash -c "atlas deployments setup --type local --port 27777 --bindIpAll --username root --password root --force && tail -f /dev/null"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 27777:27777
  3. Start docker-compose with docker-compose up
  4. Connect to the deployment from the host (outside the container) using: mongosh --port 27777 --username root --password root

Please let me know how it works out for you!

Hello, first of all, thank you for the announcement of this exciting feature! This certainly makes developers’ lives much easier and allows creating reproducible environments.
Quote: "2. Navigate to your project folder and create a docker-compose.yml file with the content provided below."
Where do we get the contents of docker-compose.yml from, please?

This works for me to recreate the above:

version: "3.8"

services:
  mongo1:
    image: mongodb/atlas:latest
    ports:
      - 27017:27017
    privileged: true
    entrypoint: atlas deployments setup --bindIpAll --username root --password root --type local --force

@Igor_Prokopenkov

Thank you, it seems to work just fine on docker-ce Linux!
One question regarding the compose configuration: how can I make sure that the DB data is persisted on the host, like in this example for a regular mongo image:

volumes:
  - /data/db:/data/db

Not sure if this is 100% relevant for the Atlas version.

The thing is that at the moment it creates a new cluster each time one runs docker compose, which is obviously not the desired behavior; the desired behavior is to create the cluster once and reuse it on subsequent runs.

There is also an issue at the moment: if one stops the docker container and resumes it again, the cluster shows an IDLE state after resuming and is not responsive anymore. Stopping the cluster manually with atlas deployments stop before stopping the container, and then resuming it with atlas deployments start when the container is resumed, fixes the issue, but this seems fragile and not really a proper solution either.

8 days later

Thanks @Igor_Prokopenkov for raising these great questions.
Could you shed a bit more light on how you are using the docker-compose method here? Is it for a CI pipeline, a local dev environment, or something else?

In the meantime, I’ll see how we can map a volume to the container’s data directory without breaking the solution.

Re "if one stops the docker container and resumes it again, the cluster shows IDLE state and is not responsive anymore" - can you provide the steps you’re taking here, to make sure we’re on the same page?
Also, what happens when you try running atlas deployments start right after the container is restarted?

Hi @Jakub_Lazinski, I’ve had similar issues, and can share my details. I appreciate your engagement and trying to get this all to work in a local experience.

  1. docker-compose. We want to use an Atlas Docker container both for local dev and a CI pipeline.

A. CI pipeline. The CI pipeline seems doable; the Docker container spins up, we can create a deployment using a command, and based on what we’ve seen, we should be able to connect, populate data, run tests on functions/endpoints accessing/manipulating that data, etc. Running this all once, in isolation, seems to work OK.

datastore:
  image: mongodb/atlas:v1.14.0
  hostname: mongo_atlas
  container_name: mongo_atlas
  privileged: true
  command: |
    /bin/bash -c "atlas deployments setup --type local --port 27017 --bindIpAll --username root --password root --mdbVersion 7.0 --force && tail -f /dev/null"
  ports:
    - 27017:27017
  volumes:
    - ./run/docker.sock:/var/run/docker.sock

The one concern we have about CI is that there doesn’t seem to be any caching of the MongoDB binaries downloaded in step 2 of the cluster creation. Not being able to cache costs money in the form of additional pipeline minutes.

B. Local dev. Local dev is the real issue for us. We want to persist data, so that as devs start and stop containers as they need to, they don’t have to keep recreating the entire database (losing whatever objects they may have populated on their own by running a mongorestore on shared data or whatever).

Or, in other words, we want to be able to create a dev deployment once, and basically restart it and reconnect to it whenever we start the container, storing the data on a persisted volume. Ideally, we could do all of this via a single command: create the deployment if it doesn’t exist, start if it does.
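Roughly, the single-command behavior we are imagining is an entrypoint like this. An untested sketch: `dev` is our deployment name, and `decide_action` is a made-up helper that just pattern-matches the text of `atlas deployments list`; whether re-running setup vs. start behaves well inside a restarted container is exactly what we are unsure about.

```shell
#!/bin/sh
# Untested sketch: create the deployment if absent, start it if stopped.
# `decide_action` is a made-up helper that only inspects the text
# output of `atlas deployments list`; `dev` is our deployment name.
decide_action() {
  # $1 = output of `atlas deployments list`, $2 = deployment name
  case "$1" in
    *"$2"*STOPPED*) echo start ;;   # exists but stopped
    *"$2"*)         echo none  ;;   # exists and running/idle
    *)              echo setup ;;   # not created yet
  esac
}

# Guarded so the sketch is harmless where the CLI is not installed.
if command -v atlas >/dev/null 2>&1; then
  case "$(decide_action "$(atlas deployments list)" dev)" in
    setup) atlas deployments setup dev --type local --port 27017 \
             --username root --password root --mdbVersion 7.0 --force ;;
    start) atlas deployments start dev ;;
  esac
  tail -f /dev/null   # keep the container alive
fi
```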

Any input on how to do that would be wonderful. Here is what we have tried so far.

We have kinda been able to persist the data to a mapped volume, but have not been able to access the same deployment once a container is stopped. And although we can see the data on the mapped volume, we haven’t been able to access that data again via the deployment (more on that below).

We tried giving the deployment the name dev. We searched for the .wt files in the Docker container, which led us to adding this volume mapping:

command: |
  /bin/bash -c "atlas deployments setup dev --type local --port 27017 --username root --password root --mdbVersion 7.0 --force && tail -f /dev/null"
ports:
  - 27017:27017
volumes:
  - ./run/docker.sock:/var/run/docker.sock
  - ./data/atlas:/var/lib/containers/storage/volumes/mongod-local-data-dev/_data

In my own attempts, that does allow me to create the deployment and persist the data. When first upping my docker-compose, I get:

mongo_atlas | 3/3: Creating your deployment dev...
mongo_atlas | Deployment created!
mongo_atlas |
mongo_atlas | connection skipped
mongo_atlas |
mongo_atlas | Connection string: mongodb://root:root@localhost:27017/?directConnection=true

… and I can connect using that connection string as well as see the data in my ./data/atlas folder.

If I then stop my container and re-up it using the same docker-compose, I have gotten a variety of responses.

Obviously, that docker-compose has a command of atlas deployments setup, which might well be inherently problematic to rerun.

In any event, I might be wrong, but there seems to be some relationship to the time elapsed since I stopped the container and the error that I get.

I seem to first get this if immediately re-upping:

mongo_atlas | Error: "dev" deployment already exists and is currently in "running" state
mongo_atlas exited with code 1

If I try to re-up rapidly after that, I get this:

mongo_atlas | 2/2: Creating your deployment dev...
mongo_atlas | Error: exit status 125: Error: network name mdb-local-dev already used: network already exists

If I get that “network already exists” error, it seems to repeat forever.

However, sometimes I have gotten this instead (which I think has happened when I have waited longer to re-up):

mongo_atlas | 3/3: Creating your deployment dev...
mongo_atlas | Error: failed to connect to mongodb server: connection() error occurred during connection handshake: auth error: sasl conversation error: unable to authenticate using mechanism "SCRAM-SHA-1": (AuthenticationFailed) Authentication failed.
mongo_atlas exited with code 1

When I get that error, I am eventually able to restart the deployment by upping that same docker-compose, but it seems to re-initialize things with default data, losing whatever I tried to persist.

Again, that docker-compose has a command of atlas deployments setup. If I try changing the docker-compose command to this:

command: | /bin/bash -c "atlas deployments start && tail -f /dev/null"

… and then immediately re-up after stopping my container, I get this message:

mongo_atlas | Error: currently there are no deployments

Basically, no matter what I have tried, once the container has been stopped, I have not been able to subsequently access the same deployment I created the first time I ran the docker-compose and keep my data.

If there are different commands that I should be running, I’d be grateful to learn what they are – and some further documentation for maintaining local dev environments might be helpful.

  2. Running the container and deploying manually. I can’t speak for @Igor_Prokopenkov, who may know more and have done more, but here is what I have done:
  • Execute docker run -p 27017:27017 --privileged -it mongodb/atlas bash
  • Set up a deployment via atlas deployments setup dev --type local --port 27017 --bindIpAll --username root --password root --mdbVersion 7.0 --force or similar

Running atlas deployments list shows me this:

sh-4.4# atlas deployments list
NAME   TYPE    MDB VER   STATE
dev    LOCAL   7.0.4     IDLE

And I can connect using the connection string.

If I then run atlas deployments pause dev before stopping the container, then atlas deployments list gives me this both before I stop the container and after I restart it:

sh-4.4# atlas deployments list
NAME   TYPE    MDB VER   STATE
dev    LOCAL   7.0.4     STOPPED

And after restarting, I can run atlas deployments start dev and reconnect.

However, if I simply stop the container without first running atlas deployments pause dev, then I get this when running atlas deployments list after re-upping:

sh-4.4# atlas deployments list
NAME   TYPE    MDB VER   STATE
dev    LOCAL   7.0.4     IDLE

… but I can’t connect using the connection string. I also can’t pause it:

sh-4.4# atlas deployments pause dev
Error: exit status 255: Error: OCI runtime error: runc: exec failed: cannot exec in a stopped container

Note that it still continues to show as IDLE when I run atlas deployments list.

And if I try to run atlas deployments start dev, it hangs.

So we could follow an OS-agnostic dev approach by using the Docker image to run a dev deployment in this way, but remembering to pause the deployment every time before stopping the container – or even being able to – doesn’t seem feasible.

This is a very comprehensive description. Nothing to add really, just want to share my docker compose automation script to cover the manual steps mentioned above:

#!/bin/bash -x

# Function to stop Atlas deployment gracefully
stop_atlas() {
  echo "Stopping Atlas deployment..."
  atlas deployments stop
}

# Function to start Atlas deployment
start_atlas() {
  echo "Starting Atlas deployment..."
  atlas deployments start
}

# Trap SIGTERM and call stop_atlas
trap 'stop_atlas' SIGTERM SIGINT

# Check if the deployment exists and its state
deployment_status=$(atlas deployments list | grep 'LOCAL')

if [[ -z "$deployment_status" ]]; then
  echo "No local deployment found. Setting up..."
  atlas deployments setup --bindIpAll --username root --password root --type local --force
else
  if [[ $deployment_status == *"STOPPED"* ]]; then
    start_atlas
  fi
fi

while true
do
  # sleep 1000 - Doesn't work with sleep. Not sure why.
  tail -f /dev/null &
  wait ${!}
done

Sorry, it’s worth sharing how to use this script as well:

atlas:
  image: mongodb/atlas:latest
  ports:
    - 27777:27017
  privileged: true
  volumes:
    - ./docker-compose/atlas/entrypoint.sh:/entrypoint.sh
    - /var/run/docker.sock:/var/run/docker.sock
  entrypoint: ["/entrypoint.sh"]