Mar 2024

Thank you, it seems to work just fine on docker-ce Linux!
One question regarding the compose configuration: how can I make sure that DB data will be persisted on the host, like in this example for a regular mongo image:

    volumes:
      - /data/db:/data/db

Not sure if this is 100% relevant for the Atlas version.
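
For context, this is roughly what I mean for the regular mongo image in a compose file (just an illustration; the service name is arbitrary):

    services:
      mongo:
        image: mongo:7.0
        volumes:
          # host path on the left, container data directory on the right
          - /data/db:/data/db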

The thing is that at the moment it creates a new cluster each time one runs docker compose, which is obviously not the desired behavior; the desired behavior is to create the cluster once and reuse it on subsequent runs.

Also, at the moment there is an issue here: if one stops the docker container and resumes it again, the cluster shows an IDLE state after resuming and is no longer responsive. Stopping the cluster manually with atlas deployments stop before stopping the container, and then resuming it with atlas deployments start after the container is resumed, fixes the issue, but this seems fragile and not really a proper solution either.
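
In other words, the fragile workaround looks roughly like this from the host (assuming the Atlas container is named mongo_atlas):

    # pause the local deployment inside the container before stopping it
    docker exec mongo_atlas atlas deployments stop
    docker stop mongo_atlas

    # later, bring the container back and resume the deployment
    docker start mongo_atlas
    docker exec mongo_atlas atlas deployments start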

8 days later

Thanks @Igor_Prokopenkov for raising these great questions.
Could you shed a bit more light on how you are using the docker-compose method here? Is it a CI pipeline, a local dev environment, or something else?

In the meantime I’ll see how we can map a volume to the container’s data directory without breaking the solution.

Re “if one stops the docker container and resumes it again, the cluster shows an IDLE state after resuming and is no longer responsive” - can you provide the steps you’re taking here to make sure we’re on the same page?
Also, what happens when you try running atlas deployments start right after the container is restarted?

Hi @Jakub_Lazinski, I’ve had similar issues and can share my details. I appreciate your engagement and your efforts to get this all working as a local experience.

  1. docker-compose. We want to use an Atlas Docker container both for local dev and a CI pipeline.

A. CI pipeline. The CI pipeline seems doable; the Docker container spins up, we can create a deployment with a command, and based on what we’ve seen, we should be able to connect, populate data, run tests on functions/endpoints accessing/manipulating that data, etc. Running this all once, in isolation, seems to work OK.

    datastore:
      image: mongodb/atlas:v1.14.0
      hostname: mongo_atlas
      container_name: mongo_atlas
      privileged: true
      command: |
        /bin/bash -c "atlas deployments setup --type local --port 27017 --bindIpAll --username root --password root --mdbVersion 7.0 --force && tail -f /dev/null"
      ports:
        - 27017:27017
      volumes:
        - ./run/docker.sock:/var/run/docker.sock

The one concern we have about CI is that there doesn’t seem to be any caching of the MongoDB binaries downloaded in step 2 of creation of the cluster. Not being able to cache costs money in the form of additional pipeline minutes.
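
One idea we have not tested: since the deployment data ends up under /var/lib/containers/storage inside the Atlas container (see the volume path we found below for local dev), mapping that storage directory to a cached location might let the pipeline reuse the downloaded MongoDB image between runs. Purely a sketch under that assumption:

    volumes:
      - ./run/docker.sock:/var/run/docker.sock
      # untested assumption: cache the internal container storage between CI runs
      - ./cache/containers-storage:/var/lib/containers/storage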

B. Local dev. Local dev is the real issue for us. We want to persist data, so that as devs start and stop containers as they need to, they don’t have to keep recreating the entire database (losing whatever objects they may have populated on their own by running a mongorestore on shared data or whatever).

Or, in other words, we want to be able to create a dev deployment once, and basically restart it and reconnect to it whenever we start the container, storing the data on a persisted volume. Ideally, we could do all of this via a single command: create the deployment if it doesn’t exist, start if it does.

Any input on how to do that would be wonderful. Here is what we have tried so far.

We have kinda been able to persist the data to a mapped volume, but have not been able to access the same deployment once a container is stopped. And although we can see the data on the mapped volume, we haven’t been able to access that data again via the deployment (more on that below).

We tried giving the deployment the name dev. We searched for the .wt files in the Docker container, which led us to adding this volume mapping:

    command: |
      /bin/bash -c "atlas deployments setup dev --type local --port 27017 --username root --password root --mdbVersion 7.0 --force && tail -f /dev/null"
    ports:
      - 27017:27017
    volumes:
      - ./run/docker.sock:/var/run/docker.sock
      - ./data/atlas:/var/lib/containers/storage/volumes/mongod-local-data-dev/_data

In my own attempts, that does allow me to create the deployment and persist the data. When first upping my docker-compose, I get:

    mongo_atlas  | 3/3: Creating your deployment dev...
    mongo_atlas  | Deployment created!
    mongo_atlas  |
    mongo_atlas  | connection skipped
    mongo_atlas  |
    mongo_atlas  | Connection string: mongodb://root:root@localhost:27017/?directConnection=true

… and I can connect using that connection string as well as see the data in my ./data/atlas folder.

If I then stop my container and re-up it using the same docker-compose, I have gotten a variety of responses.

Obviously, that docker-compose has a command of atlas deployments setup, which might well be inherently problematic to rerun.

In any event, I might be wrong, but there seems to be some relationship between how long I wait after stopping the container and the error that I get.

I seem to first get this if immediately re-upping:

    mongo_atlas  | Error: "dev" deployment already exists and is currently in "running" state
    mongo_atlas exited with code 1

If I try to re-up rapidly after that, I get this:

    mongo_atlas  | 2/2: Creating your deployment dev...
    mongo_atlas  | Error: exit status 125: Error: network name mdb-local-dev already used: network already exists

If I get that “network already exists” error, it seems to repeat forever.
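
In case it helps anyone debugging this, the leftover network can presumably be removed by hand before re-upping; a sketch, assuming the CLI created it on the host Docker daemon through the mounted socket (if it instead lives inside the Atlas container, the equivalent would be podman network rm from a shell in that container):

    # check for the stale network, then remove it
    docker network ls | grep mdb-local-dev
    docker network rm mdb-local-dev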

However, sometimes I have gotten this instead (which I think has happened when I have waited longer to re-up):

    mongo_atlas  | 3/3: Creating your deployment dev...
    mongo_atlas  | Error: failed to connect to mongodb server: connection() error occurred during connection handshake: auth error: sasl conversation error: unable to authenticate using mechanism "SCRAM-SHA-1": (AuthenticationFailed) Authentication failed.
    mongo_atlas exited with code 1

When I get that error, I am eventually able to restart the deployment by upping that same docker-compose, but it seems to re-initialize things with default data, losing whatever I tried to persist.

Again, that docker-compose has a command of atlas deployments setup. If I try changing the docker-compose command to this:

    command: |
      /bin/bash -c "atlas deployments start && tail -f /dev/null"

… and then immediately re-up after stopping my container, I get this message:

    mongo_atlas  | Error: currently there are no deployments

Basically, no matter what I have tried, once the container has been stopped, I have not been able to subsequently access the same deployment I created the first time I ran the docker-compose and keep my data.

If there are different commands that I should be running, I’d be grateful to learn what they are – and some further documentation for maintaining local dev environments might be helpful.

  2. Running the container and deploying manually. I can’t speak for @Igor_Prokopenkov, who may know more and have done more, but here is what I have done:
  • Execute docker run -p 27017:27017 --privileged -it mongodb/atlas bash
  • Set up a deployment via atlas deployments setup dev --type local --port 27017 --bindIpAll --username root --password root --mdbVersion 7.0 --force or similar

Running atlas deployments list shows me this:

    sh-4.4# atlas deployments list
    NAME   TYPE    MDB VER   STATE
    dev    LOCAL   7.0.4     IDLE

And I can connect using the connection string.

If I then run atlas deployments pause dev before stopping the container, then atlas deployments list gives me this both before I stop the container and after I restart it:

    sh-4.4# atlas deployments list
    NAME   TYPE    MDB VER   STATE
    dev    LOCAL   7.0.4     STOPPED

And after restarting, I can run atlas deployments start dev and reconnect.
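
Put together, the sequence that works for me is (run inside the container shell, with the container itself stopped and started from the host in between):

    atlas deployments pause dev    # inside the container, before stopping it
    # ... docker stop / docker start the container from the host ...
    atlas deployments start dev    # after the container is back up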

However, if I simply stop the container without first running atlas deployments pause dev, then I get this when running atlas deployments list after re-upping:

    sh-4.4# atlas deployments list
    NAME   TYPE    MDB VER   STATE
    dev    LOCAL   7.0.4     IDLE

… but I can’t connect using the connection string. I also can’t pause it:

    sh-4.4# atlas deployments pause dev
    Error: exit status 255: Error: OCI runtime error: runc: exec failed: cannot exec in a stopped container

Note that it still continues to show as IDLE when I run atlas deployments list.

And if I try to run atlas deployments start dev, it hangs.

So we could follow an OS-agnostic dev approach by using the Docker image to run a dev deployment in this way, but remembering to pause the deployment every time before stopping the container – or even being able to – doesn’t seem feasible.

This is a very comprehensive description. Nothing to add really, just want to share my docker compose automation script to cover the manual steps mentioned above:

    #!/bin/bash -x

    # Function to stop Atlas deployment gracefully
    stop_atlas() {
        echo "Stopping Atlas deployment..."
        atlas deployments stop
    }

    # Function to start Atlas deployment
    start_atlas() {
        echo "Starting Atlas deployment..."
        atlas deployments start
    }

    # Trap SIGTERM and call stop_atlas
    trap 'stop_atlas' SIGTERM SIGINT

    # Check if the deployment exists and its state
    deployment_status=$(atlas deployments list | grep 'LOCAL')
    if [[ -z "$deployment_status" ]]; then
        echo "No local deployment found. Setting up..."
        atlas deployments setup --bindIpAll --username root --password root --type local --force
    else
        if [[ $deployment_status == *"STOPPED"* ]]; then
            start_atlas
        fi
    fi

    while true
    do
        # sleep 1000 - Doesn't work with sleep. Not sure why.
        tail -f /dev/null &
        wait ${!}
    done

Sorry, it’s worth sharing how to use this script as well:

    atlas:
      image: mongodb/atlas:latest
      ports:
        - 27777:27017
      privileged: true
      volumes:
        - ./docker-compose/atlas/entrypoint.sh:/entrypoint.sh
        - /var/run/docker.sock:/var/run/docker.sock
      entrypoint: ["/entrypoint.sh"]
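
One usage note: the script needs to be executable on the host before it is mounted, otherwise the container fails to start; after that it’s just a normal up (paths as in the snippet above):

    chmod +x ./docker-compose/atlas/entrypoint.sh
    docker compose up -d atlas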

Thank you both! Really appreciate the thorough description of the steps you’re taking.

It seems that there are a couple of issues here:

  1. When using docker-compose, data is not being persisted across runs (or actually ups)
  2. When using docker run…, the deployment gets corrupted if it is not paused before restarting the container
  3. When using docker-compose, MongoDB binaries are not being cached, requiring a fresh download on every run

Would it unblock you (for the time being) if #2 was fixed? I’m trying to figure out the priorities here.

In the meantime, I’ll regroup internally to see what the best course of action is to provide quick help for you.

Thank you for the prompt response.
In regards to number 1: it is preserved, at least for me (under Ubuntu 22.04), but one can’t control where the data is stored as with the community mongo image.
For our team, number 2 is definitely the priority, since if the cluster breaks it becomes unusable and has to be re-created, which leads to data loss, same as in the 1st case.
I can’t confirm number 3 in my Ubuntu environment either.

Thanks Igor,

For #2: can you also share the details of how you’re stopping and starting the container after the initial “docker run… bash”?

#2 is also the priority for us. We think #1 might be doable if #2 is solved.

I would like to add some input to this discussion because we’d like to persist our local Atlas development environment, too.

For #2: Some team members use Macs, some Windows with Docker on a Vagrant machine (provider virtualbox), so for the latter, the Docker service runs on a virtual Linux system.
The containers are usually only stopped when we restart our computers. That, however, happens regularly, e.g. to apply operating system updates.

Relating to the current issues, it seems I’m unable to establish a connection to a local Mongo Atlas Docker container from another Docker container within my local environment. Is such a connection not feasible?
I am seemingly able to connect to the local Atlas container using mongosh, which indicates that the container is indeed accepting connections. Despite this, a connection between my application in a separate container and the Mongo Atlas container remains unsuccessful.

    pymongo.errors.ServerSelectionTimeoutError: atlasdb:27778: [Errno 111] Connection refused, Timeout: 5.0s, Topology Description: <TopologyDescription id: *****, topology_type: Single, servers: [<ServerDescription ('atlasdb', 27778) server_type: Unknown, rtt: None, error=AutoReconnect('atlasdb:27778: [Errno 111] Connection refused')>]>
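
For what it’s worth, a quick way to separate plain network reachability from anything MongoDB-specific (the service name atlasdb and port 27778 are taken from the error above; assumes the app image has nc or bash available):

    # run from a shell inside the application container
    nc -zv atlasdb 27778
    # or, without nc:
    timeout 5 bash -c 'echo > /dev/tcp/atlasdb/27778' && echo "reachable"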

Thank you

13 days later

We are also running into all of the issues described here. I did manage to find a workaround for #2 (container restarts). You can intercept the container kill signal in your entrypoint script and pause the deployment:

atlas-entrypoint

    #!/usr/bin/env bash

    DEPLOYMENT_INFO=$(atlas deployments list | grep 'my_deployment')
    if [[ $DEPLOYMENT_INFO ]]; then
        # Restart a deployment
        atlas deployments start my_deployment
    else
        # Create a new deployment
        atlas deployments setup my_deployment --type local --port 27778 --username root --password root --bindIpAll --skipSampleData --force
    fi

    # Pause the deployment whenever this container is shut down to avoid corruption.
    function graceful_shutdown() {
        atlas deployments pause my_deployment
    }
    trap 'graceful_shutdown' EXIT

    sleep infinity &
    wait $!

docker-compose.yml

    ...
    mongodb_atlas:
      container_name: 'mongodb_atlas'
      image: 'mongodb/atlas:v1.14.2'
      ports:
        - '27778:27778'
      privileged: true
      entrypoint: '/home/scripts/atlas-entrypoint'
      volumes:
        - './scripts:/home/scripts'

Haven’t tested it extensively but it at least works when you run docker compose down.
Importantly, this does not work if you use tail -f /dev/null as suggested in the docs: with tail running in the foreground, bash doesn’t get to run the trap until tail exits, so I had to switch to the sleep and wait commands shown above.

Hello! Sharing my answer to another post here in case it might help others:

I haven’t been able to get persistence between runs working, either. I’m assuming it has to do with the cluster not being available when first running the compose stack, and with the data directory not being in the usual place. This is a killer for me. I can’t see how anybody could use this as is, frankly.

Thanks John, I’ll check it out! I’d moved onto just using a cloud instance in the meantime, but I’d prefer a local solution for dev purposes. Thanks again sir

24 days later

Hi Jakub and MongoDB team,
I am wondering if you have any update, or an ETA for an update, on issue #3 summarized by Jakub earlier. We are using MongoDB Atlas local deployments for integration tests, and solving this issue by caching the MongoDB binaries would be a big help to speed things up.
Thanks in advance,
Leo