I created this container following this documentation:
mongo:
  container_name: mongo
  image: mongodb/atlas
  privileged: true
  command: |
    /bin/bash -c "atlas deployments setup MyLocalContainer --type local --port 27017 --bindIpAll --mdbVersion 6.0 --username root --password root --force && tail -f /dev/null"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  ports:
    - "27017:27017"
The first build was successful and I was able to connect using this string:
mongodb://root:root@localhost:27017/leadconduit_local?directConnection=true&authSource=admin
But when the container is restarted, I get this error message and I'm unable to connect again:
Creating your cluster MyLocalContainer
1/2: Starting your local environment...
Error: "MyLocalContainer" deployment already exists and is currently in "running" state
Error: currently there are no deployments
Connection string: mongodb://root:root@localhost:27017/?directConnection=true
I tried removing the deployment name, but every time the container restarts it creates another deployment and I have to run the setup again.
Is there a way to keep the deployment name?
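For reference, this is how I check that the deployment is actually reachable (just a minimal check, assuming mongosh is installed on the host; the connection string is the same one as above):

# quick connectivity check; prints { ok: 1 } when the deployment is up
mongosh "mongodb://root:root@localhost:27017/?directConnection=true&authSource=admin" \
  --eval 'db.runCommand({ ping: 1 })'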
John_Bench
I have encountered the same issue. If I try to start the container again after the first error Jose posted (Error: "MyLocalContainer" deployment already exists...), I get the following:
Creating your cluster MyLocalContainer
2024-02-06T19:11:38.382999473Z 1/2: Starting your local environment...
2024-02-06T19:11:38.425274647Z 2/2: Creating your deployment MyLocalContainer...
2024-02-06T19:11:38.587681695Z Error: exit status 125: Error: network name mdb-local-MyLocalContainer already used: network already exists
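For reference, one way to clear that state by hand (a rough sketch, assuming you exec into the atlas container, since podman runs inside it; the network name is the one from the error above) is to stop the leftover containers and remove the stale network before retrying:

# stop the leftover deployment containers, then remove the stale network
podman stop --all
podman network rm mdb-local-MyLocalContainer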
John_Bench
Would this be fixed by having an updated volume mount point other than /data/db? If so, what should the new mount point be?
We’re experiencing similar issues. I’ve found a workaround for our case that may help you:
John_Bench
Awesome! Thank you @Brayden_Tidwell: I was able to use that entrypoint approach to address our issue.
Here is what I wound up using so the container can be restarted and the data is preserved across restart, stop, and compose down:
start-atlas.sh
#!/bin/bash

cleanup () {
  # stop service and clean up here
  echo "stopping atlas deployment"
  atlas deployments stop MyMongo --type local 2>/dev/null
  echo "stopping podman containers"
  podman stop --all
  exit 0
}
trap "cleanup" EXIT

# 2>/dev/null is used to silence output about listing atlas instances other than local
PODMAN_HAS_MONGO_CONTAINER=$(podman ps --all 2>/dev/null | grep 'MyMongo')
PODMAN_HAS_MONGO_NETWORK=$(podman network ls 2>/dev/null | grep 'mdb-local-MyMongo')
DEPLOYMENT_INFO=$(atlas deployments list 2>/dev/null | grep 'MyMongo')

if [[ $PODMAN_HAS_MONGO_CONTAINER ]]; then
  # If missing network, create it (happens after docker compose down)
  if ! [[ $PODMAN_HAS_MONGO_NETWORK ]]; then
    # silence the update check
    atlas config set skip_update_check true 2>/dev/null
    echo "creating podman network:"
    podman network create mdb-local-MyMongo
  fi
  # Restart a deployment
  echo "starting podman containers"
  podman start --all
fi

if [[ $DEPLOYMENT_INFO ]]; then
  atlas deployments start MyMongo --type local 2>/dev/null
else
  # silence the update check
  atlas config set skip_update_check true 2>/dev/null
  atlas deployments setup MyMongo --type local --username root --password root --port 27017 --bindIpAll --force 2>/dev/null
fi

sleep infinity &
wait $!
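One note on the script: since it's bind-mounted in as the entrypoint, it has to be executable on the host, e.g.:

chmod +x ./scripts/start-atlas.sh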
docker-compose.yml
my_mongo:
  container_name: my_mongo
  image: mongodb/atlas:v1.14.2
  privileged: true
  ports:
    - "27017:27017"
  entrypoint: /home/scripts/start-atlas.sh
  volumes:
    - ./scripts/start-atlas.sh:/home/scripts/start-atlas.sh
    - /var/run/docker.sock:/var/run/docker.sock # <---- didn't really find this to be necessary; included because it was in the original docs
    - mongodb-data:/var/lib/containers # <---- needed to preserve podman containers for restart after docker compose down

volumes:
  mongodb-data: # top-level named volume declaration, required for the mount above
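To sanity-check persistence I just cycle the stack (a sketch; the service name is the one from the compose file above):

docker compose up -d my_mongo   # first start, runs the atlas setup
docker compose down             # removes the outer container but keeps the mongodb-data volume
docker compose up -d my_mongo   # deployment and data come back via the start-atlas.sh restart path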
I was also able to use the container in our GitHub Actions workflows by using the following action:
- name: Docker Compose
  uses: isbang/compose-action@v1.5.1
  with:
    services: |
      my_mongo
    up-flags: "--build --detach --pull always --quiet-pull --wait --wait-timeout 300"
    down-flags: "--volumes --remove-orphans --timeout 5"
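One caveat: because the compose file above has no healthcheck, --wait only waits for the container to be running, not for mongod to accept connections, so a follow-up step that polls the deployment can help (a sketch, assuming mongosh is available on the runner):

# poll until the local deployment answers a ping, or give up after ~60s
for i in $(seq 1 30); do
  if mongosh "mongodb://root:root@localhost:27017/?directConnection=true&authSource=admin" \
       --quiet --eval 'db.runCommand({ ping: 1 })' >/dev/null 2>&1; then
    echo "mongo is ready"; break
  fi
  sleep 2
done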