All I’ve done is change the directory. I’m amazed that this is so complicated…

Thanks for supporting me, Fabio


Hi @sam_ames,
Great!!

Regards

Sorry, I wasn’t clear. The problem is not resolved, I was just complaining about how complicated this issue is to resolve.

Hi @sam_ames,
Sorry, I didn’t understand before.
What problem do the logs and systemctl status show now?

Regards

sysadmin@soft-serve:~$ sudo systemctl status mongod
[sudo] password for sysadmin: 
× mongod.service - MongoDB Database Server
     Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Fri 2023-08-11 15:42:52 UTC; 3 days ago
       Docs: https://docs.mongodb.org/manual
    Process: 88464 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=100)
   Main PID: 88464 (code=exited, status=100)
        CPU: 34ms

Aug 11 15:42:52 soft-serve systemd[1]: Started MongoDB Database Server.
Aug 11 15:42:52 soft-serve systemd[1]: mongod.service: Main process exited, code=exited, status=100/n/a
Aug 11 15:42:52 soft-serve systemd[1]: mongod.service: Failed with result 'exit-code'.

Logs:


sysadmin@soft-serve:~$ sudo nano /var/log/mongodb/mongod.log
{"t":{"$date":"2023-08-11T15:42:52.159+00:00"},"s":"I",  "c":"CONTROL",  "id":20698,   "ctx":"-","msg":"***** SERVER RESTARTED *****"}
{"t":{"$date":"2023-08-11T15:42:52.160+00:00"},"s":"I",  "c":"NETWORK",  "id":4915701, "ctx":"main","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":17},"incomingIn>
{"t":{"$date":"2023-08-11T15:42:52.161+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2023-08-11T15:42:52.161+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueu>
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","namespace":"config.tenantMigrati>
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","namespace":"config.tenantMig>
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"ShardSplitDonorService","namespace":"config.tenantSplitDonors">
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I",  "c":"CONTROL",  "id":5945603, "ctx":"main","msg":"Multi threading initialized"}
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I",  "c":"CONTROL",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":88464,"port":27017,"dbPath":"/data/var/lib/mongodb","architecture":"64-bit","host":">
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"6.0.8","gitVersion":"3d84c0dd4e5d99be0d69003652313e7eaf4cdd74","openSSLV>
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"22.04"}}}
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"config":"/etc/mongod.conf","net":{"bindIp":"127.0.0.1","port":27017>
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"E",  "c":"CONTROL",  "id":20557,   "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"Location28596: Unable to determine status of lock file i>
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"REPL",     "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":15000}}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"REPL",     "id":4794602, "ctx":"initandlisten","msg":"Attempting to enter quiesce mode"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"-",        "id":6371601, "ctx":"initandlisten","msg":"Shutting down the FLE Crud thread pool"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"COMMAND",  "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"NETWORK",  "id":20562,   "ctx":"initandlisten","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"NETWORK",  "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"CONTROL",  "id":4784906, "ctx":"initandlisten","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"-",        "id":20520,   "ctx":"initandlisten","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"NETWORK",  "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"MigrationUtil-TaskExecutor","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"COMMAND",  "id":4784923, "ctx":"initandlisten","msg":"Shutting down the ServiceEntryPoint"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"CONTROL",  "id":4784925, "ctx":"initandlisten","msg":"Shutting down free monitoring"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"CONTROL",  "id":4784927, "ctx":"initandlisten","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"CONTROL",  "id":4784928, "ctx":"initandlisten","msg":"Shutting down the TTL monitor"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"CONTROL",  "id":6278511, "ctx":"initandlisten","msg":"Shutting down the Change Stream Expired Pre-images Remover"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"CONTROL",  "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"-",        "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I",  "c":"CONTROL",  "id":20565,   "ctx":"initandlisten","msg":"Now exiting"}
{"t":{"$date":"2023-08-11T15:42:52.172+00:00"},"s":"I",  "c":"CONTROL",  "id":23138,   "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}

Thanks, :slight_smile:
Sam

Hi @sam_ames,
So you created a new data directory, gave all the correct permissions to this new data directory, and you get the same error again?
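(By “correct permissions” I mean something along these lines — a sketch only, assuming the dbPath shown in your log and the mongodb user/group that the Ubuntu package creates:)

sudo mkdir -p /data/var/lib/mongodb
sudo chown -R mongodb:mongodb /data/var/lib/mongodb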

Regards

The only things I have done are: install the OS, create a volume group and a fresh LUKS-encrypted volume, mount it at /data, and add my project source code inside /data.

Here are the commands I used:

Create an encrypted software RAID 1 volume, using all free space, mirrored across two disks.

Check the disks, space used and space available:
sudo fdisk -l

Create a new partition on /dev/sdb:
sudo fdisk /dev/sdb

Use "n" (new partition), "p" (primary partition), and accept the default partition number, first sector and last sector to use the remaining space. Use "w" to write the changes and exit.

Create a new partition on /dev/sdc:
sudo fdisk /dev/sdc

Repeat the same options: "n", "p", accept the defaults, then "w" to write the changes and exit.

Set up RAID 1. Run the following command, replacing /dev/sdb5 and /dev/sdc5 (if different) with the two new, matching-size partitions you just created:
sudo mdadm --create /dev/md4 --level=mirror --raid-devices=2 /dev/sdb5 /dev/sdc5

Verify that the RAID 1 array is synchronising:
watch cat /proc/mdstat

Encrypt the RAID array (type "YES" and enter a passphrase when prompted):
sudo cryptsetup luksFormat /dev/md4

Open the encrypted RAID array:
sudo cryptsetup luksOpen /dev/md4 raid_encrypted

Create a physical volume (PV) on the encrypted RAID array:
sudo pvcreate /dev/mapper/raid_encrypted

Create a volume group (VG):
sudo vgcreate ubuntu-vg /dev/mapper/raid_encrypted

Create a logical volume (LV) using 100% of the free space:
sudo lvcreate -n encrypted_volume -l 100%FREE ubuntu-vg

Format the encrypted volume with ext4:
sudo mkfs.ext4 /dev/ubuntu-vg/encrypted_volume

Mount it at /data:
sudo mount /dev/ubuntu-vg/encrypted_volume /data

Switch the array to read-write and watch it resync:
sudo mdadm --readwrite /dev/md127
watch cat /proc/mdstat
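
For reference, the relevant parts of my /etc/mongod.conf point at this volume. This is a sketch of the excerpt: dbPath, bindIp and port match the startup log above; the systemLog lines are the stock package defaults, so they may differ on my box.

# /etc/mongod.conf (excerpt)
storage:
  dbPath: /data/var/lib/mongodb
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
  bindIp: 127.0.0.1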

If you think it will make a difference, I can start over with a fresh Ubuntu 22.04 install.

I wiped the /data volume previously and it broke SSH on the Ubuntu install… It seems like /data/lost+found is quite important.

Regards,
Sam

It’s not important and it won’t ‘break’ ssh. It might prevent login via ssh if a user’s home directory was under /data.

Only the XFS filesystem is recommended for the MongoDB data directory.

Please read the production notes in the manual for further details.
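
If you do rebuild the volume, the XFS equivalent of your mkfs step would be something like this (a sketch only — it destroys the contents, so only run it on an empty volume; the device path is taken from your commands above):

sudo apt install xfsprogs                         # if not already installed
sudo umount /data                                 # if currently mounted
sudo mkfs.xfs -f /dev/ubuntu-vg/encrypted_volume
sudo mount /dev/ubuntu-vg/encrypted_volume /data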

Is this a brand new install, or did you copy existing database files into this directory? Are there any files in the dbPath?

You could try the following:
sudo -u mongodb mongod -f /etc/mongod.conf --repair

And then see if it will start via systemctl.
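
That is, something like:

sudo systemctl start mongod
sudo systemctl status mongod
sudo journalctl -u mongod -n 50    # last 50 lines of the service log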

Another possibility is incompatible permissions along the rest of the path, though I would expect a different error. Check that mongodb or ‘others’ have read and execute on /data, /data/var and /data/var/lib.
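
You can check with something like the following; the chmod is only needed if ‘others’ are missing read/execute on those parent directories:

ls -ld /data /data/var /data/var/lib /data/var/lib/mongodb
sudo chmod o+rx /data /data/var /data/var/lib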

Thank you very much. If your terminal command doesn’t resolve this, I will start over with the instructions I compiled from my research, switching to LUKS with XFS.

This is a fresh database with no existing data.

That’s a huge help. Please allow me a couple of days to follow everything you have suggested, as I have other projects as well.

It’s unfortunate that my company can’t justify the paid version of MongoDB. I’m guessing that would be much easier to set up. We’re a tiny start-up.

Anyway, thank you for your support and have a nice day.

Sam