Aug 2023

I commented out the var.
Now I have:

sysadmin@soft-serve:~$ sudo nano /usr/lib/systemd/system/mongod.service
sysadmin@soft-serve:~$ sudo systemctl start mongod
Warning: The unit file, source configuration file or drop-ins of mongod.service changed on disk. Run 'systemctl daemon-reload' to reload units.
sysadmin@soft-serve:~$ sudo systemctl daemon-reload
sysadmin@soft-serve:~$ sudo systemctl start mongod
sysadmin@soft-serve:~$ sudo systemctl status mongod
× mongod.service - MongoDB Database Server
     Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Fri 2023-08-11 15:33:50 UTC; 14s ago
       Docs: https://docs.mongodb.org/manual
    Process: 88421 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=100)
   Main PID: 88421 (code=exited, status=100)
        CPU: 39ms
Aug 11 15:33:50 soft-serve systemd[1]: Started MongoDB Database Server.
Aug 11 15:33:50 soft-serve systemd[1]: mongod.service: Main process exited, code=exited, status=100/n/a
Aug 11 15:33:50 soft-serve systemd[1]: mongod.service: Failed with result 'exit-code'.

Log:

{"t":{"$date":"2023-08-11T15:33:50.795+00:00"},"s":"I", "c":"CONTROL", "id":20698, "ctx":"-","msg":"***** SERVER RESTARTED *****"}
{"t":{"$date":"2023-08-11T15:33:50.797+00:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"main","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":17},"incomingIn>
{"t":{"$date":"2023-08-11T15:33:50.797+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2023-08-11T15:33:50.797+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueu>
{"t":{"$date":"2023-08-11T15:33:50.805+00:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","namespace":"config.tenantMigrati>
{"t":{"$date":"2023-08-11T15:33:50.805+00:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","namespace":"config.tenantMig>
{"t":{"$date":"2023-08-11T15:33:50.805+00:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"ShardSplitDonorService","namespace":"config.tenantSplitDonors">
{"t":{"$date":"2023-08-11T15:33:50.805+00:00"},"s":"I", "c":"CONTROL", "id":5945603, "ctx":"main","msg":"Multi threading initialized"}
{"t":{"$date":"2023-08-11T15:33:50.805+00:00"},"s":"I", "c":"CONTROL", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":88421,"port":27017,"dbPath":"/data/var/lib/mongodb","architecture":"64-bit","host":">
{"t":{"$date":"2023-08-11T15:33:50.805+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"6.0.8","gitVersion":"3d84c0dd4e5d99be0d69003652313e7eaf4cdd74","openSSLV>
{"t":{"$date":"2023-08-11T15:33:50.805+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"22.04"}}}
{"t":{"$date":"2023-08-11T15:33:50.805+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"config":"/etc/mongod.conf","net":{"bindIp":"127.0.0.1","port":27017>
{"t":{"$date":"2023-08-11T15:33:50.806+00:00"},"s":"E", "c":"CONTROL", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"Location28596: Unable to determine status of lock file i>
{"t":{"$date":"2023-08-11T15:33:50.806+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":15000}}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"REPL", "id":4794602, "ctx":"initandlisten","msg":"Attempting to enter quiesce mode"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"-", "id":6371601, "ctx":"initandlisten","msg":"Shutting down the FLE Crud thread pool"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"initandlisten","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"CONTROL", "id":4784906, "ctx":"initandlisten","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"initandlisten","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"ASIO", "id":22582, "ctx":"MigrationUtil-TaskExecutor","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"COMMAND", "id":4784923, "ctx":"initandlisten","msg":"Shutting down the ServiceEntryPoint"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"initandlisten","msg":"Shutting down free monitoring"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"CONTROL", "id":4784927, "ctx":"initandlisten","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"CONTROL", "id":4784928, "ctx":"initandlisten","msg":"Shutting down the TTL monitor"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"CONTROL", "id":6278511, "ctx":"initandlisten","msg":"Shutting down the Change Stream Expired Pre-images Remover"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"CONTROL", "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"initandlisten","msg":"Now exiting"}
{"t":{"$date":"2023-08-11T15:33:50.807+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}
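The decisive line in the log above is the Location28596 error, "Unable to determine status of lock file", which usually means the mongod service user cannot read or write the dbPath. A minimal sketch of the first check worth running; the dbPath (/data/var/lib/mongodb) and the service user name (mongodb) are taken from this thread, and the helper function name is mine:

```shell
# Hypothetical helper: confirm the dbPath exists and is owned by the
# user the mongod service runs as.
check_dbpath() {
    dir="$1"
    user="$2"
    [ -d "$dir" ] || { echo "missing: $dir"; return 1; }
    owner=$(stat -c '%U' "$dir")
    if [ "$owner" != "$user" ]; then
        echo "owned by $owner, expected $user"
        return 1
    fi
    echo "ok: $dir owned by $user"
}

# On the affected host:
#   check_dbpath /data/var/lib/mongodb mongodb
# If ownership is wrong, the usual fix is:
#   sudo chown -R mongodb:mongodb /data/var/lib/mongodb
```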

If you were in this situation yourself, would you consider downgrading, as suggested in this post?

Thanks for your support :slight_smile:

Sam

Hi @sam_ames,
Is SELinux disabled?
The error has now changed:

In the extreme case, I would attempt to remove the files under the path /data/var/lib/mongodb (if you don't have collections populated with documents in your instance), because the lock file seems to be corrupted. Or, as you've suggested, try a downgrade or an upgrade.
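That cleanup can be sketched as follows. This is a hypothetical helper of my own (only the paths and the mongodb user come from the thread), and it is only safe on a fresh instance with no data you care about:

```shell
# Hypothetical helper: clear out a fresh, data-free dbPath so mongod can
# recreate its lock file. Destroys everything under the given directory.
reset_dbpath() {
    dir="$1"
    rm -rf "${dir:?}"/*   # ${dir:?} aborts if the variable is empty
}

# On the affected host, with the service stopped (run as root):
#   systemctl stop mongod
#   reset_dbpath /data/var/lib/mongodb
#   chown -R mongodb:mongodb /data/var/lib/mongodb
#   systemctl start mongod
```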

Regards

It has been a difficult install. I don't even know what SELinux is, to be perfectly honest.

Thanks,
Sam

I will try your suggestions on Monday morning, many thanks

sysadmin@soft-serve:~$ sudo cat /etc/selinux/config
[sudo] password for sysadmin:
cat: /etc/selinux/config: No such file or directory

It seems like your suspicions were correct.

Please suggest how I can resolve this. I did not delete SELinux and am not sure why it's missing.

Thanks for the support, :slight_smile:

Sam

Hi @sam_ames,
Another way to check the SELinux status is with the command
getenforce or cat /etc/sysconfig/selinux.
If the getenforce command outputs enforcing or permissive, use the following command:
setenforce 0.
But as suggested in a previous post, I think it is better to clean your data directory or do an update to resolve the problem.
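The checks above can be combined into one small sketch (the function name is mine). Note that on a stock Ubuntu install the SELinux tooling is usually absent entirely, since Ubuntu ships AppArmor instead, so missing files here are expected rather than a sign of breakage:

```shell
# Sketch: report SELinux status, treating "not installed" as a normal
# result on Ubuntu systems.
selinux_status() {
    if command -v getenforce >/dev/null 2>&1; then
        getenforce   # prints Enforcing, Permissive, or Disabled
    elif [ -e /etc/selinux/config ] || [ -e /etc/sysconfig/selinux ]; then
        echo "selinux config present but tools missing"
    else
        echo "selinux not installed"
    fi
}
selinux_status
```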
Regards

Please see the following output.

sysadmin@soft-serve:~$ cat /etc/sysconfig/selinux
cat: /etc/sysconfig/selinux: No such file or directory
sysadmin@soft-serve:~$ getenforce
Command 'getenforce' not found, but can be installed with:
sudo apt install selinux-utils
sysadmin@soft-serve:~$ sudo getenforce
[sudo] password for sysadmin:
sudo: getenforce: command not found
sysadmin@soft-serve:~$

Do I need to install the SELinux tools?

Thanks,
Sam

I can't wipe the data volume because it is part of a RAID partition and it was very complicated to set up.

However, I can delete everything in the directory that I have personally added. Is this worth doing?

You previously mentioned a downgrade; which version should I consider? I think I am currently using the latest release of version six. The post I shared, which mentioned downgrading, was quite an old one, so I assume the version it recommends is no longer the optimal solution.

Thank you,
Sam

All I've done is change the directory. I'm amazed that this is so complicated…

Thanks for supporting me, Fabio

Sorry, I wasn’t clear. The problem is not resolved, I was just complaining about how complicated this issue is to resolve.

sysadmin@soft-serve:~$ sudo systemctl status mongod
[sudo] password for sysadmin:
× mongod.service - MongoDB Database Server
     Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Fri 2023-08-11 15:42:52 UTC; 3 days ago
       Docs: https://docs.mongodb.org/manual
    Process: 88464 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=100)
   Main PID: 88464 (code=exited, status=100)
        CPU: 34ms
Aug 11 15:42:52 soft-serve systemd[1]: Started MongoDB Database Server.
Aug 11 15:42:52 soft-serve systemd[1]: mongod.service: Main process exited, code=exited, status=100/n/a
Aug 11 15:42:52 soft-serve systemd[1]: mongod.service: Failed with result 'exit-code'.

Logs:

sysadmin@soft-serve:~$ sudo nano /var/log/mongodb/mongod.log
{"t":{"$date":"2023-08-11T15:42:52.159+00:00"},"s":"I", "c":"CONTROL", "id":20698, "ctx":"-","msg":"***** SERVER RESTARTED *****"}
{"t":{"$date":"2023-08-11T15:42:52.160+00:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"main","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":17},"incomingIn>
{"t":{"$date":"2023-08-11T15:42:52.161+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2023-08-11T15:42:52.161+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueu>
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","namespace":"config.tenantMigrati>
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","namespace":"config.tenantMig>
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"ShardSplitDonorService","namespace":"config.tenantSplitDonors">
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I", "c":"CONTROL", "id":5945603, "ctx":"main","msg":"Multi threading initialized"}
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I", "c":"CONTROL", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":88464,"port":27017,"dbPath":"/data/var/lib/mongodb","architecture":"64-bit","host":">
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"6.0.8","gitVersion":"3d84c0dd4e5d99be0d69003652313e7eaf4cdd74","openSSLV>
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"22.04"}}}
{"t":{"$date":"2023-08-11T15:42:52.170+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"config":"/etc/mongod.conf","net":{"bindIp":"127.0.0.1","port":27017>
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"E", "c":"CONTROL", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"Location28596: Unable to determine status of lock file i>
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":15000}}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"REPL", "id":4794602, "ctx":"initandlisten","msg":"Attempting to enter quiesce mode"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"-", "id":6371601, "ctx":"initandlisten","msg":"Shutting down the FLE Crud thread pool"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"initandlisten","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"CONTROL", "id":4784906, "ctx":"initandlisten","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"initandlisten","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"ASIO", "id":22582, "ctx":"MigrationUtil-TaskExecutor","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"COMMAND", "id":4784923, "ctx":"initandlisten","msg":"Shutting down the ServiceEntryPoint"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"initandlisten","msg":"Shutting down free monitoring"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"CONTROL", "id":4784927, "ctx":"initandlisten","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"CONTROL", "id":4784928, "ctx":"initandlisten","msg":"Shutting down the TTL monitor"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"CONTROL", "id":6278511, "ctx":"initandlisten","msg":"Shutting down the Change Stream Expired Pre-images Remover"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"CONTROL", "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2023-08-11T15:42:52.171+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"initandlisten","msg":"Now exiting"}
{"t":{"$date":"2023-08-11T15:42:52.172+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}

Thanks, :slight_smile:
Sam

The only thing I have done is install the OS, create a volume group and a fresh LUKS-encrypted volume, mount it at /data, and add my project source code under /data.

Here are the commands I used:

Create an encrypted software RAID 1 volume, using all free space, mirrored across 2 disks.

Check volumes, space used and available with:
sudo fdisk -l

Create a new partition on /dev/sdb:
sudo fdisk /dev/sdb
Use the options "n" (new partition), "p" (primary partition), and accept the default partition number, starting sector, and ending sector to use the remaining space. Use "w" to save the changes and exit.

Create a new partition on /dev/sdc:
sudo fdisk /dev/sdc
Use the same options as above.

Set up RAID 1. Run the following command, but (if different) replace the partitions with the two new, matching-size partitions you created previously:
sudo mdadm --create /dev/md4 --level=mirror --raid-devices=2 /dev/sdb5 /dev/sdc5

Verify the RAID 1 array is synchronized:
watch cat /proc/mdstat

Encrypt the RAID array (type "YES" and enter a passphrase when prompted):
sudo cryptsetup luksFormat /dev/md4

Open the encrypted RAID array:
sudo cryptsetup luksOpen /dev/md4 raid_encrypted

Create a physical volume (PV) on the encrypted RAID array:
sudo pvcreate /dev/mapper/raid_encrypted

Create a volume group:
sudo vgcreate ubuntu-vg /dev/mapper/raid_encrypted

Create a logical volume using 100% of the available space:
sudo lvcreate -n encrypted_volume -l 100%FREE ubuntu-vg

Format the encrypted volume with ext4 and mount it:
sudo mkfs.ext4 /dev/ubuntu-vg/encrypted_volume
sudo mount /dev/ubuntu-vg/encrypted_volume /data

Set the array back to read-write and check its status:
sudo mdadm --readwrite /dev/md127
watch cat /proc/mdstat
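The "verify the array is synchronized" step above can be scripted rather than eyeballed. This is a hypothetical helper of my own: for a healthy two-disk mirror, mdadm reports the marker [UU] in /proc/mdstat, while a degraded or still-building mirror shows [U_] or [_U]:

```shell
# Hypothetical helper: succeed only if an mdstat-format file shows the
# "[UU]" marker of a fully synced two-disk RAID 1 array.
raid_synced() {
    grep -q '\[UU\]' "$1"
}

# Usage on the host:
#   raid_synced /proc/mdstat && echo "mirror in sync"
```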

If you think it will make a difference, I can start over by reinstalling and using fresh Ubuntu 22.04.

I wiped the /data volume previously and it broke SSH on the Ubuntu install… It seems like /data/lost+found is quite important.

Regards,
Sam

It's not important, and it won't 'break' ssh. It might prevent login via ssh if a user's home directory was under /data.

Only the XFS filesystem is recommended for the MongoDB data directory.

Please read the production notes in the manual for further details.

Is this a brand-new install, or did you copy your existing database files into this directory? Are there any files in the dbPath?

You could try the following:
sudo -u mongodb mongod -f /etc/mongod.conf --repair

And then see if it will start via systemctl.

Another possibility is incompatible permissions along the rest of the path, though I would expect a different error. Check that mongodb or 'others' have read and execute on /data, /data/var, and /data/var/lib.
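That permission walk can be sketched as a small loop (the helper name is mine; the dbPath comes from the thread). It prints mode and ownership for the directory and each of its parents, which is the information mongod needs execute access through:

```shell
# Hypothetical helper: from the given path, walk up to / and print the
# permission bits and owner:group of each component. mongod needs execute
# (x) on every parent directory and full access to the final directory.
check_path_perms() {
    p="$1"
    while [ -n "$p" ] && [ "$p" != "/" ]; do
        stat -c '%A %U:%G %n' "$p" 2>/dev/null || echo "missing: $p"
        p=$(dirname "$p")
    done
}

# On the affected host:
#   check_path_perms /data/var/lib/mongodb
# util-linux users can get a similar view with: namei -l /data/var/lib/mongodb
```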

Thank you very much. If your terminal command doesn't resolve this, I will start over with the instructions that I compiled after my research, changing to LUKS with XFS.

This is a fresh database with no existing data.

That's a huge help. Please allow me a couple of days to follow everything you have suggested, as I have other projects as well.

It's unfortunate that my company won't justify the paid version of Mongo. I'm guessing that would be much easier to set up. We're a tiny start-up.

Anyway, thank you for your support and have a nice day.

Sam