Upgrade a Sharded Cluster to 5.0
Familiarize yourself with the content of this document, including thoroughly reviewing the prerequisites, prior to upgrading to MongoDB 5.0.
The following steps outline the procedure to upgrade a sharded cluster from version 4.4 to 5.0.
If you need guidance on upgrading to 5.0, MongoDB professional services offer major version upgrade support to help ensure a smooth transition without interruption to your MongoDB application.
Upgrade Recommendations and Checklists
When upgrading, consider the following:
Upgrade Version Path
To upgrade an existing MongoDB deployment to 5.0, you must be running a 4.4-series release.
To upgrade from a version earlier than the 4.4-series, you must successively upgrade major releases until you have upgraded to the 4.4-series. For example, if you are running a 4.2-series release, you must first upgrade to 4.4 before you can upgrade to 5.0.
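To confirm which release each member is currently running, you can connect mongosh to the member and check the reported server version:
db.version()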
Check Driver Compatibility
Before you upgrade MongoDB, check that you're using a MongoDB 5.0-compatible driver. Consult the driver documentation for your specific driver to verify compatibility with MongoDB 5.0.
Upgraded deployments that run on incompatible drivers might encounter unexpected or undefined behavior.
Warning
If your drivers use legacy opcodes that were deprecated in v3.6, update your drivers to a version that uses supported opcodes. Drivers that use legacy opcodes are no longer supported.
Preparedness
Before beginning your upgrade, see the Compatibility Changes in MongoDB 5.0 document to ensure that your applications and deployments are compatible with MongoDB 5.0. Resolve the incompatibilities in your deployment before starting the upgrade.
Before upgrading MongoDB, always test your application in a staging environment before deploying the upgrade to your production environment.
Downgrade Consideration
Once upgraded to 5.0, if you need to downgrade, we recommend downgrading to the latest patch release of 4.4.
Prerequisites
Before you upgrade your sharded cluster, check the 5.0 Performance Considerations for any potential performance impacts when upgrading to 5.0.
Ensure TTL Config is Valid
Ensure that the TTL configuration is valid.
Before upgrading, remove or correct any TTL indexes that have expireAfterSeconds set to NaN. In MongoDB 5.0 and later, setting expireAfterSeconds to NaN has the same effect as setting expireAfterSeconds to 0. For details, see TTL expireAfterSeconds Behavior When Set to NaN.
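The following mongosh sketch is one possible way to locate TTL indexes whose expireAfterSeconds value is NaN; it simply iterates every database and collection through a mongos and prints any matching index so you can drop or correct it before the upgrade:
// Sketch only: scan all collections for TTL indexes with expireAfterSeconds set to NaN
db.adminCommand( { listDatabases: 1 } ).databases.forEach( function ( d ) {
  var siblingDB = db.getSiblingDB( d.name );
  siblingDB.getCollectionInfos( { type: "collection" } ).forEach( function ( c ) {
    siblingDB.getCollection( c.name ).getIndexes().forEach( function ( idx ) {
      if ( idx.expireAfterSeconds !== undefined && Number.isNaN( +idx.expireAfterSeconds ) ) {
        // TTL index that must be dropped or corrected before upgrading
        print( d.name + "." + c.name + " : " + idx.name );
      }
    } );
  } );
} );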
All Members Version
To upgrade a sharded cluster to 5.0, all members of the cluster must be at least version 4.4. The upgrade process checks all components of the cluster and produces warnings if any component is running a version earlier than 4.4.
Confirm Clean Shutdown
Prior to upgrading a member of the sharded cluster, confirm that the member was cleanly shut down.
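For example, to shut down a member cleanly, connect mongosh to that member and issue the shutdown from the admin database:
use admin
db.shutdownServer()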
Feature Compatibility Version
The 4.4 sharded cluster must have featureCompatibilityVersion set to "4.4".
To ensure that all members of the sharded cluster have featureCompatibilityVersion set to "4.4", connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:
Tip
For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
All members should return a result that includes "featureCompatibilityVersion" : { "version" : "4.4" }.
To set or update featureCompatibilityVersion, run the following command on the mongos:
db.adminCommand( { setFeatureCompatibilityVersion: "4.4" } )
For more information, see setFeatureCompatibilityVersion.
Replica Set Member State
For shards and config servers, ensure that no replica set member is in ROLLBACK or RECOVERING state.
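For example, connect mongosh to each shard and config server replica set and print the state of every member:
rs.status().members.forEach( m => print( m.name + " : " + m.stateStr ) )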
Back up the config Database
Optional but Recommended. As a precaution, take a backup of the config database before upgrading the sharded cluster.
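For example, you can run mongodump against the config server replica set to back up the config database; the replica set name, hosts, and output path shown here are placeholders:
mongodump --host "csReplSet/<host1:port1>,<host2:port2>,<host3:port3>" --db config --out /path/to/config-backup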
Download 5.0 Binaries
Use Package Manager
If you installed MongoDB from the MongoDB apt, yum, dnf, or zypper repositories, you should upgrade to 5.0 using your package manager.
Follow the appropriate 5.0 installation instructions for your Linux system. This will involve adding a repository for the new release, then performing the actual upgrade process.
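For example, on a Debian or Ubuntu system that already has the MongoDB 5.0 apt repository configured, the upgrade typically amounts to:
sudo apt-get update
sudo apt-get install -y mongodb-org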
Download 5.0 Binaries Manually
If you have not installed MongoDB using a package manager, you can manually download the MongoDB binaries from the MongoDB Download Center.
See 5.0 installation instructions for more information.
Upgrade Process
Warning
If you upgrade an existing instance of MongoDB to MongoDB 5.0.15, that instance may fail to start if fork: true is set in the mongod.conf file.
The upgrade issue affects all MongoDB instances that use .deb or .rpm installation packages. Installations that use the tarball (.tgz) release or other package types are not affected. For more information, see SERVER-74345.
To remove the fork: true setting, run these commands from a system terminal:
systemctl stop mongod.service
sed -i.bak '/fork: true/d' /etc/mongod.conf
systemctl start mongod.service
The second systemctl command starts the upgraded instance after the setting is removed.
Disable the Balancer.
Connect mongosh to a mongos instance in the sharded cluster, and run sh.stopBalancer() to disable the balancer:
sh.stopBalancer()
Note
If a migration is in progress, the system will complete the in-progress migration before stopping the balancer. You can run sh.isBalancerRunning() to check the balancer's current state.
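For example:
sh.isBalancerRunning()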
To verify that the balancer is disabled, run sh.getBalancerState(), which returns false if the balancer is disabled:
sh.getBalancerState()
For more information on disabling the balancer, see Disable the Balancer.
Upgrade the config servers.
Upgrade the secondary members of the replica set one at a time:
Shut down the secondary mongod instance and replace the 4.4 binary with the 5.0 binary.
Start the 5.0 binary with the --configsvr, --replSet, --port, and --bind_ip options. Include any other options as used by the deployment:
mongod --configsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>
If using a configuration file, update the file to specify sharding.clusterRole: configsvr, replication.replSetName, net.port, and net.bindIp, then start the 5.0 binary:
sharding:
  clusterRole: configsvr
replication:
  replSetName: <string>
net:
  port: <port>
  bindIp: localhost,<ip address>
storage:
  dbPath: <path>
Include any other settings as appropriate for your deployment.
Wait for the member to recover to SECONDARY state before upgrading the next secondary member. To check the member's state, issue rs.status() in mongosh.
Repeat for each secondary member.
Step down the replica set primary.
Connect mongosh to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:
rs.stepDown()
When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the stepped-down primary and replace the mongod binary with the 5.0 binary.
Start the 5.0 binary with the --configsvr, --replSet, --port, and --bind_ip options. Include any optional command line options used by the previous deployment:
mongod --configsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>
If using a configuration file, update the file to specify sharding.clusterRole: configsvr, replication.replSetName, net.port, and net.bindIp, then start the 5.0 binary:
sharding:
  clusterRole: configsvr
replication:
  replSetName: <string>
net:
  port: <port>
  bindIp: localhost,<ip address>
storage:
  dbPath: <path>
Include any other configuration as appropriate for your deployment.
Upgrade the shards.
Upgrade the shards one at a time.
For each shard replica set:
Upgrade the secondary members of the replica set one at a time:
Shut down the mongod instance and replace the 4.4 binary with the 5.0 binary.
Start the 5.0 binary with the --shardsvr, --replSet, --port, and --bind_ip options. Include any additional command line options as appropriate for your deployment:
mongod --shardsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>
If using a configuration file, update the file to include sharding.clusterRole: shardsvr, replication.replSetName, net.port, and net.bindIp, then start the 5.0 binary:
sharding:
  clusterRole: shardsvr
replication:
  replSetName: <string>
net:
  port: <port>
  bindIp: localhost,<ip address>
storage:
  dbPath: <path>
Include any other configuration as appropriate for your deployment.
Wait for the member to recover to SECONDARY state before upgrading the next secondary member. To check the member's state, you can issue rs.status() in mongosh.
Repeat for each secondary member.
Step down the replica set primary.
Connect mongosh to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:
rs.stepDown()
When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, upgrade the stepped-down primary:
Shut down the stepped-down primary and replace the mongod binary with the 5.0 binary.
Start the 5.0 binary with the --shardsvr, --replSet, --port, and --bind_ip options. Include any additional command line options as appropriate for your deployment:
mongod --shardsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>
If using a configuration file, update the file to specify sharding.clusterRole: shardsvr, replication.replSetName, net.port, and net.bindIp, then start the 5.0 binary:
sharding:
  clusterRole: shardsvr
replication:
  replSetName: <string>
net:
  port: <port>
  bindIp: localhost,<ip address>
storage:
  dbPath: <path>
Include any other configuration as appropriate for your deployment.
Upgrade the mongos instances.
Replace each mongos instance with the 5.0 binary and restart. Include any other configuration as appropriate for your deployment.
Note
The --bind_ip option must be specified when the sharded cluster members are run on different hosts or if remote clients connect to the sharded cluster.
mongos --configdb csReplSet/<rsconfigsver1:port1>,<rsconfigsver2:port2>,<rsconfigsver3:port3> --bind_ip localhost,<ip address>
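If you start mongos with a configuration file instead of command line options, the equivalent settings (using the same placeholder hosts) would look like this:
sharding:
  configDB: csReplSet/<rsconfigsver1:port1>,<rsconfigsver2:port2>,<rsconfigsver3:port3>
net:
  bindIp: localhost,<ip address>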
Re-enable the balancer.
Using mongosh, connect to a mongos in the cluster and run sh.startBalancer() to re-enable the balancer:
sh.startBalancer()
Starting in MongoDB 6.0.3, automatic chunk splitting is not performed. This is because of balancing policy improvements. Auto-splitting commands still exist, but do not perform an operation. For details, see Balancing Policy Changes.
In MongoDB versions earlier than 6.0.3, sh.startBalancer() also enables auto-splitting for the sharded cluster.
If you do not wish to enable auto-splitting while the balancer is enabled, you must also run sh.disableAutoSplit().
For more information about re-enabling the balancer, see Enable the Balancer.
Enable backwards-incompatible 5.0 features.
At this point, you can run the 5.0 binaries without the 5.0 features that are incompatible with 4.4.
To enable these 5.0 features, set the feature compatibility version (fCV) to 5.0.
Tip
Enabling these backwards-incompatible features can complicate the downgrade process since you must remove any persisted backwards-incompatible features before you downgrade.
It is recommended that after upgrading, you allow your deployment to run without enabling these features for a burn-in period to ensure the likelihood of downgrade is minimal. When you are confident that the likelihood of downgrade is minimal, enable these features.
On a mongos instance, run the setFeatureCompatibilityVersion command in the admin database:
db.adminCommand( { setFeatureCompatibilityVersion: "5.0" } )
Setting featureCompatibilityVersion (fCV) to "5.0" implicitly performs a replSetReconfig on each shard to add the term field to the shard replica configuration document.
The command doesn't complete until the new configuration propagates to a majority of replica set members.
This command must perform writes to an internal system collection. If for any reason the command does not complete successfully, you can safely retry the command on the mongos as the operation is idempotent.
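Once the command completes, you can confirm the new feature compatibility version by re-running the check from the prerequisites against each shard and config server replica set member:
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
All members should return a result that includes "featureCompatibilityVersion" : { "version" : "5.0" }.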
Note
While setFeatureCompatibilityVersion is running on the sharded cluster, chunk migrations, splits, and merges can fail with ConflictingOperationInProgress.
Any orphaned documents that exist on your shards will be cleaned up when you set featureCompatibilityVersion to 5.0. The cleanup process:
Does not block the upgrade from completing, and
Is rate limited. To mitigate the potential effect on performance during orphaned document cleanup, see Range Deletion Performance Tuning.
Note
Additional Consideration
The mongos binary will crash when attempting to connect to mongod instances whose feature compatibility version (fCV) is greater than that of the mongos. For example, you cannot connect a MongoDB 4.4 version mongos to a 5.0 sharded cluster with fCV set to 5.0. You can, however, connect a MongoDB 4.4 version mongos to a 5.0 sharded cluster with fCV set to 4.4.
Additional Upgrade Procedures
To upgrade a standalone, see Upgrade a Standalone to 5.0.
To upgrade a replica set, see Upgrade a Replica Set to 5.0.