Installation Checklist

On this page

  • Topology Decisions
  • Security Decisions
  • Backup Decisions

Before you install Ops Manager, work through the decisions described on this page. During the installation process, you will make choices based on these decisions.

To install Ops Manager:

  1. Read the Ops Manager Overview.

  2. Plan your installation according to the questions on this page.

  3. Provision servers that meet the Ops Manager System Requirements.

    Warning

    Potential for Production Failure

    Your Ops Manager instance can fail in production if you do not configure the following:

    • Ops Manager hosts per the Ops Manager System Requirements.

    • MongoDB hosts per the Production Notes in the MongoDB manual. MongoDB instances in Ops Manager include:

      • The Ops Manager Application Database.

      • Each blockstore.

      • Each Ops Manager Backup Daemon head database. This only applies to FCV 4.0 and earlier. FCV 4.2 and later do not use head databases for backups.

  4. Install the Application Database and optional Backup Database; a replica set initiation sketch follows this list.

  5. Install Ops Manager using one of the available methods: from an rpm package, a deb package, or a tar.gz archive.

    Note

    To install a simple evaluation deployment on a single server, see Install a Simple Test Ops Manager Installation.
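As a concrete illustration of step 4, the following is a minimal sketch of initiating a three-member replica set for the Ops Manager Application Database from mongosh. The replica set name and hostnames are assumptions; substitute your own servers.

    // Run in mongosh against the first Application Database member.
    // The hostnames below are placeholders for your three servers.
    rs.initiate({
      _id: "opsManagerAppDb",
      members: [
        { _id: 0, host: "appdb1.example.net:27017" },
        { _id: 1, host: "appdb2.example.net:27017" },
        { _id: 2, host: "appdb3.example.net:27017" }
      ]
    })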

Topology Decisions

The topology you choose for your deployment affects the redundancy and availability of both your metadata and snapshots, and the availability of the Ops Manager Application.

Ops Manager stores application metadata and snapshots in the Ops Manager Application Database and Backup Database respectively. To provide data redundancy, run each database as a three-member replica set on multiple servers.

To provide high availability for write operations to the databases, set up each replica set so that all three members hold data. This way, if one member is unreachable, the replica set can still accept writes. Ops Manager uses w:2 write concern, which requires acknowledgement from the primary and one secondary for each write operation.
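To make the arithmetic concrete: a w:2 write needs acknowledgement from two data-bearing members, and with three such members, two remain available even when one is down. The following mongosh snippet (the collection name is illustrative) shows a write issued with that write concern:

    // With three data-bearing members, this w:2 insert still
    // succeeds while any single member is unreachable.
    db.example.insertOne(
      { status: "ok" },
      { writeConcern: { w: 2 } }
    )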

To provide high availability for the Ops Manager Application, run at least two instances of the application and use a load balancer. A load balancer placed in front of the Ops Manager Application must not return cached content. For more information, see Configure a Highly Available Ops Manager Application.
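When a load balancer fronts the application, Ops Manager must advertise the balancer's address rather than an individual host. A minimal sketch of the relevant conf-mms.properties entry, assuming the balancer answers at opsmanager.example.net:

    # conf-mms.properties: agents, users, and the API reach Ops Manager
    # through the load balancer, not through individual application hosts.
    mms.centralUrl=https://opsmanager.example.net:8443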

The following tables describe the pros and cons for different topologies.

Test Install

This deployment runs on one server and has no data redundancy. If you lose the server, you must start over from scratch.

Pro
Needs only one server.
Con
If you lose the server, you lose everything: users and projects, metadata, backups, automation configurations, stored monitoring metrics, etc.

Production Install

This install requires at least three servers. The replica sets for the Ops Manager Application Database and the Backup Database each comprise at least three data-bearing members, which requires sufficient storage and memory on each server.

Pro
You can lose a member of the Ops Manager Application Database or Backup Database and still maintain Ops Manager availability. No Ops Manager functionality is lost while the member is down.
Con
Loss of the Ops Manager instance requires you to manually start a new Ops Manager instance. No Ops Manager functionality is available while the application is down.

Highly Available Production Install

This topology runs multiple Ops Manager Applications behind a load balancer and requires infrastructure beyond what Ops Manager itself provides. For details, see Configure a Highly Available Ops Manager Application.

Pro
Ops Manager continues to be available even when any individual server is lost.
Con
Requires a larger number of servers, and requires a load balancer capable of routing traffic to available application servers.

If the servers where you deploy MongoDB don't have internet access and you use Automation, then before you create the first managed MongoDB deployment from Ops Manager, you must configure local mode and store the binaries locally. MongoDB Agents can then download the binaries directly from Ops Manager. To learn more, see Configure Deployment to Have Limited Internet Access.
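A minimal sketch of the two conf-mms.properties settings involved in local mode; the directory path shown is the conventional default and is an assumption here, and you must place the MongoDB binary archives in it yourself:

    # conf-mms.properties: serve MongoDB binaries from a local directory
    # instead of letting agents download them from the internet.
    automation.versions.source=local
    automation.versions.directory=/opt/mongodb/mms/mongodb-releases/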

Security Decisions

If Ops Manager will use a proxy server to access external services, you must configure the proxy settings in Ops Manager's conf-mms.properties configuration file. If you have already started Ops Manager, you must restart it after configuring the proxy settings.
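The exact property keys are listed in the Ops Manager Configuration Settings reference; the sketch below uses placeholder keys and a hypothetical proxy host purely to show the shape of the change, so verify the key names against that reference:

    # conf-mms.properties: placeholder keys shown for illustration only;
    # consult the configuration settings reference for the exact names.
    http.proxy.host=proxy.example.net
    http.proxy.port=3128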

If you will use authentication or TLS for connections to the Ops Manager Application Database and Backup Database, you must enable those options on each database when you deploy it, and then configure Ops Manager with the necessary certificate information for accessing the databases. For details, see Configure the Connections to the Application Database.
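A minimal sketch of the conf-mms.properties entries this typically touches, assuming a TLS-enabled three-member Application Database; the hosts, replica set name, and file paths are assumptions:

    # conf-mms.properties: connect to a TLS-enabled Application Database.
    mongo.mongoUri=mongodb://appdb1.example.net:27017,appdb2.example.net:27017,appdb3.example.net:27017/?replicaSet=opsManagerAppDb
    mongo.ssl=true
    # CA that signed the database members' certificates (example path).
    mongodb.ssl.CAFile=/etc/ssl/appdb-ca.pem
    # Client certificate Ops Manager presents to the database (example path).
    mongodb.ssl.PEMKeyFile=/etc/ssl/opsmanager-client.pem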

If you want to use LDAP for user management, you can configure LDAP authentication before or after creating your first project. There are different prerequisites for implementing a new LDAP authentication scheme or for converting an existing authentication scheme to LDAP. To learn more about these differences, see Prerequisites.

For details on LDAP authentication, see Configure Ops Manager Users for LDAP Authentication and Authorization.
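As one sketch of what that configuration looks like in conf-mms.properties; the server address, bind credentials, and DNs are placeholders, and the pages above give the authoritative list of settings:

    # conf-mms.properties: delegate Ops Manager user management to LDAP.
    mms.userSvcClass=com.xgen.svc.mms.svc.user.UserSvcLdap
    mms.ldap.url=ldap://ldap.example.net:389
    mms.ldap.bindDn=cn=opsmanager,dc=example,dc=net
    mms.ldap.bindPassword=changeMe
    mms.ldap.user.searchAttribute=uid
    mms.ldap.user.baseDn=ou=people,dc=example,dc=net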

If you will use TLS for connections to Ops Manager from MongoDB Agents, users, and the API, then you must configure Ops Manager to use TLS. The procedure to install Ops Manager includes the option to configure TLS access.
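A minimal sketch of the conf-mms.properties entries involved, assuming a PEM file that contains both the certificate and the private key at an example path:

    # conf-mms.properties: serve the Ops Manager UI and API over TLS.
    mms.https.PEMKeyFile=/etc/ssl/opsmanager.pem
    mms.https.PEMKeyFilePassword=changeMe
    # Advertise the HTTPS address to agents and users.
    mms.centralUrl=https://opsmanager.example.net:8443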

Backup Decisions

If the servers that run your Backup Daemons have no internet access, you must configure offline binary access for the Backup Daemon before running the Daemon. The Configure Deployment to Have Limited Internet Access page covers the option to configure offline binary access.
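A sketch of pre-staging binaries for a daemon host without internet access; the MongoDB release, download URL, hostnames, and target directory are assumptions chosen for illustration:

    # On a machine with internet access, download the release archive...
    curl -O https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel80-6.0.14.tgz
    # ...then copy it to the versions directory on the Backup Daemon host.
    scp mongodb-linux-x86_64-rhel80-6.0.14.tgz daemon1.example.net:/opt/mongodb/mms/mongodb-releases/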

If you need to assign backups of particular MongoDB deployments to particular data centers, then each data center requires its own Ops Manager instance, Backup Daemon, and MongoDB Agent. The separate Ops Manager instances must share a single dedicated Ops Manager Application Database. The MongoDB Agent in each data center must use the URL for its local Ops Manager instance, which you can configure through either different hostnames or split-horizon DNS. For detailed requirements, see Assign Snapshot Stores to Specific Data Centers.
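For example, the MongoDB Agent configuration on hosts in each data center would name that data center's Ops Manager URL; the file path is the agent's conventional location and the hostname is an assumption:

    # /etc/mongodb-mms/automation-agent.config on a host in data center A:
    mmsBaseUrl=https://opsmanager-dca.example.net:8443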
