MongoDB Enterprise Kubernetes Operator

Deploy an Ops Manager Resource

On this page

  • Considerations
  • Prerequisites
  • Procedure

You can deploy Ops Manager as a resource in a Kubernetes cluster using the Kubernetes Operator.

The following considerations apply:

When you configure your Ops Manager deployment, you must choose whether to run connections over HTTPS or HTTP.

The following HTTPS procedure:

  • Establishes TLS-encrypted connections to/from the Ops Manager application.

  • Establishes TLS-encrypted connections between the application database's replica set members.

  • Requires valid certificates for TLS encryption.

The following HTTP procedure:

  • Doesn't encrypt connections to or from the Ops Manager application.

  • Doesn't encrypt connections between the application database's replica set members.

  • Has fewer setup requirements.

When running over HTTPS, Ops Manager runs on port 8443 by default.

Select the appropriate tab based on whether you want to encrypt your Ops Manager and application database connections with TLS.

To deploy an Ops Manager instance in the central cluster and connect to it, use the following procedures:

  • Review the Ops Manager resource architecture

  • Review the Ops Manager resource considerations and prerequisites

  • Deploy an Ops Manager instance on the central cluster with TLS encryption

These procedures are the same as the procedures for single clusters deployed with the Kubernetes Operator, with the following exceptions:

  • Run the procedures to deploy Ops Manager only on the central cluster of your multi-Kubernetes-cluster deployment.

  • Set the context and the namespace.

    If you are deploying an Ops Manager resource on a multi-Kubernetes-cluster deployment:

    • Set the context to the name of the central cluster. For example: kubectl config use-context "$MDB_CENTRAL_CLUSTER_FULL_NAME".

    • Set the --namespace to the same scope that you used for your multi-Kubernetes-cluster deployment. For example: kubectl config set-context --current --namespace "mongodb".

  • Configure external connectivity for Ops Manager.

    To connect member clusters to the Ops Manager resource's deployment in the central cluster in a multi-Kubernetes-cluster deployment, use one of the following methods:

    • Configure the spec.externalConnectivity setting and specify the Ops Manager port in it. Use the ops-manager-external.yaml example file, modify it to your needs, and apply the configuration. For example, run:

      kubectl apply \
      --context "$MDB_CENTRAL_CLUSTER_FULL_NAME" \
      --namespace "mongodb" \
      -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/samples/ops-manager/ops-manager-external.yaml
    • Add the central cluster and all member clusters to the service mesh. The service mesh establishes communication from the central and all member clusters to the Ops Manager instance. To learn more, see the Multi-Kubernetes-Cluster Quick Start procedures and see the step that references the istio-injection=enabled label for Istio. Also, see Automatic sidecar injection in the Istio documentation.

  • Complete the Prerequisites.

  • Read the Considerations.

  • Create one TLS certificate for the Application Database's replica set.

    This TLS certificate requires the following attributes:

    DNS Names

    Ensure that you add a SAN (Subject Alternative Name) for each Pod that hosts a member of the Application Database replica set. The SAN for each Pod must use the following format:

    <opsmgr-metadata.name>-db-<index>.<opsmgr-metadata.name>-db-svc.<namespace>.svc.cluster.local
    Key Usages

    Ensure that the TLS certificates include the following key usages (RFC 5280):

    • "server auth"

    • "client auth"
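As an illustration of the SAN format above, the following sketch expands the required hostnames for a hypothetical Ops Manager resource named om in the mongodb namespace with a three-member Application Database:

```shell
# Hypothetical values: resource metadata.name "om", namespace "mongodb",
# three Application Database members (indexes 0 through 2).
name="om"
namespace="mongodb"
members=3
for i in $(seq 0 $((members - 1))); do
  # <opsmgr-metadata.name>-db-<index>.<opsmgr-metadata.name>-db-svc.<namespace>.svc.cluster.local
  echo "${name}-db-${i}.${name}-db-svc.${namespace}.svc.cluster.local"
done
```

Each printed hostname must appear as a SAN on the Application Database's TLS certificate.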

Important

The Kubernetes Operator uses kubernetes.io/tls secrets to store TLS certificates and private keys for Ops Manager and MongoDB resources. Starting in Kubernetes Operator version 1.17.0, the Kubernetes Operator doesn't support concatenated PEM files stored as Opaque secrets.

Before you deploy an Ops Manager resource, make sure that you plan for it.

Follow these steps to deploy the Ops Manager resource to run over HTTPS and secure the application database using TLS.

1

If you have not already, run the following command to execute all kubectl commands in the namespace you created.

Note

If you are deploying an Ops Manager resource on a multi-Kubernetes-cluster deployment:

  • Set the context to the name of the central cluster. For example: kubectl config use-context "$MDB_CENTRAL_CLUSTER_FULL_NAME".

  • Set the --namespace to the same scope that you used for your multi-Kubernetes-cluster deployment. For example: kubectl config set-context --current --namespace "mongodb".

kubectl config set-context $(kubectl config current-context) --namespace=<metadata.namespace>
2

If you're using HashiCorp Vault as your secret storage tool, you can Create a Vault Secret instead.

To learn about your options for secret storage, see Configure Secret Storage.

  1. Once you have your TLS certificates and private keys, run the following command to create a secret that stores Ops Manager's TLS certificate:

    kubectl create secret tls <prefix>-<metadata.name>-cert \
    --cert=<om-tls-cert> \
    --key=<om-tls-key>
  2. Run the following command to create a new secret that stores the application database's TLS certificate:

    kubectl create secret tls <prefix>-<metadata.name>-db-cert \
    --cert=<appdb-tls-cert> \
    --key=<appdb-tls-key>
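For reference, the first command above produces a secret shaped like the following sketch. The prefix om-prod and resource name om are hypothetical, and the certificate and key contents are elided:

```yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/tls          # required type; Opaque secrets aren't supported in 1.17.0+
metadata:
  name: om-prod-om-cert          # <prefix>-<metadata.name>-cert
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```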
3

If your Ops Manager TLS certificate or your application database TLS certificate is signed by a custom CA, you must provide a CA certificate to validate the TLS certificate(s). To validate the TLS certificate(s), create a ConfigMap to hold the CA certificate:

Warning

You must concatenate your custom CA file and the entire TLS certificate chain from downloads.mongodb.com to prevent Ops Manager from becoming inoperable if the application database restarts.

Important

The Kubernetes Operator requires that:

  • Your Ops Manager certificate is named mms-ca.crt in the ConfigMap.

  • Your application database certificate is named ca-pem in the ConfigMap.

  1. Obtain the entire TLS certificate chain for both Ops Manager and the application database from downloads.mongodb.com. The following openssl command outputs each certificate in the chain to your current working directory, in .crt format:

    openssl s_client -showcerts -verify 2 \
    -connect downloads.mongodb.com:443 -servername downloads.mongodb.com < /dev/null \
    | awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="cert"a".crt"; print >out}'
  2. Concatenate your CA's certificate file for Ops Manager with the entire TLS certificate chain from downloads.mongodb.com that you obtained in the previous step:

    cat cert1.crt cert2.crt cert3.crt cert4.crt >> mms-ca.crt
  3. Concatenate your CA's certificate file for the application database with the entire TLS certificate chain from downloads.mongodb.com that you obtained in the previous step:

    cat cert1.crt cert2.crt cert3.crt cert4.crt >> ca-pem
  4. Create the ConfigMap for Ops Manager:

    kubectl create configmap om-http-cert-ca --from-file="mms-ca.crt"
  5. Create the ConfigMap for the application database:

    kubectl create configmap ca --from-file="ca-pem"
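To see what the awk one-liner in step 1 does, you can run it against a synthetic PEM stream instead of a live TLS connection; it starts a new cert<N>.crt file at each BEGIN line:

```shell
# Split a synthetic two-certificate PEM stream into cert1.crt and cert2.crt
# in a temporary directory (no network connection needed).
cd "$(mktemp -d)"
printf '%s\n' \
  '-----BEGIN CERTIFICATE-----' 'AAAA' '-----END CERTIFICATE-----' \
  '-----BEGIN CERTIFICATE-----' 'BBBB' '-----END CERTIFICATE-----' |
awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="cert"a".crt"; print >out}'
ls cert*.crt   # lists cert1.crt and cert2.crt
```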
4

Change the highlighted settings to match your desired Ops Manager and application database configuration.

---
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the Kubernetes secret
                                           # for the admin user

  externalConnectivity:
    type: LoadBalancer
  security:
    certsSecretPrefix: <prefix> # Required. Text to prefix
                                # the name of the secret that contains
                                # Ops Manager's TLS certificate.
    tls:
      ca: "om-http-cert-ca" # Optional. Name of the ConfigMap
                            # containing the certificate authority that
                            # signs the certificates used by the Ops
                            # Manager custom resource.

  applicationDatabase:
    members: 3
    version: "4.4.0-ubi8"
    security:
      certsSecretPrefix: <prefix> # Required. Text to prefix to the
                                  # name of the secret that contains the Application
                                  # Database's TLS certificate. Name the secret
                                  # <prefix>-<metadata.name>-db-cert.
      tls:
        ca: "appdb-ca" # Optional. Name of the ConfigMap
                       # containing the certificate authority that
                       # signs the certificates used by the
                       # application database.
...
5
6
• metadata.name (string): Name for this Kubernetes Ops Manager object. Resource names must be 44 characters or less. Example: om

• spec.replicas (number): Number of Ops Manager instances to run in parallel. The minimum valid value is 1. For high availability, set this value to more than 1: multiple Ops Manager instances can read from the same Application Database, ensuring failover if one instance is unavailable and enabling you to update the Ops Manager resource without downtime. Example: 1

• spec.version (string): Version of Ops Manager to be installed, in X.Y.Z format. To view the available Ops Manager versions, see the container registry. Example: 6.0.0

• spec.adminCredentials (string): Name of the secret you created for the Ops Manager admin user. Configure the secret to use the same namespace as the Ops Manager resource. Example: om-admin-secret

• spec.security.certsSecretPrefix (string): Required. Text to prefix to the name of the secret that contains Ops Manager's TLS certificates. Example: om-prod

• spec.security.tls.ca (string): Name of the ConfigMap you created to verify your Ops Manager TLS certificates signed using a custom CA. This field is required if you signed your Ops Manager TLS certificates using a custom CA. Example: om-http-cert-ca

• spec.externalConnectivity.type (string): The Kubernetes ServiceType that exposes Ops Manager outside of Kubernetes. Exclude the spec.externalConnectivity setting and its children if you don't want the Kubernetes Operator to create a Kubernetes service to route external traffic to the Ops Manager application. Example: LoadBalancer

• spec.applicationDatabase.members (integer): Number of members of the Ops Manager Application Database replica set. Example: 3

• spec.applicationDatabase.version (string): Required. Version of MongoDB that the Ops Manager Application Database should run, in X.Y.Z-ubi8 format for the Enterprise edition and X.Y.Z format for the Community edition. Don't add the -ubi8 tag suffix to the Community edition image because the Kubernetes Operator adds the tag suffix automatically. Ensure that you choose a compatible MongoDB Server version; compatible versions differ depending on the base image that the MongoDB database resource uses. For best results, use the latest available Enterprise MongoDB version that is compatible with your Ops Manager version. To learn more about MongoDB versioning, see MongoDB Versioning in the MongoDB Manual.

• spec.applicationDatabase.security.certsSecretPrefix (string): Required. Text to prefix to the name of the secret that contains the application database's TLS certificates. Example: appdb-prod

• spec.applicationDatabase.security.tls.ca (string): Name of the ConfigMap you created to verify your application database TLS certificates signed using a custom CA. This field is required if you signed your application database TLS certificates using a custom CA. Example: ca

Note

The Kubernetes Operator mounts the CA you add using the spec.applicationDatabase.security.tls.ca setting to both the Ops Manager and the Application Database pods.

7

If you want to enable Backup for your Ops Manager instance, you must configure all of the following settings:

• spec.backup.enabled (boolean): Flag that indicates that Backup is enabled. You must specify spec.backup.enabled: true to configure settings for the head database, oplog store, and snapshot store. Example: true

• spec.backup.headDB (collection): A collection of configuration settings for the head database. For descriptions of the individual settings in the collection, see spec.backup.headDB.

• spec.backup.opLogStores (string): Name of the oplog store. Example: oplog1

• spec.backup.opLogStores.mongodbResourceRef (string): Name of the MongoDB database resource for the oplog store. Example: my-oplog-db

You must also configure an S3 snapshot store or a blockstore.

Note

If you deploy both an S3 snapshot store and a blockstore, Ops Manager randomly chooses one to use for Backup.

To configure a snapshot store, configure the following settings:

• spec.backup.s3Stores (string): Name of the S3 snapshot store. Example: s3store1

• spec.backup.s3Stores.s3SecretRef (string): Name of the secret that contains the accessKey and secretKey fields. The Backup Daemon Service uses the values of these fields as credentials to access the S3 or S3-compatible bucket. Example: my-s3-credentials

• spec.backup.s3Stores (string): URL of the S3 or S3-compatible bucket that stores the database Backup snapshots. Example: s3.us-east-1.amazonaws.com

• spec.backup.s3Stores (string): Name of the S3 or S3-compatible bucket that stores the database Backup snapshots. Example: my-bucket

To configure a blockstore, configure the following settings:

• spec.backup.blockStores (string): Name of the blockstore. Example: blockStore1

• spec.backup.blockStores.mongodbResourceRef (string): Name of the MongoDB database resource that you create for the blockstore. You must deploy this database resource in the same namespace as the Ops Manager resource. Example: my-mongodb-blockstore
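Putting the backup settings above together, a spec.backup fragment might look like the following sketch. The store, secret, and bucket names are placeholders carried over from the examples above, and the nested name fields under mongodbResourceRef and s3SecretRef reflect one common shape of these references; check your Kubernetes Operator version's CRD for the exact schema.

```yaml
spec:
  backup:
    enabled: true
    opLogStores:
      - name: oplog1
        mongodbResourceRef:
          name: my-oplog-db        # MongoDB resource deployed for the oplog store
    s3Stores:
      - name: s3store1
        s3SecretRef:
          name: my-s3-credentials  # secret with accessKey and secretKey fields
        s3BucketEndpoint: s3.us-east-1.amazonaws.com
        s3BucketName: my-bucket
```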
8

Add any optional settings that you want to apply to your deployment to the object specification file.

9
10

Run the following kubectl command on the filename of the Ops Manager resource definition:

kubectl apply -f <opsmgr-resource>.yaml

Note

If you are deploying an Ops Manager resource on a multi-Kubernetes-cluster deployment, run:

kubectl apply \
--context "$MDB_CENTRAL_CLUSTER_FULL_NAME" \
--namespace "mongodb" \
-f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/samples/ops-manager/ops-manager-external.yaml
11

To check the status of your Ops Manager resource, invoke the following command:

kubectl get om -o yaml -w

The command returns the following output under the status field while the resource deploys:

status:
  applicationDatabase:
    lastTransition: "2020-04-01T09:49:22Z"
    message: AppDB Statefulset is not ready yet
    phase: Reconciling
    type: ""
    version: ""
  backup:
    phase: ""
  opsManager:
    phase: ""

The Kubernetes Operator reconciles the resources in the following order:

  1. Application Database.

  2. Ops Manager.

  3. Backup.

The Kubernetes Operator doesn't reconcile a resource until the preceding one enters the Running phase.

After the Ops Manager resource completes the Reconciling phase, the command returns the following output under the status field if you enabled Backup:

status:
  applicationDatabase:
    lastTransition: "2020-04-01T09:50:20Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "4.4.5-ubi8"
  backup:
    lastTransition: "2020-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename> doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2020-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: https://om-svc.cloudqa.svc.cluster.local:8443
    version: "5.0.0"

Backup remains in a Pending state until you configure the Backup databases.

Tip

The status.opsManager.url field states the resource's connection URL. Using this URL, you can reach Ops Manager from inside the Kubernetes cluster or create a project using a ConfigMap.

After the resource completes the Reconciling phase, the command returns the following output under the status field:

status:
  applicationDatabase:
    lastTransition: "2019-12-06T18:23:22Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "4.4.5-ubi8"
  opsManager:
    lastTransition: "2019-12-06T18:23:26Z"
    message: The MongoDB object namespace/oplogdbname doesn't exist
    phase: Pending
    url: https://om-svc.dev.svc.cluster.local:8443
    version: ""

Backup remains in a Pending state until you configure the Backup databases.

Tip

The status.opsManager.url field states the resource's connection URL. Using this URL, you can reach Ops Manager from inside the Kubernetes cluster or create a project using a ConfigMap.

12

The steps you take differ based on how you are routing traffic to the Ops Manager application in Kubernetes. If you configured the Kubernetes Operator to create a Kubernetes service for you, or you created a Kubernetes service manually, use one of the following methods to access the Ops Manager application:

  1. Query your cloud provider to get the FQDN of the load balancer service. See your cloud provider's documentation for details.

  2. Open a browser window and navigate to the Ops Manager application using the FQDN and port number of your load balancer service.

    https://ops.example.com:8443
  3. Log in to Ops Manager using the admin user credentials.

  1. Set your firewall rules to allow access from the Internet to the spec.externalConnectivity.port on the host on which your Kubernetes cluster is running.

  2. Open a browser window and navigate to the Ops Manager application using the FQDN and the spec.externalConnectivity.port.

    https://ops.example.com:30036
  3. Log in to Ops Manager using the admin user credentials.

To learn how to access the Ops Manager application using a third-party service, refer to the documentation for your solution.

13

To configure credentials, you must create an Ops Manager organization, generate programmatic API keys, and create a secret. These activities follow the prerequisites and procedure on the Create Credentials for the Kubernetes Operator page.

14

To create a project, follow the prerequisites and procedure on the Create One Project using a ConfigMap page.

Set the following fields in your project ConfigMap:

  • Set data.baseUrl in the ConfigMap to the Ops Manager Application's URL. To find this URL, invoke the following command:


    kubectl get om -o yaml -w

    The command returns the URL of the Ops Manager Application in the status.opsManager.url field.


    status:
      applicationDatabase:
        lastTransition: "2019-12-06T18:23:22Z"
        members: 3
        phase: Running
        type: ReplicaSet
        version: "4.4.5-ubi8"
      opsManager:
        lastTransition: "2019-12-06T18:23:26Z"
        message: The MongoDB object namespace/oplogdbname doesn't exist
        phase: Pending
        url: https://om-svc.dev.svc.cluster.local:8443
        version: ""

    Important

    If you deploy Ops Manager with the Kubernetes Operator and Ops Manager will manage MongoDB database resources deployed outside of the Kubernetes cluster it's deployed to, you must set data.baseUrl to the same value of the spec.configuration.mms.centralUrl setting in the Ops Manager resource specification.

  • Set data.sslMMSCAConfigMap to the name of your ConfigMap containing the root CA certificate used to sign the Ops Manager host's certificate. The Kubernetes Operator requires that you name this Ops Manager resource's certificate mms-ca.crt in the ConfigMap.
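A project ConfigMap with both fields set might look like the following sketch; the name, namespace, organization ID, and base URL are hypothetical placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-project
  namespace: mongodb
data:
  projectName: my-project
  orgId: <orgid>
  baseUrl: https://om-svc.dev.svc.cluster.local:8443
  sslMMSCAConfigMap: om-http-cert-ca   # ConfigMap holding mms-ca.crt
```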

15

By default, Ops Manager enables Backup. Create a MongoDB database resource for the oplog and snapshot stores to complete the configuration.

  1. Deploy a MongoDB database resource for the oplog store in the same namespace as the Ops Manager resource.

    Note

    Create this database as a three-member replica set.

    Match the metadata.name of the resource with the spec.backup.opLogStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.

  2. Deploy a MongoDB database resource for the S3 snapshot store in the same namespace as the Ops Manager resource.

    Note

    Create the S3 snapshot store as a replica set.

    Match the metadata.name of the resource to the spec.backup.s3Stores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.
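For example, an oplog store database resource for step 1 might look like the following sketch, assuming a project ConfigMap named my-project and a credentials secret named om-credentials (both hypothetical):

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-oplog-db        # matches spec.backup.opLogStores.mongodbResourceRef.name
  namespace: mongodb       # same namespace as the Ops Manager resource
spec:
  type: ReplicaSet
  members: 3               # three-member replica set, as noted above
  version: "6.0.5"
  opsManager:
    configMapRef:
      name: my-project
  credentials: om-credentials
```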

16

To check the status of your Ops Manager resource, invoke the following command:

kubectl get om -o yaml -w

When Ops Manager is running, the command returns the following output under the status field:

status:
  applicationDatabase:
    lastTransition: "2019-12-06T17:46:15Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "4.4.5-ubi8"
  opsManager:
    lastTransition: "2019-12-06T17:46:32Z"
    phase: Running
    replicas: 1
    url: https://om-backup-svc.dev.svc.cluster.local:8443
    version: "5.0.0"

See Troubleshoot the Kubernetes Operator for information about the resource deployment statuses.

Follow these steps to deploy the Ops Manager resource to run over HTTP:

1

If you have not already, run the following command to execute all kubectl commands in the namespace you created.

Note

If you are deploying an Ops Manager resource on a multi-Kubernetes-cluster deployment:

  • Set the context to the name of the central cluster. For example: kubectl config use-context "$MDB_CENTRAL_CLUSTER_FULL_NAME".

  • Set the --namespace to the same scope that you used for your multi-Kubernetes-cluster deployment. For example: kubectl config set-context --current --namespace "mongodb".

kubectl config set-context $(kubectl config current-context) --namespace=<metadata.namespace>
2

Change the highlighted settings to match your desired Ops Manager configuration.

---
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the secret
                                           # for the admin user
  externalConnectivity:
    type: LoadBalancer

  applicationDatabase:
    members: 3
    version: <mongodbversion>
...
3
4
• metadata.name (string): Name for this Kubernetes Ops Manager object. Resource names must be 44 characters or less. Example: om

• spec.replicas (number): Number of Ops Manager instances to run in parallel. The minimum valid value is 1. For high availability, set this value to more than 1: multiple Ops Manager instances can read from the same Application Database, ensuring failover if one instance is unavailable and enabling you to update the Ops Manager resource without downtime. Example: 1

• spec.version (string): Version of Ops Manager to be installed, in X.Y.Z format. For the list of available Ops Manager versions, see the container registry. Example: 6.0.0

• spec.adminCredentials (string): Name of the secret you created for the Ops Manager admin user. Configure the secret to use the same namespace as the Ops Manager resource. Example: om-admin-secret

• spec.externalConnectivity.type (string): Optional. The Kubernetes ServiceType that exposes Ops Manager outside of Kubernetes. Exclude the spec.externalConnectivity setting and its children if you don't want the Kubernetes Operator to create a Kubernetes service to route external traffic to the Ops Manager application. Example: LoadBalancer

• spec.applicationDatabase.members (integer): Number of members of the Ops Manager Application Database replica set. Example: 3

• spec.applicationDatabase.version (string): Required. Version of MongoDB that the Ops Manager Application Database should run, in X.Y.Z format for the Community edition and X.Y.Z-ubi8 format for the Enterprise edition. Ensure that you choose a compatible MongoDB Server version; compatible versions differ depending on the base image that the MongoDB database resource uses. For best results, use the latest available Enterprise MongoDB version that is compatible with your Ops Manager version. To learn more about MongoDB versioning, see MongoDB Versioning in the MongoDB Manual.

5

If you want to enable backup, you must configure all of the following settings:

• spec.backup.enabled (boolean): Flag that indicates that backup is enabled. You must specify spec.backup.enabled: true to configure settings for the head database, oplog store, and snapshot store. Example: true

• spec.backup.headDB (collection): A collection of configuration settings for the head database. For descriptions of the individual settings in the collection, see spec.backup.headDB.

• spec.backup.opLogStores (string): Name of the oplog store. Example: oplog1

• spec.backup.opLogStores.mongodbResourceRef (string): Name of the MongoDB database resource for the oplog store. Example: my-oplog-db

You must also configure an S3 snapshot store or a blockstore.

Note

If you deploy both an S3 snapshot store and a blockstore, Ops Manager randomly chooses one to use for backup.

To configure an S3 snapshot store, configure the following settings:

• spec.backup.s3Stores (string): Name of the S3 snapshot store. Example: s3store1

• spec.backup.s3Stores.s3SecretRef (string): Name of the secret that contains the accessKey and secretKey fields. The Backup Daemon Service uses the values of these fields as credentials to access the S3 or S3-compatible bucket. Example: my-s3-credentials

• spec.backup.s3Stores (string): URL of the S3 or S3-compatible bucket that stores the database backup snapshots. Example: s3.us-east-1.amazonaws.com

• spec.backup.s3Stores (string): Name of the S3 or S3-compatible bucket that stores the database backup snapshots. Example: my-bucket

• spec.backup.s3Stores (string): Region where your S3-compatible bucket resides. Use this field only if your S3 store's s3BucketEndpoint doesn't include a region in its URL. Don't use this field with AWS S3 buckets. Example: us-east-1

To configure a blockstore, configure the following settings:

• spec.backup.blockStores (string): Name of the blockstore. Example: blockStore1

• spec.backup.blockStores.mongodbResourceRef (string): Name of the MongoDB database resource that you create for the blockstore. You must deploy this database resource in the same namespace as the Ops Manager resource. Example: my-mongodb-blockstore
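The blockstore settings above can be sketched as the following spec.backup fragment. The names are placeholders from the examples above, and the nested name field under mongodbResourceRef reflects one common shape of this reference; check your Kubernetes Operator version's CRD for the exact schema.

```yaml
spec:
  backup:
    enabled: true
    blockStores:
      - name: blockStore1
        mongodbResourceRef:
          name: my-mongodb-blockstore  # deployed in the Ops Manager resource's namespace
```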
6

Add any optional backup settings that you want to apply to your deployment to the object specification file. For example, for each type of backup store, and for Ops Manager backup daemon processes, you can assign labels to associate particular backup stores or backup daemon processes with specific projects. Use the spec.backup.[*].assignmentLabels elements of the OpsManager resource.
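For example, assignment labels on the backup daemon and an oplog store might be sketched as follows; the label value my-project is a hypothetical placeholder, and the exact placement of assignmentLabels depends on your Kubernetes Operator version's CRD:

```yaml
spec:
  backup:
    assignmentLabels: ["my-project"]      # labels for the backup daemon processes
    opLogStores:
      - name: oplog1
        assignmentLabels: ["my-project"]  # associates this store with a project
```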

7

Add any optional settings that you want to apply to your deployment to the object specification file.

8
9

Run the following kubectl command on the filename of the Ops Manager resource definition:

kubectl apply -f <opsmgr-resource>.yaml

Note

If you are deploying an Ops Manager resource on a multi-Kubernetes-cluster deployment, run:

kubectl apply \
--context "$MDB_CENTRAL_CLUSTER_FULL_NAME" \
--namespace "mongodb" \
-f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/samples/ops-manager/ops-manager-external.yaml
10

To check the status of your Ops Manager resource, invoke the following command:

kubectl get om -o yaml -w

The command returns the following output under the status field while the resource deploys:

status:
  applicationDatabase:
    lastTransition: "2020-04-01T09:49:22Z"
    message: AppDB Statefulset is not ready yet
    phase: Reconciling
    type: ""
    version: ""
  backup:
    phase: ""
  opsManager:
    phase: ""

The Kubernetes Operator reconciles the resources in the following order:

  1. Application Database.

  2. Ops Manager.

  3. Backup.

The Kubernetes Operator doesn't reconcile a resource until the preceding one enters the Running phase.

After the Ops Manager resource completes the Reconciling phase, the command returns the following output under the status field if you enabled backup:

status:
  applicationDatabase:
    lastTransition: "2020-04-01T09:50:20Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "4.4.5-ubi8"
  backup:
    lastTransition: "2020-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename> doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2020-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: "5.0.0"

Backup remains in a Pending state until you configure the backup databases.

Tip

The status.opsManager.url field states the resource's connection URL. Using this URL, you can reach Ops Manager from inside the Kubernetes cluster or create a project using a ConfigMap.

11

The steps you take differ based on how you are routing traffic to the Ops Manager application in Kubernetes. If you configured the Kubernetes Operator to create a Kubernetes service for you, or you created a Kubernetes service manually, use one of the following methods to access the Ops Manager application:

  1. Query your cloud provider to get the FQDN of the load balancer service. See your cloud provider's documentation for details.

  2. Open a browser window and navigate to the Ops Manager application using the FQDN and port number of your load balancer service.

    http://ops.example.com:8080
  3. Log in to Ops Manager using the admin user credentials.

  1. Set your firewall rules to allow access from the Internet to the spec.externalConnectivity.port on the host on which your Kubernetes cluster is running.

  2. Open a browser window and navigate to the Ops Manager application using the FQDN and the spec.externalConnectivity.port.

    http://ops.example.com:30036
  3. Log in to Ops Manager using the admin user credentials.

To learn how to access the Ops Manager application using a third-party service, refer to the documentation for your solution.

12

If you enabled backup, you must create an Ops Manager organization, generate programmatic API keys, and create a secret in your secret storage tool. These activities follow the prerequisites and procedure on the Create Credentials for the Kubernetes Operator page.

13

If you enabled backup, create a project by following the prerequisites and procedure on the Create One Project using a ConfigMap page.

You must set data.baseUrl in the ConfigMap to the Ops Manager Application's URL. To find this URL, invoke the following command:

kubectl get om -o yaml -w

The command returns the URL of the Ops Manager Application in the status.opsManager.url field.

status:
  applicationDatabase:
    lastTransition: "2020-04-01T10:00:32Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "4.4.5-ubi8"
  backup:
    lastTransition: "2020-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename> doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2020-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: "5.0.0"

Important

If you deploy Ops Manager with the Kubernetes Operator and Ops Manager will manage MongoDB database resources deployed outside of the Kubernetes cluster it's deployed to, you must set data.baseUrl to the same value of the spec.configuration.mms.centralUrl setting in the Ops Manager resource specification.

14

If you enabled Backup, create a MongoDB database resource for the oplog and snapshot stores to complete the configuration.

  1. Deploy a MongoDB database resource for the oplog store in the same namespace as the Ops Manager resource.

    Note

    Create this database as a replica set.

    Match the metadata.name of the resource with the spec.backup.opLogStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.

  2. Choose one of the following:

    1. Deploy a MongoDB database resource for the blockstore in the same namespace as the Ops Manager resource.

      Match the metadata.name of the resource to the spec.backup.blockStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.

    2. Configure an S3 bucket to use as the S3 snapshot store.

      Ensure that you can access the S3 bucket using the details that you specified in your Ops Manager resource definition.
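If you configured an S3 snapshot store, the credentials secret that spec.backup.s3Stores.s3SecretRef references holds the accessKey and secretKey fields described above. A sketch with placeholder values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-s3-credentials
  namespace: mongodb       # same namespace as the Ops Manager resource
stringData:
  accessKey: <aws-access-key-id>
  secretKey: <aws-secret-access-key>
```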

15

If you enabled backup, check the status of your Ops Manager resource by invoking the following command:

kubectl get om -o yaml -w

When Ops Manager is running, the command returns the following output under the status field:

status:
  applicationDatabase:
    lastTransition: "2020-04-01T10:00:32Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "4.4.5-ubi8"
  backup:
    lastTransition: "2020-04-01T10:00:53Z"
    phase: Running
    version: "4.2.8"
  opsManager:
    lastTransition: "2020-04-01T10:00:34Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: "5.0.0"

See Troubleshoot the Kubernetes Operator for information about the resource deployment statuses.