Deploy an Ops Manager Resource
You can deploy Ops Manager as a resource in a Kubernetes cluster using the Kubernetes Operator.
Considerations
The following considerations apply:
Encrypting Connections
When you configure your Ops Manager deployment, you must choose whether to run connections over HTTPS or HTTP.
The following HTTPS procedure:
Establishes TLS-encrypted connections to/from the Ops Manager application.
Establishes TLS-encrypted connections between the application database's replica set members.
Requires valid certificates for TLS encryption.
The following HTTP procedure:
Doesn't encrypt connections to or from the Ops Manager application.
Doesn't encrypt connections between the application database's replica set members.
Has fewer setup requirements.
When running over HTTPS, Ops Manager runs on port 8443 by default.
Select the appropriate tab based on whether you want to encrypt your Ops Manager and application database connections with TLS.
Deploying on the Central Cluster in a Multi-Kubernetes-Cluster Deployment
To deploy an Ops Manager instance in the central cluster and connect to it, use the following procedures:
Review the Ops Manager resource considerations and prerequisites
Deploy an Ops Manager instance on the central cluster with TLS encryption
These procedures are the same as the procedures for single clusters deployed with the Kubernetes Operator with the following exceptions:
Run the procedures to deploy Ops Manager only on the central cluster of your multi-Kubernetes-cluster deployment.
Set the context and the namespace.
If you are deploying an Ops Manager resource on a multi-Kubernetes-cluster deployment:
Set the context to the name of the central cluster. For example:

kubectl config use-context "$MDB_CENTRAL_CLUSTER_FULL_NAME"

Set the namespace to the same scope that you used for your multi-Kubernetes-cluster deployment. For example:

kubectl config set-context --current --namespace=mongodb
Configure external connectivity for Ops Manager.
To connect member clusters to the Ops Manager resource's deployment in the central cluster in a multi-Kubernetes-cluster deployment, use one of the following methods:
Define the spec.externalConnectivity setting and specify the Ops Manager port in it. Use the ops-manager-external.yaml example script, modify it to your needs, and apply the configuration. For example, run:

kubectl apply \
  --context "$MDB_CENTRAL_CLUSTER_FULL_NAME" \
  --namespace "mongodb" \
  -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/samples/ops-manager/ops-manager-external.yaml

Add the central cluster and all member clusters to the service mesh. The service mesh establishes communication from the central and all member clusters to the Ops Manager instance. To learn more, see the Multi-Kubernetes-Cluster Quick Start procedures and see the step that references the istio-injection=enabled label for Istio. Also, see Automatic sidecar injection in the Istio documentation.
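For orientation, the external connectivity portion of the Ops Manager resource might look like the following sketch. The port value shown is illustrative; set it to the port your Ops Manager instance serves.

```yaml
spec:
  externalConnectivity:
    type: LoadBalancer
    port: 8443   # illustrative; match the port your Ops Manager instance serves
```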
Prerequisites
Complete the Prerequisites.
Read the Considerations.
Create one TLS certificate for the Application Database's replica set.
This TLS certificate requires the following attributes:
DNS Names

Ensure that you add SANs or Subject Names for each Pod that hosts a member of the Application Database replica set. The SAN for each Pod must use the following format:

<opsmgr-metadata.name>-db-<index>.<opsmgr-metadata.name>-db-svc.<namespace>.svc.cluster.local

Key Usages

Ensure that the TLS certificates include the following key usages (as defined in RFC 5280):

"server auth"
"client auth"
Important
The Kubernetes Operator uses kubernetes.io/tls secrets to store TLS certificates and private keys for Ops Manager and MongoDB resources. Starting in Kubernetes Operator version 1.17.0, the Kubernetes Operator doesn't support concatenated PEM files stored as Opaque secrets.
Before you deploy an Ops Manager resource, make sure you plan for your Ops Manager resource:
Complete the Prerequisites
Read the Considerations.
Procedure
Follow these steps to deploy the Ops Manager resource to run over HTTPS and secure the application database using TLS.
Configure kubectl
to default to your namespace.
If you have not already, run the following command to execute all
kubectl
commands in the namespace you created.
Note
If you are deploying an Ops Manager resource on a multi-Kubernetes-cluster deployment:
Set the context to the name of the central cluster. For example:

kubectl config use-context "$MDB_CENTRAL_CLUSTER_FULL_NAME"

Set the namespace to the same scope that you used for your multi-Kubernetes-cluster deployment. For example:

kubectl config set-context --current --namespace=mongodb
kubectl config set-context $(kubectl config current-context) --namespace=<metadata.namespace>
Create secrets for your certificates.
If you're using HashiCorp Vault as your secret storage tool, you can Create a Vault Secret instead.
To learn about your options for secret storage, see Configure Secret Storage.
Once you have your TLS certificates and private keys, run the following command to create a secret that stores Ops Manager's TLS certificate:
kubectl create secret tls <prefix>-<metadata.name>-cert \
  --cert=<om-tls-cert> \
  --key=<om-tls-key>

Run the following command to create a new secret that stores the application database's TLS certificate:

kubectl create secret tls <prefix>-<metadata.name>-db-cert \
  --cert=<appdb-tls-cert> \
  --key=<appdb-tls-key>
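If you want to exercise these commands before real certificates are issued, you can generate a throwaway self-signed certificate with openssl. This is a sketch for experimentation only: the CN is hypothetical, and a production certificate must be signed by a CA your clients trust and must cover the Pod SANs described in the prerequisites.

```shell
# Generate a self-signed certificate and key (testing only; not valid for production).
# The CN below is a hypothetical Ops Manager service name.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout om-tls.key -out om-tls.crt \
  -subj "/CN=om-svc.mongodb.svc.cluster.local"

# Inspect the subject to confirm what was generated.
openssl x509 -in om-tls.crt -noout -subject
```

The resulting files would then be passed as --cert=om-tls.crt and --key=om-tls.key to kubectl create secret tls.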
If necessary, validate your TLS certificates.
If your Ops Manager TLS certificate or your application database TLS certificate is signed by a custom CA, you must provide a CA certificate to validate the TLS certificate(s). To validate the TLS certificate(s), create a ConfigMap to hold the CA certificate:
Warning
You must concatenate your custom CA file and the entire
TLS certificate chain from downloads.mongodb.com
to prevent
Ops Manager from becoming inoperable if the application database
restarts.
Important
The Kubernetes Operator requires that:
Your Ops Manager certificate is named mms-ca.crt in the ConfigMap.
Your application database certificate is named ca-pem in the ConfigMap.
Obtain the entire TLS certificate chain for both Ops Manager and the application database from downloads.mongodb.com. The following openssl command outputs each certificate in the chain to your current working directory, in .crt format:

openssl s_client -showcerts -verify 2 \
  -connect downloads.mongodb.com:443 -servername downloads.mongodb.com < /dev/null \
  | awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="cert"a".crt"; print >out}'

Concatenate your CA's certificate file for Ops Manager with the entire TLS certificate chain from downloads.mongodb.com that you obtained in the previous step:

cat cert1.crt cert2.crt cert3.crt cert4.crt >> mms-ca.crt

Concatenate your CA's certificate file for the application database with the entire TLS certificate chain from downloads.mongodb.com that you obtained in the previous step:

cat cert1.crt cert2.crt cert3.crt cert4.crt >> ca-pem

Create the ConfigMap for Ops Manager:

kubectl create configmap om-http-cert-ca --from-file="mms-ca.crt"

Create the ConfigMap for the application database:

kubectl create configmap ca --from-file="ca-pem"
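One way to catch concatenation mistakes is to count the certificates in the resulting bundle. The sketch below uses dummy PEM blocks in place of real certificate files; with real files, the count should equal your custom CA certificate plus every certificate in the downloads.mongodb.com chain.

```shell
# Dummy stand-ins for real certificate files (illustration only).
printf -- '-----BEGIN CERTIFICATE-----\nAAA\n-----END CERTIFICATE-----\n' > cert1.crt
printf -- '-----BEGIN CERTIFICATE-----\nBBB\n-----END CERTIFICATE-----\n' > cert2.crt
printf -- '-----BEGIN CERTIFICATE-----\nCCC\n-----END CERTIFICATE-----\n' > custom-ca.crt

# Concatenate the custom CA with the chain, as in the step above.
cat custom-ca.crt cert1.crt cert2.crt > mms-ca.crt

# Count the certificates in the bundle; here we expect 3.
grep -c 'BEGIN CERTIFICATE' mms-ca.crt
```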
Copy the following example Ops Manager Kubernetes object.
Change the highlighted settings to match your desired Ops Manager and application database configuration.
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the Kubernetes secret
                                           # for the admin user
  externalConnectivity:
    type: LoadBalancer
  security:
    certsSecretPrefix: <prefix> # Required. Text to prefix
                                # the name of the secret that contains
                                # Ops Manager's TLS certificate.
    tls:
      ca: "om-http-cert-ca" # Optional. Name of the ConfigMap
                            # containing the certificate authority that
                            # signs the certificates used by the Ops
                            # Manager custom resource.
  applicationDatabase:
    members: 3
    version: "4.4.0-ubi8"
    security:
      certsSecretPrefix: <prefix> # Required. Text to prefix to the
                                  # name of the secret that contains the Application
                                  # Database's TLS certificate. Name the secret
                                  # <prefix>-<metadata.name>-db-cert.
      tls:
        ca: "appdb-ca" # Optional. Name of the ConfigMap
                       # containing the certificate authority that
                       # signs the certificates used by the
                       # application database.
...
Open your preferred text editor and paste the object specification into a new text file.
Configure the settings highlighted in the prior example.
Key | Type | Description | Example |
---|---|---|---|
metadata.name | string | Name for this Kubernetes Ops Manager object. Resource names must be 44 characters or less. | om |
spec.replicas | number | Number of Ops Manager instances to run in parallel. The minimum valid value is 1. For a highly available Ops Manager resource, set this value to more than 1. | 1 |
spec.version | string | Version of Ops Manager to be installed. The format should be X.Y.Z. To view available Ops Manager versions, view the container registry. | 6.0.0 |
spec.adminCredentials | string | Name of the secret that contains the Ops Manager admin user's credentials. | om-admin-secret |
spec.security.certsSecretPrefix | string | Required. Text to prefix to the name of the secret that contains Ops Manager's TLS certificates. | om-prod |
spec.security.tls.ca | string | Name of the ConfigMap you created to verify your Ops Manager TLS certificates signed using a custom CA. This field is required if you signed your Ops Manager TLS certificates using a custom CA. | om-http-cert-ca |
spec.externalConnectivity.type | string | The Kubernetes ServiceType that exposes Ops Manager outside of Kubernetes. Exclude the spec.externalConnectivity setting if you don't want the Kubernetes Operator to create a Kubernetes service to route external traffic to the Ops Manager application. | LoadBalancer |
spec.applicationDatabase.members | integer | Number of members of the Ops Manager Application Database replica set. | 3 |
spec.applicationDatabase.version | string | Required. Version of MongoDB that the Ops Manager Application Database should run. The format should be X.Y.Z-ubi8. Ensure that you choose a compatible MongoDB Server version. Compatible versions differ depending on the base image that the MongoDB database resource uses. To learn more about MongoDB versioning, see MongoDB Versioning in the MongoDB Manual. | For best results, use the latest available enterprise MongoDB version that is compatible with your Ops Manager version. |
spec.applicationDatabase.security.certsSecretPrefix | string | Required. Text to prefix to the name of the secret that contains the application database's TLS certificates. Name the secret <prefix>-<metadata.name>-db-cert. | appdb-prod |
spec.applicationDatabase.security.tls.ca | string | Name of the ConfigMap you created to verify your application database TLS certificates signed using a custom CA. This field is required if you signed your application database TLS certificates using a custom CA. | ca |
Note
The Kubernetes Operator mounts the CA you add using the
spec.applicationDatabase.security.tls.ca
setting to
both the Ops Manager and the Application Database pods.
Optional: Configure Backup settings
If you want to enable Backup for your Ops Manager instance, you must configure all of the following settings:
Key | Type | Description | Example |
---|---|---|---|
spec.backup.enabled | boolean | Flag that indicates that Backup is enabled. You must specify spec.backup.enabled: true to configure settings for the head database, oplog store, and snapshot store. | true |
spec.backup.headDB | collection | A collection of configuration settings for the head database. For descriptions of the individual settings in the collection, see spec.backup.headDB. | |
spec.backup.opLogStores.name | string | Name of the oplog store. | oplog1 |
spec.backup.opLogStores.mongodbResourceRef.name | string | Name of the MongoDB database resource for the oplog store. | my-oplog-db |
You must also configure an S3 snapshot store or a blockstore.
Note
If you deploy both an S3 snapshot store and a blockstore, Ops Manager randomly chooses one to use for Backup.
To configure a snapshot store, configure the following settings:
Key | Type | Description | Example |
---|---|---|---|
spec.backup.s3Stores.name | string | Name of the S3 snapshot store. | s3store1 |
spec.backup.s3Stores.s3SecretRef.name | string | Name of the secret that contains the accessKey and secretKey fields. The Backup Daemon Service uses the values of these fields as credentials to access the S3 or S3-compatible bucket. | my-s3-credentials |
spec.backup.s3Stores.s3BucketEndpoint | string | URL of the S3 or S3-compatible bucket endpoint. | s3.us-east-1.amazonaws.com |
spec.backup.s3Stores.s3BucketName | string | Name of the S3 or S3-compatible bucket that stores the database Backup snapshots. | my-bucket |
To configure a blockstore, configure the following settings:
Key | Type | Description | Example |
---|---|---|---|
spec.backup.blockStores.name | string | Name of the blockstore. | blockStore1 |
spec.backup.blockStores.mongodbResourceRef.name | string | Name of the MongoDB database resource that you create for the blockstore. You must deploy this database resource in the same namespace as the Ops Manager resource. | my-mongodb-blockstore |
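Assembled from the tables above, the backup portion of the Ops Manager resource might look like the following sketch. The store names and resource names are hypothetical, and you would typically configure either an S3 snapshot store or a blockstore rather than both:

```yaml
spec:
  backup:
    enabled: true
    opLogStores:
      - name: oplog1
        mongodbResourceRef:
          name: my-oplog-db        # MongoDB resource deployed in the same namespace
    s3Stores:
      - name: s3store1
        s3SecretRef:
          name: my-s3-credentials  # secret with accessKey and secretKey fields
        s3BucketEndpoint: s3.us-east-1.amazonaws.com
        s3BucketName: my-bucket
    blockStores:
      - name: blockStore1
        mongodbResourceRef:
          name: my-mongodb-blockstore
```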
Optional: Configure any additional settings for an Ops Manager deployment.
Add any optional settings that you want to apply to your deployment to the object specification file.
Create your Ops Manager instance.
Run the following kubectl
command on the filename of the
Ops Manager resource definition:
kubectl apply -f <opsmgr-resource>.yaml
Note
If you are deploying an Ops Manager resource on a multi-Kubernetes-cluster deployment, run:
kubectl apply \
  --context "$MDB_CENTRAL_CLUSTER_FULL_NAME" \
  --namespace "mongodb" \
  -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/samples/ops-manager/ops-manager-external.yaml
Track the status of your Ops Manager instance.
To check the status of your Ops Manager resource, invoke the following command:
kubectl get om -o yaml -w
The command returns the following output under the status
field
while the resource deploys:
status:
  applicationDatabase:
    lastTransition: "2020-04-01T09:49:22Z"
    message: AppDB Statefulset is not ready yet
    phase: Reconciling
    type: ""
    version: ""
  backup:
    phase: ""
  opsManager:
    phase: ""
The Kubernetes Operator reconciles the resources in the following order:
Application Database.
Ops Manager.
Backup.
The Kubernetes Operator doesn't reconcile a resource until the preceding
one enters the Running
phase.
After the Ops Manager resource completes the Reconciling
phase, the
command returns the following output under the status
field if you
enabled Backup:
status:
  applicationDatabase:
    lastTransition: "2020-04-01T09:50:20Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "4.4.5-ubi8"
  backup:
    lastTransition: "2020-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename> doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2020-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: https://om-svc.cloudqa.svc.cluster.local:8443
    version: "5.0.0"
Backup remains in a Pending
state until you configure the Backup
databases.
Tip
The status.opsManager.url
field states the resource's
connection URL. Using this URL, you can reach Ops Manager from
inside the Kubernetes cluster or create a project using a
ConfigMap.
After the resource completes the Reconciling
phase, the command
returns the following output under the status
field:
status:
  applicationDatabase:
    lastTransition: "2019-12-06T18:23:22Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "4.4.5-ubi8"
  opsManager:
    lastTransition: "2019-12-06T18:23:26Z"
    message: The MongoDB object namespace/oplogdbname doesn't exist
    phase: Pending
    url: https://om-svc.dev.svc.cluster.local:8443
    version: ""
Backup remains in a Pending
state until you configure the Backup
databases.
Tip
The status.opsManager.url
field states the resource's
connection URL. Using this URL, you can reach Ops Manager from
inside the Kubernetes cluster or create a project using a
ConfigMap.
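If you script around this status output, you can extract the opsManager phase from the YAML. The following is a minimal sketch using awk against a saved copy of the status; in a live cluster you would feed it the output of kubectl get om -o yaml instead:

```shell
# Extract the opsManager phase from a saved status document.
phase_of() {
  awk '/opsManager:/{in_om=1} in_om && /phase:/{print $2; exit}' "$1"
}

# Sample status, trimmed to the fields we need (values from the example above).
cat > sample-status.yaml <<'EOF'
status:
  applicationDatabase:
    phase: Running
  opsManager:
    lastTransition: "2020-04-01T09:57:40Z"
    phase: Running
EOF

phase_of sample-status.yaml   # prints: Running
```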
Access the Ops Manager application.
The steps you take differ based on how you are routing traffic to the Ops Manager application in Kubernetes. If you configured the Kubernetes Operator to create a Kubernetes service for you, or you created a Kubernetes service manually, use one of the following methods to access the Ops Manager application:
Query your cloud provider to get the FQDN of the load balancer service. See your cloud provider's documentation for details.
Open a browser window and navigate to the Ops Manager application using the FQDN and port number of your load balancer service.
https://ops.example.com:8443

Log in to Ops Manager using the admin user credentials.

Set your firewall rules to allow access from the Internet to the spec.externalConnectivity.port on the host on which your Kubernetes cluster is running.

Open a browser window and navigate to the Ops Manager application using the FQDN and the spec.externalConnectivity.port.

https://ops.example.com:30036

Log in to Ops Manager using the admin user credentials.
To learn how to access the Ops Manager application using a third-party service, refer to the documentation for your solution.
Create credentials for the Kubernetes Operator.
To configure credentials, you must create an Ops Manager organization, generate programmatic API keys, and create a secret. These activities follow the prerequisites and procedure on the Create Credentials for the Kubernetes Operator page.
Create a project using a ConfigMap.
To create a project, follow the prerequisites and procedure on the Create One Project using a ConfigMap page.
Set the following fields in your project ConfigMap:
Set data.baseUrl in the ConfigMap to the Ops Manager Application's URL. To find this URL, invoke the following command:

kubectl get om -o yaml -w

The command returns the URL of the Ops Manager Application in the status.opsManager.url field:

status:
  applicationDatabase:
    lastTransition: "2019-12-06T18:23:22Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "4.4.5-ubi8"
  opsManager:
    lastTransition: "2019-12-06T18:23:26Z"
    message: The MongoDB object namespace/oplogdbname doesn't exist
    phase: Pending
    url: https://om-svc.dev.svc.cluster.local:8443
    version: ""

Important

If you deploy Ops Manager with the Kubernetes Operator and Ops Manager will manage MongoDB database resources deployed outside of the Kubernetes cluster it's deployed to, you must set data.baseUrl to the same value of the spec.configuration.mms.centralUrl setting in the Ops Manager resource specification.

Set data.sslMMSCAConfigMap to the name of your ConfigMap containing the root CA certificate used to sign the Ops Manager host's certificate. The Kubernetes Operator requires that you name this Ops Manager resource's certificate mms-ca.crt in the ConfigMap.
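Putting those fields together, a sketch of such a project ConfigMap might look like the following. The ConfigMap name, project name, and organization ID are placeholders for illustration; consult the Create One Project using a ConfigMap page for the full set of fields:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-project                     # hypothetical ConfigMap name
  namespace: mongodb
data:
  baseUrl: https://om-svc.dev.svc.cluster.local:8443
  projectName: myProject               # hypothetical project name
  orgId: <orgid>                       # your Ops Manager organization ID
  sslMMSCAConfigMap: om-http-cert-ca   # ConfigMap holding mms-ca.crt
```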
Deploy MongoDB database resources to complete the Backup configuration.
By default, Ops Manager enables Backup. Create a MongoDB database resource for the oplog and snapshot stores to complete the configuration.
Deploy a MongoDB database resource for the oplog store in the same namespace as the Ops Manager resource.
Note
Create this database as a three-member replica set.
Match the metadata.name of the resource with the spec.backup.opLogStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.

Deploy a MongoDB database resource for the S3 snapshot store in the same namespace as the Ops Manager resource.

Note

Create the S3 snapshot store as a replica set.

Match the metadata.name of the resource to the spec.backup.s3Stores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.
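For illustration, a minimal sketch of the oplog store's MongoDB database resource might look like the following. The project ConfigMap and credentials secret names are hypothetical and must reference the credentials and project you created in the previous steps:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-oplog-db    # must match spec.backup.opLogStores.mongodbResourceRef.name
  namespace: mongodb   # same namespace as the Ops Manager resource
spec:
  type: ReplicaSet
  members: 3           # three-member replica set, as noted above
  version: "6.0.5"     # illustrative; use a version compatible with your Ops Manager
  opsManager:
    configMapRef:
      name: my-project         # hypothetical project ConfigMap
  credentials: my-credentials  # hypothetical API key secret
```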
Confirm that the Ops Manager resource is running.
To check the status of your Ops Manager resource, invoke the following command:
kubectl get om -o yaml -w
When Ops Manager is running, the command returns the following
output under the status
field:
status:
  applicationDatabase:
    lastTransition: "2019-12-06T17:46:15Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "4.4.5-ubi8"
  opsManager:
    lastTransition: "2019-12-06T17:46:32Z"
    phase: Running
    replicas: 1
    url: https://om-backup-svc.dev.svc.cluster.local:8443
    version: "5.0.0"
See Troubleshoot the Kubernetes Operator for information about the resource deployment statuses.
Follow these steps to deploy the Ops Manager resource to run over HTTP:
Configure kubectl
to default to your namespace.
If you have not already, run the following command to execute all
kubectl
commands in the namespace you created.
Note
If you are deploying an Ops Manager resource on a multi-Kubernetes-cluster deployment:
Set the context to the name of the central cluster. For example:

kubectl config use-context "$MDB_CENTRAL_CLUSTER_FULL_NAME"

Set the namespace to the same scope that you used for your multi-Kubernetes-cluster deployment. For example:

kubectl config set-context --current --namespace=mongodb
kubectl config set-context $(kubectl config current-context) --namespace=<metadata.namespace>
Copy the following example Ops Manager Kubernetes object.
Change the highlighted settings to match your desired Ops Manager configuration.
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the secret
                                           # for the admin user
  externalConnectivity:
    type: LoadBalancer
  applicationDatabase:
    members: 3
    version: <mongodbversion>
...
Open your preferred text editor and paste the object specification into a new text file.
Configure the settings included in the prior example.
Key | Type | Description | Example |
---|---|---|---|
metadata.name | string | Name for this Kubernetes Ops Manager object. Resource names must be 44 characters or less. | om |
spec.replicas | number | Number of Ops Manager instances to run in parallel. The minimum valid value is 1. For a highly available Ops Manager resource, set this value to more than 1. | 1 |
spec.version | string | Version of Ops Manager to be installed. The format should be X.Y.Z. For the list of available Ops Manager versions, view the container registry. | 6.0.0 |
spec.adminCredentials | string | Name of the secret that contains the Ops Manager admin user's credentials. | om-admin-secret |
spec.externalConnectivity.type | string | Optional. The Kubernetes ServiceType that exposes Ops Manager outside of Kubernetes. Exclude the spec.externalConnectivity setting if you don't want the Kubernetes Operator to create a Kubernetes service to route external traffic to the Ops Manager application. | LoadBalancer |
spec.applicationDatabase.members | integer | Number of members of the Ops Manager Application Database replica set. | 3 |
spec.applicationDatabase.version | string | Required. Version of MongoDB that the Ops Manager Application Database should run. The format should be X.Y.Z-ubi8. Ensure that you choose a compatible MongoDB Server version. Compatible versions differ depending on the base image that the MongoDB database resource uses. To learn more about MongoDB versioning, see MongoDB Versioning in the MongoDB Manual. | For best results, use the latest available enterprise MongoDB version that is compatible with your Ops Manager version. |
Optional: Configure backup settings.
If you want to enable backup, you must configure all of the following settings:
Key | Type | Description | Example |
---|---|---|---|
spec.backup.enabled | boolean | Flag that indicates that backup is enabled. You must specify spec.backup.enabled: true to configure settings for the head database, oplog store, and snapshot store. | true |
spec.backup.headDB | collection | A collection of configuration settings for the head database. For descriptions of the individual settings in the collection, see spec.backup.headDB. | |
spec.backup.opLogStores.name | string | Name of the oplog store. | oplog1 |
spec.backup.opLogStores.mongodbResourceRef.name | string | Name of the MongoDB database resource for the oplog store. | my-oplog-db |
You must also configure an S3 snapshot store or a blockstore.
Note
If you deploy both an S3 snapshot store and a blockstore, Ops Manager randomly chooses one to use for backup.
To configure an S3 snapshot store, configure the following settings:
Key | Type | Description | Example |
---|---|---|---|
spec.backup.s3Stores.name | string | Name of the S3 snapshot store. | s3store1 |
spec.backup.s3Stores.s3SecretRef.name | string | Name of the secret that contains the accessKey and secretKey fields. The Backup Daemon Service uses the values of these fields as credentials to access the S3 or S3-compatible bucket. | my-s3-credentials |
spec.backup.s3Stores.s3BucketEndpoint | string | URL of the S3 or S3-compatible bucket endpoint. | s3.us-east-1.amazonaws.com |
spec.backup.s3Stores.s3BucketName | string | Name of the S3 or S3-compatible bucket that stores the database backup snapshots. | my-bucket |
spec.backup.s3Stores.s3RegionOverride | string | Region where your S3-compatible bucket resides. Use this field only if your S3 store's s3BucketEndpoint doesn't include a region in its URL. Don't use this field with AWS S3 buckets. | us-east-1 |
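The credentials secret referenced by the S3 store can be expressed as a Kubernetes Secret manifest. The accessKey and secretKey field names come from the table above; the secret name and values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-s3-credentials
  namespace: mongodb
stringData:
  accessKey: <aws-access-key-id>
  secretKey: <aws-secret-access-key>
```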
To configure a blockstore, configure the following settings:
Key | Type | Description | Example |
---|---|---|---|
spec.backup.blockStores.name | string | Name of the blockstore. | blockStore1 |
spec.backup.blockStores.mongodbResourceRef.name | string | Name of the MongoDB database resource that you create for the blockstore. You must deploy this database resource in the same namespace as the Ops Manager resource. | my-mongodb-blockstore |
Optional: Configure any additional settings for an Ops Manager backup.
Add any optional settings for backups that you want to apply to your deployment to the object specification file. For example, for each type of backup store, and for Ops Manager backup daemon processes, you can assign labels to associate particular backup stores or backup daemon processes with specific projects. Use the spec.backup.[*].assignmentLabels elements of the OpsManager resource.
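For example, a sketch of assignment labels on the backup daemon and an oplog store might look like the following; the label value is hypothetical:

```yaml
spec:
  backup:
    enabled: true
    assignmentLabels: ["prod"]       # labels for the backup daemon processes
    opLogStores:
      - name: oplog1
        assignmentLabels: ["prod"]   # associate this store with matching projects
```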
Optional: Configure any additional settings for an Ops Manager deployment.
Add any optional settings that you want to apply to your deployment to the object specification file.
Create your Ops Manager instance.
Run the following kubectl
command on the filename of the Ops Manager resource definition:
kubectl apply -f <opsmgr-resource>.yaml
Note
If you are deploying an Ops Manager resource on a multi-Kubernetes-cluster deployment, run:
kubectl apply \
  --context "$MDB_CENTRAL_CLUSTER_FULL_NAME" \
  --namespace "mongodb" \
  -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/samples/ops-manager/ops-manager-external.yaml
Track the status of your Ops Manager instance.
To check the status of your Ops Manager resource, invoke the following command:
kubectl get om -o yaml -w
The command returns the following output under the status
field
while the resource deploys:
status:
  applicationDatabase:
    lastTransition: "2020-04-01T09:49:22Z"
    message: AppDB Statefulset is not ready yet
    phase: Reconciling
    type: ""
    version: ""
  backup:
    phase: ""
  opsManager:
    phase: ""
The Kubernetes Operator reconciles the resources in the following order:
Application Database.
Ops Manager.
Backup.
The Kubernetes Operator doesn't reconcile a resource until the preceding
one enters the Running
phase.
After the Ops Manager resource completes the Reconciling
phase, the
command returns the following output under the status
field if you
enabled backup:
status:
  applicationDatabase:
    lastTransition: "2020-04-01T09:50:20Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "4.4.5-ubi8"
  backup:
    lastTransition: "2020-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename> doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2020-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: "5.0.0"
Backup remains in a Pending
state until you configure the backup
databases.
Tip
The status.opsManager.url
field states the resource's
connection URL. Using this URL, you can reach Ops Manager from
inside the Kubernetes cluster or create a project using a
ConfigMap.
Access the Ops Manager application.
The steps you take differ based on how you are routing traffic to the Ops Manager application in Kubernetes. If you configured the Kubernetes Operator to create a Kubernetes service for you, or you created a Kubernetes service manually, use one of the following methods to access the Ops Manager application:
Query your cloud provider to get the FQDN of the load balancer service. See your cloud provider's documentation for details.
Open a browser window and navigate to the Ops Manager application using the FQDN and port number of your load balancer service.
http://ops.example.com:8080

Log in to Ops Manager using the admin user credentials.

Set your firewall rules to allow access from the Internet to the spec.externalConnectivity.port on the host on which your Kubernetes cluster is running.

Open a browser window and navigate to the Ops Manager application using the FQDN and the spec.externalConnectivity.port.

http://ops.example.com:30036

Log in to Ops Manager using the admin user credentials.
To learn how to access the Ops Manager application using a third-party service, refer to the documentation for your solution.
Optional: Create credentials for the Kubernetes Operator.
If you enabled backup, you must create an Ops Manager organization, generate programmatic API keys, and create a secret in your secret storage tool. These activities follow the prerequisites and procedure on the Create Credentials for the Kubernetes Operator page.
Optional: Create a project using a ConfigMap.
If you enabled backup, create a project by following the prerequisites and procedure on the Create One Project using a ConfigMap page.
You must set data.baseUrl
in the ConfigMap to the Ops Manager Application's URL. To find this URL, invoke the following command:
kubectl get om -o yaml -w
The command returns the URL of the Ops Manager Application in the
status.opsManager.url
field.
status:
  applicationDatabase:
    lastTransition: "2020-04-01T10:00:32Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "4.4.5-ubi8"
  backup:
    lastTransition: "2020-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename> doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2020-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: "5.0.0"
Important
If you deploy Ops Manager with the Kubernetes Operator and Ops Manager will
manage MongoDB database resources deployed outside of the Kubernetes
cluster it's deployed to, you must set data.baseUrl
to the same
value of the
spec.configuration.mms.centralUrl
setting in the Ops Manager resource specification.
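In the Ops Manager resource, that setting lives under spec.configuration. A sketch with an illustrative URL:

```yaml
spec:
  configuration:
    mms.centralUrl: http://ops.example.com:8080   # illustrative external URL
```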
Optional: Deploy MongoDB database resources to complete the backup configuration.
If you enabled Backup, create a MongoDB database resource for the oplog and snapshot stores to complete the configuration.
Deploy a MongoDB database resource for the oplog store in the same namespace as the Ops Manager resource.
Note
Create this database as a replica set.
Match the metadata.name of the resource with the spec.backup.opLogStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.

Choose one of the following:
Deploy a MongoDB database resource for the blockstore in the same namespace as the Ops Manager resource.
Match the metadata.name of the resource to the spec.backup.blockStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.

Configure an S3 bucket to use as the S3 snapshot store.
Ensure that you can access the S3 bucket using the details that you specified in your Ops Manager resource definition.
Optional: Confirm that the Ops Manager resource is running.
If you enabled backup, check the status of your Ops Manager resource by invoking the following command:
kubectl get om -o yaml -w
When Ops Manager is running, the command returns the following
output under the status
field:
status:
  applicationDatabase:
    lastTransition: "2020-04-01T10:00:32Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "4.4.5-ubi8"
  backup:
    lastTransition: "2020-04-01T10:00:53Z"
    phase: Running
    version: "4.2.8"
  opsManager:
    lastTransition: "2020-04-01T10:00:34Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: "5.0.0"
See Troubleshoot the Kubernetes Operator for information about the resource deployment statuses.