Connect to a MongoDB Database Resource from Outside Kubernetes
The following procedure describes how to connect to a MongoDB resource deployed in Kubernetes from outside of the Kubernetes cluster.
Prerequisite
Compatible MongoDB Versions
For your databases to be accessed outside of Kubernetes, they must run MongoDB 4.2.3 or later.
Considerations
Configure Readiness Probe Overrides
If you create custom services that require external access to MongoDB custom resources deployed by the Kubernetes Operator and use readiness probes in Kubernetes, set the publishNotReadyAddresses setting in Kubernetes to true.
The publishNotReadyAddresses setting indicates that an agent that interacts with endpoints for this service should disregard the service's ready state. Setting publishNotReadyAddresses to true overrides the behavior of the readiness probe configured for the Pod hosting your service.
By default, the publishNotReadyAddresses setting is set to false. In this case, when the Pods that host the MongoDB custom resources in the Kubernetes Operator lose connectivity to Cloud Manager or Ops Manager, the readiness probes configured for these Pods fail.
However, when you set the publishNotReadyAddresses setting to true:
Kubernetes does not shut down the service whose readiness probe fails.
Kubernetes considers all endpoints as ready even if the probes for the Pods hosting the services for these endpoints indicate that they aren't ready.
MongoDB custom resources are still available for read and write operations.
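For example, a minimal sketch of a custom external service that sets this field might look like the following. The service name, the targeted Pod name, and the LoadBalancer type are placeholders for illustration; only publishNotReadyAddresses: true is the point of this example.
apiVersion: v1
kind: Service
metadata:
  name: <my-custom-external-svc>
spec:
  type: LoadBalancer
  publishNotReadyAddresses: true   # publish endpoints even when readiness probes report not ready
  selector:
    statefulset.kubernetes.io/pod-name: <my-replica-set>-0
  ports:
    - name: mongodb
      protocol: TCP
      port: 27017
      targetPort: 27017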
Procedure
The following procedure walks you through the process of configuring external connectivity for your deployment by using the built-in configuration options in the Kubernetes Operator.
How you connect from outside of the Kubernetes cluster to a MongoDB resource that the Kubernetes Operator deployed depends on the type of resource.
To connect to your Kubernetes Operator-deployed MongoDB standalone resource from outside of the Kubernetes cluster:
Deploy a standalone resource with the Kubernetes Operator.
If you haven't deployed a standalone resource, follow the instructions to deploy one.
This procedure uses the following example:
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-standalone>
spec:
  version: "4.2.2-ent"
  opsManager:
    configMapRef:
      name: <configMap.metadata.name>
      # Must match metadata.name in ConfigMap file
  credentials: <mycredentials>
  type: Standalone
...
Create an external service for the MongoDB Pod.
To connect to your standalone resource from an external resource, configure the spec.externalAccess setting:
externalAccess: {}
This setting instructs the Kubernetes Operator to create an external LoadBalancer service for the MongoDB Pod in your standalone resource. The external service provides an entry point for external connections. Adding this setting with no values creates an external service with the following default values:
Field | Value | Description |
---|---|---|
Name | <pod-name>-svc-external | Name of the external service. You can't change this value. |
Type | LoadBalancer | Creates an external LoadBalancer service. |
Port | <Port Number> | A port for mongod. |
publishNotReadyAddresses | true | Specifies that DNS records are created even if the Pod isn't ready. Do not set to false for any database Pod. |
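For reference, a minimal sketch of the earlier standalone example with this setting added (same placeholders as above):
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-standalone>
spec:
  version: "4.2.2-ent"
  opsManager:
    configMapRef:
      name: <configMap.metadata.name>
  credentials: <mycredentials>
  type: Standalone
  externalAccess: {}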
Optionally, if you need to add values to the service or override the default values, specify:
Annotations specific to your cloud provider, in spec.externalAccess.externalService.annotations
Overrides for the service specification, in spec.externalAccess.externalService.spec
For example, the following settings override the default values for the external service to configure your standalone resource to create NodePort services that expose the MongoDB Pod:
externalAccess:
  externalService:
    annotations:
      # cloud-specific annotations for the service
    spec:
      type: NodePort # default is LoadBalancer
      port: 27017
      # you can specify other spec overrides if necessary
Tip
To learn more, see Annotations and ServiceSpec in the Kubernetes documentation.
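As an illustration, if your cluster runs on AWS, the annotations block might request a Network Load Balancer. The annotation shown is an AWS-specific Kubernetes annotation, not a Kubernetes Operator setting, and is only an example for that cloud provider:
externalAccess:
  externalService:
    annotations:
      # AWS-specific example: request a Network Load Balancer for the external service
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"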
Verify the external services.
In your standalone resource, run the following command to verify that the Kubernetes Operator created the external service for your deployment.
kubectl get services
The command returns a list of services similar to the following output. For each database Pod in the cluster, the Kubernetes Operator creates an external service named <pod-name>-0-svc-external. This service is configured according to the values and overrides you provide in the external service specification.
NAME                             TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)           AGE
<my-standalone>-0-svc-external   LoadBalancer   10.102.27.116   <lb-ip-or-fqdn>   27017:27017/TCP   8m30s
Depending on your cluster configuration or cloud provider, the IP address of the LoadBalancer service is an externally accessible IP address or FQDN. You can use the IP address or FQDN to route traffic from your external domain.
Test the connection to the standalone resource.
To connect to your deployment from outside of the Kubernetes cluster, use the MongoDB Shell (mongosh) and specify the MongoDB Pod address that you've exposed through the external domain.
Example
If you have an external FQDN of <my-standalone>.<external-domain>, you can connect to this standalone instance from outside of the Kubernetes cluster by using the following command:
mongosh "mongodb://<my-standalone>.<external-domain>"
Important
This procedure describes the simplest way to enable external connectivity. Other tools may be better suited for production environments.
To connect to your Kubernetes Operator-deployed MongoDB replica set resource from outside of the Kubernetes cluster:
Deploy a replica set with the Kubernetes Operator.
If you haven't deployed a replica set, follow the instructions to deploy one.
You must enable TLS for the replica set by providing a value for the spec.security.certsSecretPrefix setting. The replica set must use a custom CA certificate stored with spec.security.tls.ca.
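As an illustrative sketch of where these settings live in the resource (the prefix and ConfigMap names are placeholders, not values this procedure defines):
spec:
  security:
    certsSecretPrefix: <prefix>   # for example, devDb
    tls:
      ca: <custom-ca-configmap>   # ConfigMap that contains your custom CA certificate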
Create an external service for the MongoDB Pods.
To connect to your replica set from an external resource, configure the spec.externalAccess setting:
externalAccess: {}
This setting instructs the Kubernetes Operator to create an external LoadBalancer service for the MongoDB Pods in your replica set. The external service provides an entry point for external connections. Adding this setting with no values creates an external service with the following default values:
Field | Value | Description |
---|---|---|
Name | <pod-name>-svc-external | Name of the external service. You can't change this value. |
Type | LoadBalancer | Creates an external LoadBalancer service. |
Port | <Port Number> | A port for mongod. |
publishNotReadyAddresses | true | Specifies that DNS records are created even if the Pod isn't ready. Do not set to false for any database Pod. |
Optionally, if you need to add values to the service or override the default values, specify:
Annotations specific to your cloud provider, in spec.externalAccess.externalService.annotations
Overrides for the service specification, in spec.externalAccess.externalService.spec
For example, the following settings override the default values for the external service to configure your replica set to create NodePort services that expose the MongoDB Pods:
externalAccess:
  externalService:
    annotations:
      # cloud-specific annotations for the service
    spec:
      type: NodePort # default is LoadBalancer
      port: 27017
      # you can specify other spec overrides if necessary
Tip
To learn more, see Annotations and ServiceSpec in the Kubernetes documentation.
Verify the external services.
In your replica set, run the following command to verify that the Kubernetes Operator created the external service for your deployment.
kubectl get services
The command returns a list of services similar to the following output. For each database Pod in the cluster, the Kubernetes Operator creates an external service named <pod-name>-<pod-idx>-svc-external. This service is configured according to the values and overrides you provide in the external service specification.
NAME                              TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)           AGE
<my-replica-set>-0-svc-external   LoadBalancer   10.102.27.116   <lb-ip-or-fqdn>   27017:27017/TCP   8m30s
Depending on your cluster configuration or cloud provider, the IP address of the LoadBalancer service is an externally accessible IP address or FQDN. You can use the IP address or FQDN to route traffic from your external domain.
Copy the sample replica set resource.
Change the settings of this YAML file to match your desired replica set configuration.
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-replica-set>
spec:
  members: 3
  version: "4.2.2-ent"
  type: ReplicaSet
  opsManager:
    configMapRef:
      name: <configMap.metadata.name>
  credentials: <mycredentials>
  persistent: true
  security:
    tls:
      enabled: true
  connectivity:
    replicaSetHorizons:
      - "example-website": "web1.example.com:30907"
      - "example-website": "web2.example.com:32350"
      - "example-website": "web3.example.com:31185"
...
Paste the copied example section into your existing replica set resource.
Open your preferred text editor and paste the object specification
at the end of your resource file in the spec
section.
Change the following settings to your preferred values.
Key | Type | Necessity | Description | Example |
---|---|---|---|---|
spec.connectivity.replicaSetHorizons | collection | Conditional | Add this parameter and values if you need your database to be accessed outside of Kubernetes. This setting allows you to provide different DNS settings within the Kubernetes cluster and to the Kubernetes cluster. The Kubernetes Operator uses split horizon DNS for replica set members. This feature allows communication both within the Kubernetes cluster and from outside Kubernetes. You may add multiple external mappings per host. See the Split Horizon requirements. | |
spec.security.certsSecretPrefix | string | Required | Add the <prefix> of the secret name that contains your MongoDB deployment's TLS certificates. | devDb |
Confirm the external hostnames and external service values in your replica set resource.
Confirm that the external hostnames in the spec.connectivity.replicaSetHorizons setting are correct.
External hostnames should match the DNS names of Kubernetes worker nodes. These can be any nodes in the Kubernetes cluster; if a Pod runs on another node, Kubernetes routes the traffic internally.
Set the ports in spec.connectivity.replicaSetHorizons to the external service values.
Example
  security:
    tls:
      enabled: true
  connectivity:
    replicaSetHorizons:
      - "example-website": "web1.example.com:30907"
      - "example-website": "web2.example.com:32350"
      - "example-website": "web3.example.com:31185"
...
Save your replica set config file.
Update and restart your replica set deployment.
In any directory, invoke the following Kubernetes command to update and restart your replica set:
kubectl apply -f <replica-set-conf>.yaml
Test the connection to the replica set.
In the development environment, for each host in a replica set, run the following command:
mongosh --host <my-replica-set>/web1.example.com \
  --port 30907 --ssl \
  --sslAllowInvalidCertificates
Note
Don't use the --sslAllowInvalidCertificates flag in production.
In production, for each host in a replica set, specify the TLS certificate and the CA to securely connect to client tools or applications:
mongosh --host <my-replica-set>/web1.example.com \
  --port 30907 \
  --tls \
  --tlsCertificateKeyFile server.pem \
  --tlsCAFile ca.pem
If the connection succeeds, you should see:
Enterprise <my-replica-set> [primary]
To connect to your Kubernetes Operator-deployed MongoDB replica set resource from outside of the Kubernetes cluster with OpenShift:
Deploy a replica set with the Kubernetes Operator.
If you haven't deployed a replica set, follow the instructions to deploy one.
You must enable TLS for the replica set by providing a value for the spec.security.certsSecretPrefix setting. The replica set must use a custom CA certificate stored with spec.security.tls.ca.
Configure services to ensure connectivity.
Paste the following example services into a text editor:
---
kind: Service
apiVersion: v1
metadata:
  name: my-external-0
spec:
  ports:
    - name: mongodb
      protocol: TCP
      port: 443
      targetPort: 27017
  selector:
    statefulset.kubernetes.io/pod-name: my-external-0
---
kind: Service
apiVersion: v1
metadata:
  name: my-external-1
spec:
  ports:
    - name: mongodb
      protocol: TCP
      port: 443
      targetPort: 27017
  selector:
    statefulset.kubernetes.io/pod-name: my-external-1
---
kind: Service
apiVersion: v1
metadata:
  name: my-external-2
spec:
  ports:
    - name: mongodb
      protocol: TCP
      port: 443
      targetPort: 27017
  selector:
    statefulset.kubernetes.io/pod-name: my-external-2
...
Note
If the spec.selector has entries that target headless services or applications, OpenShift may create a software firewall rule explicitly dropping connectivity. Review the selectors carefully and consider targeting the StatefulSet Pod members directly, as seen in the example. Routes in OpenShift offer port 80 or port 443. This example service uses port 443.
Change the settings to your preferred values.
Save this file with a .yaml file extension.
To create the services, invoke the following kubectl command on the services file you created:
kubectl apply -f <my-external-services>.yaml
Configure routes to ensure TLS termination passthrough.
Paste the following example routes into a text editor:
---
apiVersion: v1
kind: Route
metadata:
  name: my-external-0
spec:
  host: my-external-0.{redacted}
  to:
    kind: Service
    name: my-external-0
  tls:
    termination: passthrough
---
apiVersion: v1
kind: Route
metadata:
  name: my-external-1
spec:
  host: my-external-1.{redacted}
  to:
    kind: Service
    name: my-external-1
  tls:
    termination: passthrough
---
apiVersion: v1
kind: Route
metadata:
  name: my-external-2
spec:
  host: my-external-2.{redacted}
  to:
    kind: Service
    name: my-external-2
  tls:
    termination: passthrough
...
Change the settings to your preferred values.
Save this file with a .yaml file extension.
To create the routes, invoke the following kubectl command on the routes file you created:
kubectl apply -f <my-external-routes>.yaml
Configure your replica set resource YAML file.
Use the following example to edit your replica set resource YAML file:
---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-external
  namespace: mongodb
spec:
  type: ReplicaSet
  members: 3
  version: 4.2.2-ent
  opsManager:
    configMapRef:
      name: {redacted}
  credentials: {redacted}
  persistent: false
  security:
    tls:
      # TLS must be enabled to allow external connectivity
      enabled: true
    authentication:
      enabled: true
      modes: ["SCRAM","X509"]
  connectivity:
    # The "localhost" routes are included to enable the creation of a localhost
    # TLS SAN in the CSR, per OpenShift route requirements.
    # "ocroute" is the configured route in OpenShift.
    replicaSetHorizons:
      - "ocroute": "my-external-0.{redacted}:443"
        "localhost": "localhost:27017"
      - "ocroute": "my-external-1.{redacted}:443"
        "localhost": "localhost:27018"
      - "ocroute": "my-external-2.{redacted}:443"
        "localhost": "localhost:27019"
...
Note
OpenShift clusters require localhost horizons if you intend to use the Kubernetes Operator to create each CSR. If you manually create your TLS certificates, ensure you include localhost in the SAN list.
Change the settings to your preferred values.
Key | Type | Necessity | Description | Example |
---|---|---|---|---|
spec.connectivity.replicaSetHorizons | collection | Conditional | Add this parameter and values if you need your database to be accessed outside of Kubernetes. This setting allows you to provide different DNS settings within the Kubernetes cluster and to the Kubernetes cluster. The Kubernetes Operator uses split horizon DNS for replica set members. This feature allows communication both within the Kubernetes cluster and from outside Kubernetes. You may add multiple external mappings per host. See the Split Horizon requirements. | |
spec.security.certsSecretPrefix | string | Required | Add the <prefix> of the secret name that contains your MongoDB deployment's TLS certificates. | devDb |
Save your replica set config file.
Create the necessary TLS certificates and Kubernetes secrets.
Configure TLS for your replica set. Create one secret for the MongoDB replica set and one for the certificate authority. The Kubernetes Operator uses these secrets to place the TLS files in the pods for MongoDB to use.
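The following is a minimal sketch of those two objects. It assumes the <prefix>-<resource-name>-cert secret naming convention and a ca-pem entry in the CA ConfigMap; verify the exact names and keys against the TLS documentation for your Kubernetes Operator version.
apiVersion: v1
kind: Secret
metadata:
  name: <prefix>-my-external-cert   # hypothetical name: <prefix>, the replica set name, and "-cert"
  namespace: mongodb
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded server certificate chain>
  tls.key: <base64-encoded private key>
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: <custom-ca-configmap>
  namespace: mongodb
data:
  ca-pem: |
    <PEM-encoded CA certificate>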
Update and restart your replica set deployment.
In any directory, invoke the following Kubernetes command to update and restart your replica set:
kubectl apply -f <replica-set-conf>.yaml
Test the connection to the replica set.
The Kubernetes Operator deploys the MongoDB replica set, configured with the horizon routes created for ingress. After the deployment completes, you can connect through the horizon by using TLS. If the certificate authority is not present on your workstation, you can view and copy it from a MongoDB Pod by using the following command:
oc exec -it my-external-0 -- cat /mongodb-automation/ca.pem
To test the connections, run the following command:
Note
In the following example, for each member of the replica set, use your replica set names and replace {redacted} with the domain that you manage.
mongosh --host my-external/my-external-0.{redacted} \
  --port 443 --ssl \
  --tlsAllowInvalidCertificates
Warning
Don't use the --tlsAllowInvalidCertificates flag in production.
In production, for each host in a replica set, specify the TLS certificate and the CA to securely connect to client tools or applications:
mongosh --host my-external/my-external-0.{redacted} \
  --port 443 \
  --tls \
  --tlsCertificateKeyFile server.pem \
  --tlsCAFile ca.pem
If the connection succeeds, you should see:
Enterprise <my-replica-set> [primary]
To connect to your Kubernetes Operator-deployed MongoDB sharded cluster resource from outside of the Kubernetes cluster:
Deploy a sharded cluster with the Kubernetes Operator.
If you haven't deployed a sharded cluster, follow the instructions to deploy one.
You must enable TLS for the sharded cluster by configuring the following settings:
Key | Type | Necessity | Description | Example |
---|---|---|---|---|
spec.security.certsSecretPrefix | string | Required | Add the <prefix> of the secret name that contains your MongoDB deployment's TLS certificates. | devDb |
spec.security.tls.additionalCertificateDomains | collection | Optional | List of every domain that should be added to the TLS certificates for each Pod in this deployment. When you set this parameter, every CSR that the Kubernetes Operator transforms into a TLS certificate includes a SAN in the form <pod name>.<additional cert domain>. | example.com |
Create an external service for the mongos Pods.
To connect to your sharded cluster from an external resource, configure the spec.externalAccess setting:
externalAccess: {}
This setting instructs the Kubernetes Operator to create an external LoadBalancer service for the mongos Pods in your sharded cluster. The external service provides an entry point for external connections. Adding this setting with no values creates an external service with the following default values:
Field | Value | Description |
---|---|---|
Name | <pod-name>-svc-external | Name of the external service. You can't change this value. |
Type | LoadBalancer | Creates an external LoadBalancer service. |
Port | <Port Number> | A port for mongod. |
publishNotReadyAddresses | true | Specifies that DNS records are created even if the Pod isn't ready. Do not set to false for any database Pod. |
Optionally, if you need to add values to the service or override the default values, specify:
Annotations specific to your cloud provider, in spec.externalAccess.externalService.annotations
Overrides for the service specification, in spec.externalAccess.externalService.spec
For example, the following settings override the default values for the external service to configure your sharded cluster to create NodePort services that expose the mongos Pods:
externalAccess:
  externalService:
    annotations:
      # cloud-specific annotations for the service
    spec:
      type: NodePort # default is LoadBalancer
      port: 27017
      # you can specify other spec overrides if necessary
Tip
To learn more, see Annotations and ServiceSpec in the Kubernetes documentation.
Add Subject Alternative Names to your TLS certificates.
Add each external DNS name to the certificate SAN.
Each MongoDB host uses the following SANs:
<my-sharded-cluster>-<shard>-<pod-index>.<external-domain>
<my-sharded-cluster>-config-<pod-index>.<external-domain>
<my-sharded-cluster>-mongos-<pod-index>.<external-domain>
The mongos instance uses the following SAN:
<my-sharded-cluster>-mongos-<pod-index>-svc-external.<external-domain>
Configure the spec.security.tls.additionalCertificateDomains setting similar to the following example. Each TLS certificate that you use must include the corresponding SAN for the shard, config server, or mongos instance. The Kubernetes Operator validates your configuration.
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-sharded-cluster>
spec:
  version: "4.2.2-ent"
  opsManager:
    configMapRef:
      name: <configMap.metadata.name>
      # Must match metadata.name in ConfigMap file
  shardCount: 2
  mongodsPerShardCount: 3
  mongosCount: 2
  configServerCount: 3
  credentials: my-secret
  type: ShardedCluster
  externalAccess: {}
  security:
    certsSecretPrefix: <prefix>
    tls:
      additionalCertificateDomains:
        - "<external-domain>"
...
Verify the external services.
In your sharded cluster, run the following command to verify that the Kubernetes Operator created the external services for your deployment.
kubectl get services
The command returns a list of services similar to the following output.
For each mongos instance in the cluster, the Kubernetes Operator creates an external service named <pod-name>-<pod-idx>-svc-external. This service is configured according to the values and overrides you provide in the external service specification.
NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)           AGE
<my-sharded-cluster>-mongos-0-svc-external   LoadBalancer   10.102.27.116   <lb-ip-or-fqdn>   27017:27017/TCP   8m30s
<my-sharded-cluster>-mongos-1-svc-external   LoadBalancer   10.102.27.116   <lb-ip-or-fqdn>   27017:27017/TCP   8m30s
Depending on your cluster configuration or cloud provider, the IP address of the LoadBalancer service is an externally accessible IP address or FQDN. You can use the IP address or FQDN to route traffic from your external domain. This example has two mongos instances; therefore, the Kubernetes Operator creates two external services.
Test the connection to the sharded cluster.
To connect to your deployment from outside of the Kubernetes cluster, use the MongoDB Shell (mongosh) and specify the addresses for the mongos instances that you've exposed through the external domain.
Example
If you have external FQDNs of <my-sharded-cluster>-mongos-0-svc-external.<external-domain> and <my-sharded-cluster>-mongos-1-svc-external.<external-domain>, you can connect to this sharded cluster from outside of the Kubernetes cluster by using the following command:
mongosh "mongodb://<my-sharded-cluster>-mongos-0-svc-external.<external-domain>,<my-sharded-cluster>-mongos-1-svc-external.<external-domain>"