Prerequisites
On this page
- Review Supported Hardware Architectures
- Clone the MongoDB Enterprise Kubernetes Operator Repository
- Set Environment Variables and GKE Zones
- Set up GKE Clusters
- Obtain User Authentication Credentials for Central and Member Clusters
- Install Go and Helm
- Understand Kubernetes Roles and Role Bindings
- Set the Deployment's Scope
- Plan for External Connectivity: Should You Use a Service Mesh?
- Check Connectivity Across Clusters
- Review the Requirements for Deploying Ops Manager
- Prepare for TLS-Encrypted Connections
- Choose GitOps or the kubectl MongoDB Plugin
- Install the kubectl MongoDB Plugin
- Configure Resources for GitOps
Before you create a multi-Kubernetes-cluster deployment using either the quick start or a deployment procedure, complete the following tasks:
Review Supported Hardware Architectures
Clone the MongoDB Enterprise Kubernetes Operator Repository
Clone the MongoDB Enterprise Kubernetes Operator repository:
git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git
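If you plan to use the helper scripts referenced later in these prerequisites (such as install_istio_separate_network and setup_tls), change into the cloned directory:
cd mongodb-enterprise-kubernetes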
Set Environment Variables and GKE Zones
Set the environment variables with cluster names and the available GKE zones where you deploy the clusters, as in this example:
export MDB_GKE_PROJECT={GKE project name}
export MDB_CENTRAL_CLUSTER="mdb-central"
export MDB_CENTRAL_CLUSTER_ZONE="us-west1-a"
export MDB_CLUSTER_1="mdb-1"
export MDB_CLUSTER_1_ZONE="us-west1-b"
export MDB_CLUSTER_2="mdb-2"
export MDB_CLUSTER_2_ZONE="us-east1-b"
export MDB_CLUSTER_3="mdb-3"
export MDB_CLUSTER_3_ZONE="us-central1-a"
export MDB_CENTRAL_CLUSTER_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CENTRAL_CLUSTER_ZONE}_${MDB_CENTRAL_CLUSTER}"
export MDB_CLUSTER_1_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_1_ZONE}_${MDB_CLUSTER_1}"
export MDB_CLUSTER_2_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_2_ZONE}_${MDB_CLUSTER_2}"
export MDB_CLUSTER_3_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_3_ZONE}_${MDB_CLUSTER_3}"
Set up GKE Clusters
Set up GKE (Google Kubernetes Engine) clusters:
Set up your Google Cloud account.
If you have not done so already, create a Google Cloud project, enable billing on the project, enable the Artifact Registry and GKE APIs, and launch Cloud Shell by following the relevant procedures in the Google Kubernetes Engine Quickstart in the Google Cloud documentation.
Create a central cluster and member clusters.
Create one central cluster and one or more member clusters, specifying the GKE zones, the number of nodes, and the instance types, as in these examples:
gcloud container clusters create $MDB_CENTRAL_CLUSTER \
  --zone=$MDB_CENTRAL_CLUSTER_ZONE \
  --num-nodes=5 \
  --machine-type "e2-standard-2"

gcloud container clusters create $MDB_CLUSTER_1 \
  --zone=$MDB_CLUSTER_1_ZONE \
  --num-nodes=5 \
  --machine-type "e2-standard-2"

gcloud container clusters create $MDB_CLUSTER_2 \
  --zone=$MDB_CLUSTER_2_ZONE \
  --num-nodes=5 \
  --machine-type "e2-standard-2"

gcloud container clusters create $MDB_CLUSTER_3 \
  --zone=$MDB_CLUSTER_3_ZONE \
  --num-nodes=5 \
  --machine-type "e2-standard-2"
Obtain User Authentication Credentials for Central and Member Clusters
Obtain user authentication credentials for the central and member Kubernetes clusters and save the credentials. You will later use these credentials for running kubectl commands on these clusters.
Run the following commands:
gcloud container clusters get-credentials $MDB_CENTRAL_CLUSTER \
  --zone=$MDB_CENTRAL_CLUSTER_ZONE

gcloud container clusters get-credentials $MDB_CLUSTER_1 \
  --zone=$MDB_CLUSTER_1_ZONE

gcloud container clusters get-credentials $MDB_CLUSTER_2 \
  --zone=$MDB_CLUSTER_2_ZONE

gcloud container clusters get-credentials $MDB_CLUSTER_3 \
  --zone=$MDB_CLUSTER_3_ZONE
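Optionally, verify that a kubeconfig context now exists for each cluster. The full cluster names that you exported earlier (for example, the value of MDB_CENTRAL_CLUSTER_FULL_NAME) appear in the output:

kubectl config get-contexts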
Install Go and Helm
Install the following tools:

Go

Helm
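To confirm that both tools are available on your PATH, you can check their versions:

go version
helm version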
Understand Kubernetes Roles and Role Bindings
To use a multi-Kubernetes-cluster deployment, you must have a specific set of Kubernetes Roles, ClusterRoles, RoleBindings, ClusterRoleBindings, and ServiceAccounts, which you can configure in any of the following ways:
Follow the Multi-Kubernetes-Cluster Quick Start, which tells you how to use the MongoDB Plugin to automatically create the required objects and apply them to the appropriate clusters within your multi-Kubernetes-cluster deployment.
Use Helm to configure the required Kubernetes Roles and service accounts for each member cluster:
helm template --show-only \
  templates/database-roles.yaml \
  mongodb/enterprise-operator \
  --set namespace=mongodb | \
kubectl apply -f - \
  --context=$MDB_CLUSTER_1_FULL_NAME \
  --namespace mongodb

helm template --show-only \
  templates/database-roles.yaml \
  mongodb/enterprise-operator \
  --set namespace=mongodb | \
kubectl apply -f - \
  --context=$MDB_CLUSTER_2_FULL_NAME \
  --namespace mongodb

helm template --show-only \
  templates/database-roles.yaml \
  mongodb/enterprise-operator \
  --set namespace=mongodb | \
kubectl apply -f - \
  --context=$MDB_CLUSTER_3_FULL_NAME \
  --namespace mongodb

Manually create Kubernetes object .yaml files and add the required Kubernetes Roles and service accounts to your multi-Kubernetes-cluster deployment with the kubectl apply command. This may be necessary for certain highly automated workflows. MongoDB provides sample configuration files.

For namespace-scoped resources:
Roles, Role Bindings, and Service Accounts for your Central Cluster
Roles, Role Bindings, and Service Accounts for your Member Clusters
For cluster-scoped resources:
ClusterRoles, ClusterRoleBindings, and ServiceAccounts for your Central Cluster
ClusterRoles, ClusterRoleBindings, and ServiceAccounts for your Member Clusters
Each file defines multiple resources. To support your deployment, you must replace the placeholder values in the following fields (a brief example follows this list):

subjects.namespace in each RoleBinding or ClusterRoleBinding resource

metadata.namespace in each ServiceAccount resource
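For illustration only, the following sketch shows a ServiceAccount and a RoleBinding from one of the sample files with the namespace placeholders set to a hypothetical mongodb namespace; the resource names are illustrative and your sample files may differ:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongodb-enterprise-database-pods        # illustrative name
  namespace: mongodb                            # metadata.namespace in each ServiceAccount resource
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mongodb-enterprise-database-pods-role-binding   # illustrative name
  namespace: mongodb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: mongodb-enterprise-database-pods-role
subjects:
  - kind: ServiceAccount
    name: mongodb-enterprise-database-pods
    namespace: mongodb                          # subjects.namespace in each RoleBinding or ClusterRoleBinding resource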
After modifying the definitions, apply them by running the following command for each file:
kubectl apply -f <fileName>
Set the Deployment's Scope
By default, the multi-cluster Kubernetes Operator is scoped to the namespace in which you install it. The Kubernetes Operator reconciles the MongoDBMultiCluster resource deployed in the same namespace as the Kubernetes Operator.
When you run the MongoDB kubectl plugin as part of the multi-cluster quick start, and don't modify the kubectl mongodb plugin settings, the plugin:

Creates a default ConfigMap named mongodb-enterprise-operator-member-list that contains all the member clusters of the multi-Kubernetes-cluster deployment. This name is hard-coded and you can't change it. See Known Issues.

Creates service accounts, Roles, and RoleBindings in the central cluster and each member cluster.

Applies the correct permissions for service accounts.

Uses the preceding settings to create your multi-Kubernetes-cluster deployment.
Once the Kubernetes Operator creates the multi-Kubernetes-cluster deployment, it starts watching MongoDB resources in the mongodb namespace.
To configure the Kubernetes Operator with the correct permissions to deploy in multiple or all namespaces, run the following command and specify the namespaces that you would like the Kubernetes Operator to watch.
kubectl mongodb multicluster setup \
  --central-cluster="${MDB_CENTRAL_CLUSTER_FULL_NAME}" \
  --member-clusters="${MDB_CLUSTER_1_FULL_NAME},${MDB_CLUSTER_2_FULL_NAME},${MDB_CLUSTER_3_FULL_NAME}" \
  --member-cluster-namespace="mongodb2" \
  --central-cluster-namespace="mongodb2" \
  --cluster-scoped="true"
When you install the multi-Kubernetes-cluster deployment to multiple or all namespaces, you can configure the Kubernetes Operator to:
Watch Resources in Multiple Namespaces
If you set the scope for the multi-Kubernetes-cluster deployment to many namespaces, you can configure the Kubernetes Operator to watch MongoDB resources in these namespaces in the multi-Kubernetes-cluster deployment.

Set the spec.template.spec.containers.name.env.name:WATCH_NAMESPACE in the mongodb-enterprise.yaml file from the MongoDB Enterprise Kubernetes Operator GitHub Repository to the comma-separated list of namespaces that you would like the Kubernetes Operator to watch:
WATCH_NAMESPACE: "$namespace1,$namespace2,$namespace3"
Run the following command and replace the values in the last line with the namespaces that you would like the Kubernetes Operator to watch.
helm upgrade \
  --install \
  mongodb-enterprise-operator-multi-cluster \
  mongodb/enterprise-operator \
  --namespace mongodb \
  --set namespace=mongodb \
  --version <mongodb-kubernetes-operator-version> \
  --set operator.name=mongodb-enterprise-operator-multi-cluster \
  --set operator.createOperatorServiceAccount=false \
  --set "multiCluster.clusters={$MDB_CLUSTER_1_FULL_NAME,$MDB_CLUSTER_2_FULL_NAME,$MDB_CLUSTER_3_FULL_NAME}" \
  --set operator.watchNamespace="$namespace1,$namespace2,$namespace3"
Watch Resources in All Namespaces
If you set the scope for the multi-Kubernetes-cluster deployment to all namespaces instead of the default mongodb namespace, you can configure the Kubernetes Operator to watch MongoDB resources in all namespaces in the multi-Kubernetes-cluster deployment.
Set the spec.template.spec.containers.name.env.name:WATCH_NAMESPACE in mongodb-enterprise.yaml to "*". You must include the double quotation marks (") around the asterisk (*) in the YAML file.
WATCH_NAMESPACE: "*"
Run the following command:
helm upgrade \
  --install \
  mongodb-enterprise-operator-multi-cluster \
  mongodb/enterprise-operator \
  --namespace mongodb \
  --set namespace=mongodb \
  --version <mongodb-kubernetes-operator-version> \
  --set operator.name=mongodb-enterprise-operator-multi-cluster \
  --set operator.createOperatorServiceAccount=false \
  --set "multiCluster.clusters={$MDB_CLUSTER_1_FULL_NAME,$MDB_CLUSTER_2_FULL_NAME,$MDB_CLUSTER_3_FULL_NAME}" \
  --set operator.watchNamespace="*"
Plan for External Connectivity: Should You Use a Service Mesh?
A service mesh enables inter-cluster communication between the replica set members deployed in different Kubernetes clusters. Using a service mesh greatly simplifies creating multi-Kubernetes-cluster deployments and is the recommended way of deploying MongoDB across multiple Kubernetes clusters. However, if your IT organization doesn't use a service mesh, you can deploy a replica set in a multi-Kubernetes-cluster deployment without it.
Depending on your environment, do the following:

If you can use a service mesh, install Istio.

If you can't use a service mesh, enable external connectivity through external domains and DNS zones, as described later on this page.
How Does the Kubernetes Operator Establish Connectivity?
Regardless of the deployment type, a MongoDB deployment in Kubernetes must establish the following connections:
From the Ops Manager Automation Agent in the Pod to its mongod process, to enable MongoDB deployment's lifecycle management and monitoring.

From the Ops Manager Automation Agent in the Pod to the Ops Manager instance, to enable automation.

Between all mongod processes, to allow replication.
When the Kubernetes Operator deploys the MongoDB resources, it treats these connectivity requirements in the following ways, depending on the type of deployment:
In a single Kubernetes cluster deployment, the Kubernetes Operator configures hostnames in the replica set as FQDNs of a Headless Service. This is a single service that resolves each Pod's FQDN to the direct IP address of the Pod hosting a MongoDB instance, as follows: <pod-name>.<replica-set-name>-svc.<namespace>.svc.cluster.local.

In a multi-Kubernetes-cluster deployment that uses a service mesh, the Kubernetes Operator creates a separate StatefulSet for each MongoDB replica set member in the Kubernetes cluster. A service mesh allows communication between mongod processes across distinct Kubernetes clusters.

Using a service mesh allows the multi-Kubernetes-cluster deployment to:

Achieve global DNS hostname resolution across Kubernetes clusters and establish connectivity between them. For each MongoDB deployment Pod in each Kubernetes cluster, the Kubernetes Operator creates a ClusterIP service through the spec.duplicateServiceObjects: true configuration in the MongoDBMultiCluster resource. Each process has a hostname defined to this service's FQDN: <pod-name>-svc.<namespace>.svc.cluster.local. These hostnames resolve from DNS to a service's ClusterIP in each member cluster.

Establish communication between Pods in different Kubernetes clusters. As a result, replica set members hosted on different clusters form a single replica set across these clusters.

In a multi-Kubernetes-cluster deployment without a service mesh, the Kubernetes Operator uses the MongoDBMultiCluster resource settings described in Enable External Connectivity through External Domains and DNS Zones to expose all its mongod processes externally. This enables DNS resolution of hostnames between distinct Kubernetes clusters, and establishes connectivity between Pods routed through the networks that connect these clusters.
Optional: Install Istio
Install Istio in multi-primary mode on different networks by following the Istio documentation. Istio is a service mesh that simplifies DNS resolution and helps establish inter-cluster communication between the member Kubernetes clusters in a multi-Kubernetes-cluster deployment. If you choose to use a service mesh, you must install it. If you can't use a service mesh, skip this section; instead, use external domains and configure DNS to enable external connectivity.
In addition, we offer the install_istio_separate_network example script. This script is based on Istio documentation and provides an example installation that uses the multi-primary mode on different networks. We don't guarantee the script's maintenance with future Istio releases. If you choose to use the script, review the latest Istio documentation for installing a multicluster, and, if necessary, adjust the script to match the documentation and your deployment. If you use another service mesh solution, create your own script for configuring separate networks to facilitate DNS resolution.
Enable External Connectivity through External Domains and DNS Zones
If you don't use a service mesh, do the following to enable external connectivity to and between mongod processes and the Ops Manager Automation Agent:
When you create a multi-Kubernetes-cluster deployment, use the spec.clusterSpecList.externalAccess.externalDomain setting to specify an external domain and instruct the Kubernetes Operator to configure hostnames for mongod processes in the following pattern: <pod-name>.<externalDomain>

Note

You can specify external domains only for new deployments. You can't change external domains after you configure a multi-Kubernetes-cluster deployment.

After you configure an external domain in this way, the Ops Manager Automation Agents and mongod processes use this domain to connect to each other.

Customize external services that the Kubernetes Operator creates for each Pod in the Kubernetes cluster. Use the global configuration in the spec.externalAccess settings and Kubernetes cluster-specific overrides in the spec.clusterSpecList.externalAccess.externalService settings.

Configure Pod hostnames in a DNS zone to ensure that each Kubernetes Pod hosting a mongod process allows establishing an external connection to the other mongod processes in a multi-Kubernetes-cluster deployment. A Pod is considered "exposed externally" when you can connect to a mongod process by using the <pod-name>.<externalDomain> hostname on ports 27017 (the default database port) and 27018 (the database port + 1). You may also need to configure firewall rules to allow TCP traffic on ports 27017 and 27018.

A sketch of these settings in a MongoDBMultiCluster resource follows this list.
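For orientation only, the following is a minimal sketch of how these settings might appear in a MongoDBMultiCluster resource. The resource name, member cluster names, member counts, and domains are hypothetical, and required fields such as credentials and Ops Manager project references are omitted:

# Hypothetical excerpt showing only the external connectivity settings discussed above.
apiVersion: mongodb.com/v1
kind: MongoDBMultiCluster
metadata:
  name: my-replica-set
  namespace: mongodb
spec:
  externalAccess: {}                                 # optional global defaults (spec.externalAccess)
  clusterSpecList:
    - clusterName: gke_my-project_us-west1-b_mdb-1   # hypothetical member cluster name
      members: 2
      externalAccess:
        externalDomain: cluster-0.example.com        # hostnames become <pod-name>.cluster-0.example.com
    - clusterName: gke_my-project_us-east1-b_mdb-2
      members: 2
      externalAccess:
        externalDomain: cluster-1.example.com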
After you complete these prerequisites, you can create a multi-Kubernetes-cluster deployment without a service mesh.
Check Connectivity Across Clusters
Follow the steps in this procedure to verify that service FQDNs are reachable across Kubernetes clusters.
In this example, you deploy a sample application defined in sample-service.yaml across two Kubernetes clusters.
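The contents of sample-service.yaml aren't reproduced here. As a rough sketch under assumptions, it defines a small HTTP application and a Service listening on port 5000, named helloworld1 in CLUSTER_1 and helloworld2 in CLUSTER_2 to match the curl commands below; the image and labels are illustrative only:

# Hypothetical sketch of the CLUSTER_1 portion of sample-service.yaml; CLUSTER_2 is analogous with helloworld2.
apiVersion: v1
kind: Service
metadata:
  name: helloworld1
  namespace: sample
spec:
  selector:
    app: helloworld1
  ports:
    - port: 5000
      targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld1
  namespace: sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld1
  template:
    metadata:
      labels:
        app: helloworld1
    spec:
      containers:
        - name: helloworld
          image: docker.io/istio/examples-helloworld-v1   # illustrative image
          ports:
            - containerPort: 5000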
Create a namespace in each cluster.
Create a namespace in each of the Kubernetes clusters to deploy the sample-service.yaml.
kubectl create --context="${CTX_CLUSTER_1}" namespace sample
kubectl create --context="${CTX_CLUSTER_2}" namespace sample
Note
In certain service mesh solutions, you might need to annotate or label the namespace.
Verify CLUSTER_1 can connect to CLUSTER_2.

Deploy the Pod in CLUSTER_1 and check that you can reach the sample application in CLUSTER_2.
kubectl run --context="${CTX_CLUSTER_1}" \
  -n sample \
  curl --image=radial/busyboxplus:curl \
  -i --tty -- \
  curl -sS helloworld2.sample:5000/hello
You should see output similar to this example:
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Verify CLUSTER_2 can connect to CLUSTER_1.

Deploy the Pod in CLUSTER_2 and check that you can reach the sample application in CLUSTER_1.
kubectl run --context="${CTX_CLUSTER_2}" \
  -n sample \
  curl --image=radial/busyboxplus:curl \
  -i --tty -- \
  curl -sS helloworld1.sample:5000/hello
You should see output similar to this example:
Hello version: v1, instance: helloworld-v1-758dd55874-6x4t8
Review the Requirements for Deploying Ops Manager
As part of the Quick Start, you deploy an Ops Manager resource on the central cluster. To learn more, see Deploy an Ops Manager Resource on the Central Cluster and Connect to Ops Manager.
Prepare for TLS-Encrypted Connections
If you plan to secure your multi-Kubernetes-cluster deployment using TLS encryption, complete the following tasks to enable internal cluster authentication and generate TLS certificates for member clusters and the MongoDB Agent:
Note
You must possess the CA certificate and the key that you used to sign your TLS certificates.
Generate a TLS certificate for Kubernetes services.
Use one of the following options:
Generate a wildcard TLS certificate that covers hostnames of the services that the Kubernetes Operator creates for each Pod in the deployment.
If you generate wildcard certificates, you can continue using the same certificates when you scale up or rebalance nodes in the Kubernetes member clusters, for example for disaster recovery.
For example, add a hostname similar to the following format to the SAN:
*.<namespace>.svc.cluster.local

For each Kubernetes service that the Kubernetes Operator generates corresponding to each Pod in each member cluster, add SANs to the certificate. In your TLS certificate, the SAN for each Kubernetes service must use the following format:

<metadata.name>-<member_cluster_index>-<n>-svc.<namespace>.svc.cluster.local

where n ranges from 0 to clusterSpecList[member_cluster_index].members - 1. For a concrete illustration, see the sketch that follows.
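As an illustration only, for a hypothetical resource named my-replica-set with two member clusters of two members each in the mongodb namespace, a cert-manager Certificate requesting such SANs might look like the following; using cert-manager and the issuer name shown are assumptions, not requirements:

# Hypothetical cert-manager Certificate covering the per-Pod service hostnames.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-replica-set-cert
  namespace: mongodb
spec:
  secretName: my-replica-set-cert          # Secret that will hold the resulting TLS key pair
  issuerRef:
    name: my-ca-issuer                     # assumed Issuer created beforehand
    kind: Issuer
  dnsNames:
    - my-replica-set-0-0-svc.mongodb.svc.cluster.local
    - my-replica-set-0-1-svc.mongodb.svc.cluster.local
    - my-replica-set-1-0-svc.mongodb.svc.cluster.local
    - my-replica-set-1-1-svc.mongodb.svc.cluster.local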
Generate one TLS certificate for your project's MongoDB Agents.
For the MongoDB Agent TLS certificate:
The Common Name in the TLS certificate must not be empty.
The combined Organization and Organizational Unit in each TLS certificate must differ from the Organization and Organizational Unit in the TLS certificate for your replica set members.
To speed up creating TLS certificates for member Kubernetes clusters, we offer the setup_tls script. We don't guarantee the script's maintenance. If you choose to use the script, test it and adjust it to your needs. The script does the following:
Creates the cert-manager namespace in the connected cluster and installs cert-manager using Helm in the cert-manager namespace.

Installs a local CA using mkcert.

Downloads TLS certificates from downloads.mongodb.com and concatenates them with the CA file, named ca-chain.

Creates a ConfigMap that includes the ca-chain files.

Creates an Issuer resource, which cert-manager uses to generate certificates.

Creates a Certificate resource, which cert-manager uses to create a key object for the certificates.
To use the script:
Install mkcert.

Install mkcert on the machine where you plan to run this script.
Run the setup_tls script.

curl https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/tools/multicluster/setup_tls.sh -o setup_tls.sh
The output includes:
A secret containing the CA, named ca-key-pair.

A secret containing the server certificates on the central cluster, named clustercert-${resource}-cert.

A ConfigMap containing the CA certificates, named issuer-ca.
Generate a TLS certificate for SAN hostnames.
Use one of the following options:
Generate a wildcard TLS certificate that contains all externalDomains that you created in the SAN. For example, add hostnames similar to the following format to the SAN:

*.cluster-0.example.com, *.cluster-1.example.com

If you generate wildcard certificates, you can continue using them when you scale up or rebalance nodes in the Kubernetes member clusters, for example for disaster recovery.

Generate a TLS certificate for each MongoDB replica set member hostname in the SAN. For example, add hostnames similar to the following to the SAN:

my-replica-set-0-0.cluster-0.example.com, my-replica-set-0-1.cluster-0.example.com, my-replica-set-1-0.cluster-1.example.com, my-replica-set-1-1.cluster-1.example.com

If you generate an individual TLS certificate that contains all the specific hostnames, you must create a new certificate each time you scale up or rebalance nodes in the Kubernetes member clusters, for example for disaster recovery.
Generate one TLS certificate for your project's MongoDB Agents.
For the MongoDB Agent TLS certificate:
The Common Name in the TLS certificate must not be empty.
The combined Organization and Organizational Unit in each TLS certificate must differ from the Organization and Organizational Unit in the TLS certificate for your replica set members.
Important
The Kubernetes Operator uses kubernetes.io/tls secrets to store TLS certificates and private keys for Ops Manager and MongoDB resources. Starting in Kubernetes Operator version 1.17.0, the Kubernetes Operator doesn't support concatenated PEM files stored as Opaque secrets.
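For example, a kubernetes.io/tls secret can be created from a certificate and key file as follows; the secret name, file names, and namespace are hypothetical:

# Hypothetical example: create a kubernetes.io/tls secret from a certificate and its private key.
kubectl create secret tls my-replica-set-cert \
  --cert=tls.crt \
  --key=tls.key \
  --namespace mongodb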
Choose GitOps or the kubectl MongoDB Plugin
You can choose to create and maintain the resource files needed for deploying MongoDBMultiCluster resources in a GitOps environment.

If you use a GitOps workflow, you can't use the kubectl mongodb plugin, which automatically configures role-based access control (RBAC) and creates the kubeconfig file that allows the central cluster to communicate with its member clusters. Instead, you must manually configure the RBAC and kubeconfig files, or build your own automation to do so, based on the procedure and examples in Configure Resources for GitOps.
The following prerequisite sections describe how to install the kubectl MongoDB plugin if you don't use GitOps or configure resources for GitOps if you do.
Install the kubectl MongoDB Plugin
Use the kubectl mongodb plugin to set up your multi-Kubernetes-cluster deployment and, if needed, to recover it.
Note
If you use GitOps, you can't use the kubectl mongodb plugin. Instead, follow the procedure in Configure Resources for GitOps.
To install the kubectl mongodb plugin:
Download your desired Kubernetes Operator package version.
Download your desired Kubernetes Operator package version from the Release Page of the MongoDB Enterprise Kubernetes Operator Repository.
The package's name uses this pattern: kubectl-mongodb-multicluster_{{ .Version }}_{{ .Os }}_{{ .Arch }}.tar.gz.
Use one of the following packages:
kubectl-mongodb-multicluster_{{ .Version }}_darwin_amd64.tar.gz
kubectl-mongodb-multicluster_{{ .Version }}_darwin_arm64.tar.gz
kubectl-mongodb-multicluster_{{ .Version }}_linux_amd64.tar.gz
kubectl-mongodb-multicluster_{{ .Version }}_linux_arm64.tar.gz
Locate the kubectl mongodb plugin binary and copy it to its desired destination.

Find the kubectl-mongodb binary in the unpacked directory and move it to its desired destination, inside the PATH for the Kubernetes Operator user, as shown in the following example:
mv kubectl-mongodb /usr/local/bin/kubectl-mongodb
Now you can run the kubectl mongodb plugin using the following commands:

kubectl mongodb multicluster setup
kubectl mongodb multicluster recover
To learn more about the supported flags, see the MongoDB kubectl plugin Reference.
Configure Resources for GitOps
If you use a GitOps workflow, you won't be able to use the kubectl mongodb plugin to automatically configure role-based access control (RBAC) or the kubeconfig file that allows the central cluster to communicate with its member clusters. Instead, you must manually configure and apply the following resource files or build your own automation based on the information below.
Note
To learn how the kubectl mongodb plugin automates the following steps, view the code in GitHub.

To configure RBAC and the kubeconfig for GitOps:
Create and apply RBAC resources to each cluster.
Use these RBAC resource examples to create your own. To learn more about these RBAC resources, see Understand Kubernetes Roles and Role Bindings.
To apply them to your central and member clusters with GitOps, you can use a tool like Argo CD.
Create and apply the ConfigMap file.
The Kubernetes Operator keeps track of its member clusters using a ConfigMap file. Copy, modify, and apply the following example ConfigMap:
apiVersion: v1
kind: ConfigMap
data:
  cluster1: ""
  cluster2: ""
metadata:
  namespace: <namespace>
  name: mongodb-enterprise-operator-member-list
  labels:
    multi-cluster: "true"
Configure the kubeconfig secret for the Kubernetes Operator.

The Kubernetes Operator, which runs in the central cluster, communicates with the Pods in the member clusters through the Kubernetes API. For this to work, the Kubernetes Operator needs a kubeconfig file that contains the service account tokens of the member clusters. Create this kubeconfig file by following these steps:
Obtain a list of service accounts configured in the Kubernetes Operator's namespace. For example, if you chose to use the default mongodb namespace, then you can obtain the service accounts using the following command:

kubectl get serviceaccounts -n mongodb

Get the secret for each service account that belongs to a member cluster.

kubectl get secret <service-account-name> -n mongodb -o yaml

In each service account secret, copy the CA certificate and token. For example, copy <ca_certificate> and <token> from the secret, as shown in the following example:

apiVersion: v1
kind: Secret
metadata:
  name: my-service-account
  namespace: mongodb
data:
  ca.crt: <ca_certificate>
  token: <token>

Copy the following kubeconfig example for the central cluster and replace the placeholders with the <ca_certificate> and <token> you copied from the service account secrets.

apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: <cluster-1-ca.crt>
      server: https://:
    name: kind-e2e-cluster-1
  - cluster:
      certificate-authority-data: <cluster-2-ca.crt>
      server: https://:
    name: kind-e2e-cluster-2
contexts:
  - context:
      cluster: kind-e2e-cluster-1
      namespace: mongodb
      user: kind-e2e-cluster-1
    name: kind-e2e-cluster-1
  - context:
      cluster: kind-e2e-cluster-2
      namespace: mongodb
      user: kind-e2e-cluster-2
    name: kind-e2e-cluster-2
kind: Config
users:
  - name: kind-e2e-cluster-1
    user:
      token: <cluster-1-token>
  - name: kind-e2e-cluster-2
    user:
      token: <cluster-2-token>

Save the kubeconfig file.

Create a secret in the central cluster that you mount in the Kubernetes Operator as illustrated in the reference Helm chart. For example:
kubectl --context="${CTX_CENTRAL_CLUSTER}" -n <operator-namespace> create secret generic --from-file=kubeconfig=<path-to-kubeconfig-file> <kubeconfig-secret-name>