Deploy a Cluster through the API
This tutorial uses the Ops Manager Administration API to manipulate the automation configuration and deploy a sharded cluster that is owned by another user. The tutorial first creates a new project, then a new user as owner of the project, and then a sharded cluster owned by the new user. You can create a script to automate these procedures for use in routine operations.
To perform these steps, you must have sufficient access to Ops Manager. A user with the Global Owner or Project Owner role has sufficient access.
The procedures install a cluster with two shards. Each shard comprises a three-member replica set. The tutorial installs one `mongos` and three config servers. Each component of the cluster resides on its own server, requiring a total of 10 hosts. The tutorial installs the MongoDB Agent on each host.
Prerequisites
Ops Manager must have an existing user. If you are deploying the sharded cluster on a fresh install of Ops Manager, you must register the first user.
You must have the URL of the Ops Manager host, as set in the `mmsBaseUrl` setting of the MongoDB Agent configuration file.
Provision ten hosts to serve the components of the sharded cluster. For host requirements, see the Production Notes in the MongoDB manual.
Each host must provide its MongoDB Agent with full networking access to the hostnames and ports of the MongoDB Agents on all the other hosts. Each agent runs `hostname -f` to self-identify its fully qualified hostname and reports that hostname, together with its port, to Ops Manager.
Tip
To ensure agents can reach each other, provision the hosts using Automation. This installs the MongoDB Agents with the correct network access. You can then use this tutorial to reinstall the MongoDB Agent on those machines.
Examples
As you work with the API, you can view examples on the GitHub example page.
Variables for Cluster Creation API Resources
The API resources use one or more of these variables. Replace these variables with your desired values before calling these API resources.
Name | Type | Description |
---|---|---|
PUBLIC-KEY | string | Your public API Key for your API credentials. |
PRIVATE-KEY | string | Your private API Key for your API credentials. |
&lt;OpsManagerHost&gt;:&lt;Port&gt; | string | URL of your Ops Manager instance. |
GROUP-ID | string | Unique identifier of your project from your project settings. |
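If you plan to script the calls in this tutorial, you might export these values once as shell variables and reuse them in each request. A minimal sketch; the values shown are placeholders, not real credentials:

```bash
# Placeholder values -- substitute your own credentials and hosts.
export PUBLIC_KEY="abcdef"
export PRIVATE_KEY="00000000-0000-0000-0000-000000000000"
export OPS_MANAGER_URL="https://opsmanager.example.com:8080"
export PROJECT_ID="5c8100bcf2a30b12ff88258f"

# Each curl call can then reference the same credentials, for example:
# curl --user "$PUBLIC_KEY:$PRIVATE_KEY" --digest "$OPS_MANAGER_URL/api/public/v1.0/groups"
```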
Prerequisites
Configure API Access to enable you to use the API.
Complete the MongoDB Agent Prerequisites.
Procedures
Create the Group and the User through the API
Use the API to create a project.
Use the Ops Manager Administration API to send a `projects` document to create the new project.
```bash
curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
  --header "Content-Type: application/json" \
  --request POST "https://<OpsManagerHost>:<Port>/api/public/v1.0/groups?pretty=true" \
  --data '{
    "name": "{GROUP-NAME}",
    "orgId": "{ORG-ID}"
  }'
```
The API returns a document that includes the project's `agentApiKey` and `id`.
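If you are scripting this step, one way to capture those two values is with `jq`. A minimal sketch, assuming `jq` is installed; the variable names are hypothetical:

```bash
# Create the project and capture the response (credentials as above).
response=$(curl -s --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
  --header "Content-Type: application/json" \
  --request POST "https://<OpsManagerHost>:<Port>/api/public/v1.0/groups" \
  --data '{ "name": "{GROUP-NAME}", "orgId": "{ORG-ID}" }')

# Extract the fields that later steps need.
PROJECT_ID=$(echo "$response" | jq -r '.id')
AGENT_API_KEY=$(echo "$response" | jq -r '.agentApiKey')
echo "Project id: $PROJECT_ID"
```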
Use the API to create a user in the new project.
Use the `/users` endpoint to add a user to the new project. The body of the request should contain a `users` JSON document with the user's information.

Set the user's `roles.roleName` to `GROUP_OWNER` and the user's `roles.groupId` to the new project's `id`.
```bash
curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
  --header "Content-Type: application/json" \
  --request POST "https://<OpsManagerHost>:<Port>/api/public/v1.0/users?pretty=true" \
  --data '{
    "username": "<new_user@example.com>",
    "emailAddress": "<new_user@example.com>",
    "firstName": "<First>",
    "lastName": "<Last>",
    "password": "<password>",
    "roles": [{
      "groupId": "{PROJECT-ID}",
      "roleName": "GROUP_OWNER"
    }]
  }'
```
(Optional) If you used a global owner user to create the project, you can remove that user from the project.
The user you use to create the project is automatically added to the project. If you used a user with the Global Owner role, you can remove the user from the project without losing the ability to make changes to the project in the future. As long as you have the project's `agentApiKey` and `id`, you have full access to the project when logged in as the global owner.
`GET` the global owner's ID. Issue the following command to request the project's users:
```bash
curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
  --request GET "https://<OpsManagerHost>:<Port>/api/public/v1.0/groups/{PROJECT-ID}/users?pretty=true"
```
The API returns a JSON document that lists all the project's users. Locate the user with `roles.roleName` set to `GLOBAL_OWNER`. Copy the user's `id` value, and issue the following command to remove the user from the project, replacing `{USER-ID}` with the user's `id` value:
```bash
curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
  --request DELETE "https://<OpsManagerHost>:<Port>/api/public/v1.0/groups/{PROJECT-ID}/users/{USER-ID}?pretty=true"
```
If Ops Manager removes the user successfully, the API returns the `HTTP 200 OK` status code.
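In a script, you can verify that status code directly; a sketch using curl's `--write-out` option:

```bash
# Print only the HTTP status code of the DELETE request; expect 200.
curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
  --request DELETE \
  --silent --output /dev/null --write-out "%{http_code}\n" \
  "https://<OpsManagerHost>:<Port>/api/public/v1.0/groups/{PROJECT-ID}/users/{USER-ID}"
```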
Install the MongoDB Agent on each Provisioned Host
Complete the MongoDB Agent installation procedure on each host.
To learn how to install the MongoDB Agent, follow the procedure for the appropriate platform.
Confirm the initial state of the automation configuration.
When the MongoDB Agent first runs, it downloads the `mms-cluster-config-backup.json` file, which describes the desired state of the automation configuration.
On one of the hosts, navigate to `/var/lib/mongodb-mms-automation/` and open `mms-cluster-config-backup.json`. Confirm that the file's `version` field is set to `1`. Ops Manager automatically increments this field as changes occur.
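One quick way to check the field from a shell, assuming `jq` is available on the host:

```bash
# Expect 1 on a fresh deployment; Ops Manager increments this value
# each time the automation configuration changes.
jq '.version' /var/lib/mongodb-mms-automation/mms-cluster-config-backup.json
```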
Deploy the New Cluster
To add or update a deployment, retrieve the configuration, make changes as needed, and send the updated configuration through the API to Ops Manager.
The following procedure deploys an updated automation configuration through the API:
Retrieve the automation configuration from Ops Manager.
Use the automationConfig resource to retrieve the configuration. Issue the following command, replacing the placeholders with the Variables for Cluster Creation API Resources.
```bash
curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
  --request GET "https://<OpsManagerHost>:<Port>/api/public/v1.0/groups/{PROJECT-ID}/automationConfig?pretty=true" \
  --output currentAutomationConfig.json
```

Validate the downloaded Automation Configuration file.
Compare the `version` field of the downloaded `currentAutomationConfig.json` with that of the Automation Configuration backup file, `mms-cluster-config-backup.json`. The `version` value is the last element in both JSON documents. You can find the backup file on any host running the MongoDB Agent at:

Linux and macOS: `/var/lib/mongodb-mms-automation/mms-cluster-config-backup.json`

Windows: `%SystemDrive%\MMSAutomation\versions\mms-cluster-config-backup.json`

If the `version` values match, you are working with the current version of the Automation Configuration file.
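A minimal comparison sketch, assuming `jq` and that both files are readable from the current shell (copy the backup file from an agent host first if needed):

```bash
# Print both version values; if they match, the download is current.
jq '.version' currentAutomationConfig.json
jq '.version' mms-cluster-config-backup.json
```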
Create the top level of the new automation configuration.
Create a document with the following fields. As you build the configuration document, refer to the description of an automation configuration for detailed explanations of the settings. For examples, see the MongoDB Labs page.
```json
{
  "options": {
    "downloadBase": "/var/lib/mongodb-mms-automation"
  },
  "mongoDbVersions": [],
  "monitoringVersions": [],
  "backupVersions": [],
  "processes": [],
  "replicaSets": [],
  "sharding": []
}
```
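If you are scripting the build, you might write this skeleton to a working file first. A sketch; the file name `newAutomationConfig.json` is arbitrary, and the later steps in this tutorial use `currentAutomationConfig.json`:

```bash
# Write the empty top-level structure to a working file.
cat > newAutomationConfig.json <<'EOF'
{
  "options": { "downloadBase": "/var/lib/mongodb-mms-automation" },
  "mongoDbVersions": [],
  "monitoringVersions": [],
  "backupVersions": [],
  "processes": [],
  "replicaSets": [],
  "sharding": []
}
EOF
```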
Add the Monitoring to the automation configuration.
In the `monitoringVersions.hostname` field, enter the hostname of the server where Ops Manager should install the Monitoring. Use the fully qualified domain name that running `hostname -f` on the server returns, as in the following:
1 "monitoringVersions": [ 2 { 3 "hostname": "<server_x.example.com>", 4 "logPath": "/var/log/mongodb-mms-automation/monitoring-agent.log", 5 "logRotate": { 6 "sizeThresholdMB": 1000, 7 "timeThresholdHrs": 24 8 } 9 } 10 ]
This configuration example also includes the `logPath` field, which specifies the log location, and `logRotate`, which specifies the log thresholds.
Add the servers to the automation configuration.
This sharded cluster has 10 MongoDB instances, as described in the overview of this tutorial, each running on its own server. Thus, the automation configuration's `processes` array will have 10 documents, one for each MongoDB instance.
The following example adds the first document to the `processes` array. Replace `<process_name_1>` with any name you choose, and replace `<server1.example.com>` with the FQDN of the host. Then add the nine remaining documents, one for each of the other MongoDB instances in your sharded cluster.
Specify the `args2_6` syntax for the `processes.<args>` field. The `processes.args2_6` object accepts most MongoDB settings and parameters for MongoDB versions 2.6 and later. To learn more, see MongoDB Settings and Automation Support.
1 "processes": [ 2 { 3 "version": "4.0.6", 4 "name": "<process_name_1>", 5 "hostname": "<server1.example.com>", 6 "logRotate": { 7 "sizeThresholdMB": 1000, 8 "timeThresholdHrs": 24 9 }, 10 "authSchemaVersion": 5, 11 "featureCompatibilityVersion": "4.0", 12 "processType": "mongod", 13 "args2_6": { 14 "net": { 15 "port": 27017 16 }, 17 "storage": { 18 "dbPath": "/data/" 19 }, 20 "systemLog": { 21 "path": "/data/mongodb.log", 22 "destination": "file" 23 }, 24 "replication": { 25 "replSetName": "rs1" 26 } 27 } 28 }, 29 ]
Add the sharded cluster topology to the automation configuration.
Add two replica set documents to the `replicaSets` array. Add three members to each document.
Example
This section adds one replica set member to the first replica set document:
Important
You must include `"protocolVersion": 1` in the root document for each replica set.
1 "replicaSets": [ 2 { 3 "_id": "rs1", 4 "members": [ 5 { 6 "_id": 0, 7 "host": "<process_name_1>", 8 "priority": 1, 9 "votes": 1, 10 "secondaryDelaySecs": 0, 11 "hidden": false, 12 "arbiterOnly": false 13 } 14 ], 15 "protocolVersion": 1 16 } 17 ]
In the `sharding` array, add the replica sets to the shards, and add the config server replica set name, as in the following:
1 "sharding": [ 2 { 3 "shards": [ 4 { 5 "tags": [], 6 "_id": "shard1", 7 "rs": "rs1" 8 }, 9 { 10 "tags": [], 11 "_id": "shard2", 12 "rs": "rs2" 13 } 14 ], 15 "name": "sharded_cluster_via_api", 16 "configServerReplica": "rs-config", 17 "collections": [] 18 } 19 ]
Send the updated automation configuration.
Use the automationConfig resource to send the updated automation configuration.
Issue the following command with the path to the updated configuration document, replacing the placeholders with the Variables for Cluster Creation API Resources.
```bash
curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
  --header "Content-Type: application/json" \
  --request PUT "https://<OpsManagerHost>:<Port>/api/public/v1.0/groups/{PROJECT-ID}/automationConfig?pretty=true" \
  --data @currentAutomationConfig.json
```
Upon successful update of the configuration, the API returns the `HTTP 200 OK` status code to indicate the request has succeeded.
Confirm successful update of the automation configuration.
Retrieve the automation configuration from Ops Manager and confirm it contains the changes. To retrieve the configuration, issue the following command, replacing the placeholders with the Variables for Cluster Creation API Resources.
```bash
curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
  --request GET "https://<OpsManagerHost>:<Port>/api/public/v1.0/groups/{PROJECT-ID}/automationConfig?pretty=true"
```
Verify that the configuration update is deployed.
Use the automationStatus resource to verify the configuration update is fully deployed. Issue the following command:
```bash
curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
  --request GET "https://<OpsManagerHost>:<Port>/api/public/v1.0/groups/{PROJECT-ID}/automationStatus?pretty=true"
```
The `curl` command returns a JSON object containing the `processes` array and the `goalVersion` key and value. The `processes` array contains a document for each server that hosts a MongoDB instance. The new configuration is successfully deployed when all `lastGoalVersionAchieved` fields in the `processes` array equal the value specified for `goalVersion`.
Example
In this response, `processes[2].lastGoalVersionAchieved` is behind `goalVersion`. This indicates that the MongoDB instance at `server3.example.com` is running one version behind the `goalVersion`. Wait several seconds and issue the `curl` command again.
```json
{
  "goalVersion": 2,
  "processes": [{
    "hostname": "server1.example.com",
    "lastGoalVersionAchieved": 2,
    "name": "ReplSet_0",
    "plan": []
  }, {
    "hostname": "server2.example.com",
    "lastGoalVersionAchieved": 2,
    "name": "ReplSet_1",
    "plan": []
  }, {
    "hostname": "server3.example.com",
    "lastGoalVersionAchieved": 1,
    "name": "ReplSet_2",
    "plan": []
  }]
}
```
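Rather than re-issuing the command by hand, you could poll until every process reaches the goal version. A minimal sketch, assuming `jq`:

```bash
# Poll the automationStatus endpoint until every process reports
# lastGoalVersionAchieved equal to goalVersion.
until curl -s --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
    "https://<OpsManagerHost>:<Port>/api/public/v1.0/groups/{PROJECT-ID}/automationStatus" |
    jq -e '.goalVersion as $g | all(.processes[]; .lastGoalVersionAchieved == $g)' > /dev/null
do
  echo "Waiting for the deployment to reach the goal version..."
  sleep 5
done
echo "Automation configuration fully deployed."
```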
To view the new configuration in the Ops Manager console, click Deployment.
Next Steps
To make an additional version of MongoDB available in the cluster, see Update the MongoDB Version of a Deployment.