Supported Clients
Index Limitations
If you create a MongoDB Search index that has, or will soon have, more than 2.1 billion index objects, you must use numPartitions or shard your cluster. For this limit, each top-level document and each nested embeddedDocument in the collection's indexed fields counts as one object.
By default, MongoDB Search stops replicating changes for a single index that grows larger than 2.1 billion index objects on any given replica set member or shard. This means your index remains queryable, but you might get stale results.
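The following is a minimal sketch of creating a partitioned search index with PyMongo, assuming PyMongo 4.5 or later, a deployment that supports index partitions, and that numPartitions is set inside the index definition; the connection string, database, collection, index name, and partition count are placeholders:

```python
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

# Placeholder connection string, database, and collection names.
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>/")
collection = client["mydb"]["articles"]

# Index definition that splits the search index into partitions so it can
# grow past the 2.1 billion index object limit on a single member.
index_model = SearchIndexModel(
    definition={
        "mappings": {"dynamic": True},
        "numPartitions": 4,  # illustrative value; use a count your deployment supports
    },
    name="default",
)

collection.create_search_index(model=index_model)
```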
If your collection contains documents that are 16MB or larger, MongoDB Search fails to index those documents, causing your index to become STALE, and initiates a full index rebuild. You must delete the offending document(s) from your collection for the index rebuild to complete successfully. This issue can also occur when update operations on large documents cause the change stream event to exceed the 16MB BSON limit. To avoid this, we recommend that no single document in your collection exceed 8MB; one way to find oversized documents is sketched after this list. Consider the following best practices:
Structure your documents to minimize the size of sub-documents or arrays.
Avoid operations that update or replace large fields, sub-documents, or arrays.
To learn more, see Change Streams Production Recommendations and Reduce the Size of Large Documents.
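As a rough way to locate documents larger than 8MB before they cause indexing problems, you could run an aggregation with the $bsonSize operator (available in MongoDB 4.4 and later). This sketch assumes PyMongo and placeholder connection, database, and collection names:

```python
from pymongo import MongoClient

# Placeholder connection string and namespace.
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>/")
collection = client["mydb"]["articles"]

EIGHT_MB = 8 * 1024 * 1024

# List the _id and BSON size of every document larger than 8MB so it can be
# restructured or removed before it causes indexing problems.
cursor = collection.aggregate([
    {"$match": {"$expr": {"$gt": [{"$bsonSize": "$$ROOT"}, EIGHT_MB]}}},
    {"$project": {"_id": 1, "bsonSize": {"$bsonSize": "$$ROOT"}}},
])

for doc in cursor:
    print(doc["_id"], doc["bsonSize"])
```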
To learn more about index limitations, see:
Field Type Limitations
To learn about field type limitations, see:
Other Limitations
If you're using a clustered collection and have the notablescan parameter set to true, your MongoDB Search indexes may not finish building. To resolve this issue, set the notablescan parameter to false, and check your logs for index status transitions to confirm that the build completes.
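A minimal sketch of checking and disabling notablescan with PyMongo, assuming a self-managed deployment where you have privileges to run setParameter; the connection string is a placeholder:

```python
from pymongo import MongoClient

# Placeholder connection string; changing server parameters requires
# appropriate privileges.
client = MongoClient("mongodb://localhost:27017/")

# Check the current value of the notablescan server parameter.
current = client.admin.command({"getParameter": 1, "notablescan": 1})
print("notablescan:", current["notablescan"])

# Disable notablescan so MongoDB Search index builds can complete.
client.admin.command({"setParameter": 1, "notablescan": False})
```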
Query Limitations
Operator Limitations
To learn about query operator limitations, see:
Option Compatibility and Limitations
To learn about query option compatibility and limitations, see: