
Comparing to PyMongo

On this page

  • Reading Data
  • Writing Data
  • Benchmarks

In this guide, you can learn about the differences between PyMongoArrow and the PyMongo driver. It assumes familiarity with basic PyMongo and MongoDB concepts.
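
The examples that follow assume a running deployment and a sample collection like the one in this sketch; the connection string and sample data are placeholders to adapt to your own deployment:

import pyarrow
from pymongo import MongoClient

# Assumes a local mongod on the default port; adjust for your deployment.
client = MongoClient()
db = client.test
coll = db.benchmark

# Sample documents matching the x: int64, y: double fields shown in the
# outputs below.
coll.insert_many([{"x": i, "y": float(i)} for i in range(1000)])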

Reading Data

The most basic way to read data using PyMongo is:

coll = db.benchmark
# Load every matching document into memory, excluding the _id field
f = list(coll.find({}, projection={"_id": 0}))
table = pyarrow.Table.from_pylist(f)

This works, but you have to exclude the _id field; otherwise, you get the following error:

pyarrow.lib.ArrowInvalid: Could not convert ObjectId('642f2f4720d92a85355671b3') with type ObjectId: did not recognize Python value type when inferring an Arrow data type

The following code example shows a workaround for the preceding error when using PyMongo:

>>> f = list(coll.find({}))
>>> for doc in f:
...     doc["_id"] = str(doc["_id"])
...
>>> table = pyarrow.Table.from_pylist(f)
>>> print(table)
pyarrow.Table
_id: string
x: int64
y: double

Even though this avoids the error, a drawback is that Arrow can't identify that _id is an ObjectId, as the schema shows _id as a string.

PyMongoArrow supports BSON types through Arrow and Pandas extension types, which allows you to avoid the preceding workaround:

>>> import pyarrow
>>> from pymongoarrow.api import find_arrow_all, Schema
>>> from pymongoarrow.types import ObjectIdType
>>> schema = Schema({"_id": ObjectIdType(), "x": pyarrow.int64(), "y": pyarrow.float64()})
>>> table = find_arrow_all(coll, {}, schema=schema)
>>> print(table)
pyarrow.Table
_id: extension<arrow.py_extension_type<ObjectIdType>>
x: int64
y: double

With this method, Arrow correctly identifies the type. Non-numeric extension types have limited use, but they avoid unnecessary casting for certain operations, such as sorting datetimes:

# Assumes documents in coll contain a "time" field
f = list(coll.find({}, projection={"_id": 0, "x": 0}))
naive_table = pyarrow.Table.from_pylist(f)

schema = Schema({"time": pyarrow.timestamp("ms")})
table = find_arrow_all(coll, {}, schema=schema)

assert (
    table.sort_by([("time", "ascending")])["time"]
    == naive_table["time"].cast(pyarrow.timestamp("ms")).sort()
)

Additionally, PyMongoArrow supports Pandas extension types. With PyMongo, a Decimal128 value behaves as follows:

import pandas as pd
from bson import Decimal128

coll = client.test.test
coll.insert_many([{"value": Decimal128(str(i))} for i in range(200)])
cursor = coll.find({})
df = pd.DataFrame(list(cursor))
print(df.dtypes)
# _id       object
# value     object

The equivalent in PyMongoArrow is:

from bson import Decimal128
from pymongoarrow.api import find_pandas_all

coll = client.test.test
coll.insert_many([{"value": Decimal128(str(i))} for i in range(200)])
df = find_pandas_all(coll, {})
print(df.dtypes)
# _id       bson_PandasObjectId
# value     bson_PandasDecimal128

In both cases, the underlying values retain their BSON class types:

print(df["value"][0])
Decimal128("0")

Writing Data

Writing data from an Arrow table using PyMongo looks like the following:

# Converts the entire table to a list of dictionaries in memory
data = arrow_table.to_pylist()
db.collname.insert_many(data)

The equivalent in PyMongoArrow is:

from pymongoarrow.api import write
write(db.collname, arrow_table)

As of PyMongoArrow 1.0, the main advantage of the write function is that it iterates over the Arrow table, data frame, or numpy array, rather than converting the entire object to a list in memory.
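
For example, because write also accepts a Pandas DataFrame, you can pass one directly instead of converting it to a list of dictionaries first. The following is a minimal sketch that assumes a local deployment like the one in the setup example:

import numpy as np
import pandas as pd
from pymongo import MongoClient
from pymongoarrow.api import write

db = MongoClient().test

# write iterates over the frame's rows rather than materializing a
# full list of dictionaries in memory.
df = pd.DataFrame({"x": np.arange(1000), "y": np.random.rand(1000)})
write(db.collname, df)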

Benchmarks

The following measurements were taken with PyMongoArrow version 1.0 and PyMongo version 4.4. For insertions, the library performs about the same as conventional PyMongo and uses the same amount of memory:

Benchmark                                        Result
ProfileInsertSmall.peakmem_insert_conventional   107M
ProfileInsertSmall.peakmem_insert_arrow          108M
ProfileInsertSmall.time_insert_conventional      202±0.8ms
ProfileInsertSmall.time_insert_arrow             181±0.4ms
ProfileInsertLarge.peakmem_insert_arrow          127M
ProfileInsertLarge.peakmem_insert_conventional   125M
ProfileInsertLarge.time_insert_arrow             425±1ms
ProfileInsertLarge.time_insert_conventional      440±1ms

For reads, the library is slower for small documents and nested documents, but faster for large documents. It uses less memory in all cases:

Benchmark                                        Result
ProfileReadSmall.peakmem_conventional_arrow      85.8M
ProfileReadSmall.peakmem_to_arrow                83.1M
ProfileReadSmall.time_conventional_arrow         38.1±0.3ms
ProfileReadSmall.time_to_arrow                   60.8±0.3ms
ProfileReadLarge.peakmem_conventional_arrow      138M
ProfileReadLarge.peakmem_to_arrow                106M
ProfileReadLarge.time_conventional_ndarray       243±20ms
ProfileReadLarge.time_to_arrow                   186±0.8ms
ProfileReadDocument.peakmem_conventional_arrow   209M
ProfileReadDocument.peakmem_to_arrow             152M
ProfileReadDocument.time_conventional_arrow      865±7ms
ProfileReadDocument.time_to_arrow                937±1ms
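
To get a rough sense of how the two read paths compare on your own data, you can time them directly. The following sketch is not the benchmark suite used above; it assumes the sample benchmark collection from the setup example and relies on find_arrow_all passing extra keyword arguments, such as projection, through to the underlying find operation:

import timeit

import pyarrow
from pymongo import MongoClient
from pymongoarrow.api import find_arrow_all

coll = MongoClient().test.benchmark

def read_conventional():
    # Materialize every document, then build the table in Python.
    docs = list(coll.find({}, projection={"_id": 0}))
    return pyarrow.Table.from_pylist(docs)

def read_arrow():
    # Let PyMongoArrow build the table directly; the schema is inferred.
    return find_arrow_all(coll, {}, projection={"_id": 0})

print("conventional:", timeit.timeit(read_conventional, number=10))
print("pymongoarrow:", timeit.timeit(read_arrow, number=10))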
