Aggregation Pipeline: Applying Benford's Law to COVID-19 Data

John Page, Maxime Beugnet • 16 min read • Published Jan 07, 2022 • Updated Jan 26, 2023
MongoDB • Aggregation Framework

Introduction

In this blog post, I will show you how I built an aggregation pipeline to apply Benford's law to the COVID-19 data set that we have made available in the following cluster:
mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19
If you want to know more about this cluster and how we transformed the CSV files from Johns Hopkins University's repository into clean MongoDB documents, check out this blog post.
Finally, based on this pipeline, I was able to produce a dashboard in MongoDB Charts. For example, here is one chart that applies Benford's law to the worldwide daily cases of COVID-19:
Disclaimer: This article focuses on the aggregation pipeline and the stages I used to produce the data behind these charts, not on the results themselves, which can be interpreted in many different ways. One of the many issues here is the lack of data: the pandemic didn't start at the same time in every country, so many countries don't have enough data to make the percentages accurate. But feel free to interpret these results the way you want...

Prerequisites

This blog post assumes that you already know the main principles of the aggregation pipeline and that you are familiar with the most common stages.
If you want to follow along, feel free to use the cluster mentioned above or take a copy using mongodump or mongoexport, but the main takeaway from this blog post is the techniques I used to produce the output I wanted.
Also, I can't recommend enough that you use the aggregation pipeline builder in MongoDB Atlas or Compass to build your pipelines and play with the ones you will see in this blog post.
All the code is available in this repository.
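If you want to poke around before building anything, you can connect to the read-only cluster with mongosh (assuming you have it installed) and check that the collection is there. This is just a quick sanity check, not part of the article's pipeline:

mongosh "mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19"
// once connected, in the mongosh prompt:
show collections
db.countries_summary.countDocuments()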

What is Benford's Law?

Before we go any further, let me tell you a bit more about Benford's law. What does Wikipedia say?
Benford's law [...] is an observation about the frequency distribution of leading digits in many real-life sets of numerical data. The law states that in many naturally occurring collections of numbers, the leading digit is likely to be small. In sets that obey the law, the number 1 appears as the leading significant digit about 30% of the time, while 9 appears as the leading significant digit less than 5% of the time. If the digits were distributed uniformly, they would each occur about 11.1% of the time. Benford's law also makes predictions about the distribution of second digits, third digits, digit combinations, and so on.
Here is the frequency distribution of the first digits that we can expect for a data set that respects Benford's law:
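These expected percentages come directly from the formula P(d) = log10(1 + 1/d). If you want to compute them yourself, this short snippet (plain JavaScript, runnable in mongosh) prints the whole distribution:

// Expected frequency of each leading digit according to Benford's law: P(d) = log10(1 + 1/d)
for (let d = 1; d <= 9; d++) {
  const pct = 100 * Math.log10(1 + 1 / d);
  print(`digit ${d}: ${pct.toFixed(1)}%`); // 1 -> 30.1%, 2 -> 17.6%, ..., 9 -> 4.6%
}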
A little further down in Wikipedia's article, in the "Applications" section, you can also read the following:
Accounting fraud detection
In 1972, Hal Varian suggested that the law could be used to detect possible fraud in lists of socio-economic data submitted in support of public planning decisions. Based on the plausible assumption that people who fabricate figures tend to distribute their digits fairly uniformly, a simple comparison of first-digit frequency distribution from the data with the expected distribution according to Benford's law ought to show up any anomalous results.
Simply put, if your data set's distribution follows Benford's law, then it's theoretically possible to detect fraudulent data when a particular subset of the data doesn't follow the law.
In our situation, based on the observation of the first chart above, it looks like the worldwide daily confirmed cases of COVID-19 are following Benford's law. But is it true for each country?
If I want to answer this question (I don't), I will have to build a relatively complex aggregation pipeline (I do 😄).

The Data Set

I will only focus on a single collection in this blog post: covid19.countries_summary.
As its name suggests, it's a collection that I built (also using an aggregation pipeline) that contains a daily document for each country in the data set.
Here is an example:
{
  _id: ObjectId("608b24d4e7a11f5710a66b05"),
  uids: [ 504 ],
  confirmed: 19645,
  deaths: 305,
  country: 'Morocco',
  date: 2020-07-25T00:00:00.000Z,
  country_iso2s: [ 'MA' ],
  country_iso3s: [ 'MAR' ],
  country_codes: [ 504 ],
  combined_names: [ 'Morocco' ],
  population: 36910558,
  recovered: 16282,
  confirmed_daily: 811,
  deaths_daily: 6,
  recovered_daily: 182
}
As you can see, for each day and country, I have daily counts of the COVID-19 confirmed cases and deaths.
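If you want to pull up this exact document yourself (assuming the date is stored as a real BSON date, as the output above suggests), a simple findOne does it:

// Retrieve the daily summary for Morocco on July 25, 2020
db.countries_summary.findOne({ country: "Morocco", date: ISODate("2020-07-25") })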

The Aggregation Pipeline

Let's apply Benford's law on these two series of numbers.

The Final Documents

Before we start applying stages (transformations) to our documents, let's define the shape of the final documents, which needs to be easy to plot in MongoDB Charts.
It's a simple step that clearly defines where we start (the document in the previous section) and where we are going:
{
  country: 'US',
  confirmed_size: 435,
  deaths_size: 424,
  benford: [
    { digit: 1, confirmed: 22.3, deaths: 36.1 },
    { digit: 2, confirmed: 21.1, deaths: 14.4 },
    { digit: 3, confirmed: 11.5, deaths: 10.6 },
    { digit: 4, confirmed: 11.7, deaths: 8 },
    { digit: 5, confirmed: 11, deaths: 5 },
    { digit: 6, confirmed: 11.7, deaths: 4.7 },
    { digit: 7, confirmed: 6.7, deaths: 6.8 },
    { digit: 8, confirmed: 2.3, deaths: 6.4 },
    { digit: 9, confirmed: 1.6, deaths: 8 }
  ]
}
Setting the final objective keeps us focused on the target throughout our successive transformations.

The Pipeline in English

Now that we have a starting and an ending point, let's try to write our pipeline in English first:
  1. Regroup all the first digits of each count into an array for the confirmed cases and into another one for the deaths for each country.
  2. Clean the arrays (remove zeros and negative numbers—see note below).
  3. Calculate the size of these arrays.
  4. Remove countries with empty arrays (countries without cases or deaths).
  5. Calculate the percentages of 1s, 2s, ..., 9s in each array.
  6. Add a fake country "BenfordTheory" with the theoretical values of 1s, 2s, etc. we are supposed to find.
  7. Final projection to get the document in the final shape I want.
Note: The daily fields that I provide in this collection covid19.countries_summary are computed from the cumulative counts that Johns Hopkins University (JHU) provides. Simply put: today's count, for each country, is today's cumulative count minus yesterday's cumulative count. In theory, I should see zeros (no deaths or no cases that day), but never negative numbers. But sometimes, JHU applies corrections to the counts without applying them retroactively (as these counts were official counts at some point in time, I guess). So, negative values exist, and I chose to ignore them in this pipeline.
Now that we have a plan, let's execute it. Each of the points in the above list is an aggregation pipeline stage, and now we "just" have to translate them.

Stage 1: Arrays of Leading Digits

First, I need to be able to extract the first character of $confirmed_daily, which is an integer.
MongoDB provides a $substr operator which we can use if we first transform this integer into a string. That's easy to do with the $toString operator.
1{ "$substr": [ { "$toString": "$confirmed_daily" }, 0, 1 ] }
Then, I apply this transformation to each document, group the documents by country ($group), and collect the results into an array using $push.
Here is the first stage:
{
  "$group": {
    "_id": "$country",
    "confirmed": {
      "$push": {
        "$substr": [
          {
            "$toString": "$confirmed_daily"
          },
          0,
          1
        ]
      }
    },
    "deaths": {
      "$push": {
        "$substr": [
          {
            "$toString": "$deaths_daily"
          },
          0,
          1
        ]
      }
    }
  }
}
Here is the shape of my documents at this point if I apply this transformation:
{
  _id: 'Japan',
  confirmed: [ '1', '3', '7', [...], '7', '5' ],
  deaths: [ '7', '6', '0', [...], '-' , '2' ]
}

Stage 2: Clean the Arrays

As mentioned above, my arrays might contain zeros and dashes ("-", the leading character of a negative number). I decided to ignore these for my little mathematical experimentation.
If I now translate "clean the arrays" into something more "computer-friendly," what I actually want to do is "filter the arrays." We can leverage the $filter operator and overwrite our existing arrays with their filtered versions without zeros and dashes by using the $addFields stage.
{
  "$addFields": {
    "confirmed": {
      "$filter": {
        "input": "$confirmed",
        "as": "elem",
        "cond": {
          "$and": [
            {
              "$ne": [
                "$$elem",
                "0"
              ]
            },
            {
              "$ne": [
                "$$elem",
                "-"
              ]
            }
          ]
        }
      }
    },
    "deaths": { ... } // same as above with $deaths
  }
}
At this point, our documents in the pipeline have the same shape as previously.

Stage 3: Array Sizes

The final goal here is to calculate the percentages of 1s, 2s, ..., 9s in these two arrays, respectively. To compute this, I will need the size of the arrays to apply the rule of three.
This stage is easy as $size does exactly that.
{
  "$addFields": {
    "confirmed_size": {
      "$size": "$confirmed"
    },
    "deaths_size": {
      "$size": "$deaths"
    }
  }
}
To be completely honest, I could compute this on the fly later, when I actually need it. But I'll need it multiple times later on, and this stage is inexpensive and eases my mind so... Let's KISS.
Here is the shape of our documents at this point:
{
  _id: 'Japan',
  confirmed: [ '1', '3', '7', [...], '7', '5' ],
  deaths: [ '7', '6', '9', [...], '2' , '1' ],
  confirmed_size: 452,
  deaths_size: 398
}
As you can see for Japan, our arrays are relatively long, so we could expect our percentages to be somewhat accurate.
It's far from being true for all the countries...
{
  _id: 'Solomon Islands',
  confirmed: [
    '4', '1', '1', '3',
    '1', '1', '1', '2',
    '1', '5'
  ],
  deaths: [],
  confirmed_size: 10,
  deaths_size: 0
}
{
  _id: 'Fiji',
  confirmed: [
    '1', '1', '1', '2', '2', '1', '6', '2',
    '2', '1', '2', '1', '5', '5', '3', '1',
    '4', '1', '1', '1', '2', '1', '1', '1',
    '1', '2', '4', '1', '1', '3', '1', '4',
    '3', '2', '1', '4', '1', '1', '1', '5',
    '1', '4', '8', '1', '1', '2'
  ],
  deaths: [ '1', '1' ],
  confirmed_size: 46,
  deaths_size: 2
}

Stage 4: Eliminate Countries with Empty Arrays

I'm not good enough at math to decide which array size is large enough to be statistically significant, but I'm good enough to know that my rule of three will need to divide by the size of the array.
As dividing by zero is bad for health, I need to remove empty arrays. A sound statistician would probably also remove the small arrays... but not me 😅.
This stage is a trivial $match:
{
  "$match": {
    "confirmed_size": {
      "$gt": 0
    },
    "deaths_size": {
      "$gt": 0
    }
  }
}

Stage 5: Percentages of Digits

We are finally at the central stage of our pipeline. I need to apply a rule of three to calculate the percentage of 1s in an array:
  • Find how many 1s are in the array.
  • Multiply by 100.
  • Divide by the size of the array.
  • Round the final percentage to one decimal place. (I don't need more precision for my charts.)
Then, I need to repeat this operation for each digit and each array.
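As a quick sanity check of the arithmetic above, take the Luxembourg document we will see later: its confirmed array has 342 entries and the rounded result for digit 1 is 36.3, which corresponds to 124 ones (the exact count is inferred from the rounded output, so treat it as illustrative). In plain JavaScript, the rule of three looks like this:

// Rule of three: count * 100 / size, rounded to one decimal place
const onesCount = 124;       // number of '1's in the confirmed array (illustrative)
const confirmedSize = 342;   // confirmed_size computed in Stage 3
const pct = Math.round((onesCount * 100 / confirmedSize) * 10) / 10;
// pct === 36.3, the value reported for Luxembourg's digit 1 later in this post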
To find how many times a digit appears in the array, I can reuse techniques we learned earlier:
{
  "$size": {
    "$filter": {
      "input": "$confirmed",
      "as": "elem",
      "cond": {
        "$eq": [
          "$$elem",
          "1"
        ]
      }
    }
  }
}
I'm creating a new array which contains only the 1s with $filter and I calculate its size with $size.
Now I can $multiply this value (let's name it X) by 100, $divide by the size of the confirmed array, and $round the final result to one decimal.
{
  "$round": [
    {
      "$divide": [
        { "$multiply": [ 100, X ] },
        "$confirmed_size"
      ]
    },
    1
  ]
}
As a reminder, here is the final document we want:
{
  country: 'US',
  confirmed_size: 435,
  deaths_size: 424,
  benford: [
    { digit: 1, confirmed: 22.3, deaths: 36.1 },
    { digit: 2, confirmed: 21.1, deaths: 14.4 },
    { digit: 3, confirmed: 11.5, deaths: 10.6 },
    { digit: 4, confirmed: 11.7, deaths: 8 },
    { digit: 5, confirmed: 11, deaths: 5 },
    { digit: 6, confirmed: 11.7, deaths: 4.7 },
    { digit: 7, confirmed: 6.7, deaths: 6.8 },
    { digit: 8, confirmed: 2.3, deaths: 6.4 },
    { digit: 9, confirmed: 1.6, deaths: 8 }
  ]
}
The value we just calculated above corresponds to the 22.3 that we have in this document.
At this point, we just need to repeat this operation nine times for each digit of the confirmed array and nine other times for the deaths array and assign the results accordingly in the new benford array of documents.
Here is what it looks like in the end:
{
  "$addFields": {
    "benford": [
      {
        "digit": 1,
        "confirmed": {
          "$round": [
            {
              "$divide": [
                {
                  "$multiply": [
                    100,
                    {
                      "$size": {
                        "$filter": {
                          "input": "$confirmed",
                          "as": "elem",
                          "cond": {
                            "$eq": [
                              "$$elem",
                              "1"
                            ]
                          }
                        }
                      }
                    }
                  ]
                },
                "$confirmed_size"
              ]
            },
            1
          ]
        },
        "deaths": {
          "$round": [
            {
              "$divide": [
                {
                  "$multiply": [
                    100,
                    {
                      "$size": {
                        "$filter": {
                          "input": "$deaths",
                          "as": "elem",
                          "cond": {
                            "$eq": [
                              "$$elem",
                              "1"
                            ]
                          }
                        }
                      }
                    }
                  ]
                },
                "$deaths_size"
              ]
            },
            1
          ]
        }
      },
      {"digit": 2...},
      {"digit": 3...},
      {"digit": 4...},
      {"digit": 5...},
      {"digit": 6...},
      {"digit": 7...},
      {"digit": 8...},
      {"digit": 9...}
    ]
  }
}
At this point in our pipeline, our documents look like this:
{
  _id: 'Luxembourg',
  confirmed: [
    '1', '5', '2', '1', '1', '4', '3', '1', '2', '5', '8', '4',
    '1', '4', '1', '1', '1', '2', '3', '1', '9', '5', '3', '2',
    '2', '2', '1', '7', '4', '1', '2', '5', '1', '2', '1', '8',
    '9', '6', '8', '1', '1', '3', '7', '8', '6', '6', '4', '2',
    '2', '1', '1', '1', '9', '5', '8', '2', '2', '6', '1', '6',
    '4', '8', '5', '4', '1', '2', '1', '3', '1', '4', '1', '1',
    '3', '3', '2', '1', '2', '2', '3', '2', '1', '1', '1', '3',
    '1', '7', '4', '5', '4', '1', '1', '1', '1', '1', '7', '9',
    '1', '4', '4', '8',
    ... 242 more items
  ],
  deaths: [
    '1', '1', '8', '9', '2', '3', '4', '1', '3', '5', '5', '1',
    '3', '4', '2', '5', '2', '7', '1', '1', '5', '1', '2', '2',
    '2', '9', '6', '1', '1', '2', '5', '3', '5', '1', '3', '3',
    '1', '3', '3', '4', '1', '1', '2', '4', '1', '2', '2', '1',
    '4', '4', '1', '3', '6', '5', '8', '1', '3', '2', '7', '1',
    '6', '8', '6', '3', '1', '2', '6', '4', '6', '8', '1', '1',
    '2', '3', '7', '1', '8', '2', '1', '6', '3', '3', '6', '2',
    '2', '2', '3', '3', '3', '2', '6', '3', '1', '3', '2', '1',
    '1', '4', '1', '1',
    ... 86 more items
  ],
  confirmed_size: 342,
  deaths_size: 186,
  benford: [
    { digit: 1, confirmed: 36.3, deaths: 32.8 },
    { digit: 2, confirmed: 16.4, deaths: 19.9 },
    { digit: 3, confirmed: 9.1, deaths: 14.5 },
    { digit: 4, confirmed: 8.8, deaths: 7.5 },
    { digit: 5, confirmed: 6.4, deaths: 6.5 },
    { digit: 6, confirmed: 9.6, deaths: 8.6 },
    { digit: 7, confirmed: 5.8, deaths: 3.8 },
    { digit: 8, confirmed: 5, deaths: 4.8 },
    { digit: 9, confirmed: 2.6, deaths: 1.6 }
  ]
}
Note: At this point, we don't need the arrays anymore. The target document is almost there.

Stage 6: Introduce Fake Country BenfordTheory

In my final charts, I also wanted to display Benford's theoretical values alongside the actual values from the different countries, to make it easy to spot which ones are potentially producing fake data (modulo statistical noise and many other reasons).
Just to give you an idea, it looks like, globally, all the countries are producing legit data but some arrays are small and produce "statistical accidents."
To be able to insert this "perfect" document, I need to introduce in my pipeline a fake and perfect country that has the perfect percentages. I decided to name it "BenfordTheory."
But (because there is always a "but"), as far as I know, there is no stage that can just let me insert a new document like this into my pipeline.
So close...
Luckily for me, I found a workaround to this problem with the new (since MongoDB 4.4) $unionWith stage. All I have to do is insert my made-up document into a collection, and I can then "insert" all the documents from that collection into my pipeline at this stage.
I inserted my fake document into a new collection named benford. Note that I made this document look like the documents at this stage of my pipeline. I didn't bother to insert the two arrays because I'm about to discard them anyway.
{
  _id: 'BenfordTheory',
  benford: [
    { digit: 1, confirmed: 30.1, deaths: 30.1 },
    { digit: 2, confirmed: 17.6, deaths: 17.6 },
    { digit: 3, confirmed: 12.5, deaths: 12.5 },
    { digit: 4, confirmed: 9.7, deaths: 9.7 },
    { digit: 5, confirmed: 7.9, deaths: 7.9 },
    { digit: 6, confirmed: 6.7, deaths: 6.7 },
    { digit: 7, confirmed: 5.8, deaths: 5.8 },
    { digit: 8, confirmed: 5.1, deaths: 5.1 },
    { digit: 9, confirmed: 4.6, deaths: 4.6 }
  ],
  confirmed_size: 999999,
  deaths_size: 999999
}
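If you are working on your own copy of the data (remember, the shared cluster is read-only), you can re-create this helper document in one mongosh command; the percentages are just the theoretical values log10(1 + 1/d) rounded to one decimal, as computed earlier. Something like this sketch:

// Build the theoretical Benford document and insert it into the helper collection
db.benford.insertOne({
  _id: "BenfordTheory",
  benford: [1, 2, 3, 4, 5, 6, 7, 8, 9].map(d => ({
    digit: d,
    confirmed: Math.round(1000 * Math.log10(1 + 1 / d)) / 10,
    deaths: Math.round(1000 * Math.log10(1 + 1 / d)) / 10
  })),
  confirmed_size: 999999,
  deaths_size: 999999
})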
With this new collection in place, all I need to do is $unionWith it.
{
  "$unionWith": {
    "coll": "benford"
  }
}

Stage 7: Final Projection

At this point, our documents look almost like the target document we set at the beginning of this blog post. Two differences remain, though:
  • The country name is in the _id key, not in a country key.
  • The two arrays are still here.
We can fix this with a simple $project stage.
{
  "$project": {
    "country": "$_id",
    "_id": 0,
    "benford": 1,
    "confirmed_size": 1,
    "deaths_size": 1
  }
}
Note that I'm choosing the fields of the final document by inclusion here. _id is an exception and needs to be explicitly excluded. As the two arrays aren't explicitly included, they are excluded by default, like any other field would be. See considerations.
Here is our final result:
{
  confirmed_size: 409,
  deaths_size: 378,
  benford: [
    { digit: 1, confirmed: 32.8, deaths: 33.6 },
    { digit: 2, confirmed: 20.5, deaths: 13.8 },
    { digit: 3, confirmed: 15.9, deaths: 11.9 },
    { digit: 4, confirmed: 10.8, deaths: 11.6 },
    { digit: 5, confirmed: 5.9, deaths: 6.9 },
    { digit: 6, confirmed: 2.9, deaths: 7.7 },
    { digit: 7, confirmed: 4.4, deaths: 4.8 },
    { digit: 8, confirmed: 3.2, deaths: 5.6 },
    { digit: 9, confirmed: 3.7, deaths: 4.2 }
  ],
  country: 'Bulgaria'
}
And please remember that some documents still look like this in the pipeline because I didn't bother to filter them:
{
  confirmed_size: 2,
  deaths_size: 1,
  benford: [
    { digit: 1, confirmed: 0, deaths: 0 },
    { digit: 2, confirmed: 50, deaths: 100 },
    { digit: 3, confirmed: 0, deaths: 0 },
    { digit: 4, confirmed: 0, deaths: 0 },
    { digit: 5, confirmed: 0, deaths: 0 },
    { digit: 6, confirmed: 0, deaths: 0 },
    { digit: 7, confirmed: 50, deaths: 0 },
    { digit: 8, confirmed: 0, deaths: 0 },
    { digit: 9, confirmed: 0, deaths: 0 }
  ],
  country: 'MS Zaandam'
}

The Final Pipeline

My final pipeline is pretty long because I'm repeating the same block for each digit and each array, 9*2=18 times in total.
I wrote a factorised version in JavaScript that can be executed in mongosh:
use covid19;

let groupBy = {
  "$group": {
    "_id": "$country",
    "confirmed": {
      "$push": {
        "$substr": [{
          "$toString": "$confirmed_daily"
        }, 0, 1]
      }
    },
    "deaths": {
      "$push": {
        "$substr": [{
          "$toString": "$deaths_daily"
        }, 0, 1]
      }
    }
  }
};

let createConfirmedAndDeathsArrays = {
  "$addFields": {
    "confirmed": {
      "$filter": {
        "input": "$confirmed",
        "as": "elem",
        "cond": {
          "$and": [{
            "$ne": ["$$elem", "0"]
          }, {
            "$ne": ["$$elem", "-"]
          }]
        }
      }
    },
    "deaths": {
      "$filter": {
        "input": "$deaths",
        "as": "elem",
        "cond": {
          "$and": [{
            "$ne": ["$$elem", "0"]
          }, {
            "$ne": ["$$elem", "-"]
          }]
        }
      }
    }
  }
};

let addArraySizes = {
  "$addFields": {
    "confirmed_size": {
      "$size": "$confirmed"
    },
    "deaths_size": {
      "$size": "$deaths"
    }
  }
};

let removeCountriesWithoutConfirmedCasesAndDeaths = {
  "$match": {
    "confirmed_size": {
      "$gt": 0
    },
    "deaths_size": {
      "$gt": 0
    }
  }
};

function calculatePercentage(inputArray, digit, sizeArray) {
  return {
    "$round": [{
      "$divide": [{
        "$multiply": [100, {
          "$size": {
            "$filter": {
              "input": inputArray,
              "as": "elem",
              "cond": {
                "$eq": ["$$elem", digit]
              }
            }
          }
        }]
      }, sizeArray]
    }, 1]
  }
}

function calculatePercentageConfirmed(digit) {
  return calculatePercentage("$confirmed", digit, "$confirmed_size");
}

function calculatePercentageDeaths(digit) {
  return calculatePercentage("$deaths", digit, "$deaths_size");
}

let calculateBenfordPercentagesConfirmedAndDeaths = {
  "$addFields": {
    "benford": [{
      "digit": 1,
      "confirmed": calculatePercentageConfirmed("1"),
      "deaths": calculatePercentageDeaths("1")
    }, {
      "digit": 2,
      "confirmed": calculatePercentageConfirmed("2"),
      "deaths": calculatePercentageDeaths("2")
    }, {
      "digit": 3,
      "confirmed": calculatePercentageConfirmed("3"),
      "deaths": calculatePercentageDeaths("3")
    }, {
      "digit": 4,
      "confirmed": calculatePercentageConfirmed("4"),
      "deaths": calculatePercentageDeaths("4")
    }, {
      "digit": 5,
      "confirmed": calculatePercentageConfirmed("5"),
      "deaths": calculatePercentageDeaths("5")
    }, {
      "digit": 6,
      "confirmed": calculatePercentageConfirmed("6"),
      "deaths": calculatePercentageDeaths("6")
    }, {
      "digit": 7,
      "confirmed": calculatePercentageConfirmed("7"),
      "deaths": calculatePercentageDeaths("7")
    }, {
      "digit": 8,
      "confirmed": calculatePercentageConfirmed("8"),
      "deaths": calculatePercentageDeaths("8")
    }, {
      "digit": 9,
      "confirmed": calculatePercentageConfirmed("9"),
      "deaths": calculatePercentageDeaths("9")
    }]
  }
};

let unionBenfordTheoreticalValues = {
  "$unionWith": {
    "coll": "benford"
  }
};

let finalProjection = {
  "$project": {
    "country": "$_id",
    "_id": 0,
    "benford": 1,
    "confirmed_size": 1,
    "deaths_size": 1
  }
};

let pipeline = [groupBy,
  createConfirmedAndDeathsArrays,
  addArraySizes,
  removeCountriesWithoutConfirmedCasesAndDeaths,
  calculateBenfordPercentagesConfirmedAndDeaths,
  unionBenfordTheoreticalValues,
  finalProjection];

let cursor = db.countries_summary.aggregate(pipeline);

printjson(cursor.next());
If you want to read the entire pipeline, it's available in this GitHub repository.
If you want to see more visually how this pipeline works step by step, import it into MongoDB Compass once you are connected to the cluster (see the URI in the introduction), using the New Pipeline From Text option on the covid19.countries_summary collection.

An Even Better Pipeline?

Did you think that this pipeline I just presented was perfect?
Well well... It's definitely getting the job done, but we can make it better in many ways. I already mentioned in this blog post that we could remove Stage 3, for example, if we wanted to. It might not be as optimal, but it would be shorter.
Also, there is still Stage 5, in which I literally copy and paste the same piece of code 18 times... and Stage 6, where I have to use a workaround to insert a document in my pipeline.
Another solution could be to rewrite this pipeline with a $facet stage and execute two sub-pipelines in parallel to compute the results we want for the confirmed array and the deaths array. But this solution is actually about two times slower.
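If you want to compare two versions of the pipeline yourself, a crude but effective way is to time the full execution from mongosh (a rough benchmark; run it a few times to smooth out the noise):

// Rough timing of the whole pipeline, reusing the `pipeline` variable defined above
let start = Date.now();
db.countries_summary.aggregate(pipeline).toArray(); // exhaust the cursor so everything executes
print(`pipeline executed in ${Date.now() - start} ms`);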
However, my colleague John Page came up with this pipeline, which is simply better than mine: it applies more or less the same algorithm, but it doesn't repeat itself. The code is a lot cleaner, and I just love it, so I thought I would also share it with you.
John very smartly uses the $map operator to iterate over the nine digits, which makes the code a lot simpler to maintain.
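I won't reproduce John's pipeline here, but to give you the flavor, here is a minimal sketch (my own simplification, not his exact code) of how a single $map over the nine digits can replace the 18 copy-pasted blocks from Stage 5:

// Sketch: compute the benford array with one $map instead of 18 repeated blocks
let calculateBenfordWithMap = {
  "$addFields": {
    "benford": {
      "$map": {
        "input": ["1", "2", "3", "4", "5", "6", "7", "8", "9"],
        "as": "d",
        "in": {
          "digit": { "$toInt": "$$d" },
          "confirmed": {
            "$round": [{
              "$divide": [{
                "$multiply": [100, {
                  "$size": {
                    "$filter": {
                      "input": "$confirmed",
                      "as": "elem",
                      "cond": { "$eq": ["$$elem", "$$d"] }
                    }
                  }
                }]
              }, "$confirmed_size"]
            }, 1]
          },
          "deaths": {
            "$round": [{
              "$divide": [{
                "$multiply": [100, {
                  "$size": {
                    "$filter": {
                      "input": "$deaths",
                      "as": "elem",
                      "cond": { "$eq": ["$$elem", "$$d"] }
                    }
                  }
                }]
              }, "$deaths_size"]
            }, 1]
          }
        }
      }
    }
  }
};
// This single stage could stand in for calculateBenfordPercentagesConfirmedAndDeaths above.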

Wrap-Up

In this blog post, I tried my best to share with you the process of creating a relatively complex aggregation pipeline and a few tricks to transform your documents as efficiently as possible.
We talked about, and used in a real pipeline, the following aggregation stages and operators: $group, $addFields, $match, $unionWith, and $project, along with $push, $toString, $substr, $filter, $size, $multiply, $divide, $round, $eq, $ne, and $and.
If you are a statistician and you can make sense of these results, please post a message on the Community Forum and ping me!
Also, let me know if you can find out if some countries are clearly generating fake data.
If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
