In my opinion, you are mixing too much application logic and database access to get an accurate measurement.

You are reading files, so your performance might be limited by the media you are reading from.

I recommend completely separating your file-reading logic and bucket creation from the inserts/updates done to the database; otherwise you will have difficulty determining what is slow. Once all your buckets are created, then and only then do the database operations (see the sketch below).
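A minimal sketch of that two-phase split, assuming Python with pymongo, CSV lines of `sensor_id,iso_timestamp,value`, and one bucket per (sensor, day); the file format, field names, and collection are placeholders for whatever you actually have:

```python
# Phase 1 builds all buckets in memory, phase 2 does only database work,
# so each phase can be timed on its own.
import time
from collections import defaultdict
from datetime import datetime
from pymongo import MongoClient

def build_buckets(paths):
    """Phase 1: pure file reading and bucket creation, no database calls."""
    buckets = defaultdict(list)
    for path in paths:
        with open(path) as f:
            for line in f:
                sensor_id, ts_str, value = line.strip().split(",")
                ts = datetime.fromisoformat(ts_str)
                buckets[(sensor_id, ts.date())].append({"ts": ts, "value": float(value)})
    return buckets

def insert_buckets(buckets, uri="mongodb://localhost:27017"):
    """Phase 2: database work only, one document per bucket."""
    coll = MongoClient(uri).test.measurements
    docs = [{"sensor_id": sensor, "day": str(day), "samples": samples}
            for (sensor, day), samples in buckets.items()]
    coll.insert_many(docs, ordered=False)

if __name__ == "__main__":
    t0 = time.perf_counter()
    buckets = build_buckets(["data.csv"])   # placeholder file list
    t1 = time.perf_counter()
    insert_buckets(buckets)
    t2 = time.perf_counter()
    print(f"read/bucket: {t1 - t0:.2f}s  insert: {t2 - t1:.2f}s")
```

Timing the two phases separately tells you immediately whether the bottleneck is the file/bucket side or the database side.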

But this brings another issue: all these buckets need to fit in RAM, otherwise you may introduce another factor (swapping) that will negatively affect your performance.

Another issue to consider is resource contention between your client and the database, so it is better to have a dedicated database server with enough resources for your use case. You should also make sure you can isolate the traffic.

Since you are simulating time series data in an IoT-like scenario, I think @Pavel_Duchovny's idea of splitting the load across 11 clients is the best approach, because in real life your database access will come from many sources. With a single client you are serializing everything, which is not realistic (a sketch of that fan-out follows).
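A rough sketch of that fan-out, assuming Python's multiprocessing with one process (and one connection) per simulated source; the client count of 11 just mirrors the suggestion above, and the generated documents are stand-ins for each client's real bucket data:

```python
# Run several independent clients in parallel, each with its own
# connection, instead of serializing all inserts through one client.
from multiprocessing import Pool
from pymongo import MongoClient

NUM_CLIENTS = 11
URI = "mongodb://localhost:27017"

def run_client(client_id):
    # MongoClient is not fork-safe, so each worker opens its own
    # connection after the process has started.
    coll = MongoClient(URI).test.measurements
    docs = [{"client": client_id, "seq": i} for i in range(10_000)]  # placeholder data
    coll.insert_many(docs, ordered=False)
    return client_id

if __name__ == "__main__":
    with Pool(NUM_CLIENTS) as pool:
        # Inserts from the 11 clients now overlap, like independent IoT sources.
        pool.map(run_client, range(NUM_CLIENTS))
```

Separate processes (rather than threads in one process) also keep the clients from contending with each other for the Python interpreter, which makes the simulation closer to genuinely independent devices.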