Hi @malay_tiwari ,
Can you describe your setup in more detail — where is your Spark cluster hosted (local / cloud / a managed platform), and how are you using MongoDB (Community, EA, Atlas)?
Is the batch mode working okay?
How is data flowing into MongoDB? Has any data been updated recently? Streaming will only emit records in that case, as described in the documentation (https://www.mongodb.com/docs/spark-connector/current/streaming-mode/streaming-read/): "The connector reads from your MongoDB deployment’s change stream. To generate change events on the change stream, perform update operations on your database."