I'm stuck archiving a huge amount of data in MongoDB 3.6.
I want to delete 56 crore (560 million) records from a collection. I tried removing them with bulk.remove(), but that is slow too (only about 50 records are removed per second).
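For reference, this is roughly what my delete loop looks like (the collection name myColl, the date field createdAt, and the cutoff value are placeholders):

```
var cutoff = ISODate("2022-01-01T00:00:00Z");
while (true) {
  // Fetch a batch of _ids older than the cutoff.
  var ids = db.myColl.find({ createdAt: { $lt: cutoff } }, { _id: 1 })
                     .limit(1000)
                     .toArray();
  if (ids.length === 0) break;  // nothing left to delete

  // Queue one removeOne per _id and execute the batch.
  var bulk = db.myColl.initializeUnorderedBulkOp();
  ids.forEach(function (doc) {
    bulk.find({ _id: doc._id }).removeOne();
  });
  bulk.execute();
}
```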
But somewhere I read that a TTL index would remove the data faster: the TTL monitor scans on an interval (every 60 seconds by default, though I would set it to every hour) and deletes expired documents in the background.
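What I have in mind, if I understand TTL correctly (field and collection names are placeholders again):

```
// TTL index: documents become eligible for deletion expireAfterSeconds
// after the value in createdAt. With 0, anything whose createdAt is in
// the past is eligible as soon as the TTL monitor sees it.
db.myColl.createIndex({ createdAt: 1 }, { expireAfterSeconds: 0 });

// The monitor's wake-up interval can be tuned (default is 60 seconds):
db.adminCommand({ setParameter: 1, ttlMonitorSleepSecs: 3600 });
```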
But if I create this index in the foreground, it will lock the collection, so I'm thinking of using the rolling index build method, sketched below.
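The rolling procedure I have in mind for one node (restarting mongod itself happens outside the shell; database and collection names are placeholders):

```
// 1) Shut down node3 and restart it as a standalone (no --replSet),
//    ideally on a different port so applications don't connect to it.
// 2) Build the TTL index while it is a standalone:
db.getSiblingDB("mydb").myColl.createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 0 }
);
// 3) Restart node3 with its original --replSet settings, let it catch
//    up, repeat on the other secondary, and finally step down the
//    primary and do the same there.
```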
If I do it that way on a 3-node replica set, say I detach node3 and create the index there. Since node3 is then running as a standalone, the TTL monitor will automatically start removing data on it. Then, once I add node3 back to the replica set and eventually create the index on the primary, the primary will start deleting too and will replicate those deletes. In the worst case, some of those documents have already been removed on node3. Will that break replication?