Outgrew MongoDB … now what?

Posted by samsmith on Server Fault
Published on 2011-11-27T21:21:18Z

We dump debug and transaction logs into mongodb.

We really like mongodb because:

- Blazing insert performance
- Document-oriented storage
- Ability to let the engine drop inserts when needed for performance

But there is one big problem with mongodb: the indexes must fit in physical RAM. In practice, this limits us to 80-150gb of raw data (we currently run on a system with 16gb RAM).

Sooooo, for us to hold 500gb or a tb of data, we would need 50gb to 80gb of RAM.
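The extrapolation above can be sketched as a back-of-envelope calculation. This is only a rough linear scaling from the figures mentioned earlier (16gb of RAM supporting up to ~150gb of raw data); the real ratio depends on how much of the data ends up indexed, so treat the numbers as an assumed illustration, not a measurement:

```python
def ram_needed_gb(data_gb, observed_data_gb=150.0, observed_ram_gb=16.0):
    """Linearly scale the observed RAM-to-data ratio to a target data size.

    Assumes index size grows proportionally with raw data, which is a
    simplification -- actual index size depends on which fields are indexed.
    """
    return data_gb * observed_ram_gb / observed_data_gb

# Rough estimates for the sizes discussed above:
for target in (500, 1000):
    print(f"{target}gb of data -> ~{round(ram_needed_gb(target))}gb of RAM")
```

With those assumed inputs this lands in the same ballpark as the 50-80gb range quoted above, which is the whole problem: the RAM bill grows with the data.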

Yes, I know this is possible. We can add servers and use mongo sharding. We can buy a special server box that takes 100 or 200 gb of RAM. But this is the tail wagging the dog! We would be spending beaucoup $$$ on hardware to run FOSS, when SQL Server Express can handle WAY more data on WAY less hardware than Mongo (SQL Server does not meet our architectural desires, or we would use it!). We are not going to spend huge $ on hardware here, because it is necessary only because of the Mongo architecture, not because of the inherent processing/storage needs. (And sharding? Please! Cost aside, who needs the ongoing complexity of three, five, or more servers to manage a relatively small load?)

Bottom line: MongoDB is FOSS, but we gotta spend $$$$$$$ on hardware to run it? We would rather buy commercial SW!

I am sure we are not the first to hit this issue, so we ask the community:

Where do we go next?

(We already run Mongo v2)

Thanks!!

© Server Fault or respective owner
