MongoDB v5 restarts

Hi,

We are running MongoDB v5 (container mode) and are constantly getting container restarts. Do you have an idea what the problem could be?

Hi @km1414, do you have the logs from MongoDB? Maybe not enough CPU and RAM?

Adding more CPU fixes that, but it’s strange that minimal resources are not enough to run an empty DB, isn’t it?

It actually seems pretty low for Mongo v5, I guess. You can check this documentation from Mongo to dig deeper.

Hi @bchastanier,

It seems Mongo likes to use an amount of memory proportional to the memory available (see here):

Starting in MongoDB 3.4, the default WiredTiger internal cache size is the larger of either:

  • 50% of (RAM - 1 GB), or
  • 256 MB.

According to their Docker image documentation, the memory allocation will be calculated based on host memory and not the container limit:

By default Mongo will set the wiredTigerCacheSizeGB to a value proportional to the host’s total memory regardless of memory limits you may have imposed on the container.
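To make that concrete, here is what that default works out to on a 64GB host (a quick back-of-the-envelope check, not taken from a live instance):

# Default cache = max(50% of (RAM - 1GB), 256MB); for a 64GB host:
$ awk 'BEGIN { print 0.5 * (64 - 1) " GB" }'
31.5 GB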

We have nodes with 64GB memory and we don’t want our PE Mongo containers using up to 31GB of memory. It is possible to limit this with the following CLI option:

$ docker run --name some-mongo -d mongo --wiredTigerCacheSizeGB 1.5

However, we can’t customize the Docker run command when deploying a Mongo container database via Qovery. I think Qovery should set the --wiredTigerCacheSizeGB parameter based on the memory allocated to the container.
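To illustrate the idea with plain Docker (just a sketch; the 512m / 0.25 figures are made-up examples, not something Qovery supports today):

# Cap the container memory and pass a matching cache size, so mongod sizes
# itself from the pod limit instead of host RAM. 0.25 is the smallest value
# --wiredTigerCacheSizeGB accepts.
$ docker run --name some-mongo -d --memory 512m mongo --wiredTigerCacheSizeGB 0.25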

Hey @prki,

It feels weird; IMO Mongo will use the pod memory but won’t be able to leak into the node’s memory, so if you set the pod to 2GB of RAM, even if your nodes have 64GB of RAM, your Mongo pod shouldn’t assume it has 32GB of RAM.

Did you observe this behavior? If so, can you point me to tests / results? I’m curious about this one.

Cf. MongoDB 4.4.3 ignoring `wiredTigerCacheSizeGB` limit in Kubernetes - Database Administrators Stack Exchange

Thanks

It feels weird; IMO Mongo will use the pod memory but won’t be able to leak into the node’s memory

It’s not really leaking. What I think is happening is that Mongo sets itself a very high memory target/limit based on the memory available on the host, eventually exceeds the pod’s memory limit, and gets killed by the OOM killer. If it had proper configuration, it would not allow its caches to go above a certain threshold and would limit itself to work within the pod allocation.
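For what it’s worth (an assumption about the mechanism, but easy to verify): a cgroup memory limit does not change what the kernel reports as total memory inside the container, so anything that sizes itself from total system RAM still sees the node’s 64GB:

# Even with a hard 2GB limit, /proc/meminfo inside the container still
# reports the host's total RAM, which is presumably what host-based sizing reads.
$ docker run --rm --memory 2g mongo:5 grep MemTotal /proc/meminfo
# -> shows the node's ~64GB, not the 2GB limit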

Did you observe this behavior? If so, can you point me to tests / results? I’m curious about this one.

We experience OOM issues but we can’t really point to an exact cause. There’s no way for us to customize the run command or config files and try different settings, so I thought I would share it here for Qovery to investigate. I think you should tune the Mongo configuration to make it work with the resources specified in the Qovery UI.
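One thing that could be checked without touching the run command (assuming shell access to the instance) is the cache ceiling mongod actually configured for itself:

# serverStatus reports the cache ceiling WiredTiger is working with; if this
# is tens of GB on a 512MB pod, host-based sizing is the culprit.
$ mongosh --quiet --eval 'db.serverStatus().wiredTiger.cache["maximum bytes configured"]'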

We can investigate and it’s interesting, but reading their docs I am not sure this setting will solve your issue: the memory you set via Qovery is the memory that is propagated to the pod and hence to Mongo. OOM will happen if the pod somehow goes over this limit. Given that memory grows with, for example, the number of connections, it might crash eventually.
I think if Mongo thought it had 32GB whereas you set 2GB for the pod, it wouldn’t even start.
I will do some tests on that front and let you know.
Out of curiosity, how many concurrent connections do you have? How much memory have you set?

Given that memory grows with, for example, the number of connections, it might crash eventually.

Right, there are additional things consuming memory besides the WiredTiger cache. However, if we lock the cache at 50% of the pod memory, we should still have plenty left for other things. If I recall correctly, each connection only needs 1MB.
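A rough budget for a small pod (my own ballpark figures, assuming a 512MB pod, a 50% cache cap and ~50 connections):

# cache = 50% of the pod, connections at ~1MB each, the rest for everything else
$ awk 'BEGIN { pod=512; cache=0.5*pod; conns=50; print "cache=" cache "MB connections=" conns "MB left=" pod-cache-conns "MB" }'
cache=256MB connections=50MB left=206MB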

Out of curiosity, how many concurrent connections do you have? How much memory have you set?

We are trying to get away with a very low memory allocation since we are deploying a lot of PEs. Each PE only holds 10-20MB of data in that database, so it doesn’t make sense to allocate large amounts. We tried 256MB but Mongo didn’t even start (I guess because 256MB is the minimum WiredTiger cache size, but if we had the ability to customize it maybe we could set it lower); at 512MB it works, with occasional OOM kills.
