How to find the reason for a Qovery warning?

We started seeing a Qovery warning next to one of our services. How can we find the reason for this warning? Service stability and resource usage look ok.

[Screenshot: the Qovery console showing the warning next to the service, 2023-07-12]

Hello @prki,

It seems your app has had a lot of pods evicted.

It looks like they were evicted because the node was under disk pressure.

You can find more information about pod eviction here.
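
For reference, if you have kubectl access to the cluster, disk pressure shows up as a node condition and evicted pods keep an Evicted reason in their status. A minimal sketch of the relevant fields (the node and pod names are placeholders, and the exact eviction message varies):

    # Node condition, as reported under status.conditions of the node object:
    apiVersion: v1
    kind: Node
    metadata:
      name: worker-node-1               # placeholder name
    status:
      conditions:
        - type: DiskPressure
          status: "True"                # kubelet is low on disk and starts evicting pods
          reason: KubeletHasDiskPressure
    ---
    # An evicted pod stays around in a Failed phase with an Evicted reason:
    apiVersion: v1
    kind: Pod
    metadata:
      name: metabase-xxxxxxxxxx-yyyyy   # placeholder name
    status:
      phase: Failed
      reason: Evicted
      message: "The node was low on resource: ephemeral-storage."   # typical message, exact text varies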

Your Metabase app has autoscaling enabled (1-2), so you cannot attach a volume to it.

So you have two solutions:

  • Increase your nodes' disk size (doubling it, for example) and redeploy your cluster, which will give Metabase some fresh disk space
  • Make Metabase non-autoscalable (1-1) and attach a volume to it. You will probably also need to declare this volume in Metabase's config (I'm not sure exactly how that works); see the sketch right after this list for what it roughly looks like at the Kubernetes level
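
Qovery manages the underlying resources for you, so you would do this from the console, but to illustrate what option 2 roughly amounts to at the Kubernetes level, here is a minimal sketch (the 10Gi size and the /metabase-data mount path are assumptions; check Metabase's docs for where it actually writes):

    # Rough Kubernetes-level equivalent of option 2, for illustration only.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: metabase-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi                     # assumed size
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: metabase
    spec:
      replicas: 1                           # fixed at 1 (no autoscaling) so the volume can be attached
      selector:
        matchLabels: { app: metabase }
      template:
        metadata:
          labels: { app: metabase }
        spec:
          containers:
            - name: metabase
              image: metabase/metabase      # public Metabase image
              volumeMounts:
                - name: data
                  mountPath: /metabase-data # assumed path; point Metabase's config here
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: metabase-data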

We will prioritize better reporting of such issues in the future so you can better understand what the issue is.

Cheers,

Thanks for looking into this; we look forward to seeing this info next to the warning in the future.

However, it also raises more questions. This application uses a DB service to store its data and doesn't (shouldn't) need local storage. When we deployed our apps directly on K8s, we would use the following by default:

    securityContext:
      readOnlyRootFilesystem: true

This is to avoid unexpected writes to disk (which would eventually fill up the nodes underneath). It would be good to have this as a feature on Qovery as well. /cc @rophilogene @a_carrano

In this particular case, we are deploying a 3rd-party app that is likely writing logs to disk, and we won't be able to easily disable that. In such cases, we would mount a tmpfs on the specific paths to keep the nodes clean over time.
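
To make the idea concrete, here is a minimal sketch of that pattern, assuming the 3rd-party app writes under /app/logs (the image name and path are placeholders):

    # Read-only root filesystem with a tmpfs-backed emptyDir for the one writable path.
    apiVersion: v1
    kind: Pod
    metadata:
      name: third-party-app
    spec:
      containers:
        - name: app
          image: example/third-party-app:latest   # placeholder image
          securityContext:
            readOnlyRootFilesystem: true          # block all other writes to the container filesystem
          volumeMounts:
            - name: logs
              mountPath: /app/logs                # only this path stays writable
      volumes:
        - name: logs
          emptyDir:
            medium: Memory                        # tmpfs-backed, so nothing lands on the node's disk
            sizeLimit: 64Mi                       # cap the tmpfs so logs can't eat node memory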

Hope to see these solutions implemented on Qovery in the future.

Indeed, that's something we can support in the future. However, even with that setting applied on the K8s side, how do you tell this app not to write anything locally?
If you can configure this in your app, then you should be good. Am I missing anything?

In this particular case, I don't think we have such a setting in the app, and we don't even know exactly what is being written (it's a 3rd-party app). If we had an option for readOnlyRootFilesystem, it would have been easier for us to find this issue: the setting would likely produce errors in the application logs, or at startup, telling us which files the app wants to write, and we would then decide what to do about it. Our own apps should never write to disk but use other means of storage, so we would just fix the app. In this case, we will likely need to attach a volume to some path.

I agree, and I will loop back with the team about supporting this feature. But even if you had it now, it would only have helped you debug this issue; the result would be the same: your app would be crashing and you would still have to apply one of the two options I gave you above.

I will keep you posted about this setting, as it makes sense to add.

Cheers

Hi @bchastanier, what did the team say? It would be nice if it was supported in “Advanced Settings” like the other options you have there.

Hello @prki,
It is coming; we are in the process of building a new UI to surface more information about the current status of the containers.

We are currently working on it, and it should be released before the end of the month.

Hi Erebe,

In this case, I was referring to support for readOnlyRootFilesystem.

It would be great to have it in “Advanced Settings”.

Thanks for the clarification.

I am adding a ticket to do it in the current sprint, as it is not much work.

So it should be done in ~2 weeks; we will keep you posted once it's released.


Hi back @prki,

The advanced setting is released; you can find it under security.read_only_root_filesystem.
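
Once enabled, this should simply translate to the standard Kubernetes field discussed earlier in the thread, roughly like the sketch below (this shows the expected effect, not Qovery's exact generated manifest):

    # Expected effect on the container's security context when
    # security.read_only_root_filesystem is set to true:
    securityContext:
      readOnlyRootFilesystem: true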


That’s great, thanks!

This topic was automatically closed 15 days after the last reply. New replies are no longer allowed.