StatefulSet deployment via Helm chart

I am trying to deploy a stateful application with a PVC via Helm to my cluster. I know that Qovery uses a single load balancer and then NGINX to route traffic. How can I expose my StatefulSet deployment using the same load balancer and NGINX, as we can do for normal deployments via the Qovery console?

Hi,

Can you please confirm it’s an HTTP service?

Thanks

Yes, it is an HTTP service.

Thanks

For an HTTP service where you want to use the existing NGINX ingress, you need to add an Ingress object with the following annotations. Here is an example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <ingress_name>
  namespace: <namespace>
  annotations:
    # set issuer for TLS
    cert-manager.io/issuer: <name_of_the_issuer>
    ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
    external-dns.alpha.kubernetes.io/exclude: "true"
    # nginx-qovery to use the Nginx ingress,
    # and load balancer deployed by Qovery
    kubernetes.io/ingress.class: nginx-qovery
....
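The spec section is omitted above; here is a minimal sketch of what it could look like, assuming a hypothetical host <your_domain>, a backend Service named <service_name> exposing port 80, and a TLS secret name of your choosing (all placeholders, adjust to your chart):

spec:
  tls:
  - hosts:
    - <your_domain>
    secretName: <tls_secret_name>
  rules:
  - host: <your_domain>
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            # Service in front of your StatefulSet pods (sketched further below)
            name: <service_name>
            port:
              number: 80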

For TLS, you will also have to use this kind of Issuer configuration for the Let’s Encrypt HTTP-01 challenge:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: <name_of_the_issuer>
  namespace: <namespace>
spec:
  acme:
    email: <your_email_address_for_tls_notifications>
    preferredChain: ""
    privateKeySecretRef:
      name: acme-<name_of_the_issuer>-key
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          class: nginx-qovery
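The Ingress above still needs a Service in front of the StatefulSet pods to route to. A minimal ClusterIP Service sketch, assuming the StatefulSet’s pods carry the hypothetical label app: <app_name> and listen on container port 8080 (both are assumptions, adjust to your chart):

apiVersion: v1
kind: Service
metadata:
  name: <service_name>
  namespace: <namespace>
spec:
  type: ClusterIP
  selector:
    # must match the pod labels of your StatefulSet
    app: <app_name>
  ports:
  - name: http
    port: 80          # port referenced by the Ingress backend
    targetPort: 8080  # container port of your application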

As Qovery is open source, you can view the complete router configuration in the q-ingress-tls chart templates: engine/lib/common/charts/q-ingress-tls/templates at main · Qovery/engine · GitHub

Thank you, I will take a look at this.

I see that once you attach a volume to a deployment, it is deployed as a StatefulSet, but the number of replicas is fixed at 1. Is there some way to scale it?

You can set whatever you want, since you control your Kubernetes cluster. However, Qovery decided to disable it in the product for several reasons:

  1. A PVC is bound to an AWS Availability Zone. Suppose no node is available in that zone (AWS outage, no more capacity in the zone, etc.). In that case, your application won’t be able to start at all, which is a pain for customers who require high availability.
  2. PVC performance is not great, as the disk is accessed over the network. For intensive usage, customers were already disappointed with it; it is not comparable to local storage performance (even on developer laptops), which is far better. For database usage in particular, we encourage using managed databases instead.
  3. Many customers in the past thought the data were shared across all replicas, which is not the case: each replica gets its own PVC (see the StatefulSet sketch at the end of this post). This led to misunderstandings and complex data reconciliation for those customers.
  4. Cluster operations take a lot more time when doing node rebalancing. Why? Because a pod can’t start until its data is available to be mounted. Since you can’t mount the same disk on two different nodes (for integrity reasons, and because simultaneous access is not supported by the filesystem), the pod has to stop first. The PVC then has to be unmounted from the old EC2 instance and mounted on the new one before the pod can start again. The more PVCs you have, the longer these operations take.
  5. The same applies to application rollouts: they can take minutes, or even hours for large, complex distributed systems (Cassandra, for example).

This is why we recommend avoiding attached disks when possible and using S3 or a managed service instead. We know that some “pre-built” services (WordPress…) cannot do this, and unfortunately using a PVC is then unavoidable. So we decided to keep the volume feature in Qovery, but if you can avoid it, we recommend doing so.
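To make point 3 above concrete, here is a minimal StatefulSet sketch (names, image, storage class, and sizes are hypothetical) showing that volumeClaimTemplates create one independent PVC per replica rather than a shared volume:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: <app_name>
  namespace: <namespace>
spec:
  serviceName: <service_name>
  replicas: 3                # each replica gets its OWN PVC: data-<app_name>-0, -1, -2
  selector:
    matchLabels:
      app: <app_name>
  template:
    metadata:
      labels:
        app: <app_name>
    spec:
      containers:
      - name: <app_name>
        image: <your_image>
        volumeMounts:
        - name: data
          mountPath: /data   # data written here is NOT shared with other replicas
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: <storage_class>   # e.g. an EBS-backed class, bound to a single AZ
      resources:
        requests:
          storage: 10Gi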

Thank you so much, Pierre, for the detailed response. I completely get the drawbacks of network volumes in terms of speed and high availability. But in our case we have a data streaming app that uses local files to store events, so we need persistent volumes; otherwise data would be lost across restarts.
I will explore the Qovery Helm chart and the annotations and get that set up. By the way, loving the product; we are currently moving everything to Qovery step by step.


In previous roles, I managed thousands of pods in StatefulSets hosting NoSQL databases (on bare metal), so I completely get your point. If you think it’s a critical area for you, feel free to add your wish to the roadmap; we may reconsider it later with a different product view than we had before.

Sure, I will add that to the roadmap. Also, an interesting addition could be access to the YAML files so we can make minor modifications, while everything else stays managed via the existing workflow and we can still see logs, scale services, etc. from the Qovery dashboard.

That’s a good idea! Feel free to add it as well!
