Hi !
We would like to know how we could run https://www.ntppool.org/ on Qovery or if there is any other way that all our applications within a same cluster could share the same clock.
Thank you.
Hi @enzoferey ,
Did you see time shifts between your apps? If that’s the case, I wonder if there is a native way provided by AWS to time-sync all EC2 nodes attached to the Kubernetes cluster, since your apps ultimately run on EC2 instances and inherit the host clock.
I’ll check online to see what I can find. I’ll come back to you.
Hi, thank you for the quick answer.
We have not explicitly observed time shifts between our apps, but we are getting ready to ship a new part of our system running many workers that all target a Cassandra database, and in such situations it is advised to sync clocks across all the apps to avoid race conditions on the timestamps of the queries.
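To illustrate why this matters (a small sketch with a hypothetical keyspace and table, run through cqlsh): Cassandra resolves concurrent writes by keeping the cell with the highest timestamp, so a worker whose clock runs ahead can silently shadow a later write from a worker whose clock runs behind:
$ cqlsh -e "INSERT INTO demo.events (id, payload) VALUES (1, 'from worker A') USING TIMESTAMP 1717000000500000;"
$ cqlsh -e "INSERT INTO demo.events (id, payload) VALUES (1, 'from worker B') USING TIMESTAMP 1717000000400000;"
$ cqlsh -e "SELECT payload FROM demo.events WHERE id = 1;"
# returns 'from worker A': the higher timestamp wins, even though worker B wrote last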
On EC2 instances I guess you can do everything stated in the NTP docs, or maybe AWS has its own solution (we didn’t research it), but since our EC2 instances are managed by Qovery, we thought the solution should go through this platform?
We appreciate your support; let us know if we can help in any way to move this conversation along faster.
Thanks!
Do you use Cassandra on Kubernetes managed by Qovery, or does your Cassandra cluster run elsewhere?
It’s completely unrelated, but maybe you should take a look at ScyllaDB if you are considering Cassandra. It’s a drop-in replacement for Cassandra written in C++ (instead of Java).
We run Cassandra on DataStax, they manage it for us.
About ScyllaDB, we are aware of it but have still opted for Cassandra on DataStax. We use their Node.js driver, so we do not have to touch any Java bits, and they run Cassandra in a serverless manner, so the cost is greatly reduced compared to running your own traditional Cassandra setup. But thank you for the recommendation!
Looking forward to hearing back from you about the clocks topic.
Thanks!
Hey @rophilogene, any updates on this one? Thanks!
Hi @rophilogene, bumping one more time. We would appreciate some help from you on this topic.
Hello @enzoferey,
It seems EKS uses chrony to handle time synchronization; is that the kind of thing you are after? AWS exposes a local NTP endpoint provided by the Amazon Time Sync Service.
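For reference, a quick way to check it on a node (a rough sketch, assuming an Amazon Linux based AMI where chrony is preinstalled) is to look for the link-local Amazon Time Sync endpoint 169.254.169.123 among chrony’s sources:
$ chronyc sources -v    # lists the configured time sources; 169.254.169.123 is Amazon Time Sync
$ chronyc tracking      # shows the current offset and drift against the selected source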
Let me know,
Cheers
Hi @bchastanier,
That’s indeed exactly what we were looking for, nice to see AWS has it built-in via the Amazon Time Sync Service.
One question regarding it: it seems the only way to set this up is to manually SSH into the EC2 instances and configure them. Looking at the creation time of our EC2 instances (created by Qovery), it seems Qovery re-creates them quite frequently. For example, we have 9 EC2 instances in our cluster: 7 of them were created today, another one on May 27th, and the last one on May 24th.

If we are not mistaken, whenever a new instance is created by Qovery, it won’t have the manual config to connect to the Amazon Time Sync Service. So the questions are: when/why are you re-creating EC2 instances? And is there any way to have some “on-startup” script or similar to automate the process?
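For what it’s worth, the kind of “on-startup” script we have in mind would look roughly like this (purely a sketch, assuming Amazon Linux based nodes with chrony preinstalled; on recent AMIs this may already be the default):
#!/bin/bash
# ensure chrony uses the Amazon Time Sync link-local endpoint, then restart it
if ! grep -q '169.254.169.123' /etc/chrony.conf; then
  echo 'server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4' >> /etc/chrony.conf
  systemctl restart chronyd
fi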
Thanks!
Hey @enzoferey,
Regarding nodes being re-created: it depends on your cluster configuration and node autoscaler (Karpenter on AWS, Autopilot on GKE). Basically, the autoscaler provisions nodes when the cluster’s resources are too low to handle the workload, and those nodes get deprovisioned when usage is low enough that their workload can be spread onto other nodes. It depends on the size of your nodes, but the smaller they are, the more likely you are to see nodes shuffling.
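If you want to see that churn yourself, sorting the nodes by creation time shows how recently each one was provisioned (a sketch, assuming kubectl access to the cluster):
$ kubectl get nodes --sort-by=.metadata.creationTimestamp
# the AGE column reflects how often the autoscaler is rotating nodes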
Regarding the clock sync, I was reading some docs and threads about it; there might well be a way to call an init script on node start, but I would like to try something else first.
There is a tool (k8tz) which acts as an admission controller in your cluster (if you configure it that way) and injects the proper TZ into all your pods.
Source: (here and here).
Would something like that solve your issue? If so, we can try deploying it via container or chart on your cluster (using a dedicated project / environment for it to keep things clean).
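For reference, outside of Qovery the chart is usually installed roughly like this (a sketch based on the k8tz docs; the timezone value here is just an example):
$ helm repo add k8tz https://k8tz.github.io/k8tz/
$ helm install k8tz k8tz/k8tz --set timezone=UTC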
Let me know if it helps and how I can help you further
Cheers
Hey @bchastanier,
Thanks for the quick response. k8tz does look good enough for the job. We would like to try it out; how may we proceed? Where can we configure such a Helm chart?
Thanks!
Hey @enzoferey !
Qovery does support deploying Helm charts, cf. this doc.
Following their doc for the helm installation, you should be able to deploy it.
Let me know if you need assistance
Cheers
Hi @bchastanier!
We believe we have deployed k8tz properly into our cluster. However, we are not sure how to test it.
We have SSHed into our instances and run date +"%T.%N" to get millisecond timestamps, and they look in sync, but we would like a confirmation on your side that the Helm chart is properly set up.
These are the identifiers of the service:
Cluster ID: 7c90cccb-693f-46c3-b38f-69ccacb508f8
Organization ID: 85c36783-e768-4432-ad04-db3dffdcc0b3
Project ID: 33d174f0-d26b-48c6-b6cc-3831b2846f39
Environment ID: 853df8a6-ce4d-4f0a-9bd2-904a1d0bbf7e
Service ID: e876a773-f6e5-4a8e-9af0-b0431e187bb2
Thanks!
Hey @enzoferey
I do see your service as stopped; not sure if that’s normal for k8tz.
But from their docs, you should be able to test the setup by running this command in the k8tz pod directly (you can use the pod shell via the console to do so):
helm test k8tz
I didn’t read the whole doc, but maybe you also need to redeploy your services so that they get properly tagged by the tool (not sure, to be confirmed from the documentation).
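If a redeploy does turn out to be needed, it’s because an admission webhook like k8tz only mutates pods when they are (re)created; something like this would recreate the pods of an existing deployment (a sketch, with a hypothetical deployment name and namespace):
$ kubectl rollout restart deployment/my-app -n my-namespace
# the new pods go through the k8tz webhook and get the timezone injected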
Cheers
Hey @bchastanier, we thought Helm chart services were not supposed to be running since they are just charts? The deployment goes well, but then it never starts.
We did read the documentation provided, but the integration step with Qovery is not clear to us. We are also not very knowledgeable about Kubernetes.
We have redeployed other services and tried running helm test k8tz in their respective shells, but the helm command is not even found. On the k8tz service itself we can’t access the shell since the service is not running.
Let us know how we may help here, we are a bit lost.
Thanks!
Hey !
Your setup is almost correct, but I think you missed changing the default namespace in which the app will be running. By default it’s k8tz, but in our case we want to use the one set by helm, so you just need to set the namespace value to true. I agree the error wasn’t obvious:
As for the command to validate the setup: indeed, it seems the app doesn’t ship with the helm command in it, so you can either deploy a debug app that has the helm command and run the command you want from there (but then you need to pass the kubeconfig and so on), OR you can grab your cluster kubeconfig from the UI:
And then run the helm command locally on your machine (assuming you’ve got helm installed), providing the kubeconfig along with your AWS credentials:
Get the deployed helm app details (in order to know the release name and its namespace); note that grep is just here to filter on k8tz while keeping the helm ls result header:
$ AWS_ACCESS_KEY_ID= \
AWS_SECRET_ACCESS_KEY= \
AWS_DEFAULT_REGION= \
KUBECONFIG=/tmp/kubeconfig-7c90cccb-693f-46c3-b38f-69ccacb508f8.yaml \
helm ls -A | grep -E 'k8tz|NAME'
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
helm-ze876a773-ktz z33d174f0-z853df8a6 3 2024-06-03 13:10:19.49590903 +0000 UTC deployed k8tz-0.16.1 0.16.1
Then run the helm test command against your helm app / namespace:
$ AWS_ACCESS_KEY_ID= \
AWS_SECRET_ACCESS_KEY= \
AWS_DEFAULT_REGION= \
KUBECONFIG=/tmp/kubeconfig-7c90cccb-693f-46c3-b38f-69ccacb508f8.yaml \
helm test --namespace z33d174f0-z853df8a6 helm-ze876a773-ktz
NAME: helm-ze876a773-ktz
LAST DEPLOYED: Mon Jun 3 13:10:19 2024
NAMESPACE: z33d174f0-z853df8a6
STATUS: deployed
REVISION: 3
TEST SUITE: helm-ze876a773-ktz-k8tz-health-test
Last Started: Mon Jun 3 15:40:09 2024
Last Completed: Mon Jun 3 15:40:14 2024
Phase: Succeeded
Also, looking at the k8tz logs, I don’t see any warnings (you can check via the service live logs in the UI):
Just one last check: if you look at one pod, you can see that it has the extra k8tz annotations, so I guess everything is properly set:
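If you want to double-check it yourself, grepping a pod’s manifest for k8tz markers is enough (a sketch, with a hypothetical pod name and namespace):
$ kubectl get pod my-app-7d9c5b6f4-abcde -n my-namespace -o yaml | grep -i k8tz
# injected pods carry k8tz annotations plus the injected timezone settings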
For sure, the checking part is not that easy since it requires you to get the kubeconfig and retrieve the helm release name and namespace, but we are working on improving this part.
Let me know how it goes !
Cheers
Thanks for all these details @bchastanier
The service is looking good indeed. We switched from the values override as file to the values override as arguments, so that both the timezone and the namespace are set together.
We have tried downloading the kubeconfig and running helm on our local machine, but even though we are setting the expected environment variables (with admin rights on the AWS account that owns the cluster), we get the following error:
Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
Any idea why?
The pods having the k8tz annotations confirms that the setup is correct, so we are happy about that.
That’s great !
Usually this error means the env variables weren’t set properly. Did you pass them to the helm command?
AWS_ACCESS_KEY_ID= \
AWS_SECRET_ACCESS_KEY= \
AWS_DEFAULT_REGION= \
KUBECONFIG= \
helm <YOUR_SUB_COMMAND>
Yes, we also tried setting those values in the terminal session. Executing, for example, echo $AWS_ACCESS_KEY_ID prints the value in the terminal, so we are not sure what’s wrong.
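In case it helps narrow things down, this is how we have been trying to isolate whether it’s the AWS credentials or the kubeconfig (values elided as before):
$ AWS_ACCESS_KEY_ID= \
AWS_SECRET_ACCESS_KEY= \
AWS_DEFAULT_REGION= \
aws sts get-caller-identity
# confirms which IAM identity the keys resolve to
$ AWS_ACCESS_KEY_ID= \
AWS_SECRET_ACCESS_KEY= \
AWS_DEFAULT_REGION= \
KUBECONFIG=/tmp/kubeconfig-7c90cccb-693f-46c3-b38f-69ccacb508f8.yaml \
kubectl get nodes
# fails with the same credentials error if the kubeconfig / IAM mapping is the issue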