How to retrieve application and infrastructure logs from Loki on my EKS cluster?

Hey there,

I’ve installed Datadog on my cluster and it’s working really well, but we only have 2 weeks of log retention there.

How can I easily retrieve and search my application logs that were generated more than two weeks ago?


Hi @Kmaschta :wave: ,

Correct me if I am wrong, but what you want is access to past logs through Qovery, since you only keep Datadog logs for 2 weeks, right?

@rophilogene exactly!

Here is what you can do, since Qovery stores all your logs for 12 weeks in one of your S3 buckets.

  1. Follow this guide to connect to your EKS cluster with kubectl.
  2. Connect to the Grafana instance running on your Kubernetes cluster:

kubectl -n prometheus port-forward svc/grafana 80:8888

  3. Retrieve your Grafana credentials by running:

For the login

kubectl -n prometheus get secrets/grafana --template='{{.data.admin-user | base64decode}}'

For the password

kubectl -n prometheus get secrets/grafana --template='{{.data.admin-password | base64decode}}'

  4. Then, you can connect to your Grafana interface at http://localhost:8888.
  5. Go to Explore, select the Loki data source, and filter by pod name or namespace.

I hope it helps :slight_smile:


Ok! I’ve figured this out, thanks.

Small changes I had to make:

-kubectl -n prometheus port-forward svc/grafana 80:8888
+kubectl -n prometheus port-forward svc/grafana 8888:80

Moreover, kubectl was not happy with the dashes in admin-user and admin-password, so I just ran the following command:

kubectl -n prometheus get secrets/grafana --template='{{.data}}'

And manually decoded the base64.
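For keys containing dashes, Go templates also accept the `index` function, which avoids the manual decoding step. A minimal sketch, assuming standard kubectl go-template support; the encoded value below is illustrative, not a real credential:

```shell
# Go-template lookup for dashed keys (needs a live cluster, so shown commented):
#   kubectl -n prometheus get secrets/grafana \
#     --template='{{index .data "admin-user" | base64decode}}'
#
# Manual decoding also works, since secret values are plain base64.
# 'YWRtaW4=' is an example value standing in for what .data would contain:
encoded='YWRtaW4='
printf '%s' "$encoded" | base64 --decode; echo   # decodes to: admin
```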

Finally, the question: what if I want to keep the logs for more than 12 weeks?
This is needed for regulatory / compliance reasons, and I’d love Qovery to provide an easy way to stream or archive our logs to an S3 bucket or a CloudWatch log group.
Do you provide something like that?


Well done @Kmaschta :+1:

Thank you for sharing what you found.

We can add an advanced setting at the cluster level for that. @Enzo can you do it?

Qovery and Loki already store your logs in an S3 bucket. Is that good enough for you?

Thank you

Sure! As long as the logs are:

  1. stored on our own infra, for a retention period we control (1 year seems a good start)
  2. discoverable at will
  3. exportable to another format if needed

It seems like Loki offers all of those features!


  1. Logs are indeed stored in your own infrastructure, in an S3 bucket called qovery-logs-zxxxx, where xxxx is the beginning of your cluster UUID.
  2. You can use Grafana or logcli (the Loki CLI) to explore what is stored.
  3. To export logs, you can either use logcli:

logcli query '{job="foo"}' --limit=5000 --since=48h -o raw > mylog.txt

or use the Grafana inspector.
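Since `-o raw` writes plain text with one log line per row, standard Unix tools work directly on the export. A small sketch using a made-up file in place of a real export (the real file would come from the logcli command above):

```shell
# Simulate an export; normally this file comes from:
#   logcli query '{job="foo"}' --limit=5000 --since=48h -o raw > mylog.txt
printf 'level=info msg="start"\nlevel=error msg="boom"\nlevel=info msg="done"\n' > mylog.txt

# Filter and count as you would on a real export:
grep 'level=error' mylog.txt   # prints the single error line
wc -l < mylog.txt              # total line count (3 here)
```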

Hello @Kmaschta,

The feature has been implemented (but not yet documented).
You can change your cluster settings to increase the Loki log retention.
For example, to increase the log retention to 54 weeks:

curl -X PUT -H 'Authorization: Token xxxxxx' -v '' -d '{"loki.log_retention_in_week":54}' -H 'Content-Type: application/json'

After that, just re-deploy your cluster to apply the changes.
Let me know if you run into any issues with it.

Hi Romaric, really helpful guide! I have been able to open my shell, but now if I run any kubectl command, I get sh: 1: kubectl: not found. Do I have to install it in the shell?

Hi @Prometheo ,

Yes, you have to install it: Install Tools | Kubernetes

Hello @Qovery_Team ,
Are we able to also access our container metrics using this method?

At the moment, metrics are only available through a third-party tool like Datadog.

Hey @Pierre_Mavro
Thanks for your quick response.
I am looking to set up Container Metrics in EKS.
Can you please take a look at this and confirm whether that's something I can do without interfering with the Qovery control plane?

@sama213 after taking a look, it looks good to me. It will not interfere with Qovery, so you can go ahead safely :slight_smile:
