Project to project communication using internal k8s DNS


No issues here, more of a “how / is this possible today” question:

We want applications in different projects to have network connectivity to each other via the k8s internal network using internal DNS, preferably using Qovery’s built-in internal host environment variables.

  • Is that functionality available out of the box?
  • If so, is there a way to get the FQDN of an application without using kubectl?
  • Lastly, can we also do this communication between environments within a given project?

Hi @pantera-travis ,

The short answer is YES, it’s possible!

:warning: I assume that your projects and environments are running on the same cluster. Otherwise, getting access from one environment to another is impossible.

:warning: If you do this, you have to be cautious because Qovery can’t guarantee that project X will still be accessible from project Y if you make a change.

Since your environments are running on the same cluster, environment X can technically connect to other environments within the same cluster. You just need to work out the namespace, which is built with the following concatenation.

Let’s say your project ID is: ead2fb0f-058f-4dc0-865e-70a9a7d2ed80
and your environment ID is: 26a4cdd6-deaa-4325-b708-88c3c0d3afac

Then your Kubernetes namespace will be z{first 8 characters of the project ID}-z{first 8 characters of the environment ID}, i.e. zead2fb0f-z26a4cdd6.

Why those ugly IDs? Because we want to prevent potential collisions and keep namespace names predictable based on Qovery’s internal info.
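To make the concatenation concrete, here is a minimal Python sketch (purely illustrative; the service host value would come from the Qovery-injected internal host environment variable of the target service, and the name used below is a placeholder):

```python
# Minimal sketch: derive the namespace and a cluster-internal FQDN from Qovery IDs.
# The service host below is a placeholder; in practice it comes from Qovery's
# built-in internal host environment variable for the target application/database.

def qovery_namespace(project_id: str, environment_id: str) -> str:
    """Concatenate the first 8 characters of each ID, each prefixed with 'z'."""
    return f"z{project_id[:8]}-z{environment_id[:8]}"

def internal_fqdn(service_host: str, namespace: str) -> str:
    """Standard Kubernetes service DNS name: <service>.<namespace>.svc.cluster.local"""
    return f"{service_host}.{namespace}.svc.cluster.local"

ns = qovery_namespace("ead2fb0f-058f-4dc0-865e-70a9a7d2ed80",
                      "26a4cdd6-deaa-4325-b708-88c3c0d3afac")
print(ns)  # zead2fb0f-z26a4cdd6

print(internal_fqdn("my-postgres-internal-host", ns))
# my-postgres-internal-host.zead2fb0f-z26a4cdd6.svc.cluster.local
```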

As for communication between environments within the same project: yes, it’s the same as above. There is no difference. But be careful when doing this :slight_smile:

One question: what is your current use case? Why do you want to create such an interconnection between environments from different projects?

The use case falls under a couple of things, but here is the high-level idea. We need to build a real-time reporting and analytics platform. Some of the use cases for this platform will be “operational monitoring/alerting”: think Prometheus- and APM-style monitoring for k8s and other resources. Others will be for the Finance, Trading, and Quant teams. These systems are “coupled” to each other, but there is no need for them to talk to each other over the wire.

Below is a list of the “tools” we are using to pull this all together.

We will be running the following:

Apache Superset - https://superset.apache.org/

TimescaleDB - https://www.timescale.com/

NodeRed - https://nodered.org/

Apache Airflow - https://airflow.apache.org/

Grafana

Both of the Apache products listed above are the only “micro-service like” solutions, so it makes sense for them to be their own project, or at the very least their own environment. They are going to be updated at their own cadence, based on both custom code for Pantera and updates to the open-source projects themselves. However, they both use TimescaleDB to read/write records. From a performance and KISS perspective, having Superset and Airflow talk to TimescaleDB over internal DNS makes the most sense.

As for the specific questions around environment vs. project cross-communication: Pantera is trying to figure out how best to use Qovery’s offering of a Project vs. an Environment. If I treat a Project like a use case, it would be “Data Visualization”, and within that project I would have a supersets-dev environment, for example. Having them all under the same Project makes sense because it is easy to identify where new services should go.

I have a follow up question.

What are the options for allowing other AWS services to connect to a running “application” in k8s?

For example, we are using AWS Managed Airflow, and we would like it to be able to connect to TimescaleDB (Postgres + time series) from Airflow, which is in a different VPC from our EKS/Qovery cluster. These two VPCs are currently peered. I’m looking for a solution that will allow Managed Airflow to connect to TimescaleDB over TCP, as Postgres doesn’t support connecting over http/https. Here are the two approaches I have considered thus far:

  1. Allow Airflow to write records to TimescaleDB over “private DNS” using a TCP connection.
  2. Allow Airflow to write records to TimescaleDB over public DNS using a TCP connection with SSL.

From what I can tell, an application deployed publicly with Qovery is only listening via https.
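To make the two options concrete, here is a rough Python sketch of what the Airflow-side connection could look like in either case (psycopg2 assumed available; host names, database, and credentials are placeholders, not real values):

```python
# Sketch of a plain TCP Postgres connection to TimescaleDB from Managed Airflow.
# Hosts, credentials, and database names are placeholders, not working values.
import psycopg2

# Option 1: private DNS across the peered VPCs, plain TCP.
conn_private = psycopg2.connect(
    host="timescaledb.internal.example",  # placeholder private DNS name
    port=5432,
    dbname="metrics",
    user="airflow",
    password="change-me",
)

# Option 2: public DNS, TCP with SSL required.
conn_public = psycopg2.connect(
    host="timescaledb.public.example",    # placeholder public DNS name
    port=5432,
    dbname="metrics",
    user="airflow",
    password="change-me",
    sslmode="require",                    # enforce TLS on the wire
)

for conn in (conn_private, conn_public):
    with conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone())
    conn.close()
```

Option 1 only works if the private DNS name is resolvable and routable across the peered VPCs; option 2 only works if the database is actually reachable over plain TCP from outside the cluster.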

@rophilogene I have uploaded/attached a screenshot and I’m not 100% sure the provided solution works.

  • Short version: I have an application and a Postgres container in the same project/env.
  • Using Qovery shell I can exec into the app container.
  • From there I install ping and attempt to ping the Postgres container.
    • This does not work with the pod name, but it does work with the internal host name variable for Postgres from within Qovery.
    • It also works using the internal host + namespace.

However, when I try to use the same internal host + namespace for an app in a different project/env, the k8s DNS does not resolve.

And yes, these are all in the same k8s cluster.
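For reference, here is a minimal Python sketch of the kind of resolution check run from inside the app container (the host and namespace below are placeholders standing in for the Qovery internal host variable and the target namespace):

```python
# Minimal DNS resolution check, run from inside the app container.
# Host and namespace names below are placeholders.
import socket

candidates = [
    "my-postgres-internal-host",                            # short name, same namespace
    "my-postgres-internal-host.zead2fb0f-z26a4cdd6",        # internal host + namespace
    "my-postgres-internal-host.zead2fb0f-z26a4cdd6.svc.cluster.local",  # full FQDN
]

for name in candidates:
    try:
        infos = socket.getaddrinfo(name, 5432, proto=socket.IPPROTO_TCP)
        print(f"{name} -> {sorted({info[4][0] for info in infos})}")
    except socket.gaierror as exc:
        print(f"{name} -> does not resolve ({exc})")
```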

Hey @pantera-travis,

Can you give me your app console link so I can have a look?

Thanks :slight_smile: