How to select an EC2 instance with Deep Learning AMI GPU TensorFlow 2.10.0 (Amazon Linux 2)?

Hi,

We are failing to build TensorFlow.
Should we use an EC2 instance with a DLAMI that comes preconfigured with CUDA and the deep learning frameworks? If so, how do we select this DLAMI, or set it in a config file?
Thanks.
Raphael.

Hi @RSZ-Raphael, welcome here :slight_smile:

Just to make sure I understand your request correctly: you want to deploy an EC2 instance with a specific AMI, right?

Hi and yes, @rophilogene
After many tests, we concluded that the only way to build TF correctly is to do it on the specific DLAMI offered among the Amazon Linux AMIs. If there is a better way, we’ll of course take it :wink:
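For reference, this is the image we mean. Its ID can be looked up with the AWS CLI, for example (the region below is only an example, and the name filter just reuses the AMI name from the title):

```sh
# Find the most recent "Deep Learning AMI GPU TensorFlow 2.10.0 (Amazon Linux 2)"
# image published by Amazon in a given region (us-east-1 is only an example).
aws ec2 describe-images \
  --region us-east-1 \
  --owners amazon \
  --filters "Name=name,Values=Deep Learning AMI GPU TensorFlow 2.10.0 (Amazon Linux 2)*" \
  --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
  --output text
```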


Solved.
Rather than using buildpacks for our Python app, we defined a Dockerfile that pins specific TF versions. A standard EC2 AMI works fine. Thanks for the help, @rophilogene!
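For anyone landing on this thread with the same problem, here is a minimal sketch of what such a Dockerfile can look like. The base image tag, `requirements.txt`, and `main.py` are illustrative assumptions, not our exact setup:

```dockerfile
# Pin the TensorFlow version through the official image instead of
# letting a buildpack pick the Python/TF environment.
# (tensorflow/tensorflow:2.10.0-gpu is the GPU variant of this tag.)
FROM tensorflow/tensorflow:2.10.0

WORKDIR /app

# Install the remaining Python dependencies (requirements.txt is a placeholder).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and start it (main.py is a placeholder entry point).
COPY . .
CMD ["python", "main.py"]
```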


You’re welcome @RSZ-Raphael