We're failing to build TensorFlow.
Should we use an EC2 DLAMI instance that comes preconfigured with CUDA and the deep learning frameworks? If so, how do we select this DLAMI or set it in a config file?
Hi @RSZ-Raphael, welcome!
Just to make sure I understand your request correctly: you want to deploy an EC2 instance with a specific AMI, right?
Hi and yes, @rophilogene
After many tests we concluded that the only way to build TF correctly is to do it on the specific DLAMI offered among the Amazon Linux AMIs. If there's a better way, we'll of course take it.
Rather than using buildpacks for our Python app, we defined a Dockerfile with pinned TensorFlow versions. A standard EC2 AMI works fine. Thanks for the help @rophilogene!
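For anyone landing here later: the fix described above can be sketched roughly as below. This is a minimal illustrative Dockerfile, not the one from the thread; the base image tag, TensorFlow version pin, and entrypoint are all assumptions you'd adapt to your app.

```dockerfile
# Illustrative sketch only — base image, version pin, and entrypoint
# are assumptions, not the actual Dockerfile from this thread.
FROM python:3.11-slim

WORKDIR /app

# Pin TensorFlow to a known-good version instead of relying on
# whatever a buildpack resolves at build time
RUN pip install --no-cache-dir tensorflow==2.15.0

COPY . .

CMD ["python", "main.py"]
```

Pinning the version in a Dockerfile makes the build reproducible on a standard EC2 AMI, which is why the DLAMI was no longer needed.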
You’re welcome @RSZ-Raphael