We're failing to build TensorFlow.
Should we use an EC2 DLAMI instance that is preconfigured with CUDA and DL frameworks? If so, how do we select this DLAMI or set it in a config file?
Thanks.
Raphael.
Hi @rophilogene, and yes.
After many tests, we concluded that the only way to build TF correctly is to do it in the specific DLAMI offered among the Amazon Linux AMIs. If there is a better way, we'll of course take it.
Solved.
Rather than using buildpacks for our Python app, we defined a Dockerfile with specific TF versions. A standard EC2 AMI works fine. Thanks for the help @rophilogene!
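For anyone hitting the same issue, the Dockerfile looked roughly like this — a minimal sketch, where the Python base image, TensorFlow version, and `main.py` entrypoint are placeholders to adapt to your app:

```dockerfile
# Minimal sketch: pin base image and TF version explicitly
# instead of relying on buildpack defaults (versions are examples)
FROM python:3.10-slim

WORKDIR /app

# Pin the exact TensorFlow version the app was tested against
RUN pip install --no-cache-dir tensorflow==2.12.0

COPY . .

# Hypothetical entrypoint; replace with your app's module
CMD ["python", "main.py"]
```

Pinning the version in the Dockerfile is what made the build reproducible for us, since buildpacks were picking dependency versions we didn't control.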