I'm trying to set up a Docker image based on Ubuntu 18.04 that runs some code with Python 3.7.
The Dockerfile explicitly installs Python 3.7, but the image's binary folder also contains Python 3.6, which the image uses as the default and which I don't even need.
What change would I need to make so that the image only has Python 3.7?
The Dockerfile currently looks like this:
FROM ubuntu:18.04

# format changes required for asammdf v3.4.0
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8

# install python 3.7 and pip
RUN apt-get update && apt-get install -y \
    python3.7 \
    python3-pip

# set main entry point as working directory
WORKDIR /

RUN pip3 install --upgrade setuptools
RUN alias python3=/usr/bin/python3.7
RUN pip3 install -r requirements.txt
I added the alias command hoping it would redirect python3, but it seems to be ignored in the Dockerfile: execution still goes to Python 3.6.
The requirements installation works fine but puts all packages into the 3.6 directory, so even if I manually invoke python3.7 inside the container, it doesn't see the installed modules.
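For what it's worth, `RUN alias ...` can never work here: every RUN instruction starts a fresh shell, so the alias is gone by the next line. A sketch of an alias-free variant (assumptions: requirements.txt is copied into the image, and python3-distutils is added since pip may need it to run under 3.7 on bionic) drives pip through the 3.7 interpreter itself:

```dockerfile
FROM ubuntu:18.04

ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8

# python3.7 plus pip; python3-distutils may be required for pip
# to work under 3.7 on Ubuntu 18.04
RUN apt-get update && apt-get install -y \
    python3.7 \
    python3-pip \
    python3-distutils

WORKDIR /
COPY requirements.txt .

# "python3.7 -m pip" installs into 3.7's site-packages no matter
# where the python3/pip3 symlinks point
RUN python3.7 -m pip install --upgrade setuptools
RUN python3.7 -m pip install -r requirements.txt
```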
I would suggest using the official Python image, which contains everything you need.
docker run -it --rm --name my_python3.7 python:3.7-alpine3.9
You can find the official Dockerfile here.
The bonus with the above image is that you get Python 3.7 in just under 100 MB.
So you can use this image as the base image in your Dockerfile:

FROM python:3.7-alpine3.9
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
.
.
.
Ubuntu 18.04 comes preinstalled with Python 3.6. How are you invoking the Python binary to run your code? Try explicitly stating the complete Python 3.7 binary path when executing your code.
That should work for Ubuntu 18.04; I checked the location of the Python 3.7 binary to confirm.
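In Dockerfile terms that might look like the following (a sketch; `my_script.py` stands in for the actual entrypoint, which isn't shown in the question):

```dockerfile
# Exec-form CMD calling the 3.7 binary by its full path, so the
# python3 -> python3.6 symlink never enters the picture
CMD ["/usr/bin/python3.7", "my_script.py"]
```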
Using the official Python image would be more efficient, though.