...

By default (for security reasons), you will not have access to Docker even if you can log into the host machine. To check whether you do, log into the host and run any Docker command, e.g.:

Code Block
languagebash
titleShow docker images/processes
# Get a list of all the docker images
docker images 
# or
docker image list 

# To view a list of docker processes
docker ps

...

Our end goal is to make a container from which we can run our own code. To achieve this, however, we first need to create something called an image. An image is a prototype of a container; it serves as a premade snapshot that can be used to spawn any number of containers. An image is created from something called a Dockerfile, which in its most basic form is just a list of prerequisites you want installed and commands you want run when the image is built. The example below should be a nice starting point.

Code Block
languagebash
titleThe Dockerfile - Building an image
# Use the latest TF GPU image as parent. This operation is analogous to inheritance in OOP.
# The image ships with tensorflow-gpu and jupyter installed for Python 2. It is also
# configured so that a jupyter server will be launched at container startup. Note that you
# don't have to use this image as parent.
FROM tensorflow/tensorflow:latest-gpu 

# Set working directory for container 
WORKDIR /app  

# Make ssh directory (useful for adding ssh keys later) 
RUN mkdir -p /root/.ssh 

# Update repositories 
RUN apt-get update 

# Install git  
RUN apt-get install git -y 

# Install pip3 (parent image only comes with python2 stuff) 
RUN apt-get install python3-pip -y 

# Install your python packages  
RUN pip3 install --upgrade pip 
RUN pip3 install numpy 

# Add more pip installs here. Alternatively move everything to a dedicated requirements file.
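
With the Dockerfile in place, the image can be built and a container started from it. The commands below are only a minimal sketch: the image name my-tf-image is an arbitrary example, and the --gpus all flag assumes Docker 19.03+ with the NVIDIA container toolkit installed on the host (older setups may use --runtime=nvidia instead). Adjust both to match your environment.

Code Block
languagebash
titleBuilding the image and starting a container (sketch)
# Build the image from the Dockerfile in the current directory.
# "my-tf-image" is just an example tag; pick any name you like.
docker build -t my-tf-image .

# Start an interactive container from the image with GPU access.
# "--gpus all" assumes Docker 19.03+ and the NVIDIA container toolkit;
# passing "bash" overrides the default command (the jupyter server).
docker run --gpus all -it my-tf-image bash

# In another shell, list running containers to confirm it is up
docker ps

Since the Dockerfile above installs git and pip3, you can then clone your code inside the container and run it with python3.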

...