embed a core AI environment into a Docker container.

GitHub address

Many thanks to this bro.

folder structure

.
├── Dockerfile
├── pytorch3d.tgz
├── README.md
├── requirements.txt
├── tensorflow-2.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
├── torch-1.12.1+cu113-cp38-cp38-linux_x86_64.whl
├── torchaudio-0.12.1+cu113-cp38-cp38-linux_x86_64.whl
└── torchvision-0.13.1+cu113-cp38-cp38-linux_x86_64.whl

Because connections to these hosts are slow in mainland China, these files can be downloaded through a proxy.
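
For example, a minimal sketch of fetching one of the wheels through a local proxy (the proxy address matches the proxy settings later in this note; the download URL follows the standard PyTorch wheel index layout and is shown for illustration):

export http_proxy=http://127.0.0.1:7890
export https_proxy=http://127.0.0.1:7890
wget https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp38-cp38-linux_x86_64.whl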

build image

sudo docker build -t aigallery . -f Dockerfile
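
If the build itself needs to reach the network through the proxy, Docker forwards the predefined proxy build arguments into the build without extra ARG declarations; a hedged sketch assuming the same local proxy as above:

sudo docker build -t aigallery . -f Dockerfile \
    --build-arg http_proxy=http://127.0.0.1:7890 \
    --build-arg https_proxy=http://127.0.0.1:7890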

run container

  • style 1: publish only selected ports to the outside (see the port check after this list):

    # Change /home/aikedaer/project_0 to your local directory
    sudo docker run -dit --name aigo --gpus all \
    -v /home/aikedaer/project_0:/workspace/project_0 \
    -p 8888:8888 \
    -p 6789:22 \
    aigallery
  • style 2 (only supported on Linux): set --net host

    # Change /home/aikedaer/project_0 to your local directory
    sudo docker run -d --name aigo --gpus all \
    --net host \
    -v /home/aikedaer/project_0:/workspace/project_0 \
    aigallery

    This way the container and the host share the same network namespace, so they use the same ports.
    note: the following lines can be written to ~/.bashrc and ~/.zshrc if you want to go through the system proxy.

    export http_proxy=http://127.0.0.1:7890
    export https_proxy=http://127.0.0.1:7890
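
Either way, a quick check that the container is up. Note that docker port only shows mappings for style 1, since --net host publishes no ports:

sudo docker ps --filter name=aigo
sudo docker port aigo  # style 1 only: lists the published port mappings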

enter the container

sudo docker exec -it aigo zsh # Yes, we use zsh!!!

verify GPU support

Make sure the GPU driver is successfully installed on the host, and read this note to allow the Docker Engine to communicate with the physical GPU.
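
On an Ubuntu host this typically means installing the NVIDIA Container Toolkit; a minimal sketch, assuming the NVIDIA apt repository is already configured on your machine:

sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# quick end-to-end test from the host (the CUDA image tag is illustrative)
sudo docker run --rm --gpus all nvidia/cuda:11.3.1-base-ubuntu20.04 nvidia-smi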

# GPU driver
nvidia-smi

# CUDA version
nvcc --version

# cuDNN version
cat /usr/include/cudnn_version.h | grep CUDNN_MAJOR -A 2

# TensorFlow works with CPU?
python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"

# TensorFlow works with GPU?
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

# Torch works with GPU?
python3 -c "import torch; print(torch.cuda.is_available())"

use SSH to access the container (only for style 1)

# First, start the SSH server in the container
sudo docker exec -d aigo /usr/sbin/sshd

# Access it via
ssh -p 6789 root@localhost
# password of root: 99521

Go to http://localhost:8888 to open the Jupyter Notebook
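
If the notebook server is not already running inside the container, a hedged way to start it (standard Jupyter flags; --allow-root is needed because the container runs as root):

sudo docker exec -d aigo jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser --allow-root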

offline image backup

$ sudo docker images
REPOSITORY   TAG          IMAGE ID       CREATED        SIZE
aigallery    latest       afd9254c601e   12 hours ago   21.8GB

Because the image is large, pushing it to Docker Hub is slow, so I choose to back it up offline instead.

docker save -o aigallery.tar aigallery
docker load -i aigallery.tar
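
The tarball compresses well; a small variation piping save/load through gzip (docker save writes to stdout and docker load reads from stdin when no file is given):

sudo docker save aigallery | gzip > aigallery.tar.gz
gunzip -c aigallery.tar.gz | sudo docker load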

Resource settings

If you are running Docker Desktop on Windows, you can create a .wslconfig file in %USERPROFILE% to limit the resources of the WSL 2 VM:

# Settings apply across all Linux distros running on WSL 2
[wsl2]

# Limits VM memory to no more than 64 GB; set as whole numbers using GB or MB
memory=64GB

# Sets the VM to use eight virtual processors
processors=8

# Sets amount of swap storage space to 8GB, default is 25% of available RAM
swap=8GB
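
The settings take effect only after the WSL 2 VM restarts; from PowerShell, shut it down and then restart Docker Desktop:

wsl --shutdown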