...
- At SkyHiGh (IIK's production instance):
  - General purpose flavors, all IIK affiliates are eligible:
    - dx5.8c90r.v100-8g: A flavor with 90GB RAM, 8 vCPUs and 1/4 of a Tesla V100 (8GB GPU-RAM).
    - de3.12c60r.a100-10g: A flavor with 60GB RAM, 12 vCPUs and 1/4 of a Tesla A100 (10GB GPU-RAM).
  - Flavors only available for SFI NORCICS:
    - de3.24c120r.a100-20g: A flavor with 120GB RAM, 24 vCPUs and 1/2 of a Tesla A100 (20GB GPU-RAM).
    - de3.48c240r.a100-40g: A flavor with 240GB RAM, 48 vCPUs and a full Tesla A100 (40GB GPU-RAM).
  - Flavors only available for the Norwegian Biometrics Lab:
    - dx4.24c60r.p40-24g: A flavor with 60GB RAM, 24 vCPUs and a full Tesla P40 (24GB GPU-RAM).
    - de2.24c240r.a100-20g: A flavor with 240GB RAM, 24 vCPUs and 1/2 of a Tesla A100 (20GB GPU-RAM).
    - de3.24c120r.a100d-20g: A flavor with 120GB RAM, 24 vCPUs and 1/4 of a Tesla A100 80GB (20GB GPU-RAM).
- At SkyLow (IIK's development instance):
  - General purpose flavors, all IIK affiliates are eligible:
    - dx4.8c20r.m10-8G: A flavor with 20GB RAM, 8 vCPUs and one core of a Tesla M10 card (8GB GPU-RAM).
    - dx4.24c110r.p100: A flavor with 110GB RAM, 24 vCPUs and a Tesla P100 card (16GB GPU-RAM).
    - dx4.48c220r.2p100: A flavor with 220GB RAM, 48 vCPUs and two Tesla P100 cards (2×16GB GPU-RAM).
- At stackit (NTNU IT's production platform):
  - dx4.28c120r.a100-20g: A flavor with 120GB RAM, 28 vCPUs and 1/2 of a Tesla A100 (20GB GPU-RAM).
  - dx5s.96c470r.a100d-80g.e3400g: A flavor with 470GB RAM, 96 vCPUs, a Tesla A100D (80GB GPU-RAM) and 3.4TiB of compute-local flash storage.
    - Only available for an IV-EPT project.
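The flavor names above follow a common pattern: `<family>.<vCPUs>c<RAM>r.<GPU slice>`. As a minimal sketch of that convention, the name can be split with plain shell parameter expansion (the `parse_flavor` helper below is hypothetical, not part of any platform tooling):

```shell
# Hypothetical helper: split a flavor name such as "de3.12c60r.a100-10g"
# into its family, vCPU count, RAM size and GPU slice.
parse_flavor() {
  local name=$1
  local family=${name%%.*}     # e.g. de3
  local rest=${name#*.}        # e.g. 12c60r.a100-10g
  local spec=${rest%%.*}       # e.g. 12c60r
  local gpu=${rest#*.}         # e.g. a100-10g
  local vcpus=${spec%%c*}      # digits before the "c"
  local ram=${spec#*c}         # digits between "c" and "r"
  ram=${ram%r}
  echo "family=$family vcpus=$vcpus ram_gb=$ram gpu=$gpu"
}

parse_flavor "de3.12c60r.a100-10g"
# family=de3 vcpus=12 ram_gb=60 gpu=a100-10g
```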
...
We provide an image with a pre-installed Nvidia driver and CUDA package. This image contains the word "GRID" in its name and is a regular Ubuntu Server LTS image with the following additions:
...
Many of our GPU users will probably need Nvidia's cuDNN library. This is not pre-installed in our image, because Nvidia requires all users to register for the Nvidia Developer Program before downloading it. Please follow the instructions here to install it on your VM, and use the tar file option. DO NOT USE THE DEB OR RPM ALTERNATIVE. Be sure to download the cuDNN version that corresponds to our current CUDA version.
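As a rough sketch, a tar-file install of cuDNN typically looks like the following. The archive name is a placeholder, not a real file: substitute the exact version you downloaded, and check that `/usr/local/cuda` matches the CUDA location on your VM.

```shell
# Example only: replace X.Y.Z/cudaXX with the actual cuDNN archive you
# downloaded from the Nvidia Developer Program for our CUDA version.
tar -xf cudnn-linux-x86_64-X.Y.Z_cudaXX-archive.tar.xz
cd cudnn-linux-x86_64-X.Y.Z_cudaXX-archive

# Copy the headers and libraries into the CUDA installation,
# preserving symlinks (-P), and make them world-readable.
sudo cp include/cudnn*.h /usr/local/cuda/include/
sudo cp -P lib/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
```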
...
```shell
# Enable the repositories
distribution=$(. /etc/os-release; echo $ID$VERSION_ID) \
  && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
  && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

# Install the package
sudo apt update && sudo apt -y install nvidia-docker2

# Restart the docker daemon
sudo systemctl restart docker

# Run a test to verify that it works
sudo docker run --rm --gpus all nvidia/cuda:12.0.1-base-ubuntu22.04 nvidia-smi

# Optionally run a test with Tensorflow that actually runs a bit of code on the GPU via docker
sudo docker run --gpus all -it --rm tensorflow/tensorflow:2.14.0-gpu \
  python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
```
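If you start containers with docker compose rather than `docker run`, GPU access is requested with a device reservation instead of the `--gpus` flag. A minimal sketch (the service name is arbitrary; the image matches the test above):

```yaml
services:
  gpu-test:
    image: nvidia/cuda:12.0.1-base-ubuntu22.04
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

Running `docker compose up` with this file should print the same `nvidia-smi` output as the `docker run` test.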
...