The GPUs visible inside a container are the host's GPUs. NVIDIA engineers found a way to share the host's GPU driver with containers, so the driver does not have to be installed in each container individually. Fortunately, I have an NVIDIA graphics card on my laptop, so I can build and run Docker containers that leverage NVIDIA GPUs. As introduced in one of my previous posts (link below), nvidia-docker depends only on the NVIDIA driver, so we can use different versions of the CUDA toolkit in different containers. I chose nvidia-docker and used Docker images to manage my environments.

Run the `nvidia-smi` command to check whether the installation was successful:

```
$ sudo nvidia-docker run --rm nvidia/cuda nvidia-smi
Using default tag: latest
latest: Pulling from nvidia/cuda
ba76e97bb96c: Pull complete
4d6181e6b423: Pull complete
4854897be9ac: Pull complete
4458f3097eef: Pull complete
9989a8de1a9e: Pull complete
97b9fecc40a9: Pull ...
```

To find which nvidia-docker versions are installable:

```
$ yum search --showduplicates nvidia-docker1
$ yum search --showduplicates nvidia-docker
```

The final output of each search is shown in the image below.
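The verification step above can also be scripted. Here is a minimal sketch (my own helper, not part of nvidia-docker) that checks whether `nvidia-smi` is visible, which is the case when the host driver is installed or has been mounted into the container, and prints the GPU name and driver version if so:

```shell
#!/bin/sh
# nvidia-smi ships with the host NVIDIA driver; nvidia-docker mounts it
# into containers, so its presence confirms the driver is reachable.
if command -v nvidia-smi >/dev/null 2>&1; then
    # Print one CSV line per GPU: model name and driver version.
    nvidia-smi --query-gpu=name,driver_version --format=csv
else
    echo "nvidia-smi not found: driver not installed or not mounted into the container"
fi
```

On a machine without the driver the script prints the fallback message instead of failing, which makes it safe to run both on the host and inside containers.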