Using Thunder Compute with Docker
This guide explains how to use Docker from within a Thunder Compute instance.
Note: do not enable GPU passthrough
Do not use the `--gpus all` flag or NVIDIA runtime Docker images (e.g., nvidia/cuda). These require a physical GPU on your machine and can cause errors like:
nvidia-container-cli: initialization error: nvml error: driver not loaded: unknown.
Instead, follow this guide to create a Dockerfile that supports Thunder Compute.
1. Connect to a Thunder Compute instance
Follow the instructions in our quickstart guide to create and connect to a Thunder Compute instance.
If you are running Linux, you can run the following steps directly on your local machine, though with significantly reduced performance.
2. Install TNR inside the container
Modify your Dockerfile to include the following lines:
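A minimal sketch of what this could look like, assuming the `tnr` CLI is distributed as a pip package (check the Thunder Compute console for the current install command; the base image here is just an example):

```dockerfile
# Example base image; any image with Python and pip should work
FROM python:3.11-slim

# Install the Thunder Compute CLI inside the container
# (assumes tnr is available via pip; adjust if your install method differs)
RUN pip install --no-cache-dir tnr
```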
3. Set the TNR API Token:
Replace <your_api_token_here>
with the API token generated from the Thunder Compute console to authenticate requests to TNR.
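For instance, the token can be set as an environment variable in the Dockerfile (a sketch; note that a token baked into the image is visible to anyone who can pull that image):

```dockerfile
# Authenticate tnr inside the container.
# Replace <your_api_token_here> with the token from the Thunder Compute console.
ENV TNR_API_TOKEN=<your_api_token_here>
```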
Alternatively, you can pass the API token at runtime.
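Passing the token at runtime might look like the following (`my-thunder-image` is a placeholder for your own image name):

```shell
# Inject the token at runtime instead of baking it into the image
docker run -e TNR_API_TOKEN=<your_api_token_here> my-thunder-image
```

This keeps the token out of the image layers, which is generally preferable for shared or published images.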
4. Use tnr run to Execute Commands
Prefix your commands with tnr run to execute them on a remote GPU:
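For example (the script name is a placeholder for your own entrypoint):

```shell
# Executes the script on a remote Thunder Compute GPU instead of locally
tnr run python train.py
```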
Conclusion
By installing `tnr` inside your Docker container and avoiding GPU passthrough, you can run your applications on remote GPUs provided by Thunder Compute. Use the `TNR_API_TOKEN` environment variable for authentication, and prefix your commands with `tnr run` to execute them on the remote GPU.