cudaErrorInsufficientDriver (35)
cudaErrorInsufficientDriver (error code 35) occurs when the CUDA runtime requires a newer driver than the one installed. It typically appears after you update the CUDA toolkit but forget to update the NVIDIA driver. CUDA drivers are backward compatible: a newer driver can run applications built with older CUDA toolkits, but an older driver cannot run code built with a newer toolkit. The fix is usually to update your NVIDIA driver. This guide covers driver-toolkit compatibility and how to resolve version mismatches.
CUDA error: CUDA driver version is insufficient for CUDA runtime version
cudaErrorInsufficientDriver: CUDA driver version is insufficient
CUDA_ERROR_INSUFFICIENT_DRIVER
forward compatibility was attempted on non supported HW
Identify your driver and CUDA versions.
# Check driver version
nvidia-smi
# Shows "Driver Version: XXX.XX" and "CUDA Version: XX.X"
# Check CUDA toolkit version
nvcc --version
# Check PyTorch CUDA version
python -c "import torch; print(torch.version.cuda)"
# The nvidia-smi CUDA version shows max supported,
# not necessarily what's installed
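To compare the two programmatically, the short Python sketch below parses the driver's CUDA ceiling out of the nvidia-smi header and compares it with the CUDA version your PyTorch build was compiled against. It is a minimal diagnostic, assuming PyTorch is installed and nvidia-smi is on PATH; the parsing is illustrative, not an official API.
import re
import subprocess
import torch

# Driver ceiling: the "CUDA Version" printed in the nvidia-smi header
smi_output = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
match = re.search(r"CUDA Version:\s*(\d+)\.(\d+)", smi_output)
driver_max = (int(match.group(1)), int(match.group(2))) if match else None

# The CUDA version this PyTorch build was compiled against (None for CPU-only builds)
runtime = tuple(int(x) for x in torch.version.cuda.split(".")) if torch.version.cuda else None

print(f"Driver ceiling: CUDA {driver_max}, PyTorch built for: CUDA {runtime}")
if driver_max and runtime and runtime > driver_max:
    print("PyTorch needs a newer driver -> expect cudaErrorInsufficientDriver")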
Each CUDA version requires a minimum driver.
# CUDA Toolkit to Minimum Driver Version:
# CUDA 12.3+ -> Driver 545.xx+
# CUDA 12.0 -> Driver 525.xx+
# CUDA 11.8 -> Driver 520.xx+
# CUDA 11.7 -> Driver 515.xx+
# CUDA 11.0 -> Driver 450.xx+
# CUDA 10.2 -> Driver 440.xx+
# Check full compatibility table at:
# https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/
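If you want to script this check, the sketch below hard-codes the minimums from the table above (major driver branch only, an approximation) and compares them against the installed driver reported by nvidia-smi. The toolkit version is something you fill in yourself; treat the mapping as illustrative and defer to the release notes linked above.
import subprocess

# Minimum driver branch per CUDA toolkit, copied from the table above (major version only)
MIN_DRIVER = {
    (12, 3): 545,
    (12, 0): 525,
    (11, 8): 520,
    (11, 7): 515,
    (11, 0): 450,
    (10, 2): 440,
}

# Installed driver version, e.g. "535.183.01" (first GPU only)
driver = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    capture_output=True, text=True,
).stdout.strip().splitlines()[0]
driver_major = int(driver.split(".")[0])

toolkit = (12, 3)                 # the CUDA toolkit you plan to use (example value)
needed = MIN_DRIVER[toolkit]
status = "OK" if driver_major >= needed else f"too old, needs >= {needed}.xx"
print(f"Driver {driver} is {status} for CUDA {toolkit[0]}.{toolkit[1]}")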
Install a driver compatible with your CUDA version.
# Ubuntu - using apt
sudo apt update
sudo apt install nvidia-driver-545 # Adjust version as needed
sudo reboot
# Ubuntu - using official installer
# Download from nvidia.com/drivers
chmod +x NVIDIA-Linux-x86_64-545.xx.run
sudo ./NVIDIA-Linux-x86_64-545.xx.run
sudo reboot
# Verify after reboot
nvidia-smi
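Beyond nvidia-smi, it can help to confirm that the new kernel module is actually the one loaded after the reboot, since a stale module is a common cause of version-mismatch errors. A minimal check, assuming a Linux host where the driver exposes /proc/driver/nvidia/version:
from pathlib import Path

version_file = Path("/proc/driver/nvidia/version")
if version_file.exists():
    # First line reports the loaded kernel module version, e.g.
    # "NVRM version: NVIDIA UNIX x86_64 Kernel Module  545.29.06 ..."
    print(version_file.read_text().splitlines()[0])
else:
    print("NVIDIA kernel module not loaded; check the driver installation and reboot")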
Container CUDA must match host driver capability.
# Check host driver's maximum CUDA version
nvidia-smi # Shows "CUDA Version: XX.X"
# Use a container image compatible with your driver
# If host shows CUDA 12.2 max, use:
docker run --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
# Don't use images requiring newer CUDA than host supports
# Wrong if host max is 12.2:
docker run --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04  # Will fail!
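Before pulling an image, you can also compare the CUDA version in the tag against the host ceiling. The helper below is a hypothetical convenience, not part of Docker or the NVIDIA toolchain; it just parses versions out of strings.
import re
import subprocess

def cuda_version(text):
    # "12.2" or "12.2.0-base-ubuntu22.04" -> (12, 2)
    return tuple(int(x) for x in re.match(r"(\d+)\.(\d+)", text).groups())

# Host ceiling from the nvidia-smi header
smi = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
host_max = cuda_version(re.search(r"CUDA Version:\s*([\d.]+)", smi).group(1))

image = "nvidia/cuda:12.2.0-base-ubuntu22.04"       # candidate image (example value)
image_cuda = cuda_version(image.split(":")[1])

if image_cuda <= host_max:
    print(f"{image} should run here (driver supports up to CUDA {host_max})")
else:
    print(f"{image} needs CUDA {image_cuda}, but the driver only supports {host_max}")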
Match framework CUDA version to your driver.
# Check your driver's max CUDA version first
nvidia-smi
# Install PyTorch for specific CUDA version
# For CUDA 11.8:
pip install torch --index-url https://download.pytorch.org/whl/cu118
# For CUDA 12.1:
pip install torch --index-url https://download.pytorch.org/whl/cu121
# List available versions:
# https://pytorch.org/get-started/locally/
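Putting the two checks together, the sketch below picks one of the two wheel indexes shown above based on the driver ceiling. The threshold logic is a simplification; consult the PyTorch install matrix for builds not listed here.
import re
import subprocess

# Driver ceiling from the nvidia-smi header, e.g. (12, 2)
smi = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
ceiling = tuple(int(x) for x in re.search(r"CUDA Version:\s*(\d+\.\d+)", smi).group(1).split("."))

# Choose a wheel index no newer than what the driver supports
if ceiling >= (12, 1):
    index = "https://download.pytorch.org/whl/cu121"
elif ceiling >= (11, 8):
    index = "https://download.pytorch.org/whl/cu118"
else:
    index = None  # driver predates these builds; update the driver first

print(f"pip install torch --index-url {index}" if index else "Update the NVIDIA driver first")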
PyTorch may be compiled with newer CUDA than your driver supports.
# Installing latest PyTorch without checking driver
pip install torch
# Then running code:
import torch
torch.cuda.is_available()  # Returns False or crashes
Match the PyTorch CUDA version to your driver capability.
# First, check driver capability
# nvidia-smi shows: CUDA Version: 11.8
# Install PyTorch matching your CUDA
pip install torch --index-url https://download.pytorch.org/whl/cu118
# Verify it works
import torch
print(f"PyTorch CUDA: {torch.version.cuda}")
print(f"CUDA available: {torch.cuda.is_available()}")No. You must update your driver. Newer CUDA versions require newer drivers. Check the compatibility table and update your driver to the minimum required version.
Newer CUDA versions require newer drivers, so you cannot keep an old driver and move to a newer CUDA toolkit: check the compatibility table above and update your driver to at least the minimum required version.
nvidia-smi shows the maximum CUDA version your driver can support, not which toolkit is installed. It is the ceiling on the CUDA version you can use with this driver.
You cannot update the host driver from inside a container. Either update the host driver, or use a container image with an older CUDA version that matches your host driver capability.
The same driver problems can also show up as other errors: a driver that fails to load can cause device detection failures and CUDA initialization failures.
Need help debugging CUDA errors? Download RightNow AI for intelligent error analysis and optimization suggestions.