cudaErrorNoDevice (100)

cudaErrorNoDevice (error code 100) means CUDA cannot find any GPU on your system. This is one of the most fundamental CUDA errors and indicates that either no NVIDIA GPU is present, drivers aren't installed, or something is blocking GPU access. This error commonly appears as "no CUDA-capable device is detected" and can occur even when nvidia-smi shows your GPU. The issue is usually related to driver installation, environment configuration, or containerization. This guide walks through systematic troubleshooting to get your CUDA environment working correctly.
The error shows up in several forms:

CUDA error: no CUDA-capable device is detected
cudaErrorNoDevice: no CUDA-capable device is detected
RuntimeError: No CUDA GPUs are available
torch.cuda.is_available() returns False
Could not find any NVIDIA driver on your system
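If you want to see how the failure surfaces from Python, a minimal check along these lines (a sketch assuming PyTorch is installed) reproduces the condition:

import torch

# On an affected system this prints False / 0, and the .cuda() call raises
# "RuntimeError: No CUDA GPUs are available" or a similar message.
print("CUDA available:", torch.cuda.is_available())
print("Device count:", torch.cuda.device_count())

try:
    x = torch.zeros(1).cuda()
    print("Created GPU tensor on", x.device)
except (RuntimeError, AssertionError) as e:
    print("CUDA error:", e)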
Confirm an NVIDIA GPU is physically present and detected by the system.
# Check for NVIDIA hardware
lspci | grep -i nvidia
# Expected output like:
# 01:00.0 VGA compatible controller: NVIDIA Corporation...
# On Windows
wmic path win32_VideoController get name
# If nothing shows, GPU may not be seated properly
# or PCIe slot issue
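If you prefer to script this hardware check, a small sketch around lspci (Linux only, and assuming lspci is on PATH) could look like:

import subprocess

# List PCI devices and keep the lines that mention NVIDIA (Linux only)
try:
    result = subprocess.run(["lspci"], capture_output=True, text=True)
    nvidia_lines = [l for l in result.stdout.splitlines() if "nvidia" in l.lower()]
    if nvidia_lines:
        for line in nvidia_lines:
            print("Found:", line)
    else:
        print("No NVIDIA device on the PCI bus - check seating, power, or BIOS settings")
except FileNotFoundError:
    print("lspci not available on this system")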
Ensure the correct NVIDIA driver is installed and loaded.

# Check if driver is installed
nvidia-smi
# If command not found, install drivers
# Ubuntu/Debian
sudo apt update
sudo apt install nvidia-driver-535 # Use appropriate version
# Or use official installer
wget https://us.download.nvidia.com/...
chmod +x NVIDIA-Linux-x86_64-535.xx.run
sudo ./NVIDIA-Linux-x86_64-535.xx.run
# After installation, reboot
sudo reboot
# Verify driver loaded
lsmod | grep nvidia
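To confirm from a script that the driver responds, you can query nvidia-smi for the driver version (a sketch; the query flags shown are standard nvidia-smi options):

import subprocess

# Ask nvidia-smi for the driver version; failure here usually means the
# driver is missing or the kernel module is not loaded.
try:
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        print("Driver version:", result.stdout.strip())
    else:
        print("nvidia-smi failed:", result.stderr.strip())
except FileNotFoundError:
    print("nvidia-smi not found - NVIDIA driver is not installed")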
An empty or invalid CUDA_VISIBLE_DEVICES hides all GPUs.

# Check current value
echo $CUDA_VISIBLE_DEVICES
# If empty or -1, unset it
unset CUDA_VISIBLE_DEVICES
# Or in Python (before importing CUDA libraries)
import os
if "CUDA_VISIBLE_DEVICES" in os.environ:
    del os.environ["CUDA_VISIBLE_DEVICES"]
# Then check
import torch
print(torch.cuda.is_available()) # Should be True now

Ensure your user has access to GPU devices.
# Check device permissions
ls -la /dev/nvidia*
# Add user to video group
sudo usermod -aG video $USER
# Apply group changes (or logout/login)
newgrp video
# If still failing, check device permissions
sudo chmod 666 /dev/nvidia* # Temporary fix
# Permanent fix via udev rules
echo 'KERNEL=="nvidia*", MODE="0666"' | sudo tee /etc/udev/rules.d/70-nvidia.rules
sudo udevadm control --reload-rules
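You can also verify the device permissions from Python before changing anything (a sketch using glob and os.access):

import glob
import os

# Check that the current user can read and write the NVIDIA device nodes
devices = glob.glob("/dev/nvidia*")
if not devices:
    print("No /dev/nvidia* nodes found - the driver may not be loaded")
for dev in devices:
    ok = os.access(dev, os.R_OK | os.W_OK)
    print(f"{dev}: {'accessible' if ok else 'NOT accessible - check group membership'}")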
The open-source Nouveau driver conflicts with NVIDIA's driver.

# Check if nouveau is loaded
lsmod | grep nouveau
# Blacklist nouveau
sudo bash -c "echo 'blacklist nouveau' >> /etc/modprobe.d/blacklist-nouveau.conf"
sudo bash -c "echo 'options nouveau modeset=0' >> /etc/modprobe.d/blacklist-nouveau.conf"
# Regenerate initramfs
sudo update-initramfs -u
# Reboot
sudo reboot
# Verify nouveau is not loaded
lsmod | grep nouveau # Should be empty

Containers need nvidia-container-toolkit for GPU access.
# Install nvidia-container-toolkit
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt update
sudo apt install nvidia-container-toolkit
sudo systemctl restart docker
# Run container with GPU
docker run --gpus all nvidia/cuda:12.0-base nvidia-smi
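Inside the container, a quick sketch like this (assumptions: the NVIDIA runtime mounts nvidia-smi and the /dev/nvidia* nodes when --gpus all is used) confirms whether the GPU was actually exposed:

import glob
import subprocess

# With a working NVIDIA runtime the container gets /dev/nvidia* nodes and
# an nvidia-smi binary mounted in from the host.
nodes = glob.glob("/dev/nvidia*")
print("GPU device nodes in container:", nodes if nodes else "none - was --gpus all passed?")

try:
    subprocess.run(["nvidia-smi", "-L"], check=True)  # lists the GPUs the container can see
except (FileNotFoundError, subprocess.CalledProcessError) as e:
    print("nvidia-smi failed inside the container:", e)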
Directly calling .cuda() without checking availability causes crashes on CPU-only systems.

# Assumes GPU is always available
import torch
model = torch.nn.Linear(100, 10).cuda() # Crashes if no GPU

The version below provides helpful diagnostics when CUDA isn't available and gracefully falls back to CPU.
import torch

def check_cuda():
    if not torch.cuda.is_available():
        print("WARNING: CUDA not available")
        print(f"PyTorch version: {torch.__version__}")
        print(f"CUDA built: {torch.version.cuda}")
        # Diagnostic info
        try:
            import subprocess
            result = subprocess.run(['nvidia-smi'], capture_output=True, text=True)
            if result.returncode != 0:
                print("nvidia-smi failed - driver may not be installed")
        except FileNotFoundError:
            print("nvidia-smi not found")
        return False
    print(f"CUDA available: {torch.cuda.device_count()} GPU(s)")
    return True

if check_cuda():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")
    print("Falling back to CPU")
model = torch.nn.Linear(100, 10).to(device)

If nvidia-smi works but torch.cuda.is_available() still returns False, this usually means PyTorch was installed without CUDA support. Reinstall with: pip install torch --index-url https://download.pytorch.org/whl/cu121 (adjust version as needed).
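To tell a CPU-only PyTorch build apart from a driver problem, a short check like this (a sketch relying on torch.version.cuda being None in CPU-only builds) helps:

import torch

# torch.version.cuda is None in CPU-only builds; no driver fix will help there.
if torch.version.cuda is None:
    print("CPU-only PyTorch build - reinstall a CUDA-enabled wheel")
elif not torch.cuda.is_available():
    print("CUDA-enabled PyTorch build, but no usable GPU or driver was found")
else:
    print(f"CUDA {torch.version.cuda} build with {torch.cuda.device_count()} visible GPU(s)")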
All NVIDIA GPUs from GeForce 8 series onwards support CUDA. Check your GPU's compute capability at developer.nvidia.com/cuda-gpus. Modern deep learning requires compute capability 3.5+ (Kepler or newer).
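To read the compute capability programmatically, one option is a PyTorch-based sketch like the following (only meaningful once a GPU is visible):

import torch

# Query the compute capability of the first visible GPU
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")
else:
    print("No CUDA device visible - fix detection first")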
CUDA itself is NVIDIA-proprietary and does not run on AMD GPUs. For AMD GPUs, use ROCm with HIP (PyTorch supports this) or OpenCL. The PyTorch ROCm builds work on supported AMD GPUs.
A permission issue with the /dev/nvidia* devices can also cause this error even when the driver is installed. Add your user to the video group: sudo usermod -aG video $USER, then log out and log back in.
cudaErrorInvalidDevice (101) - occurs when a GPU exists but an invalid device index is specified
cudaErrorInsufficientDriver (35) - the driver is installed but its version is too old for the CUDA runtime
cudaErrorInitializationError (3) - the driver is present but CUDA initialization failed
Need help debugging CUDA errors? Download RightNow AI for intelligent error analysis and optimization suggestions.