cudaErrorSharedObjectInitFailed (302)

cudaErrorSharedObjectInitFailed (error code 302) occurs when CUDA cannot load a required shared library. It typically indicates missing libraries, version mismatches, or an incorrect library search path, and it often appears during CUDA initialization when the runtime tries to load libcuda.so (Linux) or nvcuda.dll (Windows). The underlying cause is usually an installation or configuration issue. This guide covers library loading troubleshooting and common solutions.
Typical error messages:
CUDA error: shared object initialization failed
cudaErrorSharedObjectInitFailed: shared object initialization failed
CUDA_ERROR_SHARED_OBJECT_INIT_FAILED
cannot open shared object file: No such file or directory
cuInit failed
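Before changing anything, it can help to see exactly which shared object the loader fails to open. A rough sketch using the glibc loader's LD_DEBUG facility and strace; the python/torch command is just a stand-in for whatever program triggers the error:

# Ask the dynamic linker to log every library lookup, then filter for CUDA libs
LD_DEBUG=libs python -c "import torch; torch.cuda.init()" 2>&1 | grep -i cuda
# Alternatively, trace file opens to see which paths are tried and which fail
strace -f -e trace=openat python -c "import torch; torch.cuda.init()" 2>&1 \
  | grep -E 'libcuda|libcudart'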
Ensure the NVIDIA driver is properly installed and loaded.
# Check if driver is loaded
lsmod | grep nvidia
# Check driver files exist
ls -la /usr/lib/x86_64-linux-gnu/libnvidia*
ls -la /usr/lib64/libnvidia*
# Test driver
nvidia-smi
# If nvidia-smi fails, reinstall driver
sudo apt install nvidia-driver-535 # Ubuntu
sudo reboot
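After reinstalling and rebooting, it is worth confirming that the driver's user-space library (libcuda.so.1, the object CUDA initialization actually loads) is resolvable by the dynamic linker. A quick sketch, assuming the standard Ubuntu x86_64 library path:

# The driver's user-space library should appear in the linker cache
ldconfig -p | grep libcuda.so
# Inspect where the symlink chain points (path varies by distro)
ls -la /usr/lib/x86_64-linux-gnu/libcuda.so*
# Report the driver version the kernel module is running
nvidia-smi --query-gpu=driver_version --format=csv,noheader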
Ensure CUDA libraries are in the search path.

# Check current library path
echo $LD_LIBRARY_PATH
# Add CUDA libraries to path
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
# Make permanent (add to ~/.bashrc)
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
# Update ldconfig
echo '/usr/local/cuda/lib64' | sudo tee /etc/ld.so.conf.d/cuda.conf
sudo ldconfig

Recreate broken library symlinks.
# Check for broken symlinks
find /usr/local/cuda/lib64 -type l -xtype l
# Common fix: recreate libcuda.so symlink
cd /usr/local/cuda/lib64
sudo ln -sf /usr/lib/x86_64-linux-gnu/libcuda.so.1 libcuda.so
# For libcudart
sudo ln -sf libcudart.so.12.0 libcudart.so
# Verify
ls -la libcuda*.so*

Ensure libraries are available in the container.
# Run with GPU access and proper library mounts
docker run --gpus all \
  -v /usr/local/cuda:/usr/local/cuda:ro \
  -e LD_LIBRARY_PATH=/usr/local/cuda/lib64 \
  your-image
# Or use nvidia-container-toolkit properly
sudo apt install nvidia-container-toolkit
sudo systemctl restart docker
# Use official NVIDIA images
docker run --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
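To confirm the container toolkit is actually injecting the driver library into containers (rather than relying on the read-only bind mount above), a quick check along these lines can help; the image tag is only an example:

# libcuda.so.1 is injected from the host by the NVIDIA container runtime,
# so it should show up in the container's linker cache
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 \
  sh -c 'ldconfig -p | grep libcuda'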
Clean reinstall if libraries are corrupted.

# Remove existing CUDA
sudo apt remove --purge "cuda*"
sudo apt autoremove
sudo rm -rf /usr/local/cuda*
# Reinstall
wget https://developer.download.nvidia.com/compute/cuda/12.0.0/local_installers/cuda_12.0.0_525.60.13_linux.run
sudo sh cuda_12.0.0_525.60.13_linux.run
# Add to path
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

CUDA libraries not in the library path.
# Running without proper library path
python -c "import torch; print(torch.cuda.is_available())"
# Error: libcudart.so.11.0: cannot open shared object file

Set the library path correctly before running.
# Set library path before running
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
# Verify libraries are found
ldd /usr/local/cuda/lib64/libcudart.so | grep "not found"
# Should show no missing libraries
python -c "import torch; print(torch.cuda.is_available())"nvidia-smi uses the kernel driver directly. CUDA programs need runtime libraries in LD_LIBRARY_PATH. The driver can be fine while runtime libraries are misconfigured.
Conda manages its own library paths. Try conda install cudatoolkit=12.0 (pick a version that matches your driver), or set LD_LIBRARY_PATH before activating the conda environment.
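To see which CUDA libraries a conda environment actually resolves, something like the following can help (the environment name is hypothetical, and exact paths depend on the packages installed):

conda activate myenv                     # hypothetical environment name
# CUDA runtime libraries installed by conda live under the env's lib directory
ls "$CONDA_PREFIX"/lib/libcudart* 2>/dev/null
# Make sure no stale system path shadows the env's libraries
echo $LD_LIBRARY_PATH
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"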
Check for broken symlinks with "find /usr/local/cuda -type l -xtype l". Also check that SELinux/AppArmor is not blocking library loading.
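If the files and symlinks look fine, SELinux or AppArmor denials are worth ruling out; a rough check, assuming auditd is running for the SELinux case:

# SELinux: report recent AVC denials (requires auditd) and the enforcement mode
sudo ausearch -m avc -ts recent 2>/dev/null | grep -i nvidia
getenforce 2>/dev/null
# AppArmor: denials appear in the kernel log
sudo dmesg | grep 'apparmor="DENIED"'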
Library loading precedes device detection.
Library version mismatches can produce the same failure.
Initialization fails if the libraries fail to load.
Need help debugging CUDA errors? Download RightNow AI for intelligent error analysis and optimization suggestions.