cudaErrorContextIsDestroyed (709)

cudaErrorContextIsDestroyed (error code 709) occurs when a CUDA operation is attempted after the CUDA context has been destroyed. This typically happens during program shutdown when cleanup runs in the wrong order. The CUDA context is the connection between your program and the GPU; once it is destroyed, no CUDA operation can succeed. The error most often appears in atexit handlers, static destructors, or when GPU resources are used incorrectly across thread boundaries. This guide covers proper context lifecycle management and the common pitfalls.
Typical error messages:
CUDA error: context is destroyed
cudaErrorContextIsDestroyed: context is destroyed
CUDA_ERROR_CONTEXT_IS_DESTROYED
Cannot use GPU after context destroyed
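To see which call is failing, check the return code of each CUDA runtime call and compare it against cudaErrorContextIsDestroyed. The sketch below is a minimal illustration; check_cuda and d_ptr are hypothetical names, not part of this guide.

// Minimal sketch: report cudaErrorContextIsDestroyed by name (check_cuda is a hypothetical helper)
#include <cstdio>
#include <cuda_runtime.h>
void check_cuda(cudaError_t err, const char* what) {
    if (err == cudaErrorContextIsDestroyed) {
        fprintf(stderr, "%s failed: %s (%d) - the CUDA context is already destroyed\n",
                what, cudaGetErrorName(err), (int)err);
    } else if (err != cudaSuccess) {
        fprintf(stderr, "%s failed: %s\n", what, cudaGetErrorString(err));
    }
}
// Usage (d_ptr is a hypothetical device pointer): check_cuda(cudaFree(d_ptr), "cudaFree");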
Ensure GPU objects are destroyed before the CUDA context.
class CudaBuffer {
public:
    void cleanup() {
        if (d_ptr) cudaFree(d_ptr);
        d_ptr = nullptr;
    }
    ~CudaBuffer() { cleanup(); } // Safe if already cleaned up explicitly
private:
    float* d_ptr = nullptr;
};

// BAD: Static GPU object - its destructor frees device memory after the CUDA context is gone
static CudaBuffer g_buffer; // cudaFree in ~CudaBuffer runs too late!

// GOOD: Use explicit cleanup before exit, while the context is still valid
void shutdown() {
    g_buffer.cleanup();  // Explicit cleanup
    cudaDeviceReset();   // Now safe
}

Do not use GPU in shutdown callbacks.
static float* d_ptr = nullptr; // device allocation used below

// BAD: CUDA operations in an atexit handler
void cleanup() {
    cudaFree(d_ptr); // The context may already be destroyed when this runs!
}
// atexit(cleanup); registered somewhere in main()

// GOOD: Clean up explicitly before main() returns instead of relying on atexit
void main_cleanup() {
    if (d_ptr) {
        cudaFree(d_ptr);
        d_ptr = nullptr;
    }
    // No cudaDeviceReset() needed unless you must release the device explicitly
}
int main() {
    // ... program ...
    main_cleanup(); // Explicit cleanup while the context is still valid
    return 0;
}

Explicit cleanup before Python shutdown.
import torch
import atexit

# Store tensors that need cleanup
gpu_tensors = []

def cleanup_gpu():
    # Drop the references so the tensors can actually be freed
    gpu_tensors.clear()
    # Force cleanup while the CUDA context is still valid
    if torch.cuda.is_available():
        torch.cuda.synchronize()
        torch.cuda.empty_cache()

# Register cleanup early
atexit.register(cleanup_gpu)

# Or scope tensors explicitly and release them in a finally block
def main():
    tensor = torch.randn(1000).cuda()
    try:
        # ... use tensor ...
        pass
    finally:
        del tensor
        torch.cuda.empty_cache()

Each thread needs proper context association.
// With the runtime API, all host threads share the device's primary context,
// but the main thread can tear it down (e.g. via cudaDeviceReset) while workers still run
void worker_thread() {
    // Check that the context is still usable
    int device;
    if (cudaGetDevice(&device) != cudaSuccess) {
        printf("No valid CUDA context in thread\n");
        return;
    }
    // Now safe to use CUDA
}
// Main thread must keep the context alive until all worker threads finish
int main() {
    std::thread worker(worker_thread);
    worker.join();       // Wait for the thread
    cudaDeviceReset();   // Only after all threads are done
    return 0;
}

Static destructor runs after CUDA runtime cleanup.
static std::vector<float*> g_gpu_buffers;
void allocate() {
    float* ptr;
    cudaMalloc(&ptr, 1024);
    g_gpu_buffers.push_back(ptr);
}
// Static object whose destructor tries to free after the context is destroyed
struct Cleanup {
    ~Cleanup() {
        for (auto ptr : g_gpu_buffers)
            cudaFree(ptr); // Fails - the context is already gone!
    }
};
static Cleanup g_cleanup;

Explicit cleanup before program exit keeps context valid.
static std::vector<float*> g_gpu_buffers;
void cleanup_buffers() {
    for (auto ptr : g_gpu_buffers) {
        if (ptr) cudaFree(ptr);
    }
    g_gpu_buffers.clear();
}
int main() {
    allocate();
    // ... use buffers ...
    cleanup_buffers(); // Explicit cleanup while the context is still valid
    return 0;
}

The CUDA runtime cleans up during program exit. If your destructors or atexit handlers try to use CUDA after this cleanup, the context is already destroyed.
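If you cannot fully control destruction order, a common defensive pattern is to tolerate teardown errors in late-running cleanup code. The sketch below is a minimal illustration, not code from this guide; SafeBuffer is a hypothetical name, and it simply ignores cudaErrorContextIsDestroyed and cudaErrorCudartUnloading in a destructor so late destruction does not turn into a hard failure.

// Defensive destructor: a minimal sketch for when cleanup order cannot be controlled
#include <cuda_runtime.h>
struct SafeBuffer {  // hypothetical example type
    float* d_ptr = nullptr;
    ~SafeBuffer() {
        if (!d_ptr) return;
        cudaError_t err = cudaFree(d_ptr);
        // If the runtime is already shutting down, the driver reclaims device memory
        // anyway - ignore teardown errors instead of treating them as fatal
        if (err != cudaSuccess &&
            err != cudaErrorContextIsDestroyed &&
            err != cudaErrorCudartUnloading) {
            // Report only genuine failures here
        }
        d_ptr = nullptr;
    }
};

Explicit cleanup before exit remains the preferred fix; this pattern only keeps late destructors from failing loudly.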
In PyTorch, delete GPU tensors explicitly before exit, use context managers or try/finally blocks, and call torch.cuda.empty_cache() during cleanup. Consider gc.collect() to force Python garbage collection before the CUDA context is torn down.
You do not usually need to call cudaDeviceReset(). CUDA cleans up automatically at exit; only call it if you need to reinitialize CUDA or explicitly release device resources during program execution.
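When a reset is genuinely needed, for example to release device state between phases of a long-running process, the sketch below shows the safe ordering under that assumption: free every allocation first, reset, and let the runtime recreate the primary context on the next CUDA call. run_phase is a hypothetical placeholder, not part of this guide.

// Minimal sketch of an explicit device reset between phases (run_phase is hypothetical)
#include <cuda_runtime.h>
void run_phase() {
    float* d_buf = nullptr;
    cudaMalloc(&d_buf, 1024 * sizeof(float));
    // ... launch work that uses d_buf ...
    cudaFree(d_buf);       // Free all allocations before resetting
}
int main() {
    run_phase();
    cudaDeviceReset();     // Destroys the primary context and releases device state
    run_phase();           // The runtime lazily recreates the context on the next call
    return 0;
}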
Need help debugging CUDA errors? Download RightNow AI for intelligent error analysis and optimization suggestions.