The NVIDIA GeForce GTX 980 Ti is Maxwell-generation hardware from 2015. With 2,816 CUDA cores and 6GB of GDDR5 memory, it has severely limited relevance for modern CUDA development: it lacks Tensor Cores, tops out at Compute Capability 5.2, and faces diminishing software support. It is covered here for historical context only and is not recommended for any CUDA work in 2025; this guide provides minimal coverage for users who may still encounter the card.

| Specification | Value |
|---|---|
| Architecture | Maxwell (GM200) |
| CUDA Cores | 2,816 |
| Tensor Cores | 0 |
| Memory | 6GB GDDR5 |
| Memory Bandwidth | 336 GB/s |
| Base / Boost Clock | 1000 / 1075 MHz |
| FP32 Performance | 6.1 TFLOPS |
| FP16 Performance | 0.19 TFLOPS |
| L2 Cache | 3MB |
| TDP | 250W |
| NVLink | No |
| MSRP | $649 |
| Release | June 2015 |
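
As a sanity check on the table above, the peak FP32 figure follows from the core count and boost clock: 2,816 cores × 2 FLOPs per core per cycle (one fused multiply-add) × ~1.075 GHz ≈ 6.05 TFLOPS, in line with the ~6.1 TFLOPS quoted (which assumes a slightly higher typical boost). Below is a minimal sketch of that arithmetic, plus the bandwidth calculation assuming the card's 7 Gbps effective GDDR5 on a 384-bit bus.

```python
# Theoretical peak numbers for the GTX 980 Ti (Maxwell GM200)
cuda_cores = 2816
boost_clock_ghz = 1.075        # reference boost clock
flops_per_core_per_cycle = 2   # one fused multiply-add counts as 2 FLOPs

peak_fp32_tflops = cuda_cores * boost_clock_ghz * flops_per_core_per_cycle / 1000
print(f"Peak FP32: {peak_fp32_tflops:.2f} TFLOPS")    # ~6.05 TFLOPS

bus_width_bits = 384
effective_mem_clock_gbps = 7.0                        # 7 Gbps effective GDDR5
bandwidth_gb_s = bus_width_bits / 8 * effective_mem_clock_gbps
print(f"Memory bandwidth: {bandwidth_gb_s:.0f} GB/s") # 336 GB/s
```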
This code snippet shows how to detect a GTX 980 Ti, check available memory with NVML, and estimate a workable batch size for the 6GB Maxwell (GM200) card.
```python
import torch
import pynvml

# Check whether a CUDA device (such as the GTX 980 Ti) is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
if device.type == 'cuda':
    print(f"Using device: {torch.cuda.get_device_name(0)}")
else:
    print("CUDA not available, running on CPU")

# GTX 980 Ti: Maxwell (GM200), 2,816 CUDA cores, 6GB GDDR5
# Maxwell has no Tensor Cores, so TF32/FP16 acceleration is not available.
# Enable cuDNN autotuning instead so the fastest kernels are selected for fixed input shapes.
torch.backends.cudnn.benchmark = True

# Check available memory via NVML
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"Free memory: {info.free / 1024**3:.1f} GB / 6 GB total")
pynvml.nvmlShutdown()

# Rough batch-size heuristic for the 6GB card
model_memory_gb = 2.0                          # adjust based on your model
batch_multiplier = (6 - model_memory_gb) / 4   # assumes ~4GB per batch unit of 32
recommended_batch = int(batch_multiplier * 32)
print(f"Recommended batch size for GTX 980 Ti: {recommended_batch}")
```

| Task | Performance | Comparison |
|---|---|---|
| Any modern workload | Obsolete | Not applicable in 2025 |
| Legacy FP32 compute | Poor | Outdated performance |
| ML/DL work | Unusable | No Tensor Cores, too old |
| Classical CUDA | Poor | Better options exist |
| Learning | Not recommended | Use modern hardware |
| Production | Obsolete | Replace immediately |

| Use Case | Rating | Notes |
|---|---|---|
| Any CUDA Work | Poor | Not recommended for 2025 |
| ML/DL | Poor | Completely obsolete |
| Learning | Poor | Use modern hardware instead |
| Production | Poor | Replace with current gen |
| Historical Reference | Fair | Educational context only |
| Vintage Collection | Good | Historical significance |
No. This is a decade-old card with Compute Capability 5.2. Many frameworks are dropping Maxwell support. If you have one, consider it a paperweight and invest in at least an RTX 3060 12GB.
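
A quick way to see whether an installed PyTorch build still targets this card is to compare its compute capability against the architectures the binary was compiled for. This is a minimal sketch; the exact `sm_*` strings depend on how your PyTorch wheel was built.

```python
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)   # 5.2 on the GTX 980 Ti
    arch = f"sm_{major}{minor}"
    built_for = torch.cuda.get_arch_list()                # archs compiled into this build
    print(f"Compute capability: {major}.{minor}, build targets: {built_for}")
    if arch not in built_for:
        print("Warning: this PyTorch build does not include kernels for this GPU")
```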
Not recommended. You will miss all modern CUDA features (Tensor Cores, TF32, FP16 acceleration, etc.). Even a GTX 1660 Super at $90 is vastly better. Do not start with Maxwell in 2025.
Only for vintage GPU collectors. For CUDA work, it has no value in 2025. If someone offers it free, politely decline and buy a modern budget card instead.
Museum display, historical reference, or recycling. Not suitable for any CUDA development, machine learning, or modern compute workloads. Technology has advanced significantly since 2015.
- Modern replacement, vastly better
- Budget modern option
- Current gen entry level
- Excellent budget option
Ready to optimize your CUDA kernels for the GTX 980 Ti? Download RightNow AI for real-time performance analysis.