Get productive with CUDA development in 3 simple steps
Required for Windows/Linux: CUDA Toolkit 11.0-12.5
Verify your installation:

```bash
nvcc --version
```

macOS users: Use remote GPUs for free or our GPU emulator - no local CUDA installation needed.
Tip: RightNow AI will automatically detect your CUDA installation
```bash
# Create a new project folder
mkdir my-cuda-project
cd my-cuda-project

# Open in RightNow AI
rightnow .
```

Or use File → Open Folder in the UI.
Create a simple hello.cu file:
```cuda
#include <stdio.h>

__global__ void hello() {
    printf("Hello from GPU thread %d!\n", threadIdx.x);
}

int main() {
    hello<<<1, 10>>>();
    cudaDeviceSynchronize();
    return 0;
}
```

Required for AI features: OpenRouter is the only supported BYOK (bring-your-own-key) provider
Alternative: Upgrade to RightNow Pro for managed AI access
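If you want to sanity-check the sample outside the editor, it builds with the standard CUDA compiler (this assumes `nvcc` from the toolkit above is on your PATH; macOS users working against remote GPUs or the emulator can skip this):

```bash
# Build and run the sample from a terminal
nvcc hello.cu -o hello
./hello   # each of the 10 GPU threads prints "Hello from GPU thread <id>!"
```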
The `__global__` qualifier marks `hello()` as a kernel that runs on the GPU; the `<<<1, 10>>>` launch starts 10 threads, each printing its own `threadIdx.x`.

Tip: Try asking the AI: "How can I optimize this kernel for memory bandwidth?"
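For a sense of what that prompt is driving at: the usual first lever for memory bandwidth is coalesced access, often combined with a grid-stride loop so a fixed-size grid can cover any input length. The sketch below only illustrates that pattern; it is not RightNow AI output, and the kernel name, sizes, and launch configuration are made up for the example.

```cuda
#include <cuda_runtime.h>

// Hypothetical bandwidth-friendly kernel (illustrative only): a grid-stride loop
// in which consecutive threads touch consecutive elements, so global-memory
// loads and stores coalesce into wide transactions.
__global__ void scaleKernel(const float* __restrict__ in, float* __restrict__ out,
                            float alpha, int n) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x) {
        out[i] = alpha * in[i];
    }
}

int main() {
    const int n = 1 << 20;                        // 1M floats; size chosen arbitrarily
    float *in = nullptr, *out = nullptr;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    scaleKernel<<<256, 256>>>(in, out, 2.0f, n);  // grid-stride loop covers all n elements
    cudaDeviceSynchronize();
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

Because consecutive threads read and write consecutive addresses, the hardware can merge their accesses into a small number of wide memory transactions, which is what keeps a kernel like this close to peak bandwidth.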
Use Ctrl+K editing and chat for CUDA development
Explore comprehensive NCU (Nsight Compute) metrics and AI optimization (a standalone profiling command is sketched after this list)
Set up Ollama or vLLM for privacy-first development
Access 200+ models through OpenRouter integration
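RightNow AI surfaces NCU metrics inside the editor; if you want a standalone report to compare against, the Nsight Compute CLI can profile the sample built earlier (assuming `ncu` is installed with your CUDA Toolkit and `hello` is the binary from the step above):

```bash
# Collect the full Nsight Compute metric set and write a report file
ncu --set full -o hello_report ./hello   # slower than the default set, but comprehensive
```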
Need help? Try the chat sidebar and ask: "How do I get started with CUDA profiling?"