30 hours per week of free GPU time. Plus about 20–30 hours per week of TPU v3-8 time. That’s the core of Kaggle free GPU access, and it’s one of the easiest ways to get real accelerator hours without a credit card.
ML engineers testing training loops, founders trying to stretch runway, and students who just need a place to fine-tune a model can all get value here. The setup is basically “open a browser notebook, pick a GPU or TPU, run.” Phone verification is the only real gate.
This guide covers eligibility, the exact signup steps, what Kaggle’s accelerators can and can’t do, and how to squeeze the most training out of your weekly quota.
Program at a Glance
| Item | Details |
|---|---|
| Provider | Google (via Kaggle) |
| Credit Amount | 30 GPU hrs/week + 20–30 TPU hrs/week |
| Duration | Rolling weekly quota reset |
| Eligibility | Verified Kaggle account with phone number |
| Credit Card Required? | No. Never required for Kaggle notebooks. |
| Difficulty | Easy. Phone verification unlocks accelerators. |
| Best For | Model training, fine-tuning, competitions |
| Official Page | Google Program Page |
What You Actually Get
Kaggle gives every verified account holder weekly access to NVIDIA GPUs and a TPU v3-8 accelerator inside browser-based Jupyter notebooks. Your GPU pool is shared across NVIDIA Tesla P100 (16 GB) and a dual NVIDIA T4 setup (T4 x2 in beta, 32 GB total VRAM). On the TPU side, you get TPU v3-8 (128 GB HBM across 8 cores), with a floating weekly quota that can vary a bit depending on demand. You also get background execution via “Save & Run All (Commit),” so a training run can keep going after you close the tab.
In practical terms, the 16–32 GB VRAM options are enough for a lot of serious work: QLoRA fine-tuning for 7B–13B models, classic CV training (ResNet, EfficientNet), and plenty of iterative experimentation. The time caps (9 hours per GPU/TPU session) matter, but with checkpoints and commits you can usually stitch progress across sessions without too much pain.
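The "stitch progress across sessions" pattern boils down to: resume from a checkpoint if one exists, train, and write checkpoints atomically so an interrupted session never leaves a corrupt file behind. A minimal framework-agnostic sketch, assuming a toy `train_step` stand-in (on Kaggle you would point `CKPT` at a path under `/kaggle/working/`):

```python
import json
import os

CKPT = "checkpoint.json"  # on Kaggle: /kaggle/working/checkpoint.json

def load_state():
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)
    return {"step": 0, "loss": None}

def save_state(state):
    """Write to a temp file, then rename: a killed session can't corrupt it."""
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CKPT)

def train_step(step):
    # Stand-in for a real forward/backward pass.
    return 1.0 / (step + 1)

state = load_state()
for step in range(state["step"], 1000):
    state["loss"] = train_step(step)
    state["step"] = step + 1
    if state["step"] % 100 == 0:
        save_state(state)
save_state(state)
print(state["step"])  # → 1000
```

Re-running the same notebook after a session ends picks up from the saved step instead of step 0.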
Who Qualifies (and Who Doesn’t)
If you can create a Kaggle account and verify your phone number, you qualify for the GPU/TPU accelerators. Kaggle keeps it simple on purpose. The big restriction is that verification is mandatory for accelerators, and Kaggle enforces it pretty strictly to prevent abuse.
- You need a Kaggle account created through kaggle.com using a supported signup method.
- Phone verification via SMS is required to unlock GPU and TPU accelerators.
- Expect “one phone number per account” enforcement, which Kaggle uses as an anti-abuse control.
- No billing account is needed, and no credit card is ever required for Kaggle notebook access.
If you can’t complete phone verification (some carriers and many VoIP numbers reportedly fail), you will be stuck on CPU notebooks only. Also, if you try to create multiple accounts to farm quota, that “one number per account” rule is designed to stop you.
How to Sign Up
Registration is quick, but do the phone verification before you expect a GPU to show up.
- Go to kaggle.com and click Register.
- Sign up with a Google account, email, or another supported method (it’s completely free).
- Navigate to your profile settings (click your avatar, then Settings).
- Scroll to Phone Verification and verify your phone number via SMS code.
- Once verified, open or create any notebook at kaggle.com/code, click Settings in the right sidebar, then choose an accelerator in the Accelerator dropdown (GPU P100, GPU T4 x2, or TPU v3-8).
After verification, accelerators become selectable per notebook. If verification doesn’t work, try a different carrier-backed number; Kaggle commonly rejects VoIP numbers.
What the Credits Cover
Kaggle’s “credits” are really weekly compute quotas tied to notebook sessions. You can run GPU notebooks on P100 or T4 x2, or run TPU notebooks on TPU v3-8, all inside Kaggle’s hosted environment. CPU notebooks are available without a weekly cap, with only a per-session limit.
| Service / Feature | What It Does | Included? |
|---|---|---|
| NVIDIA GPU notebooks (P100, T4 x2) | Train and run deep learning workloads with CUDA GPUs. | ✓ |
| TPU v3-8 notebooks | Accelerate TensorFlow/JAX workloads on TPU hardware. | ✓ |
| Background execution (Commit) | Runs notebooks in the background after closing the tab. | ✓ |
| Internet access toggle | Allow pip installs and downloads when enabled in Settings. | Partial |
Notable exclusions: you don’t get a persistent VM, you can’t SSH in, and Kaggle isn’t a deployment platform. It’s built for experiments and training runs, not production serving.
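Because the "credits" are a weekly pool with a 9-hour per-session cap, a long training run has to be planned as a series of session-sized chunks. A small sketch of that arithmetic, using the limits described above:

```python
from datetime import timedelta

WEEKLY_GPU = timedelta(hours=30)   # Kaggle's weekly GPU pool
MAX_SESSION = timedelta(hours=9)   # per-session cap for GPU/TPU notebooks

def plan_sessions(run_length: timedelta, budget: timedelta = WEEKLY_GPU):
    """Split a long run into session-sized chunks that fit the weekly budget."""
    if run_length > budget:
        raise ValueError("run exceeds this week's quota; wait for the reset")
    sessions = []
    remaining = run_length
    while remaining > timedelta(0):
        chunk = min(remaining, MAX_SESSION)
        sessions.append(chunk)
        remaining -= chunk
    return sessions

# A 20-hour run fits inside the 30-hour week but needs three sessions:
print(plan_sessions(timedelta(hours=20)))  # → chunks of 9h, 9h, and 2h
```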
Limitations to Know About
Every free compute program has trade-offs. Kaggle’s are reasonable, but you should know them before you plan a week of training around it.
- There is no persistent VM, so each session starts fresh and you rely on commits for saved outputs.
- No SSH or terminal access is provided, which means you work inside the notebook UI.
- GPU/TPU sessions cap at 9 hours, and CPU-only sessions cap at 12 hours.
- Kaggle uses a floating TPU quota and can also reduce available GPU hours during peak demand.
When you run out of GPU quota mid-week, you don’t get billed. You simply lose GPU access until your rolling weekly window resets, but CPU notebooks keep working. If your session hits the time limit or you go inactive in an interactive session, Kaggle may terminate the session after an “Are you still there?” prompt, so long unattended runs should be done via commit.
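To avoid losing the tail of a run to the 9-hour cap, it helps to watch the clock inside the training loop and stop early with enough margin to write a final checkpoint. A stdlib-only sketch (the 10-minute margin and the 5-step demo limit are arbitrary choices, not Kaggle values):

```python
import time

SESSION_CAP_S = 9 * 3600   # Kaggle GPU/TPU session cap, in seconds
MARGIN_S = 10 * 60         # leave time for a final checkpoint before the cut

start = time.monotonic()

def out_of_time() -> bool:
    """True once the session is close enough to the cap to wrap up."""
    return time.monotonic() - start > SESSION_CAP_S - MARGIN_S

steps = 0
while not out_of_time() and steps < 5:  # step limit keeps this demo short
    steps += 1  # real training work would go here
print(steps)  # → 5
```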
Have Unused Google Credits?
Kaggle itself doesn’t hand you a transferable “credit balance,” but many teams also sit on Google Cloud or Google startup credits they never fully burn before expiration. It happens a lot with accelerator-heavy workloads: you migrate, priorities change, or the quota clock runs out. If you have unused Google credits from other programs or agreements, AI Credit Mart lets you list them so they don’t just expire worthless. Honestly, it’s a better outcome than watching a five-figure allocation disappear.
Need More Google Credits?
If Kaggle’s weekly quota isn’t enough, the next step usually costs money somewhere. You can apply for larger Google programs, or you can buy surplus credits at a discount. AI Credit Mart lists discounted Google credits from organizations that can’t use their full allocations, typically around 30–70% below retail. That can buy you time while you decide whether to move to a full production setup.
Tips for Getting the Most Out of Your Credits
- Use background execution by saving a version and selecting “Save & Run All (Commit)” so your run won’t die when you close the tab.
- Enable mixed precision (AMP) on T4 notebooks, because the Tensor Cores sit idle if you stay in FP32.
- Save checkpoints frequently to /kaggle/working/ so you can resume after the 9-hour session cap.
- Chain notebooks by saving checkpoints as a Kaggle dataset, then loading them into a new notebook to continue training.
- Monitor your remaining GPU/TPU hours in the notebook Settings panel before you kick off a long run.
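The "chain notebooks" tip works because a committed notebook's outputs can be published as a Kaggle dataset, which the next notebook attaches read-only under `/kaggle/input/`. A sketch of the restore step at the top of the follow-on notebook (the dataset and file names here are hypothetical):

```python
import os
import shutil

# The previous run's output, attached as a read-only dataset (hypothetical name):
PREV = "/kaggle/input/my-run-checkpoints/checkpoint.json"
# The writable copy this session will train from and overwrite:
WORK = "/kaggle/working/checkpoint.json"

def restore_checkpoint(prev: str = PREV, work: str = WORK) -> bool:
    """Copy the prior run's checkpoint into the working dir, if present.

    Returns True when a copy happened, False when there was nothing to
    restore or a working copy already exists.
    """
    if os.path.exists(prev) and not os.path.exists(work):
        os.makedirs(os.path.dirname(work), exist_ok=True)
        shutil.copy(prev, work)
        return True
    return False
```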
Frequently Asked Questions
What are Kaggle’s free GPU and TPU credits actually worth?
They’re worth 30 hours/week of NVIDIA GPU time plus about 20–30 hours/week of TPU v3-8 time, which is often enough for a full week of experiments or a few serious training runs. In terms of capability, you’re getting access to a P100 (16 GB) or dual T4s (32 GB total) for PyTorch/TensorFlow training, and a TPU v3-8 (128 GB HBM) for TensorFlow/JAX workloads. The real “value” comes from the environment too: pre-installed ML libraries, easy dataset attachment, and background execution for long runs. If you checkpoint well, you can push surprisingly far without paying anything.
Do you need a credit card?
No. Kaggle never requires a credit card for notebook access, including GPU and TPU sessions.
Do the credits expire?
For Kaggle notebooks, GPU and TPU access runs on a rolling weekly window that resets over time rather than expiring on a single date. CPU notebooks don’t have a weekly cap, only a per-session limit.
Can you sell unused Google credits?
Yes. If you have Google credits you won’t use before they expire, you can list them on AI Credit Mart and sell them at up to 70% of face value. Companies regularly list surplus credits from startup programs and enterprise agreements.
Where can you buy discounted Google credits?
AI Credit Mart has discounted Google credits available from companies with surplus allocations. Prices are typically 30–70% below retail.
What happens when your quota runs out?
On Kaggle, you don’t get charged when your GPU/TPU quota runs out; you simply lose accelerator access until the rolling weekly quota window refreshes.
Which accelerators does Kaggle offer?
Kaggle offers NVIDIA Tesla P100 (16 GB), NVIDIA T4 x2 in beta (2 × 16 GB), and TPU v3-8 (128 GB HBM across 8 cores) as notebook accelerators once your account is phone-verified.
How do you keep a notebook running in the background?
Use “Save Version” and choose “Save & Run All (Commit)” so the notebook runs in the background. Interactive sessions can prompt “Are you still there?” after inactivity and may terminate if you don’t confirm, which is brutal if you walked away for lunch. Commit runs still obey the session caps (9 hours for GPU/TPU and 12 hours for CPU), so save checkpoints to /kaggle/working/ and plan to resume in a new session if needed.
Kaggle’s free GPU and TPU quota is real compute with a low barrier to entry. Verify your phone, use commits, checkpoint often, and you can get a lot done before you spend a dollar.
Your AI credits are losing value every day
Join the marketplace and start trading unused credits today.