
RunPod Blog.

Our team’s insights on building better and scaling smarter.

How Much GPU VRAM Does Your LLM Need? (Complete Guide)

Learn how much GPU VRAM your LLM actually needs for training and inference, plus how to choose the right GPU for your workload.
Read article
AI Workloads
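Before you read the full guide, a quick back-of-envelope sketch (our illustration, not the article's method): for inference, weights-only VRAM is roughly parameter count times bytes per parameter, plus overhead for the KV cache, activations, and CUDA context. The helper below assumes fp16/bf16 weights (2 bytes per parameter) and a ~20% overhead factor; both the function name and the numbers are assumptions for illustration.

# Rough weights-only VRAM estimate for LLM inference (hypothetical helper).
# Assumes fp16/bf16 weights (2 bytes/param) and ~20% overhead for the
# KV cache, activations, and CUDA context -- real usage varies by workload.
def estimate_inference_vram_gb(num_params_billions: float,
                               bytes_per_param: int = 2,
                               overhead: float = 1.2) -> float:
    # billions of params * bytes/param = GB of weights; scale by overhead
    return num_params_billions * bytes_per_param * overhead

# Example: a 7B model in fp16 -> 7 * 2 * 1.2 = 16.8 GB,
# so a 24GB card fits comfortably.
print(f"{estimate_inference_vram_gb(7):.1f} GB")

Training needs far more than this: gradients and optimizer state can multiply the weight footprint several times over, which is part of what the guide covers.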

H200 Tensor Core GPUs Now Available on RunPod

We're pleased to announce that the H200 is now available on RunPod at $3.99/hr in Secure Cloud. This GPU raises the available VRAM for NVIDIA-based workloads to 141GB in a single unit, along with increased memory bandwidth.
Read article
Hardware & Trends

What's New for Serverless LLM Usage in RunPod in 2025?

Of all the use cases for our serverless architecture, LLMs are one of the best examples. Because so much of LLM use depends on the human on the other end processing, digesting, and typing a response, you save so much...
Read article
AI Workloads

5090s Are Almost Here: How Do They Shape Up Against the 4090?

Another year has come, and another new card generation from NVIDIA is on the way. 5090s are due to become widely available this January...
Read article
Learn AI

Build what’s next.

The most cost-effective platform for building, training, and scaling machine learning models—ready when you are.