Doing Machine Learning on a Budget with Vast AI
Spin up Vast AI rigs only when you train, keeping GPU spending close to electricity costs.

If you aspire to learn how to train machine learning models, or just want to try it as a hobby without buying expensive hardware, Vast AI is for you.
Vast AI is a marketplace that connects people and organizations with idle GPU servers to people like us who need compute for training runs. You can rent both consumer and datacenter GPUs there at very low prices. Based on my calculations, the cost is roughly what you’d pay in UK electricity to run the same GPU yourself, and about 4-10x lower than Amazon AWS or Google GCP.
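To put the electricity comparison in concrete terms, here is a back-of-the-envelope sketch. The wattage, tariff, and rental rate below are illustrative assumptions rather than quotes from Vast AI or my actual figures; plug in your own numbers.

```python
# Back-of-the-envelope comparison: running a GPU at home vs. renting by the hour.
# All figures are illustrative assumptions; check your tariff and live listings.

def electricity_cost_per_hour(gpu_watts: float, price_per_kwh: float) -> float:
    """Hourly electricity cost of running a GPU at full load."""
    return (gpu_watts / 1000.0) * price_per_kwh

# Assumed: a ~450 W consumer GPU and a UK domestic tariff of ~£0.28/kWh.
home_cost = electricity_cost_per_hour(gpu_watts=450, price_per_kwh=0.28)
print(f"Home electricity: ~£{home_cost:.2f}/hour")  # ≈ £0.13/hour

# Assumed hourly rental rate for a comparable GPU, for scale only.
rental_rate = 0.30  # $/hour, hypothetical; real prices vary by listing
print(f"Marketplace rental: ~${rental_rate:.2f}/hour")
```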
The ideal workflow is to write code on your local machine, then fire up a server on Vast AI when you’re ready, sync your data, and train. When the job finishes, download the trained model back to your local machine and destroy the server. Next time, spin up a fresh machine and repeat; a scripted sketch of this loop follows the list below. These servers are ephemeral: you only provision one when you need to train, and you should avoid storing anything on them, for two reasons:
- You only pay for the training time, nothing more.
- You might not get the exact same server next time because someone else could rent it first, so don’t rely on persistent storage.
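Here is the scripted version of that loop: a minimal Python sketch assuming you have already rented an instance through the Vast AI console and copied its SSH host and port. The hostname, port, and paths are hypothetical placeholders, and `train.py` stands in for your own training script.

```python
# Minimal rent-train-destroy loop: push code, train remotely, pull the model back.
# SSH_HOST, SSH_PORT, and the paths below are placeholders; copy the real SSH
# details from your instance in the Vast AI console.
import subprocess

SSH_HOST = "root@ssh4.vast.ai"   # placeholder host
SSH_PORT = "12345"               # placeholder port
REMOTE_DIR = "/workspace/project"

def run(cmd: list[str]) -> None:
    """Run a command locally and stop if it fails."""
    subprocess.run(cmd, check=True)

# 1. Sync your code and data up to the freshly rented server.
run(["rsync", "-avz", "-e", f"ssh -p {SSH_PORT}",
     "./project/", f"{SSH_HOST}:{REMOTE_DIR}/"])

# 2. Run the training job remotely.
run(["ssh", "-p", SSH_PORT, SSH_HOST, f"cd {REMOTE_DIR} && python train.py"])

# 3. Download the trained model back to your local machine.
run(["rsync", "-avz", "-e", f"ssh -p {SSH_PORT}",
     f"{SSH_HOST}:{REMOTE_DIR}/model.pt", "./artifacts/"])

# 4. Destroy the instance (from the console, or the vastai CLI if you use it)
#    so you stop paying and nothing is left behind.
```

Because the server is treated as disposable, everything of value ends up back on your local machine at the end of every run.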
Of course, there are security and privacy questions, since the server owner can see what’s happening. So:
- Don’t train on sensitive data there, and don’t store private keys or credentials. If you’re just learning or experimenting, your data probably isn’t sensitive anyway.
- Choose machines in verified datacenters run by trusted organizations to improve security, but still follow the previous point.
That way, you can enjoy your ML journey without buying GPUs upfront or worrying about them depreciating over the next couple of years.



