GPU Instances
in progress
David Lechevalier
GPUs are now available in public preview! Everyone can now deploy and run GPU-accelerated workloads on Koyeb.
We have also added H100 and A100 GPU cards to our catalog. With 80 GB of VRAM each, these cards are ideal for generative AI workloads including large language models, recommendation models, and video and image generation.
Édouard Bonlieu
We’re excited to announce that Serverless GPUs are available for all your AI inference needs directly through the Koyeb platform!
We're launching with three Instance types (L4, L40S, and V100), starting at $0.50/hr and billed by the second.
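Per-second billing means you pay only for the exact runtime of a workload rather than a full hour. A minimal sketch of how that proration works (the helper function and the 90-second example are illustrative; only the $0.50/hr entry rate comes from this announcement):

```python
# Per-second billing: the hourly rate is prorated down to the second.
def gpu_cost(rate_per_hour: float, seconds: int) -> float:
    """Cost in USD of running an instance for `seconds` at `rate_per_hour`."""
    return rate_per_hour * seconds / 3600

# A 90-second inference burst at the $0.50/hr entry rate:
print(gpu_cost(0.50, 90))  # → 0.0125
```

So a short burst of inference costs fractions of a cent, instead of the full $0.50 an hourly-billed instance would charge.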
Request access to join the GPU serverless private preview.
Yann Léger
planned
If you're interested in V100, L40S, A100, H100, or other GPUs on the platform with autoscaling and scale-to-zero capabilities, let us know and request early access.