About
Overview
Welcome to the official documentation for Galadriel: Infinite LLM Inference Network.
Galadriel is building the world’s largest distributed LLM inference network, enabling developers to scale their LLM apps while cutting costs.
The problem Galadriel solves
When running LLM apps in production, developers face three major problems:
- Running LLM inference at scale is expensive
- Many REST API providers (e.g., Groq) push startups toward enterprise deals
- Cloud and REST API providers are running out of GPUs and can’t provide enough throughput, or can only do so at a steep markup
As developers, we have faced these problems ourselves many times, so we believe solving them is important.
Infinite LLM Inference
By leveraging a distributed network of GPUs, Galadriel enables devs to tap into a vast amount of FLOPS with a single line of code change.
Galadriel has four key features:
- Scalability: as a developer, you never have to worry about hitting API rate limits or needing to sign an enterprise deal. The network scales dynamically with load, so you can make effectively unlimited LLM requests as long as you can pay for them.
- Cost-savings: We have aggregated thousands of datacenter and consumer-grade GPUs and cut out the middlemen, which enables us to pass the cost savings down to developers. We are constantly working to lower prices further so LLMs become more accessible.
- High-availability: Galadriel runs on a robust, fault-tolerant distributed network that keeps your production application operational.
- Easy-integration: The Galadriel API is mostly OpenAI-compatible, so it’s simple to integrate into your existing applications.
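Because the API is described as mostly OpenAI-compatible, integrating it can be sketched as below. The base URL and model name here are placeholders for illustration, not values taken from this document; check Galadriel’s API reference for the real ones.

```python
import json

# NOTE: both values below are hypothetical placeholders --
# consult Galadriel's API docs for the actual endpoint and model names.
GALADRIEL_BASE_URL = "https://api.galadriel.example/v1"  # assumption
GALADRIEL_MODEL = "llama-3-8b"                           # assumption

# An OpenAI-style chat completion request body, which an
# OpenAI-compatible API is expected to accept as-is.
payload = {
    "model": GALADRIEL_MODEL,
    "messages": [
        {"role": "user", "content": "Say hello in one sentence."}
    ],
}
body = json.dumps(payload)

# With the official OpenAI Python SDK, the "single line of code change"
# would be pointing base_url at Galadriel instead of the OpenAI default:
#
#   from openai import OpenAI
#   client = OpenAI(base_url=GALADRIEL_BASE_URL, api_key="YOUR_KEY")
#   resp = client.chat.completions.create(**payload)
```

The key point is that no request or response handling code changes; only the client’s base URL (and API key) differ.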