Changelog
galadriel-node 0.0.19
😎 Improvements
Fixed Python dependencies
galadriel-node 0.0.18
😎 Improvements
Small README updates
Added the missing power_percent field to the health check JSON
Changed the log level from error to info for protocol handling
Upgraded the openai dependency
Fixed log lines being duplicated by RichHandler and the default root handler (see the sketch below)
Improvements and refactoring
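The duplicate-log fix usually comes down to making RichHandler the only handler on the root logger, so each record is rendered exactly once. A minimal sketch of that pattern, assuming Python's standard logging module and the rich library; the actual galadriel-node fix may differ:

```python
import logging

from rich.logging import RichHandler

# If a default handler is already attached to the root logger (e.g. from an
# earlier logging.basicConfig call), adding a RichHandler on top of it makes
# every record print twice. Replacing the root handlers avoids that.
logging.basicConfig(
    level=logging.INFO,
    format="%(message)s",
    datefmt="[%X]",
    handlers=[RichHandler(rich_tracebacks=True)],
    force=True,  # drop any previously attached root handlers
)

log = logging.getLogger("galadriel")  # logger name is a placeholder
log.info("protocol message handled")  # rendered once, colored, with a timestamp
```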
galadriel-node 0.0.17
🚀 New Features
Report power limit and utilization of GPUs (see the sketch below)
Adds a benchmarking CLI command
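GPU power limit and utilization can be read through NVIDIA's NVML bindings. A minimal sketch, assuming the pynvml package; the node's actual reporting code may differ, and only the power_percent name is taken from the 0.0.18 health-check entry above:

```python
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        power_mw = pynvml.nvmlDeviceGetPowerUsage(handle)          # current draw, milliwatts
        limit_mw = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle)  # enforced limit, milliwatts
        util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu    # percent busy
        power_percent = 100.0 * power_mw / limit_mw
        print(f"GPU {i}: utilization={util}% power={power_percent:.1f}% of limit")
finally:
    pynvml.nvmlShutdown()
```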
😎 Improvements
Improves the node status report
Sets vLLM GPU utilization to 0.95 to support bigger prompt sizes
Updates the vLLM version
Logging fixes in the ping protocol
galadriel-node 0.0.16
🚀 New Features
Multiple backends were set up, and the node now auto-reconnects to the optimal one, which reduces the latency of the backend-node connection (see the sketch below).
Adds LMDeploy support, which, based on our tests, serves inference responses around 15% faster
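One way to pick the optimal backend is to probe each endpoint and connect to the one with the lowest measured latency. A minimal sketch of that idea, assuming the httpx package and placeholder backend URLs; the node's real selection and reconnection logic may differ:

```python
import time

import httpx

# Hypothetical endpoints for illustration only.
BACKENDS = [
    "https://backend-eu.example.com",
    "https://backend-us.example.com",
]

def measure_latency(url: str) -> float:
    """Return the round-trip time of one request, or infinity if unreachable."""
    start = time.monotonic()
    try:
        httpx.get(url, timeout=5.0)
    except httpx.HTTPError:
        return float("inf")
    return time.monotonic() - start

best = min(BACKENDS, key=measure_latency)
print(f"connecting to {best}")
```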
😎 Improvements
Redesigned logging to be more useful: logs are now colored and contain timestamps. Every command also accepts a --debug flag, and exceptions print stack traces to give more visibility and make debugging easier (see the sketch below).
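A minimal sketch of how a --debug flag can switch the log level and surface full stack traces, assuming argparse plus RichHandler wiring; the actual CLI implementation in galadriel-node may differ:

```python
import argparse
import logging

from rich.logging import RichHandler

parser = argparse.ArgumentParser()
parser.add_argument("--debug", action="store_true", help="verbose logs and stack traces")
args = parser.parse_args()

logging.basicConfig(
    level=logging.DEBUG if args.debug else logging.INFO,
    format="%(message)s",
    datefmt="[%X]",  # timestamps on every record
    handlers=[RichHandler(rich_tracebacks=args.debug)],
)
log = logging.getLogger(__name__)

try:
    raise RuntimeError("example failure")  # placeholder for real command work
except RuntimeError:
    if args.debug:
        log.exception("command failed")  # includes the full stack trace
    else:
        log.error("command failed")
```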
🐛 Bug Fix
Code cleanups
galadriel-node 0.0.15
🚀 New Features
Report node health
😎 Improvements
Reuse the client for LLM inference (see the sketch below)
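Reusing a single client keeps one HTTP connection pool alive across requests instead of paying connection setup on every inference call. A minimal sketch, assuming an OpenAI-compatible endpoint; the base URL and model name are placeholders:

```python
from openai import AsyncOpenAI

# Created once at startup and shared by every inference request.
client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="placeholder")

async def run_inference(prompt: str) -> str:
    response = await client.chat.completions.create(
        model="placeholder-model",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```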
🐛 Bug Fix
Some code cleanup
galadriel-node 0.0.14
🚀 New Features
Report GPU count
😎 Improvements
Increased the minimum reconnection backoff time from 4 seconds to 24 seconds
Made required benchmarks model-specific
🐛 Bug Fix
Made the node always try to reconnect (see the sketch below)
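An always-retry loop with the raised minimum backoff could look like the sketch below; the connect() coroutine and the upper backoff bound are placeholders, not the node's actual implementation:

```python
import asyncio
import logging

MIN_BACKOFF = 24.0   # seconds; minimum raised from 4 s in 0.0.14
MAX_BACKOFF = 300.0  # hypothetical upper bound

log = logging.getLogger(__name__)

async def connect() -> None:
    """Placeholder for the real backend connection coroutine."""
    raise ConnectionError("backend unreachable")

async def run_forever() -> None:
    backoff = MIN_BACKOFF
    while True:  # never give up: retry after every failure
        try:
            await connect()
            backoff = MIN_BACKOFF  # reset after a successful session
        except ConnectionError as exc:
            log.info("connection lost (%s); retrying in %.0f s", exc, backoff)
            await asyncio.sleep(backoff)
            backoff = min(backoff * 2, MAX_BACKOFF)
```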