In this tutorial, we’ll enhance a ChatGPT-like chatbot with a knowledge base. The knowledge base will be used to retrieve relevant information that can be used to generate more informative and contextually relevant responses. This is also known as retrieval-augmented generation (RAG).

Prerequisites

You can read this tutorial as-is to understand the basics of RAG with Galadriel. However, to create a knowledge base you can use in your contract, you will need:

  • Python 3.11 or later installed on your system.
  • A Galadriel devnet account. For more information on setting up a wallet, visit Setting Up A Wallet.
  • Some devnet tokens. Get your free devnet tokens from the Faucet.
  • An API key (JWT) for pinata.cloud to facilitate document uploading to IPFS. You can obtain this key by registering at pinata.cloud.
  • The files you want to add to the knowledge base (see below for supported formats).

What is RAG?

RAG is a solution to a simple problem. Assume you want to make an app that can answer questions based on a particular book. To make an LLM answer questions based on a book, you need the relevant parts of the book to be available to the model, so you need to put these parts into the context window (input prompt to the LLM). However, since input tokens are expensive, you don’t want to put in the whole book on every query.

RAG solves this problem by storing the book in a knowledge base and only retrieving the relevant parts of the book when needed. The straightforward implementation of this is to:

  1. Divide the book into smaller parts (e.g., paragraphs).
  2. Put these paragraphs into an index structure for fast retrieval.
  3. When a query comes in, retrieve the relevant paragraphs from the index and put them into the context window.

Galadriel’s RAG implementation does this and uses the industry-standard approach of creating document embeddings, specifically using OpenAI’s text-embedding-3-small model, to support semantic search (as opposed to simple keyword-based search).

The key parts of the RAG implementation are:

  1. Collecting the text: you upload to IPFS a list of text documents that will comprise the knowledge base.
  2. Building the index: once you give the IPFS link (pointing to a list of documents), our oracle running in a TEE will embed these documents and build an index.
  3. Querying the index: you can query the index by calling a function in your contract, including a reference to the index in IPFS. The oracle will retrieve the relevant documents and return them to you.

The knowledge base itself is stored off-chain in IPFS, while a reference to it is stored on-chain. This guarantees the knowledge base is used correctly: the oracle runs in a TEE, and the IPFS file cannot be modified without changing its CID.

Repository and environment setup

Here’s how to set up your environment to create a knowledge base.

1. If you haven’t already, clone the Galadriel repository:

git clone https://github.com/galadriel-ai/contracts.git
cd contracts/rag_tools

2. Create a virtual environment for Python dependencies:

python -m venv venv
source venv/bin/activate

3. Install the required Python packages:

pip install -r requirements.txt

4. Create a .env file and add the Galadriel oracle address, your wallet private key, and your pinata.cloud JWT as follows:

ORACLE_ADDRESS=galadriel_oracle_address
PRIVATE_KEY=your_wallet_private_key
PINATA_API_KEY=your_api_key_here

ORACLE_ADDRESS should be set to the current Galadriel oracle address, which you can find in the Galadriel documentation.

PRIVATE_KEY will be used to make an indexing request to the oracle. You can use any account as long as it has enough devnet tokens to pay for the transaction.

PINATA_API_KEY will be used to upload the documents to IPFS.

Collect your knowledge base files

Collect all the files you want in your knowledge base into a single directory, for example a kb folder:

$ ls kb/
bitcoin.pdf
ethereum.pdf

You can put anything you want into the knowledge base, as long as it can be converted into text format. The script we provide uses the Unstructured file loader in LangChain to do a default conversion for a list of common formats: text files, PowerPoint presentations, HTML, PDFs, images, and more. However, you can customize the extraction process by modifying the loader in the add_knowledge_base.py script.

Index the knowledge base

Now we can execute the script that extracts the text from the documents in the kb/ directory and uploads them to IPFS. After uploading the documents, the script asks the oracle to index them. Once indexing is complete, the script outputs the IPFS CID of the index.

The execution of the script should look like this:

$ python add_knowledge_base.py -d kb -s 1500
[Loading 2 files from kb.]
Processing Files: 100%|██████████████████████| 2/2 [00:08<00:00,  4.19s/file]
Generated 78 documents from 2 files.
Uploading documents to IPFS, please wait...done.
Requesting indexing, please wait...done.
Waiting for indexing to complete...done.
Knowledge base indexed, index CID `QmdVEfixTc7qf7HQLN6UqxPsiMKSDcdKx3fSjiri2MRAfR`.
Use CID `bafkreiduon46ccwwteicuqwjwikd43f7mt7xzpp6kjdheducgzoex555xa` in your contract to query the indexed knowledge base.

Make sure you store the index IPFS CID from this output as we will need it later — in this example it is bafkreiduon46ccwwteicuqwjwikd43f7mt7xzpp6kjdheducgzoex555xa.

Integrating in your contract

Now that your knowledge base is indexed, we can query it from a contract. To do this, we need to do three things:

  1. Store the index IPFS hash in the contract.
  2. Make a call to the oracle to start the knowledge base query: calling createKnowledgeBaseQuery.
  3. Add a listener to our contract which the oracle will call once the results of the query are ready: implementing onOracleKnowledgeBaseQueryResponse.

We will go through these steps in the next sections. See the oracle reference for a full description of the RAG-related interface.
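
For orientation, here is a bare outline of where these three pieces live in the contract. The contract name is illustrative and each body is filled in with real code in the sections below:

// Outline only; the following sections provide the actual implementations.
contract KnowledgeBaseChatBot {
    address private oracleAddress;
    string public knowledgeBase; // 1. the index IPFS CID, stored on-chain

    function addMessage(string memory message, uint runId) public {
        // 2. start the knowledge base query via IOracle.createKnowledgeBaseQuery
        //    (also done in startChat, as explained below)
    }

    function onOracleKnowledgeBaseQueryResponse(
        uint runId,
        string[] memory documents,
        string memory errorMessage
    ) public {
        // 3. append the retrieved documents to the prompt, then call the LLM
    }
}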

Store the index IPFS hash in the contract

Since every query needs to reference the index, we store the index’s IPFS CID in the contract. This can be done by adding a new state variable to the contract:

string public knowledgeBase;

You might also want to add this into the constructor of your contract:

constructor(address initialOracleAddress, string memory knowledgeBaseCID) {
    owner = msg.sender;
    oracleAddress = initialOracleAddress;
    knowledgeBase = knowledgeBaseCID;
}

At contract deployment time, you will pass in the IPFS CID of the specific knowledge base you created above.

Create a knowledge base query

In your contract, at the point where you want to query the knowledge base, you need to call the createKnowledgeBaseQuery function in the oracle. This function takes the IPFS link to the index as an argument. The oracle will then retrieve the relevant documents and return them to you.

Make sure you have the oracle interface defined in your contract file:

interface IOracle {
    function createKnowledgeBaseQuery(
        uint kbQueryCallbackId,   // ID passed back to your callback (here, the chat run ID)
        string memory cid,        // IPFS CID of the indexed knowledge base
        string memory query,      // text used to search the knowledge base
        uint32 num_documents      // number of documents to retrieve
    ) external returns (uint i);
}
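
The snippets below also call createLlmCall on the oracle to trigger the LLM once the knowledge base documents are available. If your chatbot contract does not already declare it, add a declaration to the same IOracle interface. The following sketch assumes the single-argument createLlmCall(runId) form used in this tutorial; check the oracle reference for the exact signature:

// Add inside the IOracle interface shown above. The parameter name is illustrative;
// verify the exact signature against the oracle reference.
function createLlmCall(uint promptCallbackId) external returns (uint);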

You need to call the createKnowledgeBaseQuery function immediately prior to every call to the LLM (because on every LLM call we want the knowledge base contents to be available). If you’ve built your ChatGPT-like chatbot similarly to our example, that means adding a call to createKnowledgeBaseQuery in two places: startChat and addMessage. Refer to the chatbot tutorial for a full explanation of these methods.

In both places, you should check that the knowledge base CID is non-empty. There is one more subtlety: if there is no knowledge base, you want to call the LLM immediately; if there is one, you need to call the LLM from the onOracleKnowledgeBaseQueryResponse callback instead. (You can skip this check if you always deploy your contract with a non-empty knowledge base.)

if (bytes(knowledgeBase).length > 0) {
    // If there is a knowledge base, create a knowledge base query
    IOracle(oracleAddress).createKnowledgeBaseQuery(
        runId,
        knowledgeBase,
        message,
        3
    );
} else {
    // Otherwise, create an LLM call
    IOracle(oracleAddress).createLlmCall(runId);
}

In the snippet above, message is the text used to query documents in the knowledge base.
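
For context, here is roughly how this branch slots into addMessage if your contract follows the chatbot tutorial. This is a simplified sketch that assumes the ChatRun, Message, and chatRuns definitions from that tutorial; adapt it to your own contract:

// Simplified sketch, assuming the storage layout from the chatbot tutorial.
function addMessage(string memory message, uint runId) public {
    ChatRun storage run = chatRuns[runId];

    // Record the user's message, as in the chatbot tutorial.
    Message memory newMessage;
    newMessage.role = "user";
    newMessage.content = message;
    run.messages.push(newMessage);
    run.messagesCount++;

    if (bytes(knowledgeBase).length > 0) {
        // With a knowledge base: query it first; the LLM is called from the callback.
        IOracle(oracleAddress).createKnowledgeBaseQuery(runId, knowledgeBase, message, 3);
    } else {
        // Without one: call the LLM right away.
        IOracle(oracleAddress).createLlmCall(runId);
    }
}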

Implement the callback

Once the knowledge base query is complete, the oracle will call the onOracleKnowledgeBaseQueryResponse function in your contract. This function should be implemented to handle the results of the query. The results will be a list of plaintext documents.

See the inline comments below for an explanation of this function — which again assumes you are using the ChatGPT-like chatbot from our example.

function onOracleKnowledgeBaseQueryResponse(
    uint runId,
    string[] memory documents,
    string memory errorMessage
) public onlyOracle {
    ChatRun storage run = chatRuns[runId];
    require(
        keccak256(abi.encodePacked(run.messages[run.messagesCount - 1].role)) == keccak256(abi.encodePacked("user")),
        "No message to add context to"
    );
    // Retrieve the last user message
    Message storage lastMessage = run.messages[run.messagesCount - 1];

    // Start with the original message content
    string memory newContent = lastMessage.content;

    // Append "Relevant context:\n" only if there are documents
    if (documents.length > 0) {
        newContent = string(abi.encodePacked(newContent, "\n\nRelevant context:\n"));
    }

    // Iterate through the documents and append each to the newContent
    for (uint i = 0; i < documents.length; i++) {
        newContent = string(abi.encodePacked(newContent, documents[i], "\n"));
    }

    // Finally, set the lastMessage content to the newly constructed string
    lastMessage.content = newContent;

    // Call LLM
    IOracle(oracleAddress).createLlmCall(runId);
}

Putting it all together

That’s it — if you now deploy the contract to the Galadriel Devnet, you can start chatting with your on-chain chatbot.

You’ll find a fully functional KB-enabled chatbot in our example contracts directory: ChatGpt.sol.

What’s Next?

Congratulations on deploying your RAG-enabled chatbot! Explore further:

  • Dive deeper into the Galadriel documentation, particularly the How It Works section, to understand the underlying technology.
  • Review the full RAG reference to understand the full capabilities of the RAG oracle.
  • Experiment with different LLMs, e.g. Groq-hosted open-source LLMs, or take control over the nuances of text generation: see the Solidity reference.
  • Explore other Use Cases to get inspired for your next project.

Happy building!