Let's explore an example of integrating promptquality with a LangChain chain.
This example is pulled from the LangChain docs, and most of the code is just the LangChain implementation of a simple chain.
If you are using Vertex AI through LangChain, concurrent requests to Vertex AI LLMs will fail to compute node outputs. Use a single worker for best results.
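One way to enforce this when batching requests through LangChain is to cap concurrency in the run config. A minimal sketch, assuming the chain, inputs, and callback handler built later in this example:

# Hedged sketch: limit LangChain batch execution to a single worker.
# chain, inputs, and prompt_handler are the objects defined later in this example.
chain.batch(
    inputs,
    config=dict(callbacks=[prompt_handler], max_concurrency=1),
)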
Creating a simple chain with LangChain
First, let's build the components of our chain.
We want to ask a chat model a question about hallucinations, but we want to give it the context to answer correctly, so naturally we set up a vector database and use retrieval-augmented generation (RAG). In this case, we'll get the context from a Galileo blog post.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from typing import List
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
from langchain.schema.document import Document

# Load text from webpage
loader = WebBaseLoader("https://www.rungalileo.io/blog/deep-dive-into-llm-hallucinations-across-generative-tasks")
data = loader.load()

# Split text into documents
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
splits = text_splitter.split_documents(data)

# Add text to vector db
embedding = OpenAIEmbeddings()
vectordb = Chroma.from_documents(documents=splits, embedding=embedding)

# Create a retriever
retriever = vectordb.as_retriever()
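Before wiring the retriever into a chain, it can help to sanity-check what it returns for a sample question. A quick sketch (the query string is just an illustration):

# Sanity check: inspect the chunks the retriever returns for a sample question
docs = retriever.get_relevant_documents("What are hallucinations?")
for doc in docs:
    print(doc.page_content[:200])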
Now that we have the retriever, we can build our chain. The chain will:
Take in a question.
Feed that question to our retriever for some context.
Fill out the prompt with the question and context.
Feed the prompt to a chat model.
Output the answer from the model.
def format_docs(docs: List[Document]) -> str:
    return "\n\n".join([d.page_content for d in docs])

template = """Answer the question based only on the following context:

{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()

chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)
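With the chain assembled, you can try it on a single question before running a full batch. A minimal sketch:

# Run the chain on one question; the result is the parsed string answer
answer = chain.invoke("What are hallucinations?")
print(answer)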
Integrating our chain with promptquality
Now, all we have to do to integrate with promptquality is add our callback. In just three lines of code, we can integrate promptquality into any existing LangChain experiment.
import promptquality as pq

# Create callback handler
prompt_handler = pq.GalileoPromptCallback(
    scorers=[pq.Scorers.latency, pq.Scorers.groundedness, pq.Scorers.factuality]
)

# Run your chain experiments across multiple inputs with the Galileo callback
inputs = [
    "What are hallucinations?",
    "What are intrinsic hallucinations?",
    "What are extrinsic hallucinations?",
]
chain.batch(inputs, config=dict(callbacks=[prompt_handler]))

# Publish the results of your run
prompt_handler.finish()
Adding Tools and Agents
More complex chains, including LangChain Tools and Agents, also integrate well with Galileo Evaluate.
First, we can take the retriever we created above and convert it to a tool.
from langchain.tools.retriever import create_retriever_tool

# Create retriever tool
retriever_tool = create_retriever_tool(
    retriever,
    "hallucination_search",
    "Search for information about hallucinations. Use this tool for any questions about hallucinations",
)
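To confirm the tool behaves as expected, you can call it directly before handing it to an agent; a quick sketch (the query text is illustrative):

# Call the retriever tool directly to see the context it returns
print(retriever_tool.run("What are intrinsic hallucinations?"))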
Now let's create a ReAct Agent that has access to this tool.
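The agent-construction code isn't shown here, so the snippet below is only a sketch of one way to do it, assuming a langchain release that provides create_react_agent and access to the LangChain hub's hwchase17/react prompt; it reuses the model and retriever_tool defined above and produces the agent_executor used in the next step.

from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent

# Pull a standard ReAct prompt from the LangChain hub (assumes the langchainhub package is installed)
react_prompt = hub.pull("hwchase17/react")

# Build a ReAct agent around our chat model and the retriever tool
agent = create_react_agent(model, [retriever_tool], react_prompt)

# Wrap the agent in an executor so it can be invoked or batched like any other runnable
agent_executor = AgentExecutor(agent=agent, tools=[retriever_tool], verbose=True)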
Now that we have our agent executor, we're ready to integrate with promptquality in exactly the same way as above.
import promptquality as pq

# Create callback handler
prompt_handler = pq.GalileoPromptCallback(
    scorers=[pq.Scorers.latency, pq.Scorers.groundedness, pq.Scorers.factuality]
)

# Run your agent experiments across multiple inputs with the Galileo callback
inputs = [
    {"input": "What are hallucinations?"},
    {"input": "What are intrinsic hallucinations?"},
    {"input": "What are extrinsic hallucinations?"},
]
agent_executor.batch(inputs, config=dict(callbacks=[prompt_handler]))

# Publish the results of your run
prompt_handler.finish()