LangChain with LocalAI

Published June 17, 2023

LocalAI runs LLMs locally and exposes a REST API that is compatible with the OpenAI API, so OpenAI clients such as LangChain can talk to it unchanged.
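Before the LangChain code below will work, a LocalAI server must be listening on the endpoint we configure. One common way to bring one up is the project's Docker image; the image tag, flags, and model directory here are assumptions based on the LocalAI README at the time of writing, so check the current docs before copying.

```shell
# Assumed invocation from the LocalAI README (tag and flags may have changed):
# serve models placed in ./models on port 8080.
docker run -ti --rm \
  -p 8080:8080 \
  -v $PWD/models:/models \
  quay.io/go-skynet/local-ai:latest \
  --models-path /models
```

Once the container is up, the OpenAI-compatible API is available at http://localhost:8080/v1, which is the base URL used in the rest of this post.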

Create a chat model

For a locally hosted model we don't need a real OpenAI API key, but we do need to point the client at the model's endpoint.

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

model = 'nous-hermes-13b.ggmlv3.q4_0.bin'
openai_api_key = 'dummy api key'
openai_api_base = 'http://localhost:8080/v1'

chat = ChatOpenAI(
    model=model,
    openai_api_key=openai_api_key,
    openai_api_base=openai_api_base,
    temperature=0.0,
)

Use a prompt template to create messages

template_string = """
Translate the text that is delimited by triple backticks into French.

text:
```{text}```
"""
prompt_template = ChatPromptTemplate.from_template(template_string)
user_prompt = """
Hello, how are you?
"""
messages = prompt_template.format_messages(text=user_prompt)
print(messages)
[HumanMessage(content='\nTranslate the text that is delimited by triple backticks into French.\n\ntext:\n```\nHello, how are you?\n```\n', additional_kwargs={}, example=False)]
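As the printed HumanMessage shows, format_messages fills the {text} placeholder with the user prompt. The substitution itself behaves like ordinary Python str.format interpolation; a minimal plain-Python sketch of that step (no LangChain required, and the shortened template string here is illustrative, not the one used above):

```python
# Plain-Python sketch of the placeholder filling that
# ChatPromptTemplate.format_messages performs on the template.
template_string = "Translate the text below into French.\n\ntext: {text}"
user_prompt = "Hello, how are you?"

# LangChain wraps the filled-in string in a HumanMessage; the filling
# itself is equivalent to str.format with the named placeholder.
content = template_string.format(text=user_prompt)
print(content)
```

This is why the message list above contains a single HumanMessage whose content is the template with the user prompt spliced in.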
import time

start_time = time.perf_counter()
response = chat(messages)
elapsed = time.perf_counter() - start_time

print(f'Time elapsed: {elapsed}')
print(response.content)
Time elapsed: 20.956661384552717
translation:
Bonjour, comment vas-tu ?
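The perf_counter timing pattern used above can be factored into a small helper if you plan to benchmark several prompts; a sketch (the chat call is stood in for by an arbitrary callable, so nothing here is LangChain-specific):

```python
import time

def timed(fn, *args, **kwargs):
    """Call fn and return (result, elapsed_seconds), mirroring the
    start/stop perf_counter pattern used in the post."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# With the chat model this would be: response, elapsed = timed(chat, messages)
# Demonstrated here with a trivial callable:
result, elapsed = timed(sum, [1, 2, 3])
print(result, elapsed)
```

With a local 13B ggml model on CPU, per-request latencies in the tens of seconds (as seen above) are normal, so timing each call is worth keeping around while experimenting.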