LocalAI is able to run LLMs locally and offers a REST API compatible with the OpenAI API.

Create a chat model

For a locally hosted model, we don't need to specify OpenAI's API key, but we do need to specify the endpoint of the model.

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

model = 'nous-hermes-13b.ggmlv3.q4_0.bin'
openai_api_key = 'dummy api key'
openai_api_base = 'http://localhost:8080/v1'

chat = ChatOpenAI(
    model=model,
    openai_api_key=openai_api_key,
    openai_api_base=openai_api_base,
    temperature=0.0)
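Under the hood, ChatOpenAI posts to the OpenAI-compatible /v1/chat/completions route that LocalAI exposes. As a sanity check of the endpoint wiring, the equivalent raw request can be built with the standard library alone; the payload field names follow the OpenAI chat completions API, and the request is only constructed here, not sent:

```python
import json
import urllib.request

openai_api_base = 'http://localhost:8080/v1'

# The same information ChatOpenAI sends: model name, messages, temperature.
payload = {
    'model': 'nous-hermes-13b.ggmlv3.q4_0.bin',
    'messages': [{'role': 'user', 'content': 'Hello, how are you?'}],
    'temperature': 0.0,
}

request = urllib.request.Request(
    f'{openai_api_base}/chat/completions',
    data=json.dumps(payload).encode('utf-8'),
    headers={'Content-Type': 'application/json',
             'Authorization': 'Bearer dummy api key'},
    method='POST',
)

print(request.full_url)  # → http://localhost:8080/v1/chat/completions
```

Because LocalAI does not check the key, any placeholder value works in the Authorization header.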
Use prompt template to create messages
template_string = """
Translate the text that is delimited by triple backticks into French.

text:
```{text}```
"""

prompt_template = ChatPromptTemplate.from_template(template_string)
user_prompt = """
Hello, how are you?
"""

messages = prompt_template.format_messages(text=user_prompt)
print(messages)
[HumanMessage(content='\nTranslate the text that is delimited by triple backticks into French.\n\ntext:\n```\nHello, how are you?\n```\n', additional_kwargs={}, example=False)]
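The substitution itself is plain Python string formatting: format_messages fills the {text} placeholder and wraps the result in a HumanMessage. A stdlib-only sketch of that fill step, with no LangChain required (the backtick delimiter is built programmatically only to keep this snippet self-contained):

```python
delim = '`' * 3  # the literal triple backticks used in the template

template_string = (
    '\nTranslate the text that is delimited by triple backticks into French.'
    '\n\ntext:\n' + delim + '{text}' + delim + '\n'
)

user_prompt = """
Hello, how are you?
"""

# str.format fills each {placeholder}; format_messages additionally
# wraps the formatted string in a HumanMessage.
content = template_string.format(text=user_prompt)
print(content)
```

The printed string matches the content field of the HumanMessage shown above.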
import time

start_time = time.perf_counter()
response = chat(messages)
elapsed = time.perf_counter() - start_time
print(f'Time elapsed: {elapsed}')
print(response.content)
Time elapsed: 20.956661384552717
translation:
Bonjour, comment vas-tu ?
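When timing several calls, the perf_counter bookkeeping above can be wrapped in a small context manager. This Timer helper is a hypothetical convenience for this tutorial, not part of LangChain; time.sleep stands in for the chat(messages) call:

```python
import time


class Timer:
    """Context manager that records elapsed wall-clock time."""

    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, *exc):
        self.elapsed = time.perf_counter() - self.start
        return False


# Usage: the sleep stands in for a slow model call.
with Timer() as t:
    time.sleep(0.05)
print(f'Time elapsed: {t.elapsed}')
```

For a local 13B-parameter model on CPU, per-call latencies of tens of seconds, as in the run above, are not unusual.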