The LLM component/service queries a Large Language Model (LLM) and generates a response.

To use the LLM service, we first obtain a service instance with the `get_llm` method, passing it the LLM service identifier as `service_name` and specifying the model we want to use with the `model` parameter.

Then, we invoke the `generate` method, passing it our `Prompt`, and receive a response back, also represented as a `Prompt`.

Let’s try it out.


  1. Ensure bodhilib is installed.

  2. Ensure the bodhiext package for OpenAI is installed.

!pip install -q bodhilib bodhiext.openai python-dotenv
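Before running the rest of the notebook, a quick sanity check that the prerequisites imported correctly can save a confusing error later. This is a minimal sketch using only the standard library; note that import names can differ from the pip distribution names above.

```python
import importlib.util

# Import names to verify (assumption: `dotenv` is the import name
# for the python-dotenv distribution)
required = ["bodhilib", "dotenv"]

# find_spec returns None for packages that are not installed
missing = [name for name in required if importlib.util.find_spec(name) is None]

if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All prerequisites installed.")
```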
# setup the environment variables
# input your OpenAI API key when prompted

import os
from getpass import getpass
from dotenv import load_dotenv

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key:")
# import the `get_llm` method from bodhilib package
from bodhilib import get_llm
# Get the service instance for the OpenAI Chat LLM service, using the model `gpt-3.5-turbo`

llm = get_llm("openai_chat", model="gpt-3.5-turbo")
# build the prompt using the `Prompt` class
from bodhilib import Prompt
prompt = Prompt("Hello, how are you doing today? Can you tell me more about yourself?")

# Generate a response from the LLM service using the `generate` method
response = llm.generate(prompt)

# Let's print the response
import textwrap
print(">", textwrap.fill(response.text, 100))
> Hello! I'm an AI language model developed by OpenAI called GPT-3. Although I don't have personal
experiences or emotions, I'm here to assist you with any questions or tasks you might have. My
purpose is to generate human-like text based on the prompts given to me. Is there something specific
you would like to know or discuss? I'm here to help!

🎉 We just generated our first response from an LLM using bodhilib.

Next, let’s explore how to generate a streaming response from the LLM APIs using `PromptStream`.
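As a preview, the consumption pattern for a streaming response looks roughly like the sketch below. The `fake_stream` generator here is a stand-in for a real `PromptStream`, and the commented-out call assumes `generate` accepts a streaming flag; check the bodhilib documentation for the actual signature.

```python
# Sketch: consuming a streaming response chunk by chunk.
# A real call might look like this (an assumption, not confirmed API):
#     stream = llm.generate(prompt, stream=True)

def fake_stream():
    # Stand-in for a PromptStream: yields partial text chunks
    for piece in ["Hello", ", ", "world", "!"]:
        yield piece

chunks = []
for chunk in fake_stream():
    chunks.append(chunk)              # collect each partial chunk
    print(chunk, end="", flush=True)  # display text as it arrives
print()

full_text = "".join(chunks)           # assemble the complete response
```

The key point is that chunks are printed as they arrive rather than waiting for the full response, which is what makes streaming feel responsive in an interactive session.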