In the previous Getting Started guide, we learned how to generate a synchronous response from the LLM using the generate method.

If we want to receive the response as a stream instead, we pass stream=True to the generate method. In this case, we receive a PromptStream that we can iterate over to consume the response as it is generated by the LLM.

Let’s get started.


  1. Ensure bodhilib is installed, along with one of the LLM plugins.

!pip install -q bodhilib bodhiext.openai python-dotenv

# Set up the environment variables
# Input your OpenAI API key when prompted

import os
from getpass import getpass
from dotenv import load_dotenv

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass("Enter your API key: ")
# Get instance of OpenAI Chat LLM service
from bodhilib import get_llm

llm = get_llm("openai_chat", model="gpt-3.5-turbo")
# Generate a streaming response by passing stream=True

response = llm.generate("Write a 30-word introduction on the topic of global warming", stream=True)

# print the streaming response
for chunk in response:
    print(chunk.text, end="")
Global warming is the gradual increase in Earth's surface temperature due to human activities, primarily the excessive release of greenhouse gases, posing a significant threat to the planet's ecosystems and biodiversity.
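Sometimes you want the complete response text after streaming finishes, not just the printed output. A minimal sketch of accumulating the chunks as they arrive is below; since calling the live API requires an OpenAI key, the stream is simulated here with a plain generator of stand-in chunk objects (the loop above only relies on each chunk having a text attribute, which the stand-in mirrors):

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    # stand-in for the chunk objects yielded by the stream;
    # only the .text attribute is used, as in the loop above
    text: str


def fake_stream():
    # simulates llm.generate(..., stream=True) for illustration
    for piece in ["Global warming ", "is the gradual ", "increase..."]:
        yield Chunk(piece)


# print each chunk incrementally while collecting it
parts = []
for chunk in fake_stream():
    print(chunk.text, end="")
    parts.append(chunk.text)

# join the collected chunks into the full response text
full_text = "".join(parts)
print()  # newline after the stream finishes
```

The same pattern works with the real PromptStream returned by generate: iterate once, print as you go, and join the collected pieces at the end.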

🎉 We just generated a streaming response.

Next, let’s check out how we can templatize prompts using PromptTemplate.