Chat¶
Looking for example notebooks? Check out examples/ai/chat on our GitHub.
The chat UI element provides an interactive chatbot interface for conversations. It can be customized with different models, including built-in AI models from popular providers or custom functions.
- class marimo.ui.chat(model: Callable[[List[ChatMessage], ChatModelConfig], object], *, prompts: List[str] | None = None, on_message: Callable[[List[ChatMessage]], None] | None = None, show_configuration_controls: bool = False, config: ChatModelConfigDict | None = None, allow_attachments: bool | List[str] = False, max_height: int | None = None)¶
A chatbot UI element for interactive conversations.
Example: Using a custom model.
Define a chatbot by implementing a function that takes a list of
ChatMessage objects and optionally a config object as input, and returns
the chat response. The response can be any object, including text, plots,
or marimo UI elements.

def my_rag_model(messages, config):
    # Each message has a `content` attribute, as well as a `role`
    # attribute ("user", "system", "assistant")
    question = messages[-1].content
    docs = find_docs(question)
    prompt = template(question, docs, messages)
    response = query(prompt)
    if is_dataset(response):
        return dataset_to_chart(response)
    return response

chat = mo.ui.chat(my_rag_model)
Async functions and async generators are also supported, meaning these are both valid chat functions:
async def my_rag_model(messages):
    return await my_async_function(messages)
async def my_rag_model(messages):
    async for response in my_async_iterator(messages):
        yield response
The last value yielded by the async generator is treated as the model response. ui.chat does not yet support streaming responses to the frontend. Please file a GitHub issue if this is important to you: https://github.com/marimo-team/marimo/issues
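For instance, an async generator can yield a progressively growing response, and the final yield is what gets recorded (a minimal sketch; stream_tokens is a hypothetical async token source, not part of marimo):

async def my_streaming_model(messages):
    # Accumulate tokens from a hypothetical async token stream; the
    # last value yielded becomes the recorded model response.
    response = ""
    async for token in stream_tokens(messages[-1].content):
        response += token
        yield response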
Example: Using a built-in model.
Instead of defining a chatbot function, you can use a built-in model from
the mo.ai.llm module.

chat = mo.ui.chat(
    mo.ai.llm.openai(
        "gpt-4o",
        system_message="You are a helpful assistant.",
    ),
)
You can also allow the user to include attachments in their messages.
chat = mo.ui.chat(
    mo.ai.llm.openai(
        "gpt-4o",
    ),
    allow_attachments=["image/png", "image/jpeg"],
)
Attributes.

value: the current chat history, a list of ChatMessage objects.
Initialization Args.

model: (Callable[[List[ChatMessage], ChatModelConfig], object]) a
callable that takes in the chat history and returns a response
prompts: optional list of initial prompts to present to the user
on_message: optional callback function to handle new messages
show_configuration_controls: whether to show the configuration controls
config: optional ChatModelConfigDict to override the default
configuration; keys include max_tokens, temperature, top_p, top_k,
frequency_penalty, and presence_penalty (see the example after this list)
allow_attachments: (bool | List[str]) allow attachments; True for any
attachment type, or pass a list of MIME types
max_height: optional maximum height for the chat element
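Putting several of these arguments together (a hedged sketch; echo_model and log_history are illustrative stand-ins, not part of marimo):

import marimo as mo

def echo_model(messages, config):
    return f"Echo: {messages[-1].content}"

def log_history(messages):
    # Called with the full history whenever a new message arrives.
    print(f"{len(messages)} messages so far")

chat = mo.ui.chat(
    echo_model,
    prompts=["Hello", "Summarize this notebook"],
    on_message=log_history,
    show_configuration_controls=True,
    config={"temperature": 0.2, "max_tokens": 256},
    max_height=400,
)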
Public methods

Inherited from UIElement:

form([label, bordered, loading, ...]): Create a submittable form out of
this UIElement.
send_message(message, buffers): Send a message to the element rendered on
the frontend from the backend.

Inherited from Html:

batch(**elements): Convert an HTML object with templated text into a UI
element.
center(): Center an item.
right(): Right-justify.
left(): Left-justify.
callout([kind]): Create a callout containing this HTML element.
style([style]): Wrap an object in a styled container.
Public Data Attributes:

Inherited from UIElement:

value: The element’s current value.

Inherited from Html:

text: A string of HTML representing this element.
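Since these helpers are inherited, they can be chained onto a chat element, for example to constrain its width (a small sketch, assuming style accepts a dictionary of CSS properties):

import marimo as mo

def echo_model(messages, config):
    return f"Echo: {messages[-1].content}"

chat = mo.ui.chat(echo_model)

# Wrap the element in a styled, centered container; `chat` itself
# still tracks the conversation value.
chat.style({"max-width": "600px"}).center()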
Basic Usage¶
Here’s a simple example using a custom echo model:
import marimo as mo
def echo_model(messages, config):
return f"Echo: {messages[-1].content}"
chat = mo.ui.chat(echo_model, prompts=["Hello", "How are you?"])
chat
Here, messages is a list of ChatMessage objects, each of which has role
("user", "assistant", or "system") and content (the message string)
attributes; config is a ChatModelConfig object with various
configuration parameters, which you are free to ignore.
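As a small illustration, a model can branch on these fields (a sketch; the config fields default to None when the user hasn't set them):

def inspecting_model(messages, config):
    # Count the user's turns and report the requested temperature.
    user_turns = [m for m in messages if m.role == "user"]
    temperature = config.temperature if config.temperature is not None else "default"
    return f"Turn {len(user_turns)}: {messages[-1].content!r} (temperature: {temperature})"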
Using a Built-in AI Model¶
You can use marimo’s built-in AI models, such as OpenAI’s GPT:
import marimo as mo
chat = mo.ui.chat(
mo.ai.llm.openai(
"gpt-4",
system_message="You are a helpful assistant.",
),
show_configuration_controls=True
)
chat
Accessing Chat History¶
You can access the chat history using the value attribute:
chat.value
This returns a list of ChatMessage objects, each containing role,
content, and optional attachments attributes.
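For example, another cell can react to the conversation as it evolves:

# In a separate cell: extract just the user's side of the conversation.
user_messages = [m.content for m in chat.value if m.role == "user"]
user_messages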
Custom Model with Additional Context¶
Here’s an example of a custom model that uses additional context:
import marimo as mo
def rag_model(messages, config):
question = messages[-1].content
docs = find_relevant_docs(question)
context = "\n".join(docs)
prompt = f"Context: {context}\n\nQuestion: {question}\n\nAnswer:"
response = query_llm(prompt, config)
return response
mo.ui.chat(rag_model)
This example demonstrates how you can implement a Retrieval-Augmented Generation (RAG) model within the chat interface.
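Here is a self-contained toy version, with keyword matching standing in for real retrieval and a stubbed-out LLM call (all helpers are illustrative, not part of marimo):

import marimo as mo

DOCS = [
    "marimo notebooks are stored as pure Python files.",
    "mo.ui.chat renders an interactive chatbot interface.",
]

def find_relevant_docs(question):
    # Naive keyword retrieval: keep docs sharing a word with the question.
    words = set(question.lower().split())
    return [d for d in DOCS if words & set(d.lower().split())]

def query_llm(prompt, config):
    # Stub: a real implementation would call a language model here.
    return f"(model answer based on a prompt of {len(prompt)} characters)"

def rag_model(messages, config):
    question = messages[-1].content
    context = "\n".join(find_relevant_docs(question))
    prompt = f"Context: {context}\n\nQuestion: {question}\n\nAnswer:"
    return query_llm(prompt, config)

mo.ui.chat(rag_model)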
Templated Prompts¶
You can pass sample prompts to mo.ui.chat to allow users to select from a
list of predefined prompts. By including a {{var}} in a prompt, you can
dynamically insert values into it; a form will be generated to allow
users to fill in the variables.
mo.ui.chat(
mo.ai.llm.openai("gpt-4o"),
prompts=[
"What is the capital of France?",
"What is the capital of Germany?",
"What is the capital of {{country}}?",
],
)
Including Attachments¶
You can allow users to upload attachments to their messages by passing an
allow_attachments parameter to mo.ui.chat.
mo.ui.chat(
rag_model,
allow_attachments=["image/png", "image/jpeg"],
# or True for any attachment type
# allow_attachments=True,
)
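On the model side, attachments arrive on the ChatMessage itself (see ChatAttachment under Types below). A minimal sketch that simply reports what it received:

def attachment_aware_model(messages, config):
    last = messages[-1]
    if last.attachments:
        # Each attachment carries `url`, `name`, and `content_type`.
        names = ", ".join(a.name for a in last.attachments)
        return f"Received {len(last.attachments)} attachment(s): {names}"
    return f"No attachments. You said: {last.content}"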
Built-in Models¶
marimo provides several built-in AI models that you can use with the chat UI element.
OpenAI¶
import marimo as mo
mo.ui.chat(
mo.ai.llm.openai(
"gpt-4o",
system_message="You are a helpful assistant.",
api_key="sk-proj-...",
),
show_configuration_controls=True
)
- class marimo.ai.llm.openai(model: str, *, system_message: str = 'You are a helpful assistant specializing in data science.', api_key: str | None = None, base_url: str | None = None)¶
OpenAI ChatModel
Args:
model: The model to use. Can be found on the OpenAI models page
system_message: The system message to use
api_key: The API key to use. If not provided, the API key will be retrieved from the OPENAI_API_KEY environment variable or the user’s config.
base_url: The base URL to use
Anthropic¶
import marimo as mo
mo.ui.chat(
mo.ai.llm.anthropic(
"claude-3-5-sonnet-20240620",
system_message="You are a helpful assistant.",
api_key="sk-ant-...",
),
show_configuration_controls=True
)
- class marimo.ai.llm.anthropic(model: str, *, system_message: str = 'You are a helpful assistant specializing in data science.', api_key: str | None = None, base_url: str | None = None)¶
Anthropic ChatModel
Args:
model: The model to use. Can be found on the Anthropic models page
system_message: The system message to use
api_key: The API key to use. If not provided, the API key will be retrieved from the ANTHROPIC_API_KEY environment variable or the user’s config.
base_url: The base URL to use
Google AI¶
import marimo as mo
mo.ui.chat(
mo.ai.llm.google(
"gemini-1.5-pro-latest",
system_message="You are a helpful assistant.",
api_key="AI..",
),
show_configuration_controls=True
)
- class marimo.ai.llm.google(model: str, *, system_message: str = 'You are a helpful assistant specializing in data science.', api_key: str | None = None)¶
Google AI ChatModel
Args:
model: The model to use. Can be found on the Gemini models page
system_message: The system message to use
api_key: The API key to use. If not provided, the API key will be retrieved from the GOOGLE_AI_API_KEY environment variable or the user’s config.
Groq¶
import marimo as mo
mo.ui.chat(
mo.ai.llm.groq(
"llama-3.1-70b-versatile",
system_message="You are a helpful assistant.",
api_key="gsk-...",
),
show_configuration_controls=True
)
- class marimo.ai.llm.groq(model: str, *, system_message: str = 'You are a helpful assistant specializing in data science.', api_key: str | None = None, base_url: str | None = None)¶
Groq ChatModel
Args:
model: The model to use. Can be found on the Groq models page
system_message: The system message to use
api_key: The API key to use. If not provided, the API key will be retrieved from the GROQ_API_KEY environment variable or the user’s config.
base_url: The base URL to use
Types¶
Chatbots can be implemented with a function that receives a list of
ChatMessage objects and a ChatModelConfig.
- class marimo.ai.ChatMessage(role: Literal['user', 'assistant', 'system'], content: object, attachments: List[ChatAttachment] | None = None)¶
A message in a chat.
- class marimo.ai.ChatModelConfig(max_tokens: 'Optional[int]' = None, temperature: 'Optional[float]' = None, top_p: 'Optional[float]' = None, top_k: 'Optional[int]' = None, frequency_penalty: 'Optional[float]' = None, presence_penalty: 'Optional[float]' = None)¶
mo.ui.chat can be instantiated with an initial configuration by passing a
dictionary conforming to ChatModelConfig.
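For example (a brief sketch; echo_model stands in for any chat function as described above):

chat = mo.ui.chat(
    echo_model,  # any chat function or built-in model
    config={
        "temperature": 0.4,
        "max_tokens": 512,
        "top_p": 0.9,
    },
)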
ChatMessage objects can also include attachments.
- class marimo.ai.ChatAttachment(url: 'str', name: 'str' = 'attachment', content_type: 'Optional[str]' = None)¶
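Constructed directly, a message with an attachment might look like this (a sketch; the URL is an illustrative placeholder):

import marimo as mo

message = mo.ai.ChatMessage(
    role="user",
    content="What does this image show?",
    attachments=[
        mo.ai.ChatAttachment(
            url="https://example.com/plot.png",  # illustrative placeholder
            name="plot.png",
            content_type="image/png",
        )
    ],
)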
Supported Model Providers¶
We support any OpenAI-compatible endpoint. If you want a specific
provider added explicitly (one that doesn't abide by the standard OpenAI
API format), you can file a feature request.
Normally, overriding the base_url parameter should work. Here are some
examples:
Cerebras:

chatbot = mo.ui.chat(
mo.ai.llm.openai(
model="llama3.1-8b",
api_key="csk-...", # insert your key here
base_url="https://api.cerebras.ai/v1/",
),
)
chatbot
Groq:

chatbot = mo.ui.chat(
mo.ai.llm.openai(
model="llama-3.1-70b-versatile",
api_key="gsk_...", # insert your key here
base_url="https://api.groq.com/openai/v1/",
),
)
chatbot
xAI:

chatbot = mo.ui.chat(
mo.ai.llm.openai(
model="grok-beta",
api_key=key, # insert your key here
base_url="https://api.x.ai/v1",
),
)
chatbot
Note
We have added examples for Groq and Cerebras. These providers offer free API keys and are great for trying out Llama models (from Meta). You can sign up on their platforms and easily integrate them with marimo. For more information, refer to the AI completion documentation in marimo.