
qa

This module defines a protocol and an implementation for a Question-Answering (QA) model that answers questions from a given context, with or without streaming.

QAModelLike

Bases: Protocol

A protocol that defines the methods that a QA model should implement.

answer

answer(
    question: str,
    context: str,
    history: list[Message] | None = None,
    *args,
    stream: bool = False,
    **kwargs
) -> Message | Iterator[Message]

Answer a question given a context and chat history with or without streaming.

Parameters:

  • question (str) –

    The question to answer.

  • context (str) –

    The context for the question.

  • history (list[Message] | None, default: None) –

    The chat history.

  • stream (bool, default: False) –

    Whether to stream the AI response.

Returns:

  • Message | Iterator[Message] –

    The answer as a single Message, or an iterator of Message chunks when stream is True.

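For orientation, the sketch below shows how a protocol with this shape could be declared; the Message dataclass is an assumption standing in for the module's own message type, and only the answer signature above is taken from the source.

from collections.abc import Iterator
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Message:
    # Assumed stand-in for the module's Message type: a chat role plus text content.
    role: str
    content: str


class QAModelLike(Protocol):
    # Structural type: any object with a matching answer method satisfies it.
    def answer(
        self,
        question: str,
        context: str,
        history: list[Message] | None = None,
        *args,
        stream: bool = False,
        **kwargs,
    ) -> Message | Iterator[Message]: ...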
OllamaQAModel

OllamaQAModel(model_name: str)

A QA model that uses the Ollama library.

It implements the QAModelLike protocol.

Attributes:

  • model_name (str) –

    The name of the model to use.

Parameters:

  • model_name (str) –

    The name of the model to use.

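For illustration, constructing the model might look like the line below; the model name is only an example and must correspond to a model already available to the local Ollama installation.

# Example model name; any model pulled into the local Ollama server works.
qa_model = OllamaQAModel(model_name="llama3")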
answer

answer(
    question: str,
    context: str,
    history: list[Message] | None = None,
    *args,
    stream: bool = True,
    **kwargs
) -> Message | Iterable[Message]

Answer a question given a context and chat history with or without streaming.

The question and the context are combined into a single message for the QA model, using the following template:

CONTEXT:
<context>

QUERY: <question>

After combining the question and context, the message is appended to the history (if any) and sent to the QA model.

A system prompt can be provided by adding it as the first message in the history.
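As a hedged, caller-side sketch of the flow described above (reusing the Message shape and qa_model instance assumed in the earlier snippets), a streaming call with a system prompt placed first in the history might look like this; the concrete question, context, and prompt text are illustrative.

# The system prompt, if any, goes first in the history (see note above).
history = [Message(role="system", content="Answer only from the given context.")]

response = qa_model.answer(
    question="How long is the warranty?",
    context="The warranty covers manufacturing defects for 24 months.",
    history=history,
    stream=True,
)

if isinstance(response, Message):
    # Non-streaming call: a single complete answer.
    print(response.content)
else:
    # Streaming call: iterate over partial answer chunks as they arrive.
    for chunk in response:
        print(chunk.content, end="", flush=True)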