ragrank.llm

ragrank.llm.base

Base for the llm module

Handles everything related to LLMs in ragrank.

class ragrank.llm.BaseLLM(*, llm_config: LLMConfig = None)

Abstract base class for Language Model (LLM).

This class provides an interface for interacting with language models.

llm_config

Configuration settings for the LLM.

Type:

LLMConfig

set_config()

Set configuration settings for the LLM.

generate_text()

Generate the result for a single input text.

generate()

Generate responses for a sequence of input texts.

generate(texts: Sequence[str]) → List[LLMResult]

Generate responses for a sequence of input texts.

Parameters:

texts (Sequence[str]) – A sequence of input texts.

Returns:

A list of LLM results.

Return type:

List[LLMResult]

abstract generate_text(text: str) → LLMResult

Generate the result for a single text input.

Parameters:

text (str) – The input text.

Returns:

The result of the LLM generation.

Return type:

LLMResult

model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ConfigDict = {'arbitrary_types_allowed': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'llm_config': FieldInfo(annotation=LLMConfig, required=False, default_factory=LLMConfig, repr=False)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

abstract property name: str

Get the name of the Language Model.

Returns:

Name of the Language Model.

Return type:

str

set_config(config: LLMConfig) → None

Set the configuration for the base LLM.

Parameters:

config (LLMConfig) – The configuration for the LLM.
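The interface above can be sketched with stdlib stand-ins. The real `BaseLLM`, `LLMConfig`, and `LLMResult` are Pydantic models; the dataclass stand-ins and the `EchoLLM` subclass below are hypothetical illustrations of the documented contract, not part of ragrank:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List, Optional, Sequence


@dataclass
class LLMConfig:
    """Stand-in for ragrank.llm.LLMConfig."""
    temperature: float = 1.0
    max_tokens: int = 300


@dataclass(frozen=True)
class LLMResult:
    """Stand-in for ragrank.llm.LLMResult (frozen, like the real model)."""
    response: str
    finish_reason: Optional[str] = None


class BaseLLM(ABC):
    """Stand-in mirroring the documented BaseLLM interface."""

    def __init__(self, llm_config: Optional[LLMConfig] = None) -> None:
        self.llm_config = llm_config or LLMConfig()

    @property
    @abstractmethod
    def name(self) -> str:
        """Name of the Language Model."""

    @abstractmethod
    def generate_text(self, text: str) -> LLMResult:
        """Generate the result for a single text input."""

    def generate(self, texts: Sequence[str]) -> List[LLMResult]:
        # Batch generation applies generate_text to each input in order.
        return [self.generate_text(t) for t in texts]

    def set_config(self, config: LLMConfig) -> None:
        self.llm_config = config


class EchoLLM(BaseLLM):
    """Hypothetical concrete subclass that upper-cases its input."""

    @property
    def name(self) -> str:
        return "echo-llm"

    def generate_text(self, text: str) -> LLMResult:
        return LLMResult(response=text.upper(), finish_reason="stop")


llm = EchoLLM()
results = llm.generate(["hello", "world"])
print([r.response for r in results])  # ['HELLO', 'WORLD']
```

A concrete subclass only has to supply `name` and `generate_text`; `generate` and `set_config` come from the base class.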

class ragrank.llm.LLMConfig(*, temperature: float = 1.0, max_tokens: int = 300, seed: int = 44, top_p: float = 1.0, stop: List[str] | None = None)

Configuration settings for Language Model (LLM).

temperature

Sampling temperature for text generation. Default is 1.0.

Type:

float

max_tokens

Maximum number of tokens to generate. Default is 300.

Type:

int

seed

Random seed for text generation. Default is 44.

Type:

int

top_p

Top-p (nucleus) sampling probability for text generation. Default is 1.0.

Type:

float

stop

List of tokens at which text generation should stop.

Type:

Optional[List[str]]

model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'max_tokens': FieldInfo(annotation=int, required=False, default=300, description='Maximum number of tokens to generate.'), 'seed': FieldInfo(annotation=int, required=False, default=44, description='Random seed for text generation.'), 'stop': FieldInfo(annotation=Union[List[str], NoneType], required=False, description='List of tokens at which text generation should stop'), 'temperature': FieldInfo(annotation=float, required=False, default=1.0, description='Sampling temperature for text generation.', metadata=[Ge(ge=0.0), Le(le=1.0)]), 'top_p': FieldInfo(annotation=float, required=False, default=1.0, description='Sampling top probability for text generation.', metadata=[Ge(ge=0.0), Le(le=1.0)])}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.
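The defaults and field constraints documented above (including the `Ge(ge=0.0)`/`Le(le=1.0)` bounds on `temperature` and `top_p`) can be sketched with a stdlib dataclass. The real class is a Pydantic model; this stand-in is a hypothetical mirror of the documented fields:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class LLMConfig:
    """Stand-in mirroring the documented ragrank.llm.LLMConfig fields."""
    temperature: float = 1.0          # sampling temperature, bounded to [0.0, 1.0]
    max_tokens: int = 300             # maximum number of tokens to generate
    seed: int = 44                    # random seed for text generation
    top_p: float = 1.0                # nucleus-sampling probability, bounded to [0.0, 1.0]
    stop: Optional[List[str]] = None  # tokens at which generation should stop

    def __post_init__(self) -> None:
        # The real Pydantic model enforces these bounds via Ge/Le metadata.
        for name in ("temperature", "top_p"):
            value = getattr(self, name)
            if not 0.0 <= value <= 1.0:
                raise ValueError(f"{name} must be in [0.0, 1.0], got {value}")


config = LLMConfig(temperature=0.2, stop=["\n\n"])
print(config.max_tokens)  # 300 (the documented default)
```

Passing an out-of-range value, e.g. `LLMConfig(temperature=1.5)`, raises a validation error, just as Pydantic would reject it in the real model.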

class ragrank.llm.LLMResult(*, response: str, response_time: float | None = None, finish_reason: str | None = None, response_tokens: int | None = None, llm: BaseLLM | None = None, llm_config: LLMConfig | None = None)

Result of Language Model (LLM) generation.

response

Generated text response.

Type:

str

response_time

Time taken for text generation.

Type:

Optional[float]

finish_reason

Reason for completion of text generation.

Type:

Optional[str]

response_tokens

Number of tokens in the generated response.

Type:

Optional[int]

llm

Instance of the LLM used for generation.

Type:

Optional[BaseLLM]

llm_config

Configuration settings used for generation.

Type:

Optional[LLMConfig]

model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ConfigDict = {'arbitrary_types_allowed': True, 'frozen': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'finish_reason': FieldInfo(annotation=Union[str, NoneType], required=False, description='Reason for completion of text generation'), 'llm': FieldInfo(annotation=Union[BaseLLM, NoneType], required=False, description='Instance of the LLM used for generation.'), 'llm_config': FieldInfo(annotation=Union[LLMConfig, NoneType], required=False, description='Configuration settings used for generation.'), 'response': FieldInfo(annotation=str, required=True, description='Generated text response.'), 'response_time': FieldInfo(annotation=Union[float, NoneType], required=False, description='Time taken for text generation.'), 'response_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, description='Number of tokens in the generated response.')}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.
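`LLMResult` requires only `response` and is configured with `frozen=True`, so instances are immutable once created. A stdlib sketch of that behavior (the real class is a Pydantic model; this frozen dataclass is a hypothetical stand-in):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class LLMResult:
    """Stand-in mirroring the documented ragrank.llm.LLMResult fields."""
    response: str                          # required: generated text response
    response_time: Optional[float] = None  # time taken for text generation
    finish_reason: Optional[str] = None    # reason generation finished
    response_tokens: Optional[int] = None  # token count of the response


result = LLMResult(response="Paris", response_time=0.41, finish_reason="stop")

# frozen=True means mutation after construction raises an error:
try:
    result.response = "London"
except Exception as exc:
    print(type(exc).__name__)  # FrozenInstanceError
```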

ragrank.llm.default_llm() → BaseLLM

Get the default Language Model (LLM) instance.

Returns:

Default LLM instance.

Return type:

BaseLLM
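The factory shape can be sketched as follows. Which model ragrank actually returns is not stated here, so the `BaseLLM` stand-in and the placeholder name below are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class BaseLLM:
    """Minimal stand-in for ragrank.llm.BaseLLM."""
    name: str


def default_llm() -> BaseLLM:
    """Stand-in factory mirroring the ragrank.llm.default_llm() signature."""
    # In ragrank this returns the library's default LLM instance;
    # here we return a hypothetical placeholder.
    return BaseLLM(name="default-llm")


llm = default_llm()
print(llm.name)  # default-llm
```

In typical use, the returned instance is passed wherever a `BaseLLM` is expected, e.g. `default_llm().generate(texts)`.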