ragrank.metric.base

Base module for metrics.

class ragrank.metric.base.BaseMetric(*, metric_type: MetricType, llm: BaseLLM, prompt: Prompt)

Base class for defining metrics.

metric_type

The type of the metric.

Type:

MetricType

llm

The language model associated with the metric.

Type:

BaseLLM

prompt

The prompt associated with the metric.

Type:

Prompt

load() → None

Method to load the metric. Not implemented in the base class.

model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ConfigDict = {'arbitrary_types_allowed': True}

Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict.

model_fields: ClassVar[dict[str, FieldInfo]] = {'llm': FieldInfo(annotation=BaseLLM, required=True, description='The language model associated with the metric.'), 'metric_type': FieldInfo(annotation=MetricType, required=True, description='The type of the metric.'), 'prompt': FieldInfo(annotation=Prompt, required=True, description='The prompt associated with the metric.')}

Metadata about the fields defined on the model: a mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.

abstract property name: str

Get the name of the metric.

Returns:

The name of the metric.

Return type:

str

save() → None

Method to save the metric. Not implemented in the base class.

abstract score(data: DataNode) → MetricResult

Method to compute the metric score.

Parameters:

data (DataNode) – The data node for which the score is computed.

Returns:

The computed metric result.

Return type:

MetricResult
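
Example (illustrative): a minimal sketch of a custom metric built on BaseMetric. The response attribute read from the data node and the expected-answer comparison are assumptions made for illustration; instantiating the subclass still requires the metric_type, llm, and prompt fields from the signature above.

    from ragrank.metric.base import BaseMetric, MetricResult

    class ExactMatchMetric(BaseMetric):
        """Toy metric: scores 1.0 when the response equals an expected string."""

        @property
        def name(self) -> str:
            return "exact_match"

        def score(self, data) -> MetricResult:
            # `data.response` is an assumed DataNode attribute, used only
            # for illustration.
            value = 1.0 if getattr(data, "response", "") == "expected answer" else 0.0
            return MetricResult(
                datanode=data,
                metric=self,
                score=value,
                reason="Exact string comparison (illustrative).",
            )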

class ragrank.metric.base.MetricResult(*, datanode: DataNode, metric: BaseMetric, score: float | int, reason: str | None = None, process_time: float | None = None)

Class to hold the result of a metric computation.

datanode

The data node associated with the metric result.

Type:

DataNode

metric

The metric used in the computation.

Type:

BaseMetric

score

The score computed by the metric.

Type:

Union[int, float]

reason

The reason for the computed score. Defaults to None.

Type:

Optional[str]

process_time

Processing time for the computation. Defaults to None.

Type:

Optional[float]

model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ConfigDict = {'frozen': True}

Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict.

model_fields: ClassVar[dict[str, FieldInfo]] = {'datanode': FieldInfo(annotation=DataNode, required=True, description='The data node associated with the metric result.'), 'metric': FieldInfo(annotation=BaseMetric, required=True, description='The metric used in the computation.'), 'process_time': FieldInfo(annotation=Union[float, NoneType], required=False, description='Processing time for the computation.', repr=False), 'reason': FieldInfo(annotation=Union[str, NoneType], required=False, description='The reason for the computed score.'), 'score': FieldInfo(annotation=Union[float, int], required=True, description='The score computed by the metric.')}

Metadata about the fields defined on the model: a mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.
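
Example (illustrative): because model_config sets frozen=True, a MetricResult is immutable once constructed. The node and metric variables below stand in for real DataNode and BaseMetric instances created elsewhere.

    result = MetricResult(
        datanode=node,    # a DataNode instance (assumed to exist)
        metric=metric,    # a BaseMetric instance (assumed to exist)
        score=0.87,
        reason="Response is mostly grounded in the retrieved context.",
        process_time=0.42,
    )

    print(result.score)    # 0.87
    # result.score = 1.0   # raises a ValidationError: the model is frozen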

class ragrank.metric.base.MetricType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)

Enumeration of metric types.
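
Example (illustrative): this page does not list the enum members, so the sketch below only iterates over whatever members MetricType defines, without assuming their names.

    from ragrank.metric.base import MetricType

    for metric_type in MetricType:
        # Member names and values come from the library; none are assumed here.
        print(metric_type.name, "->", metric_type.value)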