Automation Rule Evaluators Client¶
The Automation Rule Evaluators client provides methods for managing automated evaluation rules in the Opik platform.
- class opik.rest_api.automation_rule_evaluators.client.AutomationRuleEvaluatorsClient(*, client_wrapper: SyncClientWrapper)¶
Bases:
object
- find_evaluators(*, project_id: str | None = None, name: str | None = None, page: int | None = None, size: int | None = None, request_options: RequestOptions | None = None) AutomationRuleEvaluatorPagePublic ¶
Find project Evaluators
- Parameters:
project_id (Optional[str])
name (Optional[str])
page (Optional[int])
size (Optional[int])
request_options (Optional[RequestOptions]) – Request-specific configuration.
- Returns:
Evaluators resource
- Return type:
AutomationRuleEvaluatorPagePublic
Examples
from Opik import OpikApi

client = OpikApi(
    api_key="YOUR_API_KEY",
    workspace_name="YOUR_WORKSPACE_NAME",
)
client.automation_rule_evaluators.find_evaluators()
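The page and size parameters suggest offset-based pagination. A minimal sketch of collecting results across pages follows; fetch_page below is a stub standing in for find_evaluators, and the "content" field on the returned page is an assumption about AutomationRuleEvaluatorPagePublic, not something this reference confirms:

```python
def fetch_all(fetch_page, size=10):
    """Collect items from an offset-paginated endpoint, page by page."""
    items = []
    page = 0
    while True:
        result = fetch_page(page=page, size=size)
        items.extend(result["content"])
        # A short page means we have reached the end.
        if len(result["content"]) < size:
            break
        page += 1
    return items


# Stub standing in for the real client call, for illustration only.
def fake_fetch(page, size):
    data = [f"evaluator-{i}" for i in range(23)]
    start = page * size
    return {"content": data[start:start + size]}


all_items = fetch_all(fake_fetch, size=10)
```

With the real client, fetch_page would be a small wrapper around client.automation_rule_evaluators.find_evaluators.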
- create_automation_rule_evaluator(*, request: AutomationRuleEvaluatorWrite_LlmAsJudge | AutomationRuleEvaluatorWrite_UserDefinedMetricPython, request_options: RequestOptions | None = None) None ¶
Create automation rule evaluator
- Parameters:
request (AutomationRuleEvaluatorWrite)
request_options (Optional[RequestOptions]) – Request-specific configuration.
- Return type:
None
Examples
from Opik import OpikApi
from Opik import AutomationRuleEvaluatorWrite_LlmAsJudge

client = OpikApi(
    api_key="YOUR_API_KEY",
    workspace_name="YOUR_WORKSPACE_NAME",
)
client.automation_rule_evaluators.create_automation_rule_evaluator(
    request=AutomationRuleEvaluatorWrite_LlmAsJudge(),
)
- delete_automation_rule_evaluator_batch(*, ids: Sequence[str], project_id: str | None = None, request_options: RequestOptions | None = None) None ¶
Delete automation rule evaluators batch
- Parameters:
ids (Sequence[str])
project_id (Optional[str])
request_options (Optional[RequestOptions]) – Request-specific configuration.
- Return type:
None
Examples
from Opik import OpikApi

client = OpikApi(
    api_key="YOUR_API_KEY",
    workspace_name="YOUR_WORKSPACE_NAME",
)
client.automation_rule_evaluators.delete_automation_rule_evaluator_batch(
    ids=["ids"],
)
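Batch endpoints commonly cap how many ids a single request may carry. A sketch of splitting a large id list into capped batches before calling delete_automation_rule_evaluator_batch; the batch-size limit of 100 is an assumption (not documented here), and delete_batch is a stub for the real client call:

```python
def chunked(seq, n):
    """Yield successive n-sized chunks of seq."""
    for i in range(0, len(seq), n):
        yield seq[i:i + n]


def delete_in_batches(ids, delete_batch, batch_size=100):
    """Delete evaluators in capped-size batches; batch_size=100 is assumed."""
    for batch in chunked(ids, batch_size):
        delete_batch(ids=batch)


# Record calls instead of hitting the API, for illustration only.
calls = []
delete_in_batches(
    [f"id-{i}" for i in range(250)],
    delete_batch=lambda ids: calls.append(list(ids)),
)
```

In real use, delete_batch would be client.automation_rule_evaluators.delete_automation_rule_evaluator_batch.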
- get_evaluator_by_id(id: str, *, project_id: str | None = None, request_options: RequestOptions | None = None) AutomationRuleEvaluatorPublic_LlmAsJudge | AutomationRuleEvaluatorPublic_UserDefinedMetricPython ¶
Get automation rule by id
- Parameters:
id (str)
project_id (Optional[str])
request_options (Optional[RequestOptions]) – Request-specific configuration.
- Returns:
Automation Rule resource
- Return type:
AutomationRuleEvaluatorPublic
Examples
from Opik import OpikApi

client = OpikApi(
    api_key="YOUR_API_KEY",
    workspace_name="YOUR_WORKSPACE_NAME",
)
client.automation_rule_evaluators.get_evaluator_by_id(
    id="id",
)
- update_automation_rule_evaluator(id: str, *, request: AutomationRuleEvaluatorUpdate_LlmAsJudge | AutomationRuleEvaluatorUpdate_UserDefinedMetricPython, request_options: RequestOptions | None = None) None ¶
Update Automation Rule Evaluator by id
- Parameters:
id (str)
request (AutomationRuleEvaluatorUpdate)
request_options (Optional[RequestOptions]) – Request-specific configuration.
- Return type:
None
Examples
from Opik import OpikApi
from Opik import AutomationRuleEvaluatorUpdate_LlmAsJudge

client = OpikApi(
    api_key="YOUR_API_KEY",
    workspace_name="YOUR_WORKSPACE_NAME",
)
client.automation_rule_evaluators.update_automation_rule_evaluator(
    id="id",
    request=AutomationRuleEvaluatorUpdate_LlmAsJudge(),
)
- get_evaluator_logs_by_id(id: str, *, size: int | None = None, request_options: RequestOptions | None = None) LogPage ¶
Get automation rule evaluator logs by id
- Parameters:
id (str)
size (Optional[int])
request_options (Optional[RequestOptions]) – Request-specific configuration.
- Returns:
Automation rule evaluator logs resource
- Return type:
LogPage
Examples
from Opik import OpikApi

client = OpikApi(
    api_key="YOUR_API_KEY",
    workspace_name="YOUR_WORKSPACE_NAME",
)
client.automation_rule_evaluators.get_evaluator_logs_by_id(
    id="id",
)
Usage Example¶
import opik
client = opik.Opik()
# List automation rule evaluators
evaluators = client.rest_client.automation_rule_evaluators.find_evaluators(
    page=0,
    size=10,
)

# Get an evaluator by ID
evaluator = client.rest_client.automation_rule_evaluators.get_evaluator_by_id(
    "evaluator-id"
)

# Create a new evaluator; request takes an AutomationRuleEvaluatorWrite
# payload such as AutomationRuleEvaluatorWrite_LlmAsJudge (imported as in
# the examples above)
client.rest_client.automation_rule_evaluators.create_automation_rule_evaluator(
    request=AutomationRuleEvaluatorWrite_LlmAsJudge(),
)