Optional
`criteria?: Criteria | Record<string, string>`

The criteria to insert into the prompt template used for evaluation. See the prompt at https://smith.langchain.com/hub/langchain-ai/criteria-evaluator for more information.
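Custom criteria are passed as a plain `Record<string, string>`, where each key names a criterion and each value is the question posed to the evaluator LLM about the submission. A minimal sketch of that shape in plain TypeScript (no langsmith imports; the criterion names here are illustrative, not part of the library):

```typescript
// Each key is a criterion name; each value is the question the
// evaluator LLM is asked about the model's prediction.
const customCriteria: Record<string, string> = {
  isCompliant: "Does the submission comply with the requirements of XYZ",
  isConcise: "Is the submission concise and to the point",
};

// The criterion names become the keys under which evaluation
// feedback is reported.
const criterionNames = Object.keys(customCriteria);
```

The same object can be handed either to the `Criteria(...)` helper or to the `criteria` field of an inline evaluator config, as shown in the examples below.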
Optional
`llm?: Toolkit`

The language model to use as the evaluator; defaults to GPT-4.
@example
```ts
const evalConfig = {
  evaluators: [Criteria("helpfulness")],
};
```
@example
```ts
const evalConfig = {
  evaluators: [
    Criteria({
      "isCompliant": "Does the submission comply with the requirements of XYZ"
    })
  ],
};
```
@example
```ts
const evalConfig = {
  evaluators: [{
    evaluatorType: "criteria",
    criteria: "helpfulness",
    formatEvaluatorInputs: ...
  }]
};
```
@example
```ts
const evalConfig = {
  evaluators: [{
    evaluatorType: "criteria",
    criteria: { "isCompliant": "Does the submission comply with the requirements of XYZ" },
    formatEvaluatorInputs: ...
  }]
};
```
Configuration to load a `CriteriaEvalChain` evaluator, which prompts an LLM to determine whether the model's prediction complies with the provided criteria.