
Commit 30eeb4c

Merge pull request #1088 from guardrails-ai/docs/on-fail-docs-update
Docs: Enhance OnFail documentation
2 parents beaed76 + b4eb516 commit 30eeb4c

5 files changed: +100 -15 lines changed

docs/concepts/concurrency.md

Lines changed: 1 addition & 1 deletion
@@ -98,7 +98,7 @@ When asynchronous validation occurs, there are multiple levels of concurrency po
When validating unstructured data, i.e. text, the LLM output is treated the same as if it were a property on an object. This means that the validators applied to it have the ability to run concurrently, utilizing the event loop.

### Handling Failures During Async Concurrency
-The Guardrails validation loop is opinionated about how it handles failures when running validators concurrently so that it spends the least amount of time processing an output that would result in a failure. Its behavior comes down to when and what it returns based on the [corrective action](/how_to_guides/custom_validators#on-fail) specified on a validator. Corrective actions are processed concurrently since they are specific to a given validator on a given property. This means that interruptive corrective actions, namely `EXCEPTION`, will be the first corrective action enforced because the exception is raised as soon as the failure is evaluated. The remaining actions are handled in the following order after all futures are collected from the validation of a specific property:
+The Guardrails validation loop is opinionated about how it handles failures when running validators concurrently so that it spends the least amount of time processing an output that would result in a failure. Its behavior comes down to when and what it returns based on the [corrective action](/concepts/validator_on_fail_actions) specified on a validator. Corrective actions are processed concurrently since they are specific to a given validator on a given property. This means that interruptive corrective actions, namely `EXCEPTION`, will be the first corrective action enforced because the exception is raised as soon as the failure is evaluated. The remaining actions are handled in the following order after all futures are collected from the validation of a specific property:
1. `FILTER` and `REFRAIN`
2. `REASK`
3. `FIX`
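
To make this precedence concrete, here is a small, purely illustrative sketch of how the collected failures for one property could be reduced to a single corrective action. This is not Guardrails' internal implementation, just the ordering described above expressed as code:

```python
from typing import List, Optional

# Precedence of non-interruptive corrective actions, as described above.
# EXCEPTION never reaches this step because it is raised as soon as the
# failure is evaluated.
ACTION_PRECEDENCE = ["filter", "refrain", "reask", "fix"]


def resolve_action(failed_on_fail_actions: List[str]) -> Optional[str]:
    """Pick the corrective action to enforce for a property once all
    validator futures for that property have been collected."""
    for action in ACTION_PRECEDENCE:
        if action in failed_on_fail_actions:
            return action
    return None  # no failures, or only noop-style actions


# Three validators on the same property failed with different on_fail actions;
# "filter" wins over "reask" and "fix".
print(resolve_action(["fix", "reask", "filter"]))  # -> "filter"
```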

docs/concepts/validator_on_fail_actions.md

Lines changed: 94 additions & 0 deletions
@@ -0,0 +1,94 @@
# Validator OnFail Actions

## OnFail Actions

Validators ship with several out-of-the-box `on_fail` policies. The `OnFailAction` specifies the corrective action that should be taken if the quality criteria are not met (a short usage sketch follows the table). The corrective action can be one of the following:

| Action | Behavior |
|--------|----------|
| `OnFailAction.REASK` | Reask the LLM to generate an output that meets the correctness criteria specified in the validator. The prompt used for reasking contains information about which quality criteria failed, and is auto-generated by the validator. |
| `OnFailAction.FIX` | Programmatically fix the generated output to meet the correctness criteria when possible. E.g. the `provenance_llm` validator will remove any sentences that are estimated to be hallucinated. |
| `OnFailAction.FILTER` | (Only applicable for structured data validation) Filter the incorrect value. This only filters the field that fails, and will return the rest of the generated output. |
| `OnFailAction.REFRAIN` | Refrain from returning an output. This is useful when the generated output is not safe to return, in which case a `None` value is returned instead. |
| `OnFailAction.NOOP` | Do nothing. The failure will still be recorded in the logs, but no corrective action will be taken. |
| `OnFailAction.EXCEPTION` | Raise an exception when validation fails. |
| `OnFailAction.FIX_REASK` | First, fix the generated output deterministically, and then rerun validation with the deterministically fixed output. If validation still fails, perform reasking. |
| `OnFailAction.CUSTOM` | Set internally when the validator is passed a custom function to handle failures. The function is called with the value that failed validation and the `FailResult` returned from the validator, i.e. the custom on-fail handler must implement the signature `def on_fail(value: Any, fail_result: FailResult) -> Any`. |

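As a quick illustration of how an `on_fail` policy is attached to a validator, here is a minimal sketch. It assumes the `ToxicLanguage` validator has been installed from the Guardrails Hub; any other validator works the same way:

```python
# Assumes: guardrails hub install hub://guardrails/toxic_language
from guardrails import Guard
from guardrails.hub import ToxicLanguage

# Raise an exception whenever the validator reports a failure.
guard = Guard().use(ToxicLanguage, on_fail="exception")

guard.validate("Have a wonderful day!")  # passes

try:
    guard.validate("You are a damn fool!")  # fails validation
except Exception as e:
    print(e)  # the validator's error message is surfaced here
```

With any of the other policies, the failure is handled instead of raised, and it is still recorded in the guard's call history.
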
## Example

Let's assess the output of a trivial validator under different `OnFailAction` settings.
Take the following validator, a basic implementation of a toxic language validator:

```python
from typing import Any, Dict

from guardrails.validators import (
    FailResult,
    PassResult,
    ValidationResult,
    Validator,
    register_validator,
)

TOXIC_WORDS = ["asshole", "damn"]


@register_validator(name="basic-toxic-language", data_type="string")
class BasicToxicLanguage(Validator):
    def validate(self, value: Any, metadata: Dict) -> ValidationResult:
        is_toxic_language = any(toxic_word in value for toxic_word in TOXIC_WORDS)

        # If the value contains toxic words we return FailResult, otherwise PassResult.
        if is_toxic_language:
            # Strip the toxic words to build a fix_value, keeping the original
            # value for the error message.
            fix_value = value
            for toxic_word in TOXIC_WORDS:
                fix_value = fix_value.replace(toxic_word, "")
            return FailResult(
                error_message=f"Value '{value}' contains toxic language including words: {TOXIC_WORDS} which is not allowed.",
                fix_value=fix_value,
            )

        return PassResult()
```

> For more information on how to write custom validators, refer to our guide [here](/how_to_guides/custom_validators)

Now suppose some unhinged LLM returns `damn you!`. In this scenario:

- `OnFailAction.REASK`: the LLM is reasked to correct its output based on the `error_message` provided to `FailResult`. In this example it will be reasked with a prompt that includes the error message: `Value 'damn you!' contains toxic language including words: ['asshole', 'damn'] which is not allowed.` You can set `num_reasks` on the `guard()` call to determine how many times we retry.
- `OnFailAction.FIX`: the value is replaced with the `fix_value` provided to `FailResult`. In this example the LLM output `damn you!` is returned as `you!` (see the sketch after this list).
- `OnFailAction.FILTER`: when used in structured data generation, the field that fails validation is not returned (more on this below). In this example the LLM output `damn you!` results in an empty response.
- `OnFailAction.REFRAIN`: nothing is returned, as the output is deemed unsafe for end users. In this example the LLM output `damn you!` results in an empty response.
- `OnFailAction.NOOP`: the value is returned as-is and the failure is logged in the history. In this example we return the value as-is: `damn you!`.
- `OnFailAction.EXCEPTION`: during a guard execution or a direct validation call, an error is raised indicating that validation failed.
- `OnFailAction.FIX_REASK`: we first perform the same action as `OnFailAction.FIX` and re-validate the fixed output. If validation still fails, we run an `OnFailAction.REASK` action; otherwise we return the passing output.

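Here is a minimal sketch of how these policies can be exercised directly with a `Guard`, assuming the `BasicToxicLanguage` validator defined above; exact output formatting may vary between Guardrails versions:

```python
from guardrails import Guard

# FIX: the validated output is replaced with the validator's fix_value.
fix_guard = Guard().use(BasicToxicLanguage(on_fail="fix"))
print(fix_guard.validate("damn you!").validated_output)  # " you!" (toxic word removed)

# EXCEPTION: the failure is raised as an error instead.
exception_guard = Guard().use(BasicToxicLanguage(on_fail="exception"))
try:
    exception_guard.validate("damn you!")
except Exception as e:
    print(e)  # surfaces the validator's error_message
```
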
## Structured Data

Using `OnFail` actions is especially powerful when working with structured data, since we can decide how to treat each field's validation failure.

```python
from typing import List

from pydantic import BaseModel, Field

from guardrails import Guard
from guardrails.hub import LowerCase  # assumes: guardrails hub install hub://guardrails/lowercase

prompt = """
Given the following fast food order, please provide a summary of the orders.
${order}
${gr.complete_xml_suffix_v2}
"""

order = """I want a burger with two large fries and a coke zero."""

# MinimumOneRange is a hypothetical custom validator that checks an integer
# within the given range is supplied (a sketch is included after this example).
class Lineitem(BaseModel):
    item: str = Field(description="The name of the item being ordered", validators=[LowerCase()])
    quantity: int = Field(description="The quantity of the item being ordered", validators=[MinimumOneRange(min=1, max=10, on_fail="fix")])

guard = Guard.from_pydantic(output_class=List[Lineitem])

response = guard(
    model="gpt-4o",
    messages=[{
        "role": "system",
        "content": "You are a helpful assistant."
    },{
        "role": "user",
        "content": prompt
    }],
    prompt_params={"order": order},
)

print(response.validated_output)

# [{'item': 'burger', 'quantity': 1},
#  {'item': 'fries', 'quantity': 2},
#  {'item': 'coke zero', 'quantity': 1}]
```
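
`MinimumOneRange` above is hypothetical. For completeness, a minimal sketch of what it might look like, following the same custom validator pattern as `BasicToxicLanguage`, is shown below:

```python
from typing import Any, Dict, Optional

from guardrails.validators import (
    FailResult,
    PassResult,
    ValidationResult,
    Validator,
    register_validator,
)


@register_validator(name="minimum-one-range", data_type="integer")  # name is arbitrary
class MinimumOneRange(Validator):
    """Hypothetical validator: checks that an integer falls within [min, max]."""

    def __init__(self, min: int = 1, max: int = 10, on_fail: Optional[str] = None):
        super().__init__(on_fail=on_fail, min=min, max=max)
        self._min = min
        self._max = max

    def validate(self, value: Any, metadata: Dict) -> ValidationResult:
        if self._min <= int(value) <= self._max:
            return PassResult()
        # Clamp out-of-range values so on_fail="fix" has a value to substitute.
        clamped = max(self._min, min(int(value), self._max))
        return FailResult(
            error_message=f"Value {value} must be between {self._min} and {self._max}.",
            fix_value=clamped,
        )
```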

docs/concepts/validators.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ Validators are how we apply quality controls to the outputs of LLMs. They speci
Each validator is a method that encodes some criteria, and checks if a given value meets that criteria.

- If the value passes the criteria defined, the validator returns `PassResult`. In most cases this means returning that value unchanged. In very few advanced cases, there may be a value override (the specific validator will document this).
-- If the value does not pass the criteria, a `FailResult` is returned. In this case, the validator applies the user-configured `on_fail` policies (see [On-Fail Policies](/docs/how_to_guides/custom_validators#on-fail)).
+- If the value does not pass the criteria, a `FailResult` is returned. In this case, the validator applies the user-configured `on_fail` policies (see [On-Fail Policies](/concepts/validator_on_fail_actions)).

## Runtime Metadata

docs/how_to_guides/custom_validators.md

Lines changed: 3 additions & 13 deletions
@@ -64,20 +64,10 @@ class ToxicWords(Validator):

## On Fail

-Validators ship with several out of the box `on_fail` policies. The `OnFailAction` specifies the corrective action that should be taken if the quality criteria is not met. The corrective action can be one of the following:
-
-| Action | Behavior |
-|--------|----------|
-| `OnFailAction.REASK` | Reask the LLM to generate an output that meets the correctness criteria specified in the validator. The prompt used for reasking contains information about which quality criteria failed, which is auto-generated by the validator. |
-| `OnFailAction.FIX` | Programmatically fix the generated output to meet the correctness criteria when possible. E.g. the formatter `provenance_llm` validator will remove any sentences that are estimated to be hallucinated. |
-| `OnFailAction.FILTER` | (Only applicable for structured data validation) Filter the incorrect value. This only filters the field that fails, and will return the rest of the generated output. |
-| `OnFailAction.REFRAIN` | Refrain from returning an output. This is useful when the generated output is not safe to return, in which case a `None` value is returned instead. |
-| `OnFailAction.NOOP` | Do nothing. The failure will still be recorded in the logs, but no corrective action will be taken. |
-| `OnFailAction.EXCEPTION` | Raise an exception when validation fails. |
-| `OnFailAction.FIX_REASK` | First, fix the generated output deterministically, and then rerun validation with the deterministically fixed output. If validation fails, then perform reasking. |
-| `OnFailAction.CUSTOM` | This action is set internally when the validator is passed a custom function to handle failures. The function is called with the value that failed validation and the FailResult returned from the Validator. i.e. the custom on fail handler must implement the method signature `def on_fail(value: Any, fail_result: FailResult) -> Any` |
-
In the code below, a `fix_value` will be supplied in the `FailResult`. This value will represent a programmatic fix that can be applied to the output if `on_fail='fix'` is passed during validator initialization.
+
+> For more details about on fail actions refer to: [On Fail Actions](/concepts/validator_on_fail_actions)
+
```py
from typing import Callable, Dict, Optional
from guardrails.validators import (

docusaurus/sidebars.js

Lines changed: 1 addition & 0 deletions
@@ -51,6 +51,7 @@ const sidebars = {
  concepts: [
    "concepts/guard",
    "concepts/validators",
+   "concepts/validator_on_fail_actions",
    // "concepts/guardrails",
    "concepts/hub",
    "concepts/deploying",
