Draft Documentation
This guide is currently in development. Content may be incomplete or subject to change.
Editing Evaluation Scripts
Learn how to view, edit, and manage evaluation scripts (pautas) that define how AI evaluates conversations. Understand the draft workflow, version history, and testing capabilities.
Overview
Evaluation scripts (called "pautas" in Spanish) are the templates that define how the AI evaluates conversations. Each script contains evaluation items with specific criteria, expected behaviors, and scoring weights.
Required Permission: You need the scriptEditor edit permission to create and modify evaluation scripts. Contact your administrator if you don't have access.
Key Concepts:
- Evaluation Items: Individual criteria being evaluated (e.g., "Greeting", "Problem Resolution")
- Sub-Items: More specific aspects within an item
- Scripts Expected: The exact phrases or behaviors the agent should follow
- Critical Errors: Serious issues that significantly impact the score
- Draft/Published: Scripts have a versioning workflow with drafts that must be published
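To make these concepts concrete, here is a minimal sketch of how an evaluation script could be modeled as data. The field and class names are illustrative assumptions, not the product's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EvaluationItem:
    # Hypothetical structure; mirrors the Key Concepts above, not a real API.
    name: str                    # e.g. "Greeting"
    sub_item: Optional[str]      # more specific aspect within the item
    percentage: float            # weight of this item in the total score (0-100)
    definition: str              # what this item measures
    scripts_expected: List[str]  # exact phrases/behaviors to look for
    observations: str = ""       # additional context or notes

@dataclass
class EvaluationScript:
    instructions: str
    items: List[EvaluationItem] = field(default_factory=list)
    notes: List[str] = field(default_factory=list)
    critical_errors: List[str] = field(default_factory=list)
    status: str = "draft"        # "draft", "published", or "archived"

script = EvaluationScript(
    instructions="Evaluate adherence to the customer-service script.",
    items=[
        EvaluationItem(
            name="Greeting",
            sub_item="Opening phrase",
            percentage=20,
            definition="Agent greets the customer and introduces themselves.",
            scripts_expected=["Good morning, thank you for calling"],
        )
    ],
)
```

The percentages across all items would typically sum to 100, since each item's weight is its share of the total score.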
Accessing the Editor
To access the evaluation script editor, navigate through the Campaigns tab to find your scripts.
Steps:
- Go to the AI Auditing module
- Click on the Campaigns tab in the main tab bar
- Select the Pautas (Scripts) sub-tab
- Find your script in the grid and click the Details button

The Details button opens the Evaluation Script Detail modal, where you can view, edit, and manage your evaluation script.
View Tab
The View tab shows a read-only view of the currently published evaluation script. This is the version that's actively being used to evaluate conversations.

View Tab Contents:
- Script Information: ID, creation date, and total number of items
- Instructions: General guidelines for the evaluation
- Items Grid: All evaluation items with their details in a sortable grid
- Notes: Additional notes for evaluators
- Critical Errors: List of errors that result in severe score penalties
The Items Grid includes the following columns:

| Column | Description |
|---|---|
| Item Name | The main evaluation criterion |
| Sub-Item | Specific aspect being evaluated |
| Percentage | Weight of this item in the total score |
| Definition | What this item measures |
| Scripts Expected | Phrases/behaviors to look for |
| Observations | Additional context or notes |
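The table above shows that each item carries a percentage weight, and the View tab also lists critical errors that carry severe penalties. As a rough illustration of how such weights might combine, here is a hedged sketch; the product's actual scoring formula is not documented here, and the penalty factor is purely an assumption:

```python
def score_conversation(item_results, critical_error_found=False,
                       critical_penalty=0.5):
    """Illustrative weighted scoring, not the product's real formula.

    item_results: list of (percentage_weight, passed) tuples, one per item.
    critical_penalty: assumed fraction of the score lost to a critical error.
    """
    # Sum the weights of the items the agent satisfied.
    total = sum(weight for weight, passed in item_results if passed)
    if critical_error_found:
        # The guide says critical errors cause severe penalties; halving
        # the score here is just one plausible interpretation.
        total *= (1 - critical_penalty)
    return total

# Agent passed items worth 30 and 50 but missed a 20-point item:
print(score_conversation([(30, True), (50, True), (20, False)]))        # 80
# The same conversation with a critical error found:
print(score_conversation([(30, True), (50, True), (20, False)], True))  # 40.0
```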
Edit Tab
The Edit tab allows you to modify evaluation scripts. Changes are made to a draft version that must be explicitly published to take effect.
Important: Changes to evaluation scripts don't take effect immediately. You must save your draft and then publish it. The previous version will be archived.
Creating a Draft
If no draft exists, you'll see a "Create Draft" button. Click it to create a new working copy based on the current published version.

The Script Item Editor
Once a draft exists, you'll see the two-panel editor interface:

Left Panel: Items List
Shows all evaluation items. Click an item to select it for editing. Each item shows its name, sub-item, and percentage weight.
Right Panel: Editor Fields
Edit the selected item's details: name, sub-item, percentage, definition, expected scripts, and observations.
Editor Fields

- Item Name: The main category name (e.g., "Greeting")
- Sub-Item: Specific aspect within the category (optional)
- Percentage: Weight of this item in the total score (0-100)
- Definition: Detailed description of what is being evaluated
- Scripts Expected: List of phrases or behaviors to look for
- Observations: Additional notes or context
AI Suggestions
Click the "AI Suggest" button to get AI-powered suggestions for improving the selected item's definition and expected scripts.
Tip: Use AI suggestions to refine your evaluation criteria and ensure they're clear and comprehensive. You can accept, modify, or reject each suggestion.
Notes and Critical Errors
Below the item editor, you'll find sections for managing Notes and Critical Errors that apply to the entire script.

Save and Publish Workflow
- Save Draft: Saves your changes without publishing (this enables the Publish button)
- Discard: Deletes the current draft along with all unsaved changes
- Publish: Makes the draft the new active version and archives the previous one
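The workflow above is essentially a small state machine. The following sketch restates it as code; the state and action names mirror the guide, but the transition logic itself is an assumption for illustration, not the application's implementation:

```python
# Illustrative transitions for the draft workflow described above.
# (state, action) -> resulting state; None means the version is deleted.
VALID_TRANSITIONS = {
    ("draft", "save"): "draft",           # Save Draft keeps it a draft
    ("draft", "publish"): "published",    # Publish makes it the active version
    ("draft", "discard"): None,           # Discard deletes the draft
    ("published", "replace"): "archived", # publishing a new version archives the old
}

def apply_action(state, action):
    """Return the new state, or raise if the action isn't allowed."""
    key = (state, action)
    if key not in VALID_TRANSITIONS:
        raise ValueError(f"Cannot {action!r} a {state!r} script")
    return VALID_TRANSITIONS[key]

print(apply_action("draft", "publish"))  # published
```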
Warning: If you close the modal with unsaved changes, you'll be asked to confirm. Unsaved changes will be lost if you proceed.
History Tab
The History tab shows all versions of the evaluation script, including the current published version, any active draft, and archived versions.

Version States:
- Published: The currently active version
- Draft: Work in progress, not yet published
- Archived: Previous versions that have been replaced
Comparing Versions
Click the "Compare" button to enter comparison mode. Select two versions using the checkboxes, then click "Compare Selected" to see the differences.

The comparison result shows items added, modified, and removed between versions, along with an AI-generated analysis of the changes.
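A minimal sketch of how such a comparison might classify differences, keying items by name (an assumption; the real tool may match items differently, and it additionally produces an AI-generated analysis):

```python
def compare_versions(old_items, new_items):
    """Classify differences between two versions' item sets.

    old_items / new_items: dicts mapping item name -> item fields.
    Returns names of items added, modified, and removed (illustrative only).
    """
    added = [n for n in new_items if n not in old_items]
    removed = [n for n in old_items if n not in new_items]
    modified = [n for n in new_items
                if n in old_items and new_items[n] != old_items[n]]
    return {"added": added, "modified": modified, "removed": removed}

# Hypothetical example: v2 reweights "Greeting", drops "Closing",
# and introduces "Problem Resolution".
v1 = {"Greeting": {"percentage": 20}, "Closing": {"percentage": 10}}
v2 = {"Greeting": {"percentage": 25}, "Problem Resolution": {"percentage": 10}}
print(compare_versions(v1, v2))
```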
Restoring Versions
To restore an archived version, click the "Restore" button on that version. This creates a new draft based on the archived version's content.
Note: Restoring a version doesn't immediately replace the current version. It creates a draft that you can modify and then publish.
Test Tab
The Test tab allows you to re-evaluate conversations using your draft script before publishing it. This helps ensure your changes produce the expected results.

Best Practice: Always test your script changes on a few conversations before publishing. This helps catch issues before they affect all future evaluations.
How to Test:
- Create and save a draft with your changes
- Switch to the Test tab
- Select conversations to re-evaluate
- Review the results to verify your changes work as expected
- If satisfied, go back to Edit tab and publish your draft
Note: You must have an active draft to use the Test tab. If no draft exists, create one first in the Edit tab.