Draft Documentation

This guide is currently in development. Content may be incomplete or subject to change.

Estimated reading time: ~15 minutes

Editing Evaluation Scripts

Learn how to view, edit, and manage evaluation scripts (pautas) that define how AI evaluates conversations. Understand the draft workflow, version history, and testing capabilities.

Overview

Evaluation scripts (called "pautas" in Spanish) are the templates that define how the AI evaluates conversations. Each script contains evaluation items with specific criteria, expected behaviors, and scoring weights.

Required Permission: You need the scriptEditor edit permission to create and modify evaluation scripts. Contact your administrator if you don't have access.

Key Concepts:

  • Evaluation Items: Individual criteria being evaluated (e.g., "Greeting", "Problem Resolution")
  • Sub-Items: More specific aspects within an item
  • Script Expected: The exact phrases or behaviors the agent should follow
  • Critical Errors: Serious issues that significantly impact the score
  • Draft/Published: Scripts follow a versioning workflow; changes are made in a draft that must be published to take effect
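To make these concepts concrete, here is a minimal sketch of how a script and its items could be modeled. The field names mirror the editor fields described in this guide but are illustrative, not the product's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationItem:
    item_name: str            # main criterion, e.g. "Greeting"
    sub_item: str = ""        # optional, more specific aspect
    percentage: float = 0.0   # weight of this item in the total score (0-100)
    definition: str = ""      # what this item measures
    scripts_expected: list[str] = field(default_factory=list)  # phrases/behaviors to look for
    observations: str = ""    # additional context or notes

@dataclass
class EvaluationScript:
    script_id: str
    instructions: str                  # general guidelines for the evaluation
    items: list[EvaluationItem]
    notes: list[str]
    critical_errors: list[str]         # serious issues that heavily penalize the score
    status: str = "published"          # "draft", "published", or "archived"
```

The same shape recurs throughout this guide: the View tab renders it read-only, the Edit tab mutates a draft copy of it, and the History tab stores archived snapshots of it.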

Accessing the Editor

To access the evaluation script editor, navigate through the Campaigns tab to find your scripts.

Steps:

  1. Go to the AI Auditing module
  2. Click on the Campaigns tab in the main tab bar
  3. Select the Pautas (Scripts) sub-tab
  4. Find your script in the grid and click the Details button

Pautas grid showing evaluation scripts with Details button

The Details button opens the Evaluation Script Detail modal, where you can view, edit, and manage your evaluation script.

View Tab

The View tab shows a read-only view of the currently published evaluation script. This is the version that's actively being used to evaluate conversations.

Evaluation Script Detail modal showing the View tab

View Tab Contents:

  • Script Information: ID, creation date, and total number of items
  • Instructions: General guidelines for the evaluation
  • Items Grid: All evaluation items with their details in a sortable grid
  • Notes: Additional notes for evaluators
  • Critical Errors: List of errors that result in severe score penalties

Items Grid Columns:

  • Item Name: The main evaluation criterion
  • Sub-Item: Specific aspect being evaluated
  • Percentage: Weight of this item in the total score
  • Definition: What this item measures
  • Scripts Expected: Phrases/behaviors to look for
  • Observations: Additional context or notes
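To see how the Percentage column and critical errors interact, here is an illustrative scoring sketch. The real engine's formula is not documented here; this assumes a weighted average of per-item compliance (0.0 to 1.0) and assumes any critical error zeroes the score, both of which are assumptions:

```python
def score(item_results: dict[str, float], weights: dict[str, float],
          critical_error_found: bool = False) -> float:
    """item_results maps item name -> compliance (0.0-1.0);
    weights maps item name -> percentage weight (summing to 100)."""
    if critical_error_found:
        return 0.0  # assumed penalty: a critical error overrides item scores
    # Weighted sum lands on a 0-100 scale because weights sum to 100.
    return sum(item_results.get(name, 0.0) * w for name, w in weights.items())

weights = {"Greeting": 20, "Problem Resolution": 50, "Closing": 30}
results = {"Greeting": 1.0, "Problem Resolution": 0.8, "Closing": 1.0}
# 20*1.0 + 50*0.8 + 30*1.0 = 90.0
```

Whatever the actual formula, the Percentage column is what determines how much each item can move the final score, which is why the editor treats it as a first-class field.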

Edit Tab

The Edit tab allows you to modify evaluation scripts. Changes are made to a draft version that must be explicitly published to take effect.

Important: Changes to evaluation scripts don't take effect immediately. You must save your draft and then publish it. The previous version will be archived.

Creating a Draft

If no draft exists, you'll see a "Create Draft" button. Click it to create a new working copy based on the current published version.

Edit tab showing Create Draft button when no draft exists

The Script Item Editor

Once a draft exists, you'll see the two-panel editor interface:

Edit tab with active draft showing the script item editor

Left Panel: Items List

Shows all evaluation items. Click an item to select it for editing. Each item shows its name, sub-item, and percentage weight.

Right Panel: Editor Fields

Edit the selected item's details: name, sub-item, percentage, definition, expected scripts, and observations.

Editor Fields

Right panel showing the editor fields for an evaluation item

  • Item Name: The main category name (e.g., "Greeting")
  • Sub-Item: Specific aspect within the category (optional)
  • Percentage: Weight of this item in the total score (0-100)
  • Definition: Detailed description of what is being evaluated
  • Scripts Expected: List of phrases or behaviors to look for
  • Observations: Additional notes or context
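Before saving a draft, it is worth sanity-checking these fields. The checks below are assumed rules (required name, percentage in range, weights summing to 100); consult your instance's actual validation messages for the authoritative list:

```python
def validate_draft(items: list[dict]) -> list[str]:
    """Return a list of human-readable problems; empty means the draft looks valid."""
    errors = []
    for i, item in enumerate(items):
        if not item.get("item_name"):
            errors.append(f"Item {i + 1}: name is required")
        pct = item.get("percentage", 0)
        if not 0 <= pct <= 100:
            errors.append(f"Item {i + 1}: percentage must be between 0 and 100")
    total = sum(item.get("percentage", 0) for item in items)
    if total != 100:
        errors.append(f"Percentages sum to {total}, expected 100")
    return errors
```

Running a check like this before publishing catches the most common editing mistake: item weights that no longer add up after items are added or removed.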

AI Suggestions

Click the "AI Suggest" button to get AI-powered suggestions for improving the selected item's definition and expected scripts.

Tip: Use AI suggestions to refine your evaluation criteria and ensure they're clear and comprehensive. You can accept, modify, or reject each suggestion.

Notes and Critical Errors

Below the item editor, you'll find sections for managing Notes and Critical Errors that apply to the entire script:

Notes and Critical Errors sections in the editor

Save and Publish Workflow

Save Draft → Test (Optional) → Publish
  • Save Draft: Saves your changes without publishing (enables Publish button)
  • Discard: Deletes the current draft and all unsaved changes
  • Publish: Makes the draft the new active version (archives the previous one)
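The lifecycle above can be sketched as a tiny state machine. The states and transitions mirror this workflow; the names are illustrative, not API identifiers:

```python
# Allowed transitions: (current state, action) -> next state
TRANSITIONS = {
    ("none", "create_draft"): "draft",      # working copy of the published version
    ("draft", "save"): "draft",             # saving keeps the draft, enables Publish
    ("draft", "discard"): "none",           # deletes the draft and its changes
    ("draft", "publish"): "published",      # previous published version is archived
}

def next_state(state: str, action: str) -> str:
    key = (state, action)
    if key not in TRANSITIONS:
        raise ValueError(f"Cannot {action!r} from state {state!r}")
    return TRANSITIONS[key]
```

Note that there is no direct path from "none" to "published": every change must pass through a saved draft, which is what makes the optional Test step possible.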

Warning: If you close the modal with unsaved changes, you'll be asked to confirm. Unsaved changes will be lost if you proceed.

History Tab

The History tab shows all versions of the evaluation script, including the current published version, any active draft, and archived versions.

History tab showing version timeline

Version States:

  • Published: Currently active version
  • Draft: Work in progress
  • Archived: Previous versions

Comparing Versions

Click the "Compare" button to enter comparison mode. Select two versions using the checkboxes, then click "Compare Selected" to see the differences.

History tab in compare mode with checkboxes

The comparison result shows items added, modified, and removed between versions, along with an AI-generated analysis of the changes.
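Conceptually, that added/modified/removed classification can be sketched as below, keyed by item name. The product's actual diff logic (and its AI-generated analysis) may differ:

```python
def diff_versions(old: dict[str, dict], new: dict[str, dict]) -> dict[str, list[str]]:
    """old/new map an item key (e.g. item name) to that item's field values."""
    return {
        "added": [k for k in new if k not in old],
        "removed": [k for k in old if k not in new],
        "modified": [k for k in new if k in old and new[k] != old[k]],
    }
```

This is why the comparison is most readable when item names are stable across versions: renaming an item shows up as one removal plus one addition rather than a modification.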

Restoring Versions

To restore an archived version, click the "Restore" button on that version. This creates a new draft based on the archived version's content.

Note: Restoring a version doesn't immediately replace the current version. It creates a draft that you can modify and then publish.

Test Tab

The Test tab allows you to re-evaluate conversations using your draft script before publishing it. This helps ensure your changes produce the expected results.

Test tab showing the re-evaluation panel

Best Practice: Always test your script changes on a few conversations before publishing. This helps catch issues before they affect all future evaluations.

How to Test:

  1. Create and save a draft with your changes
  2. Switch to the Test tab
  3. Select conversations to re-evaluate
  4. Review the results to verify your changes work as expected
  5. If satisfied, go back to Edit tab and publish your draft
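The review in step 4 boils down to comparing draft scores against the existing published scores. A minimal sketch, where `evaluate` stands in for the real evaluation engine (which is not exposed here):

```python
def compare_draft(conversations: list[str],
                  published_scores: dict[str, float],
                  evaluate) -> dict[str, float]:
    """Return the score delta (draft minus published) per conversation ID."""
    return {cid: evaluate(cid) - published_scores[cid] for cid in conversations}

# Example with a stub evaluator that scores everything 85.0:
deltas = compare_draft(["c1", "c2"], {"c1": 80.0, "c2": 95.0},
                       evaluate=lambda cid: 85.0)
# deltas == {"c1": 5.0, "c2": -10.0}
```

Large or unexpected deltas on conversations you know well are the signal to revisit the draft before publishing.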

Note: You must have an active draft to use the Test tab. If no draft exists, create one first in the Edit tab.