Evaluation sets allow you to reliably and continuously test the accuracy and performance of your AI document processors in Extend. By creating sets of representative document examples with validated outputs, you can verify that your extraction and classification configs work as intended, identify areas for improvement, and rapidly evaluate changes to see whether they improve processor accuracy.

- Create evaluation sets containing examples that represent the range of documents your processor needs to handle
- Run evaluations on your sets to test processor accuracy and performance
- Review evaluation results to verify processor outputs match expected results, and identify areas for improvement and common errors
- Iterate on processor configuration and training data based on evaluation insights to improve accuracy
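
The core of an evaluation run is comparing a processor's outputs against each example's validated outputs. The sketch below illustrates that comparison for a simple field-level accuracy metric; the flat-dictionary format and field names are hypothetical illustrations, not Extend's actual API or scoring method:

```python
# Hypothetical sketch: scoring a processor's extracted fields against an
# evaluation set's validated (expected) outputs. The flat-dict format and
# field names are assumptions for illustration, not Extend's API.

def score_example(expected: dict, actual: dict) -> dict:
    """Per-field match results for one evaluation example."""
    return {field: actual.get(field) == value for field, value in expected.items()}

def evaluate(examples: list) -> float:
    """Fraction of expected fields matched across all examples."""
    matches = [m for exp, act in examples for m in score_example(exp, act).values()]
    return sum(matches) / len(matches) if matches else 0.0

# One validated example vs. a processor output that gets one field wrong.
examples = [
    ({"invoice_number": "INV-001", "total": "42.00"},
     {"invoice_number": "INV-001", "total": "40.00"}),
]
print(evaluate(examples))  # 0.5 — half the expected fields matched
```

Tracking a metric like this across evaluation runs is what lets you confirm that a config change actually improved accuracy rather than regressing it.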

