Evaluator last updated May 7, 2026.
## At a glance
| | |
|---|---|
| Input type | Informational text |
| Passage length | 200 words or more |
| Supported grades | 3–12 |
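The requirements above can be checked before a passage is ever submitted. Below is a minimal pre-check sketch; the function name and message strings are illustrative assumptions, not part of the evaluator's API — only the 200-word minimum and the grade 3–12 range come from the table.

```python
# Hypothetical pre-flight check for a passage, based on the "At a glance"
# requirements (200+ words, grades 3-12). Names here are illustrative.

def check_passage(text: str, target_grade: int) -> list[str]:
    """Return a list of problems; an empty list means the input meets the stated requirements."""
    problems = []
    if len(text.split()) < 200:
        problems.append("passage is shorter than 200 words")
    if not 3 <= target_grade <= 12:
        problems.append("target grade must be between 3 and 12")
    return problems
```

Running such a check client-side avoids wasted calls for passages the evaluator would likely flag as too short.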
A text’s purpose is what it is trying to do: inform, persuade, explain, describe, or entertain. The Purpose Evaluator assesses how clearly a text communicates its central purpose, analyzing whether the intent is explicitly stated, indirectly hinted at, or masked (e.g., a passage that presents itself as neutral information while actually building a persuasive argument).
## Model and prompt
For instructions on running the evaluator, see Running an evaluator.
| | |
|---|---|
| Model used | gemini-3-flash-preview |
| Temperature | 0 |
| Prompts | View prompts ↗ |
| Python notebook | View notebook ↗ |
The prompt is optimized for the model listed above. If you use a different model or parameters, accuracy and output will vary.
| Requirement | Supported | Required |
|---|---|---|
| Target grade level | Allows grade-specific complexity guidance | Yes |
| Text type | Informational text | Yes |
## Output
| Field | Description |
|---|---|
| complexity_score | Purpose complexity level: slightly_complex, moderately_complex, very_complex, exceedingly_complex, or more_context_needed. |
| reasoning | Overall and detailed explanation of the rating, citing specific text features and their impact on student comprehension. |
| adjustment_and_scaffolding | Suggestions for adjusting the text or scaffolding students to make it appropriate for the target grade. |
| recommended_use_cases | Instructional opportunity recommendations for the passage’s purpose. |
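A minimal sketch of validating this output, assuming the result arrives as a JSON object keyed exactly as in the table above — the parsing function itself is illustrative, not part of the evaluator:

```python
# Sanity-check a raw evaluator response (illustrative sketch).
# Field names and allowed scores come from the Output table above.

import json

ALLOWED_SCORES = {
    "slightly_complex", "moderately_complex", "very_complex",
    "exceedingly_complex", "more_context_needed",
}
REQUIRED_FIELDS = {
    "complexity_score", "reasoning",
    "adjustment_and_scaffolding", "recommended_use_cases",
}

def parse_result(raw: str) -> dict:
    """Parse a JSON response and verify the expected fields and score values."""
    result = json.loads(raw)
    missing = REQUIRED_FIELDS - result.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if result["complexity_score"] not in ALLOWED_SCORES:
        raise ValueError(f"unexpected score: {result['complexity_score']}")
    return result
```

Validating the score against the closed set of values catches malformed or truncated model responses early.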
## Interpreting results
The evaluator returns one of the following ratings, along with reasoning, to help you determine your best course of action.
| Rating | Meaning |
|---|---|
| Slightly complex | The purpose is explicitly stated and concrete. Readers can identify intent without inference. |
| Moderately complex | The purpose is mostly clear but may require some inference from context or supporting details. |
| Very complex | The purpose is hinted at or subtle. Readers must infer intent from how details are selected or organized. |
| Exceedingly complex | The purpose is masked or layered. The text appears to do one thing while actually doing another. |
| More context needed | The passage is too short to determine its purpose. Try a longer portion of the text, or a different text. |
More complex ratings indicate that a text requires more interpretation by readers. Purpose complexity isn’t inherently good or bad; it depends on the instructional goals. A text rated Very complex or Exceedingly complex may be ideal for teaching students to identify implicit or persuasive intent, while the same rating would signal a mismatch if the goal is content comprehension.
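That goal-dependent interpretation can be sketched as a simple decision rule. The two goal names below are assumptions for illustration, not evaluator inputs; the rating values come from the table above.

```python
# Illustrative rating-to-decision mapping for two hypothetical lesson goals.

def is_good_fit(rating: str, goal: str) -> bool:
    """True if a purpose-complexity rating suits the instructional goal."""
    if rating == "more_context_needed":
        return False  # re-run with a longer passage first
    if goal == "content_comprehension":
        # Clearer purpose leaves more attention for the content itself.
        return rating in {"slightly_complex", "moderately_complex"}
    if goal == "analyzing_authorial_intent":
        # Subtle or masked purpose gives students something to uncover.
        return rating in {"very_complex", "exceedingly_complex"}
    return False
```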
## Accuracy and validation
This evaluator is provided as Early access: comprehensive accuracy measures are still evolving, and validation testing is ongoing.
The evaluator was optimized using 35 annotated passages from the CLEAR Corpus ↗ and validated through expert review of additional samples.
| Metric | Result |
|---|---|
| Complexity score accuracy | 84% agreement with expert annotations |
| Expert agreement | 70% |
| Reasoning soundness | Average 3.7 / 5 |
| Dataset source | CLEAR Corpus ↗ |
Exceedingly complex texts aren’t common in lower grades, and the benchmark dataset doesn’t include many examples of them. Use caution when applying this evaluator to higher grade levels, where such texts are more frequent.
## Evaluator release history
| Date | Changes |
|---|---|
| May 7, 2026 | First release. |