Evaluator last updated March 20, 2026.
At a glance
- Input type: Informational text
- Supported grades: 3–12
The Conventionality Evaluator assesses how directly a text communicates its meaning. It analyzes whether language is literal and explicit or relies on figurative, abstract, or implied meaning that requires interpretation.
Model and prompt
For instructions on running the evaluator, see Running an evaluator.
- Model used: gemini-3-flash-preview
- Temperature: 0
- Prompts: View prompts
- Python notebook: View notebook
Other configurations will produce different results and may have lower accuracy.
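If you are scripting the evaluator yourself, the documented settings can be captured as constants. This is an illustrative sketch only: the model name and temperature come from the configuration above, while the variable names and payload structure are hypothetical, not part of the official notebook.

```python
# Illustrative configuration for the Conventionality Evaluator.
# Model name and temperature are taken from the documentation;
# everything else is a hypothetical sketch.
EVALUATOR_CONFIG = {
    "model": "gemini-3-flash-preview",  # model stated in the docs
    "temperature": 0,                   # deterministic output, as documented
}

def build_request(text: str, grade_level: int) -> dict:
    """Assemble a hypothetical request payload for one evaluation run."""
    return {
        **EVALUATOR_CONFIG,
        "input_text": text,
        "target_grade": grade_level,
    }
```

Pinning the temperature to 0 matters because, as noted above, other configurations will produce different results.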
| Requirement | Supported | Required |
| --- | --- | --- |
| Target grade level | Enables grade context evaluation | Yes |
| Text type | Informational text | Optional |
| Length | 200–1,000 words | Yes |
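The input requirements above can be checked before sending a passage to the model. The following is a minimal sketch (the function name and return format are hypothetical; the 200–1,000-word and grade 3–12 limits are from the requirements):

```python
def validate_input(text: str, grade_level: int) -> list[str]:
    """Check a passage against the documented input requirements.

    Hypothetical helper: returns a list of problems, empty if valid.
    """
    problems = []
    word_count = len(text.split())
    if not 200 <= word_count <= 1000:  # required length: 200-1,000 words
        problems.append(f"length {word_count} words is outside 200-1,000")
    if not 3 <= grade_level <= 12:     # supported grades: 3-12
        problems.append(f"grade {grade_level} is outside the supported 3-12 range")
    return problems
```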
Output
| Field | Description |
| --- | --- |
| Complexity rating | Conventionality complexity level |
| Reasoning | Explanation of the rating based on language features |
| Conventionality features | Specific language features driving complexity (for example, idioms, metaphors, irony, or implicit meaning) |
| Grade context | Comparison of conventionality demands with expectations for the provided grade |
| Instructional insights | Suggestions for scaffolding or teaching unconventional language features |
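When consuming the evaluator's output programmatically, the documented fields map naturally onto a small data container. This sketch assumes the response arrives as a dictionary; the key names here are illustrative and may differ from the real response schema:

```python
from dataclasses import dataclass, field

@dataclass
class ConventionalityResult:
    """Container mirroring the documented output fields (names illustrative)."""
    complexity_rating: str
    reasoning: str
    conventionality_features: list = field(default_factory=list)
    grade_context: str = ""
    instructional_insights: str = ""

def parse_result(payload: dict) -> ConventionalityResult:
    # Hypothetical keys -- adjust to the real response schema.
    return ConventionalityResult(
        complexity_rating=payload["complexity_rating"],
        reasoning=payload["reasoning"],
        conventionality_features=payload.get("conventionality_features", []),
        grade_context=payload.get("grade_context", ""),
        instructional_insights=payload.get("instructional_insights", ""),
    )
```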
Interpreting results
The evaluator returns one of the following ratings, along with reasoning, to help you interpret the conventionality demands of the text.
| Rating | Meaning |
| --- | --- |
| Slightly complex | Language is literal and explicit. Meaning is directly stated. |
| Moderately complex | Mostly literal language with occasional figurative or implicit meaning. |
| Very complex | Frequent figurative language or implied meaning requires interpretation. |
| Exceedingly complex | Language relies heavily on abstraction, layered meaning, or sustained figurative expression. |
More complex ratings indicate texts that require greater interpretive effort from readers.
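Because the four ratings form an ordered scale, they can be mapped to integers for sorting or aggregating results across a set of texts. A minimal sketch (the scale values and helper are hypothetical; the rating labels are the documented ones):

```python
# Ordinal scale for the four documented ratings
# (1 = least interpretive effort, 4 = most).
RATING_SCALE = {
    "Slightly complex": 1,
    "Moderately complex": 2,
    "Very complex": 3,
    "Exceedingly complex": 4,
}

def most_demanding(ratings: list[str]) -> str:
    """Return the rating requiring the greatest interpretive effort."""
    return max(ratings, key=RATING_SCALE.__getitem__)
```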
Accuracy and validation
This evaluator is provided as an Early access release.
Comprehensive accuracy measures are still evolving, and validation testing is ongoing.
The evaluator was optimized using 35 annotated passages and validated through expert review of additional samples.
| Metric | Result |
| --- | --- |
| Complexity score accuracy | 83% agreement with expert annotations |
| Expert agreement | 90% (9 of 10 examples approved) |
| Reasoning soundness | Average 4.4 / 5 |
| Dataset source | CLEAR Corpus |
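Agreement here is presumably the fraction of passages where the evaluator's rating exactly matches the expert annotation. A minimal sketch of that calculation, with made-up example data (not the validation dataset):

```python
def agreement(predicted: list[str], expert: list[str]) -> float:
    """Fraction of exact rating matches between evaluator and expert labels."""
    if len(predicted) != len(expert):
        raise ValueError("prediction and expert lists must be the same length")
    matches = sum(p == e for p, e in zip(predicted, expert))
    return matches / len(expert)
```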
Evaluator release history
| Date | Changed |
| --- | --- |
| March 20, 2026 | First release. |