

Evaluator last updated March 20, 2026.

At a glance

Input type: Informational text
Supported grades: 3–12
The Conventionality Evaluator assesses how directly a text communicates its meaning. It analyzes whether language is literal and explicit or relies on figurative, abstract, or implied meaning that requires interpretation.

Model and prompt

For instructions on running the evaluator, see Running an evaluator.
Model used: gemini-3-flash-preview
Temperature: 0
Prompts: View prompts ↗
Python notebook: View notebook ↗
Other configurations will produce different results and may have lower accuracy.

Inputs

Target grade level (required): Enables grade-context evaluation.
Text type (required): Informational text; optional length 200-1,000 words.
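As a concrete illustration, the two inputs above could be modeled as a small payload object. This is a hypothetical sketch; the class and field names, and the validation rules, are assumptions for illustration, not the evaluator's actual API.

```python
from dataclasses import dataclass


@dataclass
class EvaluatorInput:
    """Hypothetical payload for the Conventionality Evaluator (names assumed)."""

    text: str          # informational text; 200-1,000 words is the suggested length
    target_grade: int  # enables grade-context evaluation; supported grades are 3-12

    def __post_init__(self) -> None:
        # Grade level is required and bounded by the supported range.
        if not 3 <= self.target_grade <= 12:
            raise ValueError("target_grade must be between 3 and 12")

    @property
    def word_count(self) -> int:
        # The length guideline is optional, so expose the count rather than enforce it.
        return len(self.text.split())
```

Construction fails fast on an unsupported grade, while the optional length guideline is surfaced through `word_count` rather than enforced.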

Output

Complexity rating: Conventionality complexity level.
Reasoning: Explanation of the rating based on language features.
Conventionality features: Specific language features driving complexity (for example, idioms, metaphors, irony, or implicit meaning).
Grade context: Comparison of conventionality demands with expectations for the provided grade.
Instructional insights: Suggestions for scaffolding or teaching unconventional language features.
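For readers consuming the evaluator's response programmatically, the output fields above might map onto a structure like the following. The field names are illustrative assumptions based on the table, not a documented schema.

```python
from dataclasses import dataclass, field


@dataclass
class EvaluatorOutput:
    """Hypothetical response shape; field names are assumed from the field table."""

    complexity_rating: str   # one of the four conventionality levels
    reasoning: str           # explanation grounded in language features
    conventionality_features: list[str] = field(default_factory=list)  # e.g. ["idiom", "metaphor"]
    grade_context: str = ""  # comparison with grade-level expectations
    instructional_insights: str = ""  # scaffolding or teaching suggestions
```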

Interpreting results

The evaluator returns one of the following ratings, along with reasoning, to help you interpret the conventionality demands of the text.
Slightly complex: Language is literal and explicit. Meaning is directly stated.
Moderately complex: Mostly literal language with occasional figurative or implicit meaning.
Very complex: Frequent figurative language or implied meaning requires interpretation.
Exceedingly complex: Language relies heavily on abstraction, layered meaning, or sustained figurative expression.
More complex ratings indicate texts that require greater interpretive effort from readers.
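Because the ratings form an ordered scale, it can be convenient to compare them numerically. A minimal sketch, assuming you map the four documented levels onto an `IntEnum`; the `needs_scaffolding` helper and its threshold are an arbitrary illustration, not part of the evaluator.

```python
from enum import IntEnum


class ConventionalityRating(IntEnum):
    """The four documented rating levels, ordered by interpretive effort."""

    SLIGHTLY_COMPLEX = 1
    MODERATELY_COMPLEX = 2
    VERY_COMPLEX = 3
    EXCEEDINGLY_COMPLEX = 4


def needs_scaffolding(rating: ConventionalityRating) -> bool:
    """Flag texts whose figurative or implied meaning likely needs teacher support.

    The 'Very complex' cutoff here is an assumption chosen for illustration.
    """
    return rating >= ConventionalityRating.VERY_COMPLEX
```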

Accuracy and validation

This evaluator is provided as an Early access release.
Comprehensive accuracy measures are still evolving, and validation testing is ongoing.
The evaluator was optimized using 35 annotated passages and validated through expert review of additional samples.
Agreement with expert annotations: 83%
Examples approved in expert review: 90% (9 of 10)
Average rating: 4.4 / 5
Dataset source: CLEAR Corpus ↗

Evaluator release history

March 20, 2026: First release.