Documentation Index
Fetch the complete documentation index at: https://docs.learningcommons.org/llms.txt
Use this file to discover all available pages before exploring further.
General concepts
Evaluator
A tool that measures the quality of materials generated by AI-powered educational applications. Evaluators assess specific aspects of content for pedagogical alignment and identify areas for improvement.
Rubric
A structured framework, grounded in learning science, for evaluating a concept. It provides consistent criteria and serves as the foundation for a family of evaluators.
Dimension
A specific facet or attribute measured within a rubric. An individual evaluator typically focuses on a single rubric dimension.
Evaluator family
A collection of evaluators that score AI-generated content across multiple dimensions from one or more rubrics.
Accuracy
The degree to which an evaluator’s score aligns with curated or human-annotated validation datasets. Expressed as a percentage, it reflects the evaluator’s reliability.
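As a rough illustration (not part of the Learning Commons tooling), accuracy in this sense can be computed as the percentage of validation items on which the evaluator's judgment matches the human annotation. The scores and labels below are hypothetical.

```python
def accuracy(evaluator_scores, human_labels):
    """Percentage of items where the evaluator agrees with the human label."""
    matches = sum(1 for e, h in zip(evaluator_scores, human_labels) if e == h)
    return 100 * matches / len(human_labels)

# Hypothetical pass/fail judgments on a 5-item validation set.
evaluator_scores = ["pass", "fail", "pass", "pass", "fail"]
human_labels     = ["pass", "fail", "fail", "pass", "fail"]
print(accuracy(evaluator_scores, human_labels))  # 80.0
```

Real evaluator scores may be graded rather than binary, in which case agreement is often defined within a tolerance band rather than as exact match.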
Early access
An evaluator made available early in its development because it offers useful capabilities for research and experimentation. While stable, it remains limited in scope and under active development. Early-access evaluators invite feedback to guide iterative improvement.
Literacy evaluator family concepts
Quantitative text analysis
An objective measure of text difficulty based on features such as word length, sentence length, and syllable count (for example, the Flesch–Kincaid Grade Level).
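To make the Flesch–Kincaid example concrete, here is a minimal sketch of the published formula, 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. The syllable counter is a naive vowel-group heuristic for illustration; production implementations use more careful counting.

```python
import re

def count_syllables(word):
    """Naive heuristic: count runs of vowels (approximate, not exact)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

sample = "The cat sat on the mat. It was a sunny day."
print(round(flesch_kincaid_grade(sample), 2))
```

Very simple text can score below grade 0, as in the sample above; scores are typically interpreted as approximate US school grade levels.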
Qualitative text analysis
An examination of a text’s structure, language, purpose, and the knowledge it demands of the reader.
Background knowledge
The prior knowledge a student is expected to bring to a text, which affects their ability to understand it. This includes both curriculum-based knowledge and lived experience.