Early Release
This evaluator reflects early-stage work. We’re continuously improving its accuracy and reliability.
What you’ll see
The feedback includes a complexity score for each passage and an explanation of why the passage received that rating, based on the Sentence Structure rubric. The feedback takes the form of a JSON object with two keys: answer and reasoning.
How to use the information
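As an illustration, a response of this shape might look like the following (the score value and reasoning text here are invented for illustration, not actual evaluator output):

```json
{
  "answer": 3,
  "reasoning": "The passage mixes simple and compound sentences with occasional subordinate clauses, placing it in the middle of the Sentence Structure rubric."
}
```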
Check out the rubric for details on how each result is measured and what it means. Also review the evaluator's accuracy to see where it is most reliable and where it is most likely to make mistakes. The information provided by the Sentence Structure Evaluator can be combined with quantitative measures and feedback from other evaluators and used in a variety of ways. For example:
Curriculum and assessment creators
- Ensure AI-generated texts are appropriately complex for their intended use.
- Provide educators with guidance on instructional focus, including where and why texts may pose challenges to students.
EdTech developers
- Ensure the complexity of texts increases appropriately during a year and across grades.
- Ensure AI-generated texts meet the complexity requirements expected by educators.
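For developers checking that complexity increases appropriately across a sequence of texts, a minimal sketch of how the evaluator's JSON output could be consumed (the scores and reasoning strings below are illustrative assumptions, not real evaluator responses):

```python
import json

# Hypothetical evaluator outputs for texts ordered by intended point
# in the school year (score values are illustrative assumptions).
feedback = [
    '{"answer": 2, "reasoning": "Mostly simple sentences."}',
    '{"answer": 3, "reasoning": "Compound sentences appear regularly."}',
    '{"answer": 4, "reasoning": "Frequent subordinate clauses."}',
]

scores = [json.loads(item)["answer"] for item in feedback]

# Flag any point where sentence structure complexity drops instead of
# holding steady or increasing across the sequence.
regressions = [i for i in range(1, len(scores)) if scores[i] < scores[i - 1]]

print(scores)       # [2, 3, 4]
print(regressions)  # [] -> complexity never decreases
```

A non-empty `regressions` list would point to the positions in the sequence where a text is less complex than the one before it, which may or may not be intended depending on the instructional design.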
Applying the answer
In general:
- The sentence structure complexity of texts should increase over the school year and from year to year.
- The sentence structure complexity should be customized to the instructional needs of the educator:
- Is this an anchor text?
- Is the educator putting a particular focus on sentence structure with this text?
- Alternatively, is the focus supposed to be on another component of literacy, such as vocabulary?