Context and Problem Framing
Every day, teachers work to understand what their students know, where they are struggling, and how best to support them. While digital learning tools now generate more data than ever, translating that information into meaningful instructional action remains time-consuming and difficult. The challenge is not access to data; it is having the time and support to use it effectively.
ASSISTments was built to help teachers deliver purposeful practice and gain actionable insights that advance student learning. Yet even with rich, real-time data at the problem and standard level, teachers often face a familiar tension: they can see what happened, but deciding what to do next requires time they may not have.
This challenge has real implications for equity. Teachers serving students who need the most support often face the greatest data complexity while also navigating the tightest instructional time constraints. Closing the gap between insight and action is therefore not just a usability challenge; it is central to ensuring that every student is seen, supported, and successful in math.
Key system failures this R&D addresses
- Data-to-action gap: Research from the Data Quality Campaign and others consistently identifies teacher time as the binding constraint on data-driven instruction. Teachers report knowing data matters; they report not having time to act on it.
- Generic AI outputs fall short for experienced teachers: Early AI summarization tools surface patterns that high-frequency teachers already know. Power users need curriculum-connected, predictive intelligence — not pattern recognition.
- Equity implications of the status quo: Teachers serving students furthest from grade-level proficiency, disproportionately in under-resourced communities and schools serving historically marginalized students, face the greatest data complexity with the least planning time.
“I know the data is in there. I just don’t always have time to look at it and figure out what to do next.” — Teacher, ASSISTments user research
R&D Objectives
The AIDA initiative was designed to help teachers move from insight to action more quickly and confidently. During this phase, our goals were to:
1. Design an AI-powered assistant that translates assignment data into clear, plain-language guidance teachers can immediately use.
2. Understand whether AI-supported insights influence teacher behavior — specifically, whether teachers assign practice more frequently after receiving actionable summaries.
3. Ensure that AI-generated guidance remains accurate, trustworthy, and instructionally meaningful as usage scales.
These objectives reflect a broader aim: supporting teachers with tools that enhance — not replace — their professional judgment.
Scope and Activities
AIDA v1 R&D was organized into four main activity clusters, each contributing to one or more of the objectives above.
Activity 1: Prompt Engineering and Quality Benchmarking
We developed and iterated on the AIDA prompt through structured internal testing prior to any teacher-facing deployment. The prompt instructs the model to identify key performance patterns, surface common error types, and suggest instructional next steps within a 200–300 word output readable in under 90 seconds. We established an internal quality rubric with three criteria: accuracy relative to underlying data, specificity sufficient for instructional action, and absence of hallucinated claims.
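To make this concrete, below is a minimal sketch of what the prompt scaffolding and a first automated rubric gate could look like. The template wording, the function names, and the `report_data` placeholder are our illustration rather than the production prompt; only the 200–300 word target and the three rubric criteria come from the description above.

```python
# Illustrative sketch only; the production AIDA prompt is not reproduced here.
AIDA_PROMPT_TEMPLATE = """\
You are an instructional assistant summarizing an ASSISTments assignment report.
Using only the data provided below, write a 200-300 word summary that:
1. Identifies the key performance patterns across the class.
2. Surfaces the most common error types.
3. Suggests concrete instructional next steps.
Do not state anything that cannot be verified from the data.

Assignment data:
{report_data}
"""


def build_summary_prompt(report_data: str) -> str:
    """Fill the template with a serialized assignment report (hypothetical format)."""
    return AIDA_PROMPT_TEMPLATE.format(report_data=report_data)


def passes_length_check(summary: str) -> bool:
    """Automated gate for the 200-300 word target; the accuracy, specificity,
    and no-hallucination criteria still require rubric review."""
    return 200 <= len(summary.split()) <= 300
```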
Activity 2: Beta Cohort Design and Deployment
We launched AIDA to a selected cohort of active ASSISTments teachers during the fall 2025 semester. Teachers adopted the tool across three months (October through December 2025), with November representing the largest single-month cohort. The beta was designed for learning, not experimental control — our priority was understanding how teachers engage with AI summaries in practice.
Activity 3: Behavioral Outcome Measurement and Segmentation Analysis
Our primary outcome measure was assignment creation behavior: the number of assignments a teacher created in the 30 days following their first AIDA use, compared to the 30 days prior. Because assignment frequency varies substantially across our user base, we segmented the beta cohort by baseline assignment rate into three groups: Casual Teachers (1–8 assignments per month), Occasional Teachers (9–21), and Power Teachers (22 or more).
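To illustrate the measure, the sketch below computes the pre/post counts and segment labels just described. Field names and helpers are hypothetical; the 30-day windows and the 1–8 / 9–21 / 22+ cutoffs are taken from the text.

```python
from datetime import datetime, timedelta


def segment(baseline_assignments_per_month: float) -> str:
    """Map a teacher's baseline monthly assignment rate to a segment label."""
    if baseline_assignments_per_month >= 22:
        return "Power"
    if baseline_assignments_per_month >= 9:
        return "Occasional"
    return "Casual"


def pre_post_counts(assignment_dates: list[datetime],
                    first_aida_use: datetime) -> tuple[int, int]:
    """Count assignments created in the 30 days before and after first AIDA use."""
    window = timedelta(days=30)
    pre = sum(first_aida_use - window <= d < first_aida_use
              for d in assignment_dates)
    post = sum(first_aida_use <= d < first_aida_use + window
               for d in assignment_dates)
    return pre, post
```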
Activity 4: Transparency and Responsible AI Practices
All AIDA deployments included an in-product disclosure indicating AI generation, with a link to our AI Methods and Approaches documentation. Teachers were invited to provide feedback on summary quality through short in-product surveys, a core part of our quality improvement process.
Target Users & System Levers
AIDA is designed to meet teachers where they are. We recognize that teachers engage with data differently depending on experience, workload, and classroom needs. Rather than assume a single solution fits all, we focus on supporting teachers across a spectrum of use:
- Some teachers benefit from confidence-building summaries that make data more approachable.
- Others value concise insights that save time in reviewing results.
- Highly engaged teachers seek deeper, curriculum-connected intelligence that supports long-term planning.
This learning reinforced an important principle: effective AI should adapt to teacher needs while keeping educators firmly in control of instructional decisions. See the table for a more detailed breakdown.

Evidence, Learning, and Data Use
Our goal was not only to build an AI tool, but to understand how it could meaningfully support teachers’ daily work. The AIDA beta allowed us to observe how educators interacted with AI-generated insights in real classroom contexts. These findings helped us better understand where AI adds value, where it falls short, and how we can design more supportive, teacher-centered experiences moving forward.
Objective O1 — Development: How did we build?
AIDA v1 was deployed to 246 teachers across experimentation cycles from October through December 2025. It was embedded in the assignment report view teachers already use and produced summaries within our 200–300 word target.

Behavior Change: Did AIDA change how teachers assign?
The answer depends critically on which teachers you ask. The aggregate picture is mixed; the segmented picture is revealing.

Casual Teachers showed a +134% increase in assignment creation, averaging 3 assignments per month before adoption and 7 after. Occasional Teachers saw a modest +13% gain (15→17 per month), staying engaged with a consistent practice routine. Power Teachers showed a small −9% dip (45→41 per month), which should be read not as a failure but as a ceiling effect: these teachers already assign at high rates and need deeper, curriculum-connected intelligence, not pattern summaries they can already see themselves.
Two of three teacher segments showed meaningful gains after accounting for school-out days. The key learning is that different teacher types need fundamentally different AI: confidence-building summaries for Casual Teachers, quick insights for Occasional Teachers, and curriculum-connected intelligence for Power Teachers.
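Read in product terms, this learning suggests a routing layer: the same report data feeding different experiences by segment. The sketch below is purely illustrative; the experience names are hypothetical, and v1 served a single summary to all segments.

```python
# Hypothetical routing by segment; AIDA v1 had no such layer.
EXPERIENCES = {
    "Casual": "confidence_building_summary",       # approachable, plain language
    "Occasional": "quick_insight_summary",         # time-saving highlights
    "Power": "curriculum_connected_intelligence",  # deeper, planning-oriented
}


def select_experience(segment_label: str) -> str:
    """Choose the AI experience variant for a teacher's segment."""
    return EXPERIENCES.get(segment_label, "quick_insight_summary")
```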
Quality at Scale: Did output quality hold?
Yes. Our internal quality review found no meaningful degradation in summary accuracy or instructional specificity as the beta cohort scaled from October through November. This is a meaningful proof point against the common concern that LLM-powered tools become unreliable at scale.
How this evidence is shaping the next phase
The AIDA v1 findings highlighted that supporting teachers effectively requires more than summarizing data. Going forward, our work focuses on:
- Tiered AI experiences for v2: The segmentation finding has directly shaped our v2 product architecture. We are building differentiated AI experiences, not a single summary scaled across user types.
- IM curriculum integration as the critical unlock: The most significant technical investment in v2 is integrating Illustrative Mathematics curriculum metadata (unit sequences, prerequisite skill relationships, and standard-level progressions) directly into the AI prompt context; a rough sketch follows this list.
- Grouping and instructional recommendations as the next layer: v2 will generate AI-powered student grouping recommendations and differentiated instructional next steps, moving from “here is what your data shows” to “here are the three groups you should form and what each needs.”
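As a rough sketch of how these two ideas could combine, assuming hypothetical names throughout (the class, fields, and prompt wording below are our illustration, not the v2 implementation):

```python
from dataclasses import dataclass


@dataclass
class IMCurriculumContext:
    """Hypothetical container for Illustrative Mathematics metadata."""
    unit: str                        # e.g. "Grade 7, Unit 4"
    prerequisite_skills: list[str]   # skills the assessed standard builds on
    standard_progression: list[str]  # where the standard leads next


def build_v2_prompt(report_data: str, ctx: IMCurriculumContext) -> str:
    """Prepend curriculum metadata so next steps can be curriculum-coherent,
    then ask for the grouping output described above."""
    return (
        f"Curriculum unit: {ctx.unit}\n"
        f"Prerequisite skills: {', '.join(ctx.prerequisite_skills)}\n"
        f"Standard progression: {', '.join(ctx.standard_progression)}\n\n"
        "Using only the assignment data below, recommend three student "
        "groups and a differentiated next step for each, grounded in the "
        "curriculum context above.\n\n"
        f"Assignment data:\n{report_data}"
    )
```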
Planned Scale Through Year 2
- Reach: We will scale to all district partners next year.
- Iterate: Drawing on the results of our first experimentation cycle and user experience research, we will continue to iterate on our AI Data Assistant, adding curriculum awareness and coherence, “one-click” small groups with a relevant next assignment, and a version for instructional coaches.
Questions: contact Britt Neuhaus, Co-Executive Director — britt.neuhaus@assistments.org or Camila Franco, VP of Product — camila.franco@assistments.org
