SPYRAL AI
an AIRM Initiative
AI Learning: Without Tools vs. With NEP Workbench
Synthetic Matched-Cohort Study — 1,000 Students · 10 Schools · 12 Months
Innovation Incubation and Startup Foundation
Kamla Nehru Institute of Technology, Sultanpur UP 228001
tryspyral.com | +91 6387890310 | info@tryspyral.com
School Performance: Baseline vs. After 12 Months (%)
Circle = Control school · Diamond = Treatment school · Size ∝ student count · Reference lines: No improvement (y=x), Avg without-tools (+5pp), Avg with-tools (+16pp)
Score by Student Quartile After 12 Months (%)
Key insight: Platform lifts bottom quartile by +20pp vs +3pp without tools — strongest impact on weakest students.
Full Outcome Summary — Control vs. Treatment Group
| Metric | Without Tools (Control) | With NEP Workbench (Treatment) | Difference |
See it live in your school
info@tryspyral.com · tryspyral.com · Request a 30-day pilot
Book a Free Demo →
Synthetic data modelled on EdTech adoption benchmarks (ASER 2023, NASSCOM EdTech India 2024). Groups are matched on baseline performance. School names are illustrative. Results may vary by school infrastructure and adoption quality. This report is produced for marketing and planning purposes and does not constitute a peer-reviewed academic study.
Data Sources & Methodology
tryspyral.com | March 2026
Published Benchmarks Used
- ASER 2023 — Annual Status of Education Report. Pratham Education Foundation, New Delhi, 2024. (National baseline scores; subject-wise learning outcome data.)
- NASSCOM & Omidyar Network — Ed-Tech in India: Status and Way Forward. 2023. (Engagement benchmarks; platform adoption trajectories.)
- Ministry of Education, GoI — National Education Policy 2020. Government of India. (Competency framework; digital literacy standards.)
- NCERT — National Curriculum Framework for School Education (NCF-SE) 2023. (Curriculum competency taxonomy; skill category definitions.)
- Central Square Foundation — State of the Sector Report: Online Education in India 2022. (Assignment completion rates; student engagement baselines.)
- World Bank — Catch-Up: Helping Students Learn After COVID-19 School Disruptions. 2022. (Learning gap data; bottom-quartile improvement benchmarks.)
- VanLehn, K. (2011). "The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems." Educational Psychologist, 46(4), 197–221. (Core basis for AI tutoring effect sizes; 0.4–0.76σ range.)
- Bloom, B. S. (1984). "The 2 Sigma Problem." Educational Researcher, 13(6), 4–16. (Theoretical ceiling for personalised AI-assisted learning gains.)
Modelling Methodology
This is a synthetic matched-cohort study. Two groups of 500 students each (10 schools total) were constructed with matched baseline parameters. The groups are not drawn from live platform data.
Control group (no tools) score trajectory follows ASER 2023 national average growth rates for urban private schools: approximately +4–6pp over a 12-month academic year.
Treatment group (with NEP Workbench + AI Workbench) score trajectory is modelled using effect sizes from VanLehn (2011) applied to an Indian K-12 context, calibrated against NASSCOM 2023 EdTech adoption curves, and assumes full platform utilisation.
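As a concrete illustration of this calibration step, a standardised effect size (in sigma) can be converted to percentage points once a score standard deviation is assumed. This is a minimal sketch; the 12pp SD below is an illustrative assumption, not a figure taken from this report:

```python
# Convert a standardised effect size (in sigma) to a percentage-point gain,
# given the standard deviation of the score distribution in percentage points.
# The 12pp SD used below is an illustrative assumption.
def effect_size_to_pp(effect_sigma: float, score_sd_pp: float) -> float:
    return effect_sigma * score_sd_pp

# VanLehn (2011) reports tutoring effect sizes of roughly 0.4-0.76 sigma.
low_pp = effect_size_to_pp(0.40, 12.0)   # 4.8 pp
high_pp = effect_size_to_pp(0.76, 12.0)  # 9.12 pp
print(f"Implied gain over control: {low_pp:.1f}-{high_pp:.1f} pp")
```

Under these assumptions, the VanLehn range implies roughly a 5-9pp gain over control, which the calibration then adjusts for adoption trajectories.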
Engagement & completion data modelled from CSF 2022 baseline adoption patterns with platform-driven uplift applied at rates consistent with comparable EdTech interventions.
Quartile analysis — differential gains (bottom quartile benefiting most) are consistent with findings from World Bank (2022) and VanLehn (2011) on adaptive tutoring systems.
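The cohort construction described above can be sketched as a small simulation. This is a minimal illustration, not the actual modelling code: the baseline distribution (mean 55pp, SD 12pp) and the per-student gain noise are assumptions, while the +5pp control and +16pp treatment average gains follow the figures quoted in this report.

```python
import random
import statistics

random.seed(42)  # reproducible synthetic draw

def simulate_cohort(n=500, baseline_mean=55.0, baseline_sd=12.0,
                    mean_gain=5.0, gain_sd=2.0):
    """Draw baseline scores (%) and apply a 12-month gain in percentage points."""
    baselines = [random.gauss(baseline_mean, baseline_sd) for _ in range(n)]
    finals = [min(100.0, b + random.gauss(mean_gain, gain_sd)) for b in baselines]
    return baselines, finals

def avg_gain(baselines, finals):
    """Mean per-student improvement in percentage points."""
    return statistics.mean(f - b for b, f in zip(baselines, finals))

# Control: ASER-style national growth, ~+5pp over the academic year.
ctrl_base, ctrl_final = simulate_cohort(mean_gain=5.0)

# Treatment: the report's +16pp average under full-adoption assumptions.
treat_base, treat_final = simulate_cohort(mean_gain=16.0)

print(f"Control gain:   {avg_gain(ctrl_base, ctrl_final):+.1f} pp")
print(f"Treatment gain: {avg_gain(treat_base, treat_final):+.1f} pp")
```

A fuller model would also vary the gain by baseline quartile to reproduce the differential uplift described above (bottom quartile gaining most); the uniform gain here is a deliberate simplification.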
Important:
This report illustrates projected impact under full-adoption conditions. Actual outcomes depend on school infrastructure, teacher training completion, and student engagement. Results do not constitute a guarantee. Control and treatment school names are illustrative only.
AI Learning Comparison Report — March 2026 | tryspyral.com | info@tryspyral.com
Synthetic projection study modelled on publicly available EdTech benchmarks. Not a peer-reviewed study.