How to Generate Instant CER Student Feedback
Using ChatGPT, Claude, or Gemini
This one-page guide shows how teachers and coaches can use everyday AI tools to generate meaningful, curriculum-aligned feedback on students' Claim-Evidence-Reasoning (CER) responses, without giving up instructional judgment or adopting new platforms.
This approach was explored in a recent Coaching Science Lab with classroom teachers, instructional coaches, and district leaders using real student work from OpenSciEd classrooms.
What problems this helps solve
Providing high-quality, timely feedback on CERs is powerful—but time-intensive.
Teachers often want to:
- respond to student thinking, not just correctness
- surface strengths and next steps tied to evidence and reasoning
- use patterns in student work to guide instruction
AI can help with the first pass—so teachers can focus on decisions, relationships, and next-day teaching.
What you need
You can do this today with:
- Student work (text, drawings, photos, or screenshots)
- Student instructions (the prompt students responded to)
- A CER framework or rubric
- Any general-purpose AI tool, such as:
  - ChatGPT
  - Claude
  - Gemini
No special software required.
The basic workflow
1. Start with the student task
   Paste the question, prompt, or instructions given to students.
2. Add reviewer guidance
   Tell the AI how to look at CER:
   - what counts as a strong claim
   - what the evidence should reference
   - what the reasoning should explain
3. Include the student work
   Paste text, or describe/upload an image or drawing.
4. Ask for formative feedback
   Request feedback that highlights:
   - strengths in the claim, evidence, and reasoning
   - questions or suggestions that would help the student improve
5. Review and revise
   Teachers edit tone, accuracy, and instructional intent before sharing.
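If you find yourself repeating this workflow across many responses, the steps above can be sketched as a small, reusable prompt builder. This is an illustrative sketch only: the function name `build_cer_prompt` and the wording of the guidance text are assumptions, not part of any tool's API, and the assembled prompt is meant to be pasted into ChatGPT, Claude, or Gemini by hand.

```python
# Illustrative sketch of the workflow above as a reusable prompt builder.
# The function name and guidance wording are examples to adapt, not a
# fixed recipe or any tool's official API.

def build_cer_prompt(task: str, reviewer_guidance: str, student_work: str) -> str:
    """Assemble one prompt you can paste into ChatGPT, Claude, or Gemini."""
    return "\n\n".join([
        # Frame the AI's role: formative feedback, not grading.
        "You are helping a teacher give formative feedback on a "
        "Claim-Evidence-Reasoning (CER) response. Do not grade or score.",
        # Step 1: the student task.
        f"Student task:\n{task}",
        # Step 2: reviewer guidance for looking at the CER.
        f"How to look at the CER:\n{reviewer_guidance}",
        # Step 3: the student work itself.
        f"Student response:\n{student_work}",
        # Step 4: ask for strengths plus questions or suggestions.
        "Please highlight strengths in the claim, evidence, and reasoning, "
        "then offer one or two questions or suggestions that would help "
        "the student improve.",
    ])

# Example with hypothetical classroom content:
prompt = build_cer_prompt(
    task="Why do we see different phases of the Moon?",
    reviewer_guidance=(
        "- A strong claim answers the question directly.\n"
        "- Evidence should reference observations or data from class.\n"
        "- Reasoning should explain how the evidence supports the claim."
    ),
    student_work="The Moon changes shape because of Earth's shadow...",
)
print(prompt)
```

Step 5 stays with the teacher: whatever the AI returns, review and revise it before anything reaches a student.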
What the AI is (and isn’t) doing
AI helps with:
- noticing patterns
- drafting feedback language
- connecting feedback to evidence in student work
Teachers remain responsible for:
- instructional decisions
- tone and relationships
- what feedback students actually receive
The AI is a thinking partner, not the teacher.
Why this works for coaching and planning
When used across a set of student responses, this same approach can:
- surface common strengths and gaps
- identify leverage points for next-day instruction
- support coaching conversations grounded in evidence
Instead of starting with scores, teams start with student thinking.
Try it yourself
To get started, we’ve shared a lightweight resource:
- Sample AI Prompt (copy & adapt)
→ CER Feedback AI Prompt Example
You can experiment with one student response—or a whole class set—in under 10 minutes.
Want to go deeper?
We’re continuing to host Coaching Science Labs where educators and leaders explore real student work together, test ideas, and reflect on what responsible AI use can look like in practice.
If you’re curious, you’re invited to explore alongside us.
