Student Assessment and Grading Agent
Author: Venkata Sudhakar
Automated student assessment using Gemini reduces the grading burden on educators while providing consistent, detailed feedback at scale. Unlike simple answer matching, Gemini understands context, partial credit, and alternative valid answers - making it suitable for descriptive answers, essay questions, and code submissions where fixed answer keys fall short.
The assessment agent grades student responses against a marking scheme, assigns marks per question, identifies knowledge gaps, and generates personalised feedback. It also produces a class-level report showing which topics need re-teaching based on aggregate performance.
The example below shows a student assessment agent used by a Mumbai-based training institute: it grades a Python programming assignment and returns per-student feedback with topic gap analysis.
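The original code for the agent is not reproduced here, but the grading step can be sketched as follows. This is a minimal illustration, not the institute's actual implementation: the helper names (build_grading_prompt, parse_grading_response) are hypothetical, and the Gemini call shown in the comment assumes the google-generativeai package and a configured API key.

```python
import json

def build_grading_prompt(question, marking_scheme, max_marks, student_answer):
    """Assemble a single-question grading prompt for the model (illustrative)."""
    return (
        "You are a strict but fair grader for a Python programming assignment.\n"
        f"Question: {question}\n"
        f"Marking scheme: {marking_scheme}\n"
        f"Maximum marks: {max_marks}\n"
        f"Student answer:\n{student_answer}\n"
        "Respond with JSON only, in the form "
        '{"marks_awarded": <int>, "feedback": "<str>", "correct": <bool>}.'
    )

def parse_grading_response(raw_text, max_marks):
    """Parse the model's JSON reply, clamping marks into [0, max_marks]."""
    result = json.loads(raw_text)
    result["marks_awarded"] = max(0, min(int(result["marks_awarded"]), max_marks))
    return result

# The model call itself would look roughly like this (sketch only):
#
#   import google.generativeai as genai
#   model = genai.GenerativeModel("gemini-1.5-flash")
#   reply = model.generate_content(
#       build_grading_prompt(question, scheme, 5, answer))
#   graded = parse_grading_response(reply.text, 5)
```

Asking for a fixed JSON shape and clamping the returned marks keeps the agent robust when the model over- or under-shoots the marking range.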
On startup the agent confirms it is ready:
Assessment agent ready
Grading a sample student submission then produces the following per-question output:
Q1 [Python lists]:
marks_awarded: 5
feedback: Correct use of append() in a loop to build the list. A list comprehension
would be more Pythonic but the approach is valid and readable.
correct: true
Q2 [File I/O]:
marks_awarded: 2
feedback: File is opened and read correctly but the context manager (with statement)
is not used. The file may not close properly if an exception occurs.
correct: false
Q3 [Functions and recursion]:
marks_awarded: 10
feedback: Excellent recursive implementation with correct base case and recursive step.
Clean, readable, and handles n=0 correctly.
correct: true
Total: 17 / 20 marks
To generate a class-level gap report, run the agent across all student submissions and aggregate marks by topic. Topics where average marks fall below 60% of the maximum indicate areas requiring re-teaching. For the training institute, this shifts the instructor role from marking to teaching - Gemini handles the assessment while instructors act on the gap analysis to improve learning outcomes.
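The aggregation described above can be sketched in a few lines. This assumes a simple per-student result structure (a list of dicts with topic, marks_awarded, and max_marks per question), which is an illustrative assumption rather than the agent's actual data model.

```python
from collections import defaultdict

RETEACH_THRESHOLD = 0.60  # topics averaging below 60% of max need re-teaching

def topic_gap_report(all_results):
    """Aggregate marks by topic across all students and flag weak topics.

    all_results: list of per-student results; each is a list of dicts with
    "topic", "marks_awarded", and "max_marks" keys (assumed structure).
    """
    awarded = defaultdict(int)
    possible = defaultdict(int)
    for student in all_results:
        for q in student:
            awarded[q["topic"]] += q["marks_awarded"]
            possible[q["topic"]] += q["max_marks"]
    return {
        topic: {
            "average_pct": round(100 * awarded[topic] / possible[topic], 1),
            "reteach": awarded[topic] / possible[topic] < RETEACH_THRESHOLD,
        }
        for topic in possible
    }
```

Run over the whole class, any topic flagged with reteach: True goes straight onto the instructor's revision list.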