This blog represents my personal views and observations on how AI platforms influence students in formative assessments and in their learning process. Artificial Intelligence (AI) tools such as ChatGPT, Microsoft Copilot, and Perplexity are becoming increasingly common in higher education. Students rely on them for explanations, quick problem-solving, and revision support. This makes it crucial for educators and institutions to understand how reliably these systems perform, especially on formative assessments that involve numerical, conceptual, and diagram-based questions.
As part of my study, I examined how well different AI platforms could solve real quiz questions from three subjects: MAS236 (Machine Design), MAS237 (Hydraulics), and MA-178 (Mathematics). The goal was not only to measure how often AI produces correct answers, but also to understand why the answers go wrong and whether prompt structure or input method influences accuracy.
This blog presents the motivation, methodology, results, and insights from this experiment.
1. Background and Motivation:
Formative assessments are meant to help students learn, not just earn marks. However, with AI becoming powerful and easily accessible, a major concern arises:
“Can students rely solely on AI to solve numerical engineering problems—and if they do, will they actually learn?”
Engineering subjects often require:
• accurate interpretation of figures,
• use of standard tables and textbook values,
• application of specific formulas,
• precise numerical computation with decimal-level accuracy.
This study helps show whether AI can truly replace or automate this kind of reasoning, or whether it works better as a supporting tool that guides students while still requiring manual calculation and conceptual understanding.
2. Structure of the Quiz Questions:
The testing involved quizzes from three different subjects:
Machine Design:
• 5 quizzes
• Unlimited attempts
• Numerical and figure-dependent questions
• Required data from standard design tables (e.g., factor of safety, material properties)
Hydraulics:
• 5 quizzes
• Unlimited attempts
• Numerical questions involving diagrams, flow measurements, and pressure calculations
Mathematics:
• 1 quiz
• 1 question per student, each with different numeric values
• Slightly dependent on figures; mostly formula-based calculation
Sampling:
A total of 110 questions were selected:
• 10 questions from each quiz × 11 quizzes = 110 questions
• Representing a wide mix of:
o diagram-based questions
o formula-based questions
o concept-based problems
o problems requiring lookup tables from textbooks
3. AI Platforms Used:
The same set of questions was input into multiple AI tools:
• ChatGPT
• Microsoft Copilot
• Perplexity AI
The objective was to compare responses across platforms and observe consistency.
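This cross-platform comparison can be sketched as a simple per-platform accuracy tally. The code below is a minimal illustration of that bookkeeping, not the study's actual tooling; the record format and the sample gradings are assumptions made for the example.

```python
# Minimal sketch: tally per-platform accuracy over graded AI responses.
# The sample records below are illustrative placeholders, not real study data.
from collections import defaultdict

def accuracy_by_platform(results):
    """results: iterable of (platform, is_correct) pairs."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for platform, is_correct in results:
        total[platform] += 1
        correct[platform] += int(is_correct)
    # Fraction of correct answers per platform.
    return {p: correct[p] / total[p] for p in total}

# Hypothetical gradings for four answers:
sample = [
    ("ChatGPT", True), ("ChatGPT", False),
    ("Microsoft Copilot", True), ("Perplexity", False),
]
print(accuracy_by_platform(sample))
# → {'ChatGPT': 0.5, 'Microsoft Copilot': 1.0, 'Perplexity': 0.0}
```

In a real evaluation the same tally would be kept per question type (diagram-based, formula-based, concept-based, table-lookup) as well, so that consistency can be compared along both axes.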
