How AI Tools Help and Fail in Engineering Assignments at UiA

AI tools are becoming a common part of student workflows, but they do not always perform as expected. Through my work as a Student Assistant at UiA, I tested how LLMs handle real engineering assignments. The results show clear technical limitations — and new opportunities for teachers.

Artificial Intelligence is rapidly changing how engineering students approach assignments and reports. As a Student Assistant in Mechatronics at UiA, I support bachelor-level courses in Mathematics, Hydraulics and Machine Design. This semester, I explored how students might use AI tools in Canvas assignments — and where Large Language Models (LLMs) fail from a technical perspective.

During my first week, I tested several assignment types: MCQs, numerical problems, short answers, and mechanical questions based on diagrams and PDFs. I uploaded the tasks to advanced AI systems such as ChatGPT Pro, Gemini Advanced, Claude and Perplexity Pro. At first, their answers were inconsistent: numerical expressions were interpreted incorrectly, and multi-image PDFs caused errors in their visual recognition modules. However, after I provided several similar questions within the same conversation, the answers improved: transformer-based LLMs condition on the accumulated session context, so earlier questions and corrections refine later predictions.
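For the numerical problems, checking a model's answer against the known result is straightforward to automate. Below is a minimal sketch of such a checker; the regular expression, the "last number is the final answer" assumption, and the relative tolerance are my own illustrative choices, not part of the actual grading setup at UiA:

```python
import math
import re

def check_numeric_answer(model_answer: str, expected: float, rel_tol: float = 1e-2) -> bool:
    """Return True if the last number in the model's answer matches the
    expected value within a relative tolerance."""
    # Extract candidate numbers (handles forms like "F = 12.5 kN" or "-3.2e3").
    numbers = re.findall(r"-?\d+(?:\.\d+)?(?:[eE]-?\d+)?", model_answer)
    if not numbers:
        return False
    # Assume the model states its final result last.
    value = float(numbers[-1])
    return math.isclose(value, expected, rel_tol=rel_tol)
```

For example, `check_numeric_answer("The required force is F = 12.5 kN", 12.5)` returns True, while an answer containing no numbers at all returns False. A relative rather than absolute tolerance makes the check unit-scale independent, which matters when answers range from pascals to meganewtons.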

Despite these improvements, several technical weaknesses remained. LLMs struggle with engineering images, especially diagrams combining multiple figures, symbols, cross-sections or hydraulic circuits. Vision modules in current models are trained on general images, not domain-specific mechanical drawings, so they often misinterpret arrows, forces, or cross-sectional views. In story-based questions, common in hydraulics and mechanics, the models follow the narrative too literally, leading to hallucinations and incorrect assumptions. These limitations stem from the statistical nature of deep-learning models, which perform no physical simulation and do not “understand” engineering principles.

Instead of banning AI, teachers can use these weaknesses constructively. When assignments combine mixed diagrams, stepwise calculations, non-standard symbols or contextual problem descriptions, AI tools can provide partial help but cannot generate complete solutions. Students still learn reasoning and key concepts while AI acts as a support tool, not a shortcut.

Understanding how AI behaves — its strengths and its limitations — helps both students and teachers create a learning environment where digital tools enhance education instead of undermining it. In engineering disciplines, critical thinking and real problem-solving remain essential, and AI should serve as an assistant rather than a replacement.