12 April 2024
At the Faculty of Electrical Engineering and Computer Science at Coburg University of Applied Sciences, Prof. Volkhard Pfeiffer has been exploring how digital competence-oriented examinations can be run with the Moodle quiz activity and CodeRunner.
Pfeiffer teaches computer science, is dean of studies and degree programme director, and has summarized his findings here. A degree in computer science is fundamentally competence-oriented, including in programming, which involves mastering programming-language concepts and applying them to problems.
At the same time, computer scientists must have personal and social skills, for example to develop solutions in teams.
These skills are located at different taxonomy levels.
A good exam therefore measures competencies at several taxonomy levels.
Moodle question types are used for these tasks, and the exam is run as a Moodle quiz with CodeRunner: instead of having students write program code on paper in a written exam, CodeRunner directly compiles, executes and tests the code that examinees submit as their solution to a programming task.
This type of exam therefore tests the required skills far more realistically than a paper-based exam and also reflects the methodical approach computer scientists take to solving programming problems.
The exam is largely graded automatically on the basis of all test and answer results.
The stored test cases provide students with direct feedback as to whether their answer and programming solution are correct.
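As an illustration (not taken from the original article), a minimal sketch of how such a question might look: in a hypothetical Python3 CodeRunner question, the student submits a function, and each stored test case is appended to the submission, with its printed output compared against the expected output. The function name and test values below are invented for this example.

    # Hypothetical task: implement is_prime(n).
    # A student's submitted solution:
    def is_prime(n):
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True

    # CodeRunner appends each stored test case to the submission and
    # compares the printed output with the expected output:
    print(is_prime(7))    # expected output: True
    print(is_prime(12))   # expected output: False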
Test cases stored in this way also form the basis for the automated assessment of the exam.

Preparation for the digital exam

To cover the different taxonomy levels, several question types were defined, such as multiple choice, matching and CodeRunner.
Such a digital examination requires technical and organizational preconditions: on the one hand, the IT infrastructure must be reliable, powerful and available (e.g. a stable Wi-Fi network); on the other hand, large rooms with appropriate equipment, such as a sufficient number of power sockets, are needed, especially for large cohorts.
A PC is provided for students who do not have a functioning laptop.
In addition, a digital examination must be integrated into the university's IT examination processes: only participants registered for the examination are admitted to the examination course, the examination must be conducted on a separate examination server, grades must be reported to the university's IT system, and the examination must be archived digitally.

Implementation of the digital exam

To prevent attempts at cheating, the exam is conducted exclusively in Safe Exam Browser, which locks down the laptop both internally and externally.
In addition, Safe Exam Browser allows specific configuration of which resources (e.g. particular URLs) and/or third-party applications are permitted during the exam.
Common programming development environments (such as Eclipse) are used in the exercises but are not permitted during the exam, as they always allow Internet access.
For multiple-choice and matching question types, automatic assessment is standard and takes guess-correction factors into account, for example.
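One common guess-correction scheme, shown here only as an illustrative assumption rather than the exact scheme used in Coburg, deducts a fraction of a point for each wrong answer so that pure guessing yields an expected score of zero:

    # Illustrative guess correction for single-choice questions with
    # k options each: a wrong answer costs 1/(k - 1) points.
    def corrected_score(num_correct, num_wrong, k):
        raw = num_correct - num_wrong / (k - 1)
        return max(0.0, raw)  # scores are typically clamped at zero

    # Example: 8 correct and 4 wrong answers on four-option questions
    print(corrected_score(8, 4, 4))  # 8 - 4/3, i.e. about 6.67 points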
With CodeRunner tasks, the central idea is to award points for each successful test case. Not all test cases are visible to the examinee, in order to prevent solutions that merely hard-code the expected test outputs without solving the actual task.
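As a sketch of this idea (again an assumption, not taken from the article), per-test-case scoring with hidden cases might be modelled as follows; the marks, visibility flags and the is_prime example are hypothetical:

    # Each test case carries a mark and a visibility flag; hidden cases
    # count towards the score, but their details are not shown to students.
    test_cases = [
        {"call": "is_prime(7)",    "expected": "True",  "mark": 1, "hidden": False},
        {"call": "is_prime(12)",   "expected": "False", "mark": 1, "hidden": False},
        {"call": "is_prime(7919)", "expected": "True",  "mark": 2, "hidden": True},
    ]

    def grade(results):
        """results: one boolean per test case, True if the case passed."""
        earned = sum(tc["mark"] for tc, ok in zip(test_cases, results) if ok)
        total = sum(tc["mark"] for tc in test_cases)
        return earned / total  # fraction of the question's points awarded

    print(grade([True, True, False]))  # 2 of 4 marks passed -> 0.5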
However, CodeRunner can only evaluate functional correctness: if checking a learning objective requires a specific solution approach, this cannot be assessed automatically and must be corrected manually.

Findings and outlook

A necessary prerequisite for learning objectives at higher taxonomy levels is learning the syntax of a programming language.
Syntax errors in the solution mean that no test cases can be executed, so the task is awarded zero points.
It became clear that the number of syntax errors increases significantly for more complex tasks under exam conditions and time pressure, even though the sketched solution would have earned at least partial credit in a paper-based assessment.
This is another reason why digital assessment cannot be completely automated, but requires individual manual corrections.
In addition, the same quality criteria must be applied for an automated assessment as for a paper assessment.
This has further consequences: tasks that, in a paper examination, consist of consecutive subtasks must be broken down into independent individual tasks for automated assessment, so that students can still score points for partial solutions, as sketched below.
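As a hypothetical illustration of this decoupling: on paper, subtask (b) might build on the result of subtask (a); for automated assessment, (b) is rephrased so that its input is given in the task itself, and a student who fails (a) can still earn the points for (b).

    # Task (a): parse a comma-separated line into a list of integers.
    def parse_line(line):
        return [int(x) for x in line.split(",")]

    # Task (b), decoupled: compute the average of a list of integers.
    # The list is given in the task text instead of being taken from (a).
    def average(values):
        return sum(values) / len(values)

    print(parse_line("3,5,7"))  # [3, 5, 7]
    print(average([3, 5, 7]))   # 5.0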
Under the specified technical and didactic framework conditions, the examination involves considerably more organizational effort, as examinees have to be distributed across several rooms.
Exam results are comparable to the previous paper exams.
In summary, the switch from a paper exam to a digital exam has proven to be worthwhile under the conditions mentioned.
Direct feedback tools help students to learn, although the recorded metrics (average grade, failure rate) cannot currently provide measurable evidence of this.
However, the cost-benefit ratio for switching from paper to digital can only be justified for larger cohorts.
In the future, the competencies of these modules will also be tested by means of a digital examination, and the examination tasks will be developed further with regard to measuring learning objectives and improving the automated assessment.

The text was published in DUZ Wissenschaft & Management, issue 02.2024, www.duz.de.