Online exams are the future of assessments, offering greater flexibility, accessibility, and accuracy than traditional testing methods. Online tests can be administered on a massive scale without the need for test centers or travel, making it easier and more cost-effective for institutions and corporations to reach a wider audience. They also allow various technology-based tools to be integrated, providing a more engaging and interactive test-taker experience.
However, with the rise of online testing comes the challenge of ensuring the honesty of these exams. Students may resort to unauthorized methods to pass online tests, threatening the validity of the results. The emergence of AI language models like ChatGPT has raised further concerns about the potential for cheating and the need for measures to protect the integrity of online assessments.
ChatGPT is an AI chatbot developed by OpenAI based on a large language model (LLM). It is designed to generate human-like responses to natural language prompts or questions. ChatGPT has been trained on a massive corpus of text data, enabling it to generate coherent and contextually relevant responses to various prompts.
ChatGPT, like other AI language models, has the potential to pose a threat to the integrity of certain types of online assessments. However, it is essential to note that not all forms of online assessments are equally vulnerable to this threat.
Assessments that require complex and detailed responses, such as those demanding higher-order thinking, are less vulnerable to cheating with ChatGPT. These assessments require students to demonstrate their understanding of the material in a more nuanced and contextualized way, making it difficult for ChatGPT to generate accurate and coherent responses.
Additionally, online proctoring tools can be effective in defending against the threat of ChatGPT in exams. These tools use a combination of technologies, such as video and audio monitoring and cursor/keystroke analysis, to detect suspicious behavior during an online exam. This includes monitoring for the use of virtual machines or other software that may be used to run ChatGPT.
Mercer | Mettl applies a multi-layered approach to maintain the sanctity of online assessments that incorporates both technological solutions and test-design measures.
Bloom’s Taxonomy is a framework for categorizing knowledge goals and objectives. The taxonomy is designed to help assess learning objectives at different levels of complexity.
Mercer | Mettl implements a tailored version of Bloom’s Taxonomy framework in developing its question banks. Assessments requiring higher-order thinking skills (HOTS) involve critical, creative, and complex cognitive processes that go beyond simple memorization or comprehension of information. While AI models like GPT-4 can exhibit some of these skills, their abilities are often limited compared with those of humans.
For example, GPT-4 can generate text that appears analytical or critical but might lack true understanding or insight. Moreover, AI models may struggle with tasks that require synthesis or evaluation of information beyond their training data. However, it is essential to recognize that AI capabilities are continually improving, and future models may better address some of these challenges.
Mercer | Mettl offers an extensive suite of AI-powered online remote proctoring tools to ensure cheating-free examinations. These tools analyze the test taker’s image, video, and audio feeds and raise flags on suspicious or unusual activities, such as an additional person in the frame, the test taker moving away from the test window, a mobile phone being detected, or the test taker not being visible.
The flags are combined into a report element called the ‘Credibility Index,’ which crunches the test takers’ data and categorizes them into Low, Medium, and High credibility ranges.
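The actual Credibility Index computation is proprietary. Purely as an illustration of the idea, combining proctoring flags into a credibility bucket might be sketched as follows; all flag names, weights, and thresholds here are hypothetical, not Mercer | Mettl's formula:

```python
# Illustrative sketch only: combine hypothetical proctoring flags into a
# Low/Medium/High credibility bucket. Flag names, penalty weights, and
# thresholds are assumptions, not an actual Credibility Index formula.

# Penalty weights per flagged event type (hypothetical values)
FLAG_WEIGHTS = {
    "extra_person_in_frame": 25,
    "test_taker_absent": 20,
    "mobile_phone_detected": 30,
    "moved_away_from_window": 10,
}

def credibility_bucket(flag_counts: dict) -> str:
    """Map counts of flagged events to a credibility range."""
    # Start from a perfect score and subtract a penalty per flagged event.
    score = 100
    for flag, count in flag_counts.items():
        score -= FLAG_WEIGHTS.get(flag, 5) * count
    score = max(score, 0)
    # Hypothetical cutoffs for the three ranges.
    if score >= 70:
        return "High"
    if score >= 40:
        return "Medium"
    return "Low"
```

For example, a session with one detected phone and one instance of moving away from the window would score 60 under these assumed weights and land in the Medium range.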
Reviewers can watch the video recordings in the reports to review the instances the AI has flagged as suspicious. In live proctoring, this information is also available to invigilators in real time, enabling them to take corrective measures such as chatting with test takers, pausing the test, or even ending it.
The best way to prevent candidates from gaining an unfair advantage in an online assessment is by restricting their access to various online tools and forums during the assessment.
Mercer | Mettl enables you to perform a 360-degree check of the test taker’s surroundings before and during the assessment to ensure no help material or person is around. Mercer | Mettl also offers a next-generation anti-cheating lockdown browser that helps host assessments in a highly secure and safe environment by preventing on-screen cheating.
Once test-takers begin their assessments via these apps, their screen is put in a kiosk mode that prevents any form of unfair activity. The lockdown browsers are available for all popular platforms across desktop and mobile.
Mercer | Mettl deploys plagiarism solutions to detect whether submissions made by test-takers during assessments are plagiarized. Code plagiarism detection involves multiple approaches, including the Measure of Software Similarity (MOSS).
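MOSS itself is a hosted service, but the core technique behind it is document fingerprinting by winnowing: hash overlapping k-grams of normalized code, then keep only the minimum hash in each sliding window, so two submissions share fingerprints even after renaming or reformatting. A minimal sketch of that idea (an illustration, not MOSS's actual implementation) could look like:

```python
# Minimal sketch of winnowing-based code fingerprinting, the technique
# behind MOSS. Illustrative only; not MOSS's real implementation.
import re

def fingerprints(code: str, k: int = 5, window: int = 4) -> set:
    """Return a set of winnowed k-gram hashes for a piece of source code."""
    # Normalize: strip all whitespace and lowercase, so trivial
    # reformatting does not change the fingerprint.
    text = re.sub(r"\s+", "", code).lower()
    if len(text) < k:
        return {hash(text)}
    # Hash every k-character substring (k-gram) of the normalized text.
    hashes = [hash(text[i:i + k]) for i in range(len(text) - k + 1)]
    # Winnowing: keep only the minimum hash in each sliding window.
    picked = set()
    for i in range(max(len(hashes) - window + 1, 1)):
        picked.add(min(hashes[i:i + window]))
    return picked

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of two fingerprint sets (0.0 to 1.0)."""
    fa, fb = fingerprints(a), fingerprints(b)
    return len(fa & fb) / len(fa | fb)
```

Because the normalization collapses whitespace, two submissions that differ only in formatting produce identical fingerprint sets, while unrelated code shares few or none.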
To detect code generated by AI tools such as ChatGPT more proficiently, we at Mercer | Mettl are developing an advanced, next-generation AI-based code plagiarism detection system. It will be available soon to our enterprise users.
Mercer | Mettl brings you the ability to conduct live remote interviews online. During a live interview, an interviewer can ask open-ended questions and assess the candidate’s responses virtually, verifying their knowledge and skills.
With live pair-programming capabilities, you can assess the programming competence of candidates in real time.
Additionally, live online interviews can help identify unusual behavior or suspicious patterns, such as a lack of eye contact, long pauses before answering, clipboard pasting, or cursor movement across browser tabs, which may indicate the use of outside resources like ChatGPT.
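As a purely illustrative sketch, signals like these could be surfaced to a reviewer by scanning a timestamped event log from the interview session; the event names and pause threshold below are hypothetical, not a real product schema:

```python
# Illustrative sketch: scan a timestamped interview event log for
# patterns that may warrant reviewer attention. Event names and the
# pause threshold are hypothetical assumptions.

PAUSE_THRESHOLD_SECONDS = 30  # hypothetical cutoff for a "long pause"

def suspicious_events(events: list) -> list:
    """Return human-readable notes for events that may need review.

    `events` is a list of (timestamp_seconds, event_name) tuples, using
    hypothetical names like "question_asked", "answer_started",
    "clipboard_paste", and "tab_switch".
    """
    notes = []
    question_time = None
    for ts, name in events:
        if name == "question_asked":
            question_time = ts
        elif name == "answer_started" and question_time is not None:
            # Flag an unusually long gap between question and answer.
            if ts - question_time > PAUSE_THRESHOLD_SECONDS:
                notes.append(f"long pause ({ts - question_time}s) before answering at t={ts}")
            question_time = None
        elif name in ("clipboard_paste", "tab_switch"):
            notes.append(f"{name} at t={ts}")
    return notes
```

A log such as `[(0, "question_asked"), (45, "answer_started"), (50, "clipboard_paste")]` would yield two notes under these assumptions: a 45-second pause and a paste event.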
ChatGPT, as an AI language model, has the potential to impact our lives in numerous positive ways. It can facilitate improved communication, personalize interactions, automate tasks, provide valuable educational resources, and contribute to ongoing discussions around ethics and responsibility in the use of AI.
However, it is important to consider the potential negative implications of its use and work to mitigate these risks through responsible and ethical practices.
Originally published April 6, 2023; updated July 3, 2023
Vaishali has been working as a content creator at Mercer | Mettl since 2022. Her deep understanding and hands-on experience in curating content for education and B2B companies help her find innovative solutions for key business content requirements. She uses her expertise, creative writing style, and industry knowledge to improve brand communications.