Institution-level dialogues can help set up both general and discipline-specific guidelines on what constitutes permissible AI assistance and what does not
A recent petition filed by a law student before the Punjab and Haryana High Court against a private university for failing him in a course raises important questions regarding the use of generative AI (GenAI) in academia and research. The university failed the student because he used AI-generated material in his examination responses. The student challenged the decision on several grounds, including lack of sufficient evidence and violation of the principles of natural justice. After the university informed the Court that it had since passed him, the Court disposed of the petition.
Whether a student used GenAI tools in a submission is an inquiry best conducted by experts in the field. The controversy, however, raises a broader question: how to navigate the ethical and academic challenges posed by GenAI fairly and consistently. When used appropriately, many GenAI tools can act as complementary resources that enrich learning and enhance communication. Used thoughtlessly, however, these tools can defeat many of the broader goals of education. Many institutions are now grappling with the challenge of GenAI-generated submissions by students and researchers, and academic journals and other scientific communication platforms face a similar problem.
The response from many Indian institutions to this crisis has, unfortunately, been far from satisfactory. A substantial number of institutions continue with traditional modes of evaluation as if nothing has changed. Some institutions concerned about excellence in education and research have gone to the other extreme, relying heavily on technology: many indiscriminately use AI detection tools like Turnitin AI Detector to penalise students and researchers.
As numerous scientific studies have pointed out, false positive rates are a serious concern with most AI detection tools. These tools make probabilistic assessments, and their reliability drops substantially once human intervention modifies an AI-generated draft. While a tool may find it relatively easy to flag an entirely AI-authored text, its predictions become far less robust when the user has edited or reworked the GenAI output. The decision on whether a candidate has engaged in academic malpractice must therefore rest with experts in the subject area; little reliance should be placed on machine-generated reports.
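A short base-rate calculation shows why a probabilistic flag makes weak standalone evidence. The Python sketch below is purely illustrative: the prevalence, true positive rate and false positive rate are assumed numbers chosen for demonstration, not measurements of Turnitin or any other real tool.

```python
# Hypothetical illustration of the base-rate problem with AI detectors.
# Every number below is an assumption for demonstration; none describes
# any real detection tool.

def prob_flag_is_honest(prevalence: float, tpr: float, fpr: float) -> float:
    """Probability that a flagged submission is actually honest work,
    computed with Bayes' theorem."""
    p_flag = prevalence * tpr + (1 - prevalence) * fpr
    return (1 - prevalence) * fpr / p_flag

# Assume 10% of submissions are AI-written, the detector catches 90%
# of those (true positive rate), and wrongly flags 5% of honest work
# (false positive rate).
share = prob_flag_is_honest(prevalence=0.10, tpr=0.90, fpr=0.05)
print(f"{share:.0%} of flagged submissions would be honest work")
# Prints "33% ...": roughly one in three flagged students is innocent.
```

Under these assumed numbers, even a seemingly accurate detector produces flags that are wrong a third of the time; the rarer genuine AI-written submissions are, the worse this gets, which is why expert review of each flagged case matters.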
A constructive starting point for addressing the current GenAI crisis could be opening dialogues within institutions on what constitutes permissible AI assistance and what does not. This clarity is important, as AI tools, such as those for language correction, are increasingly being integrated into widely used word processors. Without clear guidelines, students and researchers may inadvertently cross the line. Institution-level dialogues can lead to both general and discipline-specific guidelines.
Institutions could also consider supplementing written submissions with rigorous oral examinations, which enable a more holistic assessment of candidates and reduce the potential for misuse of AI tools. This demands greater time and effort from faculty and examiners, and institutions need to factor it into their faculty workload planning. Regulatory authorities like the University Grants Commission (UGC) and the All India Council for Technical Education (AICTE) have a major role in facilitating this transformation.
Appropriate disclosure of AI use should also become a norm in academia. Students and researchers should be obliged to disclose which tools they used in their writing, and for what purposes. Based on these disclosures and the institutional guidelines, inquiry committees could take fair and balanced decisions on allegations of AI misuse. It is also important for students and researchers to keep a record of their writing: features such as “version history” in Microsoft Word can help establish which parts of a document they authored and what modifications were subsequently made with AI tools.
It is also important for policymakers to revisit the incentive structures within academia and research, particularly the relentless focus on publications that fuels a publish-or-perish culture. Though the UGC has removed mandatory publication requirements for the grant of a PhD degree, many institutions continue to demand publications from doctoral candidates. It is high time we explored better modes of scientific communication and evaluation that value quality over quantity. Comprehensive reforms can help balance the opportunities and challenges posed by the technology.
Source: Indian Express, 4/12/24