
Wednesday, December 04, 2024

Let’s talk about AI in academia

Institution-level dialogues can help set up both general and discipline-specific guidelines on what constitutes permissible AI assistance and what does not


A recent petition filed by a law student before the Punjab and Haryana High Court against a private university for failing him in a course raises important questions regarding generative AI (GenAI) use in academia and research. The university failed the student because he used AI-generated materials to submit responses in an examination. The student challenged the decision on several grounds, including lack of sufficient evidence and violation of the principles of natural justice. After the university informed the Court that it had subsequently passed the student, the Court disposed of the petition.

While determining whether a student used GenAI tools in a submission is an inquiry ideally conducted by experts in the field, the controversy raises a broader question: how to navigate the ethical and academic challenges posed by GenAI fairly and consistently. When used appropriately, many GenAI tools can act as complementary resources that enrich learning and enhance communication. Used carelessly, however, these tools can defeat many of the broader goals of education. Many institutions are now grappling with the challenge of GenAI-generated submissions by students and researchers. Academic journals and other scientific communication platforms face a similar challenge.

The response from many Indian institutions to this crisis has, unfortunately, been far from satisfactory. A substantial chunk of institutions continues with traditional modes of evaluation, as if nothing has changed. Some institutions concerned about excellence in education and research have gone to the other extreme, relying heavily on technology. Many indiscriminately use AI detection tools like Turnitin AI Detector to penalise students and researchers.

As numerous scientific studies have pointed out, false positive rates are a matter of concern with most AI detection tools. These tools operate on probabilistic assessments, and their reliability drops substantially once human intervention modifies an AI-generated draft. While it may be relatively easy for a tool to detect a completely AI-authored work, predictions become far less robust when the user edits or modifies the GenAI output. This underscores the importance of human judgment: the decision on whether a candidate has engaged in academic malpractice has to be taken by experts in the subject area, and not much reliance should be placed on machine-generated reports.
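To see why false positives matter at scale, a rough base-rate calculation is instructive. All the numbers in the sketch below (misuse rate, detector sensitivity, false-positive rate) are illustrative assumptions, not measured properties of any real detection tool:

```python
# Illustrative base-rate arithmetic for AI-detection flags.
# Every number here is an assumption chosen for the example.

def expected_flags(n_students, ai_use_rate, sensitivity, false_positive_rate):
    """Return (true flags, false flags) expected among n_students."""
    ai_users = n_students * ai_use_rate
    honest = n_students - ai_users
    true_flags = ai_users * sensitivity          # misusers correctly flagged
    false_flags = honest * false_positive_rate   # honest work wrongly flagged
    return true_flags, false_flags

# Suppose 1,000 submissions, 10% involve AI misuse, the detector catches
# 80% of misuse, but also wrongly flags 2% of honest submissions.
true_f, false_f = expected_flags(1000, 0.10, 0.80, 0.02)
share_innocent = false_f / (true_f + false_f)
print(true_f, false_f, round(share_innocent, 3))  # 80.0 18.0 0.184
```

Even with a seemingly small 2% false-positive rate, nearly one in five flagged submissions in this scenario would be honest work, which is why expert review of each flag, rather than mechanical reliance on the report, is essential.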

A constructive starting point for addressing the current GenAI crisis could be opening dialogues within institutions on what constitutes permissible AI assistance and what does not. This clarity is important, as AI tools are increasingly integrated into widely used word processors (language-correction features, for example). Without clear guidelines, students and researchers may inadvertently commit mistakes. Institution-level dialogues can lead to both general and discipline-specific guidelines.

Institutions could also consider supplementing written submissions with rigorous oral examinations to enable a more holistic assessment of candidates and reduce potential misuse of AI tools. This demands greater time and effort from faculty and examiners, which needs to be factored into institutions' faculty workload planning. Regulatory authorities like the UGC and AICTE have a major role in facilitating this transformation.

Appropriate disclosures regarding AI use should also become the norm in academia. Students and researchers should be obliged to disclose which tools they used in their writing, and for what purposes. Based on these disclosures and the institutional guidelines, inquiry committees could take fair and balanced decisions on allegations of AI misuse. It is also important for students and researchers to keep a record of their writing. Features such as “version history” in Microsoft Word can help prove which parts of a document they authored and what modifications were subsequently made with AI tools.

It is also important for policymakers to revisit the incentive structures within academia and research, particularly the relentless focus on publications that promotes a publish-or-perish culture. Though the UGC has removed mandatory publication requirements for the grant of a PhD degree, many institutions continue to demand publications from doctoral candidates. It is high time we explored better modes of scientific communication and evaluation that value quality over quantity. Comprehensive reforms can help us balance the opportunities and challenges posed by the technology.

by Arul George Scaria

Source: Indian Express, 4/12/24

Tuesday, July 02, 2024

What is CriticGPT?

CriticGPT is a powerful AI tool built with OpenAI’s GPT-4 model. It was designed to make it easier for reviewers to find mistakes in code produced by ChatGPT. One of the most important things this tool does to improve the accuracy and stability of code is to find bugs that human reviewers might miss.

Research and Development

A research paper called “LLM Critics Help Catch LLM Bugs” went into great detail about how CriticGPT was made. To improve the AI’s ability to find mistakes, researchers trained it on a dataset of code with deliberately inserted bugs. Because of this training, CriticGPT could find and report code errors more accurately. The study found that human annotators preferred the critiques produced by CriticGPT over notes written by human reviewers 63% of the time, especially when it came to finding naturally occurring LLM mistakes. This suggests that the programming community is very open to AI-generated critical comments.

Innovations in Review Techniques

A technique called “Force Sampling Beam Search” is used by CriticGPT to produce longer and more detailed critiques. This method also lowers the chance of “hallucinations”, which happen when the AI reports mistakes that don’t exist or aren’t important. One of CriticGPT’s most important benefits is that users can adjust how thoroughly errors are flagged. This gives reviewers the freedom to strike the right balance between catching real bugs and avoiding “error” flags that aren’t needed.
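The thoroughness-versus-precision knob can be pictured as a confidence threshold over candidate critiques. The sketch below is a generic illustration with made-up scores and labels; it is not CriticGPT’s actual mechanism, whose internals involve more than a single threshold:

```python
# Each candidate critique carries a confidence score; a threshold decides
# which critiques are surfaced. Lowering the threshold catches more real
# bugs but lets through more nitpicks and hallucinated issues.
# (Scores and labels are hypothetical, for illustration only.)
reports = [
    ("off-by-one in loop bound", 0.92, "real bug"),
    ("missing None check", 0.75, "real bug"),
    ("stylistic: rename variable", 0.40, "nitpick"),
    ("phantom race condition", 0.30, "hallucination"),
]

def surfaced(reports, threshold):
    """Return the titles of critiques whose score clears the threshold."""
    return [title for title, score, _ in reports if score >= threshold]

strict = surfaced(reports, 0.70)   # precision-leaning setting: real bugs only
lenient = surfaced(reports, 0.25)  # recall-leaning setting: everything
print(len(strict), len(lenient))   # 2 4
```

Under the strict setting only the two real bugs are surfaced; under the lenient setting all four candidates appear, including the spurious ones — the trade-off the tool lets users tune.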

Limitations

CriticGPT has some problems, even though it has clear strengths. It mostly struggles with long and complicated coding tasks, because it was trained on ChatGPT responses that were fairly short. Another problem is that the AI doesn’t always find errors that are spread across multiple sections of code, a common situation in software development. To sum up, CriticGPT is a big step forward in AI-assisted code review. It improves the code review process by combining GPT-4’s capabilities with specialised training and new methods. As with any tool, though, it has flaws that make it less useful in more complicated code situations.

Thursday, June 06, 2024

Why AI chatbot of your older self won’t stop you from making stupid decisions

“If I only knew then, what I know now…” It is a lament the young do not understand, for they do not know that they do not know. But as time passes — sometimes in days, sometimes in years — many people have wanted to go back and counsel, scold and guide themselves to better decisions. Some may wish to go back only a day, and caution their past selves against that fifth drink, or the late-night binge eating.

For others, regrets can span years and even a lifetime. It may be that, after slaving away at a job for decades, someone may want to go back and quit, when time and opportunity allowed them to. Now, AI is trying to allow people to talk to younger versions of themselves.

According to a report in The Guardian, researchers at the Massachusetts Institute of Technology (MIT) have built an AI-powered chatbot that simulates a user’s older self and dishes out advice. The profile picture is aged — wrinkles, grey hair and perhaps a bit of wisdom in the eyes — to make the faux time travel feel more authentic.

It gives career advice, tells people to cherish their parents, and shares any number of other pearls of wisdom. If the advice sounds a little corny, users have only themselves to blame — the chatbot is based on their behaviour and inputs. Unfortunately, though, it’s unlikely to alter the course of lives.

The problem with the “I wish I’d known then what I know now” aspiration is, as Terry Pratchett pointed out, “when you got older you found out that you wasn’t you then. You then was a twerp.” It takes a life filled with regrets and what-ifs to gain the wisdom to give advice. Ignoring the advice of elders is what being young is often about. A chatbot won’t change that. If kids were so keen on perspective, they would just listen to their parents.

Source: Indian Express, 6/06/24

Thursday, May 30, 2024

Using AI and ChatGPT in legal cases: What Indian courts have said

 

High Courts across India have differed in their stances on using ChatGPT as part of the legal process. Where has it been used, and what are some criticisms of the practice?

The Manipur High Court last week stated that it “was compelled to do extra research through Google and ChatGPT 3.5” while deciding on a case. This is not the first time a High Court has used artificial intelligence (AI) for research. But in India — as in the rest of the world — courts have been rather cautious about the use of AI for judicial work.

How the Manipur HC used ChatGPT in a case

Zakir Hussain, 36, was “disengaged” from his district’s Village Defence Force (VDF) in January 2021, after an alleged criminal escaped from the police station while Hussain was on duty. He never received a copy of the order dismissing him.

After Hussain approached the Manipur High Court challenging his dismissal, Justice A Guneshwar Sharma, in December 2023, directed the police to submit an affidavit detailing the procedure for “disengagement of VDF personnel”. But the affidavit submitted was found wanting, and did not explain what the VDF was. This “compelled” the court to use ChatGPT for further research.

ChatGPT said that the VDF in Manipur comprises “volunteers from the local communities who are trained and equipped to guard their villages against various threats, including insurgent activities and ethnic violence” — information that Justice Sharma used in his ruling.

Ultimately, he set aside Hussain’s dismissal, citing a 2022 memorandum issued by the Manipur Home Department which stated that upon dismissal, VDF personnel must be given “an opportunity to explain in any case of alleged charges”— which the petitioner was denied in this case.

High Courts’ differing stances on using ChatGPT

In March 2023, Justice Anoop Chitkara of the Punjab & Haryana High Court used ChatGPT while denying the bail plea of one Jaswinder Singh, accused of assaulting an individual and causing his death. Justice Chitkara found that there was an element of “cruelty” to the assault, a ground which can be used to deny bail. To supplement his reasoning, Justice Chitkara posed a question to ChatGPT: “What is the jurisprudence on bail when the assailants are assaulted with cruelty?” The court’s eventual order contained the AI chatbot’s three-page response, which included the observation that “the judge may be less inclined to grant bail or may set the bail amount very high to ensure that the defendant appears in court and does not pose a risk to public safety.”

Justice Chitkara, however, clarified that this reference to ChatGPT was not the same as expressing an opinion on the merits of the case, and that it “is only intended to present a broader picture on bail jurisprudence, where cruelty is a factor.”

The Delhi High Court has been less receptive to the use of AI in courts. In August 2023, Justice Pratibha M Singh ruled in favour of luxury shoe designer Christian Louboutin in a trademark case.

Louboutin’s lawyers had used ChatGPT-generated responses to show that the brand had a reputation for “spike shoe style” with a “red sole” — a design which was being copied by another brand called Shutiq. Justice Singh held that ChatGPT cannot be used to decide “legal or factual issues in a court of law”, highlighting the possibility of “incorrect responses, fictional case laws, imaginative data etc. generated by AI chatbots”.

Elsewhere in the world

This ‘fictional case laws’ scenario is not a mere hypothetical. In 2023, a Manhattan federal judge fined a lawyer $5,000 for submitting fictitious legal research generated using ChatGPT. The lawyer had filed a brief with fictitious cases with titles such as Varghese vs China Southern Airlines and Shaboon vs Egypt Air in a personal injury suit involving Colombian airline Avianca.

Last December, the UK judiciary released a set of guidelines about the use of generative AI in courts. While judges were allowed to use ChatGPT for basic tasks such as summarising large bodies of text, making presentations, or composing emails, they were cautioned not to rely on AI for legal research or analysis.

No such guidelines exist in India.

Written by Ajoy Sinha Karpuram

Source: Indian Express, 28/05/24


Monday, March 04, 2024

Caught in the net

 

Histories can be twisted, maligned, because we believe that which is given on a website, written in a manner that is unambiguous and spoon-fed to us, requiring no commitment from our end


When was the World Wide Web released to the public? Searching for an answer, I went to the only place that can provide me with an instant response — the World Wide Web. An NPR article informed me that it was created by Tim Berners-Lee and gifted to humanity on April 30, 1993, free of charge. By the end of the 1990s, this information web had covered the globe to such an extent that the post-90s generations don’t know of a time before the internet.

This one platform has revolutionised information access, learning, knowledge production and connectivity. And this has happened at a speed which is unfathomable. The number of technological developments that have aided, enhanced and accelerated these processes is mind-boggling. With Artificial Intelligence bursting onto the scene, things are only going to get even more unbelievable, literally and metaphorically. As an aside, it is philosophically valuable to consider Roger Penrose’s argument that Artificial Intelligence is a ‘misnomer’, that the computer can only ‘mimic’ intelligence. He argues that consciousness is not computation. Anyway, let me not wander.

Before you jump the gun and assume that this piece is about fake news, deep fakes, post-truth or the dangers of Artificial Intelligence, let me inform you that it is not. The drawbacks of not having the internet and the democratising role the medium has played are there for all of us to see, acknowledge and appreciate. Therefore, I am not going to dwell on the obvious. Neither am I going on a nostalgic rant on ‘the good old days’. But there are other questions about the pre-networked age that require consideration.

Let us begin with something as simple as thinking, a process that every human being engages in by default. To receive information, comprehend and make decisions is nearly automatic. The question before me is whether there was something different about the way we thought before the online network became a permanent fixture in our lives. Similar to how technology helped us reduce the time we spent on gathering, cooking and consuming food, the Cloud has greatly reduced the burden of remembering dates, times or exact events. Such information was given great importance in the past. Unfortunately, the lifting of this unnecessary weight has not meant that we engage earnestly with serious questions. The ease with which the Web provides us with answers somehow curtails the extent of our questioning.

The rapidity of search results and the way material is presented on and for the Web do not make us curious. Furthermore, the tone is, more often than not, definitive. In other words, the internet has surreptitiously removed doubt from learning. Doubt is not distrust. It is a prerequisite for education. It is the opening that leads to further investigation. This does not happen by accident. It is part of knowledge creation. In the sharing of what we know, we embed the possibility of doubt, change and growth.

The virtual information highway largely functions in the opposite manner where you get more hits if you present an assured face. Your fingers itch to click the first possible link and people pay to place their links on top. It requires great effort to go past these innumerable layers of ‘surety’ to get to a place where learning is exciting; dare I even say true! This makes me consume in an unthinking manner. Questioning is stunted and people hold on to the programmed opinion they clicked on.

Hence, we should not be surprised that ‘educated’ folks fall prey to blatant lies. This problem did not begin with social media. The algorithms that nurtured cyberspace have always been designed to lessen the time used for assimilation. Speed in time spent on accessing a page and the way the information is presented are key to its success. The moment we foreground the paucity of time, urgency or the claim that we can do more productive things in those extra minutes that are needed to read, read again, think, read again and pause, we lose the ability to learn.

Is the internet a reality? Since creators, developers and participants are real people, we have to accept that the virtual universe is a part of a larger reality. But this agent has drastically reduced physical interactions. Childhood in the 1980s and 1990s entailed feeling the soil, being close to the trees, and meeting people in person. Today, it is all about video calls, playing games and learning via iPads and mobile phones. Parents say technology has made children smarter at an early age. I am no child psychologist, nor an educationist to counter such a claim confidently. Yet, I have to wonder about this smartness. Building the capacity to solve arithmetic or mathematical problems, or remembering things, or cleverness without empathy, love and care is not intelligence. I will argue that true intelligence is felt and every emotional connection is intelligent. When this is missing, humanity goes into hiding. Watching videos on YouTube or Instagram of the horrors that are unfolding in Palestine or Manipur will not make a person more empathetic. Love and compassion have to be learnt and shared physically, directly, without an intermediary.

If something does not exist on the Web, is it real? And, as an extension, is everything that happened before the virtual age and has not been digitised irrelevant? The first question may sound moot because we cannot imagine that there are people or things that do not find mention on the internet. The falsity of this belief stems from the fact that we trust it as a democratic space. The internet is a marketplace, a bazaar where everyone is selling. The fact that anyone can open a shop without paying rent does not imply equality. Social equations that govern our everyday interactions also control the internet. Hence, there are many unheard, wrongly represented and lost voices.

The imperative to give everything a digital avatar wipes out all that does not find space in this all-encompassing network. Innumerable cultures, stories and peoples are lost to posterity not only because we do not look beyond the infobahn but also because we have forgotten to remember from life experiences, from what we hear, see and learn in person. Even lived histories have to be virtualised. Histories can be easily twisted, maligned, because we only believe that which is given on a website, written in a manner that is unambiguous and spoon-fed to us, requiring no commitment from our end. Naysayers may argue that all this is hocus-pocus theorisation. That the website is merely the new avatar of the book. Books also spread lies and wipe out people. This is true. But a book required the writer to explain and demanded attention and time from the reader. The internet, on the other hand, celebrates loudness and preys on the lack of attention.

T.M. Krishna

The Telegraph: 1/03/24

Tuesday, February 27, 2024

Krutrim- India’s first AI unicorn launches Chatbot

 Krutrim, an Artificial Intelligence start-up launched by Ola founder Bhavish Aggarwal, has rolled out an AI chatbot in public beta, similar to OpenAI’s ChatGPT and Google’s Gemini.

India’s first AI unicorn

The launch comes a month after Krutrim disclosed $50 million in financing at a $1-billion valuation, becoming the country’s first start-up unicorn of 2024. The company says it is the first AI unicorn in the country.

The chatbot, which has the same name as the company (Krutrim), was announced in December. It is the firm’s first product, which will be powered by its multilingual large language models (LLM), also called Krutrim.

Features of AI models

Krutrim unveiled its AI models in December last year. At the time, the start-up also showcased the AI chatbot. Krutrim’s AI models can understand over 20 Indian languages and generate text in 10 Indian languages, including Bengali, Tamil, Malayalam, Gujarati, and Marathi. A more sophisticated version, Krutrim Pro, is anticipated to be available in Q4 FY24.

Krutrim’s ambitions

Krutrim, meaning ‘artificial’ in Sanskrit, will come in two sizes: a base model named Krutrim, trained on 2 trillion tokens and unique datasets, and a larger, more complex model called Krutrim Pro, launching next quarter with advanced problem-solving and task-execution capabilities.

Krutrim Pro, launching in Q4 FY24, will be multimodal in nature, which means it can understand and work with different formats, including text, audio, image, and video, at the same time. It will also have larger knowledge, advanced problem-solving and task execution capabilities.

Way Forward

The start-up is working on building AI infrastructure, developing indigenous data centres and aims to eventually get into server-computing, edge-computing, and super-computers. The start-up is also working on manufacturing AI-optimised silicon chips.

India’s Unicorns in 2023

In December 2023, fintech company InCred struck a valuation of over $1 billion, becoming the latest unicorn in the country. It was the second unicorn of the year, after Zepto, the e-commerce app that delivers groceries. In 2023, only two companies managed to become unicorns.

Tuesday, February 20, 2024

What is ‘ANUVADINI’- AI Tool?

 

The Government of India has directed all school and higher education institutions across the country to make digital study material available in Indian languages for every academic course within three years. This policy aims to enable students to learn in their native tongues, in line with India’s linguistic diversity.

Background

The National Education Policy 2020 has prioritised education in native languages and has also recommended a three-language formula for school education till Class X. The National Curriculum Framework 2023 for school education states that till Class X, a student needs to study three languages, of which two should be native Indian languages; in Classes XI and XII, where the study of two languages is recommended, one should be a native Indian language.

Coverage of Initiative

The mandate on access to digital study materials applies to both government and private institutions and covers all courses, from school textbooks to specialised university texts spanning the sciences, humanities, engineering, medicine, law, etc.

Anuvadini- the AI Tool

‘Anuvadini’, an indigenously developed Artificial Intelligence-based multilingual translation application, will facilitate the swift conversion of existing English materials into multiple languages, with machine learning as the bedrock, followed by expert manual review for accuracy.

Significant headway has already been made over the past two years, with thousands of textbooks translated across domains and curated on the online portal Ekumbh under the initiative. Question papers for national entrance examinations are now also available in 12 regional languages.

UGC Guidelines

The University Grants Commission (UGC) has also issued rules for higher education institutions to provide courses in Indian languages. The UGC said that the Commission for Scientific and Technical Terminology has prepared standard glossaries, covering a wide range of topics, that can be used for translation. According to the rules, technical terms that are hard for students to understand may be given in English, in quotes, after their Indian-language counterparts.

Intended Benefits

Removing language barriers to accessing high-quality pedagogical resources would democratise quality education for the masses while preventing dropouts. It would also promote the use of native tongues in higher academia and professional domains instead of English.

Uses in other arenas

More than five thousand judgments of the Kerala High Court and District Courts have been recently translated into Malayalam with the help of Artificial Intelligence (AI). The judgments are translated using the AI tool ‘Anuvadini’ prepared by AICTE under the Union Ministry of Education.

Digital Ecosystem

In school education, study material is being made available in multiple Indian languages, including over 30 languages on the DIKSHA portal, and competitive exams like JEE, NEET and CUET are being delivered in 12 Indian languages and English.
For the past two years, engineering, medical, law, UG, PG and skill books have also been under translation.

In a decision aimed at providing students with the opportunity to study in their own language, the Centre has decided that study material for all courses under school and higher education will be made available digitally in the Indian languages included in the 8th Schedule of the Constitution.

Wednesday, February 07, 2024

Game-changer

The promise of AI is not merely about job displacement and creation; it is also about its potential as a game-changer in public services.


In the unfolding narrative of technological evolution, the tantalising promise of artificial intelligence (AI) is casting its glow on the emerging world, heralding prospects of unprecedented growth and human capital development. This transformation, however, is not without its sceptics, echoing concerns that the benefits of AI may disproportionately favour the already privileged, particularly in the Western world. Yet, beneath the surface, there lies a profound potential for AI to act as a catalyst for positive change in developing nations. The narrative begins with the acknowledgment that technology has, historically, been a double-edged sword. AI emerges as a unique player in this unfolding drama. Unlike earlier waves of technology, AI’s reach extends faster and more broadly. The key lies in the ubiquity of smartphones in the developing world, acting as gateways to a technological revolution. The promise of AI is not merely about job displacement and creation but about its potential as a game-changer in public services. Education and healthcare, perennial challenges in developing economies, stand to gain substantially. The sheer scale of challenges, such as overcrowded classrooms in India or a scarcity of doctors in Africa, demands innovative solutions. AI, when harnessed strategically, can empower teachers, aid healthcare workers, and bridge the gap in resources. What makes this prospect all the more exciting is the participatory role that developing countries can assume. No longer passive recipients, they have the opportunity to shape AI to suit their unique needs. Localised applications, like speech-recognition software aiding illiterate farmers or chatbots assisting students with homework in Kenya, showcase the adaptability and potential of AI to address specific challenges. Crucially, the narrative underscores that AI need not succumb to the winner-takes-all dynamics that defined earlier technological revolutions.

Unlike the dominance of social media and internet-search giants, the flexibility of AI allows for diverse approaches to prosper. Developers in India, for instance, are fine-tuning Western models with local data, avoiding heavy capital costs. As we navigate this transformative landscape, it becomes evident that each country is poised to mould AI according to its unique requirements. China’s tech prowess and deep-pocketed internet giants position it as a frontrunner, while India’s vibrant start-up scene and government support signal innovation on the horizon. Even countries in the Gulf, traditionally reliant on oil, are strategically embracing AI to diversify their economies. Yet, amid the optimism, cautionary notes are sounded. Challenges such as expensive computing power, the need for local data, and potential misuse of the technology loom on the horizon. Connectivity, governance, and regulation are identified as linchpins for AI’s successful integration, especially in sub-Saharan Africa. The path forward requires strategic investments to overcome challenges, ensuring AI’s benefits permeate across borders. As uncertainties persist, the certainty remains that AI’s multifaceted capabilities will continue to improve, presenting developing countries with a remarkable opportunity and the power to seize it.


Source: The Statesman, 3/02/24