
Showing posts with label Digital Literacy.

Wednesday, November 23, 2022

Draft Digital Personal Data Protection Bill 2022

 

Background

The first draft of the Personal Data Protection Bill, 2018 was proposed by the Justice Srikrishna Committee, which was set up to provide recommendations on a new data protection law for India. The 2018 bill was revised and the Personal Data Protection Bill, 2019 was tabled in the Lok Sabha. The Lok Sabha passed a motion to refer the 2019 bill to a Joint Committee of both Houses of Parliament. Due to delays caused by the COVID-19 pandemic, the joint committee submitted its report only in December 2021. The Data Protection Bill, 2021 was introduced by the government based on the recommendations of the joint committee. However, the bill was withdrawn because of the extensive changes proposed by the joint committee.

Why are there so many revisions to the data protection bill?

India is facing several challenges while formulating a data protection bill. These include:

  1. Protecting the rights of data principals (users) should not make even legitimate data processing impractical.
  2. The need to strike a balance between the right to data privacy and reasonable exceptions, especially when the government is processing personal data.
  3. The law must be future-proof so that it can keep pace with technological developments.
  4. Rights and remedies should be easy for data principals to exercise, given their unequal bargaining power with respect to data fiduciaries (companies).

What are the key features of the DPDP Bill, 2022?

  • The DPDP Bill, 2022 gives maximum control to the data principal. It mandates a comprehensive notice to the data principals on different aspects of data processing.
  • While non-consent-based processing of personal data is permitted, data principals are given the right to access, correct and delete their data.
  • The data fiduciary will be allowed to process the data only for the stated purposes and no more. The data can be retained only as long as it is required to fulfill the stated purpose.
  • The Bill penalizes entities for data breaches. It also proposes a fine of Rs 10,000 on individuals for providing false information, impersonating another person, or filing frivolous complaints against social media platforms.
  • The Bill removes the explicit reference to certain data protection principles, such as collection limitation, allowing the data fiduciary to collect any personal data permitted by the data principal. Making data collection solely consent-based ignores the fact that data principals often do not have the requisite know-how to judge what kind of personal data is relevant for a particular purpose.
  • The Bill removes the concept of “sensitive personal data”, which recognized the harm caused by the unlawful processing of certain categories of personal data. It no longer provides extra protection for such data, doing away with the requirement of explicit consent before it is processed and used.
  • The Bill reduces the information that a data fiduciary is required to provide to the data principal, in order to avoid information overload. Previous versions required data fiduciaries to provide considerable information on the rights of data principals, the grievance redressal mechanism, the retention period of the information, the source of the information collected, and so on.
  • The Bill proposes the setting up of the Data Protection Board of India. In case the data is breached, the data fiduciary or data processor is required to notify this board and each affected data principal. If they fail to do so, the Bill proposes a fine of up to Rs.200 crore.
  • The Bill introduces the concept of “deemed consent”. It lists categories of data processing that are exempt from consent-based processing or are considered “reasonable purposes”. There are concerns about the grounds for deemed consent because of the ambiguity of terms such as “public interest”.

Tuesday, November 15, 2022

How we can make the digital space safer for all, particularly women

 

What could be helpful is to elevate the public discourse around technology-facilitated abuse. There is a need to focus on safety tools and features across platforms.


India has one of the youngest demographics in the world (27 per cent are Gen Z and 34 per cent are Millennials) and among the most active online. As online interactions increase, more and more content is created and shared among people, helping them form new and wonderful connections. Sometimes, however, these interactions also make them vulnerable to harm.

Women are often particularly vulnerable. “What should I do, I can’t tell my family!” is a common refrain, heard from young women across the country when they grapple with the fallout of their private pictures being leaked online — sometimes from a hacked account, other times because of a soured relationship. In a culture where mobile phones sell because of the quality of their cameras, it should be no surprise that young men and women are exploring new ways to express their sexuality and navigate relationships, including through the taking and sharing of intimate images. However, it is increasingly evident that these new social norms have created new forms of abuse, as intimate images are being used to blackmail, shame, coerce, and control. Women are usually the victims.

Often, crimes that disproportionately impact women devolve into mass panic and lead to an all too predictable top-down discourse around the need to “protect our sisters and daughters”. This reaction, however well intentioned, ends up denying women their freedom and agency: many of their so-called “protectors” are simply telling women to go offline, to be ashamed of expressing themselves, to stay in their lane.

Fortunately, leading academics — many of them women — are spearheading research on the topic, so that we may more accurately discuss and grapple with the evolution of technology-facilitated abuse, including intimate image abuse. Industry, too, has a role to play. If platform providers were more responsive to the concerns and experiences of women, better design could, to some extent, help mitigate such issues.

A simple example is that of “unwanted contact”, one of the reasons why women avoid online spaces. This could mean design choices that help women stay in control of who they engage with, thereby reducing unwanted messages or advances. It could also mean leveraging open source technology that detects and blurs lewd images so that women don’t need to see unsolicited pictures. Therefore, focussing on safety tools and features — across the spectrum of websites and apps — could bring forward more ideas for creating a safer internet experience.
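As a concrete illustration of the kind of safety feature described above, here is a minimal sketch of how a client could blur incoming images that a detector flags as explicit, showing the original only if the recipient chooses to reveal it. The classifier is represented by the hypothetical placeholder function nsfw_score, standing in for any open-source nudity-detection model; only the blurring step uses a real library (Pillow). This is a sketch of the general pattern, not a description of any particular platform's implementation.

```python
# Minimal sketch: blur unsolicited images flagged as explicit.
# `nsfw_score` is a hypothetical placeholder, not a real library call.
from PIL import Image, ImageFilter

BLUR_RADIUS = 24       # strength of the Gaussian blur applied to flagged images
SCORE_THRESHOLD = 0.8  # flag images the classifier scores above this value


def nsfw_score(image: Image.Image) -> float:
    """Placeholder for an open-source nudity/lewdness detector.

    A real implementation would run the image through a trained model
    and return a probability between 0.0 and 1.0.
    """
    raise NotImplementedError("plug in an open-source classifier here")


def prepare_for_display(path: str) -> Image.Image:
    """Return the image to show in the chat window.

    Flagged images are blurred so the recipient never sees them unprompted;
    the original file is left untouched and can be revealed on request.
    """
    image = Image.open(path)
    if nsfw_score(image) >= SCORE_THRESHOLD:
        return image.filter(ImageFilter.GaussianBlur(radius=BLUR_RADIUS))
    return image
```

Blurring only the preview while leaving the original untouched mirrors the design principle in the paragraph above: the recipient stays in control, since the image is hidden by default but can be revealed with a deliberate tap.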

Various parliamentary committees in India have held meetings over the years to discuss the issue of the online safety of women, and part of the government’s motivation in notifying the new IT rules was rooted in the growing concern regarding the safety and security of users, particularly women and children. These are good, tangible steps. With the IT Act coming up for an overhaul, there is an opportunity to discuss in detail the nature of technology-facilitated abuse: capturing what it means, understanding how cases impact individuals as well as communities, the language needed to capture such offences and the punishment — penalties, jail or even rehabilitation programmes for perpetrators. This could be the start of an era of evidence-based discussion. Already, we know that crimes against women are the top category in India’s crime statistics, with cyber crimes a few rungs lower on the scale. Where the two intersect is where we need to focus if we are to make online spaces safe.

Despite these efforts, it is clear that women in India won’t feel safe online anytime soon unless society lets them. What could be helpful here is to elevate the public discourse around technology-facilitated abuse.

Written by Mahima Kaul

Source: Indian Express, 15/11/22

The writer is Director, APAC Public Policy, Bumble.

Monday, January 17, 2022

Countering hate in the digital world

 

Content moderation should be considered a late-stage intervention. Individuals need to be stopped early in the path to radicalisation and extremist behaviour to prevent the development of apps such as Bulli Bai.


Ongoing police investigations to identify the culprits behind the condemnable “Bulli Bai” and “Sulli Deals” apps, which “auctioned” several prominent and vocal Muslim women, implicate individuals born close to the turn of the century. At first glance, this indicates that digital natives are not resilient against problems such as disinformation, hate speech and the potential for radicalisation that plague our informational spaces. But placed within the broader context of decreasing levels of social cohesion in Indian society, that such apps were even created requires us to frame our understanding in a way that can point us towards the right set of long-term interventions.

To understand how we got here, we need to start by looking at the effect of new media technologies developed over the last 20 years on our collective behaviour and identities. Technologies have changed the scale and structure of human networks and led to an abundance and virality of information. Social scientists hypothesise that these rapid transitions are altering how individuals and groups influence each other within our social systems. The pace of technological evolution, coupled with the speed of diffusion of these influences, has also meant that we neither fully understand the changes nor can predict their outcomes. Others have focused on their effects on the evolution of individual, political, social and cultural identities. These identities can be shaped consciously or subconsciously by our interactions, and consequently affect how we process information and respond to events in digital and physical spaces.

Our identities ultimately bear on our cognitive processes — arguments against our defining values can activate the same neural paths as the threat of physical violence. The rise of social media has been linked to the strengthening of personal social identities at the cost of increasing inter-group divisions. Some have suggested that personalised feeds in new media technologies trap us in “echo chambers”, reducing exposure to alternate views, while other empirical work shows that people on social media gravitate towards like-minded people despite frequent interaction with ideas and people with whom they disagree. People can also self-select into groups that reinforce their beliefs and validate their actions. We still need a better understanding of the broader psychosocial effects, specifically in the Indian context. Experience, though, suggests that when these beliefs are prejudices and resentments against a specific group of people, the feedback loops of social confirmation and validation can result in violence. Even pockets of disconnected actions, when repeated and widespread, can destabilise delicate social-political relations built over decades.

Harms arising out of escalating levels of polarisation and radicalisation are primarily analysed through the lens of disinformation and hate speech, which gives primacy to motives. This framing leaves room for some actors to evade responsibility, since motives can be deemed subjective, and for others to remain unaware of the downstream consequences of their actions; often, even actions taken with good intentions can have unpredictable and adverse outcomes. The information ecosystem metaphor, proposed by Whitney Phillips and Ryan M. Milner, compares the current information dysfunction with environmental pollution. It encourages us to prioritise outcomes over motives: we should be concerned with how pollution spreads, not with whether someone intended to pollute. It also makes us understand that the effects of pollution compound over time, and that attempts to ignore, or worse, exploit this pollution only exacerbate the problem — not just for those victimised by it, but for everyone.

Our focus tends to be on those who command the largest audiences, have the loudest voices or say the most egregious things. While they are important, ignoring or downplaying the role of everyone else, or envisioning them as passive, malleable audiences, risks overlooking the participatory nature of our current predicament. Big and small polluters feed off each other’s actions and content across social media, traditional media and physical spaces. The distinctions between “online” and “offline” effects or harms are often neither neatly categorisable nor easily distinguishable: “online” harassment is harassment. Actors as varied as bored students, local political aspirants, content creators/influencers, national-level politicians, and people simply trying to gain clout engage throughout the information ecosystem. Their underlying motivations can range from the banal (FOMO, seeking entertainment, fame) to the sinister (organised, systematic and collaborative dissemination of propaganda and hate) to the performative (virtue signalling, projection of power, capability, expertise), and so on. The interactions of these disparate sets of actors and motivations result in a complex and unpredictable system, composed of multiple intersecting self-reinforcing and self-diminishing cycles, where untested interventions can have unanticipated and unintended consequences.

Several have called for action by platforms to address hate speech. But content moderation should be considered a late-stage intervention. Individuals need to be stopped early on the path to radicalisation and extremist behaviour to prevent the development of apps such as Bulli Bai. This is where steps such as counterspeech — tactics to counter hate speech by presenting an alternative narrative — can play a role and need to be studied further in the Indian context. Counterspeech could take the form of messages aimed at building empathy by humanising those targeted; enforcing social norms around respect or openness; or de-escalating a dialogue. Notably, this excludes fact-checking: when people have strong ideological dispositions, countering their narratives on the basis of accuracy alone can have limited effectiveness. Since behaviours in online and physical spaces are linked, in-person community action and outreach can also help. Social norms can be imparted through families, friends and educational institutions. “Influencers” and those in positions of leadership can have a significant impact in shaping these norms. At such times, the signals that political leaders and state institutions send are particularly important.

Prabhakar is research lead at Tattle Civic Tech. Waghre is a researcher at The Takshashila Institution, where he studies India’s information ecosystem and the governance of digital communication networks

Written by Tarunima Prabhakar and Prateek Waghre

Source: Indian Express, 17/01/22

Wednesday, November 17, 2021

Can we predict the future with big data?

 As a lifelong fan of science fiction, I was thrilled to learn that Apple TV+ was bringing Isaac Asimov’s classic Foundation series to the small screen. I have to admit that my excitement was tinged with just a soupçon of trepidation, unsure as I was that anyone would be able to do justice to the vast multi-generational span of the storyline. But it was great that even an attempt was being made.

Much of why the Foundation trilogy is so timeless has to do with psychohistory—the fictional science around which the plot of the entire series revolves. Using a combination of mathematics, history and sociology, psychohistory makes it possible for the protagonist, Hari Seldon, to predict the flow of future historical events with fine-grained accuracy. The science is based on the premise that, while no one can ever expect to predict what a single human is going to do, if you are modelling a large enough population, it is possible to describe the sequence in which future events will take place with unerring accuracy. The Galactic Empire in which the Foundation series is set has a population in excess of a quintillion people, allowing Seldon to accurately foresee its downfall and develop a plan to shape the course of these future events so that its worst effects could be mitigated.

The appeal of psychohistory lies in its believability. Even though it was conceptualized 80 years ago, well before big data and the miracles of modern data-driven innovation, Asimov intuitively zeroed in on the fact that while human behaviour might be erratic in isolation, when aggregated at population scale, it can become predictable.

Science fiction has always been prescient. Jules Verne wrote about space travel and submarines before anything even approaching the technology required to make this a reality existed. Arthur C. Clarke predicted satellite communication well before the first satellite was placed in geosynchronous orbit. Even Douglas Adams’ vision of a universal translator (the babelfish—a live fish that lives in your ear) long predated Google Translate.

Seeing how science fiction got all these things right, I’d like to think that it is only a matter of time before psychohistory becomes a reality.

A few weeks ago, The Economist weekly had a feature on a new revolution that it called the third wave of economics. Unlike the first wave that was largely driven by individual thinkers who wrote books and papers about a singular big idea, or the second wave that was slightly more experimental with new assertions supported by empirical studies, the third wave of economics is almost entirely powered by the voluminous availability of real-time data.

Tech companies have long been able to draw on the data they generate to predict customer behaviour. E-commerce firms use this information to promote products and ensure their private-label brands are hawked alongside items that are likely to be in strong demand. Streaming media companies use this data to green-light movies and TV series based on what is most likely to find favour with global audiences.

Third-wave economists use similar techniques, drawing on vast amounts of real-time granular data to explain real-world problems. By studying granular mobility data obtained from social media companies and telecom service providers during the pandemic, they were able to understand the impact of lockdown restrictions on disease transmission. By studying live data on the day-to-day movements of ships, they were able to figure out where the bottlenecks lay in supply-chain logistics. In the US, economists analysed live data from restaurant booking sites to gather evidence in support of a stimulus package for the industry.

The availability of real-time granular data is only going to increase. Apart from the fact that more and more people are going online every day, a rapid acceleration in the volume and variety of wearable and Internet of Things devices has resulted in an exponential increase in the availability of new information. This real-time data is available on the cloud in easily accessible, interoperable formats that are well suited for cross-platform analysis. And, as the available volumes of data increase, the accuracy with which early trends can be identified is only going to improve.

While this may not yet be anything like Seldon’s psychohistory, we might already be able to see how, given time, it could develop into something along those lines. If economists can model the data generated by commercial transactions to forecast the behaviour of markets, it can’t be long before this data is used to predict social outcomes. With real-time data at their disposal, it should be possible to build tight feedback loops that constantly refine and recast these models on the basis of how they perform in the real world, with constant iterations helping them improve accuracy.
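The sketch below illustrates such a feedback loop in miniature, under the assumption that the “model” is nothing more sophisticated than a straight-line trend fitted with NumPy and the “real-time feed” is a short list of invented numbers: each new observation is used both to score the previous forecast and to refit the model before the next prediction. It is a toy illustration, not a description of how third-wave economists actually work.

```python
# Minimal sketch of a forecasting feedback loop: refit a simple trend model
# each time a new observation arrives, and track how well the previous
# forecast matched what actually happened. All numbers are invented.
import numpy as np

observations = []     # the series as it "streams in", e.g. daily bookings
forecast_errors = []  # how far each forecast was from the realised value


def refit_and_forecast(history, horizon=1):
    """Fit a straight-line trend to everything seen so far and extrapolate."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, deg=1)
    return slope * (len(history) - 1 + horizon) + intercept


previous_forecast = None
stream = [102.0, 98.0, 110.0, 121.0, 119.0, 134.0]  # stand-in for a live feed

for value in stream:
    if previous_forecast is not None:
        # Feedback step: score yesterday's forecast against today's outcome.
        forecast_errors.append(abs(value - previous_forecast))
    observations.append(value)
    if len(observations) >= 2:
        previous_forecast = refit_and_forecast(observations)

print("mean absolute forecast error:", np.mean(forecast_errors))
```

Swapping the invented list for an actual streaming source and the straight-line fit for a richer model gives the same loop the article envisages: predictions that are continuously checked against outcomes and corrected as new data arrives.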

When we have all the data required to predict outcomes as well as the models needed to do so accurately, we will also have all the tools it takes to shape those outcomes in desirable ways. Even though, at present, predictions from these models are relatively short term, it seems entirely within the realm of possibility that with more data and improved models, it will soon be possible to see further into the future, predicting not just an immediately proximate response but also the sequence of events that will take place. Once that happens, it will only be a matter of time before there is little to distinguish third wave economics from psychohistory.

Rahul Matthan

Source: Mint ePaper, 17/11/21

Tuesday, September 24, 2019

Inequality of another kind


Why the right to Internet access and digital literacy should be recognised as a right in itself

Recently, in Faheema Shirin v. State of Kerala, the Kerala High Court declared the right to Internet access as a fundamental right forming a part of the right to privacy and the right to education under Article 21 of the Constitution. While this is a welcome move, it is important to recognise the right to Internet access as an independent right.

Digital inequality

Inequality is a concept that underpins most interventions focussed on social justice and development. It resembles the Hydra of Greek mythology — as the state attempts to deal with one aspect of inequality, many new aspects keep coming up.
In recent times, several government and private sector services have become digital. Some of them are only available online. This leads to a new kind of inequality, digital inequality, where social and economic backwardness is exacerbated due to information poverty, lack of infrastructure, and lack of digital literacy. According to the Deloitte report, ‘Digital India: Unlocking the Trillion Dollar Opportunity’, in mid-2016, digital literacy in India was less than 10%. We are moving to a global economy where knowledge of digital processes will transform the way in which people work, collaborate, consume information, and entertain themselves. This has been acknowledged in the Sustainable Development Goals as well as by the Indian government and has led to the Digital India mission. Offering services online has cost and efficiency benefits for the government and also allows citizens to bypass lower-level government bureaucracy. However, in the absence of Internet access and digital literacy enabling that access, there will be further exclusion of large parts of the population, exacerbating the already existing digital divide.
Moving governance and service delivery online without the requisite progress in Internet access and digital literacy also does not make economic sense. For instance, Common Service Centres, which operate in rural and remote locations, are physical facilities which help in delivering digital government services and informing communities about government initiatives. While the state may be saving resources by moving services online, it also has to spend resources since a large chunk of citizens cannot access these services. The government has acknowledged this and has initiated certain measures in this regard. The Bharat Net programme, aiming to have an optical fibre network in all gram panchayats, is to act as the infrastructural backbone for having Internet access all across the country. However, the project has consistently missed all its deadlines while the costs involved have doubled. Similarly, the National Digital Literacy Mission has barely touched 1.67% of the population and has been struggling for funds. This is particularly worrying because Internet access and digital literacy are dependent on each other, and creation of digital infrastructure must go hand in hand with the creation of digital skills.

The importance of digital literacy

Internet access and digital literacy have implications beyond access to government services. Digital literacy allows people to access information and services, collaborate, and navigate socio-cultural networks. In fact, the definition of literacy today must include the ability to access and act upon resources and information found online. While the Kerala High Court judgment acknowledges the role of the right to access Internet in accessing other fundamental rights, it is imperative that the right to Internet access and digital literacy be recognised as a right in itself. In this framework the state would have (i) a positive obligation to create infrastructure for a minimum standard and quality of Internet access as well as capacity-building measures which would allow all citizens to be digitally literate and (ii) a negative obligation prohibiting it from engaging in conduct that impedes, obstructs or violates such a right. Recognising the right to internet access and digital literacy will also make it easier to demand accountability from the state, as well as encourage the legislature and the executive to take a more proactive role in furthering this right. The courts have always interpreted Article 21 as a broad spectrum of rights considered incidental and/or integral to the right to life.
A right to Internet access would also further provisions given under Articles 38(2) and 39 of the Constitution. It has now become settled judicial practice to read fundamental rights along with directive principles with a view to defining the scope and ambit of the former. We are living in an ‘information society’. Unequal access to the Internet creates and reproduces socio-economic exclusions. It is important to recognise the right to Internet access and digital literacy to alleviate this situation, and allow citizens increased access to information, services, and the creation of better livelihood opportunities.
Sumeysh Srivastava is a programme manager at Nyaaya, an initiative of the Vidhi Centre for Legal Policy
Source: The Hindu, 24/09/2019