Showing posts with label Computer. Show all posts

Thursday, November 11, 2021

Dropbox vs. Google Drive: Which cloud storage is right for you?

Google Drive and Dropbox are two of the most popular options for cloud storage and backup, and the two platforms compete with each other intensely. Which one is right for you? That’s a complex question, and it comes down to several factors: your budget, your total backup needs, and the platforms you want to use your storage on.

Unsurprisingly, Google Drive works best if you’re heavily invested in Google’s other systems: Android, Chrome OS, and the Google Workspace suite of web apps. It’s also a better value in general. Dropbox is a better choice if you’re more concerned with speed and performance, and are willing to pay for it. 

Pricing 

At the consumer level, both companies offer at least one approximately comparable plan for cloud storage. Here’s a quick breakdown of the various plans and prices: 

Storage tier | Google Drive/Google One | Dropbox
2GB | n/a | Free (bonuses available)
15GB | Free | n/a
100GB | $2 a month | n/a
200GB | $3 a month | n/a
2TB | $10 a month | $12 a month (one user only), $20 a month for 6 users
3TB | n/a | $20 a month (one user only)
5TB | $25 a month | $45 per month/3-user minimum, $15 for each extra user
10TB | $50 a month | n/a
20TB | $100 a month | n/a
30TB | $150 a month | n/a
Unlimited | n/a | $75 per month/3-user minimum, $25 for each extra user

As you can see, Google Drive (also known as Google One) offers both more free storage up front and more, cheaper options across its storage tiers. Dropbox users can boost their free storage to as much as 16GB by getting friends to sign up with referral codes. But making users essentially do your marketing for you to get what’s free elsewhere isn’t a great value proposition.

Both companies offer discounts for paying yearly instead of monthly. But in terms of bang for your buck, Dropbox really only makes sense for individual users who want up to two terabytes of storage, or for teams of users who need an absolutely huge amount: more than Google Drive’s maximum 30TB.
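To put that value gap in numbers, here’s a quick back-of-the-envelope comparison of the multi-terabyte tiers quoted above (prices as listed in this article; the tier labels are just shorthand):

```python
# Dollars per terabyte per month at each multi-terabyte tier quoted above.
tiers = {
    "Google One 2TB":  (2, 10.0),
    "Dropbox 2TB":     (2, 12.0),
    "Dropbox 3TB":     (3, 20.0),
    "Google One 5TB":  (5, 25.0),
    "Google One 30TB": (30, 150.0),
}
per_tb = {name: price / tb for name, (tb, price) in tiers.items()}
for name, rate in sorted(per_tb.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${rate:.2f} per TB per month")
```

Google holds steady at $5 per terabyte at every tier, while Dropbox’s individual plans cost $6 or more per terabyte.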

Also, while Google allows free users to access Drive from anywhere and on unlimited devices, Dropbox makes users pay for more than three devices to have easy access via dedicated apps. You can get around this limit by using the Dropbox browser tool, but it’s a pretty huge barrier for free users.

Integration 

Google also wins out on integration with different platforms. The Google Drive system is built into most Android phones and tablets, all Chrome OS-powered devices, and it’s the default way to save files in Google Docs and other Google Workspace tools. On top of that, Google Drive/One apps are available on iOS and Windows, allowing for easy uploads and downloads. 

Dropbox is also available pretty much everywhere, but its integration is less seamless on mobile and Chrome OS. While it’s possible to upload and download to Dropbox on almost any platform (via the browser if not a dedicated app), it may take a few more steps. The three-device limit on a free Dropbox account is a big limiter here, too. 

Both Google Drive and Dropbox integrate with a variety of other often-used services, like Microsoft Office, Slack, Adobe Creative Cloud, Zoom, et cetera. Dropbox even lets you sign in with a Google or Apple account, if you like.

Usability

While Google is a clear winner on value, and they’ve made it easy to access your files on multiple platforms, Dropbox still has an edge on usability, in my opinion. Google Drive tends to treat its storage as one big pool of data, and while it has support for the basic directory system of folders most PC users are used to, the platform would prefer you to use its built-in search tools. 

Dropbox, on the other hand, assumes that you generally know where you put your stuff, and makes it easy to navigate through folders and sub-folders either on an app or in a desktop directory. It’s not effortlessly intuitive, but it’s familiar to anyone who’s been using desktops and laptops for most of their adult lives. It’s a PC-first approach, rather than the (perhaps understandable) mobile-style interface of Drive. 

Performance 

While Google Drive is by no means slow, Dropbox gets the edge in performance, too. When trying to upload massive amounts of both large and small data, Dropbox gave me consistently faster upload speeds. That’s a notable consideration if you plan on hitting your storage hard and frequently. 

Dropbox also has a feature that makes it faster to send files around your local network: LAN sync. This tool allows files added to your Dropbox account to start copying over local Ethernet or Wi-Fi connections even before they’re fully uploaded to the cloud. In practical terms, this makes a file added on your phone (say, a new photo you took of your pet) appear almost instantaneously in the Dropbox folder on your Windows or MacOS computer, so long as both devices are connected to the local network. 
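Dropbox documents a block-based hashing scheme (4MB blocks, each hashed with SHA-256) for identifying file content, and block-level comparison is the general idea that makes this kind of partial, early syncing possible. The sketch below illustrates that idea only; it is not Dropbox’s actual LAN sync protocol, and the function names are invented:

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4MB, the block size Dropbox documents

def block_hashes(path, block_size=BLOCK_SIZE):
    """Hash a file in fixed-size blocks so peers can compare content
    piecewise instead of re-sending the whole file."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def blocks_to_transfer(local_hashes, peer_hashes):
    """Indices of blocks whose hashes differ: only these need to cross
    the (local or wide-area) network."""
    return [i for i, (a, b) in enumerate(zip(local_hashes, peer_hashes))
            if a != b]
```

A peer on the same LAN that already holds most of a file’s blocks only needs the changed ones, which is why a freshly taken photo can show up on your desktop before the cloud upload has finished.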

It’s a small but crucial advantage if what you’re really looking for is a bucket of syncing storage that’s quick and easy to access. 

Sharing storage and PC backup 

As you might expect, Google comes out ahead in terms of sharing storage between family members. While Google One plans can be shared with up to five extra family members (for a total of six users) on the cheapest $2 a month tier, Dropbox only unlocks this option once you start paying $20 a month for 2TB of storage.

Individual files can be shared easily on both platforms, and there’s not much of a difference between Google Drive and Dropbox if you’re sharing accounts. But unless you need a truly massive amount of storage on Dropbox, Google Drive is better in terms of value if you want to share that storage between two or more users. 

Both systems offer tools to back up your PC’s files to the cloud in a system-wide fashion… sort of. While it’s certainly possible to treat Google Drive or Dropbox as a cloud backup system, these platforms really aren’t designed for regular emergency backups. Their slow upload speeds and cumbersome backup tools put them well behind dedicated services like Carbonite or Backblaze. I wouldn’t give either one extra points for this feature. (For more on this topic, see our roundup of the best cloud backup services.)

Extras 

On top of the above tools, there are less tangible advantages to both systems. Purchasing extra Google Drive storage via the Google One system gets you: 

· Shared space for Gmail messages/attachments and Google Photos

· Free access to the Google One VPN on Android

· Discounts on purchases in the Google Store

· Occasional deals on travel and other items

How about Dropbox? Here, too, Dropbox is stingy with its tools, locking some of its more premium options behind more expensive consumer or business accounts. Even full-text search, a fairly basic capability that just about any OS can perform on local files, isn’t available at the free tier. That stinginess hurts Dropbox in this comparison.

Google Drive is the clear winner 

While Dropbox has a superior interface and user experience (at least for people who prefer conventional PC-style file systems), and its performance and LAN sync tools can leave the competition in the dust, Google is offering a better product and a better value on almost all other points of comparison. 

From the price of premium storage, to integration with desktop and mobile operating systems, to less tangible bonuses as part of the Google One system, Drive is a clear winner. That’s doubly true if you’re looking to stick to free tools. 

Which isn’t to say that Dropbox is necessarily a bad choice. That extra performance and better interface might be worth it, especially for users who don’t necessarily need the massive amount of storage Google offers. Just be aware of the trade-off in value.

By Michael Crider

Source:  PCWorld 

Thursday, October 08, 2020

Artificial intelligence solutions built in India can serve the world

 Written by Abhishek Singh

The RAISE 2020 summit (Responsible AI for Social Empowerment) has brought issues around artificial intelligence (AI) to the centre of policy discussions. Countries across the world are making efforts to be part of the AI-led digital economy, which is estimated to contribute around $15.7 trillion to the global economy by 2030. India, with its “AI for All” strategy, a vast pool of AI-trained workforce and an emerging startup ecosystem, has a unique opportunity to be a major contributor to AI-driven solutions that can revolutionise healthcare, agriculture, manufacturing, education and skilling.

AI is the branch of computer science concerned with developing machines that can complete tasks that typically require human intelligence. With the explosion of available data and the expansion of computing capacity, the world is witnessing rapid advancements in AI, machine learning and deep learning, transforming almost all sectors of the economy.

India has a large young population that is skilled and eager to adopt AI. The country has been ranked second on the Stanford AI Vibrancy Index primarily on account of its large AI-trained workforce. Our leading technology institutes like the IITs, IIITs and NITs have the potential to be the cradle of AI researchers and startups. India’s startups are innovating and developing solutions with AI across education, health, financial services and other domains to solve societal problems.

Deep-learning algorithms can give healthcare providers insights for predicting future events for patients. They can also aid in the early detection and prevention of diseases by capturing patients’ vitals. A Bengaluru-based start-up has developed a non-invasive, AI-enabled technology to screen for early signs of breast cancer. Similarly, hospitals in Tamil Nadu are using machine learning algorithms to detect diabetic retinopathy and help address the shortage of eye doctors. For the COVID-19 response, an AI-enabled chatbot was used by MyGov to ensure communications. Similarly, the Indian Council of Medical Research (ICMR) deployed the Watson Assistant on its portal to respond to specific queries on COVID-19 from frontline staff and data entry operators at testing and diagnostic facilities across the country. AI-based applications have helped biopharmaceutical companies significantly shorten the preclinical drug identification and design process from several years to a few days or months. This intervention has been used by pharmaceutical companies to identify possible therapies to help combat the spread of COVID-19 by repurposing drugs.

AI-based solutions for water management, crop insurance and pest control are also being developed. Technologies like image recognition, drones, and automated intelligent monitoring of irrigation systems can help farmers kill weeds more effectively, harvest better crops and ensure higher yields. Voice-based products with strong vernacular language support can help make accurate information more accessible to farmers. A pilot project taken up in three districts — Bhopal, Rajkot and Nanded — has developed an AI-based decision support platform combined with weather sensing technology to give farm-level advisories about weather forecasts and soil moisture information, helping farmers make decisions regarding water and crop management. ICRISAT has developed an AI-powered sowing app, which utilises weather models and data on local crop yield and rainfall to more accurately predict and advise local farmers on when they should plant their seeds. This has led to an increase in yield of 10 to 30 per cent for farmers. AI-based systems can also help in establishing partnerships with financial institutions that have a strong rural presence to provide farmers with access to credit.

An AI-based flood forecasting model that has been implemented in Bihar is now being expanded to cover the whole of India to ensure that around 200 million people across 2,50,000 square kilometres get alerts and warnings 48 hours earlier about impending floods. These alerts are given in nine languages and are localised to specific areas and villages with adequate use of infographics and maps to ensure that it reaches all.

The Central Board of Secondary Education has integrated AI in the school curriculum to ensure that students passing out have the basic knowledge and skills of data science, machine learning and artificial intelligence. The Ministry of Electronics and Information Technology (MeitY) had launched a “Responsible AI for Youth” programme this year in April, wherein more than 11,000 students from government schools completed the basic course in AI.

As AI works for digital inclusion in India, it will have a ripple effect on economic growth and prosperity. Analysts predict that AI can help add up to $957 billion to the Indian economy by 2035. The opportunity for AI in India is colossal, as is the scope for its implementation. By 2025, data and AI can add over $500 billion and almost 20 million jobs to the Indian economy.

India’s “AI for All” strategy focuses on responsible AI, building AI solutions at scale with an intent to make India the AI garage of the world — a trusted nation to which the world can outsource AI-related work. AI solutions built in India will serve the world.

AI derives strength from data. To this end, the government is in the process of putting in place a strong legal framework governing the data of Indians. The legislation stems from a desire to become a highly secure and ethical AI powerhouse. India wants to build a data-rich and data-driven society, as data, through AI, offers limitless opportunities to improve society, empower individuals and increase the ease of doing business.

The RAISE 2020 summit has brought together global experts to create a roadmap for responsible AI — an action plan that can help create replicable models with a strong foundation of ethics built-in. With the participation of more than 72,000 people from 145 countries, RAISE 2020 has become the true global platform for the exchange of ideas and thoughts for creating a robust AI roadmap for the world.

This article first appeared in the print edition on October 8, 2020 under the title ‘Making AI work for India’. The writer is president and CEO, NeGD, CEO MyGov and MD and CEO, Digital India Corporation.

Source: Indian Express, 8/10/20

Thursday, February 14, 2019

Artificial Intelligence models may have a few issues, algorithms don’t

Not all the concerns about AI models are unfounded. But most of the problem lies with the human element in the entire process: the selection of training and testing data.

As machine learning — fashionably branded as artificial intelligence (AI) — continues to flourish, a veritable cottage industry of activists has accused it of reflecting and perpetuating pretty much everything that ails the world: racial inequity, sexism, financial exploitation, big-business connivance, you name it. To be fair, new technologies must be questioned, probed, and “problematized” (to use one of their favourite buzzwords) — and it is indeed a democratic prerogative. That said, there seems to be persistent confusion around the very basics of the discipline.
No other example demonstrates this better than the conflation of objectives, algorithms and models. Simplifying a little, the life cycle of creating a machine learning model from scratch is the following. The first step is to set a high-level practical objective: what the model is supposed to do, such as recognising images or speech. This objective is then translated into a mathematical problem amenable to computing. This computational problem, in turn, is solved using one or more machine learning algorithms: specific mathematical procedures that perform numerical tasks in efficient ways. Up to this stage, no data is involved. The algorithms, by themselves, do not contain any.
The machine learning algorithms are then “trained” on a data sample selected at human discretion from a data pool. In simple terms, this means that the sample data is fed into the algorithms to obtain patterns. Whether these patterns are useful or not (or, often, whether they have predictive value) is verified using “testing” data — a data set different from the training sample, though selected from the same data pool. A machine learning model is born: the algorithm, along with the patterns extracted from the training and testing data sets, which together meet the set practical objective. The model is then let loose on the world. (In a few cases, as the model interacts with this much larger universe of data, it fine-tunes itself and evolves; the model’s interaction with users helps it expand its training data set.) From predictive financial analytics to more glamorous cat-recognising systems, most current AI models follow this life cycle.
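The life cycle above can be run end to end in a few lines. The sketch below uses a deliberately tiny “algorithm” (1-nearest-neighbour) and synthetic data; everything in it is invented for illustration, and the point to notice is that the recipe itself holds no data of its own:

```python
import random

def predict(training_sample, x):
    """1-NN: return the label of the closest training point.
    The recipe holds no data; it only works once a sample is supplied."""
    nearest = min(training_sample, key=lambda point: abs(point[0] - x))
    return nearest[1]

# Data selected "at human discretion" from a pool: values in [0, 10],
# labelled 1 if above 5.0, split into training and testing sets.
random.seed(0)
pool = [(v, int(v > 5.0)) for v in (random.uniform(0, 10) for _ in range(200))]
training_data, testing_data = pool[:150], pool[150:]

# "Testing": verify the patterns have predictive value on held-out data.
accuracy = sum(predict(training_data, x) == y
               for x, y in testing_data) / len(testing_data)
```

The model is `predict` plus `training_data` together; hand the same recipe a different sample and you get a different model.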
To reiterate, the algorithms themselves do not contain data; the model does. Algorithms are simply mathematical recipes and, as such, long predate computers. When you are dividing two numbers by the long division method, you are implementing an algorithm. Simpler still, when you are adding two, you are also implementing another. A commonly used algorithm to classify images — Support Vector Machines — is a simple way to solve a geometrical problem, invented in the early 1960s. Despite the bombastic moniker, it is not a machine, merely a recipe. Another with an equally impressive name, the Perceptron, also has a dry mathematical statement despite sounding like something out of a science fiction film.
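To make “merely a recipe” concrete, the classic Perceptron update rule fits in a dozen lines. The toy points below are invented; note that the procedure contains no data until `fit()` is handed a sample:

```python
def fit(points, labels, epochs=20, lr=0.1):
    """Perceptron: nudge a weight vector toward each misclassified point.
    points are 2-D tuples, labels are -1 or +1."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def classify(model, point):
    (w, b), (x1, x2) = model, point
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

# Invented, linearly separable toy data.
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (2.0, 2.0), (1.5, 1.8), (1.9, 2.1)]
labels = [-1, -1, -1, 1, 1, 1]
model = fit(points, labels)
```

The two functions are the algorithm; `model` (the fitted weights) is what the training sample turns them into.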
All of the above would have sounded like idle pedantry had prominent voices not continued to conflate models with algorithms. Last month, America’s latest cause célèbre, Congresswoman Alexandria Ocasio-Cortez, noted that “algorithms are still pegged to basic human assumptions”. Unless you count basic logic as one such impediment, no assumptions hide behind an algorithm. An American professor, meanwhile, published a book titled “Algorithms of Oppression.” While all of this may be for rhetorical effect — with algorithms as shorthand for artificial intelligence whatchamacallit — it reveals a cavalier attitude towards these notions, especially among those who are in positions to shape technology policy.
This is not to say that concerns about AI models are unfounded. But most of the problem lies with the human element in the entire process: the selection of training and testing data. Suppose a developer draws on historical incarceration data to build a model to predict criminal behaviour. Chances are that the results will appear skewed and reflect human biases. Similarly, when Amazon’s voice-responsive speaker Alexa told a user to “kill your foster parents”, it was pointed out that Reddit (not the politest of chat platforms) was part of its training set. Finally, as a recent MIT Technology Review article put it, the conversion of a practical objective into a computational problem (again, a human activity) may also introduce biases into an AI model. As an example, the article asked, how does one operationalise a fair definition of “creditworthiness” for an algorithm to understand and process?
At the end, the issue is not whether AI systems are problematic in themselves. It is that we are, as we choose data and definitions to feed into algorithms. In that, technology is often a mirror we hold in front of ourselves. But algorithms are independent of our predilections, built, as they are, only out of logic.
Abhijnan Rej is a New Delhi-based security analyst and mathematical scientist
Source: Hindustan Times, 14/02/2019

Monday, January 07, 2019

Artificial Intelligence is not the silver bullet for human development

If its potential to do good is to be fully realised, we must focus more on the obstacles that are preventing its uptake.

The excitement surrounding artificial intelligence nowadays reflects not only how AI applications could transform businesses and economies, but also the hope that they can address challenges like cancer and climate change. The idea that artificial intelligence could revolutionise human well-being is obviously appealing, but just how realistic is it?
To answer that question, the McKinsey Global Institute has examined more than 150 scenarios in which artificial intelligence is being applied or could be applied for social good. What we found is that artificial intelligence could make a powerful contribution to resolving many types of societal challenges, but it is not a silver bullet – at least not yet. While artificial intelligence’s reach is broad, development bottlenecks and application risks must be overcome before the benefits can be realised on a global scale.
To be sure, artificial intelligence is already changing how we tackle human-development challenges. In 2017, for example, object-detection software and satellite imagery aided rescuers in Houston as they navigated the aftermath of Hurricane Harvey. In Africa, algorithms have helped reduce poaching in wildlife parks. In Denmark, voice-recognition programmes are used in emergency calls to detect whether callers are experiencing cardiac arrest. And at the MIT Media Lab near Boston, researchers have used “reinforcement learning” in simulated clinical trials involving patients with glioblastoma, the most aggressive form of brain cancer, to reduce chemotherapy doses.
Moreover, this is only a fraction of what is possible. Artificial intelligence can already detect early signs of diabetes from heart rate sensor data, help children with autism manage their emotions, and guide the visually impaired. If these innovations were widely available and used, the health and social benefits would be immense. In fact, our assessment concludes that artificial intelligence technologies could accelerate progress on each of the 17 United Nations Sustainable Development Goals.
But if any of these artificial intelligence solutions are to make a difference globally, their use must be scaled up dramatically. To do that, we must first address developmental obstacles and, at the same time, mitigate risks that could render artificial intelligence technologies more harmful than helpful.
On the development side, data accessibility is among the most significant hurdles. In many cases, sensitive or commercially viable data that have societal applications are privately owned and not accessible to non-governmental organisations. In other cases, bureaucratic inertia keeps otherwise useful data locked up.
So-called last-mile implementation challenges are another common problem. Even in cases where data are available and the technology is mature, the dearth of data scientists can make it difficult to apply artificial intelligence solutions locally. One way to address the shortage of workers with the skills needed to strengthen and implement artificial intelligence capabilities is for companies that employ such workers to devote more time and resources to beneficial causes. They should encourage artificial intelligence experts to take on pro bono projects and reward them for doing so.
There are of course risks. Artificial intelligence tools and techniques can be misused, intentionally or inadvertently. For example, biases can be embedded in artificial intelligence algorithms or data sets, and this can amplify existing inequalities when the applications are used. According to one academic study, error rates for facial analysis software are less than 1% for light-skinned men, but as high as 35% for dark-skinned women, which raises important questions about how to account for human prejudice in artificial intelligence programming. Another obvious risk is misuse of artificial intelligence by those intent on threatening individuals’ physical, digital, financial, and emotional security.
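Disparities like those error-rate figures come out of straightforward audits: compute an error rate per demographic group and compare. The records below are fabricated to echo the 1% vs. 35% gap; the group names and counts are illustrative only, not the study’s data:

```python
def error_rates(records):
    """records: (group, predicted_label, actual_label) triples.
    Returns the fraction of wrong predictions per group."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {group: errors.get(group, 0) / n for group, n in totals.items()}

# Fabricated audit sample: 100 faces per group, 1 vs 35 misidentifications.
sample = ([("group_a", 1, 1)] * 99 + [("group_a", 0, 1)] * 1
          + [("group_b", 1, 1)] * 65 + [("group_b", 0, 1)] * 35)
rates = error_rates(sample)
```

An overall accuracy number would hide this entirely; only the per-group breakdown surfaces the 35-fold gap.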
Stakeholders from the private and public sectors must work together to address these issues. To increase the availability of data, for example, public officials and private actors should grant broader access to those seeking to use data for initiatives that serve the public good. Already, satellite companies participate in an international agreement that commits them to providing open access during emergencies. Data-dependent partnerships like this one must be expanded and become a feature of firms’ operational routines.
Artificial intelligence is fast becoming an invaluable part of the human-development toolkit. But if its potential to do good globally is to be fully realised, proponents must focus less on the hype and more on the obstacles that are preventing its uptake.
Michael Chui is a partner at the McKinsey Global Institute. Martin Harrysson is a partner in McKinsey & Company’s Silicon Valley office.
Source: Hindustan Times, 7/01/2019

Thursday, June 15, 2017

Big data, big dangers


India needs to negotiate the world of big data technology with adequate safeguards

With the Supreme Court turning its gaze on privacy issues associated with Aadhaar, can we take a moment to look to the myriad ways in which our privacy is being assaulted in the digital world? When my neighbour across the street got too curious about my life, I installed curtains to block his gaze. But what about when the invisible drones at Facebook send him a message that one of my colleagues has tagged me enjoying a music festival in Goa and he might want to “like” this picture? How do we draw a curtain around our digital lives?
Think beyond the nosy neighbour to the corporations that want to utilise minutia of your life to sell products that you may or may not need. Corporations have always been interested in understanding consumer behaviour and been collecting data about users using their products or service. What is unique about Big Data Technology (BDT) is the scale at which this data collection can take place. For instance, Google has stored petabytes of information about billions of people and their online browsing habits. Similarly, Facebook and Amazon have collected information about social networks. In addition to using this data to improve products or services that these corporations offer, the stored data is available also to highest bidders and governments of nations where these companies are based.

Looming dangers

One major problem with collecting and storing such vast amounts of data overseas is the ability of owners of such data stores to violate the privacy of people. Even if the primary collectors of data may not engage in this behaviour, foreign governments or rogue multinationals could clandestinely access these vast pools of personal data in order to affect policies of a nation. Such knowledge could prove toxic and detrimental in the hands of unscrupulous elements or hostile foreign governments. The alleged Russian interference in the U.S. election tells us that these possibilities are not simply science fiction fantasies.
The other major problem is the potential drain of economic wealth of a nation. Currently, the corporations collecting such vast amounts of data are all based in developed countries, mostly in the U.S. Most emerging economies, including India, have neither the knowledge nor the favourable environment for businesses that collect data on such a vast scale. The advertising revenue that is currently earned by local newspapers or other media companies would eventually start to flow outside the country to overseas multinationals. A measure of this effect can already be seen in the way that consumer dollars are being redistributed across the spectrum of U.S. businesses touching them. For instance, communication carriers such as AT&T, Verizon and cable networks find that their revenue has remained flat to slightly falling in the last five years whereas the revenues of Google, which depends on these carriers to provide connectivity to consumers, are increasing exponentially. Unless we employ some countermeasures, we should expect the same phenomenon to repeat itself for corporations based in India.
Sadly, BDT is a tiger the world is destined to ride. It is no longer possible to safely disembark, but staying on is not without its perils. The only way to negotiate this brave new world is to make sure that India does it on her own terms and finds a way to protect both financial rewards and ensure individual privacy and national security through appropriate safeguards.

What India can do

China has apparently understood this dynamic and taken measures to counter this threat. It has encouraged the formation of large Internet companies such as Baidu and Alibaba and deterred Google and others from having major market share in China by using informal trade restraints and anti-monopoly rules against them. India may not be able to emulate China in this way, but we could take other countermeasures to preserve our digital economy independence. The heart of building companies using BDT is their ability to build sophisticated super-large data centres. By providing appropriate subsidies such as cheap power and real estate, and cheap network bandwidth to those data centres, one would encourage our industries to be able to build and retain data within our boundaries. In the short term, we should also create a policy framework that encourages overseas multinationals such as Google and Amazon to build large data centres in India and to retain the bulk of raw data collected in India within our national geographical boundaries.
Moreover, we should also build research and development activities in Big Data Science and data centre technology at our academic and research institutions that allow for better understanding of the way in which BDT can be limited to reduce the risk of deductive disclosure at an individual level. This will require developing software and training for individuals on how to protect their privacy and for organisations and government officials to put in place strict firewalls, data backup and secure erasure procedures. In the West, we already are seeing a number of start-ups developing technology that enables users to control who gets access to the data about their behaviour patterns in the digital world.
The government has approved the “Digital India” Plan that aims to connect 2.5 lakh villages to the Internet by 2019 and to bring Wi-Fi access to 2.5 lakh schools, all universities and public places in major cities and major tourist centres. This is indeed a very desirable policy step. But unless we evolve appropriate policies to counter the side effects of the Digital Plan, this could also lead to the unforeseen eColonisation of India.
Hemant Kanakia is a computer scientist and investor in high technology companies. The views expressed are personal
Source: The Hindu, 15-06-2017

Wednesday, February 22, 2017

Indian government launches free antivirus for PC and mobile phones


The Indian government has approved Rs 900 crore for the National Cyber Coordination Centre.
The IT Ministry today launched an anti-malware analysis centre that will facilitate free anti-virus tools for computers and mobile phones in the country, with a project cost of Rs 90 crore spread over a period of five years.
"I would like ISPs (Internet Service Provider) to encourage their consumers to come on board, there is a free service available. Come and use it in the event some malware has sneaked into the system," IT Minister Ravi Shankar Prasad said at the launch of Botnet Cleaning and Malware Analysis Centre.
The Indian cyber security watchdog CERT-In will collect data on infected systems and send it to ISPs and banks. These ISPs and banks will identify the users and provide them with a link to the centre, launched under the name Cyber Swachhta Kendra.
The user will be able to download anti-virus or anti-malware tools to disinfect their devices.
"The project has budget outlay of Rs 90 crore spread over period of 5 years," CertIn Director General Sanjay Bahl said.
As of now, 58 ISPs and 13 banks have come on board to use the system.
The minister directed the Indian Computer Emergency Response Team (CERT-In) to also set up the National Cyber Coordination Centre (NCCC) by June.
The government has approved Rs 900 crore for NCCC which will monitor and handle cyber attacks on Indian internet space in real time.
"Safety and security is integral. As the Prime Minister said cyber threat is akin to bloodless war. I don't have slightest doubt cyber security is not only going to be big area of Digital Swachh Bharat but also going to be big area of digital growth, digital employment and digital commerce," Prasad said.
Source: DNA, 21-02-2017

Friday, January 20, 2017

Indian data protection norms insufficient: report

Indian data protection laws are inadequate and cover only some of the security, privacy and other issues addressed by similar laws in other countries, a report said.
The report, authored by Sreenidhi Srinivasan and Namrata Mukherjee, research fellows at Vidhi Centre for Legal Policy, analysed the current rules and norms in place for data protection.
“At a time when India is seeking to develop as a digital economy, it is imperative to have in place an effective regime for protection of personal information,” the report said.
India is pushing digital (and cashless) transactions, some linked to the biometrics-based Aadhaar number which has been assigned to around a billion Indians by the Unique Identification Authority of India.
The paper compared the Indian data protection regime with international ones and found it lacking on several counts.
The paper identified the lack of a statute expressly recognising the privacy rights of individuals and their rights over their personal data, especially in their interplay with non-state actors and firms.
The Vidhi paper enumerated important components of a robust data protection regime: entities required to comply with data protection norms; the kind of information to be protected; consent for collecting information; and the individual’s right to access his or her information held by another organization.
The 77-page report also provided a framework for management of personal data, which could serve as a model for a data protection statute or be assimilated in the IT Rules.
The paper argued for extension of data protection to all personal data and not merely sensitive personal data (the latter includes passwords, financial information, health conditions, medical history, sexual orientation and biometric information). It also suggested seeking the consent of individuals before collecting all personal information.
Rahul Matthan, partner at Trilegal and a Mint columnist, appreciated the work done by Vidhi on data protection, but said he’d like a model of data protection more suited to the Indian context.
“Their approach seems to be to analyse the way international statutes have been written and try to see what’s appropriate in the Indian context. While that is the global best practice for new sector legislation I think there is merit in also considering an approach that allows us to build something from the ground up which is appropriate for us in our context,” he said.
Matthan pointed to flaws in the consent model for protecting data. “For instance, the consent framework, which is a world standard, is universally recognized as flawed and very hard to administer given the amount of data being collected from us and in the context of the Internet of Things where machines collect data automatically,” he said.
Pranesh Prakash, policy director at think tank Centre for Internet and Society, said the paper did not delve into a 2012 report of a group of experts on privacy chaired by justice A.P. Shah. That report made suggestions including a technology-neutral law on privacy applicable to both government and private agencies. “I look forward to examining the report and working with Vidhi and others to advance a robust data protection regime in India,” Prakash said.

Source: Mint ePaper, 20-01-2017


Wednesday, June 29, 2016

Tor, a software that masks location, identity of internet users

A small library in New Hampshire sits at the forefront of global efforts to promote privacy and fight government surveillance, to the consternation of law enforcement. The Kilton Public Library in Lebanon, a city of 13,000, last year became the nation's first library to use Tor, software that masks the location and identity of internet users, in a pilot project initiated by the Cambridge, Massachusetts-based Library Freedom Project. Users the world over can and do have their searches randomly routed through the library. Computers that have Tor loaded on them bounce internet searches through a random pathway, or series of relays, of other computers equipped with Tor. This network of virtual tunnels masks the location and IP address of the person doing the search.
In a feature that makes Kilton unique among US libraries, it also has a computer with a Tor exit relay, which delivers the internet query to the destination site and becomes identified as the last-known source of the query.
Alison Macrina, founder and director of the Library Freedom Project, said her organisation chose Kilton for its pilot project because it had embraced other privacy-enhancing software the project recommended and because she knew the library had the know-how to take it to the complicated exit-relay stage.
Tor can protect shoppers, victims of domestic violence, whistleblowers, dissidents, undercover agents and criminals alike. A recent routine internet search using Tor on one of Kilton's computers was routed through Ukraine, Germany and the Netherlands.
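The relay scheme described above can be illustrated with a toy sketch. This is not Tor's real cryptography (actual Tor builds circuits with TLS and layered public-key encryption); the XOR "cipher", keys and relay names below are purely hypothetical, and serve only to show how the client wraps one layer per relay and each relay peels exactly one layer, learning only the next hop.

```python
# A toy illustration of Tor-style onion routing (NOT real Tor crypto).
import hashlib

def _xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a SHA-256-derived keystream.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def build_onion(request: bytes, relay_keys: list) -> bytes:
    # Encrypt innermost-first so the entry relay's layer ends up outermost.
    onion = request
    for key in reversed(relay_keys):
        onion = _xor_stream(onion, key)
    return onion

def peel_layer(onion: bytes, key: bytes) -> bytes:
    # XOR is symmetric, so peeling a layer is the same operation as wrapping it.
    return _xor_stream(onion, key)

# Hypothetical three-hop path: entry, middle, exit.
relay_keys = [b"entry-relay", b"middle-relay", b"exit-relay"]
onion = build_onion(b"GET /search?q=privacy", relay_keys)

# Each relay along the path removes one layer; only the exit relay
# recovers the plaintext request and forwards it to the destination,
# becoming the query's last-known source.
for key in relay_keys:
    onion = peel_layer(onion, key)

assert onion == b"GET /search?q=privacy"
```

The sketch shows why the exit relay, like Kilton's, draws the attention: it is the only hop that sees the final request, while no single relay ever sees both the user's address and the destination.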
“Libraries are bastions of freedom,” said Shari Steele, executive director of the Tor Project, a nonprofit started in 2004 to promote the use of Tor worldwide. “They are a great natural ally.”
Local police asked the Kilton library last July to stop using Tor. Its use was suspended until the library board voted unanimously at a standing-room-only meeting in September to maintain the Tor relay. “Kilton's really committed as a library to the values of intellectual privacy,” Macrina said.
“In New Hampshire, there's a lot of activism fighting surveillance. It's the 'Live Free or Die' place, and they really mean it.”

Source: Times of India, 29-06-2016

Wednesday, February 10, 2016

Internet power to the people

TRAI’s vigorous endorsement of net neutrality safeguards the Internet against platform monopolies, retaining the ability for users not only to be consumers but also creators of content

The regulations issued by the Telecom Regulatory Authority of India (TRAI) barring differential pricing of data based on content have created a global impact. A friend, who runs a major international software company, called it the most important victory for the people in the tech space in the last 20 years. India has joined a select few countries that have protected net neutrality and barred zero-rating services.
What makes this “victory” even more surprising was the complete asymmetry of the two sides involved. On one side was Facebook, a company whose market cap is greater than the GDPs of 144 countries, allied with a bunch of big telecom companies (telcos). They had already “won” easy victories for their platform in a number of countries, and felt India would be no exception. They had an ad campaign that estimates put at Rs.400 crore. On the other side was a motley group of free software and Internet activists, with unlikely allies such as comedy group AIB, a bunch of start-ups, and some political figures and formations.
The argument that Facebook was using appeared simple. Why should anybody deny the poor getting some access to the Internet — even if this was limited? Isn’t something better than nothing? Mark Zuckerberg not only wrote articles terming his opponents “Net Neutrality fundamentalists”, but also appeared in advertorials in the electronic media to push Free Basics. Some commentators wrote plugs for Facebook in the guise of opinion pieces, all more or less posing different variations of the broad theme that Zuckerberg’s heart beats for the Indian poor.
To beat back such an offensive, backed by the full power of Facebook’s media blitz, was no ordinary event. So why did Facebook’s campaign fail?
People’s campaign prevails

First is, of course, the energy and the creativity of the groups fighting Free Basics. They not only ran an innovative and creative campaign, but were also able to bring tech activists on to the streets. What surprised even them was the response of the people.
I am convinced that Facebook and their ad agencies completely underestimated the Indian public. Even if all of them do not use the Internet, they understand the difference between having access to the full Internet, with nearly a billion websites, and the so-called Free Basics platform that provides Facebook and a few other sites. They are sophisticated enough to know that Free Basics would not offer them any of the things they really want to access. No search, no email, no access to various services; no pictures or video clips for entertainment either. No access to the rich diversity of views and material on the Internet. Only a sterile walled garden where, at best, you can see what your friends are doing.
A level playing field

What is the flip side of such a platform? Other people who want to have the full Internet could still access it, so why is Facebook’s Free Basics harmful?
TRAI has correctly pointed out that the tariff principle at play is whether we can have differential pricing of data based on the content we see. If we accept this principle, what then prevents telcos from charging various websites and Internet services for accessing their subscribers? Accepting that one form of price discrimination is okay opens the door to all other forms of discrimination as well.
This is where Net Neutrality comes in. The most important characteristic of the Internet is that, whether it is the richest corporation in the world or an individual writing a blog, both are treated identically. If the blogger had to negotiate with the Internet service providers (ISPs), in today's world the telcos, to reach the telco subscribers, she would have to negotiate with thousands of such ISPs. Telcos would then be the gatekeepers of the Internet. Only the biggest corporations could then survive on the Net. This is how the cable TV model works; for their channels to be carried, the TV channels have to negotiate with all the platforms such as Dish TV, Tata Sky, etc. If we accept that telcos can act as gatekeepers, we would then lose what has given the Internet its unique power: the ability for us not only to be consumers but also creators of content.
In the Internet's nascent phase, the big telco monopolies tried to levy a “tax” on all Internet content providers. The Internet companies were then the new kids on the block. They and the Internet user community fought back such attempts. This was the first net neutrality war, and it established the principle of non-discrimination on the Internet between different types of content or sites.
The scenario has changed dramatically today. We have the emergence of powerful Internet monopolies that are much bigger than the telcos. Not surprisingly, these companies now see the virtues of monopoly. They would like to combine with telcos to create monopolies for their platforms, ensuring that they control the future of the Internet and freeze their competition out.
Today, we have nearly a billion websites on the Internet and 3.5 billion users. This means that nearly one out of three users is both a content provider and a content consumer. What the Internet monopolies want is for us to be passive consumers of their content, or at best to generate captive content only for their platforms. This is why they have joined hands with telcos to offer various forms of zero-rating services.
Future-proofing policies

The two most common forms of zero rating used by telcos are (a) no data charges for a select set of sites, e.g. Facebook’s Free Basics, and (b) a few content providers such as Netflix not being subjected to data caps by telcos. The TRAI order bars both these forms.
The other issue that TRAI dealt with is whether regulatory policies should be crafted to prevent harm (ex ante) or be applied only after harm has been established. The argument of the telcos has been, “prove there has been harm, otherwise we should be allowed to do as we please”. TRAI has again correctly pointed out that not crafting the right policies for the Internet would distort the basic character of the Internet itself. It would then help the well-heeled, who would be able to take advantage of a lack of policy. The TRAI order also points out that without the right policies, each tariff proposal would have to be analysed on a case-by-case basis, imposing high regulatory overheads.
The last issue we need to examine is how a powerful monopoly can bend policy by virtue of its control over its users. Facebook not only launched a media blitz but also ran a completely misleading campaign on Free Basics to its 130 million Indian subscribers. Through its various pop-ups and user interface, it pressured its users to send TRAI a boilerplate statement of support for Free Basics. It even painted this as providing basic Internet to the poor, without informing its users that Facebook was the sole arbiter of what constitutes a basic Internet.
The question is: can a platform monopoly of the kind Facebook and Google hold use that monopoly to run a campaign on a country's policy? Facebook is a foreign entity and has argued before Indian courts that it is not accountable to Indian laws. Should such entities have such power over our people's lives?
A media company is supposed to differentiate between advertisements and news. Facebook did not identify its plug for its Free Basics platform on Facebook as opinion but presented it as truth. How should online media conduct itself in the future on such issues?
TRAI rebuked Facebook for its attempt to convert TRAI's consultation on differential pricing into a numbers game. TRAI wanted clear answers to the questions it had posed, not boilerplate emails saying how people loved Free Basics. But the question of the rights and duties of such platform monopolies towards their users remains unanswered. With Google and Facebook emerging bigger than many nation states, this is the key question for the Internet in the future.
(Prabir Purkayastha is Chairperson, Knowledge Commons, and Vice-President, Free Software Movement of India.)
Source: The Hindu, 10-02-2016