{"id":5097,"date":"2021-02-18T08:29:42","date_gmt":"2021-02-18T08:29:42","guid":{"rendered":"https:\/\/techclot.com\/index.php\/2021\/02\/18\/ai-algorithms-far-from-neutral-in-india\/"},"modified":"2021-02-18T08:29:42","modified_gmt":"2021-02-18T08:29:42","slug":"ai-algorithms-far-from-neutral-in-india","status":"publish","type":"post","link":"https:\/\/techclot.com\/index.php\/2021\/02\/18\/ai-algorithms-far-from-neutral-in-india\/","title":{"rendered":"AI algorithms far from neutral in India"},"content":{"rendered":"<p><a href=\"https:\/\/www.google.com\/url?rct=j&#038;sa=t&#038;url=https:\/\/www.livemint.com\/news\/world\/ai-algorithms-far-from-neutral-in-india-11613617957200.html&#038;ct=ga&#038;cd=CAIyHDkyYmU1MGQ5NjY1NjYxZTA6Y28udWs6ZW46R0I&#038;usg=AFQjCNHqehuNrMiHuyZqyXkbIg0KU6jqCQ\">AI algorithms far from neutral in India<\/a><\/p>\n<p><div><img data-recalc-dims=\"1\" decoding=\"async\" data-src=\"https:\/\/i0.wp.com\/techclot.com\/wp-content\/uploads\/2021\/02\/wDZcET.jpg?w=640&#038;ssl=1\" class=\"ff-og-image-inserted lazyload\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\"><\/div>\n<div class=\"FirstEle\" readability=\"12\">\n<p>\nGovernments are increasingly using artificial intelligence and machine learning in decision-making. But are their underlying algorithms suitable for countries such as India? A recent study says they may not be, since they were designed for Western societies. 
For example, such algorithms may fail to recognize religious or caste biases and could treat oppressed minorities unfairly.<\/p>\n<\/p><\/div>\n<div class=\"paywall\" readability=\"35.678794178794\">\n<p>The study, by Nithya Sambasivan and others at Google Research, US, is based on interviews with 36 academics from various fields and activists working with marginalized communities.<\/p>\n<p>Many of these algorithms work on the assumption that the available data is representative of society. But Indian datasets overrepresent those with internet access, who make up just 50% of the population. As a result, safety apps that invite users to flag unsafe areas in a city can end up marking Dalit and Muslim areas as unsafe, reflecting the prejudices of the apps\u2019 middle- and upper-class users.<\/p>\n<p>Using artificial intelligence is aspirational for the Indian government. Since it is seen as futuristic, it is trusted without question. But this trust can be misplaced. The study\u2019s authors point out that when police use facial recognition to identify protestors, and the system is trained on people under trial, they could end up disproportionately targeting Dalits and Muslims. This is because more than half of India\u2019s undertrials come from these communities.<\/p>\n<p>The authors make several suggestions to correct for these shortcomings. One is to empower marginalized groups with low-cost devices so that they can come online, represent themselves and produce knowledge about their communities. This would make Indian datasets more trustworthy by reducing distortion in the data. 
Another suggestion is to educate journalists, activists and lawyers, so that there is, like in the West, an ecosystem of people who have the technical training to question the use of AI systems and hold practitioners accountable.<\/p>\n<p><strong>Also read<\/strong>: <a href=\"http:\/\/bit.ly\/2OpvCCb\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">\u201cRe-imagining Algorithmic Fairness in India and Beyond&#8221;<\/a><\/p>\n<p><em>Snap Fact features new and interesting reads from the world of research.<\/em><\/p>\n<\/div>\n<p>Published at Thu, 18 Feb 2021 03:11:15 +0000<\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI algorithms far from neutral in India Governments are increasingly using artificial intelligence and 
machine&#8230;<\/p>\n","protected":false},"author":3,"featured_media":5096,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[3],"tags":[],"class_list":["post-5097","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence"],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/techclot.com\/wp-content\/uploads\/2021\/02\/wDZcET.jpg?fit=600%2C338&ssl=1","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p3orZX-1kd","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/posts\/5097","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/comments?post=5097"}],"version-history":[{"count":0,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/posts\/5097\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/media\/5096"}],"wp:attachment":[{"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/media?parent=5097"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/categories?post=5097"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/techclot.com\/index.php\/wp-json\/wp\/v2\/tags?post=5097"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}