Author Topic: AI and its Implications  (Read 1114 times)


Surly1

AI and its Implications
« on: October 24, 2019, 06:52:24 am »
Google Is Coming for Your Face
Personal data is routinely harvested from the most vulnerable populations, without transparency, regulation, or principles—and this should concern us all.

By Malka Older




Last week, The New York Times reported on the federal government’s plans to collect DNA samples from people in immigration custody, including asylum seekers. This is an infringement of civil rights and privacy, and opens the door to further misuse of data in the long term. There is no reason for people in custody to consent to this collection of personal data. Nor is there any clarity on the limits on how this data may be used in the future. The DNA samples will go into the FBI’s criminal database, even though requesting asylum is not a crime and entering the country illegally is only a misdemeanor. That makes the practice not only an invasion of privacy in the present but also potentially a way to skew statistics and arguments in debates over immigration in the future.

The collection of immigrant DNA is not an isolated policy. All around the world, personal data is harvested from the most vulnerable populations, without transparency, regulation, or principles. It’s a pattern we should all be concerned about, because it continues right up to the user agreements we click on again and again.

In February, the World Food Program (WFP) announced a five-year partnership with the data analytics company Palantir Technologies. While the WFP claimed that this partnership would help make emergency assistance to refugees and other food-insecure populations more efficient, it was broadly criticized within the international aid community for potential infringement of privacy. A group of researchers and data-focused organizations, including the Engine Room, the AI Now Institute, and DataKind, sent an open letter to the WFP, expressing their concerns over the lack of transparency in the agreement and the potential for de-anonymization, bias, violation of rights, and undermining of humanitarian principles, among other issues.

Many humanitarian agencies are struggling with how to integrate modern data collection and analysis into their work. Improvements in data technology offer the potential to improve processes and ease the challenges of working in chaotic, largely informal environments (as well as appealing to donors), but they also raise risks in terms of privacy, exposure, and the necessity of partnering with private-sector companies that may wish to profit from access to that data.

A man has his face painted to represent efforts to defeat facial recognition during a 2018 protest at Amazon’s headquarters over the company’s contracts with Palantir. (AP Photo / Elaine Thompson)

In August, for example, the United Nations High Commissioner for Refugees trumpeted its achievement in providing biometric identity cards to Rohingya refugees from Myanmar in Bangladesh. What wasn’t celebrated was the fact that refugees protested the cards both because of the way their identities were defined—the cards did not allow the option of identifying as Rohingya, calling them only “Myanmar nationals”—and out of concern that the biometric data might be shared with Myanmar on repatriation, raising echoes of the role ethnically marked identity cards played in the Rwandan genocide, among others. Writing about the Rohingya biometrics collection in the journal Social Media + Society, Mirca Madianou describes these initiatives as a kind of “techno-colonialism” in which “digital innovation and data practices reproduce the power asymmetries of humanitarianism, and…become constitutive of humanitarian crises themselves.”

Unprincipled data collection is not limited to refugee populations. The New York Daily News reported on Wednesday that Google has been using temporary employees, paid through a third party, to collect facial scans of dark-skinned people in an attempt to better balance its facial recognition database. According to the article, temporary workers were told “to go after people of color, conceal the fact that people’s faces were being recorded and even lie to maximize their data collections.” Target populations included homeless people and students. They were offered a five-dollar gift card (which is more than refugees and immigrant detainees get for their data) but, critically, were never informed about how the facial scans would be used, stored, or, apparently, collected.

A Google spokesperson told the Daily News that the data was being collected “to build fairness into Pixel 4’s face unlock feature” in the interests of “building an inclusive product.” Leaving aside whether contributing to the technology of a reportedly $900 phone is worthwhile for a homeless person, the collection of this data without formal consent or legal agreements leaves it open to being used for any number of other purposes, such as the policing of the homeless people who contributed it.

For governments, coerced data collection represents a way of making these chaotic populations visible, and therefore, in theory, controllable. These are also groups with very little recourse for rejecting data collection, offering states the opportunity to test out technologies of the future, like biometric identity cards, that might eventually become nationwide initiatives. For the private firms inevitably involved in implementing the complexities of data collection and management, these groups represent untapped value to surveillance capitalism, a term coined by Shoshana Zuboff to refer to the way corporations extract profit from data analysis; for example, by tracking behavior on Facebook or in Google searches to present targeted advertisements. In general, refugees, asylum seekers, and homeless people give companies far less data than the rest of us, meaning that there is still information to extract from them, compile, and sell for profits that the contributors of the data will never see.

One concern with this kind of unethical data sourcing is that information collected for one stated goal may be used for another: In a recent New York Times Magazine article, McKenzie Funk details how data analytics developed during the previous administration to triage targeting toward “felons, not families” are now being used to track all immigrants, regardless of criminal status. Another issue is how the data is stored and protected, and how it might be misused by other actors in the case of a breach. A major concern for the Rohingya refugees was what might happen to them if their biometric data fell into the hands of the very groups that attacked them for their identity.

Both of these concerns should sound familiar to all of us. It seems like we hear about new data breaches on a daily basis, offering up the medical records, Social Security numbers, and shopping history of millions of customers to hackers and scammers. But even without insecurities, our data is routinely vacuumed up through our cell phones, browsers, and interactions with state bureaucracy (e.g., driver’s licenses)—and misused in immoral, illegal, or dangerous ways. Facebook has been forced to admit again and again that it has been sharing the detailed information it gets from tracking its users with third parties, ranging from apps to advertisers to firms attempting to influence the political sphere, like Cambridge Analytica. Apple has been accused of similar misuse.

Refugees or detained asylum seekers have less choice than most people to opt out of certain terms of service. But these coercive mechanisms affect us all. Getting a five-dollar gift card (not even cash!) may seem like a low price for which to sell a scan of your face, but it isn’t so different from what happens when we willingly click “I Agree” on those terms-of-service boxes. Even if we’re wary of the way our data is being used, it’s getting harder and harder to avoid giving it out. As our digital identities become increasingly entangled with functions like credit reporting, paying bills, and buying insurance, avoiding the big tech companies becomes more and more difficult. But when we opt in, we do so on the company’s terms—not our own. User agreements and privacy policies are notoriously difficult for even experts to understand, and a new Pew Research study showed that most US citizens are short on digital knowledge and particularly lacking in understanding of privacy and cybersecurity.

Like the subjects of Google’s unethical facial scans and the recipients of biometric identity cards in refugee camps, we have little control over how the data is used once we’ve given it up, and no meaningful metric for deciding when giving up our information becomes a worthwhile trade-off. We should be shocked by how companies and governments are abusing the data and privacy rights of the most vulnerable groups and individuals. But we should also recognize that it’s not so different from the compromises we are all routinely asked to make ourselves.

Dr. Malka Older is an affiliated research fellow at the Centre for the Sociology of Organizations at Sciences Po and the author of an acclaimed trilogy of science-fiction political thrillers starting with Infomocracy. Her new collection, …and Other Disasters, comes out November 16.


Surly1

Algorithms Are Designed to Addict Us, and the Consequences Go Beyond Wasted Time.



Thomas Hornigold

Goethe’s The Sorcerer’s Apprentice is a classic example among many stories on a similar theme. The young apprentice enchants a broom to mop the floor, avoiding some work in the process. But the enchantment quickly spirals out of control: the broom, monomaniacally focused on its task but unconscious of the consequences, ends up flooding the room.

The classic fear surrounding hypothetical, superintelligent AI is that we might give it the wrong goal, or insufficient constraints. Even in the well-developed field of narrow AI, we see that machine learning algorithms are very capable of finding unexpected means and unintended ways to achieve their goals. For example, let loose in the structured environment of video games, where a simple function—points scored—is to be maximized, they often find new exploits or cheats to win without playing.

In some ways, YouTube’s algorithm is an immensely complicated beast: it serves up billions of recommendations a day. But its goals, at least originally, were fairly simple: maximize the likelihood that the user will click on a video, and the length of time they spend on YouTube. It has been stunningly successful: 70 percent of time spent on YouTube is watching recommended videos, amounting to 700 million hours a day. Every day, humanity as a collective spends a thousand lifetimes watching YouTube’s recommended videos.

The design of this algorithm, of course, is driven by YouTube’s parent company, Alphabet, maximizing its own goal: advertising revenue, and hence the profitability of the company. Practically everything else that happens is a side effect. The neural nets of YouTube’s algorithm form connections—statistical weightings that favor some pathways over others—based on the colossal amount of data that we all generate by using the site. It may seem an innocuous or even sensible way to determine what people want to see; but without oversight, the unintended consequences can be nasty.
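The article describes the recommender only at the level of its objective: the predicted likelihood of a click and the time a user is likely to spend watching. As a purely illustrative sketch (the feature names, numbers, and video titles below are invented, not YouTube's actual system), that objective amounts to ranking candidates by expected engagement:

```python
# Toy sketch of the objective described above: rank candidate videos by
# predicted click probability times expected watch time. All names and
# numbers are hypothetical; this is not YouTube's actual system.
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    p_click: float        # model's predicted probability the user clicks
    exp_watch_min: float  # predicted minutes watched if the user clicks

def engagement_score(c: Candidate) -> float:
    # Expected minutes of watch time gained by recommending this video.
    return c.p_click * c.exp_watch_min

def recommend(candidates: list[Candidate], k: int = 3) -> list[Candidate]:
    # Nothing in this objective measures accuracy or user well-being,
    # only expected engagement -- which is the article's point.
    return sorted(candidates, key=engagement_score, reverse=True)[:k]

if __name__ == "__main__":
    pool = [
        Candidate("astronomy_explainer", p_click=0.10, exp_watch_min=12.0),
        Candidate("conspiracy_marathon_1", p_click=0.15, exp_watch_min=25.0),
        Candidate("cat_video", p_click=0.30, exp_watch_min=3.0),
    ]
    for c in recommend(pool):
        print(c.video_id, round(engagement_score(c), 2))
```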

Guillaume Chaslot, a former engineer at YouTube, has helped to expose some of these. Speaking to The Next Web, he pointed out, “The problem is that the AI isn’t built to help you get what you want—it’s built to get you addicted to YouTube. Recommendations were designed to waste your time.”

More than this: they can waste your time in harmful ways. Inflammatory, conspiratorial content generates clicks and engagement. If a small subset of users watches hours upon hours of political or conspiracy-theory content, the pathways in the neural net that recommend this content are reinforced.

The result is that users can begin with innocuous searches for relatively mild content, and find themselves quickly dragged towards extremist or conspiratorial material. A survey of 30 attendees at a Flat Earth conference showed that all but one originally came upon the Flat Earth conspiracy via YouTube, with the lone dissenter exposed to the ideas by family members who were in turn converted by YouTube.

Many readers (and this writer) know the experience of being sucked into a “wormhole” of related videos and content when browsing social media. But these wormholes can be extremely dark. Recently, a recommendation network of videos of children, frequented by those who wanted to exploit children, was discovered on YouTube. In TechCrunch’s investigation, it took only a few recommendation clicks from a (somewhat raunchy) search for adults in bikinis to reach this exploitative content.

It’s simple, really: as far as the algorithm, with its one objective, is concerned, a user who watches one factual and informative video about astronomy and then goes on with their day is less advantageous than a user who watches fifteen flat-earth conspiracy videos in a row.

In some ways, none of this is particularly new. The algorithm is learning to exploit familiar flaws in the human psyche to achieve its ends, just as other algorithms find flaws in the code of 80s Atari games to score their own points. Conspiratorial tabloid newspaper content is replaced with clickbait videos on similar themes. Our short attention spans are exploited by social media algorithms, rather than TV advertising. Filter bubbles of opinion that once consisted of hanging around with people you agreed with and reading newspapers that reflected your own opinion are now reinforced by algorithms.

Any platform that reaches the size of the social media giants is bound to be exploited by people with exploitative, destructive, or irresponsible aims. It is equally difficult to see how they can operate at this scale without relying heavily on algorithms; even content moderation, which is partially automated, can take a heavy toll on the human moderators, required to filter the worst content imaginable. Yet directing how the human race spends a billion hours a day, often shaping people’s beliefs in unexpected ways, is evidently a source of great power.

The answer given by social media companies tends to be the same: better AI. These algorithms needn’t be blunt instruments. Tweaks are possible. For example, an older version of YouTube’s algorithm consistently recommended “stale” content, simply because this had the most viewing history to learn from. The developers fixed this by including the age of the video as a variable.

Similarly, the choice to shift the focus from click likelihood to time spent watching the video was aimed at preventing low-quality videos with clickbait titles from being recommended, which had been leading to user dissatisfaction with the platform. Recent updates aim to prioritize news from reliable and authoritative sources, and make the algorithm more transparent by explaining why recommendations were made. Other potential tweaks could add more emphasis on whether users “like” videos, as an indication of quality. And YouTube videos about topics prone to conspiracy, such as global warming, now include links to factual sources of information.
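The article names these signals (video age, watch time, likes, source authority) but not how they are combined. A hedged sketch of what such a re-weighted score might look like, with every weight and feature name invented for the example:

```python
# Hypothetical illustration of the tweaks described above: decay stale
# videos, reward watch time rather than raw clicks, and mix in the "like"
# ratio and source authority. Every weight and feature name here is
# invented for the example; this is not YouTube's actual formula.
import math

def tweaked_score(p_click: float,
                  exp_watch_min: float,
                  video_age_days: float,
                  like_ratio: float,           # likes / (likes + dislikes), 0..1
                  authority: float) -> float:  # hypothetical 0..1 source score
    engagement = p_click * exp_watch_min           # the original objective
    freshness = math.exp(-video_age_days / 30.0)   # penalizes stale content
    quality = 0.7 + 0.3 * like_ratio               # mild boost for liked videos
    trust = 0.8 + 0.2 * authority                  # mild boost for reliable sources
    return engagement * (0.6 + 0.4 * freshness) * quality * trust

# A fresh, well-liked, authoritative video now outranks a slightly
# "stickier" but stale, low-quality one.
print(tweaked_score(0.12, 10.0, video_age_days=2.0, like_ratio=0.95, authority=0.9))
print(tweaked_score(0.15, 12.0, video_age_days=400.0, like_ratio=0.55, authority=0.1))
```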

The issue, however, is sure to arise if this conflicts with the profitability of the company in a significant way. Take a recent tweak to the algorithm, aimed at reducing bias in recommendations caused by the order in which videos are presented. Essentially, if you have to scroll down further before clicking on a particular video, YouTube adds more weight to that decision: the user is probably actively seeking out content that’s more related to their target. A neat idea, and one that improves user engagement by 0.24 percent, translating to millions of dollars in revenue for YouTube.
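That position-bias tweak can be illustrated with a toy weighting rule: a click on a video the user had to scroll to counts for more in training than a click on the top slot. The specific formula below is an assumption made for illustration, not YouTube's published method:

```python
# Toy sketch of the position-bias correction described above: when building
# training examples from click logs, a click on a video shown further down
# the page is weighted more heavily, since the user went looking for it
# rather than clicking whatever was on top. The weighting rule is invented
# for illustration.

def click_weight(display_rank: int) -> float:
    """Return a training weight for a clicked video.

    display_rank is 1 for the top recommendation, 2 for the next, and so on.
    """
    # Top-slot clicks are largely position-driven, so they keep weight 1.0;
    # clicks deeper in the list count progressively more.
    return 1.0 + 0.1 * (display_rank - 1)

click_log = [
    {"video_id": "top_slot_video", "display_rank": 1},
    {"video_id": "scrolled_to_video", "display_rank": 7},
]
for event in click_log:
    print(event["video_id"], click_weight(event["display_rank"]))
```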

If addictive content and engagement wormholes are what’s profitable, will the algorithm change the weight of its recommendations accordingly? What weights will be applied to ethics, morality, and unintended consequences when making these decisions?

Here is the fundamental tension involved when trying to deploy these large-scale algorithms responsibly. Tech companies can tweak their algorithms, and journalists can probe their behavior and expose some of these unintended consequences. But just as algorithms need to become more complex and avoid prioritizing a single metric without considering the consequences, companies must do the same.


AGelbert

Re: AI and its Implications
« Reply #2 on: October 24, 2019, 01:23:25 pm »
Yes, AI bots are programmed to appeal to and exploit the basest instincts of humans in order to profit off of them, while simultaneously preventing them from engaging in critical thinking that would expose, precisely and in excruciating detail, how the human targets of exploitation are being used and abused. It is a demonically clever way to perpetuate the society-destroying, elite-enriching status quo. Chris Hedges calls it "Electronic Hallucinations". He is right.

May God have mercy on us all and lead us away from those evil bastards whose goal is to turn all of us into a herd of unthinking animals, happily distracted by shiny objects, as we follow the primrose path to perdition.


 

He that loveth father or mother more than me is not worthy of me: and he that loveth son or daughter more than me is not worthy of me. Matt 10:37

 
