Misinformation machines? AI chatbots can spew falsehoods, even accuse people of crimes they never committed

Artificial intelligence has fueled defamatory charges of crime and misconduct around the world, setting the stage for potential chaos in legal circles.

Artificial intelligence chatbots have displayed a frightening ability to tarnish reputations and accuse innocent people of crimes — with the potential to fuel legal chaos. 

"Artificial intelligence creates unprecedented challenges to law, policy and the practice of law," Stephen Wu, chair of the American Bar Association Artificial Intelligence and Robotics National Institute, and shareholder with Silicon Valley Law Group, told Fox News Digital.

"AI technology has many promises," he added, "but also poses risks to fundamental rights and even the physical safety of our country's citizens."

A slew of instances involving false charges of crime or wrongdoing spotlights the potential for legal woes ahead.

They come at a time when even the world’s top tech titans appear confused about some aspects of how artificial intelligence works, about its potential pitfalls, and about why, despite boasts of intelligence, AI is so prone to terrible mistakes.

"There is an aspect of this which we call, all of us in the field, call it a black box," Google CEO Sundar Pichai said in an interview with "60 Minutes" on Sunday.

"You don’t fully tell why it said this, or why it got wrong. We have some ideas, and our ability to understand this gets better over time, but that’s where the state of the art is."

Those mistakes have fueled legal and ethical trouble for people around the world.

AI-detection software sparked a recent cheating scandal at the University of California, Davis.

A mayor in Australia has threatened a lawsuit against OpenAI, the owner of ChatGPT, after the chatbot falsely claimed he had served time in prison.

And George Washington University law professor and Fox News contributor Jonathan Turley was falsely accused of sexual harassment by ChatGPT, complete with a fake Washington Post story supporting the claims, among other scandals fueled by AI-generated misinformation.

"What was really menacing about this incident is that the AI system made up a Washington Post story and then made up a quote from that story and said that there was this allegation of harassment on a trip with students to Alaska," Turley told Fox News' "The Story" earlier this month.

"That trip never occurred. I’ve never gone on any trip with law students of any kind. It had me teaching at the wrong school, and I’ve never been accused of sexual harassment."

The Washington Post addressed the controversy on April 5.

"Because the systems respond so confidently, it’s very seductive to assume they can do everything, and it’s very difficult to tell the difference between facts and falsehoods," University of Southern California professor Kate Crawford told the Post. 

Cornell Law School professor William A. Jacobson told Fox News Digital that Turley is fortunate enough to have a large platform where he can get the word out and try to have the situation remedied. 

However, the average person will not be able to pursue the same type of recourse.

"It’s a whole new frontier and I think the law is lagging behind the technology where you have a situation of essentially an algorithm, maybe even worse than an algorithm, defaming people," he said. 

Jacobson added that it remains an open question who is liable in such a situation, to what extent and under what laws.

He floated the idea, however, that product liability or general tort law, rather than traditional defamation law, could be invoked in such cases. He also said Congress could pass laws aimed at tackling this particular issue, though he did not consider that very likely.

"We can’t be in a situation where products are created which cause real damage to people and none of the people participating in the creation of the product bear any responsibility," Jacobson said. 

Artificial intelligence has been cited by tech leaders such as Mark Zuckerberg of Meta for its ability to uncover fake stories online. 

Conversely, AI can be used to generate clever, highly believable fake stories, too. 

"In many respects, it [an AI generative tool] doesn’t have any way to tell the difference between true and false information," Joan Donovan, research director of the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School, told the Bulletin of Atomic Scientists last week.

"That’s what a human does. It does all those things: It reads, it collates, it sorts … by trying to understand what it is about a subject that’s important to the audience."

Brian Hood, the mayor of Hepburn Shire, north of Melbourne, Australia, was shocked recently when constituents told him that ChatGPT claimed that he spent time in jail for his role in a bribery scandal. 

In fact, Hood blew the whistle on a scandal at his former employer, Note Printing Australia, and was never charged with a crime, according to a Reuters report. 

Hood’s lawyers reportedly sent a "letter of concern" to ChatGPT owner OpenAI on March 21, giving it 28 days to fix the error or face a potential lawsuit for defamation. 

If filed, it is believed the suit would be the first defamation case brought against the artificial intelligence service.

Fox News Digital reached out to OpenAI for comment.

Artificial intelligence, meanwhile, is already stirring up ethics concerns and false allegations of cheating on at least one college campus.

William Quarterman, a student at the University of California, Davis, was shocked when a professor flagged him for cheating after running his work through an AI-detection program called GPTZero, according to a report last week in USA Today.

The program is used by educators to determine if students are relying on AI themselves to boost test scores. 
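
GPTZero's exact method is proprietary, but detectors of this kind generally flag text that a language model finds highly predictable, a quantity measured as "perplexity." The sketch below is purely illustrative, not GPTZero's actual implementation: the off-the-shelf GPT-2 model and the cutoff value are assumptions chosen to show the basic idea.

```python
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative stand-ins; real detectors use their own models and
# empirically calibrated thresholds.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels equal to input_ids, the model returns the mean
        # cross-entropy loss over the predicted tokens.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return math.exp(loss.item())


PERPLEXITY_CUTOFF = 60.0  # illustrative only, not a calibrated threshold


def looks_machine_generated(text: str) -> bool:
    # Low perplexity means the model found the text highly predictable,
    # which detectors of this kind treat as a sign of machine generation.
    return perplexity(text) < PERPLEXITY_CUTOFF


print(looks_machine_generated("The quick brown fox jumps over the lazy dog."))
```

Because formulaic human writing, such as a terse exam answer, can also score as highly predictable, a crude threshold like this inevitably produces some false positives.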

Quarterman was eventually cleared of the accusations, but only after he received a failing grade, was referred to the Office of Student Support and Judicial Affairs for academic dishonesty and suffered "full-blown panic attacks."

Other services used by educators to detect cheating, such as the plagiarism-detection program Turnitin, have been flagged numerous times for creating "false positive" accusations of student misconduct.

"There is still a small risk of false positives," Turnitin Chief Product Officer Annie Chechitelli posted on the company blog last month. 

"We’d like to emphasize that Turnitin does not make a determination of misconduct even in the space of text similarity; rather, we provide data for educators to make an informed decision based on their academic and institutional policies."
