'Heart-wrenching': AI expert details dangers of deepfakes and tools to detect manipulated content

Criminals are taking advantage of AI technology to conduct misinformation campaigns, commit fraud and obstruct justice through deepfake audio and video.

While some uses of deepfakes are lighthearted, like the pope donning a white Balenciaga puffer jacket or an AI-generated song using vocals from Drake and The Weeknd, they can also sow doubt about the authenticity of legitimate audio and video.

As artificial intelligence (AI) continues to advance, so does the proliferation of fake content, which experts warn could pose a serious threat to many aspects of everyday life if proper controls aren't put in place.

AI-manipulated images, videos and audio known as "deepfakes" are often used to create convincing but false representations of people and events. Because deepfakes are difficult for the average consumer to detect, companies like Pindrop are working to help businesses and consumers identify what's real and what's fake.

Pindrop co-founder and CEO Vijay Balasubramaniyan said his company analyzes security, identity and intelligence in audio communications to help the world's top banks, insurance companies and health care providers determine whether they are talking to a human on the other end of the line.

Balasubramaniyan said Pindrop is at the forefront of AI security, having analyzed more than five billion voice interactions, two million of which it identified as fraudsters using AI to pass themselves off as human.

He explained that when you call a business that handles sensitive information, such as a bank, insurance company or health care provider, it typically verifies your identity by asking a series of security questions. Pindrop replaces that process, instead verifying people based on their voice, device and behavior.
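
Pindrop's actual models aren't public, but as a rough illustration of the multi-factor approach he describes, here is a minimal sketch in Python. Every name, weight and threshold below is an invented placeholder, not Pindrop's implementation.

```python
# Hypothetical sketch of multi-factor caller verification of the kind described
# above. Pindrop's real models and APIs are not public; every name, weight and
# threshold here is an invented illustration.

from dataclasses import dataclass

@dataclass
class CallSignals:
    voice_match: float     # 0-1: similarity to the customer's enrolled voiceprint
    device_match: float    # 0-1: confidence the call comes from a known device/number
    behavior_match: float  # 0-1: consistency with the caller's usual patterns

def verify_caller(signals: CallSignals, threshold: float = 0.8) -> bool:
    """Fuse the three factors into one score and compare it to a threshold."""
    # Fixed weights for illustration; a real system would use calibrated models.
    score = (0.5 * signals.voice_match
             + 0.3 * signals.device_match
             + 0.2 * signals.behavior_match)
    return score >= threshold

# A caller with a strong voice and device match and typical behavior passes.
print(verify_caller(CallSignals(voice_match=0.95, device_match=0.9, behavior_match=0.8)))  # True
```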


"We're seeing very specific targeted attacks," he said. "If I'm the CEO of a particular organization, I probably have a lot of audio content out there, video content out there, [so fraudsters] create a deepfake of that person to go after them for their bank accounts [and] their health care records."

While Pindrop mainly focuses on helping large companies avoid AI scams, Balasubramaniyan said he eventually wants to expand his technology to help the individual consumer because the problem is affecting everyone. 

He predicts audio and video breaches are only going to become more common because if people have "tons of audio or tons of video of a particular person, you can create their likeness a whole lot easier."

"Once they have a version of your audio or your video, they can actually start creating versions of you," he said. "Those versions of you can be used for all kinds of things to get bank account information, to get health care records, to get to talk to your parents or a loved one claiming to be you. That's where technology like ours is super important."

He explained that AI and machine learning (ML) systems work by learning from the information that already exists and building upon that knowledge. 

"The more of you that's out there, the more likely it is to create a version of you and a human is not going to figure out who that is," he said. 

He said there are some telltale signs that a call or video is a deepfake, such as a time lag between when a question is asked and when the answer arrives. Counterintuitively, that lag can work in the scammer's favor, because it leads the person on the other end of the line to believe something is wrong.

"When a call center agent is trying to help you and you don't respond immediately, they actually think, 'Oh man, this person is unhappy or I didn't say the right thing,'" he explained. "Therefore many of them actually start divulging all kinds of things."

"The same thing is happening on the consumer side when you are getting a call from your daughter, your son saying, 'There's a problem, I've been kidnapped' and then you have this really long pause," he added. "That pause is unsettling, but it's actually a sign that someone's using a deepfake because they have to type the answer and the system has to process that." 


In an experiment conducted by Pindrop, people were played audio samples and asked to determine whether each one was authentic.

"When we did it across a wide variety of humans, they got it right 54% of times," he said. "What that means is they're 4% better than a monkey who did a coin toss."

As it becomes more difficult to ascertain who is human and who is a machine, it is important to adopt technology that allows you to make that determination, Balasubramaniyan argued. 

"But the scarier thing for me is our democracy," he added. "We're coming up to an election cycle in the next year, and you're seeing ads, you're seeing images."

For example, a leading candidate could be smeared by a series of deepfakes, or authentic content that puts a candidate in a bad light could be brushed off by using AI as a scapegoat.

In the lead-up to his recent New York arraignment, deepfakes of former President Trump's mugshot, as well as fake photos showing him resisting arrest, went viral on the internet.

"If something is too good to be true or too sensational, think twice," he said. "Don't react immediately … people get too worked up or react too much to a particular thing in the immediate moment."


Balasubramaniyan said people need to be increasingly skeptical about what they are hearing and viewing, and warned that if a voice sounds robotic, a video is choppy, there is odd background noise, there are long pauses between questions and answers, or the subject isn't blinking, they should exercise caution and assume it is a deepfake.
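
Those human-observable cues can be summarized as a simple checklist. The sketch below is not a detector, just the article's red flags expressed as code; the labels are invented for illustration.

```python
# The red flags above, expressed as a simple checklist. This is not a deepfake
# detector, just the article's human-observable cues; the labels are invented.

DEEPFAKE_RED_FLAGS = {
    "robotic_voice",
    "choppy_video",
    "background_noise",
    "long_pauses",
    "no_blinking",
}

def should_assume_deepfake(observed_cues: set) -> bool:
    """Per the advice above: if any red flag is present, treat it as a deepfake."""
    return bool(observed_cues & DEEPFAKE_RED_FLAGS)

print(should_assume_deepfake({"long_pauses"}))  # True -> exercise caution
```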

He said this added caution is especially important if the video or message appeals to your emotions, which can lead to "heart-wrenching" consequences: a loved one getting a call about you, a grandparent being coerced into forking over their hard-earned money, or a woman's image and likeness being used to generate deepfake pictures or videos.

Some of the most successful companies in the business profit off of AI companionship, generating fake boyfriends or, more often according to Balasubramaniyan, fake boyfriends with certain qualities or capabilities.

"Because not only are deepfakes being created that are deepfakes of you, but then they're creating deepfakes or synthetic identities that have no bearing, but have some likeness to human," he warned. "Both of those things you have to be vigilant about."

Balasubramaniyan often hearkens back to the creation of the internet to quell many of the concerns people have about AI and explained that we simply need more time to ameliorate some of the negative consequences of the new technology. 

"When the Internet was created, if you looked at all the content on the Internet, it was the degenerates using it, like it was awful, all kinds of nefarious things would happen on it," he said. "If you just go back down history lane to the '90s, it was filled with stuff like this."

"Over time, you build security, you build the ability for you to now have a checkmark on your website to say this is a good website," he added. 

The same thing will happen with AI if people take back control through a combination of technology and human vigilance, Balasubramaniyan said.

"You're going to have a lot of bad use cases, a lot of degenerates using it, but you as a consumer have to stay vigilant," he said. "Otherwise you're going to get the shirt taken off your back."

Data & News supplied by www.cloudquote.io
Stock quotes supplied by Barchart
Quotes delayed at least 20 minutes.
By accessing this page, you agree to the following
Privacy Policy and Terms and Conditions.
 
 
Copyright © 2010-2020 DalyCity.com & California Media Partners, LLC. All rights reserved.