Holywood News

Online dating scams: Deep love or deepfake? Dating in the age of AI

Beth Hyland thought she had met the love of her life on Tinder.

In fact, the Michigan-based executive assistant was being manipulated by an online scammer posing as a Frenchman named “Richard,” who used deepfake video on Skype calls and photos of another man to pull off his con.

Deepfakes are video or audio clips generated with artificial intelligence (AI) to look or sound realistic. They are often difficult to detect without specialized tools.
Over the course of months, Hyland, 53, took out a total of $26,000 in loans and sent the money to “Richard,” falling for a classic romance scam, also known as “pig butchering,” named for the way scammers fatten up their victims before exploiting them.

The UK government has said some 8 million deepfakes are expected to be shared globally in 2025.


According to a January report by cybersecurity company McAfee, about one in five people has been targeted by a romance scam.

“It’s like a death,” Hyland told the Thomson Reuters Foundation. “When I saw him on video, he looked like the pictures he had sent me. He looked a little blurry, but I didn’t know anything about deepfakes then.”

Manipulation and lies

Hyland, who lives in Portage, about 230 kilometers west of Detroit, had been divorced for four years when she started dating again.

She matched with a man on Tinder whose profile seemed to complement hers well.

Now, she says, this “perfect match” may have been carefully engineered.

“Richard” said he was born in Paris but lived in Indiana and worked as a freelance project manager for a construction company, a job that required frequent travel, including to Qatar.

The months that followed brought emotional manipulation, lies, fake photos and AI-manipulated Skype calls. The scammer pledged his eternal love, but always found a reason to miss any chance of meeting in person.

A few weeks after they matched, “Richard” convinced Hyland that he needed her help to pay lawyers’ and translation fees in Qatar.

“When I told him I was going to take out the loan, he started crying and told me that no one had ever loved him like that before.”

But “Richard” kept asking for more money, and when Hyland finally told her financial adviser what had happened, he said she was likely the victim of a romance scam.

“I couldn’t believe it, but I couldn’t ignore it either,” said Hyland.

She confronted “Richard”; at first he denied everything, but then fell silent when Hyland asked him to “prove her wrong” and return her money.

Police told Hyland they could not take her case further because there was no “coercion, threat or force,” according to a letter from the Portage Director of Public Safety seen by the Thomson Reuters Foundation.

The Office of Public Safety, which oversees police and fire departments, did not respond to a request for comment.

After Hyland reported the fraudster’s account, which the Thomson Reuters Foundation has seen, to Tinder, the company said it removes users who violate its terms of service or guidelines.

Although Tinder said it could not share details of the investigation for privacy reasons, it said Hyland’s report had been “evaluated” and “actioned in accordance with our policies.”

A Tinder spokesman said the company has “zero tolerance” for fraudsters, uses AI to root out potential scammers and warn its users, and provides a fact sheet about romance scams.

In March, Hyland testified at a U.S. Senate committee hearing at which a bill was introduced that would require dating apps to remove scammers and notify users who had interacted with fake accounts.

The senators who proposed the bill said Hyland’s story showed why the legislation was needed.

Currently, when a dating app deletes a fraudster’s account, it does not usually notify users who communicated with the scammer or post alerts on how to avoid being scammed; the proposed bill would require both.

The FBI said more than $4 billion in losses to pig-butchering scams were reported in the United States in 2023.

Microsoft, which owns Skype, directed the Thomson Reuters Foundation to blog posts on how users can protect themselves from romance scams and on steps it has taken to address AI-generated content, such as adding watermarks to images.

The company did not provide further comments.

Jason Lane-Sellers, director of fraud and identity at LexisNexis Risk Solutions, said only 7% of scams are reported, with victims often held back by shame.

“AI arms race”

Jorij Abraham, managing director of the Global Anti-Scam Alliance, a Netherlands-based organization that protects consumers, said it would not be long before humans are unable to detect manipulated media at all.

“In two or three years, this will be AI fighting AI,” he said.

“[Software exists] that can follow your conversations, looking at the eyes, whether they blink; these are things that humans can’t see, but software can.”

Lane-Sellers of LexisNexis Risk Solutions called it an “arms race” between the AI of fraudsters and that of anti-fraud companies trying to protect consumers and businesses.

Richard Whittle, an AI expert at Salford Business School in northern England, said he hoped future deepfake-detection technology would be built in by hardware manufacturers such as Apple, Google and Microsoft, whose devices can access users’ webcams.

Neither Apple nor Google responded to a request for comment on how they protect consumers from deepfakes or on future product development.

Abraham said the real challenge was keeping up with the scammers, who often operate across different countries.

Despite her losses, Hyland still believes it is worth reporting scams and helping the authorities fight fraud.

She hopes scam victims know that it is not their fault.

“I’ve learned the terminology…we don’t lose (money) or throw it away – it’s stolen. We don’t fall for scams – we’re manipulated and victimized.”
