These men are extremely loveable. It's a shame they're not real.

Often without us even realising it, generative AI is infiltrating our everyday lives.

From asking ChatGPT to craft the perfect break-up text to predicting the ends of our email sentences, it's a useful tool that is driving Australia's digital transformation forward. But is it also a romance scammer's dream?

For example, the three men in the image for this story are not real humans.

They were the result of a quick prompt in one of the more popular AI image generators available for public use. It took no more than 15 minutes, and there are far more advanced tools whose images look even more realistic.

It's scary, to say the least. 

According to Dr Kate Gould, senior researcher and clinical neuropsychologist at Monash University, generative AI makes uncovering romance scams even harder, as the scammers use it to create realistic images, voices and videos to 'verify' the false persona.

In a nutshell, generative AI scams are harder to spot.

"While scammers often rely on stealing other people's images to make their lies and stories seem real, with generative AI, scammers can create photos of whatever they want to try to be more convincing," she tells Mamamia.

"It can also be harder to use reverse image searches to see that the photos were stolen."

Essentially, it reduces our ability to internet sleuth and sniff out a rat. 

Dr Gould says the improvement in AI voice technology is also contributing to the 'realness' factor of these scams.


"Scammers previously would correspond mainly by text message, but as technology like AI improves, they can copy a real person's voice with only a short excerpt or can program anything to be said in the desired age, gender and accent to pretend to be a romantic partner, someone famous or someone you know."

AI dating scams are becoming more common.

Image: Getty.

It's easy to hear about AI dating scams and think it's something that happens to someone else. However, it is increasingly happening to everyday Aussies, who are looking online to meet 'the one'.


According to new Norton research, the risk of being scammed via AI has risen sharply this year, with AI-based scams up 72 per cent.

It is something that dating apps and online matchmaking services are keenly aware of. 

According to Bumble's APAC Communications Director, Lucille McCart, 33 per cent of Australians they surveyed recently rated fake profiles as a top concern in online dating. 

Yet, it's not a 'throw the baby out with the bathwater' situation. 

Is AI inherently bad?

There is no simple dichotomy of 'AI is bad' or 'AI is good'; it's about how it's being used.

For example, Bumble uses AI monitors to sniff out AI scams on the app. Their 'Deception Detector' identifies and takes action against fake profiles, scammers and spammers, with testing showing that it blocks up to 95 per cent of fake accounts automatically.

They also use AI to detect and automatically blur lewd photos sent on the app until the recipient consents to view them.

Bumble's founder Whitney Wolfe Herd has even suggested that in the not-too-distant future, online daters could have 'AI concierges' that 'date' each other to find the best possible matches. They're already on the way there. The 'For You' feature aggregates your dating preferences and shows four matches that are closer to what you're actively looking for.

It's clear that AI is also making the world of dating a better place. So how can we make sure it's only used for good?

Our AI regulation is lagging seriously behind.

Image: Getty


AI isn't always the bad guy, but without proper regulation, it can be. Australia is unfortunately lagging behind other countries in AI reform.

"In Europe, there is the Artificial Intelligence Act, which provides a comprehensive framework for AI systems," says software architect Nick Beaugeard, CEO and Co-Founder of World of Workflows. "We don't have the equivalent in Australia."

Locally, specific AI laws are virtually non-existent, bar a federal list of eight 'AI Ethics Principles', which has been in place since 2019. AI is currently regulated through Australian consumer law, the Privacy Act, copyright law and data protection laws.


"This puts providers and users in a bit of a no-man's-land," says Beaugeard. 

Dr Catriona Wallace, founder at the Responsible Metaverse Alliance, recommends that AI is governed at "federal, state, company and individual level[s]."

For now, our laws fall woefully short. 

"Some existing laws that sit with the eSafety Commissioner, Julie Inman Grant (such as the Online Safety Act 2021), and the Human Rights Commissioner, Lorraine Finlay, can be extended to apply to AI; however, they do not overtly refer to AI," explains Dr Wallace.

With AI growing exponentially while new regulation moves at a lumbering pace, it's become a game of catch-up. Australian governments are starting to take action, with a new Senate Committee on Adopting Artificial Intelligence announced in March 2024. It will look at the pros and cons of AI in Australia and then make a series of recommendations. The next report is expected in September 2024.

New laws can't come soon enough. The rise of deepfakes (digitally altered images and videos) is creating havoc across the globe, from celebrities' faces being pasted onto sexually explicit content to AI-generated songs that an artist never even sang.

In fact, a Victorian school recently faced an incident where 50 young women and girls were targeted, having their faces manipulated onto explicit content.

There are myriad Reddit threads on the topic, with people sharing their own close brushes with AI scammers. 


"There was a romance scammed person here… where all the pictures were generated or at least AI modified," one user wrote. "It's scary stuff because you can no longer just Google search the images now."

Another added, "Facetime is indeed possible nowadays. Using AI Deep Fake technology. I saw a sample of a scammer talking to someone close to me. I couldn't believe my eyes. Be aware."

Beaugeard says, "Deep fakes have already seen Australians part with their money, and there are anecdotal stories of people being called by relatives (using AI-generated voices) and sending money to help them on the spot."

Imagine how that could spiral when the rose-coloured glasses of love are added.

Generative AI scams have wide-ranging impacts on victim/survivors. 

Image: Getty


Per Dr Wallace, "According to the Existential Risk researchers at Oxford University, AI currently poses the highest existential risk to humanity."

She lists things like identity theft, fake news, scamming, extortion, exploitation, false pornographic images, malicious hoaxes, misinformation and public manipulation as a few of the dangers it could pose.

For those involved in romance scams, the emotional turmoil can cut deep.

As Dr Gould explains, "People can go through a range of emotions, like confusion, disbelief and denial, shock, anger, shame, guilt, embarrassment, worry, depression and distress. 

"Some people can feel hopeless and without adequate compassion and support from others, and can feel suicidal."

These feelings are compounded when money is involved in what researchers call a 'double-hit'. 

"There are also often a range of other social impacts, such as disagreements and loss of trust with family and friends, the practical impacts of losing significant sums of money, impacts on time for trying to navigate the legal and financial repercussions of being scammed," she says. 

"It can really shake up someone's willingness to date and to trust people with their emotions. Scam victim/survivors can feel like the world is no longer a safe place."


According to Beaugeard, "The National Anti-Scam Centre reported on 12 May that in the prior quarter over $300 million was lost by Australians to scams.

"The rise of generative AI means that we will see these [scams] as harder and harder to see through."

While we all think it could never happen to us, Dr Gould says "anyone can be scammed, and there is a scam for everybody."

In the meantime, Bumble says it's acutely aware that generative AI is a developing issue.

"Our guidelines prohibit any attempts to artificially influence connections, matching, conversations, or engagement through the use of automation or scripting," McCart says. 

"We have also been deploying machine learning models for a number of years to enforce [our] guidelines and enhance [our] focus on member safety, and use automated safeguards to detect comments and images that go against [our] guidelines."

For now, the Responsible Metaverse Alliance is treating AI as a "wild west." 

"Our young ones are in danger and government and tech providers need to step right up, and introduce stringent ethical standards and legislation," Dr Wallace says. 

"Or else, we are in trouble."

New-age tips to spot a dating app scam.

If you're wondering what information you need to be armed with in this new generative AI era, McCart has shared her tips. These include: 

  • Always look for profiles that have verification badges: they've gone through a process to confirm their identity, which helps weed out any fake accounts or scams.

  • If something feels off about a profile or conversation, members can report off-putting behaviour.

  • In the early stages of chatting with someone, make sure you're not revealing any sensitive personal information (such as where you live, your email address and phone number).

  • Remember that you can voice call and video chat through the app, so you don't need to share your contact details until you are comfortable.

  • You are allowed to control the pace of your interactions, and the right person won't rush you through anything.

  • Social media can be a helpful detective tool: check if your match has linked their Instagram or do a quick search using the details on their profile.

  • Watch out for red flags like overly perfect photos or dodgy answers to questions.

Feature Image: Open Art AI; Canva.