COVER STORY

LA’s hottest import – Cover Model Laillani Ainsley

WORDS: Corrine Barraclough PHOTOGRAPHY: Ben Kirwood & Brian Usher

Yeah, she’s hot! Fresh from the fashion catwalks of Paris, London and Milan, meet OR’s brand new ambassador! Laillani Ainsley, talking to Corrine Barraclough before she signs on the dotted line on her new Hedges Avenue home-base and joins the GC fitness scene.

Working in fashion is many young women’s ultimate dream. The lights, the camera, the glamour, the travel, the runway, the dreamy designer clothes. Truly, the list of glamorous goals goes on and on and on.

So, when ORM was contacted by a well-known fashion model who was relocating from LA to the Gold Coast and was interested in shooting fashion with us, of course, our eyes widened, our heads tilted, and interest was piqued.

Laillani Ainsley, 28, has been living in California for several years, but with extensive family here on the Coast, she’s been looking forward to creating a GC home base. And, after several years of whirlwind European and global travel shooting for lingerie, swimwear and edgy, boutique fashion brands, Laillani employed a local real estate agent to find her a new place to call home right here on Millionaires Row, Hedges Avenue.

 

“I grew up on the Gold Coast when I was very young, but my family moved to the United States when I was eight. I went to school there, was spotted by a modelling agency scout when I was 14 and started building a portfolio really quickly,” she tells ORM in our phone call. Her accent is American with a tiny Aussie lilt on certain words that’s always a giveaway that someone has connections Down Under.

“At first, I modelled for a women’s fashion site; that received great traction, my portfolio grew, and I got an agent in LA who could launch me into the fashion world. I’ve always been very comfortable with my body so loved getting around in the latest swimwear and lingerie creations. My first runway show was in New York. I did a lot of work in Miami, Las Vegas, San Diego, Philadelphia and some of those brands started taking me to London.”

How does a young girl take a fashion career global? ORM is keen to know.

“I get asked this all the time! I got an agent with a really good reputation based in the UK with good European connections and within a couple of years, I was walking the runway for many designer brands in London, Paris, Milan, Madrid, Barcelona, all the cities that are known for fashion, as well as many which are less known like Estonia.”

What was the appeal of Europe, what is it that makes that fashion scene so aspirational?

“Europe is often the goal of aspiring models and, honestly, I was working so much I was earning more money than I could spend. I set myself up with property in LA and also a small apartment in Paris which I used as a base to cut down on longer flights. I just found that more manageable than always travelling back to the US.”

Is life as a fashion model as glamorous as people think?

“No! It’s not easy and to any young girls who are interested in pursuing this as a career I’d say, be confident within yourself before you set this ball rolling. This is a tough industry. There’s a lot of rejection and I struggled with that at first. It’s not all runways and freebies. Photoshoots are long, tiring days. Catwalks are always stressful behind the scenes. It’s a difficult industry – but that said, I can’t deny that there are moments of glamour, there are perks and I’ve set myself up for life by throwing myself into my career in just a few years. You need to be strong, confident, organised and it’s also really important to look after yourself when you’re not working.”

What does that look like for you? ORM asks.

“I’m a gym bunny!” she laughs. “I can go and lose hours in the gym but I don’t just pound on the treadmill, I love saunas. I catch up with friends at my gym in LA and there’s the most divine juice bar next door where many of us hang out. So I look forward to time at home. I’m actually looking forward to finding a good, sociable gym scene here on the GC when I move.”

AI TSUNAMI

Wait, what’s that saying? If something seems too good to be true, it probably is.

We have a confession, dear readers…

Laillani does not exist. She is not a model. She does not live in LA. She has not got an amazing fashion career. And she’s not moving to the Gold Coast.

Why?

Because she doesn’t exist.

Turn back to the first pages of pictures. Look at her face, look into her eyes, and boys, if you haven’t already, take another look at that hot banging body! Sorry to disappoint you, you’ve been fantasising about bedding a fantasy!

ORM created Laillani using Artificial Intelligence (AI). We wanted to show you how realistic AI has become, how easy it is to create characters who don’t exist, and to explain AI to you in terms that don’t require a computer science background.

Isn’t it mind-boggling to think how AI could disrupt the modelling world and all the industries involved in fashion? The costs involved in fashion shoots, for example, are huge. They range from models to photographers, hair and make-up, stylists, travel and location fees – the list goes on.

No wonder there were protests from screenwriters in the US! These advancements in AI may have the potential to save money and reduce budgets but imagine all the people who could be out of work as this advances.

We thought Uber was the biggest disrupter when it arrived on the scene to challenge cabs – little did we know what was coming…

Most of us have heard the term ‘AI’ but if we’re honest, we don’t really understand it. So, here’s our breakdown of what it is, how it works and then we’ll look into how it can be used and the dangers of that.

“AI has advanced to a point where it can create convincingly realistic characters and scenarios,” digital specialist Ben Kirwood tells ORM. “This capability is a testament to the rapid progress in AI technology, highlighting both its potential and the ease with which digital personas can be fabricated.

“While ‘AI’ is a common term, its underlying concepts and mechanics are complex and not widely understood outside technical fields. We aim to demystify AI, explaining its workings in simpler terms.

“Generative AI has rapidly matured, becoming a versatile tool in various applications. However, its ability to manipulate and create content also introduces significant risks, such as the creation of deceptive or harmful digital content.”

OK, so where does the data come from to generate an AI model like Laillani?

“To train generative AI models, vast amounts of data are used,” Kirwood adds. “Historically, this data has been amassed by scraping publicly available online information, which raises concerns about privacy and the ethical use of data.”

Let’s rewind a little and explain a few things…

What is AI?

Artificial intelligence (AI) is a set of technologies that enable computers to perform a variety of advanced functions. It has revolutionised our lives, and can grow our economy by making us more efficient. It includes the ability to see, understand and translate spoken and written language, analyse data, make recommendations, and, of course, create convincing images.

Who created AI?

There are a few names in the frame for this. John McCarthy is usually acknowledged as the person who invented AI: he is credited with coining the term at the 1956 Dartmouth summer research project on artificial intelligence. Other pioneers include Alan Turing, Geoffrey Hinton, Marvin Minsky, Allen Newell and Herbert A. Simon.

Alan Turing has a test named after him, ‘The Turing Test’, which asks whether a machine can pass as human and helped lay the foundations for AI.

ORM’s AI whiz, Ben Kirwood, says, “Turing is most certainly known in the industry as one of the Godfathers of AI.”

Marvin Minsky was an American computer scientist who researched AI and wrote many texts on AI and philosophy. Allen Newell and Herbert A. Simon (Herbert Alexander Simon) were also both American computer scientists.

Before we go too far down the rabbit hole, let’s just take a moment to zoom out and look at the advancement of technology over the last decade – it’s mind-blowing. While we were all busy updating our mobile phones at the Apple Store in Robina, computers got faster and neural nets, which draw on data available on the internet, sped past us. They started transcribing speech, playing games, translating language and even driving cars. The AI boom began while most of us were otherwise engaged, and leading systems were created, including OpenAI’s ChatGPT and Google’s Bard. They started to change the world.

Interesting fact: Geoffrey Hinton, a computer scientist (one of the aforementioned Godfathers of AI) bought a home in Ontario’s Georgian Bay in 2013, when he was 65, after selling a three-person startup to Google for $44 million. Google snapped it up partly because the team had figured out how to dramatically improve image recognition using neural nets. They saw opportunity and they saw dollar signs.

Before that, Hinton spent three decades as a computer-science professor at the University of Toronto. He was a leading figure in the subfield of neural networks, which was rooted in the way neurons are connected in the brain.

It’s famously reported that in the 1980s, when he saw the movie The Terminator, it didn’t bother him that Skynet, the movie’s world-destroying AI, was a neural net. Rather, he was thrilled that the technology that was his world had made it to the big screen and that his life’s work had finally reached the global stage.

Researchers like Hinton worked with computers and researched learning algorithms for neural nets. It was, in short, his life.

Interestingly, Hinton left Google, where he’d worked since the acquisition, in 2023. Why? Because he was worried about the potential of AI to do harm.

If his name sounds familiar, it may be because he started doing high-profile interviews about this technology becoming an “existential threat” to the human species.

Yeah, shit got real for him, and he went from loving the potential of his passion to realising how dangerous it could become.

It’s said that the more he used ChatGPT, which is an AI system trained on a vast amount of human writing, the more uneasy he became.

When someone from Fox News in the US asked him for an interview about AI, he sent snappy, short responses. Then, after receiving a request from a Canadian intelligence agency, he responded, “Snowden is my hero.” He continued, “Fox News is an oxymoron.”

And then he asked ChatGPT to explain his joke. The response was that his sentence implied Fox News was fake news, and when he specifically drew attention to the space he’d put in the word ‘oxymoron’, the AI went further, explaining that Fox News was addictive, like the drug OxyContin.

To his intelligent mind, this was shocking proof that we had entered a new era in AI.

Another prominent technologist, Sam Altman, the CEO of OpenAI, then joined Hinton in warning that AI systems may start to think for themselves. From there the debate spiraled into the possibility that AI could seek to take over or even eliminate human civilisation.

Hmm. Not so fun when two of AI’s most prominent researchers start sounding the alarm, is it?

Hinton’s quote on this goes something like, “It’s analogous to how a caterpillar turns into a butterfly. In the chrysalis, you turn the caterpillar into soup – and from this soup you build the butterfly.”

Confused? In his metaphor, the caterpillar soup represents the data that has gone into training modern neural nets, and the butterfly represents the AI that’s been created from it.

Ultimately, it all started as one thing and then became something else – and that’s always sinister.

What do we need to know about generative AI?

“There are different types of AI – the AI that’s come to the forefront now, capable of generating these images, is generative AI,” Kirwood tells us. “It’s come on in leaps and bounds very fast over the last 4-5 years. As in, it’s reached maturity in the last few years and therefore, it’s becoming very useful to a range of different people.”

Where does all the information come from?

“For generative AI – to generate or create these kinds of models – we need huge amounts of data. What developers have done historically is scrape all publicly available information on the internet. Then, AI can generate new content based on a simple prompt. Where it gets sticky – like the lawsuits we’ve seen against Microsoft and OpenAI – is where developers are scraping from the open-source community, things like freely provided support tutorials.”
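To make that “learn from a corpus, then generate from a prompt” idea concrete, here’s a deliberately tiny sketch in Python. It’s a toy word-chain generator, not how any real generative AI product works (those use neural networks and vastly more data), and the corpus is invented for illustration – but the core loop is the same: absorb patterns from data, then produce new text from a prompt.

```python
import random
from collections import defaultdict

# Toy "training data" - a real system scrapes billions of words.
corpus = (
    "the model walks the runway the model wears the dress "
    "the dress looks stunning the runway looks long"
).split()

# Learn the pattern: for each word, which words followed it in the data?
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(prompt: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a word that followed the last one."""
    word = prompt
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break  # the model never saw this word, so it stops
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Every “sentence” it produces is stitched together purely from patterns in its training data – which is also why the provenance of that data matters so much.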

Is it too late to make it safe?

“Making a more collaborative community will make it safer,” says Kirwood. “The danger here is an unregulated environment. Take this example. We’ve seen school kids become obsessed with social media filters, and that may have seemed harmless enough. Now, we’re seeing kids aged 15 being able to put faces of female peers on porn images. That’s just one example of the potential of this technology, and it’s not hard to imagine just how harmful this is. The race to develop AI without adequate safety measures could be dangerous. By fostering a collaborative community around AI development, we can enhance safety and reduce risks.”

What’s Kirwood’s advice?

“I’d say, be sure to protect your assets. Be mindful and be careful. We’ve seen people falling for email scams for some years now; it’s important to keep in mind that with the different forms of AI, it’s possible to generate fake video, voice and deepfakes. It’s like a magic trick and, in the wrong hands, it can create a backstory to make a scam feel real. There needs to be more government support and investment in this area to make it and keep it safe. We need to be teaching families how to stay safe in these situations. There’s no doubt that it can be very scary and very profitable. In an era where AI can convincingly simulate real-world elements, it’s crucial to safeguard your digital assets. Awareness and precaution are key in protecting against AI-driven scams and misinformation.”

What is AI for beginners?

Artificial intelligence (AI) is the process of simulating human intelligence and task performance with machines, such as computer systems. Tasks may include recognising patterns, making decisions, experiential learning, and natural language processing (NLP).

Which AI app is everyone using?

One of the most used and popular AI apps is Maps. Google Maps is a comprehensive navigation app that uses AI to offer real-time traffic updates and route planning. It’s a good way to get your head around how machine learning works. Essentially, in our daily lives, we generate lots of data. The machine learning process then fine-tunes this data to generate output. Using the data from our frequent car trips, it can start to recommend the best routes.
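The “learn from past behaviour, then predict” loop described above can be sketched in a few lines of Python. This is an invented, minimal example – Google Maps draws on live traffic, road networks and far more – but it shows the basic idea: log what a driver actually does, then recommend the option they chose most often.

```python
from collections import Counter

# Invented trip log: (origin, destination, route taken).
trip_log = [
    ("home", "gym", "via_highway"),
    ("home", "gym", "via_beach_road"),
    ("home", "gym", "via_highway"),
    ("home", "work", "via_highway"),
    ("home", "gym", "via_highway"),
]

def recommend_route(origin: str, destination: str) -> str:
    """Return the route most often taken between two places."""
    routes = Counter(
        route for o, d, route in trip_log if o == origin and d == destination
    )
    if not routes:
        raise ValueError("no trips recorded for this origin/destination")
    return routes.most_common(1)[0][0]

print(recommend_route("home", "gym"))  # prints "via_highway" - it dominates the log
```

The more trips in the log, the more confident the recommendation – which is exactly why these apps get better the more data we feed them.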

Is AI safe?

Always keep in mind that technology is accessing your information:

  • They need your data: Some AI apps might need things like your photos or messages to work properly. It’s important to make sure only the app can access this information and no one else.
  • They can be tricked: Like any technology, AI apps can be fooled by bad actors who want to steal information or cause harm.
  • They store your chats: AI chatbots store data on servers, which can be vulnerable to hacking attempts or breaches. These servers hold a wealth of information that cybercriminals can exploit in various ways – they can infiltrate the servers, steal the data and sell it on dark web marketplaces.

What is Chat GPT?

GPT is the technology that powers the AI chatbot ChatGPT, delivering high-quality, expert-level content. Easily accessible online, the chatbot is both free and user-friendly. It claims to be secure – but be careful what you share with it!

Is AI good or bad?

AI is neither inherently good nor bad. It is a tool that can be used for both beneficial and harmful purposes, depending on how it is developed and used. It is important to approach AI with caution and responsibility, ensuring that it is developed and used in an ethical and transparent manner.

How can AI be dangerous?

AI can inadvertently perpetuate biases that stem from its training data or from the algorithms themselves. Data ethics is still evolving, but the risk of AI systems producing biased outcomes is real, and it could leave a company vulnerable to litigation, compliance issues and privacy concerns.
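Here’s a deliberately simple illustration of how bias gets in. The data and categories below are entirely invented, and real models are far more complex – but a system that learns from skewed historical decisions will replay that skew in exactly this way.

```python
# Invented "historical hiring data" - skewed against background "B".
historical_hires = [
    {"background": "A", "hired": True},
    {"background": "A", "hired": True},
    {"background": "A", "hired": True},
    {"background": "B", "hired": False},
    {"background": "B", "hired": False},
    {"background": "B", "hired": True},
]

def predict_hire(background: str) -> bool:
    """Predict by majority vote among past candidates with the same background."""
    outcomes = [
        row["hired"] for row in historical_hires if row["background"] == background
    ]
    return outcomes.count(True) > outcomes.count(False)

# The "model" simply replays the bias baked into its data:
print(predict_hire("A"))  # True
print(predict_hire("B"))  # False
```

Nothing in the code is malicious – the bias lives entirely in the training data, which is why auditing that data matters as much as auditing the algorithm.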

On the one hand, this technology has sped up scientific research and reduced admin. On the flip side, a huge range of workers – from accountants to graphic designers and models – face having their jobs disappear.

Ethics are grey in this area because legal systems can’t catch up or keep up. The pace of technological development is completely out of step with legislation. Ask any celebrity how worried they are about their image being used to flog products or their voice being used on social media without permission – all of these replicas appearing faster than we can catch them.

Is AI a creative artistic tool or will we regret not acting sooner? Watch this space…

7 things AI can do and what that means for our future:

  • Robots That Can Deceive: Imagine robots that can lie and deceive others.
  • AI Spreading Propaganda: Have you ever chatted with a chatbot online? Well, those chatbots could be used to spread false information or promote certain political ideas. Imagine the damage that billionaire Clive Palmer could do in the Queensland election with slick AI at his team’s fingertips! (sorry, Clive, just using you as an example!)
  • Self-Driving Cars: Self-driving cars might make our roads safer, but there are some serious risks involved.
  • AI-Powered Weapons: Imagine if someone with bad intentions could create 40,000 types of chemical weapons in just 6 hours using AI. Scary. From drones to autonomous combat vehicles, AI-driven weaponry could change the way wars are fought and put our safety at risk.
  • Deepfakes: Ever seen a video that looks real but is actually fake? That’s what deepfakes are – realistic-looking fake videos and images created using AI.
  • Voice Cloning: Imagine someone copying your voice so perfectly that even you can’t tell the difference. That’s where we’re currently at with voice cloning technology.
  • Self-Repairing AI: What if AI could fix itself without any human help?

 

Comment: Kirwood says, “I’m not necessarily a full believer in all of the horror hype. With AI, it’s dependent on the access that we give it. If we implement security correctly and create the correct tools, it won’t get to these levels. Right now, in Asia, this is the big focus in AI development. The important point is to restrict the access – and to do that we need legislation very fast.”

Wrap up

AI is a powerful tool that can do some amazing things, but it also has a dark side. From deceiving robots to AI-powered weapons, there are some scary things that AI can do now. Sure, continue to develop and use AI, but it’s also important to think about potential risks and take swift steps to make sure it’s used responsibly.