Blog: The growing AI threat - what it can mean for your brand and reputation

30 May 2025

It’s undeniable that Artificial Intelligence (AI) is deeply embedded in our daily lives. Whether it's the smart watch tracking our steps, the chatbot answering customer queries, or the virtual assistant setting us reminders, AI is working around us 24/7 — and often invisibly.

While AI has brought convenience and efficiency, its rapid evolution has introduced increasingly complex ethical, social, and operational risks for organisations that can threaten brand and reputation suddenly and profoundly. 

The most pressing risks to Australian businesses right now include:
•    The spread of misinformation and malinformation.
•    Bias and discrimination in AI-generated content.
•    Data security breaches.

These aren’t just hypothetical problems. They’re real and happening now.

Consider the UK energy firm that lost nearly $500,000 to a scam involving AI-generated voice deepfakes — the CEO was convinced he was speaking to his German boss. Or the NSW real estate listing, drafted by an employee using ChatGPT, that referenced schools that didn’t exist.

Schools are facing scrutiny too, with students using AI to manipulate images of peers and teachers — and in some cases, selling them online for pocket money. These incidents are driving a wave of media attention and increasing pressure on school leaders to explain their policies, protections, and responses.

What these examples show is that AI-related damage can come from both outside the organisation and from within. That’s why clear communication strategies, proactive planning, and robust internal protocols are now more important than ever.

At our own consultancy, we’re acutely aware of these same challenges. We’re currently developing internal guidelines and an AI policy — backed by training and research into ethical AI practices across the communications sector, both here and overseas.

One insight has stood out: the technology itself isn’t the biggest issue — it's the lack of proper understanding and training among those using it.

To protect your brand and reputation, it’s important to treat AI as a potential risk area in your overall crisis planning. Here are a few steps we recommend:
•    Audit the key AI risks for your organisation.
•    Include AI-related scenarios in your crisis communication plans.
•    Run an AI crisis scenario, including simulated interviews and press events about hypothetical AI failures.
•    Take part in media training to understand how the media reports on AI issues and learn how to respond to AI-related crises.
•    Develop key messaging and FAQs covering the actions your organisation has taken, and the responsibilities it holds, to reduce AI-related threats.
•    Create clear internal communications for employees and contractors that outline your organisation’s guidelines on the ethical use of AI.

And finally, if something does go wrong, be transparent — and make sure to call your PR consultant!