AI is being introduced into workplaces faster than most organisations are preparing people for it.
The conversation often centres on productivity, efficiency and staying competitive.
But there’s a gap.
We’re not talking enough about how it’s impacting people.
Because when AI is rolled out without clarity, communication, or team involvement, it doesn’t just change how work gets done.
It changes how safe people feel at work.
AI isn’t the hazard. Job insecurity is.
AI on its own isn’t a psychosocial hazard.
But the way it’s introduced can create one.
And one of the biggest risks emerging in Australian workplaces is job insecurity.
Not just fear of redundancy.
But uncertainty.
- Uncertainty about whether a role will still exist
- Uncertainty about what “good performance” now looks like
- Uncertainty about how decisions are being made
Under the Safe Work Australia framework, psychosocial hazards are defined as factors that can cause psychological harm at work.
This includes job insecurity, which is also explicitly recognised by Comcare as a psychosocial hazard that can arise when workers experience uncertainty about the future of their role or employment.
When people don’t understand what’s changing, or what it means for them, that uncertainty builds.
And over time, it starts to impact employees’ confidence in their role, engagement at work and psychological safety.
What the Australian evidence is telling us
This isn’t hypothetical.
The Australian Psychological Society has warned that poorly managed AI rollout can erode trust in organisations, increase workplace safety risks and negatively impact mental health.
Importantly, the risk isn’t the technology itself.
It’s how it’s introduced.
Guidance from the Australian Institute of Health & Safety reinforces that successful AI adoption depends on workers feeling informed, secure and included throughout workplace change.
Without that, uncertainty increases and trust decreases.
What this looks like in real workplaces
This doesn’t always show up as people openly saying they’re worried about AI.
It shows up in more subtle ways:
- New tools introduced with little explanation or training
- Performance being tracked or assessed in new ways
- Roles shifting without clear communication
- Teams expected to “just adapt”
From the outside, it can look like resistance.
But underneath, it’s often a loss of control.
And when people feel like things are happening to them, not with them, trust starts to erode.
Why this matters more than we think
Job insecurity isn’t just an individual experience.
It’s a workplace risk.
Left unmanaged, it can lead to:
- Reduced engagement
- Increased presenteeism
- Lower trust in leadership
- Resistance to change
- Poor adoption of new systems
- Increased psychosocial risk across teams
Insights from the Australian Psychological Society continue to highlight that addressing psychosocial hazards is essential to maintaining healthy, productive workplaces.
AI is simply introducing a new version of an existing risk, faster and often without the same level of preparation.
The problem isn’t the technology. It’s the approach.
AI is here, and it will continue to shape how we work.
The organisations that will navigate this well aren’t the ones avoiding it.
They’re the ones introducing it responsibly.
Because change isn’t just operational.
It’s psychological.
What good looks like
Australian evidence is increasingly clear: AI needs to be introduced with psychologically informed safeguards, not just technical capability. Some useful safeguards are:
1. Transparency
People don’t expect all the answers.
But they do expect honesty.
Be clear about:
- Why AI is being introduced
- What it will and won’t impact
- What is still unknown
Clarity reduces uncertainty.
And uncertainty is where risk grows.
2. Participation
Involving people early changes everything.
Consultation is a key requirement under work health and safety legislation, especially during periods of organisational change.
When employees are part of the process, they’re more likely to:
- Understand the change
- Trust the decisions
- Engage with new ways of working
This isn’t just good culture.
It’s risk management.
3. Capability building
You can’t expect confidence without capability.
The Australian Psychological Society emphasises the importance of education and training alongside AI adoption to support psychologically safe workplaces.
If people are expected to work alongside AI, they need:
- Training
- Time to adapt
- Space to ask questions
Without this, uncertainty turns into avoidance or silent disengagement.
This is a leadership moment
AI will continue to evolve.
But the responsibility to create psychologically safe workplaces doesn’t change.
If anything, it becomes more important.
Because the risk isn’t the technology itself.
It’s what happens when people are left in the dark.
Where to from here
Job insecurity linked to AI is an emerging psychosocial hazard in Australian workplaces.
But it’s also preventable.
With the right approach, organisations can:
- Reduce risk
- Build trust
- Improve adoption of new ways of working
Because behind every system change, people are trying to understand where they stand.
And that’s where good leadership makes the difference.
To support this, we’ve developed a practical checklist: Managing Psychosocial Risk During AI Rollout.
It is designed to help leaders identify where uncertainty may be increasing within their teams, and where additional controls may be needed to support psychological safety during change.
If you would like access to the checklist, please enter your details below.
"*" indicates required fields
References
- Safe Work Australia — Psychosocial hazards definition and framework
- Comcare — Job insecurity as a psychosocial hazard and its impact on worker mental health and wellbeing
- Australian Psychological Society — Peak psychology body urges consideration as AI grows within Australian workplaces
- Australian Institute of Health & Safety — Psychologists warn AI rollout risks workplace trust, safety and mental health
- KPMG (2024) — Impacts of artificial intelligence in the workplace