At last month’s ATC 2025, my goal wasn’t to warn people or reassure them about the ongoing wave of AI-fueled transformation. I wanted to recalibrate everyone’s expectations about AI and what it means for workforces around the world.
It’s actually pretty simple: looking at AI solely as a savior or a threat misses the point. This technology is becoming infrastructure. Like Rome’s aqueducts, it can expand what’s possible or quietly contaminate the system if leaders ignore the design flaws. While my session at the event focused on how this will impact talent acquisition and HR teams, there are insights here that every team should take note of.
The myth of total automation is distracting from the real work
Let’s be blunt: AI is not three months away from automating everyone’s jobs. When a report from MIT shows that 95% of gen AI projects fail to deliver promised results, the reasonable conclusion is that the tech just isn’t quite there yet. And if we focus solely on job loss, we miss what actually matters in this new world:
- Judgment becomes the differentiator
- Administrative drag becomes automatable
- Capability evolves faster than org design
The priority for leaders shouldn’t be defending jobs from AI, but redesigning roles so people spend their time on higher-value decisions and interactions.
Fairness still depends on design
Despite the technological shifts, most of the basics aren’t changing. You still need structured interviews. You still need inclusive job ads. You still need ethical work sample tests. You still need transparency about how candidates are evaluated.
AI can speed up the inputs, but it does not fix flawed processes. Poor design at scale is still poor design, just faster.
Candidate experience is still the ultimate stress test
Have you considered the implications of fully automating your recruitment process? I recently saw a recruitment leader lamenting a 70% drop-off rate in new-grad applications after candidates were given the “opportunity” to interview with AI to “collect additional data” before meeting with a team member. The reality is that automation does not equal efficiency or quality if candidates hate the experience.
From the leader’s perspective, the issue was motivation. I disagree. My theory is that candidates didn’t feel respected and included in the process. The reality is that people want to talk to people when decisions about their future are on the line.
Now, there is research showing that for some roles, AI is a fabulous recruiter. For certain jobs, especially those where success can be attributed to a few consistent factors, it might even perform better. The lesson isn’t a binary “AI is good / bad”. Rather, it’s an invitation to start small, test, and scale what actually works in your own environment.
Accountability isn’t optional
The EU AI Act categorises employment-related AI as high-risk. California has already removed algorithmic blame as a legal defence. While regulations are and will continue to be in a state of flux, the point is clear: the global supply chain for AI governance is already shaping how local organisations must behave.
For TA leaders, that means three things:
- You will increasingly need to explain, not just deploy, AI.
- Every hiring tool will need human accountability baked in.
- AI decisions must be auditable in a way that withstands regulatory scrutiny.
Optimise for the right problems
Don’t get caught in the trap of “throwing some AI on it” before you know what “it” actually is. It’s crucial to choose the right problems for AI to solve. For recruiting, some of the best use cases are those that are repeatable and documentation-heavy:
- Job descriptions aligned to inclusive templates
- Interview rubrics based on structured, validated criteria
- Language standardisation for fairness and clarity
These use cases provide the scaffolding for recruiters to focus on the decision making and human elements. But of course, even that scaffolding will need to be checked and validated by actual humans – LLMs are famous for making things up!
We don’t have all the answers
The latest wave of automation is coming with questions that we don’t have answers to yet. And the reality is that there is likely not a single right answer – organisations will have to determine their own approaches.
For example, what happens when someone automates away half of their existing workload? Most job architectures still assume a full-time role contains a full-time amount of manual effort.
But AI can introduce slack, and that can provide options for how someone’s skills are best used. I predict that we’ll see more project-based deployments, cross-functional teams, and skills matching in organisations that are undergoing true AI transformation. This means that hiring for potential and adaptability becomes just as important as technical alignment.
An AI policy is only useful if it’s hard to ignore
You absolutely need an AI policy. But that policy isn’t worth the (probably digital) paper it’s printed on if it’s not relevant in the flow of work. You must activate it. Triggering reminders inside tools, providing point-in-time training, and making escalation paths clear for questions or concerns are all crucial to making the policy useful to those doing the work.
Ethical AI is operational, not abstract
Operating ethically isn’t a problem for later. In addition to, you know, doing the right thing, there are serious operational, reputational, and legal risks in not thinking early about how to use AI responsibly. I find these principles useful for driving responsible decisions:
- Consent: Let people opt in or out of AI processes without penalty.
- Privacy: Know and share exactly where data goes, and who sees it.
- Minimisation: Don’t collect data that you can’t justify.
- Environmental impact: Use AI where it genuinely adds value; the ecological cost of generative AI is not trivial.
Responsible AI leadership isn’t about grand statements. It’s about operational choices made every day.
AI won’t replace recruiters because trust isn’t automatable
The reality is that candidates aren’t selecting jobs just because of a role description or comp offer. People choose roles based on connection, trust, and belonging. No model currently available can create a full-cycle experience that achieves that.
AI will change how the work gets done, but the core of hiring remains deeply human. If anything, the rise of automation will make the human elements at work more valuable, not less.
