Johannesburg – The world is experiencing an Artificial Intelligence (AI) rush as organisations push ahead with implementing productivity tools, automation systems, and other resources in their businesses.
But are they truly ready for AI and what it entails in the workplace?
The assumption that AI readiness is simply the obvious next step is a risky misunderstanding, and one of the biggest risks in current AI adoption strategies, says Vera Solomatina, SVP of People and Culture at inDrive.
“AI readiness should be measured by the level of internal trust in an organisation,” said Solomatina on a panel discussion at the recent Women in Tech conference held in Cape Town.
“Organisations are investing heavily in technology, but seem to underinvest in the psychological safety of the people who will actually work with it.”
She believes AI transformation is too often treated as a purely technical deployment exercise, when in practice it demands considerable behavioural shifts that depend on whether people feel able to engage with new systems without fear or hesitation.
Tool deployment is not readiness
For Solomatina, the gap between installation and integration is where most organisations stumble.
“Buying an AI platform only creates the illusion of transformation,” she notes.
“Real readiness is when employees feel safe saying ‘I don’t understand how this works’, and are given the tools and resources to learn.”
While much of the corporate response to AI has focused on reskilling at scale, Solomatina argued that many programmes are missing the point entirely.
“What works is contextual, embedded learning built into real workflows, not delivered as standalone courses,” she says.
“Mandatory e-learning and programmes designed exclusively for senior or technical staff don’t typically work, because people aren’t acquiring the knowledge in a very practical, meaningful way.
“This approach rarely translates into behaviour change or true adoption.”
She pointed to stronger outcomes when learning is integrated into day-to-day tasks and supported through peer-led collaboration, as is done at inDrive.
“For example, pairing technical and non-technical employees accelerates adoption, as people learn from colleagues working in different roles and apply those insights directly to their own day-to-day tasks,” she said.
“We’ve also seen that the best AI ambassadors are often not the most senior people, and if upskilling only reaches leadership and tech teams, you create an AI-literate elite and a disengaged majority that simply widens the readiness gap.”
HR must move upstream in AI decisions
As AI becomes more embedded in how organisations operate, Solomatina believes people functions can no longer operate downstream of technology decisions.
“HR must move from being a function that responds to AI decisions to being the architect of how AI processes and tools will be implemented to ensure the well-being and productivity of all teams,” she said.
“That means the Chief People Officer needs to be at the table where automation decisions are made, not brought in afterwards to manage potential issues.”
This shift requires HR teams to develop new capabilities.
Data fluency and awareness of AI ethics are crucial for current and future HR roles, so that leaders can guide adoption and understand its impact on the company.
Where AI is used in hiring or performance tracking, Solomatina added, organisations should also be mindful of biases ingrained in legacy systems that will carry over into the new ones.
“If the data used to train models reflects historical biases, automation doesn’t remove those biases; it scales them,” she says, arguing that diversity and inclusion must be embedded into AI design, not added afterwards.
Across these many layers, AI readiness requires both planning and confidence in using the new tools, and that means trust and an environment of psychological safety must be nurtured within the fabric of the organisation in the first place.