25 April 2024

Are we really facing an “AI apocalypse”?

The explosion of ChatGPT in November 2022 sparked a gold rush. But as 100 million users signed up to experiment with the new generative artificial intelligence (AI) tool, many professionals feared the doomsday countdown had begun.

Up to 8 million UK jobs at risk from AI

When the IPPR, an independent think tank that helps to inform progressive policy change, released its latest report into the impact of generative AI, it created a media storm: 8 million jobs in the UK alone were identified as being at risk.

The IPPR report identified back office secretarial roles and entry-level customer service positions as being most at risk from automation. Also in the firing line are knowledge workers, especially those involved in collating and analysing large quantities of data. Similarly, many tech roles are exposed to AI disruption including software developers, web developers, computer programmers and coders. Finally, creative roles are seen as being very much at risk including copywriters, journalists, and graphic designers, given generative AI’s ability to create content from scratch.

11% of today’s tasks are at risk of being taken over by AI

According to the IPPR, 11% of tasks currently performed by workers are at risk. This could rise to 59% as the technology becomes more widely adopted into ‘business-as-usual’. It’s a scary-sounding statistic. But does it really signal the start of an “AI apocalypse”?

Innovations like driverless cars and humanoid robots are always quick to hit the headlines because of their ingenuity. However, the best practical examples of AI success are found where the technology automates parts of existing processes to augment human job roles. For example, our clients have:

  • Reduced invoice processing time by 50%, which freed up staff to focus on more valuable work.
  • Automated the processing of 250,000 invoices in a variety of languages across 55 operating units.
  • Cut order processing times fourfold, to below 60 seconds.

80% of business leaders believe AI will increase business efficiencies

Research by McKinsey shows that generative AI will “unleash the next wave of productivity” across customer operations, marketing and sales, software engineering, and R&D. It estimates the productivity gains from these use cases could add as much as $4.4 trillion annually to the global economy. Generative AI can deliver efficiencies by performing repetitive tasks with greater accuracy and speed, which frees employees up for more strategic work or customer engagement.

But the benefits extend far beyond automation and productivity.

Generative AI can improve decision making by revealing patterns and trends within data, which enables organisations to reach new, different, or better outcomes more quickly. When applied to sales and marketing, generative AI enables organisations to deliver more personalised communications that enhance the customer experience. And when applied to certain sectors, like manufacturing, AI-powered machines can improve employee safety by performing hazardous tasks.

So, while there’s been plenty of speculation about generative AI threatening jobs, there’s little evidence of the technology being actively used to replace humans. In fact, according to research from Deloitte, reducing headcount is the lowest priority, as business leaders favour investigating how generative AI can help them to improve content quality, drive competitive advantage, and scale employee expertise.

But remember, 4 in 5 AI projects fail

To derive value from your AI investments, look to the successful 20%, who treat AI projects as data projects. With that in mind, here are three practical considerations to ensure your investments in AI have a real business impact on your organisation.

Quality data is a prerequisite for successful AI

Data quality is the second-biggest reason AI projects fail (behind ‘unclear business objectives’), because it’s often overlooked in the excitement of exploring new technologies. Poor data quality affects AI outputs in three key ways:

  • Unnecessary noise: because AI models train on any and all available data, poor-quality records in the mix will degrade the results.
  • Hallucinations: some generative AI tools, like ChatGPT, are optimised for plausibility over accuracy, so they will make things up or cite sources that do not exist.
  • Bias: historical data sets often carry bias, and AI outputs will perpetuate it unless that bias is identified and accounted for.

Organisations must prioritise data capture, cleansing, and enrichment to give AI tools the right foundation to learn and grow. The smart way to achieve this is through intelligent data capture, which leverages AI and machine learning (ML) to extract key information from documents and understand their content.

Furthermore, because the technology continually learns, it requires less and less manual intervention from staff over time, with only genuine exceptions flagged for investigation.
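
As an illustration of that exception-flagging pattern, here is a minimal Python sketch of confidence-based routing. The `extract_invoice_fields` function and the 0.9 threshold are hypothetical stand-ins for whichever intelligent data capture engine and tolerance your organisation uses.

```python
# Minimal sketch of exception-based review: only low-confidence
# extractions are routed to a human; the rest flow straight through.
# `extract_invoice_fields` and the 0.9 threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class ExtractedField:
    name: str
    value: str
    confidence: float  # 0.0 - 1.0, reported by the capture engine

CONFIDENCE_THRESHOLD = 0.9

def extract_invoice_fields(document_path: str) -> list[ExtractedField]:
    """Placeholder for a real intelligent data capture call."""
    return [
        ExtractedField("invoice_number", "INV-1042", 0.98),
        ExtractedField("total_amount", "1,250.00", 0.62),  # e.g. a blurry scan
    ]

def route_document(document_path: str) -> None:
    fields = extract_invoice_fields(document_path)
    exceptions = [f for f in fields if f.confidence < CONFIDENCE_THRESHOLD]
    if exceptions:
        # Flag only the uncertain fields for a person to verify.
        print(f"{document_path}: manual review needed for "
              + ", ".join(f.name for f in exceptions))
    else:
        print(f"{document_path}: processed automatically")

route_document("invoices/2024-04/scan_001.pdf")
```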

Training is key to safe AI adoption

The notorious case of Levidow & Oberman, the law firm whose lawyers submitted a legal brief filled with fabricated case citations after asking ChatGPT to research their arguments, highlights the risks of using technology you don’t fully understand.

To successfully take advantage of the technology, employees need to develop their AI aptitude to understand how the tools work, and therefore, how to overcome their limitations. Yet just 14% of frontline employees have received training on generative AI, and less than half (47%) believe their company’s leadership team has sufficient knowledge to understand the risks and opportunities the technology presents.

At the most basic level, AI users need to know how to constrain the data inputs and use prompt engineering to create guardrails for the AI model. Training is crucial to the success of AI projects because it allows you to trust the outputs more and know that you haven’t fallen foul of security, privacy, or copyright laws.
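
As a simple example of what such a guardrail can look like, the sketch below uses a system prompt to constrain a model to the context it is given. It assumes the OpenAI Python SDK (v1 or later); the context, question, and model name are illustrative, and the same pattern applies with any provider.

```python
# Minimal sketch of a prompt-engineering guardrail: the system prompt
# constrains the model to the supplied context and tells it to admit
# when the answer isn't there, rather than guessing.
# Assumes the OpenAI Python SDK (v1+); context and question are examples.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONTEXT = """Invoice INV-1042 was approved on 12 March 2024
for a total of 1,250.00 GBP by the finance team."""

GUARDRAIL = (
    "Answer using ONLY the context provided. "
    "If the answer is not in the context, reply exactly: "
    "'I can't find that in the supplied documents.' "
    "Never invent figures, names, or sources."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0,  # favour consistency over creativity
    messages=[
        {"role": "system", "content": GUARDRAIL},
        {"role": "user", "content": f"Context:\n{CONTEXT}\n\nQuestion: Who signed the purchase order?"},
    ],
)

print(response.choices[0].message.content)
# Expected: "I can't find that in the supplied documents."
```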

Use flexible architectures and APIs for smooth integration

A solid integration architecture is essential for AI projects to be successful. The most effective way to achieve this is through Application Programming Interfaces (APIs), which define how different software components should interact with each other to exchange data seamlessly.

When designing APIs for AI models, pay special attention to endpoints, so data can flow freely; security, so data is protected in transit; versioning, so new changes don’t break existing integrations; and testing, so the integration performs as expected.
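
To make those considerations concrete, here is a minimal sketch of a versioned, authenticated endpoint that wraps an AI model. It assumes the FastAPI framework; the `/v1` prefix, the `X-API-Key` header, and the `summarise` placeholder are illustrative choices rather than a prescribed design.

```python
# Minimal sketch of a versioned, authenticated API endpoint for an AI model.
# Assumes FastAPI; the /v1 prefix, X-API-Key header, and summarise()
# placeholder are illustrative, not a prescribed design.

from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Document AI API")

class SummariseRequest(BaseModel):
    text: str

class SummariseResponse(BaseModel):
    summary: str
    model_version: str

def summarise(text: str) -> str:
    """Placeholder for the real model call."""
    return text[:200]

# Versioned path: a future /v2 can change behaviour without breaking /v1 clients.
@app.post("/v1/summarise", response_model=SummariseResponse)
def summarise_endpoint(body: SummariseRequest, x_api_key: str = Header(...)):
    # Security: reject requests without a valid key (use real key management in production).
    if x_api_key != "expected-key":
        raise HTTPException(status_code=401, detail="Invalid API key")
    return SummariseResponse(summary=summarise(body.text), model_version="v1.0.0")
```

An automated test against this endpoint, for example with FastAPI’s TestClient, can then confirm that both the happy path and the authentication failure behave as expected before each new version goes live.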

While any individual AI tool is best designed to address a specific use case, an API-led approach lets you create small, reusable building blocks that can be redeployed on subsequent projects. By adding smaller modules to augment the outputs, you create a future-proof foundation, which dramatically reduces your time-to-market.

If you’re keen to understand more about how AI can integrate into your current ways of working, contact us today and discover how AI can best augment job roles to free your people up to focus more time on strategic thinking and action.
