I have been focused on promoting emerging technologies across all UK businesses. Technologies such as artificial intelligence are not just for academics and web giants like Google or Amazon.
Gone are the days when you needed immense computing power and huge finances to approach artificial intelligence (AI). However, there is always hype around emerging tech, which raises the question: how do we make artificial intelligence explainable?
Therefore, to build on the ‘Beyond the Cloud’ results, I looked at two studies which emphasise the importance of businesses adopting AI to remain competitive.
The First Study
The ‘Let’s get real about AI’ study by OC&C consultants found that artificial intelligence (AI) spending increased to $219bn globally in 2018. This was equivalent to 7% of total IT expenditure worldwide. The UK contributed $12bn of this expenditure.
The Second Study
The Gartner Trends 2019 report predicted that the computational resources used in AI will increase fivefold from 2018 to 2023. On this trajectory, the report suggests, explainable artificial intelligence (AI) would come to drive most business decisions.
Additionally, the survey concluded that by 2022, 30% of customers in mature markets will rely on AI to decide what they eat, what they wear and where they live.
However, the report noted that a lack of governance and the unintended consequences of AI will slow down development in explainable artificial intelligence.
A Change of Focus
In this blog, I did not want to focus on more of the positives of AI, as many businesses still do not deploy the technology.
Instead, I wanted to understand the concerns with implementing the technology and, in the same vein, why organisations might not want to promote it.
The Basic Disadvantage of Artificial Intelligence
I realise that I have written a lot of blogs about the importance of AI in business but have never covered the basics.
AI systems typically learn ‘rules’ from exposure to outcomes rather than building up from simple rules. As a result, the system may not be able to explain why a particular result was achieved; it does not have the ‘X means Y’ concept. This lack of explainability is the problem organisations encounter, and a common reason for not implementing the tech.
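The contrast can be sketched in a few lines of code. This is a minimal, hypothetical example (the loan scenario, data and thresholds are all made up for illustration): a hand-written rule carries its own explanation, while a system that only learned from past outcomes can reproduce decisions without being able to say why.

```python
# Rule-based system: the 'X means Y' concept is explicit.
def rule_based_approve(income, debt):
    """Approve if income exceeds debt threefold; the rule IS the explanation."""
    return income > 3 * debt

# Outcome-learned system: a toy 1-nearest-neighbour 'model' over past decisions.
# It reproduces outcomes but cannot articulate a rule.
history = [((90, 10), True), ((30, 20), False), ((60, 15), True), ((20, 15), False)]

def learned_approve(income, debt):
    # Answer comes from similarity to a past case, not from a stated rule.
    nearest = min(history,
                  key=lambda h: (h[0][0] - income) ** 2 + (h[0][1] - debt) ** 2)
    return nearest[1]

print(rule_based_approve(50, 10))  # True, and we can say why: 50 > 3 * 10
print(learned_approve(50, 10))     # also True, but the only 'reason' is a past case
```

Both systems give the same answer here, but only the first can justify it; a real machine-learning model is vastly more complex than this toy, which is exactly why its decisions are harder to explain.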
Problems also arise when human operators are removed from the loop but the technology struggles to conform to regulations.
Where Explainable Artificial Intelligence Comes In
The news also tends to report the negatives of AI adoption. An example of an AI disaster is the unfortunate story of the woman in Arizona killed by a driverless car. It was later argued that the AI system should have been explainable artificial intelligence, able to justify its decisions, which it could not do.
Therefore, Tola Alade from Lloyds Banking Group highlighted the importance of justifying the reasons for using this technology before deploying AI to make decisions.
Problems which are not Highlighted
AI may not always give optimal results, even when it is trained on known data. In effect, a model takes a sample of the population and builds its predictions around the average behaviour.
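A tiny numerical sketch shows why modelling the average can fail individuals. The preference data below is entirely made up for illustration: a model that predicts the majority choice for everyone is right more often than not overall, yet wrong for every member of the minority.

```python
from statistics import mean

# Hypothetical preference data: 1 = prefers option A, 0 = prefers option B.
population = [1, 1, 1, 0, 0, 0, 1, 1, 0, 1]  # 60% prefer A

# Predict the majority ('average') preference for every individual.
average = mean(population)
model_prediction = 1 if average >= 0.5 else 0

accuracy = sum(p == model_prediction for p in population) / len(population)
print(f"Predicting A for everyone is {accuracy:.0%} accurate")  # 60%
# ...which means 40% of individuals get a recommendation that is wrong for them.
```

The model is "correct" by its own accuracy metric, which is precisely the trap the closing quote below warns about.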
However, the key closing point is this: “just because 50% of the population does something does not make it the right solution”.