No longer just the stuff of far-fetched science fiction movies, artificial intelligence (AI) is now very much a day-to-day part of our reality. In factories, in intelligent transportation, even in the medical field, AI is just about everywhere.
But what exactly is artificial intelligence? As AI becomes more ubiquitous, why is there a need for International Standards? And what are some of the topics surrounding its standardization?
A recent report by the McKinsey Global Institute suggests that investment in artificial intelligence (AI) is growing fast. McKinsey estimates that digital leaders such as Google spent between “USD 20 billion and USD 30 billion on AI in 2016, with 90 % of this allocated to R&D and deployment, and 10 % to AI acquisitions”. According to the International Data Corporation (IDC), by 2019, 40 % of digital transformation initiatives will deploy some form of AI, and by 2021, 75 % of enterprise applications will use AI, with expenditure growing to an estimated USD 52.2 billion.