AI for fraud detection to triple by 2021

The Anti-Fraud Technology Benchmarking Report assessed data from more than 1,000 ACFE members regarding their organizations’ use of technology to fight fraud. It found that while only 13% of businesses currently use AI and machine learning to detect and deter fraudulent activity, another 25% plan to do so in the next year or two.

The report also found that 26% of organizations are using biometrics as part of their anti-fraud programs, with another 16% expecting to deploy biometrics by 2021, while more than half of respondents (55%) plan to increase their anti-fraud tech budgets over the next two years.

“As criminals find new ways to exploit technology to commit schemes and target victims, anti-fraud professionals must likewise adopt more advanced technologies to stop them,” said Bruce Dorris, JD, CFE, CPA, president and CEO of the ACFE.

Read entire post AI for fraud detection to triple by 2021 | Michael Hill | InfoSecurity

Standards cooperation is key to making AI and smart cities a reality

Hosted by a different member each year, the meeting of the Global Standards Collaboration, GSC-22, was jointly organized by ISO and the IEC (International Electrotechnical Commission). The two-day event attracted participants from around the world, with notable representation from those countries where information communication technology (ICT) is set to play an increasingly strong role in the economy.

Standardization is essential to artificial intelligence – its future and its wide adoption across the world

The first day was dedicated to innovative presentations and lively panel discussions on the theme of smart sustainable cities. GSC members shared their views on standards relevant to cities that face substantial challenges in choosing suitable standards for their requirements.

Recognizing the fast pace of technological evolution combined with rapidly growing populations, members encouraged continued discussion, particularly on the development of guidelines and standards to enable seamless data exchange and interoperability.

Read entire article Standards cooperation is key to making AI and smart cities a reality | Barnaby Lewis | ISO.org

New ISO standard puts humans at the centre of business

From the advent of the Internet to what is now known as the Fourth Industrial Revolution, the latest cutting-edge technologies – among them robotics, artificial intelligence (AI), the Internet of Things – are fundamentally changing how we live, work and relate to each other.

ISO 27501:2019, The human-centred organization – Guidance for managers, can help organizations to meet these challenges.

The issue for business in this new era is not so much the bottom line, or even just corporate social responsibility; it is also about taking a human-centred approach to the future of work and finding the right tools to ensure that organizations are successful and sustainable.

The likes of AI are presenting a great opportunity to help everyone – leaders, policy makers and people from all income groups and countries – to lead more enriching and rewarding lives, but they are also posing challenges for how to harness these technologies to create an inclusive, human-centred future.

Read entire post New ISO standard puts humans at the centre of business | Elizabeth Gasiorowski-Denis | ISO.org

Why AI, 3D printing and IoT will reach new heights in 2019

The comments below explore why there will be a further rise in the use of technologies such as AI, 3D printing and IoT as they become more affordable and are increasingly seen as vital tools for manufacturers.

Rothwell says: “In the manufacturing sector, we have been seeing the rise of a number of key technologies recently, specifically artificial intelligence (AI), Internet of Things (IoT) and additive manufacturing (3D printing).

We expect, in 2019, to see this trend continue, especially in AI and IoT adoption, as manufacturers are increasing investment in these now affordable technologies which are underpinning more accurate decision making and unlocking new business models, such as shifting towards service-based product offerings.”

Read entire article Why AI, 3D printing and IoT will reach new heights in 2019 | Dealer Support

Measuring up to the Fourth Industrial Revolution in the latest ISOfocus

The Fourth Industrial Revolution is expected to create up to USD 3.7 trillion in value by 2025, according to the 2018 World Economic Forum/McKinsey & Company white paper.

Contrary to some negative perceptions, countries and companies have an opportunity to counter and potentially reverse the slowdown in productivity by diffusing and adopting technology at scale.

The November/December 2018 issue of ISOfocus examines how government, businesses and societies will navigate the increasing integration of technologies into business and production processes. Among the experts interviewed are faculty, companies, small business leaders and standards professionals from around the world, in fields ranging from robots to industrial data to artificial intelligence.

The latest ISOfocus issue showcases some of the new opportunities for ISO standards by highlighting the industry sectors most likely to benefit.

Read entire article Measuring up to the Fourth Industrial Revolution in the latest ISOfocus | Elizabeth Gasiorowski-Denis | ISO.org

The new frontier for artificial intelligence

No longer just a fictional theme for far-fetched science fiction movies, artificial intelligence is now very much a day-to-day part of our reality. In factories, in intelligent transportation, even in the medical field, artificial intelligence (AI) is just about everywhere.

But what exactly is artificial intelligence? As AI becomes more ubiquitous, why is there a need for International Standards? And what are some of the topics surrounding its standardization?

A recent report by the McKinsey Global Institute suggests that investment in artificial intelligence (AI) is growing fast. McKinsey estimates that digital leaders such as Google spent between “USD 20 billion to USD 30 billion on AI in 2016, with 90 % of this allocated to R&D and deployment, and 10 % to AI acquisitions”. According to the International Data Corporation (IDC), by 2019, 40 % of digital transformation initiatives will deploy some variation of AI and by 2021, 75 % of enterprise applications will use AI, with expenditure growing to an estimated USD 52.2 billion.

Read entire article The new frontier for artificial intelligence | Elizabeth Gasiorowski-Denis | ISO.org

Four takeaways from Amazon Alexa’s bone-chilling, unprompted laughter

Multiple people have been spooked by Amazon’s virtual AI assistant, Alexa, laughing on its own. Amazon has promised it will implement changes to avoid similar incidents in the future, but it’s good to look at what we could learn from all of this.

Posted on Futurism | By Dom Galeon

Reports of Amazon’s virtual artificial intelligence (AI) assistant Alexa behaving strangely have recently made the rounds on the news and social media. Several Alexa-enabled devices have reportedly started talking or laughing without being prompted, or doing so instead of performing a command. Naturally, Alexa owners who heard this freaked out, with many resorting to turning off the AI assistants or unplugging their devices.

These incidents didn’t go unnoticed by Amazon, which immediately set out to fix the bug. On March 7, Amazon released a statement explaining Alexa’s sudden gleeful outbursts: “In rare circumstances, Alexa can mistakenly hear the phrase ‘Alexa, laugh,’” the company said.


As a result, Amazon decided to change the phrase to “Alexa, can you laugh?” which they said would be “less likely to have false positives.” The phrase “Alexa, laugh” has also been disabled. Amazon also noted that they were “changing Alexa’s response from simply laughter to ‘Sure, I can laugh’ followed by laughter.”

First, let’s set a couple of things straight. Alexa laughing at seemingly random moments, coupled with little acts of defiance, sure sounds chillingly familiar — but this (probably) isn’t a sign of an AI takeover. What it is, rather, is a chance to reconsider some of the realities of living with virtual AI assistants today, and in the future.

This should probably go without saying. One of the most promising — but also, arguably, disconcerting — realities of AI in mobile devices and the Internet of Things is that they are always on.

Read entire article Four Takeaways From Amazon Alexa’s Bone-Chilling, Unprompted Laughter | Futurism

Theresa May calls for ethical rules to govern use of artificial intelligence

The Prime Minister said she wanted to make Britain one of the best places in the world to start and grow tech businesses.

Posted on BP

Theresa May, British Prime Minister | World Economic Forum

Theresa May has issued a call for international co-operation to develop ethical rules for the use of technological breakthroughs in areas like artificial intelligence (AI).

Speaking at the World Economic Forum in Davos, the Prime Minister said she wanted to make the UK a world leader in innovative technologies, including AI, which could provide “a path to deliver prosperity and growth for all our people”.

However, she said that new norms and regulations must be developed to allay public concerns over issues like the control of private data, the disappearance of traditional jobs and the abuse of social media.


Read entire article Theresa May calls for ethical rules to govern use of artificial intelligence

Artificial intelligence predicts corruption

Researchers from Spain have created a computer model based on neural networks which predicts in which Spanish provinces cases of corruption are more likely to appear, as well as the conditions that favor their appearance.

Posted on ScienceDaily

University of Valladolid
Spain

Researchers from the University of Valladolid (Spain) have created a computer model based on neural networks which predicts in which Spanish provinces cases of corruption are more likely to appear, as well as the conditions that favor their appearance. This alert system confirms that the probability increases when the same party stays in government for more years.

Two researchers from the University of Valladolid have developed a model with artificial neural networks to predict in which Spanish provinces corruption cases are more likely to appear one, two and even three years in advance.

The study, published in Social Indicators Research, does not mention the provinces most prone to corruption so as not to generate controversy, explains one of the authors, Ivan Pastor, to Sinc, who recalls that, in any case, “a greater propensity or high probability does not imply corruption will actually happen”.

The data indicate that the real estate tax, the exaggerated increase in the price of housing, the opening of bank branches and the creation of new companies are some of the variables that seem to induce public corruption.

Read entire article Artificial intelligence predicts corruption | ScienceDaily

When the threats get weird, the security solutions get weirder

Next year, our phones and desktops will be ground zero for an arms race between bizarre new threats and strange new innovations in cybersecurity.

The world of security is getting super weird. And the solutions may be even weirder than the threats!

I told you last week that some of the biggest companies in technology have been caught deliberately introducing potential vulnerabilities into mobile operating systems and making no effort to inform users.

One of those was introduced into Android by Google. In that case, Android had been caught transmitting location data that didn’t require the GPS system in a phone, or even an installed SIM card. Google claimed that it never stored or used the data, and it later ended the practice.

Tracking is a real problem for mobile apps, and this problem is underappreciated in considerations around BYOD policies.


Read complete article When the threats get weird, the security solutions get weirder | Mike Elgan | Computer World

10 grand challenges we’ll face by 2050

Editing genes, ageing populations, rising sea levels… the world is moving faster than ever. What will those trends mean for our society over the next 30 years?

Over the last few months, BBC Future Now has been examining some of the biggest problems humankind faces right now: land use to accommodate exploding populations, the future of nuclear energy, the chasm between rich and poor – and much more.

But what about the big challenges that are brewing for the future? In 30 years, what might be on the world’s agenda to solve? It’s impossible to predict, but we can get clues from how current trends in science and technology may play out. Here are just some of the potential big issues of tomorrow:

Genetic modification of humans

Debates among scientists started roaring last year over a new technology that lets us edit human DNA. It’s called Crispr (pronounced ‘crisper’) and it’s a means of altering people’s DNA to carve diseases like cancer out of the equation.

Sounds great, right? But what if it takes a dark ethical turn into a eugenics-esque vanity project, churning out ‘designer babies’ by selecting embryos for a certain level of intelligence or particular physical characteristics?


Read complete article 10 grand challenges we’ll face by 2050 | BBC

When will AI become less artificial and more intelligent?

From Wikipedia:

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

The idea is to make machines (software and hardware) function in a manner similar to humans, automating the human capacity to learn, make decisions and accumulate experience.
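
As a minimal sketch of the “intelligent agent” definition quoted above (a device that perceives its environment and takes actions toward a goal), consider a toy thermostat agent; every name here is hypothetical, invented purely for illustration:

```python
class Room:
    """Toy environment: a room whose temperature drifts down unless heated."""
    def __init__(self, temp=15.0):
        self.temp = temp

    def observe(self):
        # The agent's percept: the current temperature.
        return self.temp

    def apply(self, action):
        # Heating raises the temperature; idling lets it drift down.
        self.temp += 1.0 if action == "heat" else -0.5

def thermostat(percept, target=20.0):
    """Policy: choose the action that moves the room toward the goal."""
    return "heat" if percept < target else "idle"

room = Room()
for _ in range(20):                  # the perceive-act loop
    room.apply(thermostat(room.observe()))

print(round(room.temp, 1))           # settles near the 20.0 degree target
```

The loop itself is the whole definition: observe, choose, act. Everything interesting about an AI system lives in how the policy chooses.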


AI algorithms fall into many classes. Wikipedia reports these:

  1. Search and optimization
  2. Logic
  3. Probabilistic methods for uncertain reasoning
  4. Classifiers and statistical learning methods
  5. Neural networks
  6. Deep feed-forward neural networks
  7. Deep recurrent neural networks
  8. Control theory
  9. Languages
  10. Evaluating progress

Most of the above techniques, if not all of them, are pretty old; anyone with some grey hair saw this stuff 20-30 years ago. We all agree that these techniques are quite successful and that AI is still in its infancy. But that is not the point of this blog.

The question we wish to ask is this: what, at this point, can we do to make AI less artificial and more intelligent, almost indistinguishable from humans? What can we do to take AI to another level, to provoke a quantum leap?

Let us first examine what the word “intelligence” means by looking at its etymology. It comes from the Latin intelligentia: understanding, knowledge, power of discerning, from the assimilated form of inter (“between”) + legere (“choose, pick out”). Let us focus on the “choose”, “pick out” part in conjunction with the “inter”, i.e. “between”. Humans make choices, select options, weigh scenarios and discriminate between different solutions hundreds if not thousands of times a day. Sometimes this is done using tools; in other cases it is based on gut feeling, intuition or experience. It is this capacity to select an option or a strategy that makes humans unique. Life, with all its complexities and nuances, offers almost infinite ways and means of setting goals and then reaching them. No two people will do things in exactly the same manner.

Now we don’t want to get too philosophical here. The idea is simply to state that a generic problem-solving process goes more or less like this:

  1. Problem statement (definition)
  2. Verification if the problem actually has a solution
  3. Selection of solution method (there may be many)
  4. Solution
  5. Verification of result
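
The five steps above can be sketched as a generic loop. This is only a minimal illustration under made-up names (no real library is being referenced):

```python
def solve(problem, methods, has_solution, verify):
    """Generic problem-solving loop mirroring the five steps above."""
    if not has_solution(problem):          # step 2: does a solution exist?
        return None
    for method in methods:                 # step 3: pick a solution method
        result = method(problem)           # step 4: solve
        if verify(problem, result):        # step 5: verify the result
            return result
    return None

# Toy run: find a root of f(x) = x^2 - 4 among two candidate strategies.
f = lambda x: x * x - 4                    # step 1: problem statement
root = solve(f,
             methods=[lambda g: -3.0, lambda g: 2.0],
             has_solution=lambda g: True,
             verify=lambda g, x: abs(g(x)) < 1e-9)
print(root)  # 2.0
```

Note that step 3 admits many candidate methods; which one a human picks, and in what order, is exactly the selection capacity discussed above.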

This is, of course, a very rough description. Consider a generic example: suppose that one has to cross a network (or some domain) from left to right.

Suppose that we identify three possibilities, represented by three different paths across the network. Suppose, just for the sake of discussion, that each of these paths entails very similar energy expenditure, time, cost, risks, etc. Which path would you choose? All things being equal (or not necessarily all), an experienced and wise individual (or a good engineer!) would probably select the least complex alternative. Humans instinctively imagine multiple scenarios and assess their complexity, trying to stay away from the ones that will potentially make life complex in the next few minutes, days or years. This logic applies to running a family, a corporation or a battle scenario. High complexity leads to fragility.
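
A toy version of this choice might look as follows; the path names, hop counts and costs are all invented for illustration, and the number of hops stands in as a crude complexity proxy:

```python
# Three candidate paths across a network, from "start" to "end".
paths = {
    "A": ["start", "n1", "end"],                              # 2 hops
    "B": ["start", "n1", "n2", "n3", "end"],                  # 4 hops
    "C": ["start", "n1", "n2", "n3", "n4", "n5", "end"],      # 6 hops
}
cost = {"A": 10.0, "B": 10.0, "C": 10.0}   # near-identical energy/time/cost

def least_complex(paths, cost):
    """Among minimum-cost paths, prefer the structurally simplest one."""
    best_cost = min(cost.values())
    candidates = [p for p, c in cost.items() if c == best_cost]
    return min(candidates, key=lambda p: len(paths[p]) - 1)  # fewest hops

print(least_complex(paths, cost))  # A
```

Cost alone cannot separate the three options here; only the complexity tie-breaker does, which is the point of the argument above.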

The key, therefore, is to be able to measure complexity. Since 2005, thanks to Ontonix this is possible. We have now an Italian Standard, the UNI 11613, which shows corporations how to do a “Business Complexity Assessment” and work is in progress to deliver a similar ISO standard, the ISO 22375 (“Guidelines for Business Complexity Analysis”).

The bottom line is that today we have a consolidated technology known as QCM, or Quantitative Complexity Management. QCM provides measures of complexity, not sensations. The way QCM works is simple. Suppose you need to design a turbine and you come up with two candidate designs, each of which may be represented by a complexity map. The maps illustrate which parameters of our turbine are correlated with which other parameters. More correlations – interdependencies – mean the system is intricate, difficult to understand and to fix. The first solution has a complexity of 3.03 cbits, with 12 correlations between the 10 parameters.

The more complex solution has a complexity of 5.2 cbits, with 19 correlations between the same ten parameters.

Provided both solutions are acceptable, the less complex one is clearly the better choice. This is intuitive.
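
The cbit figures above come from Ontonix’s proprietary QCM engine, so the following is only a rough stand-in, not the real metric: one can approximate the interdependency count of a complexity map by counting significant pairwise correlations among design parameters. All data here is synthetic:

```python
import numpy as np

def count_correlations(samples, threshold=0.7):
    """Count parameter pairs whose absolute Pearson correlation exceeds
    the threshold -- a crude proxy for the links in a complexity map.

    samples: array of shape (n_observations, n_parameters).
    """
    corr = np.corrcoef(samples, rowvar=False)
    n = corr.shape[0]
    return sum(1
               for i in range(n)
               for j in range(i + 1, n)
               if abs(corr[i, j]) > threshold)

rng = np.random.default_rng(0)
# Design A: 10 loosely coupled (independent) parameters.
a = rng.normal(size=(200, 10))
# Design B: 10 parameters all driven by a common factor -> heavily coupled.
factor = rng.normal(size=(200, 1))
b = factor + 0.3 * rng.normal(size=(200, 10))

# The coupled design shows far more interdependencies than the loose one.
print(count_correlations(a), count_correlations(b))
```

All else being equal, the design with fewer links (design A here) would be the one to prefer, mirroring the turbine comparison above.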

We believe that AI-based systems should incorporate a QCM layer to measure the complexity of the solutions provided by the computational kernel – clearly the kernel would need to provide multiple solutions – so as to select the least complex ones that satisfy objectives and constraints. If done in real time, it will be difficult to distinguish man from machine.

One important difference between man and machine is that humans are capable of original and creative ideas. It is very difficult to hard-wire something like that into an algorithm. But QCM can help here too. The two simple complexity maps illustrated above are in reality topological sums of a number of other maps called modes or attractors. These modes may be selected and assembled in a myriad of ways, some of which may be counter-intuitive or, simply, original. In spaces having a large dimension, the number of such modes can be very large indeed.

In essence, we propose to move from AI to AI+QCM. Imagine the benefits if AI, as it penetrates industry and pervades our lives and homes, were to reduce complexity wherever possible. Imagine, for example, driving strategies for autonomous vehicles that reduce traffic complexity. Just imagine how less complexity can mean more efficiency, fewer delays, less waste and less risk.

Our world is quickly getting more complex

Every year we measure the complexity of the world as a system, based on over 250 000 parameters published by the World Bank. We can say that today the world is approximately 500% more complex than it was in the early 1970s. Moreover, we have created technologies that are rapidly increasing complexity everywhere. Think of the Internet of Things. How far do we think we can take things without actually managing complexity? Can we just keep growing more complex with impunity? Certainly not. There exists a so-called critical complexity, a sort of Pandora’s box, except that it tells you how far you can go. You need to stay away from critical complexity if you want to avoid a systemic collapse. AI, in conjunction with QCM, can play a crucial role: not just in delivering sexier, more human-like solutions to a bunch of problems, but also in helping our global society stay on a path of resilient sustainability.


Enjoyed this publication? Here is more by Jacek Marczyk!

Who rates ratings? – Failure is often not contemplated in a model. No model-building law forces one to do so. See how this problem led to catastrophes!

In Math we Trust. That is precisely the problem! – How resilient is your business? Can you afford not to know? The economy is a dynamic system which is far too complex for us to understand.

Systemic Resilience Analysis: Supercomputers provide new tools for regulators, investors and governments – Discover Quantitative Complexity Theory, a different approach and a new set of analytical tools to address modern day challenges.



ABOUT THE AUTHOR – Jacek Marczyk, author of nine books on uncertainty and complexity management, developed the Quantitative Complexity Theory (QCT) and Quantitative Complexity Management (QCM) methodologies in 2003, along with a new complexity-based theory of risk and rating. In 2005 he founded Ontonix, a company delivering complexity-based early-warning solutions with particular emphasis on systemic aspects and turbulent economic regimes. He introduced the Global Financial Complexity and Resilience Indices in 2013. Since 2015 he has been Executive Chairman of Singapore-based Universal Ratings. Read more publications by Jacek Marczyk