ISO 20252 tackles market research with confidence

From helping companies develop and promote products and services to analysing our behaviour as consumers, market research contributes to many aspects of modern life. But does it always deliver? And is it applied globally and consistently? The newly updated ISO 20252 will ensure it delivers on its promise.

When firms report market research results that aren’t based on sound research principles, they are not reducing risk

Market research helps reduce risk. Good quality research provides information and understanding, which allow users to more effectively value alternatives and make better decisions.

Market research analyses are a go-to resource for many professionals embarking on a new business venture: they save time, provide fresh insight into the market being entered, and help refine and polish a strategy. So, when firms report market research results that aren’t based on sound research principles, they are not reducing risk; in fact, they may be inadvertently increasing it.

Read entire post ISO 20252 tackles market research with confidence |  Clare Naden | ISO.org

Alternative data, investment strategies and the principle of incompatibility

Alternative data – a trendy subject nowadays – is non-traditional data that can be used in the investment process. A growing proportion of hedge fund managers plan to use this kind of data, together with new analytics, in their investment processes. Examples of alternative data are:
  • Social media
  • Sentiment
  • Web crawls
  • Satellite & weather
  • Consumer credit
  • Internet of Things
  • Mobile App usage
  • Advertising
  • Store locations
  • Employment
The amount of such data may be huge. The logic behind adopting alternative data is: “Large amounts of data lead to more available information for better analysis.”

This statement is not always true. Let’s see why. Before you start to solve a numerical problem, you should check its conditioning. An ill-conditioned problem will produce very fragile and unreliable results – no matter how elegant and sophisticated the solution you come up with, it may be irrelevant or simply wrong.

Simple and basic problem

Consider a simple and basic problem, a linear system of equations: y = A x + b. If A is ill-conditioned, the solution will be very sensitive to the entries of both b and y, and errors therein will be amplified by the so-called condition number of A, k(A). That’s as far as simple linear algebra goes. However, most problems in life cannot be tackled via a linear matrix equation (or any other type of equation, for that matter). This does not mean, though, that they cannot be ill-conditioned – quite the contrary.
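
As a minimal sketch of what ill-conditioning means in practice – the matrix and numbers below are our own toy example, not taken from the post – the following NumPy snippet solves a nearly singular 2×2 system y = A x + b and shows a tiny perturbation of y being amplified by roughly k(A):

import numpy as np

# A nearly singular 2x2 matrix: k(A) is about 40,000.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.zeros(2)
y = np.array([2.0, 2.0001])

x = np.linalg.solve(A, y - b)                 # nominal solution of y = A x + b
print("k(A):", np.linalg.cond(A))             # ~4e4
print("nominal x:", x)                        # ~[1, 1]

# Perturb y by about 0.01% and watch the solution move by orders of magnitude more.
y_noisy = y * (1 + 1e-4 * np.array([1, -1]))
x_noisy = np.linalg.solve(A, y_noisy - b)
print("perturbed x:", x_noisy)                # ~[5, -3]
rel_in = np.linalg.norm(y_noisy - y) / np.linalg.norm(y)
rel_out = np.linalg.norm(x_noisy - x) / np.linalg.norm(x)
print("error amplification:", rel_out / rel_in)   # close to k(A)

The ratio of output error to input error lands close to the condition number – exactly the amplification referred to above.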

Most problems in life cannot be tackled via a linear matrix equation (or any other type of equation, for that matter)

The numerical conditioning of a problem should always be checked before one attempts its solution. How often is this done? Very rarely. Once you’ve determined that a problem is well-conditioned, there is the issue of determining whether it admits one solution, multiple solutions or none. Those who practice math on a daily basis know this well. Nothing new under the Sun. However, if you collect huge amounts of data and you don’t check its numerical conditioning before you start to work on it, you may be playing an extravagant video game.

There is one fundamental issue, in our view, which makes huge problems and huge data sets difficult to solve – high complexity. Operating close to critical complexity – each system has such a threshold – means the problem (or system) is very ill-conditioned and dominated by uncertainty, i.e. it is chaotic. Imagine the linear system of equations y = A x + b in which the entries of A are not crisp values but fuzzy. In other words, suppose that a particular entry a_ij assumes values within a certain range and that its “exact” value is unknown. This changes the situation dramatically, as the system can admit a huge number of solutions.
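
To make the fuzzy-coefficient point concrete, here is a small sketch – again our own toy numbers, not the author’s – in which a single entry of the same ill-conditioned system is only known to lie within a narrow range; sampling that range yields a whole family of markedly different solutions:

import numpy as np

rng = np.random.default_rng(0)
y = np.array([2.0, 2.0001])
b = np.zeros(2)

# The entry a_22 is "fuzzy": it is only known to lie somewhere in a narrow range.
solutions = []
for _ in range(1000):
    a22 = rng.uniform(1.00005, 1.0003)
    A = np.array([[1.0, 1.0], [1.0, a22]])
    solutions.append(np.linalg.solve(A, y - b))

solutions = np.array(solutions)
print("x1 ranges from", solutions[:, 0].min(), "to", solutions[:, 0].max())   # ~0.0 to ~1.67
print("x2 ranges from", solutions[:, 1].min(), "to", solutions[:, 1].max())   # ~0.33 to ~2.0

A coefficient that wanders over a range of roughly 0.03% is enough to smear the solution across a wide interval; if the range were allowed to cross the value that makes A singular, the spread would be unbounded.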

Chaotic soup of numbers

Suppose, now, that a huge set of alternative data has been collected. How can one determine whether this data is of value, i.e. whether it contains structure and useful rules, or whether it is just a chaotic soup of numbers? This can be done easily by measuring the data set’s complexity and the corresponding critical complexity. Their ratio is a good proxy for numerical conditioning (playing the role of k(A)). Very simple examples of what we’re talking about are shown in the figure below.

The case on the left-hand side corresponds to a low-complexity, high-correlation situation from which one may extract a crisp and useful rule (i.e. ‘if X increases then Y increases’). At the other extreme the data is uncorrelated and no rule may be extracted.

When high complexity kicks in there is no such thing as accuracy or precision

So, the complexity/critical complexity ratio for a data set is a sort of data-set rating – a low value points to data which can deliver useful information, while values close to 1 reveal a situation dominated by chaos and noise.
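
The complexity measure used in the post comes from a proprietary engine and is not defined here, so the sketch below substitutes a deliberately crude stand-in – one minus the mean absolute off-diagonal correlation – purely to show how a single number can separate rule-bearing data from a chaotic soup; the function name and the measure itself are our assumptions:

import numpy as np

rng = np.random.default_rng(1)

def chaos_ratio(data):
    # Crude stand-in for the complexity / critical-complexity ratio:
    # values near 0 suggest extractable structure, values near 1 suggest noise.
    # (The post's actual complexity measure is proprietary and not reproduced here.)
    c = np.corrcoef(data, rowvar=False)
    off = c[~np.eye(c.shape[0], dtype=bool)]
    return 1.0 - np.mean(np.abs(off))

n = 500
x = rng.normal(size=n)
structured = np.column_stack([x, 2 * x + rng.normal(scale=0.1, size=n)])   # 'if X increases then Y increases'
chaotic = rng.normal(size=(n, 2))                                          # no extractable rule

print("structured data:", round(chaos_ratio(structured), 2))   # close to 0
print("chaotic data:   ", round(chaos_ratio(chaotic), 2))      # close to 1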

One last point. Data and the corresponding analyses must be relevant, not accurate. When high complexity kicks in there is no such thing as accuracy or precision. The Principle of Incompatibility, coined by L. Zadeh, states that ‘high complexity is incompatible with high precision’. In other words, when complexity is high, ‘precise statements lose relevance’ and ‘relevant statements lose precision’.

Examples

Precise statement that is irrelevant: the probability of default of a given corporation in the next 3 years is 0.025%.

Relevant statement that is not precise: there is a high probability that it may rain tomorrow.

Alternative data, or Big Data, can be very complex. How complex? Well, you need to measure it, but suppose that its complexity is indeed high. In such circumstances don’t delude yourself – the information you extract from it will not be precise. Adding more data is not synonymous with adding more information.

The bottom line: every set of Alternative Data (or Big Data) should have a complexity-based conditioning, or rating, attached to it. If this ratio is close to 1, you’ll never extract useful information from the data, no matter what method you use. Handle with care.

How to ‘fix’ standard deviations

Standard deviations are a popular and often useful measure of dispersion. To be sure, a standard deviation is merely the root-mean-square deviation from the mean. It also doesn’t take into account the shape of the probability distribution function (this is done better using, for example, entropy, which is a more versatile measure of dispersion).
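
As a quick illustration of the point about distribution shape – the bimodal example below is ours, not the author’s – the following sketch builds two samples with essentially the same standard deviation but very different entropies, using SciPy’s differential_entropy estimator:

import numpy as np
from scipy.stats import differential_entropy

rng = np.random.default_rng(2)
n = 100_000

# Two samples with (roughly) the same standard deviation but very different shapes.
gaussian = rng.normal(0.0, 1.0, n)
bimodal = np.where(rng.random(n) < 0.5, -1.0, 1.0) + rng.normal(0.0, 0.05, n)
bimodal *= gaussian.std() / bimodal.std()        # match the standard deviations exactly

print("std      gaussian:", round(gaussian.std(), 3), " bimodal:", round(bimodal.std(), 3))
print("entropy  gaussian:", round(differential_entropy(gaussian), 3))   # ~1.42
print("entropy  bimodal: ", round(differential_entropy(bimodal), 3))    # strongly negative

Both samples are equally ‘dispersed’ by the standard-deviation yardstick, yet the entropy immediately tells the two shapes apart.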

Standard deviations, however, may be ‘adjusted’ to take into account an interesting aspect of data, namely complexity. Let’s see an example. Say you have a portfolio of 28 stocks, all of which are independent (i.e. uncorrelated). In such a case the complexity map of the portfolio is the one shown below.

One computes the standard deviation of each stock and may then use it to measure the volatility of the portfolio or other measures of risk. Suppose now that some of the stocks are in fact correlated, and that the complexity map is now the one below.
Stocks 5 and 7, for example, are correlated with numerous other stocks, while 3, 6 and 25 are uncorrelated. This is reflected in the Portfolio Complexity Profile (or Portfolio Complexity Spectrum), which ranks the complexity footprint of each stock in the portfolio. This is illustrated below.
Stock 7 has a footprint of just over 17%, while stock 5 is responsible for nearly 15% of the complexity of the portfolio.
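
The Complexity Profile in the post is produced by a proprietary tool, so the sketch below uses a hypothetical proxy – each stock’s share of the total absolute off-diagonal correlation – purely to show the kind of per-stock ranking being described; the toy returns, the factor structure and the function name are all our assumptions:

import numpy as np

def complexity_footprints(returns):
    # Hypothetical stand-in for the Portfolio Complexity Profile: each stock's
    # share of the total absolute off-diagonal correlation (fractions summing to 1).
    c = np.abs(np.corrcoef(returns, rowvar=False))
    np.fill_diagonal(c, 0.0)
    per_stock = c.sum(axis=1)
    return per_stock / per_stock.sum()

rng = np.random.default_rng(3)
n_obs, n_stocks = 750, 28
returns = rng.normal(size=(n_obs, n_stocks))
factor = rng.normal(size=n_obs)
for i in (4, 6, 10, 12):                    # make stocks 5, 7, 11 and 13 co-move
    returns[:, i] += 1.5 * factor

footprints = complexity_footprints(returns)
for i in np.argsort(footprints)[::-1][:5]:  # the five largest contributors
    print(f"stock {i + 1:2d}: footprint {footprints[i]:.1%}")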

Clearly, just as in the previous case, one can calculate the standard deviations of all stocks one by one. However, in the first case all stocks were uncorrelated, whereas here some of them are correlated. These two cases are obviously different, in particular from a structural point of view. The question now is this: why not use the information in the Complexity Profile to ‘adjust’ standard deviations by adding a correction originating from complexity? Clearly, a stock that is heavily correlated with other stocks in a portfolio could be more ‘dangerous’ than an uncorrelated one. Evidently, it is the job of covariance to express this:

Covariance(i,j) = Correlation(i,j) x STD(i) x STD(j)
But why not take this into account also at standard deviation level? One simple way to accomplish this is the following:
Adjusted STD = (1 + Complexity contribution) x STD
Basically, stocks that increase portfolio complexity see their standard deviations corrected (increased) by a complexity-based factor. The (ranked) result is illustrated below.
The bar chart below shows the complexity-induced corrections of standard deviations:
For example, the standard deviation of the biggest complexity contributor – stock 7 – which is 3.81, is incremented by 17.1% (its complexity footprint) to yield a value of 4.46. The norm of the original covariance matrix is 58.21, while the ‘corrected’ covariance matrix has a norm of 68.15.
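
The adjustment itself is straightforward to code. The minimal sketch below reproduces the post’s arithmetic for stock 7 (3.81 × 1.171 ≈ 4.46) and adds one possible way – our assumption, not something specified in the post – to propagate the adjusted standard deviations into a ‘complexity-corrected’ covariance matrix whose norm can be compared with the original, as the text does:

import numpy as np

def adjusted_std(std, footprint):
    # Adjusted STD = (1 + Complexity contribution) x STD, as proposed above.
    return (1.0 + footprint) * std

# The post's example: stock 7 has a standard deviation of 3.81 and a 17.1% footprint.
print(round(adjusted_std(3.81, 0.171), 2))            # -> 4.46

def corrected_covariance(cov, footprints):
    # Rebuild the covariance matrix from the adjusted standard deviations,
    # leaving the correlations unchanged (our assumption).
    std = np.sqrt(np.diag(cov))
    corr = cov / np.outer(std, std)
    std_adj = (1.0 + np.asarray(footprints)) * std
    return corr * np.outer(std_adj, std_adj)

# Comparing np.linalg.norm(cov) with np.linalg.norm(corrected_covariance(cov, footprints))
# gives the kind of before/after figure quoted in the text (58.21 vs 68.15).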

Portfolio complexity, a factor that is typically neglected when analyzing or designing a portfolio (a covariance matrix is a poor substitute), ‘increases’ standard deviations, eloquently illustrating the concept of complexity-induced risk.

Classical statistics may produce overly optimistic results if complexity is neglected. Every system has some degree of complexity, and it is invisible to conventional analytics. In reality, there is often more risk than one may think.

Grindr under fire for sharing HIV status of users

Same-sex dating app Grindr has said it will stop sharing users’ HIV status after it was revealed that the details were shared with third-party analytics companies.

Published on InfoSecurity | By Dan Raywood

According to initial research by Antoine Pultier, a researcher at SINTEF, and verified by Buzzfeed News, Grindr shared HIV status, along with users’ GPS data, sexuality, relationship status, ethnicity, phone ID and email, with Apptimize and Localytics, two companies that help optimize apps. Some of this information, unlike the HIV data, was sometimes shared in plain text.

Buzzfeed News reported that under the app’s “HIV status” category, users can choose from a variety of statuses, which include whether the user is positive, positive and on HIV treatment, negative, or negative and on PrEP, the once-daily pill shown to effectively prevent contracting HIV.

In a statement, Grindr CTO Scott Chen said that as a company that serves the LGBTQ community “we understand the sensitivities around HIV status disclosure” and clarified that Grindr “has never, nor will we ever sell personally identifiable user information – especially information regarding HIV status or last test date – to third parties or advertisers.”

Chen clarified that it does work with highly-regarded vendors to test and optimize how it rolls out the platform, and these vendors are under strict contractual terms that provide for the highest level of confidentiality, data security and user privacy.

Read entire article Grindr Under Fire for Sharing HIV Status of Users | InfoSecurity

How Coca-Cola Hellenic and Credit Suisse are optimising internal audit using data analytics

The impact of data analytics on businesses across multiple sectors continues to grow, as innovative technology and workforce skills develop to ensure organisations are making the most of the information they hold.

Internal audit is no exception, and professionals are increasingly expected to leverage the latest advanced analytics techniques to deliver greater efficiency and effectiveness at lower costs.

Data analytics and internal audit in 2017

The latest PwC State of the Internal Audit Profession report, published in March, showed that 44% of businesses in which internal audit’s role is crucial to anticipating disruption have increased investment in analytics.

Meanwhile, a new report from the Chartered Institute of Internal Auditors (IIA) has identified Coca-Cola Hellenic and Credit Suisse as leading the charge to strengthen auditing performance through data analytics platforms.

Let’s take a closer look at how these industry giants are utilising advanced analytics.

Coca-Cola Hellenic reaps the benefits of ERP

Coca-Cola Hellenic is the primary bottler for the Coca-Cola brand, producing 50 billion servings across operations in 28 countries worldwide. The organisation has a sophisticated enterprise resource planning (ERP) system with massive quantities of data flowing through it.

Richard Brasher, corporate audit director at Coca-Cola Hellenic, said data analytics are incorporated from the very beginning of the auditing process, from planning through to completion. “The use of data analytics helps external auditors to rely on the work already done by internal audit and hence reduces duplication of time and effort,” he explained.

The company can now test 100% of the data rather than relying on samples, thus strengthening its assurance processes. To ensure auditors are well versed in the technical aspects of the role, the organisation encourages staff to spend six months on secondment with the data analytics team.

Analytics team takes Credit for Suisse audit success

Global private bank Credit Suisse considers itself at the advanced end of the data analytics maturity path, with the organisation incorporating the technology organically over the years alongside other innovations.

“Data analytics is helping the organisation to identify business areas with high-control risks due to anomalous, non-conforming events, and is facilitating the continuous monitoring of the risks,” said chief auditor of regulatory and people risks Mark Starbuck.

Credit Suisse is also focusing more effort on continuous risk monitoring in 2017, with increased emphasis on planning, fieldwork and reporting.

Mr Starbuck noted that the data analytics team has been successful due to sponsorship and buy-in from internal audit leaders. There continue to be strong advocates for data-driven methodologies, with training and awareness programmes helping to deliver the necessary skills to perform analytics and use core applications.

“The ideal data analytics auditor has a blend of core analytics skillsets, business functional experience and a good understanding of risk,” he explained.


Finding the right talent

The IIA case studies show the benefits of data analytics within the internal audit function. These include:

  • Increased efficiency via the re-use of scripts for periodic audits
  • Improved effectiveness through whole-population testing
  • Enhanced assurance
  • Time and cost savings
  • Greater focus on strategic risks
  • Broadened audit coverage

Nevertheless, finding people with the right mix of skills remains a challenge for many businesses hoping to maximize data analytics use across the audit function. With organisations placing more importance on data analytics within internal audit, we expect this trend to continue for the rest of 2017 and into the years beyond.

Source: barclaysimpson

Read entire post How Coca-Cola Hellenic and Credit Suisse are optimising internal audit using data analytics | barclaysimpson