Ranking information, “winner take all” and the Heisenberg Uncertainty Principle

Does ranking information (who likes what, top-10 rankings) produce “winner take all” situations?

There is an old rule in strategy which is that there are no rules. While it is always nice to try and create underlying theories or rules of how the world works, science and mathematics are still too young to describe something this complex. Trying to apply rules like the one in the title is probably some form of confirmation bias.

Having said that, there is evidence that this effect can happen, not as a rule to be followed, but as something that does occur. How could this happen?

Ranking information does allow us, as people, to see what other people are doing. That’s always interesting: seeing what others are doing, looking at, or thinking about. And by looking at what other people are looking at, there is a natural increase in “viewership” of that item. So the top-10 ranking, always entertaining of course, does create healthy follow-on “views.”

But “views” does not mean involvement or agreement. In other words, while ranking information and today’s internet make it easy to see what others are seeing, our act of observation actually contributes to the appearance of popularity. That popularity appears to drive others toward “winner take all.”

“Winner take all” can take many forms. Winner take all can mean that once the pile-on starts, a web site becomes very popular. This is often confused with the network effect. Winner take all can also mean that a song becomes popular because it’s played a lot, so more people like it, so it’s played even more, and so on. Of course this does not describe how the song became popular to begin with; perhaps people actually liked the song and it had favorable corporate support, and there is nothing wrong with that.

And this leads us to the uncertainty principle. The act of observation disturbs the thing we are trying to measure. The more scientific formulation has to do with the limits of simultaneously measuring position and momentum at the atomic level, but we’ll gloss over that more formal definition.

The act of observing a top-10 list on the internet causes the top-10 list to become more popular. The act of listening to a song, broadcast through internet communication channels, changes the popularity of the song. So it’s clear that, given internet technology, there is a potential feedback loop that resembles the uncertainty principle.
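
To see how a feedback loop like this can produce a “winner take all” outcome, here is a toy “rich get richer” simulation (my own illustration, not a claim about any real ranking system): each new viewer picks an item with probability proportional to its current view count, so small early leads get amplified.

```python
# Toy preferential-attachment simulation: observation feeds popularity,
# which feeds further observation. Purely illustrative.
import random

def simulate(items=10, viewers=10_000, seed=42):
    random.seed(seed)
    views = [1] * items  # seed every item with one view so all start equal
    for _ in range(viewers):
        pick = random.choices(range(items), weights=views)[0]
        views[pick] += 1
    return sorted(views, reverse=True)

print(simulate())  # typically a handful of items end up with most of the views
```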

Alright, that makes sense. But the world is probably a little more complex than this simple thought.

While the act of observing could make something more popular, that does not mean that the act of observing turns something unpopular into something popular. In other words, people are not fools. They like what they like. If something is on the top-10 list or comes from a band that has good corporate airtime support, that does not mean that it is a bad song or a bad list. It does not mean that people would not like it if it did not play in that venue.

The internet is a powerful tool to help people quickly find what they want. The cost of finding a new web site or a new source of top-10 lists (or whatever) is fairly low, so there is no real inherent lock-in. Given the internet’s reach, the ability to rapidly escalate and de-escalate from “winner take all” to “has been” is fairly robust. It’s quite possible, in the spirit of making business rules for fun, that the internet produces a steady stream of “winner take all” events, and if there is a steady stream of them, then they are really just average events after all (regression to the mean). So with my fancy new rule, there are no “winner take all” events any more, just a large number of rapidly escalating and de-escalating average events; the frequency has simply been bumped up.

That’s okay as well.

Yeah! Big data to the rescue…wait a second…

I use Google Alerts to give me a sense of flow around articles. One of the filters is around bigdata and healthcare. The flow around this topic has increased. I see many stories, $100m investment here, $20m investment there, and so on. That’s a lot of dollars! A lot of it is government-backed, of course, and put on hyperdrive by a recent McKinsey report on bigdata and potential healthcare savings.

Breathlessly, drawing on my own bigdata and scientific background, I then ask, “well, when should we see the effects of these bigdata exercises?” The answer is not so clear. Just as I explained in my last post, there has not been a dramatic lack of analytics applied to healthcare. However, I do think we can do more with analytics because clearly there are many ideas and insights to draw on.

But it will still take a few years before insight turns into action. And it will take longer for any significant savings and benefits to be realized. Why?

Unless we solve the payment and incentives issues (also blogged on before), what’s the incentive for large-scale change?

Bryan Lawrence, in today’s Washington Post opinion article “The fine-print warning on Medicare,” put it well:

  • No real productivity growth in many areas of healthcare
  • Benefits of reimbursement will never be given back to the government (we see this every year with Medicare payments; every year Congress avoids implementing payment changes, literally). This is the age-old investment question: “who gets to cash the check?”
  • Lower prices usually lead to providers increasing volumes
  • Congress will keep changing the laws anyway, you’ll never catch up.

You could peg this as cynical thinking, but it’s this type of back-of-the-envelope thinking that calls out issues that are hard to ignore and hard to overcome quickly.

So while bigdata could give us great insights, implementing and acting on those insights (the change management problem) is larger and more complex. It’s like the Artificial Intelligence (AI) claims back in the ’80s and ’90s that said we could really use analytics to get better answers. After two decades, the crickets were chirping. The value realization is better now, and it is clear, at least to me, that more people performing analytics can make a difference. But the real policy world is much more difficult to navigate.

There are no easy answers for accelerating value capture from bigdata, although everything from nationalization to forced integrated capitated models (like Kaiser) may be the only way to structurally change the system. It’s clear that competitive forces have been inhibited in the US and that there are externalities driving the dynamics of the healthcare system.

And the real change that’s needed is in you and me. We need to get healthier. And for that, we need to increase our personal commitment to better health as well as change the government policies and arcane food and commerce laws that lead to bad diets.

Tempering our expectations for bigdata in healthcare

Expectations around bigdata’s impact on healthcare are leaping ahead of reality, and some good thoughts are being expressed. However, healthcare has already had significant amounts of analytics applied to it. The issue is not that larger sets of data are critical, but that sharing and integrating the data are the critical parts of better analysis. Bigdata does not necessarily solve these problems, although the bigdata fever may help smash through these barriers. Over 15 Blues and most of the major nationals have already purchased data warehouse appliances and advanced systems to speed up analysis, so it’s not necessarily performance or scalability that is constraining advances built on data-driven approaches. And just using unstructured text in analytics will not create a leapfrog in better outcomes from data.

We really need to think integration and access. More people performing analysis in clever ways will make a difference. And this means more people than just the few who can access detailed healthcare data, most of which is proprietary and will stay proprietary to the companies that collect it. Privacy and other issues prevent widespread sharing of the granular data needed to truly perform analysis and get great results. It’s a journey.

This makes the PCORI announcements about yet another national data infrastructure (based on a distributed data model concept) and Obama’s directive to get more Medicare data into the world for innovation (see the 2013 Health Datapalooza that just concluded in Washington, DC) that much more interesting. PCORI is really building a closed network of detailed data using a common data model and distributed analysis, while CMS is being pushed to make datasets more available to entrepreneurs and innovators; a bit of the opposite in terms of “access.”

There are innovative ideas out there; in fact, there is no end to them. Bigdata is actually a set of fairly old ideas that are suddenly becoming economical to implement. And there is a serious lack of useful datasets that are widely available. The CMS datasets are often heavily massaged prior to release in order to conform to HIPAA rules; despite what you think you are getting, you essentially cannot provide detailed data at an individual level, and just stripping the name and address off a claim form is not sufficient to satisfy HIPAA rules.

So it’s clear that to get great results, you probably have to follow the PCORI model, but then analysis is really restricted to the few people who can access those datasets.

That’s not to say bigdata does not have a lot to offer if patients are willing to opt in to programs that get their healthcare data out there. Companies using bigdata technology on their proprietary datasets can make a difference, and there are many useful ideas to economically go after using bigdata, many of which are fairly obvious and easy to prioritize. But there is not going to suddenly be a large community of people with new access to the granular data that could be, and often is, the source of innovation. Let’s face it: many healthcare companies have had advanced analytics and effectively no real budget constraints for many years, and will continue to do so. So the reason that analytics have not been created and deployed more than today is unrelated to technology.

If bigdata hype can help executives get moving and actually innovate (it’s difficult for executives to innovate versus just react in healthcare), then that’s a good thing, and getting momentum will most likely be the largest stimulus to innovation overall. That’s why change management is key when using analytics for healthcare.

Anti-Money Laundering (AML) and Combating Terrorist Funding (CTF) analytics review

In my last blog I reviewed some recent patents in the AML/CTF space. They describe what I consider some very rudimentary analytics workflows: fairly simple scoring and weighting using various a-priori measures. Why are such simple approaches patentable? To give you a sense of why I would ask this question, there was a great trumpeting of news around the closing of a $6b money laundering operation at Liberty Reserve. But money laundering (including terrorism funding) is estimated at $500 billion to $1 trillion per year. That’s a lot of badness that needs to be stopped. Hopefully smarter is better.

There are predictive analytical solutions to various parts of the AML problem, and there is a movement away from rules-only systems (rules are here to stay, however, since policies must still be applied to predictive results). However, the use of predictive analytics is slowed because AML analytics largely boils down to an unsupervised learning problem. Real-world test cases are hard to find (or create!) and the data is exceptionally noisy and incomplete. The short message is that it’s a really hard problem to solve, and sometimes simpler approaches are just easier to make work. However, in this note, I’ll describe the issues a bit more and talk about where more advanced analytics have come into play. Oh, and do not forget: on the other side of the law, criminals are actively and cleverly trying to hide their activity, and they know how banks operate.
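
To illustrate the unsupervised framing, here is a toy anomaly-scoring sketch (the features and data are synthetic and purely illustrative; real AML inputs are far noisier and spread across siloed systems) using an isolation forest to surface transactions that look unusual without any labeled laundering cases:

```python
# Toy unsupervised anomaly scoring over made-up transaction features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features: amount, transactions in past 7 days, distinct counterparties
normal = rng.normal(loc=[100, 5, 3], scale=[30, 2, 1], size=(5000, 3))
odd = rng.normal(loc=[9500, 40, 25], scale=[500, 5, 5], size=(20, 3))
X = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)   # lower = more anomalous
flags = model.predict(X)              # -1 marks candidates for analyst review
print("flagged for review:", int((flags == -1).sum()))
```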

The use of algorithms for AML analytics is advancing. Since AML analytics can occur at two different levels, the network level and the individual level, it’s pretty clear that graph theory and other techniques that operate on the data in various ways are applicable. AML analytics is not simply about a prediction that a particular transaction, legal entity, or group of LEs is conducting money laundering operations. It’s best to view AML analytics as a collection of techniques, from probabilistic matching to graph theory to predictive analytics, combined to identify suspicious transactions or LEs.
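
To give a feel for the network-level angle, here is a small, hedged link-analysis sketch (hypothetical accounts, amounts, and thresholds, not a production detection rule) that flags entities receiving many small transfers from distinct senders:

```python
# Toy fan-in pattern search on a directed payment graph.
import networkx as nx

payments = [  # (sender, receiver, amount) -- hypothetical data
    ("acct_A", "acct_X", 900), ("acct_B", "acct_X", 950),
    ("acct_C", "acct_X", 875), ("acct_D", "acct_Y", 50_000),
]

G = nx.DiGraph()
for sender, receiver, amount in payments:
    G.add_edge(sender, receiver, amount=amount)

for node in G.nodes:
    senders = list(G.predecessors(node))
    amounts = [G[s][node]["amount"] for s in senders]
    # Arbitrary illustrative thresholds: many distinct senders, all small amounts
    if len(senders) >= 3 and max(amounts) < 1_000:
        print("review:", node, "fan-in from", senders)
```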

If the state of AML analytics is still maturing, what is the current state? Rather simple, actually. Previous systems, including home-grown systems, focused on the case management and reporting aspects (that’s reporting as in reporting on the data to help an analyst analyze flows, as well as regulatory reporting). AML analytics was also typically based on sampling!

Today, bigdata can help avoid sampling issues. But current investments are focused on the data management aspects, because poor data management capabilities have greatly exacerbated the cost of implementing AML solutions. FS institutions desperately need to reduce these costs and comply with what will be an ever-changing area of regulation. “First things first” seems to be the general thrust of AML investments.

Since AML analysis is based on legal entities (people and companies) as well as products, it’s pretty clear that the unique identification of LEs and the hierarchies/taxonomies/classifications of financial instruments are important data management capabilities. Results from AML analytics can be greatly degraded if the core data is noisy. When you combine the noisy data problem with today’s reality of highly siloed data systems inside banks and FS institutions, the scope of trying to implement AML analytics is quite daunting. Of course, start simple and grow it.
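
As a toy illustration of the LE identification problem (standard-library only; the names are made up, and real matching also uses addresses, tax IDs, dates of birth, and far better algorithms), here is a rough probabilistic name-matching sketch across two hypothetical silos:

```python
# Toy probabilistic matching of legal entity names across silos.
from difflib import SequenceMatcher

silo_a = ["ACME Trading LLC", "John Q. Public", "Globex Corporation"]
silo_b = ["Acme Trading, L.L.C.", "Jon Q Public", "Initech Ltd"]

def normalize(name: str) -> str:
    # Lowercase and strip punctuation/whitespace before comparing
    return "".join(ch for ch in name.lower() if ch.isalnum())

for a in silo_a:
    for b in silo_b:
        score = SequenceMatcher(None, normalize(a), normalize(b)).ratio()
        if score > 0.8:  # illustrative threshold
            print(f"possible match ({score:.2f}): {a!r} <-> {b!r}")
```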

I mentioned above that there are not a lot of identifiable cases for training algorithms. While it is possible to flag some transactions and confirm them, companies must file Suspicious Activity Reports (SARs) with the government, and unfortunately the government does not provide a list of “identified” data back. So it is difficult to formulate a solution using supervised learning approaches. That’s why it is also important to attack the problem from multiple analytical approaches: no one method dominates, and you need multiple angles of attack to help tune your false positive rates and manage your workload.

When we look at the underlying data, it’s important to look at not only the data but also the business rules currently in use (or proposed). The business rules help identify how the data is to be used per the policies set by the Compliance Officer. The rules also help orient you on the objectives of the AML program at a specific institution. Since not all institutions transact in all types of financial products, the “objectives” of an AML system can be very different, and since the objectives are different, the set of analytics used is also different. For example, smaller companies may wish to use highly iterative what-if scenario analysis to refine their policies and false positive rates by adjusting parameters and thresholds (which feels very univariate). Larger banks need more sophisticated analysis based on more advanced techniques (very multivariate).
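
As a hedged sketch of that univariate what-if style (the scores and labels below are synthetic, since confirmed cases are scarce in practice), here is a simple threshold sweep showing how alert volume and false positives move together:

```python
# Toy what-if threshold sweep on a single alert score.
import numpy as np

rng = np.random.default_rng(1)
scores = np.concatenate([rng.beta(2, 8, 9_900), rng.beta(8, 2, 100)])  # mostly benign
labels = np.concatenate([np.zeros(9_900), np.ones(100)])               # 1 = confirmed bad

for threshold in (0.5, 0.6, 0.7, 0.8):
    alerts = scores >= threshold
    false_pos = int((alerts & (labels == 0)).sum())
    print(f"threshold={threshold:.1f} alerts={int(alerts.sum())} false_positives={false_pos}")
```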

We’ve mentioned rules (a-priori knowledge, etc.), predictive/data mining models (of all kinds, since you can test deviations from peer groups and predicted-versus-actual patterns using data mining methods), and graph theory (link analysis). We’ve also mentioned master data management for LEs (don’t forget identity theft) and products, as well as taxonomies, classifications, and ontologies. But we also cannot forget time series analysis for analyzing sequential events. That’s a good bag of data mining tricks to draw from, and the list is much longer. I am often reminded of a really great statistics paper, Bump Hunting in High-Dimensional Data by Jerome Friedman and Nick Fisher, because that’s conceptually what we are really doing. Naturally, criminals wish to hide their bumps and make their transactions look like normal data.

On the data side, we have mentioned a variety of data types. The list below is a good first cut, but you also need to recognize that synthesized data, such as aggregations (both time-based aggregations and LE-based aggregations such as transaction->account->person LE->group LE), is also important for the types of analytics mentioned above (a small rollup sketch follows the list):

  • LE data (Know Your Customer – KYC)
  • General Ledger
  • Detailed Transaction data
  • Product Data
  • External sources: watch lists, passport lists, identity lists
  • Supplemental: Reference data, classifications, hierarchies, etc.
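
To make the aggregation point concrete, here is a minimal rollup sketch (my own illustration with made-up fields and values, not drawn from any specific AML product) that aggregates raw transactions up to the account and person level and over time:

```python
# Toy time-based and LE-based rollups of raw transactions.
import pandas as pd

tx = pd.DataFrame({
    "person": ["p1", "p1", "p1", "p2"],
    "account": ["a1", "a1", "a2", "a3"],
    "amount": [900.0, 950.0, 875.0, 120.0],
    "date": pd.to_datetime(["2013-06-01", "2013-06-02", "2013-06-03", "2013-06-02"]),
})

by_account = tx.groupby("account")["amount"].agg(["count", "sum"])   # account rollup
by_person = tx.groupby("person")["amount"].agg(["count", "sum"])     # person LE rollup
weekly = tx.set_index("date").groupby("person")["amount"].resample("W").sum()  # time rollup
print(by_account, by_person, weekly, sep="\n\n")
```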

Clearly, since there are regulatory requirements around SARs (suspicious activity), CTRs (currency transactions) and KYC, it is important that data quality enhancements first focus on those areas.

Anti-Money Laundering patent review

I was recently reviewing some anti-money laundering (AML) patents to see if any had been published recently (published does not mean granted).

Here are a few links to some patents, some granted, some applied for:

All of the patents describe a general purpose system of calculating a risk score. The risk score is based on several factors.

In AML, the key data include:

  • A legal entity (name, location, type)
  • A “location” (typically a country) that determines the set of rules and “data lists” to be applied. This could be the LE’s country or the financial instrument’s country, but generally it embodies a jurisdictional area that applies to the AML effort. A “data list” from a country or location is the list of legal entities that are being watched or have been determined to engage in money laundering operations. So we have a mix of suspected and validated data.
  • A financial instrument / product and its set of attributes such as transactions, amounts, etc.
  • A jurisdiction: the risk assessor’s set of rules. Typically these are rules created by a company or a line of business. These rules help identify an event and should be relatively consistent across an entire enterprise, but they also vary based on the set of locations where a company operates. A bank’s Compliance Officer is especially concerned about this area, as it also contains policies. The policies represent who needs to do what in which situation.

I have not tried to capture the nature of time in the above list, since all of these components can change over time. Likewise, I did not try to capture all of the functions an AML system must perform, such as regulatory reporting. We have also ignored whether all of these components are used in batch or real time to perform a function, or whether rules engines and workflow are powering some incredibly wonderful AML “cockpit” for an AML analyst at a company.

We assume that the ultimate goal of an AML system is to identify LEs potentially engaged in money laundering activities. I write “potentially” because you need to report “suspicious” activities to the Financial Crimes Enforcement Network (FinCEN). We can never know for certain whether all of the data is accurate or whether an individual transaction is actually fraudulent. We can, however, use rules, either a-priori or predictive, to identify potential AML events.

The patents describe a method of combining information, using a “computer system,” to calculate an AML risk score. The higher the score, the more probable that an LE-FinancialProduct combination is being used for money laundering. Inherently, this is probabilistic. It’s also no different from any other risk scoring system: you have a bunch of inputs, there is a formula or a predictive model, and there is an output score. If something scores above a threshold, you take action, such as reporting it to the government. Just as a note, there are also strict guidelines about what needs to be reported to the government, as well as areas where there is latitude.
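
To make that generic workflow concrete, here is a minimal sketch of a weighted scoring function with a threshold (the factor names, weights, and threshold are hypothetical illustrations, not the formula from any of the patents):

```python
# Minimal sketch: weighted inputs -> score -> threshold -> action.
# Weights and factor names are made up; real rules come from the compliance program.
WEIGHTS = {"country_risk": 0.4, "product_risk": 0.2, "velocity_risk": 0.3, "watchlist_hit": 0.1}

def risk_score(factors: dict) -> float:
    # Each factor is assumed to be pre-scaled to [0, 1]
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

case = {"country_risk": 0.9, "product_risk": 0.3, "velocity_risk": 0.7, "watchlist_hit": 1.0}
score = risk_score(case)
if score >= 0.6:   # illustrative threshold set by policy
    print(f"score={score:.2f}: escalate for analyst review / possible SAR")
```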

The trick in such a system is to minimize false positives: LE-FinancialProduct combinations identified as money laundering that in reality are not. False positives waste time, so the system tries to create the best possible discrimination.

So now look at the patents using the background I just laid out. They are fairly broad; they describe this basic analysis workflow. It’s the same workflow, using the same concepts, as credit scoring for FICO scores, credit scoring for many types of loans, or marketing scoring for lifetime value or next-logical-product purchasing. In other words, the approach is the same. Okay, these are like many existing patents out there, and my reaction is the same: I am incredulous that general patents are issued like they are.

If you look past whether patents are being granted for general concepts, I think it is useful to note that many of these came out around 2005-2006 or so which is a few years after many regulations changed with the Patriot Act and other changes in financial regulations.

So the key thought is yes, patents are being submitted in this area, but I think the relatively low number of patent applications reflects that the general workflow is, well, pretty general. Alright, the 2011 patent has some cool “graph/link analysis,” but that type of analysis is also a bit 1980s.

Note: I selected a few data concepts from the real-time AML risk scoring patent to give you a feel for the type of data used in AML around the transaction (a sketch of turning such fields into features follows the list):

  • transaction amount,
  • source of funds such as bank or credit cards,
  • channel used for loading funds such as POS or ATM,
  • velocity such as count and amount sent in the past x days,
  • location information such as number of pre-paid cards purchased from the same zip code, same country, same IP address within x hours,
  • external data sources (e.g., the Interpol list) or internal data sources
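
As a hedged illustration of how such fields might be turned into features (the field names, events, and window below are hypothetical, not taken from the patent), here is a small sketch computing a cards-per-zip-code velocity measure within a time window:

```python
# Toy velocity/location feature derivation from made-up prepaid-card load events.
from collections import defaultdict
from datetime import datetime, timedelta

loads = [  # (card_id, zip_code, amount, timestamp) -- hypothetical data
    ("card1", "20001", 450.0, datetime(2013, 6, 1, 10)),
    ("card2", "20001", 480.0, datetime(2013, 6, 1, 11)),
    ("card3", "20001", 470.0, datetime(2013, 6, 1, 12)),
]

window = timedelta(hours=24)
now = datetime(2013, 6, 1, 13)

by_zip = defaultdict(list)
for card_id, zip_code, amount, ts in loads:
    if now - ts <= window:
        by_zip[zip_code].append((card_id, amount))

for zip_code, events in by_zip.items():
    cards = {c for c, _ in events}
    total = sum(a for _, a in events)
    print(f"zip={zip_code} cards_in_window={len(cards)} amount_in_window={total:.0f}")
```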

Do customers want Social Customer Service? Yes, and they want more…

I receive Google Alerts on BigData in Healthcare as well as social customer service (these are aligned with my professional activities).

I received two alerts recently:

There are many more like this, although I admit the number of alerts I receive touting social customer service is much larger.

Here are my thoughts. Customers want to get a job done. They often do not care how it gets done, especially if it’s a negative or operational issue, for example, an item they purchased is not working or they need to fix a bad banking transaction quickly.

The customer’s job is “fix or solve the issue.” This is why FTR (First Time Resolution) is the top customer service metric that customers want to experience. So the question for each customer service episode, again excluding “rage” issues, is “how do I get this done as fast and efficiently as possible?”

I am not sure that customers care whether it’s tweets, Facebook pages, or the call center.

But today, social media channels, with their expectations of a fast response, often have a faster SLA for responding. Social channels cannot easily boost FTR, but they can get the customer engaged faster than, say, a phone channel with an IVR navigation time of at least 5 minutes.

So customers gravitate to those channels that get them going as fast as possible, and today, social channels have improved some aspects of customer service. So yes, social customer service is relevant so long as it remains responsive. I think the second article is trying to make a point, but the nuances matter here. It’s not that there is no proof customers want to use social customer service channels; it’s that they will use social customer service channels as long as companies are responsive in that channel.

And there are some interesting balancing forces that may sustain and grow channels such as Twitter and Facebook over the long run. Strategically, since there is an opportunity to engage customers directly in a public forum, and Twitter has shown itself to be very effective at doing this, companies need to capitalize on these channels. Hence being part of the mix and engaging are additional benefits of maintaining a social customer service channel presence.

I am not swayed by doe-eyed or save-the-world arguments when it comes to customer management. My perspective is that you need to invest in this channel and manage the balance with other investments.

If you need help finding the balance, give me a call.