Platform Scale

Sangeet Choudary has put out a book arguing that platforms, not pipes, are the new business model. The book is inspiring, so I recommend reading it. There are no new ideas in it, but they are packaged together very nicely. It’s very much another “explaining things” book, and for the lens it wants you to use, I think it does a good job.

The key thought behind the book is actually fairly simple:

Be a middleman. Reduce your costs as a middleman to gain share, and the best way to do that is to be digital. Shift cost and risk out to everyone else as much as possible. Allow companies to build on your platform. If you only make a small slice of money on every interaction, you need a lot of interactions, so don’t forget the “make it big” part.

That’s really about it. There are not a lot of examples with deep insight in the book, and it avoids most levels of strategic thinking entirely. The book also fails to connect what is going on today to the massive “platforms” built in the past few decades that are not fully digital like the examples reused in the book. The book spends most of its pages explaining that if you can reduce transaction costs and get scale, the world is your oyster. Of course, this is only one model of succeeding in business, and not always the most interesting or sustainable one.

But that’s OK. Go find your “unit,” reduce that friction and make a billion. It’s a good read.

Enjoy!

Why I Like Fishing – It’s Not What You Think

Yesterday, my family went on a fishing trip.

We keep a twenty-one foot, center console fishing boat over on the Eastern Shore just off the Chester River. The Chester feeds the Chesapeake Bay. The mouth of the Chester is about 1 mile north of the Bay Bridge.

There were four of us: my wife and I and our two sons. I bought sandwiches and some chips at the nearby Safeway, and we each had our own water jugs. We brought 10 fishing poles. Four are heavy duty, designed to catch larger fish deeper in the bay (around forty to fifty feet in the main channel). We had our planer boards with us to spread out the lines, but we used them only once.

This was October–the stripers had just started running. The stripers (aka Rockfish) become larger by November, but we were out early to see what we could catch. Most of the time we did the following:

  • Jet out to the middle of the channel.
  • Look for bird flocks on the water.
  • Jet the boat to the seagull flock, along with several other fishing boats.
  • Fish with individual poles using a variety of lures. My youngest son is an expert fisherman so he knew which lures to use for each situation.
  • Try not to hit the other boats.
  • Catch fish.
  • Release those that were too small.
  • Catch seagulls, by accident.
  • Untangle the seagulls, unharmed.
  • When the seagulls picked up and moved, following the fish, jet to the new spot.
  • So we, along with a lot of other fishermen, move from flock to flock, jetting around in the water, trying to catch legal-sized fish.

That’s it! We did that for half the day.

Our “charter” started late because I was late getting back from a Saturday meeting. We left on the boat around 1:30pm and came back right after sunset, around 6pm. As we returned to the Chester after sunset, we were not paying close attention to driving and almost hit a dock, but that’s another matter, one my youngest son can explain one day to his kids when discussing boat safety.

It was wonderful weather, not too cold. Skies were overcast, which kept it cooler, good for fishing of course. We had forgotten to fill the oil reservoir, so the engine’s oil light kept coming on. We had plenty of oil; the reservoir was just low, that’s all.

After the trip, we came back and had some delicious crab cakes at the house with my wife’s mom. The crab cakes were from the Bay Shore Steam Pot in Centreville. I think they are the best crab cakes on the Eastern Shore, and the shop is very close to where we keep our boats.

It was our older son’s eighteenth birthday. He had wanted to go fishing. The night before, we had gone to a jazz concert in Baltimore at the fabulously cool An die Musik: the Peter and Will Anderson Trio, with the Anderson twins (sax, clarinet and flute) and Alex Wintz (guitar). Fabulous concert. All of the chairs were oversized and full of padding, relics from a regal hotel no doubt. Front row seats. The jazz seemed to infuse the next day’s boating trip.

It seemed to me that fishing was about getting things done and working together, like jazz, versus pop music or old-style rock and roll, both of which have a different type of energy.

Overall, we caught around forty fish but only a few were keepers. Stripers need to be twenty inches to keep, and our largest was seventeen. No matter.

While you can still catch a fish on a simple fishing pole off the dock, the larger fish need to be found. You need the right gear, but it’s not excessive. You need to know some techniques to catch a lot of fish in order to find the few keepers. You need to work as a team, since steering, fishing and keeping your eyes open for the bird flocks is hard for a single person to do. My wife and I did less fishing than the kids, but we helped as much as we could, having been relegated to deck hands. My wife took a lot of pictures and I sneaked in a few. We were fortunate to grab some pictures below the Bay Bridge with the bridge framing our fishing activities.

As we headed back to the dock for the night, I thought this was the nicest family weekend in a long time. We all worked well together on a small boat and got things done. Everything seemed to come together and it felt good. My youngest son captained the boat and it was my older son’s birthday. In a crazy, fast world, we spent a little slice of time trying to catch a few fish, together. Perhaps the fish were really not the point.

That’s why I like fishing.

yes, yet another bigdata summary post…now it’s a party

Since I am a “recovering” data scientist, I thought that once in a while it would be good to deviate from my more management-consulting articles and eyeball the bigdata landscape to see if something interesting has happened.

What!?! It seems like you cannot read an article without encountering yet another treatise on bigdata or at the very least, descriptions of the “internet of things.”

That’s true, but if you look under the hood, the most important benefits of the bigdata revolution have really been on two fronts. First, recent bigdata technologies have decreased the cost of analytics, which makes analytics more easily available to smaller companies. Second, the bigdata bandwagon has increased awareness that analytics are needed to run the business. Large companies could long afford the investments in analytics, which made corporate size an important competitive attribute. The benefits from analytics should not lead to a blanket, unthinking endorsement of analytics. Not every business process, product or channel needs overwhelming analytics. You want, however, analytics to be part of the standard toolkit for managing the value chain and decision making.

The ability to process large amounts of data, beyond what mainframes could do, has been with us for years, twenty to thirty years. The algorithms developed decades ago are similar to the algorithms and processing schemes pushed in the bigdata world today. Teradata helped create the MPP database and SQL world. AbInitio (still available) and Torrent (with their Orchestrate product eventually sold to IBM) defined the pipeline-parallelism and data-parallelism data processing toolchain world. Many of the engineers at these two ETL companies came from Thinking Machines. The MPI API defined parallel processing for the scientific world (and before that PVM, and before that…).

All of these technologies were available decades ago. Mapreduce is really an old Lisp concept of map and fold, which was available in parallel from Thinking Machines even earlier. Today’s tools build on the paradigms that these companies created in the first pass of commercialization. As you would expect, those companies built on what had come before them. For example, parallel filesystems have been around for a long time and were present on day one in the processing tools mentioned above.
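For readers who have not seen the functional roots, here is a minimal, single-machine Python sketch of that map-and-fold shape; the word-count example and data are mine, not taken from any particular framework:

    from functools import reduce

    # Toy word count: the "map" step emits (word, 1) pairs, the "fold" (reduce)
    # step combines the counts. Distributed MapReduce partitions both steps
    # across machines, but the functional shape is the same.
    lines = ["to be or not to be", "to do is to be"]

    mapped = [(word, 1) for line in lines for word in line.split()]

    def fold_counts(acc, pair):
        word, count = pair
        acc[word] = acc.get(word, 0) + count
        return acc

    counts = reduce(fold_counts, mapped, {})
    print(counts)  # {'to': 4, 'be': 3, ...}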

Now that the hype around mapreduce is declining and its limitations are finally becoming widely understood, people recognize that mapreduce is just one of several parallel processing approaches. Freed from mapreduce-like thinking, bigdata toolchains can finally get down to business. The toolchain developers have realized that SQL query expressions are a good way to express computations, and SQL query capabilities are now solidly available in most bigdata environments. Technically, many of the bigdata tools provide the “manual” infrastructure to build the equivalent of SQL commands, that is, they provide the parsing, planning and distribution of queries to independent processing nodes.
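As a rough sketch of what that looks like in practice, here is a PySpark example (assuming a Spark installation; the table, column names and data are invented purely for illustration):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-on-bigdata").getOrCreate()

    # Hypothetical transactions data registered as a temporary view so that
    # the computation can be expressed as a plain SQL query. The engine
    # parses, plans and distributes the query across worker nodes.
    df = spark.createDataFrame(
        [("acct1", 120.0), ("acct2", 75.5), ("acct1", 30.0)],
        ["account", "amount"],
    )
    df.createOrReplaceTempView("transactions")

    totals = spark.sql(
        "SELECT account, SUM(amount) AS total FROM transactions GROUP BY account"
    )
    totals.show()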

I consider the current bigdata “spin” that started about 1-2 years ago healthy because it increased the value of other processing schemes such as streaming, real-time query interaction and graphs. To accommodate these processing approaches, the bigdata toolchains have changed significantly. Think SIMD, MIMD, SPMD and all the different variations.

I think the framework developers have realized that these other processing approaches require a general-purpose parallel execution engine, an engine that AbInitio and others have had for decades. You need to be able to execute programs using a variety of processing algorithms, where you think of the “nodes” as running different types of computations and not just a single mapreduce job. You need general-purpose pipeline and data parallelism.

We see this in the following open-source’ish projects:

  • Hadoop now has a real resource and job management subsystem that is a more general parallel job scheduling tool. It is now useful for more general parallel programming.
  • Apache Tez helps you build general jobs (for Hadoop).
  • Apache Flink builds pipeline- and data-parallel jobs. It’s also a general-purpose engine, e.g. streaming.
  • Apache Spark builds pipeline- and data-parallel jobs. It’s also a general-purpose engine, e.g. streaming.
  • Apache Cascading/Scalding builds pipeline- and data-parallel jobs, etc.
  • DataTorrent: streaming and more.
  • Storm: streaming.
  • Kafka: messaging (with persistence).
  • Scrunch: based on Apache Crunch, builds processing pipelines.
  • …many of the above are available as PaaS on AWS or Azure…

I skipped many others, of course. I am completely skipping some of the early SQL-ish systems such as Hive, and I have skipped visualization, which I’ll hit in another article. Some of these have been around for a few years in various stages of maturity. Most of them implement pipeline parallelism and data parallelism for creating general processing graphs, and some provide SQL support where that processing approach makes sense.

In addition to the underlying engines, what’s new? I think one very important element: usability. The tools are a heck of a lot easier to use now. Here’s why.

What made the early-stage (20-30 years ago) parallel processing tools easier to use was that their builders recognized, from their experience in the parallel world, that usability by programmers was key. While it is actually fairly easy to get inexpensive scientific and programming talent, programming parallel systems has always been hard. It needs to be easier.

New languages are always being created to help make parallel programming easier. Long ago, HPF and C*, among many others, were commercial variations of the same idea. Programmers today want to stay within their toolchains because switching toolchains to run a data workflow is hard work and time-consuming. Many of today’s bigdata tools allow multiple languages to be used: Java, Python, R, Scala, javascript and more. The raw mapreduce system was very difficult to program, so user-facing interfaces were provided, for example, Cascading. Usability is one of the reasons that SAS is so important to the industry. It is also why Microsoft’s Dryad research project was popular. Despite SAS’s quirks, it’s a lot easier to use than many other environments and it’s more accessible to the users who need to create the analytics.

In the original toolsets from the vendors mentioned earlier in this article, you would program in C++ or a special-purpose data management language. That worked fine for the companies that could afford the talent to master that model. In contrast, today you can use languages like Python or Scala to run the workflows and use the language itself to express the computations. The language is expressive enough that you are not using the programming environment as a “library” you make calls into; the language constructs are translated into parallel constructs transparently. The newer languages, like the Lisp of yore, are more functionally oriented, and functional programming languages come with a variety of capabilities that make this possible. This was the prize that HPF and C* were trying to win. Specialized languages are still being developed to specify parallelism and data locality without being “embedded” in other modern languages, and they too can make it easier to use the new bigdata capabilities.
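Here is a minimal sketch of what “the language itself expresses the computation” means, again assuming a PySpark environment; the dataset, the per-unit price and the pipeline itself are made up for illustration:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("embedded-parallelism").getOrCreate()
    sc = spark.sparkContext

    # Ordinary Python functions and lambdas are shipped to the cluster and
    # applied in parallel; there is no separate "parallel library" dialect.
    orders = sc.parallelize([("widget", 3), ("gadget", 1), ("widget", 5)])

    totals = (
        orders
        .filter(lambda kv: kv[1] > 0)          # data-parallel filter
        .mapValues(lambda qty: qty * 2.50)     # data-parallel map (assumed unit price)
        .reduceByKey(lambda a, b: a + b)       # shuffle plus parallel fold
    )
    print(totals.collect())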

The runtimes of these embedded parallel capabilities are still fairly immature in a variety of ways. Even so, using embedded expressions, data scientists can use familiar toolchains, languages and other components to create their analytical workflows more easily. And since the new runtimes allow more than just mapreduce, streaming, machine learning and other data mining approaches suddenly become much more accessible at large scale, in more ways than just using tools like R.
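As one small example of a non-mapreduce processing style becoming accessible from the same toolchain, here is the canonical Spark Structured Streaming word count (assuming a local Spark setup and a text stream on localhost port 9999; nothing here is specific to any production system):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import explode, split

    spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

    # Read a text stream from a local socket (e.g. "nc -lk 9999"), split lines
    # into words and keep a running count. The same DataFrame operations used
    # for batch jobs drive the streaming computation.
    lines = (
        spark.readStream.format("socket")
        .option("host", "localhost")
        .option("port", 9999)
        .load()
    )
    words = lines.select(explode(split(lines.value, " ")).alias("word"))
    counts = words.groupBy("word").count()

    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()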

This is actually extremely important. Today’s compute infrastructure should not be built with rigid assumptions about tools, but be “floatable” to new environments where the pace of innovation is strong. New execution engines are being deployed at a fantastic rate and you want to be able to use them to obtain processing advantages. You can only do that if you are using well known tools and technologies and if you have engineered your data (through data governance) to be portable to these environments that often live in the cloud. It is through this approach that you can obtain flexibility.

For fuller examples, look up the web pages for Storm and Flink. Since SQL-like query engines are now available in these environments, that also contributes to the user-friendliness.

Three critical elements are now in play: cost effectiveness, usability and generality.

Now it’s a party.

Do sanctions work? Not sure, but they will keep getting more complex

After Russia and Ukraine ran into some issues a few months back, the US gathered international support and imposed sanctions.

Most people think that sanctions sound like a good idea. But do they work?

Whether sanctions work is a deeply controversial topic. You can view sanctions through many different lenses. I will not be able to answer that question in this blog. It is interesting to note that the sanctions against Russia over the Ukraine situation are some of the most complex in history. I think the trend will continue. Here’s why.

Previously, sanctions were imposed on an entire country that was doing something the sanctioning entity did not want to happen. Country-wide sanctions are fairly easy to understand and implement, for example, the sanctions against Iran over nuclear enrichment. Sanctions in the past could be leveled at an entire country or a category of trade, e.g. steel or high-performance computers. But they have to be balanced: in the case of Russia and Ukraine, the EU obtains significant amounts of energy from Russia, so sanctions against the energy sector would hurt both the EU and Russia.

Sanctions today often go after individuals. The central idea is to target individuals who have money at stake. OFAC publishes a list of sanctioned individuals and updates it regularly. You are not allowed to do business with anyone on the list, that is, you should not conduct financial transactions of any type with that individual (or company).

The new Russian sanctions target certain individuals and a few Russian banks (not all of them), and they still allow certain forms of transactions. For example, you cannot transact in loans or debentures with a maturity longer than 90 days, or in new issues. Instead of blanket sanctions, it’s a combination of attributes that determines whether a financial transaction can be made.
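To make the “combination of attributes” idea concrete, here is a toy Python sketch; the party lists, instrument types and the way the 90-day rule is encoded are simplified stand-ins of my own, not a statement of the actual regulations:

    # Toy sketch of attribute-based screening, not the actual OFAC rules:
    # the party names and thresholds below are invented for illustration.
    BLOCKED_PARTIES = {"Person A", "Shell Co B"}   # blanket "no business" list
    SECTORAL_BANKS = {"Bank X", "Bank Y"}          # only some transactions barred

    def transaction_permitted(counterparty, instrument, maturity_days, is_new_issue):
        if counterparty in BLOCKED_PARTIES:
            return False                           # blanket prohibition
        if counterparty in SECTORAL_BANKS:
            # Combination of attributes decides: long-dated debt or new issues
            # are barred, short-dated routine transactions are still allowed.
            if instrument in ("loan", "debenture") and (maturity_days > 90 or is_new_issue):
                return False
        return True

    print(transaction_permitted("Bank X", "loan", 180, False))  # False
    print(transaction_permitted("Bank X", "loan", 30, False))   # True
    print(transaction_permitted("Bank Z", "loan", 180, False))  # True (not covered)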

Why are the Russian sanctions not a blanket “no business” set of sanctions?

By carefully targeting (think targeted marketing) the influencers of national policy, the sanctions hurt the average citizen a bit less, perhaps biting them, but not so much that the average citizen turns against the sanctioning entity. Biting the influencers and others at the top is part of a newer model of making individuals feel the pain. This approach is also being used in the anti-money laundering (AML) and regulatory space in the US to drive change in the financial services industry, e.g. holding a chief compliance officer accountable if a bad AML situation develops.

So, given the philosophical change as well as the new information-based tools that allow governments to be more targeted, sanctions will keep getting more complex.

Oso Mudslides and BigData

There was much ado in the news recently about google’s bad bigdata flu forecasts. google had tried to forecast flu rates in the US based on search data. That is a hard thing to forecast well, but doing better would have public benefits by giving public officials and others information to identify proactive actions.

Let’s also think about other places where bigdata, in a non-corporate, non-figure-out-what-customers-will-buy-next way, could also help.

Let’s think about Oso, Washington (the Oso landslide area on google maps).

Given my background in geophysics (and a bit of geology), you can look at Oso, Washington and think…yeah…that was a candidate for a mudslide. Using google earth, it’s easy to look at the pictures and see the line in the forest where the earth has given way over the years. It looks like the geology of the area is mostly sand, and it has been mentioned that it is glacier related. All this makes sense.

We also know that homeowner’s insurance tries to estimate the risk of a policy before it’s issued, and it’s safe to assume that the policies either did not cover mudslides or catastrophes of this nature for exactly this reason.

All of this is good hindsight. How do we do better?

It’s pretty clear from the aerial photography that the land across the river was ripe for a slide. The thick sandy line, the sparse vegetation and other visual aspects from google earth/maps show that detail. It’s a classic geological situation. I’ll also bet the lithology of the area is sand, a lot of sand, and more sand, possibly on top of hard rock at the base.

So let’s propose that bigdata should help give homeowners a risk assessment of their house, which they can monitor over time and use to evaluate the potential devastation that could come from a future house purchase. Insurance costs alone should not prevent homeowners from assessing their risks. Even “alerts” from local government officials sometimes fall on deaf ears.

Here’s the setup:

  • Use google earth maps to interpret the images along rivers, lakes and ocean fronts
  • Use geological studies. It’s little known that universities and the government have conducted extensive studies in most areas of the US, and we could, in theory, make that information more accessible and usable
  • Use aerial photography analysis to evaluate vegetation density and surface features
  • Use land data to understand the terrain e.g. gradients and funnels
  • Align the data with fault lines, historical analysis of events and other factors.
  • Calculate risk scores for each home or identify homes in an area of heightened risk.

Do this and repeat monthly for every home in the US at risk and create a report for homeowners to read.
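To make the idea concrete, here is a toy Python sketch of how such a per-home score might be assembled from the layers above; the feature names and weights are purely illustrative assumptions, not a validated model:

    # Toy sketch of fusing the data layers above into a single per-home score.
    # The features and weights are invented for illustration; a real model
    # would be fit to historical slide events and validated by geologists.
    FEATURE_WEIGHTS = {
        "slope_gradient": 0.30,      # from terrain data
        "sandy_lithology": 0.25,     # from geological surveys
        "sparse_vegetation": 0.20,   # from aerial/satellite imagery
        "river_undercutting": 0.15,  # proximity to an eroding bank
        "historical_slides": 0.10,   # past events nearby
    }

    def landslide_risk_score(features):
        """Features are normalized to [0, 1]; returns a 0-100 score."""
        score = sum(FEATURE_WEIGHTS[name] * features.get(name, 0.0)
                    for name in FEATURE_WEIGHTS)
        return round(100 * score, 1)

    home = {"slope_gradient": 0.9, "sandy_lithology": 1.0, "sparse_vegetation": 0.7,
            "river_undercutting": 0.8, "historical_slides": 0.6}
    print(landslide_risk_score(home))  # 84.0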

Now that would be bigdata in action!

This is a really hard problem to solve, but if the bigdata “industry” wants to prove that it’s good at data fusion on a really hard problem, one that mixes an extremely complex and large amount of disparate data and has public benefit, this would be it.

Yanukovych, Money Laundering and a Probe: The Rise of Network Analytics

I have been working in Anti-Money Laundering (AML) for a while. Compared to healthcare or the more general Customer Relationship Management (CRM) space, the AML and Bank Secrecy Act (BSA) world is really the “shady” side of the customer, or at least it assumes that some customers are shady and tries to find them or prevent their actions. Some estimates suggest that the aggregate impact of BSA/AML (and fraud) regulations reaches only 10-20% of the illicit dollar flow in the world, so we know that while regulators and prosecutors do catch some of the bad guys, a lot of dollars remain on the table.

Take the recent case of Ukraine. It has been reported that the Swiss are launching a money-laundering probe into ousted president Viktor Yanukovych and his son Oleksandr. They think the money laundering could amount to tens of billions. All told, over 20 Ukrainians are listed as targets of the Swiss probe.

In BSA/AML terms, the Yanukovyches (father and son) are clearly Politically Exposed Persons (PEPs). And apparently the son had established a company that was doing quite well. That kind of information usually leads to flags that raise the risk score of a customer at a bank. So an investigation and PEP indicators are all good things.

Officials estimate that $70 billion disappeared from the government almost overnight. Of course, Yanukovych WAS the president of Ukraine, and he was on the run up until last week. But an investigation into money laundering of tens of billions that suddenly just happened?

Recently, I attended an ACAMS event in NYC. Both Benjamin Lawsky (regulator side) and Preet Bharara (prosecution side) spoke. One of their comments was that to have a real impact on money laundering, you have to create disincentives so that people do not break the law in the future. You can sue companies and people, and levy fines. These create disincentives, and disincentives are the only scalable way to reduce money laundering: stop it before it starts. The ACAMS event was US-based, but the ideas are valid everywhere. The Swiss have always had issues with shielding bad people’s money, but they are playing better than before.

But the real issue is that the conduits, the pathways, were already set up to make this happen. And most likely, many dollars have been siphoned off over time, with the last $70 billion being the end of the train. So the focus needs to be on active monitoring of the conduits and the pathways, with the BSA/AML components being one part of monitoring those paths. After all, the BSA/AML regulations motivate a relatively narrow view of the “network” within an organization’s boundaries.

If we want to really crack down on the large-scale movement of funds, it will not be enough to have the financial institutions, which have limited views into corporations, use traditional BSA/AML and fraud techniques. A layer of network analysis is needed at the cross-bank level that goes beyond filing a suspicious activity report (SAR) or a currency transaction report (CTR). And this network analytical layer needs to be intensely and actively monitored at all times, not just during periods of prosecution. While the Fed uses the data sent back in a company’s SARs and CTRs (and other reports) and in theory acts at the larger network level, it is not clear that such limited sampling can produce a cohesive view. Today, social media companies (like Facebook) and shopping sites (like Amazon) collect an amazing amount of information at a detailed level. The NSA tried to collect just phone metadata and was pounced on. So the information available in the commercial world is vast, while what the government receives is tiny.
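As a toy illustration of what cross-bank network analysis could look like, here is a Python sketch using networkx; the accounts, amounts and the choice of betweenness centrality are my own illustrative assumptions, not a description of any actual regulatory system:

    import networkx as nx

    # Toy cross-institution network: build a directed graph of transfers
    # reported by different banks and look for accounts that sit on many
    # paths between others (classic layering behaviour). All data is invented.
    G = nx.DiGraph()
    transfers = [
        ("acct_ua_1", "shell_co_cy", 9_500_000),
        ("shell_co_cy", "shell_co_ch", 9_400_000),
        ("shell_co_ch", "acct_ch_priv", 9_300_000),
        ("acct_ua_2", "shell_co_cy", 4_800_000),
    ]
    for src, dst, amount in transfers:
        G.add_edge(src, dst, amount=amount)

    # Accounts with high betweenness centrality are candidate conduits that a
    # single bank, seeing only its own slice of the graph, would likely miss.
    conduits = sorted(nx.betweenness_centrality(G).items(),
                      key=lambda kv: kv[1], reverse=True)
    print(conduits[:3])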

In other words, the beginnings of an analytical network are clearly present in the current regulations, but the intensity and breadth of the activity need to match the scale of the problem so that the disincentives dramatically increase. And while it is very difficult to make this happen across borders, or even politically within the US, it’s pretty clear that until the “network analysis” either increases its “resolution” or another solution is found, large-scale money laundering will continue to thrive and most enforcement efforts will continually lag.

It’s a balancing act. Too much ongoing monitoring is politically anathema to some in the US and can be very costly. Too little, and the level of disincentives may not deter future crimes.

Pop!…another $70 billion just disappeared.

Should companies organize themselves like consultancies? If they do, they need to hire like them as well.

A recent HBR article (October 2013) mentioned that P&G and other companies are rethinking how they organize themselves. The basic idea is that instead of having fixed organizations, companies should organize themselves like consultancies: everything is a project, and you assemble/disassemble teams as needed to solve problems. There will still be some ongoing operations that require “flat” jobs, jobs that are more repetitive but still require knowledge workers.

The article raises the question of whether organizing into projects with flexible staff (like consultancies) is a good thing for companies that are heavily knowledge-worker based. Part of the evidence that knowledge work is becoming more dominant comes from the decreasing COGS and increasing SG&A lines on financial statements: COGS indicates decreasing amounts of “blue-collar” work over time, while SG&A is a good proxy for white-collar, knowledge-worker jobs.

So is it?

My view is that it is not so cut and dried. Consultancies create large labor pools at the practice-area level that generally have a specific industry expertise. Generally, there are also horizontal practices for people who specialize in skills that cut across industries. Typically, these practice areas are large enough that the random noise (!) of projects starting and stopping creates a consistent utilization curve over time. And a management structure, for performing reviews and connecting with people, is still needed to ensure consultants feel like they have a home.

Another important aspect quoted in the article is the creation of repeatable methodologies that consultants are trained on, so that knowledge can be codified instead of hoarded.

Consultancies are good, but not super great, at knowledge management and sharing deliverables so that practices that have proven themselves to work can be re-used in other projects or contexts.

Let’s look at companies:

  • Companies have people, often in fairly substantial groups, focused on a horizontal area, e.g. finance, marketing, IT, customer service. Companies are often organized by product, which also forces them to be organized by industry, but there are many variations to this model.
  • Companies try to organize activities into projects. Not everything can be a project, e.g. ongoing operational support of different kinds. But companies do try to kick off efforts, set deadlines, integrate teams from different groups, etc.
  • Companies share deliverables from one project to another. Unlike consultancies, the pool of deliverables is often narrower because of corporate boundaries, and sharing within an industry is often not as robust as in a consultancy. Companies that frequently hire talent from the outside can bring these elements in, however.
  • Groups share resources across projects and groups, although not as robustly as consultancies. Companies are less robust at true sharing because, inside companies, headcount is often a measure of power. At consultancies, revenue and margin are usually the primary metrics, but of course these are only achieved through resources.

Companies today already employ many elements of what this model calls out, though most are not as robust as consultancies at some aspects. But are these differences the primary reason why consultancies have shown such resilience and execution in different circumstances?

There is probably another aspect. Consultancies typically seek out and retain a large amount of quality talent. Companies, to varying degrees, do not always hire highly talented individuals. Their pay, performance management approach and culture do not attract the best talent in the marketplace.

While companies could improve certain areas of their capabilities, there was an entire part of the story that was missing in the HBR article–a focus on top talent across the entire company and not just for a few key roles.

ACO and HMO: HMO redo or something new?

I have covered this topic before, but I came across an article that stimulated my thinking again.

It has been said that ACOs are really the new HMOs. HMOs in the 1990s were an experiment to put “risk and reward in the same bucket.” Much like integrated capitation, the idea is to let those who save money, while still delivering great quality care, benefit from their innovations.

This was the thinking behind the Affordable Care Act, which seeks to re-align risk and reward. It also, possibly unfortunately, makes Provider power even more concentrated. Maybe that’s good, maybe that’s bad.

A recent analysis of healthcare costs as a % of GDP came out in the New England Journal of Medicine. One of the questions we want to answer is where healthcare costs will be a decade from now based on changes today. Typical projections run that in a decade or so, 20% of US GDP will be spent on healthcare (all private and public expenditures). This is based on projections from the last 2 years’ worth of data, which have shown lower healthcare growth rates than the past 20 years. Those 2 years of growth rates have been thoughtfully reviewed, and it has been determined that they are not representative of the growth rates we are likely to see in the next decade or two.

The NEJM article, published May 22, 2013, “The Gross Domestic Product and Health Care Spending” (Victor Fuchs, PhD), suggests that the growth rate that should be used is probably the long-term growth rate, that recent changes in the growth rate are one-time events, and that using 2-year growth rates is typically a bad idea anyway. The article also describes how growth rates were greatly reduced, cut in half, when HMOs came out. HMOs rationed care. It is generally thought that most people want “all you can drink” healthcare at “buffet” prices, and this is the reason that HMOs were given the boot by consumers. Fuchs thinks that if you use historical growth rates, the share of GDP for healthcare grows to 30%. That’s huge.

So if ACOs are really HMOs reborn, wouldn’t that be a good thing? It’s probably worth thinking it through a bit to see if such a top-level thought holds water. First, we’ll recognize that ACOs and the concentration of power into Providers (the Act places enormous emphasis on hospitals), which possibly leads to verticalization, are not necessarily bad, at least in business circles. And combining risk and reward to get incentives right is also probably not a bad thing.

But there are other factors. We will also assume that Americans will not engage in healthier lifestyles, since changing people’s behaviors toward the healthy has not really worked, nor probably will ever work, without significant economic rewards. And we will assume that Americans want choice and do not like being told where to get their healthcare services (especially since care is so uneven).

If normal competition were at work, then we would expect that verticalization and centralizing risk & reward should all be good. We should expect to see declining prices and improving outcomes.

But when we look at the healthcare spectrum, we see that some of the largest improvements in outcomes result from drugs and medical devices. We do not see large improvements in care based on “processes” inside of hospitals. While some hospitals do indeed work on Six Sigma-type continuous process improvements and show great results, these are inconsistently used and are not a source of large-scale productivity increases.

So we have not seen that hospitals are capable of being managed to reduce their costs and improve outcomes to any significant degree. In fact, most innovation in the hospital community is around centralization, getting big to have the scale. But we need to ask ourselves whether hospitals becoming large lowers cost and improves outcomes, or whether it just allows more fixed cost (beds) to be spread out over a larger surface area, thereby reducing per-unit costs but not reducing total costs. The ACO model’s assumption that a minimum of a few thousand patients is enough to be efficient is probably way off; some estimates suggest you need a million patients. Hospital systems are responding to this scale requirement and scaling up. As we have seen, though, a larger company is not the most efficient when it comes to innovation or lowering cost without significant forms of competition.

And that’s where the ACO model is probably not like the HMO model. The ACO model is encouraging super-regional entities to be formed that will reduce competition in a given service area rather than increase it. Unlike national Payers that look a bit more like Walmart, super-regional ACOs will be large, but not super large. They will not have competition in their area. And whether they will improve their productivity is a bit suspect (I hope they do improve it, by the way). And hospital systems are fighting changes that would allow clinics to become more prevalent, as well as allowing non-physicians to write prescriptions, because that draws power away from them.

It has been widely studied and reported that HMOs reduced choice as a trade-off to obtain the benefit of reduced costs. This reduced choice is not directly present in the ACO model, although both Payers and Providers of course want patients to stay in network and have been forming “lean” or “focused” networks for just this reason. So there are no large forces in the Act that strongly ensure ACOs will help manage and control consumer choice.

So on the surface, ACOs look like a good model, but they become questionable fairly quickly. You can place your bets on what will happen or wait it out. It is clear that it will take years to demonstrate whether ACOs are working, just as it did for HMOs, well after they were killed off.

There are actually ways to fix many of these issues by addressing the underlying problems directly. For example, creating more uniform outcomes by standardizing processes and the quality of practicing physicians may reduce the need for the ultimate “go anywhere” flexibility driven by a patient’s need to find quality care. We need to promote competition on a very broad scale across multiple geographies by changing laws. Reduce Provider power around Rx writing and get people into clinics and alternative care delivery centers. We can also modify the reimbursement policies and centralize risk & reward so that investors (a term I use broadly here) receive a reward for taking risk and succeeding, unlike today, where they are penalized with lower payments, essentially creating a disincentive to invest.

All of these ideas would create a dramatic change in the cost curve over time without fundamentally altering the landscape. It would be a good start.

Ranking information, “winner take all” and the Heisenberg Uncertainty Principle

Does ranking information, who likes what, top-10 rankings, produce “winner take all” situations?

There is an old rule in strategy: there are no rules. While it is always nice to try to create underlying theories or rules of how the world works, science and mathematics are still too young to describe something this complex. Trying to apply rules like the concept presented in the title is probably some form of confirmation bias.

Having said that, there is evidence that this effect can happen, not as a rule to be followed, but something that does occur. How could this happen?

Ranking information does allow us, as people, to see what other people are doing. That’s always interesting–to see what others are doing, looking at or thinking about. And by looking at what other people are looking at, there is a natural increase in “viewership” of that item. So the top-10 ranking, always entertaining of course, does create healthy follow-on “views.”

But “views” do not mean involvement or agreement. In other words, while ranking information and today’s internet make it easy to see what others are seeing, our act of observation actually contributes to the appearance of popularity. That popularity then appears to drive things toward “winner take all.”

“Winner take all” can take many forms. It can mean that once the pile-on starts, a web site becomes very popular; this is often confused with the network effect. It can also mean that a song becomes popular because it’s played a lot, so more people like it, so it’s played even more, and so on. Of course this does not describe how the song became popular to begin with; perhaps people actually liked the song and it had favorable corporate support, and there is nothing wrong with that.

And this leads us to the uncertainty principle. The act of observation disturbs the thing we are trying to measure. The more scientific formulation has to do with the limits of observation of position and velocity at the atomic level but we’ll gloss over that more formal definition.

The act of observing a top-10 list on the internet causes the top-10 list to become more popular. The act of listening to a song, through internet communication channels, changes the popularity of the song. So it’s clear that, given internet technology, there is a potential feedback loop reminiscent of the uncertainty principle.
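Here is a toy Python simulation of that feedback loop; the parameters and the 70/30 split between list-driven and independent discovery are arbitrary assumptions, meant only to show the rich-get-richer dynamic:

    import random

    # Toy simulation of the feedback loop: items that appear on the "top list"
    # get extra views, which pushes them further up the list. Parameters are
    # arbitrary; the point is only that observation feeds back into popularity.
    random.seed(42)
    views = {f"item_{i}": 1 for i in range(20)}

    for _ in range(10_000):
        top10 = sorted(views, key=views.get, reverse=True)[:10]
        if random.random() < 0.7:
            # Most viewers browse the top-10 list, weighted by current popularity.
            pick = random.choices(top10, weights=[views[i] for i in top10])[0]
        else:
            # The rest discover items independently of the rankings.
            pick = random.choice(list(views))
        views[pick] += 1

    print(sorted(views.items(), key=lambda kv: kv[1], reverse=True)[:5])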

Alright, that makes sense. But the world is probably a little more complex than this simple thought.

While the act of observing could make something more popular, that does not mean the act of observing turns something unpopular into something popular. In other words, people are not fools. They like what they like. If something is on the top-10 list or comes from a band that has good corporate airtime support, that does not mean it is a bad song or a bad list. It does not mean that people would not like it if it did not play in that venue.

The internet is a powerful tool to help people quickly find what they want. The cost of finding a new web site or a new source of top-10 lists (or whatever) is fairly low, so there is no real inherent lock-in. Given the internet’s reach, the ability to rapidly escalate and de-escalate from “winner take all” to “has been” is fairly robust. It’s quite possible, in the spirit of making business rules for fun, that the internet produces a steady stream of “winner take all” events, and if there is a steady stream of them, then they are really just average events after all (regression to the mean). So with my fancy new rule, there are no “winner take all” events any more, just a large number of rapidly escalating/de-escalating average events; the frequency has just been bumped up.

That’s okay as well.

Yeah! Big data to the rescue…wait a second…

I use google alerts to give me a sense of the flow of articles. One of my filters is around bigdata and healthcare, and the flow around this topic has increased. I see many stories, a $100m investment here, a $20m investment there, and so on. That’s a lot of dollars! A lot of it is government backed, of course, and put on hyperdrive by a recent McKinsey report on bigdata and potential healthcare savings.

Breathlessly, drawing on my own bigdata and scientific background, I then ask, “Well, when should we see the effects of these bigdata exercises?” The answer is not so clear. As I explained in my last post, there has not been a dramatic lack of analytics applied to healthcare. However, I do think we should do more analytics, because clearly there are many ideas and insights to draw on.

But it will still take a few years before insight turns into action. And it will take longer for any significant savings and benefits to be realized. Why?

Unless we solve the payment and incentives issues (also blogged about before), what’s the incentive for large-scale change?

Bryan Lawrence, in today’s Washington Post opinion article “The fine-print warning on Medicare,” put it well.

  • No real productivity growth in many areas of healthcare
  • Benefits of reimbursement will never be given up to the government (we see this every year with Medicare payments; every year Congress avoids implementing payment changes, literally). This is the age-old question of making investments: “who gets to cash the check?”
  • Lower prices usually lead to providers increasing volumes
  • Congress will keep changing the laws anyway; you’ll never catch up.

You could peg this as cynical thinking, but it’s this type of back-of-the-envelope thinking that calls out issues that are hard to ignore and overcome quickly.

So while bigdata could give us great insights, implementing and acting on those insights (the change management problem) is larger and more complex. It’s like the Artificial Intelligence (AI) claims back in the ’80s and ’90s that said we could really use analytics to get better answers; after two decades, the crickets were chirping. The value realization is better now, and it is clear, at least to me, that more people performing analytics can make a difference. But the real policy world is much more difficult to navigate.

There are no good answers for accelerating value capture from bigdata, although everything from nationalization to forced integrated capitated models (like Kaiser) may be the only way to structurally change the system. It’s clear that competitive forces have been inhibited in the US and that there are externalities driving the dynamics of the healthcare system.

And the real change that’s needed is in you and me. We need to get healthier. And for that, we need to increase both our personal commitment to better health and instill changes in our government policies to change arcane food and commerce laws that lead to bad diets.