Anti-Money Laundering patent review

I was recently reviewing some anti-money laundering (AML) patents to see if any had been published recently (published does not mean granted).

Here are a few links to some patents, some granted, some applied for:

All of the patents describe a general-purpose system for calculating a risk score based on several factors.

In AML, the key data include:

  • A legal entity (name, location, type)
  • A “location” (typically a country) that determines the set of rules and “data lists” to be applied. This could be the LE’s country or the financial instrument’s country, but generally it embodies the jurisdiction that applies to the AML effort. A “data list” from a country or location is the list of legal entities that are being watched or have been determined to engage in money laundering, so we have a mix of suspected and validated data (a minimal screening sketch follows this list).
  • A financial instrument / product and its set of attributes such as transactions, amounts, etc.
  • A jurisdiction: the risk assessor’s set of rules. Typically these are rules created by a company or a line of business. These rules help identify an event and should be relatively consistent across an entire enterprise but also vary based on the set of locations where a company may operate. A bank’s Compliance Officer is especially concerned about this area as it also contains policies. The policies represent who needs to do what in which situation.
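
To make the “data list” idea concrete, here is a minimal sketch (with entirely made-up entity names and a simplistic fuzzy match) of screening a legal entity against a jurisdiction’s watch list; nothing here is drawn from the patents themselves.

```python
# Minimal sketch (not from any patent): screening a legal entity against a
# jurisdiction's "data list" using simple normalized-name fuzzy matching.
from difflib import SequenceMatcher

# Hypothetical watch list for a location/jurisdiction: entity name -> status.
WATCH_LIST = {
    "acme trading ltd": "suspected",
    "globex holdings":  "validated",
}

def normalize(name: str) -> str:
    """Lowercase and collapse whitespace so name variants compare cleanly."""
    return " ".join(name.lower().split())

def screen_entity(le_name: str, threshold: float = 0.85):
    """Return (list_entry, status, similarity) for the best match, or None."""
    candidate = normalize(le_name)
    best = None
    for listed, status in WATCH_LIST.items():
        score = SequenceMatcher(None, candidate, listed).ratio()
        if score >= threshold and (best is None or score > best[2]):
            best = (listed, status, score)
    return best

print(screen_entity("ACME Trading, Ltd."))  # likely hits the "suspected" entry
```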

I have not tried to capture the nature of time in the above list since all of these components can change over time. Likewise, I did not try to capture all of the functions an AML system must perform, such as regulatory reporting. We have also ignored whether all of these components are used in batch or real-time to perform a function, or whether rules engines and workflow are powering some incredibly wonderful AML “cockpit” for an AML analyst at a company.

We assume that the ultimate goal of an AML system is to identify LEs potentially engaged in money laundering. I write “potentially” because you need to report “suspicious” activities to the Financial Crimes Enforcement Network (FinCEN). We can never know for certain whether all of the data is accurate or that an individual transaction is actually fraudulent. We can, however, use rules, either a priori or predictive, to identify potential money laundering events.

The patents describe a method of combining information, using a “computer system,” to calculate an AML risk score. The higher the score, the more probable that an LE-FinancialProduct combination is being used for money laundering. Inherently, this is probabilistic. It’s also no different than any other risk scoring system. You have a bunch of inputs, there is a formula or a predictive model, and there is an output score. If something scores above a threshold, you take action, such as reporting it to the government. Just as a note, there are also strict guidelines about what needs to be reported to the government as well as areas where there is latitude.
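
As a rough illustration of that workflow, here is a minimal sketch of a weighted-factor score with a reporting threshold. The factor names, weights, and threshold are assumptions for illustration only, not anything claimed in the patents.

```python
# Illustrative sketch of the generic risk-scoring workflow: weighted inputs,
# a combined score, and a threshold that triggers an action. Factors and
# weights are made up for illustration.
RISK_WEIGHTS = {
    "on_watch_list":       0.5,   # LE appears on a location's data list
    "high_risk_location":  0.2,   # jurisdiction considered high risk
    "velocity_anomaly":    0.2,   # unusual count/amount over a window
    "structuring_pattern": 0.1,   # many just-under-threshold transactions
}
REPORT_THRESHOLD = 0.6

def risk_score(factors: dict) -> float:
    """Combine 0/1 (or 0-1) factor values into a single score in [0, 1]."""
    return sum(RISK_WEIGHTS[name] * float(value)
               for name, value in factors.items() if name in RISK_WEIGHTS)

def review(le_product_id: str, factors: dict) -> str:
    """Above the threshold we flag for analyst review / potential filing."""
    return "flag_for_review" if risk_score(factors) >= REPORT_THRESHOLD else "pass"

print(review("LE123-PRODUCT9",
             {"on_watch_list": True, "velocity_anomaly": True}))  # flag_for_review
```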

The trick in such a system is to minimize false positives: LE-FinancialProduct combinations identified as money laundering that in reality are not. False positives waste time. So the system tries to create the best possible discrimination.

So now look at the patents using the background I just laid out. They are fairly broad; they describe this basic analysis workflow. It’s the same workflow, using the same concepts, as credit scoring for FICO scores, credit scoring for many types of loans, or marketing scoring for lifetime value or next-logical-product purchasing. In other words, the approach is the same. Okay, these are like many existing patents out there. My reaction is the same: I am incredulous that general patents are issued like they are.

If you look past whether patents should be granted for general concepts, it is useful to note that many of these came out around 2005-2006, a few years after many regulations changed with the Patriot Act and other changes in financial regulations.

So the key thought is: yes, patents are being submitted in this area, but I think the relatively low number of patent applications reflects that the general workflow is, well, pretty general. Alright, the 2011 patent has some cool “graph/link analysis,” but that type of analysis is also a bit 1980s.

Note: I selected a few data concepts from the real-time AML risk scoring patent to give you a feel for the type of data used in AML around the transaction (a sketch of such a transaction record follows the list):

  • transaction amount,
  • source of funds such as bank or credit cards,
  • channel used for loading funds such as POS or ATM,
  • velocity such as count and amount sent in the past x days,
  • location information such as number of pre-paid cards purchased from the same zip code, same country, same IP address within x hours,
  • external data sources (e.g., an Interpol list) or internal data sources
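
Here is a hedged sketch of how those concepts might be bundled into a single transaction feature record; the field names and the 7-day/24-hour windows are my own assumptions, not the patent’s definitions.

```python
# Hypothetical feature record bundling the transaction-level data concepts above.
from dataclasses import dataclass

@dataclass
class TransactionFeatures:
    amount: float                 # transaction amount
    funding_source: str           # e.g. "bank_account", "credit_card"
    load_channel: str             # channel used for loading funds, e.g. "POS", "ATM"
    count_past_7_days: int        # velocity: number of transfers in the window
    amount_past_7_days: float     # velocity: total amount sent in the window
    cards_same_zip_24h: int       # prepaid cards purchased from the same zip code
    cards_same_ip_24h: int        # prepaid cards purchased from the same IP address
    on_external_list: bool        # hit on an external source (Interpol-style list)
    on_internal_list: bool        # hit on an internal data source

tx = TransactionFeatures(9500.0, "credit_card", "POS", 12, 48000.0, 7, 7, False, True)
print(tx)
```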

Do customers want Social Customer Service? Yes, and they want more…

I receive Google Alerts on BigData in Healthcare as well as social customer service (these are aligned with my professional activities).

I received two alerts recently:

There are many more like this although I admit, the number of alerts I receive touting social customer service is much larger.

Here are my thoughts. Customers want to get a job done. They often do not care how it gets done, especially if it’s a negative or operational issue, for example, an item they purchased is not working or they need to quickly fix a bad banking transaction.

The customer’s job is: “fix or solve the issue.” This is why FTR (First Time Resolution) is the top customer service metric that customers want to experience. So the question for each customer service episode, again excluding “rage” issues, is “how do I get this done as quickly and efficiently as possible?”

I am not sure that customers care whether it’s tweets, Facebook pages, or the call center.

But today, social media channels, with their expectations of a fast response, often have a faster SLA for responding. Social channels cannot easily spike FTR, but they can get the customer engaged faster than, say, a phone channel with an IVR navigation time of at least 5 minutes.

So customers gravitate to those channels that get them going as fast as possible, and today, social channels have spiked some aspects of customer service. So yes, social customer service is relevant so long as it remains responsive. I think the second article is trying to make a point, but the nuances matter here. It’s not that there is no proof customers want to use social customer service channels; it’s that they will use social customer service channels as long as companies are responsive in that channel.

And there are some interesting balancing forces that may sustain and grow channels such as Twitter and Facebook over the long run. Strategically, since there is an opportunity to engage customers directly in a public forum, and Twitter has shown itself to be very effective at this, companies need to capitalize on these channels. Hence being part of the mix and engaging are additional benefits of maintaining a social customer service channel presence.

I am not swayed by doe-eyed or save-the-world arguments when it comes to customer management. My perspective is that you need to invest in this channel and manage the balance with other investments.

If you need help finding the balance, give me a call.

Community rating or individual rating for health exchanges

I was chatting with someone about health exchanges and they mentioned building risk models at the individual level. Certainly that’s a good thing. However, according to the Act, 2013 will have adjusted community ratings. Ratings will be based on a few factors at the individual level. The obvious variables are present: smoking and such. But the other variables will include things like age, family size, and geography. This is for non-grandfathered (i.e., new) plans and will be in effect for companies with fewer than 100 employees.

The pricing ratios, that is, the price ratios between the different risk areas, cannot exceed certain limits. Of course, just because the law says this does not mean that there are not costs that will be incurred outside those ratios. This will lead, most likely, to higher rates in other plans such as employer plans. Perhaps this will force more employers to be self-insured, which might not be a bad thing except that it creates an employment model that favors large companies and could force the squeeze on small companies. Perhaps the landscape will shift to one of large employers overall, the exact opposite of the markets the Act is supposed to help: individuals and small companies.
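
As a toy illustration of how a rating-ratio cap pushes costs elsewhere (the 3:1 cap below is an assumed example value for illustration, not a statement of the actual limits):

```python
# Illustrative only: capping the price ratio between the highest- and
# lowest-rated groups. The 3.0 cap is an assumed example value.
MAX_RATIO = 3.0

def cap_premium(base_premium: float, risk_multiplier: float) -> float:
    """Limit a rate so it never exceeds MAX_RATIO times the base rate."""
    return base_premium * min(risk_multiplier, MAX_RATIO)

base = 250.00                    # lowest-rated group's monthly premium (assumed)
print(cap_premium(base, 4.2))    # actuarial cost suggests 4.2x; capped at 750.00
# The cost above the cap (here 1.2x the base) does not disappear; it has to be
# absorbed somewhere else, e.g. spread across other plans or members.
```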

Opportunities for BigData and Healthcare: Need a little change management here

What are the BigData opportunities in healthcare? Today, BigData techniques are already employed by startups because BigData technology can be used very cost-effectively to perform analytics, and that gives startups an edge on the cost and capabilities front.

But what are the opportunities in healthcare for established companies? I’ll offer the thought that it can be broken into two main categories. The categories reflect the fact that there are in-place data assets that will be in place for quite a while. It’s very difficult to move an entire infrastructure to a new technology base overnight. It is true that if some semblance of modern architecture (messaging, interfaces for data access) is in place today, the movement can be much faster because the underlying implementation can be changed without changing downstream applications.

The two categories are:

  • Move targeted, structured analytical workflows to BigData.
  • Enable new analytical capabilities that were previously not viable.

The first category speaks to the area of BigData that can make a substantial ROI appear fairly quickly. There are many well-understood workflows today inside healthcare Payers, for example, that simply run too slowly, are not robust, or are unable to handle the volume. Purchasing another large, hardware-based appliance is not the answer. But scaling out to cloud scale (yes, using a public cloud for a Payer is considered leading edge, but it is easy to do with the proper security in place) allows a Payer to use BigData technology cheaply. Targeted workflows that are well understood but underperforming can be moved over to BigData technology. The benefits are substantial ROI for infrastructure and cost avoidance for future updates. The positive ROI from these projects means the transition pays for itself, and it can occur quite quickly.
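
For flavor, here is a minimal sketch of what one such targeted workflow, say a paid-claims rollup, might look like on a scale-out engine. It assumes PySpark and hypothetical table, column, and path names; a real Payer migration would obviously involve far more than this.

```python
# Hypothetical sketch: a well-understood claims aggregation moved to PySpark.
# Paths and column names are illustrative, not from any real Payer system.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-rollup").getOrCreate()

claims = spark.read.parquet("s3://payer-datalake/claims/2013/")  # hypothetical path

rollup = (claims
          .filter(F.col("claim_status") == "PAID")
          .groupBy("provider_id", "procedure_code")
          .agg(F.count("*").alias("claim_count"),
               F.sum("allowed_amount").alias("total_allowed")))

# Write the rollup back out for downstream reporting applications.
rollup.write.mode("overwrite").parquet("s3://payer-datalake/rollups/provider_procedure/")
```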

The second opportunity is around new analytical capabilities. Today, Payers and others cannot simply perform certain types of analytics easily because of limitations in their information management environments. These areas offer, assuming the business issue being addressed suggests it, substantial cost savings opportunities on the care side. New ways of disease management, outcomes research, and network performance management can make substantial returns in under 2 years (it takes a year to cycle through provider network contracts and ensure the new analytics has a chance to change the business process). It’s these new capabilities that are most exciting.

The largest impediment to these areas of opportunity will be change management. Changing the way analytics are performed is difficult. Today, SAS is used more for data management than statistical analysis and is the de facto standard for the analytical environment. SAS offers grid and other types of larger data processing solutions. To use BigData, plans will have to embrace immature technology and hire the talent to deploy it. But the cost curve could be substantially below that of scaling current environments, again paying for itself fairly quickly. Management and groups used to a certain analytical methodology (e.g., cost allocations) will have to become comfortable seeing that methodology implemented differently. Payers may seek to outsource BigData analytics tools and technologies, but the real benefit will be obtained by retaining talent in-house over the long run, even if some part of the work is outsourced. Because analytics is a core competency, and Payers need to, in my opinion, retain some core capability versus just becoming a virtual shell, BigData needs to be an in-house capability.

ProPublica: So why can’t the government analyze the data? And what about commercial insurance plans? What questions should we ask the data?

There was a recent set of articles quoting ProPublica’s data analysis of Medicare Part D data. ProPublica acquired the data through the Freedom of Information Act and integrated it with drug information and provider data. As a side note, there has also been the recent publishing of CMS pricing data using Socrata’s dataset publishing model (an API factory), and you can plug into the data navigator at CMS.

You can view various trends and aggregations of the data to compare a provider against others and navigate the data to gain insight into the script behavior of Medicare Part D providers. If you review the methodology used to create the data, you’ll realize that there are many caveats, and just reading through some of the analysis makes clear that a simple evaluation of the data is insufficient to identify actionable responses to insights. You have to dig deep to see if a trend is really significant or an artifact of incomplete analysis. Yes, there is danger in not understanding the data enough.
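
To give a feel for what “digging deeper” looks like, here is a small, hedged sketch of provider profiling on a prescriber-level extract. The file name and columns are assumptions, not ProPublica’s actual schema, and a z-score outlier is only a starting point for review, not a conclusion.

```python
# Hypothetical sketch of simple prescriber profiling on a Part D-style extract.
# Column names are assumptions about such a file, not the published schema.
import pandas as pd

df = pd.read_csv("partd_prescriber_summary.csv")   # hypothetical extract

# Compare each prescriber's share of brand-name claims against specialty peers.
df["brand_share"] = df["brand_claims"] / df["total_claims"]
peer = df.groupby("specialty")["brand_share"]
df["brand_z"] = (df["brand_share"] - peer.transform("mean")) / peer.transform("std")

# A large z-score is only a flag for further review; the methodology caveats
# (incomplete data, small denominators, case mix) still apply.
outliers = df[df["brand_z"] > 3].sort_values("brand_z", ascending=False)
print(outliers[["prescriber_id", "specialty", "brand_share", "brand_z"]].head())
```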

But the ProPublica analysis is a great example of analysis by external groups. It is an example of simple provider profiling that helps detect variations in standards of care as well as outright fraud. The medical community continues to improve standards of care but it is a challenging problem with few incentives and governance structures.

The question we may ask ourselves is, “Why does the government not perform more analysis?”

The short answer is that they do. The government performs extensive analysis in a variety of ways. What the ProPublica publications really show us is that there is a lot more analysis that could be performed and that could be useful in managing healthcare. Some of it is quite complex, and cost-wise we should not, either through expectations of infinite funding that does not exist or given the law as set by Congress, expect the government to perform every type of analysis that one could imagine should be performed. Everyone admits that there is more there, and I am sure we all have opinions about top priorities that conflict with others’.

And the government does act as a publisher of data. The Chronic Condition Warehouse (CCW) is a good example of data publication. CMS also has plans to do more in the information management space that should make sharing easier. I am concerned about pricing, though. Based on a very small sampling, it appears that extract costs are still quite high and cumbersome, on the order of $80K for several extracts covering just 2 years. This needs to flatline to $0 per extract, since we already pay for CMS and our funds should be wisely used to enable this service from the start. Both anonymized and identified datasets are available. Comprehensive, anonymized datasets should be available for free.

This publication of the data, including the pricing data, is a great example of “democratizing” data. Many companies use this language to describe the ability to access datasets in a way that any analyst (with sufficient HIPAA safeguards) can gain insight and do their jobs better through information driven guidance. We can see from these examples that just publishing the raw data is not enough. You must have the information management technology to join it together with other datasets. This is what makes analysis so expensive and is the main barrier to data democratization.

So why can’t commercial health plans publish their data? There is really no benefit to them for publishing. One could argue that individual state subsidies such as non-profit status, and hence a state entitlement that the residents pay for, should provide the leverage to force data publishing, but the plans themselves have little incentive. Commercial plans do analyze Provider data and create Pay for Performance (P4P) programs used to manage their networks. P4P often ranks Providers and provides incentives to deliver more “value.”

Of course, agency theory applies here, and P4P can really only ever be marginally helpful. Sometimes marginally helpful is good, of course, so I am not dismissing it. However, the same issues around the ProPublica analysis apply to health plans’ data.

  • First, the information technology of many plans is fairly immature despite the billions these plans handle. This is because they focus on claims processing versus analytics.
  • Second, they have the same data integration issues that everyone else has, and it’s hard work to get it right after 30 years of extremely bad IT implementations and a lack of management talent.

Things are changing now, but I predict that even with better “data democratization” information management technology, there is still not enough coverage of analytics to be transformational. It is possible that if the government really wants to get serious about managing the costs of healthcare and gaining insights from data to help drive transformational cost changes, it really needs to have all the plans publish their data together.

Again, you run into competitiveness issues fairly quickly since the “network” for a plan and the prices they pay are a big part of a plan’s competitive advantage.

But I like to blue-sky it here on my blog.

As a truly blue-sky thought, if the U.S. is really, albeit slowly, moving towards single payer (the government is already the largest payer anyway), then as a compromise to stave off true single payer, perhaps the government can force publishing of claim data for anyone to analyze (following HIPAA, of course). This could stave off the march towards a single-payer model and introduce consistency in the networks. This would shift the competitive focus the plans have and force them to compete in other healthcare areas that need more focus, such as sales & marketing, member/patient education outreach, etc.

Of course, there is another blue-sky thought: that Providers will start their own plans (a micro-patchwork of thousands of plans), publish their own data according to government standards, and democratize the data to help the health of our citizenry. There are already examples of this model: the ACO model provided by the new parts of the Patient Protection Act, as well as Medicaid programs where MCOs have sprung up attached to hospital systems to serve the Medicaid population profitably.

As a final note, what surprised me the most about Part D prescriptions is that 3% of the script writers wrote more than half of all prescriptions. This could mean that these Providers are concentrated around those that need the most help. Perhaps some government focus on these “super-scripters” could help manage their costs down.

There are some other thought-provoking bits of information in the topline numbers as well. Based on the report, the ratio of providers to beneficiaries is 1 to 15. This seems like a really high concentration of providers, in the sense that each physician who wrote Part D scripts saw about 15 beneficiaries. In the Michael Porter world where specialists focus more on their specialty and become highly efficient at it (better outcomes, lower costs), I would think that a higher ratio would reflect focus and perhaps the opportunity for innovation. Perhaps not.

Also, what’s astounding is that the total cost was roughly $77 billion. This is for prescriptions, including the costs of the visits. This helps prop up the pharmaceutical industry. Many of the top drugs are still branded drugs versus generics. But regardless of the industry it helps, that’s a lot of money. Drugs are wonderful healthcare productivity boosters (they truly make a difference in people’s quality and duration of life), but we need to continually attack these cost bumps to shrink them.

It would also be instructive to bounce the Medicare Part D data against provider quality scores, say at an aggregate level.

We could then answer the ultimate question, which centers on value. That is, for the dollars we invest in drugs under Part D, are we getting good outcomes? Is the Effort-Return equation producing the best numbers for us? That’s the question we really need to answer.

Healthcare, customer marketing, the iPhone and education? Can we personalize all of these?

I was listening to a TEDTalk recently about education. One of the ideas in the talk was that children are not receiving the education they need. Many of the education programs, especially at the federal level, try to force a common structure, standardized tests, on students and this has the effect that teachers teach to the test. By teaching to the test, the curriculum normalizes to a focus on the test content. Essentially this has led to the same curriculum for all students. After all, they all need to take the test.

The logic reminded me of healthcare conversations. Today, medicine, by and large, is applied across wide swaths of people. But a funny thing happened along the way to medication heaven: some medicines worked better in some individuals than others. Over time, it was found that the more complex the medicine, the more likely it worked well in some and not at all in others, and perhaps even hurt the patient.

This created an opportunity for tests, tests that could determine when a medicine would work well in a patient. These tests suggested that medication had to be personalized to an individual based on their specific chemistry and, more importantly, their specific genetic structure. While the differences in a human being’s genetic structure are rather small, they are significant for the medicines we create.

The field of personalized medicine was born. While the concept of personalizing medicine to a specific patient is age-old, personalized medicine today really implies the use of newer technology or processes to customize treatment for a patient. It recognizes that many complex human diseases and issues require a deeper understanding of what will work on a patient and that each patient is different.

And if we look at the area of customer marketing, one of the most important trends of the past 20 years has been taking marketing efforts down to the level of the individual. Knowing as much as you can about one person in order to better communicate with them is all based on the idea that if you can personalize the messages based on the actual marketing target (the shopper, for example), then that message will be heard and be much more effective at changing behavior, which in this case is to purchase a product.

The iPhone is another example. The iPhone is really about mass customization: the ability to create a platform that can be further customized by “apps.” The apps are the customization that tailors the phone to the needs of each individual. The iPhone’s success is a testament to this core idea.

But in the education area, according to the TEDTalk, it’s all about standardization. What is needed, according to the talk, is personalized education. As healthcare, customer marketing, and many other areas of human endeavor have found, when something is personalized, it often performs better, is more relevant, or is more beneficial.

What is stopping personalized education? Is it the bureaucracy? The teachers? The principals? The federal laws?

Surely, all of these probably play a role. But perhaps the larger issue, and one that underpins healthcare, customer marketing, and the iPhone, is that there has to be a platform of productivity behind the customization. Education has none, but one is on the horizon.

Healthcare has a platform of science, with a fairly rich (and growing)  backbone of chemistry and genetics. Healthcare has fundamental tools that can be manipulated like building blocks to work through the discovery process. It can run many experiments to optimize itself to the problem.

Customer marketing, often driven by digital marketing, can run thousands of experiments (placement, color, message, etc.) and has fairly solid technical commonality around message delivery and presentation; the technology is fairly mature and continually maturing.
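
For example, a single one of those thousands of experiments might reduce to something as small as the sketch below: two message variants, conversion counts, and a significance check. All numbers are made up.

```python
# Minimal sketch of one digital-marketing experiment: two message variants,
# conversion counts, and a two-proportion z-test. Numbers are illustrative.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, p_value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"z={z:.2f}, p={p:.3f}")   # decide whether variant B's lift is real
```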

The iPhone is an obvious platform that others can innovate on directly.

What about education? Can it rely on a few building blocks? Is there a mechanism by which the productivity of teachers or of the education process can dramatically accelerate? Does it have to be the same chalkboard and lecture format?

There are positive signs of a “platform” on the horizon. In my opinion, the highly disruptive power of digitally based education (video lectures, tests, etc.) could be the platform that is needed. By siphoning off the mechanics of core learning activities in some fields, for example, addition and subtraction, calculus, and many other areas, the education process’s productivity could rise. Teachers need to focus on the hardest educational processes and the toughest subject areas and customize the dialogue with the student based on what the student needs. But all of this customization, this personalization, takes significant amounts of time. Where will that time come from? It’s not going to come from teaching a standard curriculum using traditional techniques. We know that this does not work.

What is clear is that teacher engagement makes all the difference. And teacher productivity is key. Yes, teachers need better oversight, just as principals need better oversight, and the school bureaucracy and school boards are neither healthy nor sensible. But the largest and most important impact we can have is to give teachers a productivity boost. And the best way to do that is to let the easy stuff be handled through other teaching techniques and focus teachers on students at a more personalized level.

We see this in colleges today. Despite the incredibly poor management at colleges and universities today (it really is quite disappointing), they do have one thing right, although for the wrong reasons. There is a shift to computer-based training for standard material. This frees up the professor’s time. Of course, most universities should just be shut down for being so poorly managed, but that’s another story. Clay Christensen talks about disruption in the education market at the senior educational level. It could wipe out many inefficient and ineffective universities (hurray!) and actually improve education for all of society instead of the few that can pay increasingly large amounts of money.

We need some of that disruption at the elementary and high-school levels as well in order to allow personalized education to be used at the level where it can be influential. We need that today.

Healthcare risk: long-term versus short-term

A long time ago, when the first financial crisis hit around the savings and loan (S&L) industry in the 1980s, I remember a thread of conversation around using long-term interest rates to make short-term bets. The idea was that by borrowing long term at a lower rate, you could play with assets on a short-term basis and make money. That’s not a new concept; it’s just that this behavior promoted playing at the boundaries of the risk envelope.

An opinion article today in the Washington Post pointed out the same thing happening today in healthcare.

The article described how the Oregon Medicaid lottery system gave new insights because it was essentially run as a large-scale randomized controlled trial (RCT). They found that there was an improvement in health when Medicaid benefits were used. It also pointed out that the benefits were a bit different than expected. There was no change in acute or chronic issues like the big ones: diabetes, hypertension, etc. Instead, they found a decrease (around 30%) in mental health issues. There were other benefits as well.

Based on my own industry experience, I know that it’s hard to keep Medicaid members enrolled. Medicaid serves lower-income groups, which typically move in and out of managed care at higher rates due to relocation, job changes, the ability to get to healthcare facilities (access), and other factors. Hence, the benefits you obtain from the Medicaid benefit are probably more long term (which is part of what the study found). So even in a group that is highly transitory, long-term benefits were observed.

With commercial experience, the groups are less transitory to some degree. But people still do switch plans and change jobs.

Either with commercial or Medicaid, or really any system, when access to something short-term provides long-term benefits, how do the people who put in the effort (in this case the insurance companies who shoulder the risk) match that effort against the long-term benefits? Of course, actuaries play with this equation all the time.

But imagine this essential tension playing out at the macro scale. You have systematic, exceptionally large, and complex resources spent today that have a long-term payoff. How do you run a business that way?

You can of course charge more today, or you can hope that on average, even in the face of all the churn, things net out neutral. But this type of risk calculus can lead to a policy of minimizing investments today, perhaps to the point of depressing outcomes in the future.
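
Here is a toy model of that calculus, with made-up numbers: a plan pays for an intervention today, the benefit accrues over years, and annual churn plus discounting determine how much of the benefit the paying plan ever captures.

```python
# Toy model of the short-term-cost vs. long-term-benefit tension under churn.
# All numbers are illustrative assumptions.
def expected_captured_benefit(annual_benefit: float, years: int,
                              churn_rate: float, discount_rate: float) -> float:
    """Benefit the paying plan expects to capture, given members leave each year."""
    total = 0.0
    retention = 1.0
    for t in range(1, years + 1):
        retention *= (1 - churn_rate)            # chance the member is still enrolled
        total += annual_benefit * retention / (1 + discount_rate) ** t
    return total

cost_today = 1200.0                               # e.g. an intervention paid for now
value = expected_captured_benefit(annual_benefit=400.0, years=10,
                                  churn_rate=0.30, discount_rate=0.05)
print(f"captured benefit: {value:.0f} vs cost today: {cost_today:.0f}")
# With 30% annual churn, the payer captures far less than the full 10-year value,
# which is exactly why the incentive to invest today is weak.
```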

It seems that incentives are out of whack, and until the incentives change and come into alignment, there may not be a lot of progress.

Social media, BigData and Effort-Return

The classic question we ask about marketing, or really any form of outreach, is: given the effort I expend, what is my return? This Effort-Return question is at the heart of ROI, the value proposition, and the general, down-to-earth question of “Was it worth it?”
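
In its most down-to-earth form, Effort-Return is just ROI. A minimal sketch with made-up numbers:

```python
# Effort-Return as plain ROI, with made-up numbers for a social-media program.
def roi(return_value: float, effort_cost: float) -> float:
    """(return - effort) / effort; positive means 'it was worth it'."""
    return (return_value - effort_cost) / effort_cost

effort = 250_000.0   # annual spend on the channel (assumed)
ret = 310_000.0      # attributed value: deflected calls, retained customers, etc.
print(f"ROI = {roi(ret, effort):.0%}")   # 24%
```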

That’s the essential question my clients have always asked me, and I think that’s the big question forming around the entire area of social media and BigData. It’s clear that social media is here to stay. The idea that “people like us” create our own content is very powerful. It gives us voice, it gives us a communication platform, and it gives us, essentially, power. The power of the consumer voice is amplified.

Instead of 10 people hearing us when we are upset at something, we can have 1,000,000 hear us. That’s power. And the cost of “hearing” the content, of finding it, is dropping dramatically. It is still not free to access content; you still have enormous search costs (we really need contextual search: searching those resources most relevant to me instead of searching the world). But search costs are dropping, and navigation costs are dropping. Every year, those 1,000,000 can listen and filter another 1,000,000 messages.

BigData has come to our rescue… in a way. It gives more tools to the programmers who want to shape that search data, who want to help us listen and reach out. There’s a lot of hype out there, and the technology is moving very fast, so fast that new projects, new frameworks, and new approaches are popping up every day.

But is it worth the effort? If so, just how much is it worth? That’s still a key question. The amount of innovation in this area is tremendous, and it’s not unlike the innovation I see occurring in the healthcare space. Everyone is trying something, which means that everything is being tried somewhere.

That’s good. But it’s already pretty clear that while we can now communicate in many new channels unavailable 5 years ago, communicate more easily and more frequently, and find things that interest us, the question remains: does it really pay off? Do companies benefit by investing now and trying to get ahead, or do they just try to keep pace and meet some, but not all, customer expectations? Will entire companies fall because of social media and BigData?

Those are hard questions to answer, but I think we can look at other patterns out there and see that even with all the hype today, it will be worth it. Companies should invest. But many should not overinvest. When the internet was first widely used, it was thought that it would change the world. Well, it did. It just took two decades to do that instead of the few years that futurists predicted. But with a higher density of connectivity today, innovations can roll out faster.

But I think that’s where the answer is. If you are in the business of social, then being on that wave and pushing the edge is good. If your business is BigData tools and technologies, then yes, you need to invest and recognize that it will be worth it in the long run if you survive. But many brands (companies) can just strive to keep pace and do okay. There are exceptions of course. Product quality and features still dominate purchase decisions. Yes, people are moved by viral videos and bad customer service or bad products, but companies with long-standing brands whose products are core can afford to spend to meet expectations versus having to overinvest. They will do just fine keeping pace and continuing to focus on the product as well as marketing. For example, does Coca-Cola expect to double market share because they are better at social media than others? Will they grow significantly because of it? It’s not clear, but for some segment of products, the spending pattern does not have to be extraordinary. It just needs to keep pace and be reasonable.

This gets us back to the question of social media, BigData, and Effort-Return. Effort-Return is important to calculate because brands should not overinvest. They need to manage their investments. Is social media and BigData worth the investment? Absolutely; it’s really a question of degree.

Lean startups

The May issue of HBR has an article on the lean startup model. Essentially, you prototype something, find clients quickly, get feedback, and iterate again. The idea of “lean” is that you forgo deep planning and marketing that may not make sense, since most plans change rapidly anyway.

There is a lot of truth in that. I’ve helped startups (even in my early college and grad-school days), and certainly it makes sense to try something and keep iterating. I found that to be true with writing, software, management consulting ideas, and a variety of other areas.

However, it’s not universally true. You really do need deep thinking in some cases, in some industries, or in situations where just getting to the first prototype will consume significant capital. Hence, the idea is a good one but should be judiciously applied. That’s not to say that getting continuous feedback is ever bad; it’s just that sometimes you need more than a prototype.

There is an old management principle around innovation. The “learning while doing” model says you cannot know everything, or even a tenth of what you need to know, so it’s better to get started and learn as you go. That’s the basic concept behind the lean startup (see here for more info on learning-by-doing, which is a concept from the Toyota system).

The concept is bouncing around the technical crowds as well. This article makes the case that you need to “learn fast” versus “fail fast, fail often,” which is in the spirit of the lean startup. In fact, there are now lean canvases that you can put together. While there are a lot of good ideas here, I think the only rule about employing them is “pick your rules carefully.”

Hospital profits and trust in the medical community

The Washington Post had an article that covered how hospital profits increase when there are complications with surgery. I do not think that the health care system causes complications intentionally, but for me this links back to arguments made by Lawrence Lessig.

His argument in his book “Republic, Lost” is that the presence of money (profits) in the wrong location (complications related to surgery) causes us to think differently about the relationship between those who provide care and patients (givers and receivers). He believes that the mere presence of money causes us to change our trust relationship with the other party.

There does seem to be some evidence of physician-led abuses in the care community. But abusers are more than just providers. An entire ecosystem of characters is at work trying to get a slice of what is an overwhelmingly large pool of money in the US economy.

It is a large pie, and so we should expect some abuses by all parties involved, including patients! The issue is really about how the presence of money, and of stories like the above that discuss it, distorts our trust relationship. According to Lessig, it is this distortion that is eroding trust.

Using Lessig’s argument, it is not that we think Providers are needlessly causing complications to obtain more profit, but the presence of money for the wrong incentive causes us to think twice. This is the essence of his argument of “dependency corruption.”

To remove these incentives and restore trust, is the solution an integrated capitated model like Kaiser Permanente, where Provider and Payer are one and the same and hence there is a motivation to reduce costs and improve outcomes, because those who save dollars get to “cash the check”?

If you believe that this incentive model is the only one that could restore trust, what is the eventual outcome? Could it be that the entire healthcare insurance market will fragment into thousands of small plans, perhaps like Canada, where there is a central payer and then thousands of healthcare plans to fill in the cracks?

Or is the only way to restore trust to go to a national payer system so that a majority of the healthcare delivered would have an integrated incentive?

Are there any in-between models that work?

It’s not clear what will happen, but it does seem that trust, as abstract as it sounds, could lead to major structural shifts in the industry, just as trust in today’s government seems to be greatly diminished (citizens think that the government is captive to special interest groups and lobbyists who finance their campaigns).

Regardless, I think that smart information management technologies can support that type of fragmentation and still be efficient, so we should not let technology limit the best model for healthcare delivery. After all, the new health care law (the Patient Protection Act) is attempting to create a nationwide individual market (almost overnight), and the plans must meet minimum standards. They will also include little gap-fillers for those that want delta coverage.

We’ll see.