The Data Dive

Author: Fuad Rahman

The opportunities and challenges of using Natural Language Processing in enriching Electronic Health Records

The use of Electronic Health Records (EHR) is increasing in primary care practices, driven in the United States partly by the Health Information Technology for Economic and Clinical Health Act. In 2011, 55% of all physicians and 68% of family physicians were using an EHR system. In 2013, 78% of office-based physicians reported adopting an EHR system. EHRs can, however, be a source of frustration for physicians. A 2012 survey of family physicians revealed that only 38% were highly satisfied with their EHR. Among the barriers to EHR adoption and satisfaction are issues with usability, readability, loss of efficiency and productivity, and divergent stakeholder information needs, all crammed into a single, small form factor.

Cost of EHR Systems

The American Recovery and Reinvestment Act incentivizes expanding the meaningful use of electronic health record systems, but this comes at a cost. A recent study has reported the cost of implementing an electronic health record system in twenty-six primary care practices in a physician network in North Texas, taking into account hardware and software costs as well as the time and effort invested in implementation. For an average five-physician practice, the implementation cost is estimated to be around USD 162,000, with USD 85,500 in maintenance expenses during the first year. It is also estimated that the HealthTexas network implementation team and the practice implementation team needed an average of 611 hours to prepare for and implement the electronic health record system, and that the end users — physicians, other clinical staff, and nonclinical staff — needed 134 hours per physician, on average, to prepare for use of the record system in clinical encounters.

The Opportunity

This has clearly opened up an opportunity to innovate. Despite slower-than-expected growth, the global market for EHR is estimated to have reached USD 22.3 billion by the end of 2015, with the North American market projected to account for USD 10.1 billion, or 47%, according to research released by Accenture (NYSE:ACN).

Although the worldwide EHR market is projected to grow at 5.5% annually through 2015, Accenture’s previous research shows that would represent a slowdown from roughly 9% growth during 2010. Despite the slower pace of growth globally, the combined EHR market in North and South America is expected to have reached USD 11.1 billion by the end of 2015, compared to an estimated USD 4 billion in the Asia Pacific region and USD 7.1 billion in Europe, the Middle East and Africa.

The Challenge

EHRs have the potential to improve outcomes and quality of care, yield cost savings, and increase patients’ engagement with their own healthcare. When successfully integrated into clinical practice, EHRs automate and streamline clinician workflows, narrowing the gap between information and action that can result in delayed or inadequate care. Although there is evolving evidence that EHRs can modestly improve clinical outcomes, one fundamental problem is that EHR systems were principally designed to support the transactional needs of administrators and billers, and less so to nurture the relationship between patients and their providers. Nowhere is this more apparent than in the ability of EHRs to handle unstructured, free-text data of the sort found in the history of present illness (HPI). Current EHR systems are not designed to capture the nature of the HPI — an open-ended interview eliciting patient input — instead summarizing the information as free text within the patient record. There are huge untapped opportunities to innovate by exploiting the HPI to execute care plans and to document a foundational reference for subsequent encounters. In addition, the HPI can be transformed directly by an AI-driven automated system — replacing the current manual model that relies on clinical coding specialists — into more structured data linked to payment and reimbursement.

“Although the market is growing, the ability of healthcare leaders to achieve sustained outcomes and proven returns on their investments pose a significant challenge to the adoption of electronic health records,” said Kaveh Safavi, global managing director of Accenture Health. “However, as market needs continue to change, we’re beginning to see innovative solutions emerge that can better adapt and scale electronic health records to meet the needs of specific patient populations as well as the business needs of health systems.”

In summary, with the adoption of EHRs came the challenge of data: finding the right information quickly and efficiently. In a typical 5-day hospital stay, many doctors and nurses work on the same patient, creating a huge amount of overlapping data; by the third day, it becomes almost impossible to get a clear picture of what is happening with a patient. The traditional EHR model is not effective in this setting.

Ripe for Innovation

One solution to the problem is to utilize human-augmented machine learning to generate an insightful, patient-specific narrative — especially in the case of in-patient encounters — to simplify all of this data. Such a system would use Natural Language Processing (NLP) to process the free-format text (“unstructured data”) stored within patient notes and aggregate it with the information located within the various tables and charts (the “structured data”). In a way, this is in line with the overall trend in work automation — the use of innovative technologies to facilitate the transition from paper-based records to electronic records — specifically for healthcare providers. NLP is one technology that can fundamentally change the way we interact with patient records and help improve clinical outcomes.

Let’s look a little more closely at the data captured within an EHR system. Within the EHR, data is captured in one of four ways: entering data directly (including templates); scanning documents; transcribing text reports created with dictation or speech recognition; and finally, interfacing data from other information systems such as laboratory systems, radiology systems, blood pressure monitors, or electrocardiographs. This captured data, in turn, can be represented in either structured or unstructured forms. Structured data is, by definition, created through constrained choices in the form of data entry devices such as drop-down menus, check boxes, and pre-filled templates. There are obvious advantages to this type of data format: it is easily searched, aggregated, analyzed, reported, and linked to other information resources. But it suffers from data compression and, more importantly, loss of context, making it unsuitable for individualization of the EHR and too fragmented for the kind of intelligent, holistic treatment that is possible with unstructured data.

Unstructured clinical data, on the other hand, exists in the form of free-text narratives. Provider and patient encounters are commonly recorded in free-form clinical notes. Free-text entries into the patient’s health record give the provider flexibility to note observations and concepts that are not supported or anticipated by the constrained choices associated with structured data. It is important to note that some data are inherently suitable for a structured format, while others are not. NLP can be a powerful tool in achieving this balance — some unstructured text narratives can be transformed into structured data, leaving other data in free-text format but with derived annotations and semantic analytics, making the EHR data model reflect real-life situations more closely.
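As a toy illustration of what “transforming unstructured text into structured data” might look like, here is a minimal Python sketch that pulls a couple of structured fields out of a hypothetical HPI-style note with regular expressions. Real clinical NLP pipelines are far more sophisticated; the patterns, field names, and example note are assumptions for illustration only.

```python
import re

# Illustrative only: extract a few structured fields from a hypothetical
# free-text note, leaving the rest of the narrative untouched.
def extract_structured_fields(note):
    fields = {}
    # Age and sex, e.g. "54-year-old female"
    m = re.search(r"(\d{1,3})-year-old (male|female)", note, re.IGNORECASE)
    if m:
        fields["age"] = int(m.group(1))
        fields["sex"] = m.group(2).lower()
    # Symptom duration, e.g. "for 3 days"
    m = re.search(r"for (\d+) (day|week|month)s?", note, re.IGNORECASE)
    if m:
        fields["duration"] = f"{m.group(1)} {m.group(2)}(s)"
    return fields

note = "Patient is a 54-year-old female presenting with cough for 3 days."
print(extract_structured_fields(note))
# → {'age': 54, 'sex': 'female', 'duration': '3 day(s)'}
```

The extracted fields become searchable and aggregatable, while the original narrative survives as annotated free text.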

Not the Silver Bullet

NLP is not a silver bullet, and clinical text poses significant challenges to it. This text is often ungrammatical, consists of bullet-point telegraphic phrases with limited context, and lacks complete sentences. Clinical notes make heavy use of acronyms and abbreviations, making them highly ambiguous. Word sense disambiguation also poses a challenge in extracting meaningful data from unstructured text. Clinical notes often contain terms or phrases that have more than one meaning. For example, discharge can signify either bodily excretion or release from a hospital; cold can refer to a disease, a temperature sensation, or an environmental condition. Similarly, the abbreviation MD can be interpreted as the credential for “Doctor of Medicine” or as an abbreviation for “mental disorder.” This underscores the need to understand and model the context more closely, and NLP practitioners are working towards solutions to these challenges.
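To make the disambiguation challenge concrete, here is a toy sketch that picks a sense of “discharge” by counting overlaps with hand-written context cues. The sense inventories below are invented for illustration; production systems learn these associations from annotated corpora with trained statistical models.

```python
# Toy word-sense disambiguation: score each candidate sense of an ambiguous
# clinical term by how many of its cue words appear in the context.
SENSES = {
    "discharge": {
        "bodily excretion": {"wound", "fluid", "purulent", "nasal"},
        "release from hospital": {"home", "instructions", "follow-up", "ready"},
    }
}

def disambiguate(term, context_words):
    scores = {
        sense: len(cues & set(context_words))
        for sense, cues in SENSES[term].items()
    }
    return max(scores, key=scores.get)  # sense with the most cue overlaps

ctx = "patient ready to go home with discharge instructions and follow-up".split()
print(disambiguate("discharge", ctx))  # → release from hospital
```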

One such solution is the standardization of medical language, such as the Unified Medical Language System (UMLS). It is a set of files and software that brings together many health and biomedical vocabularies and standards to enable interoperability between computer systems. We can use the UMLS to enhance or develop applications, such as electronic health records, classification tools, dictionaries and language translators. Specifically, the UMLS Metathesaurus, a repository of over 100 biomedical vocabularies, including CPT®, ICD-10-CM, LOINC®, MeSH®, RxNorm, and SNOMED CT®, is an excellent tool for standardizing this variation. Within the Metathesaurus, terms across vocabularies are grouped together based on meaning — forming concepts — allowing us to capture and account for the huge variations in language and expressions.
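Conceptually, this grouping amounts to a mapping from many surface forms to one concept identifier. The sketch below is illustrative only; the concept IDs are made up, not real UMLS CUIs, and a real system would query the Metathesaurus rather than a hand-written dictionary.

```python
# Metathesaurus-style grouping in miniature: many term variants map to one
# concept identifier. IDs are fabricated for illustration.
CONCEPTS = {
    "heart attack": "C0000001",
    "myocardial infarction": "C0000001",
    "mi": "C0000001",
    "hypertension": "C0000002",
    "high blood pressure": "C0000002",
}

def normalize(term):
    # Look up the lowercased term; unknown terms stay unmapped.
    return CONCEPTS.get(term.lower().strip(), "UNKNOWN")

print(normalize("Myocardial Infarction") == normalize("heart attack"))  # → True
```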

This obviously helps, but even such exhaustive approaches have their limitations. Given the nature of language itself, each individual concept is often assigned multiple semantic type categories from the UMLS Semantic Network, making the meaning context-sensitive. For example, within UMLS, 33.1% of abbreviations have multiple meanings. The presence of abbreviation ambiguity is even higher in clinical notes, at a rate of 54.3%. This makes subjectivity a big factor in understanding clinical notes — and makes it that much more difficult to derive actionable intelligence.

The Growing Market

Irrespective of these challenges, the NLP market is growing steadily and is forecast to keep growing for some time, as shown in the figure below.

According to a recent report, the NLP market for the healthcare and life sciences industry will be worth USD 2.67 billion by 2020. This report, titled “Natural Language Processing Market for Health Care and Life Sciences Industry by Type (Rule-Based, Statistical, & Hybrid NLP Solutions), Region (North America, Europe, Asia-Pacific, Middle East and Africa, Latin America) – Global Forecast to 2020”, defines and divides the NLP market into various segments with in-depth analysis and forecasting of revenues.

The global NLP market for health care and life sciences industry is expected to grow from USD 1.10 Billion in 2015 to USD 2.67 Billion by 2020, at a CAGR of 19.2%. In the current scenario, North America is expected to be the largest market on the basis of spending and adoption of NLP solutions for the healthcare and life sciences industry.
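The forecast arithmetic is easy to sanity-check: growing from USD 1.10 billion to USD 2.67 billion over five years works out to a compound annual growth rate of roughly 19%, in line with the quoted figure (small differences come from rounding of the endpoints).

```python
# Compound annual growth rate from the report's endpoints:
# USD 1.10B (2015) to USD 2.67B (2020), i.e. 5 years of growth.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

growth = cagr(1.10, 2.67, 5)
print(f"{growth:.1%}")  # roughly 19%, consistent with the reported 19.2%
```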

What Next?

The EHR is here to stay. Now is the time to innovate by introducing better ways to capture clinical data, better ways to interact with the data and better ways to use the data to improve clinical outcomes. NLP and machine learning are obvious candidates to make this happen. That is why investments are piling up in this area. Jorge Conde is Andreessen Horowitz’s newest general partner and leads the firm’s investments at the intersection of biology, computer science, and healthcare. He was recently asked: “… you were an undergrad biology at Johns Hopkins, but you have an MBA from Harvard and also worked as an investment banker at Morgan Stanley! How does that all add up?” Jorge’s answer was simple and to the point: “I went to finance to see if I could understand … what drives an industry, how does the operation actually work? But then I realized … that I wanted to build and do. And so … I did additional graduate work in the sciences at the medical school at Harvard and at MIT.” The article I am quoting from is aptly titled The Century of Biology. Computational biology, and by extension healthcare, is going to be the most exciting field of the 21st century, and we will need to build the tools to support it. Well, we better get to work!


Big Data, Machine Learning and Healthcare – An increasingly significant interplay

The field of healthcare is undergoing a revolution with the increasing adoption of technology – devices, sensors, software, insights and artificial intelligence. Unfortunately, buzzwords such as machine learning and Big Data have clouded this conversation. People throwing around these buzzwords do a major disservice to any real adoption of new technologies. And I am not the only one who is bothered by this; others are voicing their concerns too: Machine Learning – Can We Please Just Agree What This Means? Although machine learning and big data have become buzzwords, which these days carry a negative connotation, in this particular case they are making significant inroads in healthcare. What is Big Data, what is machine learning, and how are they changing healthcare?

Defining Big Data is like defining what life is – it depends on who you ask. The simplest and closest definition is attributed to Gartner: Big data is high-volume, high-velocity and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation. But not everybody agrees. According to The Big Data Conundrum: How to Define It, published by MIT Technology Review, Jonathan Stuart Ward and Adam Barker at the University of St Andrews in Scotland surveyed what the term means to different organizations, got very different results, and bravely finished their survey with a definition of their own: Big data is a term describing the storage and analysis of large and/or complex data sets using a series of techniques including, but not limited to: NoSQL, MapReduce and machine learning. One significant evolution of the term seems to be the marriage of the concept with its enabling technology or algorithmic framework, specifically databases, optimizing algorithms and machine learning.

So let’s now turn our focus to machine learning, which has similar problems. To me, machine learning is simply the process by which a computer can learn to do something. That something might be as simple as reading the alphabet, or as complex as driving a car on its own. Although explaining this to a non-technical audience is not easy, valiant efforts have been made by some lost souls, for example, Pararth Shah, ex-Stanford student currently at Google Research, in How do you explain Machine Learning and Data Mining to a layman?, and Daniel Tunkelang, a data scientist who led teams at LinkedIn and Google, in How do you explain machine learning to a child? How this learning can take place, however, is harder to explain. Before attempting that, let me clarify some other relevant technical jargon people may have thrown at you, such as AI, soft computing and computational intelligence. AI, which stands for Artificial Intelligence, is the generic study of how human intelligence can be incorporated into computers. Machine learning, a sub-area of AI, concentrates on the theoretical foundations and computational aspects of learning algorithms; techniques such as neural networks, fuzzy systems and evolutionary algorithms are also considered to belong to the fields of Computational Intelligence and Soft Computing. More simply, a machine-learning algorithm determines the relationship between a system’s inputs and outputs using a learning data set that is representative of the behavior found in the system, applying various data modeling techniques. This learning can be either supervised or unsupervised.
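To make the “inputs to outputs” idea concrete, here is a bare-bones sketch of supervised learning in plain Python, with no ML libraries: a learner fits a straight line to a tiny training set by least squares and then predicts an output for an input it has never seen. The data and the hidden rule behind it are, of course, made up for illustration.

```python
# Supervised learning in miniature: learn y ≈ a*x + b from (input, output)
# pairs using the closed-form least-squares solution.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Training data generated by the (hidden) rule y = 2x + 1
xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a * 10 + b)  # the model generalizes: predicts 21.0 for unseen x = 10
```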

The interesting reality is that, whether we are aware of it or not, machine learning based solutions are already part of our daily lives, so much so that the BBC thought it would be fun just to point it out in Eight ways intelligent machines are already in your life. Not surprisingly, one of the eight areas mentioned there is healthcare.

Now we come to the hard part of this discussion. There are numerous interplays between big data, machine learning and healthcare. Thousands of books are being written on it: my most recent search on Amazon for “machine learning” yielded 14,389 matches! Dedicated conferences on the topic are being organized. There are so many courses on it that David Venturi from Udacity was inspired to research and publish Every single Machine Learning course on the internet, ranked by your reviews. Virtually all healthcare startups are now expected to use some form of machine learning: VC funding to healthcare startups that use some form of AI increased 29% year-over-year to hit 88 deals in 2016, and is already on track to reach a 6-year high in 2017. A good starting point, however, is “Top 4 Machine Learning Use Cases for Healthcare Providers” by Jennifer Bresnick. She broadly identifies the following areas where machine learning has already made significant inroads: imaging analytics and pathology; natural language processing and free-text data; clinical decision support and predictive analytics; and finally, cyber-security and ransomware. If you are looking to get more specific, check out 7 Applications of Machine Learning in Pharma and Medicine by Daniel Faggella, where he identifies the applications of machine learning with the most forecasted impact on healthcare: disease identification/diagnosis, personalized treatment/behavioral modification, drug discovery/manufacturing, clinical trial research, radiology and radiotherapy, smart electronic health records and, finally, epidemic outbreak prediction. In reality, it is becoming harder and harder to find any area of healthcare that remains untouched by machine learning these days.

Why is this happening? It’s because we are realizing that machine learning has enormous potential to make healthcare more efficient, smarter and more cost-effective. My prediction is that in the future we will not talk about machine learning as a separate tool; it will become so ubiquitous that we will automatically assume it is part of a solution, much in the same way we no longer think of internet search as a separate tool but automatically assume it is available. The more important question is: now that the genie is out of the bottle, where will it end? Will completely autonomous artificial systems such as the famous Emergency Medical Hologram Mark I one day replace human doctors, as the creators of Star Trek imagined? Or will healthcare prove to be so complex that no machine can ever replace humans completely? Only time will tell.

When is a small sample size too small for statistical reporting?

It has been a fairly well-known assumption in statistics that a sample size of 30 is a so-called magic number for estimating distributions or statistical errors. The problem is that, firstly, according to Andrew Messing of the Center for Brain Science, Harvard University, like a lot of rule-of-thumb commonsense measures, this assumption does not have a solid theoretical basis to prove its veracity. Secondly, the number 30 is itself arbitrary, and some textbooks give alternative magic numbers of 50 or 20. Examples can be found in Is 30 the magic number issues in sample size estimation? or Shamanism as Statistical Knowledge: Is a Sample Size of 30 All You Need?, for example. The funny thing is that there is no formal proof that any of these numbers are useful, because they all rely on assumptions that can fail to hold true in one or more ways; as a result, the adequate sample size cannot be derived using the methods typically taught (and used) in the medical, social, cognitive, and behavioral sciences.

Like so many others before me, this got me thinking. One of my domains is healthcare data analytics, a field that is perpetually inundated with data. Should I test this rule of thumb and see if there is any truth to it?

Let’s set the background first. The data I was looking at centers around the treatment of Hepatitis C. The goal of Hepatitis C therapy is to clear the patient’s blood of the Hepatitis C virus (HCV). During this treatment, doctors routinely monitor the level of virus in the patient’s blood – a measurement known as viral load – typically in International Units per milliliter (IU/mL). When I was slicing and dicing the data using different criteria such as age, sex and genotype to report the effectiveness of treatment, the sample sizes of some of these cohorts were becoming too small. So what should the minimum size of my sample set be before I can confidently report a result?

Let’s look at a fairly simple mathematical model now. In this specific case, we assumed a t-distribution for our data. The t-distribution is almost engineered to give a better estimate of confidence intervals when the sample size is small. It looks very similar to a normal distribution: it has a mean, which is the mean of our sampling distribution, but it also has fatter tails. Normally, we find an estimate of the true standard deviation, and then we can say that the standard deviation of the sampling distribution is equal to the true standard deviation of the population divided by the square root of n, the sample size.

This is especially useful since we never, or at best seldom, know the true standard deviation. If we don’t know it, the best substitute we can put in there is our sample standard deviation.

We do not call this estimate a probability interval; rather, it is a “confidence interval” because we are making some assumptions. This confidence measure will change from sample to sample, and in particular, the expectation is that it will be a particularly bad estimate when we have a really small sample size.

Accordingly, we calculated confidence intervals for the data with the above procedure. If the size of the sample was more than a cut-off, say 30, we used Z-scores; otherwise we used the t-table. (I am assuming t-tables and Z-scores are outside the scope of this article.) This basically means that we first find the mean, then the standard deviation, and finally the standard error, which is equal to the standard deviation divided by the square root of the sample size. We then find the critical value either from the t-table or from the Z-score, as mentioned above. Finally, the adjusted range at a specific % confidence is equal to the mean +/- the margin calculated above.
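For the curious, the procedure above can be sketched in a few lines of Python using only the standard library. The small t-table here is truncated and purely illustrative (real code would use a statistics package, or at least a full table with interpolation); the sample data is made up.

```python
import math
from statistics import NormalDist, mean, stdev

# Truncated, illustrative 95% two-sided t-table: degrees of freedom -> t value.
T_TABLE_95 = {4: 2.776, 9: 2.262, 19: 2.093, 27: 2.052, 29: 2.045}

def ci_95(sample):
    n = len(sample)
    m, s = mean(sample), stdev(sample)
    se = s / math.sqrt(n)  # standard error = sample std dev / sqrt(n)
    if n > 30:
        crit = NormalDist().inv_cdf(0.975)  # Z-score, ≈ 1.96
    else:
        crit = T_TABLE_95[n - 1]  # df = n - 1; real code would interpolate
    return m - crit * se, m + crit * se  # mean +/- margin

low, high = ci_95([1, 2, 3, 4, 5])
print(round(low, 2), round(high, 2))  # → 1.04 4.96
```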

The graph below shows the results where the sample size was 1,720 patients.

The next graph shows the results where the sample size is 28.

In both of these cases, it appears that a sample size of around 30 gives us enough statistical confidence in the results we are presenting. In both cases, however, we are bound by the fact that comparing effectiveness across treatments will probably be best related to the size of the sample (cohort) itself, the closest metric being the utilization factor. For the calculated values within each category, however, we should be able to report the numbers with a prescribed confidence interval. In all the calculations presented above, that confidence interval was 95%.

So we picked a set of “arbitrary” healthcare data, and a sample size of around 30 turned out to be adequately large to generate dependable statistics. But the question remains: why?

The best rationale I have come across for why this is such a popular number was given by Christopher C. Rout of the University of KwaZulu-Natal, Department of Anesthetics and Critical Care, Durban, KwaZulu-Natal, South Africa. According to him, it is not that 30 is “enough”, but rather that we need “at least” 30 samples before we can reasonably expect an analysis based upon the normal distribution (i.e. the Z test) to be valid. That is, it represents a threshold above which the sample size is no longer considered small. It may have to do with the difference between 1/n and 1/(n-1): at about 30 (actually between 32 and 33) this difference becomes less than 0.001. So, in a way, the intuitive sense is that at or around that sample size, larger samples do not contribute much more to the probability distribution calculation, and the estimated error goes down to acceptable levels.
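That threshold is easy to check numerically: the gap between 1/(n-1) and 1/n, which simplifies to 1/(n(n-1)), does indeed fall below 0.001 between n = 32 and n = 33.

```python
# Where does the gap between 1/(n-1) and 1/n first drop below 0.001?
# Algebraically, 1/(n-1) - 1/n = 1/(n*(n-1)).
def gap(n):
    return 1 / (n - 1) - 1 / n

first_below = next(n for n in range(2, 100) if gap(n) < 0.001)
print(first_below, round(gap(32), 6), round(gap(33), 6))  # → 33 0.001008 0.000947
```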

Well, sounds about right to me.

Data interoperability

Data is fragile. Entering it is labor-intensive, checking it for quality is hard, and reusing it is even harder. Take the example of something as simple as tracking the progress of construction projects. These are fairly simple forms, with information fields such as project beginning date, estimated end date, percentage completion, resource allocation, funding status and so on. But in reality, the process is heavily manual. Forms are often handwritten, taking days to process since the data needs to be re-entered, often manually again, in electronic format. The form then gets disseminated, and the necessary action, such as the release of funds for the next stage of the project, can finally take place. Some of these large projects, although primarily owned by a contractor, are eventually carried out by hundreds of sub-contractors. Once a sub-contractor fails to deliver on deadline, the delay ripples through the entire ecosystem and often puts the overall project in jeopardy.

Now imagine that the data were tagged during submission using a technology like XBRL (eXtensible Business Reporting Language). Everyone in the value chain, from the sub-contractors who submit the data and track their own progress to the contractors who own the overall project, the federal agencies who often fund these projects and the banks who provide bonds, can read and consume this data within minutes. Decisions are taken in minutes too. This can result in millions of dollars of savings for all involved, not to mention efficiency and transparency in the status of the projects.
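To give a flavor of what machine-readable tagging buys you, here is an illustrative fragment in the XBRL style being consumed programmatically. The element name, namespace, and values are hypothetical, not taken from the actual XBRL-CET taxonomy, and the fragment is not schema-valid XBRL; the point is only that a tagged fact can be read by any party without re-keying.

```python
import xml.etree.ElementTree as ET

# A hypothetical XBRL-style fact: the value carries a machine-readable tag,
# a context and a unit, so downstream consumers need no manual re-entry.
instance = """
<report>
  <cet:PercentageComplete xmlns:cet="http://example.com/cet" contextRef="Q1"
      unitRef="pure">0.62</cet:PercentageComplete>
</report>
"""

root = ET.fromstring(instance)
fact = root.find("{http://example.com/cet}PercentageComplete")
print(fact.text, fact.get("contextRef"))  # → 0.62 Q1
```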

The first step towards this was taken today by a consortium of companies, under the leadership of USC Chico and with technical backing from Apurba, proposing an XBRL-CET (Construction-Energy-Transportation) taxonomy.

Compliance and analytics – Two sides of the same coin

Thanks to the US SEC, we now have tagged financial data freely available. What are we doing with it? Imagine this: someone has acquired the data (thanks to an accounting system), someone has prepared the data (courtesy of CFOs and their teams), someone has labeled the data (this time thanks goes to the person who did the XBRL tagging), someone made sure the labels are correct (the ever-suffering auditors get the credit this time) and finally someone made sure all this was done according to an established and monitored process (this time our thanks goes to the SEC EDGAR validation system). Wow! All we now have to do is figure out what model to use and what colors to apply in the graphs – we have analytics and visualization!

Data visualization – Pulling data from various data files from SEC filing

This all sounds too simplistic, right? That is because the picture I just painted is just that.

There are tons of issues hidden in this simple process. The devil is always in the details. What is the quality of this tagging? How much detail is in the financial tables versus hidden in the notes sections? How granular is this information? Is the information sufficiently functionally related so that a complete picture can be drawn? Is it possible to query a model built on this data? Does the model give us enough data points to forecast anything reliably? How much of the data is really tagged? How much of this tagging is consistent across multiple quarters? How consistent are different companies in tagging the same concept with the same element? How much of the data uses extended, customized tagging? All of these are valid questions and raise a lot of very legitimate issues.

But what is really the primary question we should ask? To me, that question is:

“Does this tagged data help us build better analytics than we could before tagged data was available?”

A company snapshot

As someone who has been working on data analytics for quite a number of years, I can safely say that the answer is an emphatic YES! Yes, there are problems. Yes, it is not always highly reliable or accurate, or sometimes even usable, but it is better than what we could have done previously. We now have tools that can build quick models, connect relevant data, compare performance and even make predictions. And this trend has not been completely missed either. Leena Roselli, Senior Research Manager at the Financial Executives Research Foundation, Inc. (FERF), recently authored a report titled “Data Mining with XBRL and Other Sources”, exploring some solutions that are just hitting the market, including I-Metrix (RR Donnelley), Ask 9W (9W Search) and Lyticas (Apurba). While we are still pioneering in the financial analytics and visualization space using XBRL as the primary source of data, the initial solutions are quite promising.

The bottom line is that this mandate has given us a golden opportunity to move from data mandate to data consumption, from the avoidance of punishment to generating deeper Business Intelligence. Join us in that voyage!


Copyright © 2019 The Data Dive
