The Data Dive

Month: April 2016

Big Data Analytics – Test Driven Platform Design – Early smoke testing

When we began our adventures in Spark, we soon brought up the topic of smoke testing.

So what’s smoke testing?

In my mind, smoke testing is making sure that our system doesn't break as soon as it's turned on.
The topic came to the foreground because, at the time, this was our first foray into this new world.

We had installed three critical components:

Apache Spark – Cluster Processing Framework
Cassandra – NoSQL Database
Hadoop – Storage and Cluster Processing Execution Framework

Okay, these days that's hardly earth-shattering in the Big Data Analytics world. In this world that trio is as common as fish and chips with mushy peas is in England. So I am not giving away any company secrets. This is good for me. However, if there are some Big Data newbies out there reading this post, this is a great combination.

Back to our question, though: how should we smoke test this? The main thrust of our Lyticas product is the handling of XBRL, which is a mixture of numerical and textual processing, and of stock price information, which comes as a time series.

To that end, we focused on testing how well our stack would respond to these two data format types:

1. Text – XBRL

2. Time Series Data

Both of the above use Spark Core functionality: we analysed the XBRL and retrieved statistical information from the time series data.

One other important part of our strategy was to test performance at the least optimal configuration possible. Kind of like finding out whether your starship can still maintain a stable warp field with a single nacelle.

Here is something else to ponder. What if your system gave an acceptable level of performance at the most basic configuration? For example, running on the minimum number of processing nodes, databases and servers, performance was still strong because of the quality of the code!
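To give a flavour of what those checks looked like, here is a minimal sketch using Spark's Java API. The file paths, the XBRL tag and the one-price-per-line file format are placeholder assumptions rather than our actual setup, and the master is pinned to a single local core to mimic the single-nacelle configuration.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaDoubleRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.util.StatCounter;

public class SmokeTest {
    public static void main(String[] args) {
        // "Single nacelle": one local core, no cluster - the least optimal configuration we could run.
        SparkConf conf = new SparkConf().setAppName("SmokeTest").setMaster("local[1]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // 1. Text - XBRL: load a filing as lines of text and count the facts we care about.
        //    (A crude stand-in for real XBRL parsing; the path and tag are hypothetical.)
        JavaRDD<String> xbrl = sc.textFile("hdfs:///smoke/filing.xbrl");
        long factCount = xbrl.filter(line -> line.contains("us-gaap:Revenues")).count();

        // 2. Time series: assume a file with one closing price per line; pull basic statistics.
        JavaDoubleRDD prices = sc.textFile("hdfs:///smoke/prices.csv")
                                 .mapToDouble(Double::parseDouble);
        StatCounter stats = prices.stats();

        System.out.println("XBRL facts found: " + factCount);
        System.out.println("Price mean: " + stats.mean() + ", stdev: " + stats.stdev());
        sc.stop();
    }
}
```

Even at this stripped-down setting, the same code runs unchanged against a full cluster, which is exactly the property the smoke test is probing.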

I will leave this blog with that bombshell.

Big Data Analytics – Illuminating dark data

In this blog post I am going to describe a scenario where a Big Data stack is introduced to provide a business advantage to an established information ecosystem. The company in this scenario is a biopharmaceutical firm, and the scenario involves dark data.

For years this company has invested in and maintained an enterprise content management system to store all research-related and operational documentation.

It's been working well for many years. You know when a system is well received: the user community uses it as simply as a kitchen appliance.

The only time it’s noticed is when there is a failure.

This particular system has reached an enviable operational record: in the last year, unscheduled downtime has been 35 minutes spread over the whole year. That's pretty good going for a system with 800 concurrent users and a total global community of 5,000 users.

This system supports the development of new drugs, speculative research, marketing, finance and building operations; in fact, almost everything.

As it's a single large repository, the security mechanism is incredibly granular, ensuring information is dished out on a need-to-know basis.

All looks well, but what lies beneath are some serious issues.

The escalating costs of running this system are becoming difficult to justify.

It's expensive to maintain. There are ongoing license, support, hardware and staff costs.

The knee-jerk reaction is to switch to a system that's cheaper to run.

Well let’s look at that for a moment.

To shift to an entirely new ecosystem is a massive cost in itself. It also carries a great deal of risk.

What if there is data loss? What if what is delivered has a poorer operational record?

IT ain’t stupid here. They know if they screw up, the scientists who create the value in this company will be out for their blood.

If upper management are prepared to deal with a couple of thousand scientists, that's fine. Like, who listens to geeks anyway?

However, when outages affect the pipeline of new drugs coming onto the market, that will affect the share price.

That will get senior management closer to the executioner's block! Which bit shall I chop off first?

So what are the alternatives to migrating to a new ecosystem?

Well, augment what you have already.

This company is at least lucky that their current stack is extensible.

You are able to bolt on other technologies that can leverage their existing repository.

So let's ask the question: "What additional features would your users like that you aren't offering?"

The quick answer is collaboration. They don’t have spaces where they can collaborate across continents.

I mean the ability to facilitate knowledge creation through a synthesis of joint document authoring, review, publishing and audio/video conferencing. Okay now we are going off track!

This isn’t the analytics problem we are looking for.

However, this is exactly what this company is investing in. They are doing it because it's going to bring some added value, and it's also something they can understand.

What I am proposing is something akin to sorcery! And it sends a shiver down my spine. I am not talking about the feeling you get reading about You-Know-Who in the Harry Potter world created by the amazing JK Rowling.

I am talking about the creepy feeling you get when reading Lovecraft or Crowley.

The bump in the night that freaks you out when reading "The Tibetan Book of the Dead".

I am talking about going after dark data!

The information that exists in large repositories but is inaccessible due to non-existent metadata. I am talking about metrics on the fluctuations in dark data, in as close to real time as we can get.

The term dark data is not new. Here is the Gartner definition.
Dark data is the information assets organizations collect, process and store during regular business activities, but generally fail to use for other purposes.

In the context of this biopharma company, dark data is content whose value goes unrealised.

For example, that graduate student's promising research that goes unnoticed.

If only a few of these ideas are realised, for a pharma company it could be the making or breaking of a new drug. It could literally be worth billions.

Dark data is often untagged, or at best the metadata applied to it gives no clue as to what the content relates to. So how do we get value?

We have to go in and retrieve the semantic meaning from the text. We need to extract the concepts and create a social graph.

Once we have that, we can bring the dark into the light and see what kind of information assets we have, who created them, when, and how dark data is distributed across our information repository.

Now the question is how? How can we do this?

This is where the tools we have been applying to Big Data analytics can help. We can trawl through vast quantities of information using cluster processing to power semantic meaning and concept extraction, then visualise what we have found and assist the data scientist in uncovering new value.
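To make that a little more concrete, here is a hedged sketch of what a first trawl might look like with Spark's Java API: read a directory of untagged documents, count crude term frequencies across the corpus, and surface the most frequent terms as candidate tags. Real concept extraction would put an NLP library behind this; the repository path, the length cut-off and the output size are all assumptions for illustration.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;
import java.util.List;

public class DarkDataTrawl {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("DarkDataTrawl");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // (filename, content) pairs for every untagged document in the repository (hypothetical path).
        JavaPairRDD<String, String> docs = sc.wholeTextFiles("hdfs:///repository/untagged/");

        // Crude "concept" extraction: split into words, drop short ones, count across the corpus.
        // (Spark 1.x Java API, where flatMap expects an Iterable.)
        JavaPairRDD<String, Integer> termCounts = docs
                .flatMap(doc -> Arrays.asList(doc._2().toLowerCase().split("\\W+")))
                .filter(term -> term.length() > 4)
                .mapToPair(term -> new Tuple2<>(term, 1))
                .reduceByKey((a, b) -> a + b);

        // The most frequent terms become candidate tags to hang richer metadata on.
        List<Tuple2<Integer, String>> candidates = termCounts
                .mapToPair(t -> new Tuple2<Integer, String>(t._2(), t._1()))
                .sortByKey(false)
                .take(20);

        candidates.forEach(t -> System.out.println(t._2() + " : " + t._1()));
        sc.stop();
    }
}
```

Counting words is obviously not semantics, but the shape is the point: the same pipeline can swap the word split for an entity or concept extractor and the counts for a graph of who wrote what, and when.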

That's the dream and it's not far off…

Big Data Definitions, Misconceptions and Myths

After all these years of being involved in 'Big Data', I have finally got around to writing this post, entitled "Big Data – Definitions, Myths & Misconceptions".

As I wrote that statement, I got the same feeling as if I had asked, "What is God? What is the meaning of life?". Such is the fervour and hype around this topic these days. There are countless books explaining how Big Data is already revolutionising our world. There are legions of companies saying that they are doing it.

But what does it mean?

Here is the trusted Gartner definition.

Big Data is high-volume, high-velocity and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation.

I am sorry, but I still don't feel much the wiser! However, as with the most profound Zen koans, the meaning is realised beneath the surface of the words.

So let's dive in!

HIGH-VOLUME, HIGH-VELOCITY and/or HIGH-VARIETY

To me these terms describe the characteristics of the information we are dealing with. Years before I worked in big data I worked in enterprise content management.

My clients were large multinational institutions that needed to store terabytes upon terabytes of data for purposes ranging from regulatory compliance to supporting business-critical operations. I suppose this is what comes to mind when I think of high volume.

The next is high velocity! To me that means the rate at which information systems are receiving and processing information. Consider an enterprise resource planning application, for example an airline reservation system or a large supermarket distribution centre. Information is being updated continuously by, in some cases, thousands of concurrent users.

The final term is high variety. Speaking from my enterprise content management background, this means the range of types of documents and content produced by a large institution. In many of the companies I consulted for, these were often unstructured Microsoft Office documents, PDFs, audio and video.

In addition to these documents, there was information in large databases (structured data, built against a schema) and information in XML format.

Then we had the semi-structured metadata. A wide variety of information types, data and formats.

Now that we have delved into the meaning beneath the surface of the words high-volume, high-velocity and/or high-variety, I am getting the feeling that, although I have just explored these characteristics from my experience of enterprise content management, they have actually been with us for a long while.

Consider institutions like the Library of Congress, the British Library or the Bibliothèque nationale de France. These are of course libraries with thousands upon thousands of books. Here the volume term is obvious: shelves as far as the eye can see. Variety is the range of topics, and velocity is the number of new publications coming in or books being tracked as they are borrowed.

If this were all Big Data was about, then it would feel like the same old, same old, but with modern branding.

So here is a MISCONCEPTION: Big Data is just about having lots of information, and it's a rebadging of technology and concepts we have been using forever. It's not, of course…

The second part of the Gartner definition takes that misconception apart, as it's about getting something useful from these collections of information.

From my experience in enterprise content management, that meant ensuring that the information could be retrieved after being stored.

Taking the library example, say I was looking for the book "War and Peace". Rather than spending the next hundred years trying to find it on one of the myriad shelves, it would be useful if all the books were tagged by title, subject, author and location. What we are doing, of course, is applying metadata to retrieve documents.

What if we were trying to ask another question: find me the book where the two characters Pierre Bezukhov and Natasha Rostova marry, and if they do, bring up the precise sections of the book where it happens!

We would need to do not only a full-text search but a natural language search too. This 'little' requirement brings with it a huge amount of work beneath the scenes to deliver it. Now add to this a demand that we bring the information back in less than four seconds.
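To make the distinction concrete, here is a toy Java sketch (the Book class, the catalogue and the snippets of text are entirely hypothetical): the title question is answered by a metadata index lookup, while the character question forces us to read the content itself.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LibrarySearch {

    static class Book {
        final String title;
        final String author;
        final String fullText;
        Book(String title, String author, String fullText) {
            this.title = title; this.author = author; this.fullText = fullText;
        }
    }

    public static void main(String[] args) {
        List<Book> catalogue = Arrays.asList(
                new Book("War and Peace", "Leo Tolstoy", "... Pierre Bezukhov ... Natasha Rostova ..."),
                new Book("Anna Karenina", "Leo Tolstoy", "...")
        );

        // Metadata retrieval: tag the books by title once, and the lookup is effectively instant.
        Map<String, Book> byTitle = new HashMap<>();
        catalogue.forEach(b -> byTitle.put(b.title, b));
        Book hit = byTitle.get("War and Peace");

        // Full-text question: which books mention both characters? No tag answers this;
        // we have to read the content, and at library scale that is the expensive part.
        long mentions = catalogue.stream()
                .filter(b -> b.fullText.contains("Bezukhov") && b.fullText.contains("Rostova"))
                .count();

        System.out.println(hit.title + " found by metadata; " + mentions + " full-text match(es)");
    }
}
```

Scale the catalogue up to a national library and layer natural language understanding on top of that text scan, and you can see where the huge amount of work beneath the scenes, and the sub-four-second demand, really bites.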

Let's take another example. I want to know the number of books borrowed at any given time, organised into fiction/non-fiction, topic and author.

I want to be able to learn something about the demographics of the borrowers. Now that's a question that uses an entirely different class of information asset.

Now what if I wanted to know this information to plan an advertising campaign or create a market for additional services?

Now we are talking Big Data!

Big Data is a place where we are no longer observing our systems but engaging with them to create value through gaining insight from the information being gathered every moment.

So what's the myth? That creating these systems is prohibitively expensive! Like imagining digitising the entire contents of a library. Well, since the beginning of this new millennium, this Herculean effort has been going on, and by now most of humanity's greatest literature has probably been digitised. Meanwhile, every new publication is born digital first! So it's been paid for already!

Here is another myth! That the use of advanced math for analysing trends and patterns in data, such as complex machine learning algorithms, is for university research labs or behind the closed doors of the likes of Google! That the use of advanced computing techniques such as cluster programming is beyond the reach of your coders.

All a big myth! Why?

Because in the last two years these techniques have been packaged and made accessible; they are now waiting to be leveraged. This leads to a new frontier.

What if Big Data could be about gaining insight from all the information we have locked away in our current information systems? In many large companies, information is dispersed across a variety of repositories. What if we could mine this information? What could we learn? Why not use the latest machine learning and cluster computing to do just that?

Apache Sparked!!

From around the autumn/fall of 2015, I went through some serious soul searching. Since around 2012, we had been using a well-established distributed data processing technology: MapReduce. Honestly, we were using it, but using it with a lot of effort in manpower to keep it running. I would describe MapReduce as being like a 1970s Porsche 911. It's fast and it does the job, but by heaven the engine is in the wrong place, and get it wrong and into the country hedge you go in a frightening tailspin.

The experienced technologists at my company weren't too keen on looking at alternatives. I am being completely frank here. They knew how to make it work. Like driving that 1970s Porsche 911, they knew when to come off the throttle and onto the brake, slow into corners and fast out. I could go on about racing metaphors. Yes, I am a Porsche enthusiast.

The rookies weren't keen on it at all. They would much prefer the latest 2015 Porsche 911. They wanted easy-to-use APIs, fast set-up and low maintenance. Keeping to the sports car metaphors, these rookies wanted traction control, GPS navigation, leather, an iPhone dock, Bluetooth: the works!

I had been hearing about Apache Spark, an alternative to MapReduce, one that promised greater ease of use, easier installation, better performance and more flexibility.

Honestly, it sounded too good to be true. It really did! MapReduce, developed at Google in the early 2000s and widely adopted since, had done the rounds. It had been fighting hard ever since and has a strong following in many companies.

I began by asking around. I canvassed opinions from people working with large data sets and needing a cluster programming solution. What I got back was suspicion of the new technology.

Finally we spoke with a few contacts working in Big Data in Silicon Valley. They said that Apache Spark was the new disruptive kid on the block and was packing quite a punch.

Apache Spark was developed at the University of California, Berkeley, as a response to shortcomings in cluster computing frameworks such as MapReduce and Dryad.

So here we had a number of conflicting opinions. What did we do?

We went ahead and tried it.

I have to say, we were not disappointed.

It was easy to install, and to our pleasant surprise we found it compatible with our existing Hadoop Distributed File System (HDFS). To our joy, it supported Amazon S3 and our beloved Cassandra NoSQL database. The subject of Cassandra is for another blog post!

Anyway, the above list of compatibilities came to our attention immediately once we started to get our hands dirty. The other thing that did was the support for Java and Maven.

However, the real surprise came when we started using Apache Spark…

The first thing we noticed was how easy it was to manage and process the data. Apache Spark's principal programming abstraction is the Resilient Distributed Dataset (RDD). Imagine, if you will, an array! That is essentially how an RDD appears to a programmer. What happens underneath is really interesting: the Apache Spark engine takes responsibility for processing the RDD across a cluster. The engine takes care of everything, so all you need to worry about is building and processing your RDD.
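To show that "array" feel, here is a tiny sketch against Spark's Java API; the numbers and the application name are purely illustrative.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import java.util.Arrays;

public class RddFeel {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("RddFeel").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // To the programmer this looks like working with an ordinary collection...
        JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5));

        // ...but each transformation is planned and executed by the Spark engine,
        // which distributes the partitions across the cluster for us.
        long evens = numbers.map(n -> n * n).filter(n -> n % 2 == 0).count();

        System.out.println("Even squares: " + evens);
        sc.stop();
    }
}
```

The same program, pointed at a cluster master instead of local mode, runs unchanged; that is what the engine "taking care of everything" buys you.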

We began our work with text documents. What we do as a company is to perform natural language processing on textual content. Very often this means parsing the document and then processing each line.

So in keeping with that we decided to create a simple text processing application with Apache Spark.

We installed Apache Spark on an Amazon cloud instance. We began by creating an application to load an entire text document into an RDD and apply a search algorithm to recover specific lines of text. It was a very simple test, but it was indicative of how easy the Apache Spark API is to use.
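In spirit, that first test looked something like the sketch below; the S3 path and the search term are placeholders rather than our actual data.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import java.util.List;

public class LineSearch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("LineSearch");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Load the whole document as an RDD of lines (hypothetical path).
        JavaRDD<String> lines = sc.textFile("s3n://example-bucket/filings/report.txt");

        // "Search algorithm" in its simplest form: keep the lines containing a term.
        String term = "revenue";
        List<String> matches = lines.filter(line -> line.toLowerCase().contains(term)).collect();

        matches.forEach(System.out::println);
        sc.stop();
    }
}
```

The whole thing is a handful of lines, which is roughly the point we were making about the API.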

We also noticed that, using the same RDD concept, we could work with real-time streams of data. There was support for machine learning libraries too.
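For flavour, here is a minimal Spark Streaming sketch, assuming text arriving on a local socket (the host, port and filter term are placeholders). Each micro-batch is handed to the code as an RDD, so the processing reads just like the batch examples above.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamSketch {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("StreamSketch").setMaster("local[2]");
        // Micro-batches of one second; each batch arrives as an RDD of lines.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

        JavaDStream<String> lines = jssc.socketTextStream("localhost", 9999);

        // The same transformation style as on a static RDD, applied to each batch.
        JavaDStream<String> alerts = lines.filter(line -> line.contains("ERROR"));
        alerts.print();

        jssc.start();
        jssc.awaitTermination();
    }
}
```

The machine learning libraries hang off the same RDD abstraction, which is why the feature list felt coherent rather than bolted on.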

From the start of our investigation to the present, we have become more and more convinced that Apache Spark was the right choice for us. Even our experienced MapReduce people have been converted, because it's easy to use, fast and has a lot more useful features.

Beyond the added features offered by Apache Spark, what struck me was the ability to operate in real time.

In our view, this presents an opportunity to move away from large-scale batch processing of historical data towards a new paradigm where we engage with our data like never before.

I for one am very keen to see what insights we can learn from this shift.

Copyright © 2017 The Data Dive
