The Data Dive

Author: Fuad Rahman

Data interoperability



Data is fragile. Entering it is labor intensive, checking it for quality is hard, and reusing it is even harder. Take something as simple as tracking the progress of construction projects. The forms are fairly simple, with fields such as project start date, estimated end date, percentage completion, resource allocation, and funding status. But in reality the process is heavily manual. Forms are often handwritten and take days to process, since the data has to be re-entered, often manually again, in electronic format. Only then can the form be disseminated and the necessary action, such as the release of funds for the next stage of the project, finally take place. Some of these large projects, although primarily owned by a single contractor, are ultimately carried out by hundreds of sub-contractors. When one sub-contractor misses a deadline, the delay ripples through the entire ecosystem and often puts the overall project in jeopardy.

Now imagine that the data were tagged at submission using a technology like XBRL (eXtensible Business Reporting Language). Everyone in the value chain, from the sub-contractors who submit the data and track their own progress, to the contractors who own the overall project, to the federal agencies who often fund these projects and the banks who provide bonds, can read and consume this data within minutes. Decisions can be taken in minutes too. That translates into millions of dollars of savings for everyone involved, not to mention far greater efficiency and transparency about the status of each project.
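To make the tagging idea concrete, here is a minimal sketch of what a machine-readable project-status report might look like and how any party in the chain could consume it. The cet: element names are hypothetical placeholders, not actual elements from the proposed XBRL-CET taxonomy, and a real XBRL instance carries considerably more context than this.

```python
# Minimal sketch: consuming hypothetical XBRL-style project-status facts.
# The cet: element names are illustrative placeholders, not real taxonomy elements.
import xml.etree.ElementTree as ET

INSTANCE = """
<report xmlns:cet="http://example.com/cet/2017">
  <cet:ProjectStartDate contextRef="Q3">2016-04-01</cet:ProjectStartDate>
  <cet:EstimatedEndDate contextRef="Q3">2018-09-30</cet:EstimatedEndDate>
  <cet:PercentageCompletion contextRef="Q3" unitRef="pure">0.42</cet:PercentageCompletion>
  <cet:FundsReleased contextRef="Q3" unitRef="USD">1250000</cet:FundsReleased>
</report>
"""

# Parse once; every consumer (sub-contractor, contractor, agency, bank) reads
# the same facts in seconds, with no manual re-entry.
root = ET.fromstring(INSTANCE)
facts = {elem.tag.split("}")[1]: elem.text for elem in root}
print(facts)

# A downstream decision becomes a one-liner: flag projects behind schedule
# before releasing the next tranche of funds.
if float(facts["PercentageCompletion"]) < 0.5:
    print("Under 50% complete - review before releasing next-stage funds.")
```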

The first step in this direction was taken today by a consortium of companies, led by USC Chico with technical backing from Apurba, which has proposed an XBRL-CET (Construction-Energy-Transportation) taxonomy.

Compliance and analytics – Two sides of the same coin

Thanks to the US SEC, we now have tagged financial data freely available. What are we doing with it? Imagine this: someone has acquired the data (thanks to an accounting system), someone has prepared it (courtesy of CFOs and their teams), someone has labeled it (this time thanks go to whoever did the XBRL tagging), someone has made sure the labels are correct (the ever-suffering auditors get the credit here), and finally someone has made sure all of this was done according to an established, monitored process (for that we thank the SEC EDGAR validation system). Wow! All we have to do now is figure out which model to use and what colors to apply to the graphs, and we have analytics and visualization!

Data visualization – pulling data from various data files in SEC filings
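In the spirit of that snapshot, here is a minimal sketch of pulling tagged facts out of SEC EDGAR programmatically. It assumes the data.sec.gov company-facts JSON endpoint; the CIK and the us-gaap concept name are only examples, and which concepts are populated varies from filer to filer.

```python
# Minimal sketch: pulling tagged XBRL facts for one company from SEC EDGAR.
# Assumes the data.sec.gov company-facts endpoint; CIK and concept are examples.
import requests

CIK = "0000320193"  # example CIK, zero-padded to 10 digits
URL = f"https://data.sec.gov/api/xbrl/companyfacts/CIK{CIK}.json"
HEADERS = {"User-Agent": "your-name your-email@example.com"}  # SEC asks for a contact

company = requests.get(URL, headers=HEADERS, timeout=30).json()

# Walk one concept's reported values across filings (concept names vary by filer).
concept = company["facts"]["us-gaap"].get("Revenues", {})
for fact in concept.get("units", {}).get("USD", [])[:5]:
    print(fact["end"], fact["val"], fact["form"])
```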

This all sounds too simplistic, right? That is because the picture I just painted is exactly that: too simple.

There are plenty of issues hidden in this simple process; the devil is always in the details. What is the quality of the tagging? How much detail is in the financial tables versus buried in the notes sections? How granular is the information? Is it sufficiently related, functionally, that a complete picture can be drawn? Is it possible to query a model built on this data? Does the model give us enough data points to forecast anything reliably? How much of the data is actually tagged? How much of the tagging is consistent across quarters? How consistent are different companies in tagging the same concept with the same element? How much of the data uses custom extension tagging? All of these are valid questions, and they raise a lot of very legitimate issues.
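Some of these questions are easy to start probing programmatically. The sketch below runs two such checks, tagging consistency across quarters and the share of custom extension elements, over a made-up set of facts; the element names and values are purely illustrative.

```python
# Minimal sketch of two quality checks over a toy set of tagged facts.
# Element names and values are made up for illustration.
facts_by_quarter = {
    "2016Q4": {"us-gaap:Revenues": 120.0, "us-gaap:NetIncomeLoss": 14.0},
    "2017Q1": {"us-gaap:SalesRevenueNet": 131.0, "us-gaap:NetIncomeLoss": 15.2},
    "2017Q2": {"us-gaap:SalesRevenueNet": 135.5, "acme:AdjustedNetIncome": 16.0},
}

# 1) Consistency: is the same concept tagged with the same element every quarter?
revenue_elements = {
    el for facts in facts_by_quarter.values() for el in facts if "Revenue" in el
}
if len(revenue_elements) > 1:
    print("Inconsistent revenue tagging across quarters:", revenue_elements)

# 2) Extensions: what share of elements are custom (non us-gaap) extensions?
all_elements = [el for facts in facts_by_quarter.values() for el in facts]
extensions = [el for el in all_elements if not el.startswith("us-gaap:")]
print(f"Custom extension elements: {len(extensions)} of {len(all_elements)}")
```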

But what is really the primary question we should ask? To me, that question is:

“Does this tagged data help us build better analytics than we could before tagged data was available?”

A company snapshot

As someone who has worked on data analytics for quite a number of years, I can safely say that the answer is an emphatic YES! Yes, there are problems. Yes, the data is not always reliable, accurate, or even usable, but it is better than what we had before. We now have tools that can build quick models, connect relevant data, compare performance, and even make predictions. And this trend has not gone unnoticed either. Leena Roselli, Senior Research Manager at the Financial Executives Research Foundation, Inc. (FERF), recently authored a report titled “Data Mining with XBRL and Other Sources” that explores some of the solutions just hitting the market, including I-Metrix (RR Donnelley), Ask 9W (9W Search) and Lyticas (Apurba). While we are still pioneering in the financial analytics and visualization space with XBRL as the primary data source, the initial solutions are quite promising.
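As a flavor of what “quick models” means in practice, the sketch below fits a naive linear trend to a toy series of tagged quarterly revenues and projects the next quarter. The figures are invented and the method is deliberately simplistic; it is not meant to represent how the products named above work.

```python
# Minimal sketch: a naive trend forecast from a toy series of quarterly revenues.
# Figures are invented; real models would be far more careful than a straight line.
import numpy as np

quarters = np.arange(8)                      # 8 historical quarters
revenue = np.array([100, 104, 103, 110, 114, 118, 117, 124], dtype=float)

slope, intercept = np.polyfit(quarters, revenue, deg=1)  # least-squares line
next_quarter = slope * len(quarters) + intercept
print(f"Projected next-quarter revenue: {next_quarter:.1f}")
```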

The bottom line is that this mandate has given us a golden opportunity to move from data mandate to data consumption, from avoiding punishment to generating deeper business intelligence. Join us on that voyage!

 
