All Posts By John Mertic

The state of open source and big data – three years later

Originally posted on DataWorks Summit blog

ODPi turns 3 this year, having first been announced at the spring Strata+Hadoop World and brought under the auspices of the Linux Foundation later that year at the fall Strata+Hadoop World. Hadoop then turned 10 the following year, and seemed to be proclaimed dead, then alive, and then seemingly scrubbed from the world. One might think this was the nail in the coffin for an organization centered on Hadoop standardization.

The Linux Foundation looks at open source projects as part of a life cycle driven by market needs. A common chart used to describe this is shown below.

In essence, open source foundations such as ODPi invest in developer communities, whose work enables accelerated delivery of new products to the marketplace and R&D cost savings for organizations. As this produces profits for these organizations, they push investment back into the projects and foundations that support this work. In present-day open source parlance, this practice is known as “Managing your Software Supply Chain”. An active cycle here is able to react and adapt to market demands, as well as take input from all stakeholders – developers, implementers, administrators, and end users.

So, as ODPi started to hit its stride in 2016, we talked with people across the data landscape. From these conversations, we quickly saw that enterprise production adoption numbers for big data technology were skewed – mostly because of the lack of a solid definition. To better baseline the discussion, we came up with this maturity model for how big data technologies are adopted in the enterprise.

Using this model showed that in 2017, nearly three-quarters of organizations were still not deploying big data enterprise-wide. What’s blocking this? Data Governance, a broad and under-invested area, but one growing more critical by the day as new regulations come into play along with breakdowns in managing data privacy.

ODPi’s belief is that tackling an issue as broad as Data Governance can only be done with all members of the data ecosystem participating – platform vendors, ISVs, end users, and data governance and privacy experts. This collaboration can only happen in a vendor-neutral space, which is why ODPi has launched a PMC focused solely on this space.

During DataWorks Summit Berlin, there will be numerous sessions and meetups around this effort to help you learn more.

We will also be active in the community showcase, where you can chat directly with the experts in this area and learn how to participate in this effort.

Bringing it back to the original question – we are three years into this journey of creating sustainability in big data. We’ve had successes in reducing the number of disparate platforms and in bringing market awareness to the issues enterprises face in adopting these tools. Now the community is poised to take the lessons learned and build a strong community around governance to solidify this practice. Are the challenges different than they were three years ago? Absolutely. However, the goal of enterprise adoption remains the same, and with that, we see that big data is becoming more mature, more inclusive, and is building a more collaborative community.

ODPi Webinar on How BI and Data Science Gets Results

By John Mertic, Director of ODPi at The Linux Foundation

ODPi recently hosted a webinar on getting results from BI and Data Science with Cupid Chan, managing partner at 4C Decision; Moon soo Lee, CTO and co-founder of ZEPL and creator of Apache Zeppelin; and Frank McQuillan, director of product management at Pivotal.

During the webinar, we discussed the convergence of traditional BI and Data Science disciplines (machine learning, artificial intelligence, etc.), and why statistical/data science models can now run on Hadoop in a much more cost-effective manner than a few years ago.

The second part of the webinar focused on demos of Jupyter Notebooks and Apache Zeppelin. These were important and relevant demos: data scientists use Jupyter Notebooks more than any other tool, and Apache Zeppelin supports multiple technologies, languages, and environments, making it a great tool for BI.

The inspiration for the webinar was the new Data Science Notebook Guidelines. Created by the ODPi BI and Data Science SIG, the guidelines help bridge the gap so that BI tools can sit harmoniously on top of both Hadoop and an RDBMS, while providing the same, or even more, business insight to BI users who also have Hadoop in the back end. Download Now »
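
The guidelines themselves are the authoritative reference; purely as a rough illustration of that idea, here is a minimal sketch of the same BI-style query running against both an RDBMS and Hadoop. It assumes the Hadoop side is exposed through HiveServer2 via the PyHive client and uses an in-memory SQLite database to stand in for the RDBMS; the table name, column names, and host are made up. Because both backends speak the standard Python DB-API, the query function does not care which one it is talking to.

```python
import sqlite3

# Hedged assumption: the Hadoop side is reachable through HiveServer2 and
# the PyHive client (pip install "pyhive[hive]"); uncomment the import if
# you actually have such an endpoint.
# from pyhive import hive


def top_products_by_revenue(conn, table="sales", limit=5):
    """Run the same BI-style aggregate against any DB-API connection.

    Works unchanged whether `conn` points at an RDBMS or at Hive on
    Hadoop, because both expose the standard cursor interface.
    """
    cursor = conn.cursor()
    cursor.execute(
        f"SELECT product, SUM(revenue) AS total "
        f"FROM {table} GROUP BY product ORDER BY total DESC LIMIT {limit}"
    )
    return cursor.fetchall()


if __name__ == "__main__":
    # RDBMS backend: in-memory SQLite, used here just for illustration.
    rdbms = sqlite3.connect(":memory:")
    rdbms.execute("CREATE TABLE sales (product TEXT, revenue REAL)")
    rdbms.executemany("INSERT INTO sales VALUES (?, ?)",
                      [("widgets", 120.0), ("gadgets", 80.0)])
    print(top_products_by_revenue(rdbms))

    # Hadoop backend: same call, different connection (hypothetical host).
    # hadoop = hive.connect(host="hiveserver2.example.com", port=10000)
    # print(top_products_by_revenue(hadoop))
```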

Additionally, webinar listeners asked detailed questions, including:

  • How can one transition from being a bioinformatics developer to a data scientist in biostatistics?
  • Where do you see the future of both Jupyter and Zeppelin going? Are there other key data science challenges that need to be solved by these tools?
  • When do you choose to use one notebook over the other?
  • Can the two notebooks be used together? That is, can you create a Jupyter notebook and save it, then upload it into Zeppelin (or vice versa)? (A rough sketch of that idea follows this list.)
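
As a rough illustration of that last question (mine, not from the webinar): a Jupyter .ipynb file is plain JSON, so its cells can be re-emitted as Zeppelin-style paragraphs prefixed with an interpreter binding such as %python, and newer Zeppelin releases have been adding their own Jupyter import paths as well. The sketch below assumes nbformat 4 notebooks and targets only a minimal subset of Zeppelin’s note JSON; the file names are hypothetical.

```python
import json


def ipynb_to_zeppelin_paragraphs(ipynb_path, interpreter="%python"):
    """Extract cells from a Jupyter notebook (.ipynb is plain JSON,
    nbformat 4) and return them as minimal Zeppelin-style paragraph dicts."""
    with open(ipynb_path, encoding="utf-8") as f:
        notebook = json.load(f)

    paragraphs = []
    for cell in notebook.get("cells", []):
        source = "".join(cell.get("source", []))
        if not source.strip():
            continue
        if cell.get("cell_type") == "code":
            # Zeppelin paragraphs start with an interpreter directive.
            paragraphs.append({"text": f"{interpreter}\n{source}"})
        elif cell.get("cell_type") == "markdown":
            paragraphs.append({"text": f"%md\n{source}"})
    return paragraphs


if __name__ == "__main__":
    # Hypothetical input/output file names, for illustration only.
    note = {"name": "converted-note",
            "paragraphs": ipynb_to_zeppelin_paragraphs("analysis.ipynb")}
    with open("converted-note.json", "w", encoding="utf-8") as f:
        json.dump(note, f, indent=2)
```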

Overall, the webinar was an insightful discussion on how we can achieve big data ecosystem integration in a collaborative way.

If you missed the webinar, Watch the Replay and Download the Slides.

Looking at the latest Gartner Magic Quadrant for Business Intelligence and Analytics Platforms

By John Mertic

I spent some time reviewing the latest Gartner Magic Quadrant for Business Intelligence and Analytics Platforms in preparation for my time at the Gartner Data and Analytics Summit last week. Overall, I’m really excited to see vendors scoring higher in ‘Ability to Execute’; Gartner judges this toughly, so the general shift upwards is great to see.

While the piece is clearly targeted towards buyers of these tools, I wanted to take a critical eye to the positioning of vendors in relation to their interoperability with Big Data and Hadoop tools. After all, it was a mere decade ago that all of data was covered by a single Gartner analyst. Enter the age of Big Data; with its variability, velocity, and volume has come a cornucopia of products, strategies, and opportunities for answering the data question.

In the same way, BI and Analytics has gone from being purely the realm of “data at rest” to being cohesive with “data in motion”. It’s no surprise, then, to see two “pure play big data” BI vendors, Datameer and ZoomData, joining ClearStory, which entered the MQ last year – cementing the enterprise production need for valuable data insights. And, with a tip of the hat to the new breed of open source trailblazers such as Hortonworks, these vendors heavily leverage Hadoop and Spark not just as another data source but as a tool to better process data – letting them focus on their core competency of delivering business insights.

However, what really struck me was the positioning of data governance as a whole in this report – let’s dig into that more.

Data governance and discovery are being pushed farther out

If you compare the 2016 report to the 2017 report, you’ll immediately notice that this line from 2016…

By 2018, smart, governed, Hadoop-based, search-based and visual-based data discovery will converge in a single form of next-generation data discovery that will include self-service data preparation and natural-language generation.

…became…

By 2020, smart, governed, Hadoop/Spark-, search- and visual-based data discovery capabilities will converge into a single set of next-generation data discovery capabilities as components of modern BI and analytics platforms.

A two-year delay appearing in the span of a single year is something of note – clearly there is a continuing gap in converging these technologies. This aligns with what our members and the end users on our UAB mention as well – the lack of a unified standard here is hurting adoption and investment.

Governance no longer considered a critical capability for a BI vendor

This really stood out to me in light of the point above – it sounds like Gartner believes that governance will need to happen at the data source rather than at the access point. It’s a clear message that better data management needs to happen in the data lake – we can’t secure at the endpoints for true enterprise production deployment. This again supports the need to drive standards in the data security and governance space.

I recently sat down with IBM Analytics’ WW Analytics Client Architect Neil Stokes on our ODPi Member Conversations podcast series, and data lakes were a very present topic of discussion. To listen to this podcast, visit the ODPi YouTube channel.

I’m reminded of the H.L. Mencken quote, “For every complex problem there is an answer that is clear, simple, and wrong.” Data governance is hard, and it is never going to be something one vendor solves in a vacuum. That’s why I’m really excited to see the output of both our BI and Data Science SIG and our Data Security and Governance SIG in the coming month. Starting the conversation in the context of real-world usage, looking at both the challenges and the opportunities, is the key to building any successful product. Perhaps this work could be the catalyst for smarter investment and value adds as these platforms continue to grow and become more mature.

Is Your Data Clean or Dirty?

Over the weekend I read an incredible post from SAS Big Data evangelist Tamara Dull. I love her down-to-earth and real-life perspectives on Big Data, and her analogy of cleaning the car hit home for me. She is spot on – clean data pays dividends in being able to get better insights.

But, what is clean data? What is that threshold that says your data is clean versus dirty?

Could data even be “too clean”?

(pause to hear gasps from my OCD readers)

Clean data and clean houses

Taking this to a real-life example, I can say firsthand that there are often different definitions of what clean is. For example, my wife is very keen on keeping excess items off our kitchen counters, to the point where she’ll see something that doesn’t belong and put it in the first cabinet or drawer she encounters that has space for it. Me, on the other hand – I’m big on finding what I believe is the right place for it. Both of us have the same goal in mind – get the counters clean.

To each of us, there’s value in our approach – efficiency. Hers is optimized at the front end, mine at the back end. However, the end result of each of our “cleaning” methods could have negative impacts (with my approach, it’s my wife’s inability to find where I put something – with my wife’s method, it’s having items fall out of a cabinet on me as I open it).

Is “clean” to one person the same as to everyone else?

The life lesson above teaches something critical about data – clean isn’t a cut-and-dried threshold. And taking a page from Tamara’s post, it’s also not a static definition.

The trap you can quickly fall into is thinking of data in the same terms as you would structured data. While, yes, part of the challenge is to understand what the data is and its relationships, the more crucial challenge is how you intend to consume the data and then use it. This is a shift from the RDBMS mindset of focusing on normalization and structure first and usage second. With the Big Data-esque ways of consuming and processing data (streaming, ML, AI, IoT) combined with velocity, variability, and volume, the use-case mindset is exactly where your focus should be.

A “use case first” approach is how we look at these technologies at ODPi. We look at questions like “Here is the data I have, and this is what I’m trying to find out – what is the right approach/tools/patterns to use?” and how they can be answered. We ensure all of our compliant platforms, interoperable apps, and specifications have the components needed to enable successful business outcomes. This gives companies peace of mind that they are making a safe investment, and that switching tools doesn’t mean their clean data becomes less than optimal to leverage the way they want.

This parallels the discussion of cleaning in our house – are we trying to clean up quickly because company is coming over, or are we trying to go through an entire room and organize it? Approaching data cleaning involves the same thought process.

Is Open Source Big Data a broken promise?

An article caught my eye this past week, in which Robert Hof of SiliconAngle asserted that the challenges of Apache Hadoop adoption are a byproduct of the open source development approach. Hof argues that the various pieces do not integrate well together and that some projects are not living up to their promises, which has resulted in organizations needing to do additional work before they see their true value. This has led to a small pool of available talent and end customers that are uncertain about where to direct their investments.

On the heels of this article, I watched the video below from Rakesh Kant of US Bank, which I found just as insightful.

His sentiment rings loud and clear:

  • “I’m not seeing any signal, only noise.”
  • “The landscape is evolving into more experiments”
  • “A standard is required to help businesses”
  • “I’d like to focus time on delivering business value”

The Hadoop ecosystem has always been a technology-focused one, and it’s clear this technology has been groundbreaking and impactful. However, I do think that, over time, this technology has evolved to solve the needs of technologists. Enterprises have largely been left without a voice, struggling to embrace it with confidence.

In my view, open source as a development model is not the problem. Rather, it’s the lack of feedback from end-users like US Bank into the process. ODPi would like to solve this problem and help end-users share their feedback.

If you are an end-user of Hadoop, we’d love to have you as part of our End User Advisory Board to discuss these issues and help us focus on making adopting these technologies less risky for you.

My Experience at Global Big Data Summit: Discussing the Importance of Standards

I had a good day last week presenting to the audience at the Global Big Data Summit in Santa Clara. The tail end of the last day of any conference is a bit slow, but I was thrilled when many came barreling in right as I was ready to start working through my slide deck, which spoke to the importance of standards, like ODPi, in driving future investment in Big Data and Apache Hadoop.

I had one critical question after the talk that I thoroughly enjoyed answering. A gentleman pushed back on my point that standards need to be the focus. In his experience, staff training and education were the biggest concerns, and it didn’t make sense to focus on standards until a critical mass of developers and practitioners were properly trained first. It was a fair argument, and one that Gartner has also identified as a key blocker to Apache Hadoop growth, but to me it treats the symptom more than the core issue, and I pushed back, saying that standards enable better education and enablement. My point made sense to him, but I walked away wanting to discuss this more in a blog post with better data points behind it. After all, we are in the data industry here and should be data driven!

If there is one industry where standards are at the forefront, it’s education. Education standards are a very touchy subject (disclaimer – I’m a parent of four school-aged children and good friends with several educators), and while I’ll attempt to steer clear of their execution in this article, the concept of what they are trying to drive makes perfect sense. Do the skills a first grader has in one state equate with those in another state? What are reasonable benchmarks for defining competency? Can trends in learning/teaching methods and outcomes be better correlated?

I came across an interview with a leader in educational standards entitled “How and Why Standards Can Improve Student Achievement: A Conversation with Robert J. Marzano”. The interviewee offered some interesting insights that drew parallels to the critical question I received at the talk. Here are a few quotes from the interview and their relation to Apache Hadoop standards:

“Standards hold the greatest hope for significantly improving student achievement. Every other policy mandate we’ve tried hasn’t done so. For example, right after A Nation at Risk (Washington, DC: U.S. Department of Education, 1983) was published, we tried to increase academic achievement by making graduation requirements more rigorous. That was the first wave of reform, but it didn’t have much of an effect.”

This makes a great point – creating a measuring stick for competency without some sort of standard to base education on hurts more than it helps.

The interviewer goes on to ask what conditions are needed to implement standards.

“Cut the number of standards and the content within standards dramatically. If you look at all the national and state documents that McREL has organized on its Web site (www.mcrel.org), you’ll find approximately 130 across some 14 different subject areas. The knowledge and skills that these documents describe represent about 3,500 benchmarks. To cover all this content, you would have to change schooling from K–12 to K–22. Even if you look at a specific state document and start calculating how much time it would take to cover all the content it contains, there’s just not enough time to do it. So step one toward implementing standards is to cut the amount of content addressed within standards. By my reckoning, we would have to cut content by about two-thirds. The sheer number of standards is the biggest impediment to implementing standards.”

Lots and lots of content and skills to learn across a diverse set of subject areas, with a finite amount of time in which to turn out individuals competent in the space. Sound similar to the situation in the Apache Hadoop ecosystem?

The interviewer then follows up by asking how this can be done while knowledge continues to expand.

“It is a hard task, but not impossible. So far the people we’ve asked to articulate standards have been subject matter specialists. If I teach music and my life is devoted to that, of course I’m going to believe that all of what’s identified in the national documents is important. Subject matter experts were certainly the ones to answer the question, What’s important in your content area? To answer the question, What’s absolutely essential? you have to broaden that population dramatically to include all constituents—those with and without college degrees.”

This response aligns very well with the ODPi approach to creating Apache Hadoop standards. We aren’t in the business of creating a full, end-to-end, comprehensive standard for everything an Apache Hadoop platform should offer, or everything an Apache Hadoop-native Big Data application should adhere to; instead, we focus on what’s truly important to provide that base level – the essential pieces a platform should offer. And I particularly like the last point, “expanding the scope of the conversation around standards to get diverse opinions and experiences,” which is something ODPi is uniquely positioned to drive.

One last quote, which I think shapes the “Why?” behind this effort.

“Whether we focus on standards or not, we’re entering an era of accountability that has been created by technology and the information explosion.”

The enterprise has the same expectations – they want to lower the risks in Big Data investments, risks that are largely a byproduct of not having the staff to manage them. Fortune 500 executives need this in place to have any confidence in this technology, and the abysmal adoption rates have shown this to be a problem. In short, Apache Hadoop needs to be accountable for its enterprise growth.

ODPi Meetup Recap: “War Stories of Making Software Work with Hadoop”

Hadoop Summit is notorious for bringing together everyone who’s anyone in the Big Data world – and this year’s event, welcoming more than 4,000 attendees, was no different.

Not only was ODPi able to announce that five Apache™ Hadoop® distributions are officially ODPi Runtime Compliant, but we also hosted a meetup that centered on “War Stories of Making Software Work with Hadoop.”

Successfully migrating big data software to interoperate with one or more Apache™ Hadoop® releases requires unique engineering approaches and streamlined innovation. Our meetup discussed the importance and benefits of certifying compatibility between multiple Hadoop distributions. Those who have navigated this space for years without any true standardization shared their war stories.  

Attendees also heard from ODPi members hailing from big data software vendors and ISVs. The War Stories panel featured insights from Scott Gray, chief architect of IBM’s Open Platform for Apache Hadoop; Vineet Goel, principal product manager of Pivotal HDB & Hadoop at Pivotal; Paul Kent, VP of big data initiatives at SAS; and Smiti Sharma, principal engineer of big data and emerging technologies for EMC. These members have each ported their software to work with one or more Hadoop distributions.

They discussed technical challenges they overcame and why they believe ODPi will help simplify this for both end users and ISVs in the future.

After explaining to the room how their companies are committed to big data innovation and how their numerous technologies aid end users, Gray, Goel, Kent, and Sharma covered cross-organizational compatibility within the Hadoop space.

John Mertic’s first question to the panel was, “Before the concept of what ODPi is meant to deliver, what were the chief challenges you were running into?” (it can be found at the 28:50 mark).

In diving into this question – with answers that mostly centered on their experience and the difficulties of supporting multiple, disjointed distributions – the panelists made some insightful statements.

Gray of IBM set the stage for these pain points, noting, “Hadoop evolves at an incredible pace and there’s this never-ending tension between what the customers want… and distros [being] pressed to keep up with this evolution, and we have all these products trying to chase the distribution… It makes it incredibly, insanely expensive… It really was in our best interest to try to put a little sanity into the landscape.”

Goel applauded ODPi’s baseline specifications and explained Pivotal’s arduous journey of taking on a new distribution (around the 34:00 mark). Mertic commented: “I like how you said, ‘If we had the money back from supporting all these distros, imagine the innovation we could have…’ I think that’s a really powerful statement.”

After the panel kicked off an interactive Q&A with the engaged audience, an audience member asked for examples of the value proposition, for end users, of engaging with companies that are part of ODPi (starting after the 42:00 mark).

Sharma addressed this question, drawing on her experience in pre-sales: “You could benefit from being on an ODPi-compliant platform… if you want to have your application portable from Hadoop as an OS, it’s possible through being part of ODPi.”

“In the early days of Hadoop, you really did have to grow your own in-house talent,” said Kent, “but we’re entering the mature part of the lifecycle curve where there’s lots of customers that just want to pick it up and use it. They don’t really want to get into all these nuances. So the value of something like ODPi… will inevitably make a standardized path, where people can say ‘If you don’t go out of these lines, you’re pretty safe.’”

Catch a full recording of our meetup, centered on how ODPi fits into the Hadoop and Big Data ecosystem, here – and don’t forget to subscribe to our YouTube channel!

Hadoop Summit San Jose 2016 Wrap-up

We’re Making Good on our Pledge to Open the Big Data Ecosystem

As part of the industry convergence on San Jose, ODPi members and Linux Foundation staffers used Hadoop Summit to share our common commitment to grow Apache Hadoop and Big Data through a set of Specifications.

Photo: .@vVineet @ScottCGrayIBM @hornpolish & @smiti_sharma sharing “War Stories: Making Software Work w/ Hadoop”

Photo: @ODPiOrg booth at Hadoop Summit – those rocket footballs were a hit!

Photo: @IBMBigData booth before the show opened – can you find the ODPi Rocket?

Photo: @CaskData captured plenty of attention with their focus on Applications and Insights, not Infrastructure and Integration

Photo: @Altiscale ready for the rush of attendees looking for Big Data as a Service

It was terrific seeing ODPi members and sharing ideas at the conference. And the conference sessions couldn’t have been more on point. In the words of Ben Markham from ODPi member Xiilab:

I particularly loved the session about Apache Nifi and how to build a smart home, as this is related to Xiilab and also something I’d personally love to do. The sheer amount of data that needs to be processed in order to make an efficient smart home is amazing, and it speaks to why we’re all so passionate about this industry!

Before describing the significant milestone achieved at Hadoop Summit, let me first provide a short recap of ODPi’s progress to date.

ODPi published its first Runtime Specification in March to specify how HDFS, YARN, and MapReduce components should be installed and configured. The Runtime specification also provides a set of tests for validation to make it easier to create big data solutions and data-driven applications.

  • The purpose?
    Increased consistency for ISVs and end users when building on top of, integrating with, and running Hadoop.

  • Why?
    Because consistency around things like how APIs are exposed and where .jar files are located reduces engineering effort on low-value activities like maintaining compatibility matrices, so that more effort can go into building the features that customers care about (a rough sketch of that difference follows this list).
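
To make the compatibility-matrix point concrete, here is a minimal sketch (mine, not part of the spec) contrasting a hand-maintained, per-distro jar-path matrix with resolving the classpath through the standard `hadoop` launcher. It assumes `hadoop classpath` and `hadoop version` behave as they do in stock Apache Hadoop, which is exactly the kind of baseline behavior a runtime specification pins down; the distro names and paths in the matrix are made up.

```python
import shutil
import subprocess

# The old way: a hand-maintained compatibility matrix, one entry per distro
# and version, that breaks whenever a vendor moves its jars around.
# (Entries below are illustrative, not taken from any real distribution.)
DISTRO_JAR_PATHS = {
    "vendor-a-2.4": "/opt/vendor-a/hadoop/lib",
    "vendor-b-4.1": "/usr/lib/vendor-b/hadoop-client",
}


def hadoop_classpath():
    """Resolve the Hadoop client classpath the distro-agnostic way.

    Relies only on the standard `hadoop` launcher being on PATH and on
    `hadoop classpath` working as it does in stock Apache Hadoop."""
    if shutil.which("hadoop") is None:
        raise RuntimeError("no `hadoop` launcher found on PATH")
    return subprocess.check_output(["hadoop", "classpath"], text=True).strip()


def hadoop_version():
    """Return the first line of `hadoop version`, e.g. 'Hadoop 2.7.3'."""
    out = subprocess.check_output(["hadoop", "version"], text=True)
    return out.splitlines()[0]


if __name__ == "__main__":
    print(hadoop_version())
    print(hadoop_classpath()[:200] + "...")
```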

That’s the promise and commitment ODPi and its members made to the industry when we published the Runtime Spec.  

At Hadoop Summit, ALL FIVE ODPi members that ship Apache Hadoop distributions announced that they achieved ODPi Runtime Compliance.

Cool – so how exactly does that Open the Big Data Ecosystem?

Two of the Distros that achieved Runtime Compliance, Hortonworks and IBM Big Insights, collectively partner with several hundred of the biggest Big Data ISVs and IHVs.

Altiscale, a cloud Big Data-as-a-service company; Infosys, which supports many government clients around the world with its Hadoop distro and custom Big Data apps on top of it; and ArenaData, which is making a name for itself bringing Hadoop and Big Data to more Russian and Eastern European businesses, also achieved Runtime Compliance.

Thanks to ODPi, today ANY of the applications that run on Hortonworks or IBM Big Insights can, WITH SIGNIFICANTLY LESS UP FRONT AND ONGOING engineering cost, support Altiscale, ArenaData and Infosys.

Pivotal lit the way by describing on their blog how Pivotal HDB was installed on the ODPi reference implementation and on one of the ODPi Runtime Compliant distributions with no modifications to standard installation steps.

That’s called Opening the Big Data Ecosystem!

Now it’s your turn to show your support for an Open Big Data Ecosystem

Tweet why YOU think Hadoop and Big Data need standards.

Share a challenge you’ve faced, maybe an engineering effort that just took way longer than it should have, or a customer support ticket that by rights should have taken minutes but instead took hours.

Be sure to tag @odpiorg and include the hashtag #ODPi4Standards in your tweet and you’ll be entered to win one of TEN $25.00 Visa Gift cards. Read contest rules here.*

*Eligibility Criteria: 10 people, tweeting 7/14/2016 – 7/18/2016, with constructive #ODPi4Standards feedback + @ODPiOrg tag or RT will win a $25 Visa gift card.

Are you a fan of #BigData and from #Believeland? Come hear “A kid from Akron” talk about ODPi

Ok, maybe not the “Kid from Akron” you are thinking of 😉

I’m excited to be the featured speaker at this month’s Cleveland Hadoop User Group meeting. This is a very vibrant group, with a large array of developers, business users, and students who are looking to better understand the Big Data and Hadoop landscape in the Northeast Ohio area.

When I talk about tech in the Midwest, people often roll their eyes at me. But let me tell you, Big Data is huge in this area. Progressive Insurance had two keynote speakers at last month’s Hadoop Summit who dug into the technology behind predictive analytics for the insurance industry. And Explorys, a spinoff startup from the Cleveland Clinic that was acquired last year by IBM to lead its Watson Healthcare initiatives, is driving innovation in delivering personalized healthcare and great insight into disease prevention and cures. It’s the combination of foundational industries and new technology that is making former Rust Belt mainstays evolve into hot spots for technical innovation.

My talk will be focused on giving the audience a glimpse into the challenges in the Apache Hadoop market and how ODPi is looking to be an enabler for growing a large-scale, yet easier-to-engage-with, ecosystem that benefits downstream consumers of this technology. And I’m looking forward to connecting more with local members of this community to learn how they are using Apache Hadoop and how ODPi can make those investments go farther and make their businesses more effective.

If you are in the Cleveland area or will be in town to visit what this great city has to offer, then come to the meetup on Monday, July 25, 2016 at 5:00 PM at the Progressive offices in Mayfield Village, OH (map). Worst case, you’ll have some excellent pizza :-).