
Predictable Hybrid Hadoop Blog Series – Crossing the Chasm


In the previous blog in this series, we outlined some of the important ways that running Hadoop in production – especially in enterprise-wide production – differs from point solutions and PoCs.

As a leading downstream community of big data vendors, users and platform providers, ODPi is focused on tackling the security, governance, lifecycle management and application portability needed to run Hadoop at scale.

A classic way to think about technology maturity is Geoffrey Moore’s Chasm model. In our new white paper, we plot key Hadoop milestones against the technology adoption curve (see image below) and argue that the things the ODPi community is focused on are essential to the continued adoption of this transformative technology.

An adaptation of Everett Rogers’ famous S-shaped diffusion of innovations curve, the Chasm model argues that users on the left of the chasm are fundamentally different from those on the right. The chasm separates users by adoption trigger/motivation: on the left it’s all about competitive advantage at nearly any cost, on the right it’s about continuity of operations and keeping up with the Joneses.

As awesome as this model is, it has sometimes been co-opted. One way this happens is by applying it to a Product when in fact it needs to apply to a Category. This is one reason we are so bullish about our work at ODPi – we explicitly acknowledge that the only way Hadoop and associated Big Data solutions can cross the chasm to mainstream adoption is by working together to define category-wide, NOT vendor-specific, answers to questions of lifecycle management, security and governance, and application portability – the things that address early- and late-majority users’ interest in stability and operational continuity.

When thinking about what it really means for a technology to be a platform, we like the way Sam Ghod puts it:

A platform abstracts away a messy problem so you can build on top of it. Platforms do this by delivering portability and extensibility.

With ODPi Releases 1.0 and 2.0 in place, we invited Application Vendors to self-certify that their applications work unmodified across multiple ODPi Runtime Compliant Hadoop distributions. As of this writing, twelve applications from leading vendors like SAS, IBM and DataTorrent have completed the self-certification.

We believe that savvy Enterprise CDOs, CIOs, CTOs and Chief Information Security Officers (CISOs) should carefully consider the platform independence that ODPi’s Interoperable Apps program delivers before making their Hadoop platform choices. If one of your preferred vendors isn’t listed either as an Interoperable App or as a Runtime Compliant Platform, let that vendor know that it matters to you.

In 2017, we’re heads down adding to our existing specifications and creating new workstreams through our Special Interest Groups. We invite you to get involved. If you are a Twitter user, be sure to follow @odpiorg and participate in our ongoing polls.

Looking at the latest Gartner Magic Quadrant for Business Intelligence and Analytics Platforms


By John Mertic

I spent some time reviewing the latest Gartner Magic Quadrant for Business Intelligence and Analytics Platforms in preparation for my time at the Gartner Data and Analytics Summit last week. Overall, I’m really excited to see vendors scoring higher in ‘Ability to Execute’; Gartner judges this toughly, so the general shift upwards is great to see.

While the piece is clearly targeted towards buyers of these tools, I wanted to take a critical eye to the positioning of vendors in relation to their interoperability with Big Data and Hadoop tools. After all, it was a mere decade ago that all of data was covered by a single Gartner analyst. Enter the age of Big Data; with its variability, velocity, and volume has come a cornucopia of products, strategies, and opportunities for answering the data question.

In the same way, BI and analytics have moved from being purely the realm of “data at rest” to becoming cohesive with “data in motion”. It’s no surprise, then, to see two “pure play big data” BI vendors, Datameer and ZoomData, joining ClearStory, which entered the MQ last year – cementing the enterprise production need for valuable data insights. And with a tip of the hat to the new breed of open source trailblazers such as Hortonworks, these vendors heavily leverage Hadoop and Spark not just as another data source but as a tool to better process data – letting them focus on their core competency of delivering business insights.

However, what really struck me was the positioning of data governance as a whole in this report – let’s dig into that more.

Data governance and discovery is being pushed farther out

If you compare the 2016 report to the 2017 report, you’ll immediately notice that this line from 2016…

By 2018, smart, governed, Hadoop-based, search-based and visual-based data discovery will converge in a single form of next-generation data discovery that will include self-service data preparation and natural-language generation.

…became…

By 2020, smart, governed, Hadoop/Spark-, search- and visual-based data discovery capabilities will converge into a single set of next-generation data discovery capabilities as components of modern BI and analytics platforms.

A two-year delay appearing in just one year is something of note – clearly there is a continuing gap in converging these technologies. This aligns with what our members and the end users on our User Advisory Board (UAB) mention as well – the lack of a unified standard here is hurting adoption and investment.

Governance no longer considered a critical capability for a BI vendor

This really stood out to me in light of the point above – it sounds like Gartner believes that governance will need to happen at the data source rather than at the access point. It’s a clear message that better data management needs to happen in the data lake – we can’t secure at the endpoints for true enterprise production deployment. This again supports the need for driving standards in the data security and governance space.

I recently sat down with IBM Analytics’ WW Analytics Client Architect Neil Stokes on our ODPi Member Conversations podcast series, and data lakes featured prominently in that discussion. To listen to this podcast, visit the ODPi YouTube channel.

I’m reminded of the H.L. Mencken quote, “For every complex problem there is an answer that is clear, simple, and wrong.” Data governance is hard, and it is never going to be something one vendor solves in a vacuum. That’s why I’m really excited to see the output of both our BI and Data Science SIG and our Data Security and Governance SIG in the coming month. Starting the conversation in the context of real-world usage, looking at both the challenges and opportunities, is the key to building any successful product. Perhaps this work could be the catalyst for smarter investment and value adds as these platforms continue to grow and become more mature.

Predictable Hybrid Hadoop Blog Series – DataOps Considerations From Lab to Enterprise-wide Production


In last week’s blog, The Hadoop Deployment Continuum, we covered how “in production” actually refers to a very diverse set of deployment scenarios. Anything from a PoC to a point solution, a departmental deployment, or an enterprise-wide rollout can be – and often is – called “production” use.

This blog focuses on the step-change DataOps requirements that come when you take Hadoop into enterprise-wide production.

As enterprises plan to scale Hadoop and Big Data out to enterprise-wide production, they face a number of challenges.

Table 1, taken from our recent White Paper, details how running Hadoop and Big Data at enterprise-wide production requires a significant re-think across multiple dimensions.

The good news is that these are the very same challenges that the ODPi big data community has been working on for over a year. Through our ODPi Compliance and Interoperable Apps programs, enterprises get stacks that are validated across a number of platforms, providing needed support for multi-vendor procurement policies. In the words of Gene Banman, CEO of ODPi member DriveScale: “Enterprises have varying big data needs that require flexible and interoperable platform components. Becoming a member of ODPi will allow us to better maximize data center efficiency for Hadoop with interoperability for enterprise-grade deployments.”

Our ongoing work to validate workloads across cloud environments promises to extend ODPi predictability even further.

From a lifecycle management perspective, our Application Installation and Management specification covers requirements and guarantees for custom service specifications and views. Importantly, this spec, like all ODPi specs, is developed in the open and guided by the ODPi Technical Steering Committee (TSC), which draws from across the Big Data industry. ODPi benefits from the involvement of end users, Hadoop platform providers, solution providers, and ISVs.

Last but certainly not least, our Special Interest Groups (SIGs) are looking into the following areas that are key to predictable enterprise-wide operations:

  1. Data security and governance
  2. BI and Data Science
  3. Spark and Fast Data Analytics

If these things matter to you, we invite you to get involved with any of these SIGs and/or join our Slack channel and work with us to co-create a predictable hybrid future for Hadoop.

Predictable Hybrid Hadoop Blog Series – The Hadoop Deployment Continuum


In working on the recent ODPi White Paper, a few things have come into much sharper focus for the team here.

The first is that “production” is a loaded term. Even though there is really good research from places like AtScale reporting that 73% of respondents run Hadoop in production, we think this term needs to be unpacked.

That’s why we worked across our community, including ODPi members and participants in our User Advisory Board, on this Enterprise Hadoop Deployment Continuum graphic.

The very simple idea here is to plot Hadoop deployments from the lab all the way to enterprise-wide production use and to lay out, at the gates between phases, the primary considerations Big Data teams review before taking the next step.

Many of the folks we talk to in our UAB, in our membership and at conferences agree that right now their Hadoop deployments are straddling the last gate: they are running Point Solutions (sometimes massive ones, with big business impact and huge volumes of data, but still focused on a single department or application) and looking to go Enterprise-wide. Some folks we’ve talked to even said they could put specific dates on this image for when Hadoop passed through these different phases. Can you?

It’s a very exciting juncture in the history of this amazing technology. Here at ODPi, we are squarely focused on collaborating as an industry to ensure the needed governance, security models and portability are in place to bring about predictable hybrid Hadoop.

In addition to our Runtime and Operations specifications and our ODPi Interoperable Applications program, we are also ushering in greater predictability through the work of our Special Interest Groups (SIGs), any of which we invite you to participate in:

  1. Data security and governance
  2. BI and Data Science
  3. Spark and Fast Data Analytics

These groups bring together downstream consumers of Hadoop and Big Data technologies (Hadoop Platform Vendors, ISVs/IHVs, Solution Providers, and End-users) to discuss and provide recommendations to our technical community on the key challenges and opportunities in each area. Participation doesn’t require code contribution – just the contribution of your insights and expertise on how to bring about predictable hybrid Hadoop for the larger Big Data world.

Inside Big Data said it well: “Enterprises that apply Big Data analytics across their entire organizations, versus those that simply implement point solutions to solve one specific challenge, will benefit greatly by uncovering business or market anomalies or other risks that they never knew existed.”  We couldn’t agree more.

The next blog in this series will contrast the operational considerations when running Hadoop in the lab or in limited production versus running it enterprise-wide.

Improving Production Hadoop: ODPi Member Conversation with Ampool


Last month, John Mertic sat down for our first ODPi Member Conversation podcast with Milind Bhandarkar, founder and CEO of Ampool.

The exciting discussion centered on the challenges production Hadoop deployments face and how to make the framework faster, easier and more productive.

Having spent the last 11+ years working with the various versions of Hadoop – starting at Yahoo!, where Hadoop was invented – Milind had some interesting context to share with podcast listeners.

After highlighting the changes the space has seen since Hadoop was first introduced to the world, he explained that today’s projects usually “depend on different projects or on different components in the Hadoop ecosystem.”

The importance of interoperability across these offerings – ensuring today’s software-defined companies are able to harness the full power of their data – cannot be overstated. John and Milind agreed that one of Hadoop’s biggest challenges in production has been ensuring that commercial distributions are compatible across multiple components and the applications that have been written to use those components.

To hear more of Milind and John’s expert insight, including more ways to improve production Hadoop, tune in to the episode on our YouTube channel!

Subscribe to our YouTube channel and follow us on Twitter to catch upcoming episodes of the ODPi Member Conversation podcast series!

2017 Predictions: What’s Next for Hadoop



By: John Mertic, Director of Program Management for ODPi

If you follow ODPi insight closely, you might remember these 2017 Big Data Predictions from our VP of Technology, Roman Shaposhnik. After the start of the new year, I started to think about what his predictions and emerging trends like Big Data’s “Push to the Cloud” might mean for our ecosystem – especially as it relates to the Hadoop landscape.

Last year, Apache Hadoop celebrated its tenth birthday. It was a milestone for the diaspora of the early team at Yahoo! that invented the technology, for the worldwide community, and for The Apache Software Foundation, which has shepherded the growing platform since its launch. However, this decade-iversary also showcased something less obvious than Hadoop’s staying power: it brought to light that the canonical state of Hadoop is breaking apart.

Over the last couple of weeks, I’ve spent a lot of time reading through Hadoop and Big Data landscape articles written in the past few years. The most popular conversation was clearly the expansion of the stack – meaning new projects for every possible nook and cranny of the space. Fast data? Check. 12 ways to perform a SQL slice and dice? Done. AI (artificial intelligence) and ML (machine learning) capabilities? Yup. To see what I mean, take a look at this enormous Hadoop Ecosystem Table – summarizing current Hadoop-related projects – here.

Traditionally, the role of Hadoop distribution providers within the ecosystem was to help make sense of a fast-changing and often-confusing landscape for customers. Showcasing their own preferred tools, distros gave the enterprise a stack of components that (more-or-less) worked well together – provided users stayed within confining application architecture walls. While this wasn’t ideal, it worked fairly well if enterprises were happy to stay in the “safe zone” their selected vendors laid out and could blissfully ignore other distros and solutions.

Though this may seem simple, the nature of deploying Big Data is quite varied. AtScale’s recent “Big Data Maturity” report found that 53% of respondents use cloud in their deployment but only 14% have all of their data in the cloud. Not to mention Tony Baer’s recent ZDNet article citing that Hadoop in the cloud is a varied product depending upon the provider – and not in the traditional sense of how Cloudera CDH differs from Hortonworks HDP. This rise of cloud brings into focus a fundamental shift emerging within the entire Big Data landscape.

If there is one overarching lesson the drive to PaaS and IaaS has taught us, it is the benefit of being lean. For example, you can throw more CPU, RAM and disk drives onto your on-premises environment with negligible cost increases; but for cloud instances, each addition counts against you quickly. Knowing this, the best cloud architectures include the ability to compartmentalize, identify focus areas of work and optimize each resource used – as wasting resources in the cloud has in-your-face cost ramifications.

Now combine Hadoop’s push to the cloud with the forced fiduciary responsibility of using cloud resources, and it’s quickly apparent that a traditional one-size-fits-all Hadoop distro is at natural odds – especially when that distro comes with a number of projects and tools that you’ve long-since outgrown.

My biggest prediction for 2017 is that the Hadoop of 2016 is going to become much more modular, special purpose and leaner than what is currently being shipped. We are already seeing these trends in the following ways:

  • IBM’s Watson Data Platform is centered around Spark – notice anything missing?
  • Cloud vendors are moving away from traditional HDFS and, instead, making their native object stores the data lake
  • Even traditional Hadoop distro vendors are recognizing this trend and launching offerings leveraging containers as a stopgap solution

This slow elimination of the one-size-fits-all ideal leads me to my second prediction: Hadoop and Big Data will no longer be discussed as their own beings – they’ll instead just be referred to as “Data.” I see this acknowledgment as the separation line between vendors who will be successful in 2017 and those who will not. Connecting the entire landscape story together, and speaking to customers about their data strategy vs. shiny new Hadoop or Big Data products, will separate this year’s data winners from its data losers.

My third prediction for Hadoop: ridding the marketplace of the “traditional Hadoop” baggage, and having the important conversations around data strategy, will let the needs of traditional businesses highlight the leading technologies in this space. While this may sound pretty obvious, try answering this: how many traditional businesses are bragging about the efficiency of their Hadoop/Big Data/Data solutions and strategies right now? Not many. However, these businesses know that in order to remain competitive they’ll need to become “data driven.” I think we’ll start seeing organizations drive their needs back to vendors like never before, and their successes will be much more prominently showcased. In other words, less focus on Amazon, Netflix and Facebook, and more narratives around companies like Progressive Insurance.

It’s a key year for Big Data as it crosses its biggest chasm yet, but as greater focus comes to this industry I think we’ll start seeing a noticeable push forward – setting up some even more impressive leaps in 2018 and beyond.

ODPi Community Lounge @ Apache Big Data Europe


Join the Discussion at the ODPi Community Lounge

Once again ODPi is sponsoring the Community Lounge at Apache Big Data Europe, November 14-16 in Seville, Spain. Apache project members and speakers are welcome to hold their meetings and after-session discussions there. This is a great way to have a deeper, more intimate conversation with fellow attendees, and to introduce new potential collaborators to your project.

Please choose a time on the Community Lounge Schedule for your topic or project. We’ll help promote your upcoming meeting. Be sure to tell your followers as well. Time slots are 30 minutes each and can be scheduled on a first-come, first-served basis.

ODPi Community Lounge – ApacheCon EU 2016

Discussion Schedule

Monday, November 14

10:30 – open
11:00 – open
11:30 – open
12:00 – open
12:30 – Apache Giraph (Roman Shaposhnik): Discussion session – Practical Graph Processing with Apache Giraph
13:00 – open
13:30 – 15:30 – Lunch
15:30 – Apache MADlib (Roman Shaposhnik, Pivotal): Distributed In-Database Machine Learning with Apache MADlib (incubating)
16:00 – Apache Geode (Greg Chase): Meet Apache Geode – recently graduated from the Apache Incubator
16:30 – open
17:00 – open

Tuesday, November 15

10:30 – open
11:00 – open
11:30 – open
12:00 – open
12:30 – open
13:00 – open
13:30 – 15:30 – Lunch
15:30 – Apache Bigtop & Greenplum Database (Greg Chase & Roman Shaposhnik): Discussion – Massively Parallel Data Warehousing in the Hadoop Stack
16:00 – open
16:30 – open
17:00 – open

Wednesday, November 16

10:30 – John Mertic (Director, ODPi and Open Mainframe Project, Linux Foundation): Keynote discussion – Lessons from the Trenches: How Apache Hadoop is Being Used and the Challenges Its Users Face
11:00 – ODPi (John Mertic): Discussion – Standardizing data governance across Hadoop distributions
11:30 – ODPi (Roman Shaposhnik and John Mertic): Discussion – Security in Hadoop
12:00 – ODPi (Roman Shaposhnik and John Mertic): Discussion – Streaming data in Hadoop
12:30 – ODPi (Roman Shaposhnik): Discussion – Hadoop Compatible File Systems across Hadoop distributions
13:00 – ODPi (Alan Gates): Discussion – Standardizing Hive in Hadoop distributions

End of conference

Is Your Data Clean or Dirty?


Over the weekend I read an incredible post from SAS Big Data evangelist Tamara Dull. I love her down-to-earth and real-life perspectives on Big Data, and her analogy of cleaning the car hit home for me. She is spot on – clean data pays dividends in being able to get better insights.

But, what is clean data? What is that threshold that says your data is clean versus dirty?

Could data even be “too clean”?

(pause to hear gasps from my OCD readers)

Clean data and clean houses

Taking this to a real-life example, I can say firsthand that there are often different definitions of what clean is. For example, my wife is very keen on keeping excess items off our kitchen counters, to the point where she’ll see something that doesn’t belong and put it in the first cabinet or drawer she encounters that has space for it. I, on the other hand, am big on finding what I believe is the right place for it. Both of us have the same goal in mind – get the counters clean.

To each of us, there’s value in our approach – efficiency. Hers is optimized at the front end, mine at the back end. However, the end result of either style of “cleaning” can have negative impacts (with my approach, it’s my wife not being able to find where I put something; with my wife’s method, it’s having items fall out of a cabinet on me as I open it).

Is “clean” to one person the same as it is to everyone else?

The life lesson above teaches something critical about data – clean isn’t a cut-and-dried threshold. And, taking a page from Tamara’s post, it’s also not a static definition.

The trap you can quickly fall into is thinking of data in the same terms as you would look at structured data. While, yes, part of the challenge is to understand what the data is and its relationships, the more crucial challenge is how you intend to consume the data and then use it. This is a shift from RDBMS thinking, which focuses on normalization and structure first and usage second. With the Big Data-esque ways of consuming and processing data (streaming, ML, AI, IoT) combined with velocity, variability, and volume, the use-case mindset is exactly where your focus should be.

A “use case first” approach is how we look at these technologies at ODPi. We look at questions like “Here is the data I have, and this is what I’m trying to find out – what is the right approach/tools/patterns to use?” and how they can be answered. We ensure all of our compliant platforms, interoperable apps, and specifications have the components needed to enable successful business outcomes. This gives companies peace of mind that they are making a safe investment, and that switching tools doesn’t mean their clean data becomes less than optimal to leverage the way they want.

This parallels the discussion of cleaning in our house – are we trying to clean up quickly because company is coming over, or are we trying to go through an entire room and organize it? Approaching data cleaning involves the same thought process.

ESG Whitepaper: ODPi Simplifies Apache Hadoop Application Development and Portability



Overview

Over the last decade, Apache Hadoop has generated many popular open source software projects, spawned a number of rapid-growth startups with commercial distributions and complementary products, and served as a reliable distributed data platform for analytics. As Apache Hadoop adoption continues to grow, the larger Hadoop ecosystem is expanding, too. However, some debate remains about the future direction of the technology.

In this paper, ESG Senior Analyst Nik Rouda discusses Apache Hadoop support from businesses, governments, academia, and technology vendors, and how this large and diverse community differs in its specific goals and objectives for harnessing the technology.

Rouda dives into how ODPi is helping to bring maturity and choice to the Hadoop ecosystem in several ways, offering:

  • More confidence that Hadoop will remain a safe data platform choice for companies.
  • Simplified application and compatibility testing for third-party software developers.
  • Vendor-neutral coordination of efforts between vendors to build synergies across their offerings.

Download this free report to learn more about simplifying Apache Hadoop application development and portability.

Join author Nik Rouda, ESG Senior Analyst, and ODPi Director John Mertic for a complimentary webinar on Monday, November 7 from 12-1 PM Eastern. All registrants will get a free copy of this valuable white paper.

Is Open Source Big Data a broken promise?


An article caught my eye this past week, in which Robert Hof of SiliconAngle asserted that the challenges of Apache Hadoop adoption are a byproduct of the open source development approach. Hof argues that the various pieces do not integrate well together and that some projects are not living up to their promises, which has resulted in organizations needing to do additional work to see their true value. This has led to a small pool of available talent and end customers that are uncertain about where to direct their investments.

On the heels of this article, I watched the video below from Rakesh Kant of US Bank, which I found just as insightful.


His sentiment rings loud and clear:

  • “I’m not seeing any signal, only noise.”
  • “The landscape is evolving into more experiments”
  • “A standard is required to help businesses”
  • “I’d like to focus time on delivering business value”

The Hadoop ecosystem has always been a technology-focused one, and it’s clear this technology has been groundbreaking and impactful. However, I do think that, over time, this technology has evolved to solve the needs of technologists. Enterprises have largely been left without a voice and have struggled to embrace it with confidence.

In my view, open source as a development model is not the problem. Rather, it’s the lack of feedback from end-users like US Bank into the process. ODPi would like to solve this problem and help end-users share their feedback.

If you are an end-user of Hadoop, we’d love to have you as part of our End User Advisory Board to discuss these issues and help us focus on making adopting these technologies less risky for you.