Project Frontier: Shaping the Next Generation Hadoop Build Framework of Apache Bigtop


By Evans Ye, Yahoo Taiwan

As a mature Apache top-level project, Apache Bigtop has now been around for 6 years, serving as a critical component for building Hadoop distributions running in production. From on-premises deployments, to big data solution vendors, to cloud providers—Bigtop has been widely leveraged in the big data world.

Yet today that world is growing even more complex. Having started with only a handful of components (HBase, Hive, Pig, Oozie, etc.), the latest release of Bigtop now includes more than 30 components. To handle such complexity, developers need to make sure a patch won’t break components that are integrated together, and release engineers also need to ensure features are fully functional. This is why we initiated Project Frontier, funded by ODPi.

Project Frontier focuses on extending and hardening the feature that Bigtop was originally designed for: building Hadoop distributions. Bigtop can only produce high-quality distributions by working closely with upstream projects to solve integration problems across multiple Hadoop ecosystem projects.

Based on observations of the existing Bigtop build framework, we set the following goals for Project Frontier:

  1.  Provide a one-stop seamlessly integrated build pipeline
  2.  Document examples as reference implementations
  3.  Create better documentation for iTest, Smoke Tests, and other tooling

These goals all serve one core mission of Project Frontier: make Bigtop extremely friendly to use. The industry needs a simplified integration test framework for Apache Bigtop. We need a better solution for Apache Bigtop to work with other Hadoop ecosystem projects, with release and integration tests to ensure that versions of different projects work properly with one another.

For example, one scenario we’d like to support is that a developer can simply submit a commit SHA-1 containing a newly developed feature, and the framework will handle all the rest to craft an integration test report. That’s how simple it is.

Project Frontier Feature Preview

To tackle these ambitious goals, we will develop the features and functionality of Project Frontier in phases. The initial phase focuses on improvements to building components in Bigtop. Let’s preview a feature that will be available in the upcoming Bigtop 1.3 release. In Bigtop’s master branch, users will now be able to run the following commands from the Bigtop repository to build components.

Let’s take Hadoop as an example:

$ git clone https://github.com/apache/bigtop.git

$ cd bigtop

$ ./gradlew hadoop-pkg-ind

That’s it. Bigtop will take care of the full build environment and dependencies for you. The advantages of this new feature are:

  1.  It abstracts away the tedious setup work that previously required direct user attention
  2.  Gradle targets can now be chained, for example:

$ ./gradlew hadoop-pkg-ind docker-provisioner

which builds Hadoop and deploys it as a testing cluster.
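As a sketch of day-to-day use: the `hadoop-pkg-ind` and `docker-provisioner` target names come from the example above, and the generalization to a `<component>-pkg-ind` naming pattern for other components is an inference, not a guarantee—listing the Gradle tasks shows what is actually available in your checkout.

```shell
# Build Hadoop packages in an isolated, Docker-managed build environment,
# then deploy them as a containerized testing cluster in one invocation.
$ ./gradlew hadoop-pkg-ind docker-provisioner

# Other components are expected to follow the same <component>-pkg-ind
# pattern; list the available targets to see what your version supports.
$ ./gradlew tasks
```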

We’re still polishing the feature to support more customizations—for example, building packages with Nexus server support. Many more features are under development, so share your input and get involved. The Bigtop community welcomes all kinds of contributions, from code to documentation, testing, and discussion. Learn more by visiting our page on GitHub. Join us now and shape the way we build and integrate the big data ecosystem!

 

Evans Ye is a PMC member and former Chair of Apache Bigtop, and leads the Project Frontier initiative for ODPi. He works at Yahoo Taiwan to develop E-Commerce data solutions. Ye loves to code, automate things, and develop big data applications.           

Managing Privacy in the GDPR-era


 

Now that the EU General Data Protection Regulation (GDPR) is in full effect, businesses both large and small have made changes to be fully compliant, regardless of where they are located. The changes include more regulation of how companies collect data, how they store it, how they keep it safe from hackers, and how they use it in their day-to-day activities. Some people think of GDPR as ‘giving the power over data back to the user’. GDPR replaced old data privacy laws that were set up in 1995 and had been obsolete for some time.

But what does this mean for the consumer?

According to this Marketing Week article, consumers don’t understand how brands use their data. In fact, 48% of consumers still don’t understand where and how organizations use their personal data. This is up from 31% when the research was last conducted two years ago.

Only 7% feel they have a good understanding of how companies use their data, with 45% saying they “somewhat understand,” and just 18% believing businesses treat people’s personal data in an honest and transparent way.

This is where ODPi comes in. ODPi’s Data Governance initiative aims to create an open data governance ecosystem through collaboration with data governance subject matter experts and data platform and tools vendors. On Thursday, July 12, ODPi is hosting a webinar focused on managing privacy.

Mandy Chessell, distinguished engineer and master inventor at IBM, will share best practices for how IBM manages data that keeps individuals’ privacy respected and is compliant with new regulations on data privacy such as the EU GDPR.

Attendees will learn:

  • The life cycle of a digital service as it is developed, sold, enhanced and used. This life cycle breaks the work into six stages. Each stage describes the roles and the activities involved to ensure data privacy.
  • The types of artifacts that need to be collected about a digital service and the methods used to develop it.
  • How these artifacts link together in an open metadata repository (data catalog).

Click to learn more or to register for the webinar.

The state of open source and big data – three years later


Originally posted on DataWorks Summit blog

ODPi turns 3 this year, having first been announced at the spring Strata+Hadoop World and brought under the auspices of the Linux Foundation later in the year at the fall Strata+Hadoop World. Hadoop then turned 10 the following year, and seemed to be proclaimed dead, then alive, and then seemingly scrubbed from the world. One might think this meant the nail in the coffin for an organization centered on Hadoop standardization.

The Linux Foundation looks at open source projects in a life cycle, driven by the market needs. A common chart used to describe this is shown below.

In essence, open source foundations such as ODPi invest in developer communities, whose work enables accelerated delivery of new products to the marketplace and cost savings for R&D in organizations. As this produces profits for these organizations, they push investment back into the projects and foundations that support this work. In present-day open source parlance, this practice is known as “managing your software supply chain”. An active cycle here is able to react and adapt to market demands, as well as take inputs from all stakeholders—developers, implementers, administrators, and end users.

So, as ODPi started to hit its stride in 2016, we talked with people across the data landscape. From these conversations, we quickly saw that enterprise production adoption numbers for big data technology were skewed—mostly because of the lack of a solid definition. To better baseline the discussion, we came up with a maturity model for how big data technologies are adopted in the enterprise.

Using this model showed that in 2017, nearly three-quarters of organizations were still not deploying big data enterprise-wide. What’s blocking this? Data governance—a broad and under-invested area, but one growing more critical by the day as new regulations come into play alongside breakdowns in managing data privacy.

ODPi’s belief is that tackling such a broad issue as Data Governance can only be done with all members of the data ecosystem participating – platform vendors, ISVs, end users, and data governance and privacy experts. This collaboration can only happen in a vendor-neutral space, which is why ODPi has launched a PMC to solely focus on this space.

During DataWorks Summit Berlin, there will be numerous sessions and Meetups around this effort to help you learn more.

We will also be active in the community showcase, where you can chat directly with the experts in this area and learn how to participate in this effort.

Bringing it back to the original question—we are three years into this journey of creating sustainability in big data. We’ve had successes in reducing the number of disparate platforms and bringing market awareness to the issues enterprises face in adopting these tools. Now the community is poised to take the lessons learned and build a strong community around governance to solidify this practice. Are the challenges different than three years ago? Absolutely. However, the goal of enterprise adoption remains the same, and with that, we see that big data is becoming more mature, more inclusive, and building a more collaborative community.

The Rise of Big Data Governance: Strata Data Conference and DataWorks Summit Sessions, Webinar, RedGuide and More!


Each of today’s most forward-thinking enterprises has been forced to face similar data challenges: the reliance on real-time data to better serve their customers and, subsequently, the requirement of complying with regulations to protect that data, such as the EU’s General Data Protection Regulation (GDPR).

The ODPi Data Governance PMC is working to create a neutral, industry-wide approach to data governance. Together, its members are supporting the mission of creating an open data ecosystem through collaboration with subject matter experts and data platform and tools vendors.

Below please find upcoming speaking sessions, Meetups, webinars and a RedGuide meant to further the discussion and work of Data Governance.

March 6–8, 2018

Strata Data Conference

San Jose, CA

The rise of big data governance: Insight on this emerging trend from active open source initiatives

Speakers:

 Maryna Strelchuk (ING)

 John Mertic (ODPi)

Time: 1:50pm–2:30pm

Date: Wednesday, March 7, 2018

https://conferences.oreilly.com/strata/strata-ca/public/schedule/detail/64048

John Mertic and Maryna Strelchuk detail the benefits of a vendor-neutral approach to data governance, explain the need for an open metadata standard, and share how companies like ING, IBM, Hortonworks, and more are delivering solutions to this challenge as an open source initiative. The solution to this emerging challenge is a tricky one. For companies like ING, this data governance challenge has been met with metadata, a consistent view across a large heterogeneous ecosystem, and collaboration with an active open source community.

—————————-

April 16-19, 2018

DataWorks Summit

Berlin, Germany

The rise of big data governance: Insight on this emerging trend from active open source initiatives

Speakers:

 Ferd Scheepers (ING)

 John Mertic (ODPi)

https://dataworkssummit.com/berlin-2018/

Attendees will understand the role of metadata, the need for a cross-technology view on metadata, the role of Apache Atlas as a reference implementation, and the role of ODPi in offering value-added services, such as certification.

ODPi Data Governance PMC

Hosted by:

 Mandy Chessell (IBM)

https://dataworkssummit.com/berlin-2018/bofs/

This Birds of a Feather (BoF) session, hosted by IBM, ING, ODPi, and Hortonworks, will include discussions around the ODPi Data Governance PMC. Come and share your experiences, challenges, and future interests.

—————————-

April 26, 2018 at 9am PST/ 12pm EST

ODPi Webinar

Speakers: Mandy Chessell (IBM), John Mertic (ODPi)

Topic – Discussion of the IBM Redguide “The Journey Continues: From Data Lake to Data-Driven Organization”, an overview of the ODPi Data Governance PMC and a look at what’s to come this year.

Sign up here: https://www.odpi.org/projects/data-governance-pmc 

Check @ODPi on Twitter for details soon!

—————————-

Download Now!

The Journey Continues: From Data Lake to Data-Driven Organization

Written by Mandy Chessell (IBM), Ferd Scheepers (ING), Maryna Strelchuk (ING), Ron van der Starre (IBM), Seth Dobrin (IBM), and Daniel Hernandez (IBM)

http://www.redbooks.ibm.com/Abstracts/redp5486.html?Open  

This IBM Redguide™ publication looks back on the key decisions that made the data lake successful and looks forward to the future. It proposes that the metadata management and governance approaches developed for the data lake can be adopted more broadly to increase the value that an organization gets from its data. Delivering this broader vision, however, requires a new generation of data catalogs and governance tools built on open standards that are adopted by a multi-vendor ecosystem of data platforms and tools.

Work is already underway to define and deliver this capability, and there are multiple ways to engage. This guide covers the reasons why this new capability is critical for modern businesses and how you can get value from it.

ODPi Webinar on How BI and Data Science Gets Results


By John Mertic, Director of ODPi at The Linux Foundation

ODPi recently hosted a webinar on getting results from BI and Data Science with Cupid Chan, managing partner at 4C Decision, Moon soo Lee, CTO and co-founder of ZEPL and creator of Apache Zeppelin, and Frank McQuillan, director of product management at Pivotal.

During the webinar, we discussed the convergence of traditional BI and Data Science disciplines (machine learning, artificial intelligence, etc.), and why statistical and data science models can now run on Hadoop much more cost-effectively than a few years ago.

The second part of the webinar focused on demos of Jupyter Notebooks and Apache Zeppelin. These were important and relevant demos: data scientists use Jupyter Notebooks the most, while Apache Zeppelin supports multiple technologies, languages, and environments, making it a great tool for BI.

The inspiration for the webinar was the new Data Science Notebook Guidelines. Created by the ODPi BI and Data Science SIG, the guidelines help bridge the gap so that BI tools can sit harmoniously on top of both Hadoop and RDBMS, while providing the same, or even more, business insight to BI users who also have Hadoop in the backend. Download Now »

Additionally, webinar listeners asked detailed questions, including:

  • How can one transition from a bioinformatics developer to Data scientist in Bio-statistic?
  • Where do you see the future of both Jupyter and Zeppelin going? Are there other key data science challenges that need to be solved by these tools?
  • When do you choose to use one notebook over the other?
  • Can the 2 notebooks be used together?  i.e., can you create a Jupyter notebook and save it, then upload it into Zeppelin (or vice versa)?

Overall, the webinar was an insightful discussion on how we can achieve big data ecosystem integration in a collaborative way.

If you missed the webinar, Watch the Replay and Download the Slides.