Implementing an Open Metadata Connector


Eager to integrate your own metadata repository into the Egeria ecosystem, but not sure where to start? This article walks through how to do just that: implementing an open metadata repository connector according to the standards of ODPi Egeria.

The following outlines the steps involved:

  1. Design work
  2. Pre-requisites
  3. Implement the repository connector
  4. Package your connector
  5. Start up the OMAG Server Platform with your connector
  6. Test your connector’s basic operations
  7. Add the event mapper connector
  8. Test your connector’s conformance
Integrating a metadata repository into the Open Metadata ecosystem involves coding an Open Metadata Collection Store Connector. These are Open Connector Framework (OCF) connectors that define how to connect to and interact with a metadata repository.

Open Metadata Collection Store Connectors are typically composed of two parts:

  1. The repository connector, which provides a standard repository interface that communicates using the Open Metadata Repository Services (OMRS) API and payloads.
  2. The event mapper connector, which captures events when metadata has changed in the metadata repository and passes these along to the Open Metadata Repository Services (OMRS) cohort.

The event mapper connector often calls the repository connector to translate repository-native events into Egeria’s OMRS events.

While various patterns can be used to implement these, perhaps the simplest and most loosely-coupled is the adapter. The adapter approach wraps the proprietary interface(s) of the metadata repository to translate these into OMRS calls and payloads. In this way, the metadata repository can communicate as if it were an open metadata repository.
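The wrapping the adapter performs can be sketched in plain Java. Everything below is illustrative: NativeTerm stands in for whatever object your repository’s client returns, and the Map-based result stands in for Egeria’s EntityDetail bean, which a real connector would build instead.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for an object returned by a proprietary repository client.
class NativeTerm {
    String id;
    String label;
    NativeTerm(String id, String label) { this.id = id; this.label = label; }
}

// The adapter wraps the proprietary object and re-expresses it in open metadata
// terms. A real connector would build Egeria's EntityDetail bean rather than a Map.
class GlossaryTermAdapter {
    Map<String, Object> toOpenMetadata(NativeTerm term) {
        Map<String, Object> entity = new HashMap<>();
        entity.put("type", "GlossaryTerm");    // open metadata type name
        entity.put("guid", term.id);           // repository identifier becomes the GUID
        entity.put("displayName", term.label); // native property -> open metadata property
        return entity;
    }
}

class AdapterSketch {
    public static void main(String[] args) {
        Map<String, Object> e = new GlossaryTermAdapter().toOpenMetadata(new NativeTerm("t1", "Employee"));
        System.out.println(e.get("type") + ": " + e.get("displayName")); // prints "GlossaryTerm: Employee"
    }
}
```

The rest of the walkthrough is essentially about building this translation layer out in both directions, using Egeria’s real base classes and beans.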

The remainder of this article will walk through:

  • implementing such an adapter pattern as a connector, and
  • using the resulting connector through the proxy capabilities provided by the core of Egeria.

1. Design work

Designing before implementing

Before delving straight into the implementation of a connector, start with some design work. Fundamentally this involves two steps:

  1. Mapping to the meta-model concepts of Egeria: in particular Entities, Classifications and Relationships.
  2. Mapping to the actual open metadata types of Egeria: e.g. GlossaryTerm, GlossaryCategory, RelationalColumn, and so on.

Map to the Egeria meta-model concepts

The best place to start with the design work is to understand the meta-model of Egeria itself. Consider how your metadata repository will map to the fundamental Egeria metadata concepts: Entities, Classifications, and Relationships.

When implementing the code described in the remainder of this article, you’ll be making use of and mapping to these fundamental Egeria concepts, so it is well worth understanding them in some detail now, before even considering specific instances of these types like GlossaryTerm or GlossaryCategory.

Meta-model mapping may be quite a straightforward conceptual mapping for some repositories. For example, Apache Atlas has the same concepts of Entities, Classifications and Relationships all as first-class objects.

On the other hand, not all repositories do. For example, IBM Information Governance Catalog (IGC) has Entities, and a level of Relationships and Classifications — but the latter two are not really first-class objects (i.e. properties and values cannot exist on them).

Therefore you may need to consider:

  • whether to attempt to support these constructs in your mappings, and
  • if so, how to prescriptively represent them (if they are not first-class objects).

For example, in the implementation of the sample IGC connector we suggest using categories with specific names in IGC to represent certain classifications. Additionally, one of the reasons for implementing a read-only connector is that we can still handle relationships without any properties: by simply leaving the properties empty on any Egeria relationships we translate from IGC.
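To make the category idea concrete, here is a sketch of that kind of naming convention. The "Classifications/" prefix is an invented example for illustration, not the convention used by the actual IGC connector.

```java
import java.util.Optional;

// Illustrative only: where classifications are not first-class objects, one
// option is a naming convention on an existing construct (here, a category
// path). The "Classifications/" prefix is an invented example, not the
// convention used by the actual IGC connector.
class ClassificationConvention {

    private static final String PREFIX = "Classifications/";

    // A category named "Classifications/Confidentiality" is read back as the
    // "Confidentiality" classification; any other category is ordinary metadata.
    static Optional<String> asClassification(String categoryPath) {
        if (categoryPath != null && categoryPath.startsWith(PREFIX)) {
            return Optional.of(categoryPath.substring(PREFIX.length()));
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(ClassificationConvention.asClassification("Classifications/Confidentiality").orElse("none"));
        // prints "Confidentiality"
    }
}
```

The trade-off of any such convention is that it must be prescriptive: users of the repository need to follow the naming rules for the mapping to hold.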

Map to the Egeria open metadata types

Once you have some idea of how to handle the mapping to the meta-model concepts, check your thinking by working through a few examples. Pick a few of the open metadata types and work out on paper how they map to your metadata repository’s pre-existing model. Common areas to start are GlossaryTerm and GlossaryCategory for glossary (business vocabulary) content, RelationalColumn and related types for relational database structures, and so on.

Most of these should be fairly straightforward after you have an approach for mapping to the fundamental meta-model concepts.

Then you’ll also want to decide how to handle any differences in types between the open metadata types and your repository’s pre-existing types:

  • Can your metadata repository be extended with new types?
  • Can your metadata repository’s pre-existing types be extended with new properties?
  • What impacts might be caused to repositories (and metadata instances) that already exist if you add to or extend the types?
  • What impacts will this have on your UI or how users interact with these extensions?

Your answers to these questions will inevitably depend on your specific metadata repository, but should help you decide on what approach you’d like to take:

  • Ignore any open metadata types that do not map to your pre-existing types.
  • Add any Egeria open metadata types that do not exist in your repository.
  • Add Egeria open metadata properties to your pre-existing types when Egeria has additional properties that do not yet exist in your type(s).
  • Implement a read-only connection (possibly with some hard-coding of property values) for types that are partially map-able, but not easily extended to support the full set of properties defined on the open metadata type.
  • and so on.

2. Pre-requisites

Creating your own connector project

Implementing an adapter can be greatly accelerated by using the pre-built base classes of Egeria, so building a connector in Java is likely the easiest way to start.

This requires an appropriate build environment comprising both Java (v1.8 or later) and Maven.

Setup a project

Egeria has been designed to allow connectors to be developed in projects independently from the core itself. Some examples have already been implemented, which could provide a useful reference point as you proceed through this walkthrough:

  • the Apache Atlas Repository Connector, and
  • the IBM InfoSphere Information Governance Catalog (IGC) Repository Connector.

Start by defining a new Maven project in your IDE of choice. In the root-level POM be sure to include the following:
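As a minimal sketch of those root-level POM entries, the following pins an Egeria version and declares the repository services interfaces as dependencies. The artifact coordinates reflect the Egeria core modules, but both they and the provided scope are assumptions to verify against the release you build toward:

```xml
<!-- Illustrative sketch: verify coordinates against your target Egeria release -->
<properties>
    <open-metadata.version>1.1</open-metadata.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.odpi.egeria</groupId>
        <artifactId>repository-services-apis</artifactId>
        <version>${open-metadata.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.odpi.egeria</groupId>
        <artifactId>open-connector-framework</artifactId>
        <version>${open-metadata.version}</version>
        <scope>provided</scope>
    </dependency>
</dependencies>
```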


Naturally change the version to whichever version of Egeria you’d like to build against. The dependencies listed ensure you’ll have the necessary portion of Egeria to build your connector against.

3. Implement the repository connector

Implementing your connector in your own project

The repository connector exposes the ability to search, query, create, update and delete metadata in an existing metadata repository. As such, it will be the core of your adapter.

You can start to build this within your new project by creating a new Maven module called something like adapter. Within this adapter module implement the following:

Implement an OMRSRepositoryConnectorProvider

Start by writing an OMRSRepositoryConnectorProvider specific to your connector, which extends OMRSRepositoryConnectorProviderBase. The connector provider is a factory for its corresponding connector. Much of the logic needed is coded in the base class, and therefore your implementation really only involves defining the connector class and setting this in the constructor.

For example, the following illustrates this for the Apache Atlas Repository Connector:

package org.odpi.egeria.connectors.apache.atlas.repositoryconnector;

import org.odpi.openmetadata.frameworks.connectors.properties.beans.ConnectorType;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.repositoryconnector.OMRSRepositoryConnectorProviderBase;

public class ApacheAtlasOMRSRepositoryConnectorProvider extends OMRSRepositoryConnectorProviderBase {

    static final String connectorTypeGUID = "7b200ca2-655b-4106-917b-abddf2ec3aa4";
    static final String connectorTypeName = "OMRS Apache Atlas Repository Connector";
    static final String connectorTypeDescription = "OMRS Apache Atlas Repository Connector that processes events from the Apache Atlas repository store.";

    public ApacheAtlasOMRSRepositoryConnectorProvider() {
        Class connectorClass = ApacheAtlasOMRSRepositoryConnector.class;
        super.setConnectorClassName(connectorClass.getName());

        ConnectorType connectorType = new ConnectorType();
        connectorType.setGUID(connectorTypeGUID);
        connectorType.setQualifiedName(connectorTypeName);
        connectorType.setDisplayName(connectorTypeName);
        connectorType.setDescription(connectorTypeDescription);
        connectorType.setConnectorProviderClassName(this.getClass().getName());

        super.connectorTypeBean = connectorType;
    }

}

Note that you’ll need to define a unique GUID for the connector type, and a meaningful name and description. Really all you then need to implement is the constructor, which can largely be a copy / paste for most adapters. Just remember to change the connectorClass to your own, which you’ll implement in the next step (below).

Implement an OMRSRepositoryConnector

Next, write an OMRSRepositoryConnector specific to your connector, which extends OMRSRepositoryConnector. This defines the logic to connect to and disconnect from your metadata repository. As such the main logic of this class will be implemented by:

  • Overriding the initialize() method to define any logic for initializing the connection: for example, connecting to an underlying database, starting a REST API session, etc.
  • Overriding the setMetadataCollectionId() method to create an OMRSMetadataCollection for your repository (see next step below).
  • Overriding the disconnect() method to properly cleanup / close such resources.

Whenever possible, it makes sense to try to re-use any existing client library that might exist for your repository. For example, Apache Atlas provides a client through Maven that we can use directly. Re-using it saves us from needing to implement and maintain various beans for the (de)serialization of REST API calls.

The following illustrates the start of such an implementation for the Apache Atlas Repository Connector:

package org.odpi.egeria.connectors.apache.atlas.repositoryconnector;

import org.apache.atlas.AtlasClientV2;
// ... other imports omitted for brevity ...

public class ApacheAtlasOMRSRepositoryConnector extends OMRSRepositoryConnector {

    private static final Logger log = LoggerFactory.getLogger(ApacheAtlasOMRSRepositoryConnector.class);

    private String url;
    private AtlasClientV2 atlasClient;
    private boolean successfulInit = false;

    public ApacheAtlasOMRSRepositoryConnector() { }

    public void initialize(String               connectorInstanceId,
                           ConnectionProperties connectionProperties) {
        super.initialize(connectorInstanceId, connectionProperties);

        final String methodName = "initialize";

        // Retrieve connection details
        Map<String, Object> proxyProperties = this.connectionBean.getConfigurationProperties();
        this.url = (String) proxyProperties.get("");
        String username = (String) proxyProperties.get("apache.atlas.username");
        String password = (String) proxyProperties.get("apache.atlas.password");

        this.atlasClient = new AtlasClientV2(new String[]{ this.url }, new String[]{ username, password });

        // Test the REST API connection by attempting to retrieve the types list
        try {
            AtlasTypesDef atlasTypes = atlasClient.getAllTypeDefs(new SearchFilter());
            successfulInit = (atlasTypes != null && atlasTypes.hasEntityDef("Referenceable"));
        } catch (AtlasServiceException e) {
            log.error("Unable to retrieve types from Apache Atlas.", e);
        }

        if (!successfulInit) {
            ApacheAtlasOMRSErrorCode errorCode = ApacheAtlasOMRSErrorCode.REST_CLIENT_FAILURE;
            String errorMessage = errorCode.getErrorMessageId() + errorCode.getFormattedErrorMessage(this.url);
            throw new OMRSRuntimeException(errorCode.getHTTPErrorCode(),
                    this.getClass().getName(),
                    methodName,
                    errorMessage,
                    errorCode.getSystemAction(),
                    errorCode.getUserAction());
        }

    }

    public void setMetadataCollectionId(String metadataCollectionId) {
        this.metadataCollectionId = metadataCollectionId;
        if (successfulInit) {
            metadataCollection = new ApacheAtlasOMRSMetadataCollection(this,
                    serverName,
                    repositoryHelper,
                    repositoryValidator,
                    metadataCollectionId);
        }
    }

}


This has been abbreviated from the actual class for simplicity; however, note that as part of the initialize() it may make sense to test out the parameters received for configuring the connection, to make sure that a connection to your repository can actually be established before proceeding any further.

(This example also sets up a successfulInit flag to indicate whether connectivity was possible: if it was not, we do not proceed any further with setting up the metadata collection, and the connector fails immediately with a meaningful error.)

You may want to wrap the metadata repository client’s methods with your own methods in this class as well. Generally think of this class as “speaking the language” of your proprietary metadata repository, while the next class “speaks” Egeria.

Implement an OMRSMetadataCollection

Finally, write an OMRSMetadataCollection specific to your repository, which extends OMRSMetadataCollectionBase. This can grow to be quite a large class, with many methods, but is essential for the participation of your metadata repository in a broader cohort. In particular, it is heavily leveraged by Egeria’s Enterprise Connector to federate actions against your metadata repository. As such, this is how your connector “speaks” Egeria (open metadata).

Ideally your implementation should override each of the methods defined in the base class. To get started:

  1. Override the addTypeDef() method. For each TypeDef this method should either extend your metadata repository to include the TypeDef, configure a mapping from your repository’s types to the open metadata types, or throw a TypeDefNotSupportedException. (For those that are implemented, it may be helpful to store them in a class member for comparison in the next step.)
  2. Override the verifyTypeDef() method, which should check that the types you have implemented (above) conform to the open metadata TypeDef received (i.e. that all properties are available, of the same data type, etc), and return false for any type not yet listed as implemented (this will cause addTypeDef() above to be called automatically).
  3. Override the getEntityDetail() method that retrieves an entity by its GUID.
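The interplay of the first two methods can be modelled in plain Java (illustrative only: the real methods receive Egeria TypeDef beans and throw checked OMRS exceptions such as TypeDefNotSupportedException, for which booleans stand in here):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of the addTypeDef()/verifyTypeDef() contract.
class TypeDefRegistry {

    // open metadata type name -> native type name, for the types we support
    private final Map<String, String> supported = new HashMap<>();

    // verifyTypeDef(): returning false for a type we have not yet mapped is what
    // prompts the repository services to call addTypeDef() for it.
    boolean verify(String openTypeName) {
        return supported.containsKey(openTypeName);
    }

    // addTypeDef(): record the mapping, or signal "unsupported" (a real connector
    // would throw TypeDefNotSupportedException instead of returning false).
    boolean add(String openTypeName, String nativeTypeName) {
        if (nativeTypeName == null) {
            return false;
        }
        supported.put(openTypeName, nativeTypeName);
        return true;
    }

    public static void main(String[] args) {
        TypeDefRegistry registry = new TypeDefRegistry();
        System.out.println(registry.verify("GlossaryTerm")); // prints "false": triggers addTypeDef()
        registry.add("GlossaryTerm", "AtlasGlossaryTerm");
        System.out.println(registry.verify("GlossaryTerm")); // prints "true": mapping now known
    }
}
```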

Note that there are various options for implementing each of these. Which route to take will depend on the particulars of your specific metadata repository:

  • In the sample IBM InfoSphere Information Governance Catalog Repository Connector the mappings are defined in code. This approach was used because IGC does not have first-class Relationship or Classification objects. Therefore, some complex logic is needed in places to achieve an appropriate mapping. Furthermore, if a user wants to extend the logic or mappings used for their particular implementation of IGC, this approach allows complete flexibility to do so. (A developer simply needs to override the appropriate method(s) with custom logic.)
  • The sample Apache Atlas Repository Connector illustrates a different approach. Because the TypeDefs are quite similar to those of Egeria, it is easier to map more directly through configuration files. A generic set of classes can be implemented that use these configuration files to drive the specifics of each mapping. In this case, simple JSON files were used to define the OMRS name of a particular object or property and the corresponding Atlas entity / property name to which it should be mapped. While this allows new mappings for new object types to be added much more quickly, it is far less flexible than the code-based approach used for IGC. (It is only capable of handling very simple mappings: anything complex would require either the definition of a complicated configuration file or still resorting to code.)
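The configuration-driven idea can be sketched as follows; the name-to-name table stands in for the JSON mapping files, and all of the names used are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of configuration-driven mapping: a name-to-name table
// (which a real connector might load from a JSON file) drives one generic
// property translation, so supporting a new type is mostly adding a new table.
class ConfigDrivenMapper {

    private final Map<String, String> omrsToNative;

    ConfigDrivenMapper(Map<String, String> omrsToNative) {
        this.omrsToNative = omrsToNative;
    }

    // Translate OMRS property names to native ones; unmapped properties are dropped.
    Map<String, Object> toNative(Map<String, Object> omrsProperties) {
        Map<String, Object> result = new HashMap<>();
        for (Map.Entry<String, Object> property : omrsProperties.entrySet()) {
            String nativeName = omrsToNative.get(property.getKey());
            if (nativeName != null) {
                result.put(nativeName, property.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> glossaryTermMapping = new HashMap<>();
        glossaryTermMapping.put("displayName", "name"); // invented mapping entry
        ConfigDrivenMapper mapper = new ConfigDrivenMapper(glossaryTermMapping);

        Map<String, Object> omrs = new HashMap<>();
        omrs.put("displayName", "Employee");
        System.out.println(mapper.toNative(omrs)); // prints "{name=Employee}"
    }
}
```

The limitation the article describes is visible here: a flat name-to-name table cannot express conditional logic, value transformations, or one-to-many mappings; those still require code.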

Once these minimal starting points are implemented, you should be able to configure the OMAG Server Platform as a proxy to your repository connector by following the instructions in the next step.

Important: this will not necessarily be the end-state pattern you intend to use for your repository connector. Nonetheless, it can provide a quick way to start testing its functionality.

This very basic, initial scaffold of an implementation allows:

  • a connection to be instantiated to your repository, and
  • translation between your repository’s representation of metadata and the open metadata standard types.

4. Package your connector

Packaging your connector

To make your connector available to run within the OMAG Server Platform, you can package it into a distributable .jar file using another Maven module, something like distribution.

In this module’s POM file include your adapter module (by artifactId) as a dependency, and consider using the maven-shade-plugin to define just the necessary components for your .jar file. Since it should only ever be executed as part of an Egeria OMAG Server Platform, your .jar file does not need to re-include all of the underlying Egeria dependencies.

For example, in our Apache Atlas Repository Connector we only need to include the adapter module itself and the base dependencies for Apache Atlas’s Java client (all other dependencies like Egeria core itself, the Spring framework, etc will already be available through the Egeria OMAG Server Platform):

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns=""





Of course, you do not need to use the maven-shade-plugin to accomplish such bundling: feel free to define a Maven assembly or other Maven techniques.

Building and packaging your connector should then be as simple as running the following from the root of your project tree:

$ mvn clean install

Working out exactly which dependencies to include when you are using an external client like Apache Atlas’s can be a little tricky. Starting small will inevitably produce errors about classes not being found; when building, you’ll see a list of all the classes considered by the shade plugin and which were included or excluded. You can use this to make educated guesses about which dependencies may still need to be included if you run into such errors. (Ideally you’ll have a simple, single jar file / dependency you can include directly instead of needing to work through this, but that won’t always be the case.)

Again, since we will just be using this connector alongside the existing OMAG Server Platform, this avoids ending up with a .jar file that includes the entirety of the Egeria OMAG Server Platform (and its dependencies); instead, your minimal .jar can be loaded at startup of the core OMAG Server Platform and configured through the REST calls covered in section 6.

Of course, if you intend to embed or otherwise implement your own server, the packaging mechanism will likely be different. However, as mentioned in the previous step this should provide a quick and easy initial way of testing the functionality of the connector against the core of Egeria.

5. Start up the OMAG Server Platform with your connector

Configuring the OMAG Server Platform with your connector

Assuming you’ve built your connector .jar file using the approach outlined above, you’ll now have a .jar file under the distribution/target/ directory of your project: for the Apache Atlas example, this would be distribution/target/egeria-connector-apache-atlas-package-1.1-SNAPSHOT.jar.

When starting up the OMAG Server Platform of Egeria, we need to point to this .jar file using either the LOADER_PATH environment variable or a -Dloader.path= command-line argument to the server start command:

$ export LOADER_PATH=..../distribution/target/egeria-connector-apache-atlas-package-1.1-SNAPSHOT.jar
$ java -jar server-chassis-spring-1.1-SNAPSHOT.jar

… or:

$ java -Dloader.path=..../distribution/target/egeria-connector-apache-atlas-package-1.1-SNAPSHOT.jar -jar server-chassis-spring-1.1-SNAPSHOT.jar

Either startup should ensure your connector is now available to the OMAG Server Platform to use for connecting to your metadata repository. You may also want to set the LOGGING_LEVEL_ROOT environment variable to define a more granular logging level for your initial testing, e.g. export LOGGING_LEVEL_ROOT=INFO before running the startup command above, to receive more detailed information during startup. (You can also set a similar variable to get even deeper information for just your portion of code by using your unique package name, e.g. export LOGGING_LEVEL_ORG_ODPI_EGERIA_CONNECTOR_X_Y_Z=DEBUG.)

Then configure the OMAG Server Platform to use your connector. Note that the configuration and startup sequence is important.

Start with just the following:

Enable the OMAG Server as a repository proxy

Enable the OMAG Server as a repository proxy by specifying your canonical OMRSRepositoryConnectorProvider class name for the connectorProvider={javaClassName} parameter and POSTing to:


For example, in our Apache Atlas example we would POST to:


… with a payload like the following:

  "": "http://atlashost:21000",
  "apache.atlas.username": "admin",
  "apache.atlas.password": "admin"

Start the server instance

Start the OMAG Server instance by POSTing to:


During server startup you should then see various messages related to the metadata type registration process as the open metadata types are checked against your repository. (These in turn call the methods you’ve implemented in your OMRSMetadataCollection.) You might naturally need to iron out a few bugs in those methods before proceeding further…

6. Test your connector’s basic operations

Testing your connector's basic operations via API

Each time you change your connector code, you’ll naturally want to re-build it (mvn clean install) and restart the OMAG Server Platform. If you are not changing any of the configuration, you can simply restart the OMAG Server Platform and re-run the POST to start the server instance (the last step above). If you need to change something in the configuration itself, it will be best to:

  1. Stop the OMAG Server Platform.
  2. Delete the configuration document (a file named something like
  3. Start the OMAG Server Platform again.
  4. Re-run both steps above (enabling the OMAG Server as a proxy, and starting the instance).

From there you can continue to override other methods of the OMRSMetadataCollectionBase class to implement the other metadata functionality for searching, updating and deleting as well as retrieving other instances of metadata like relationships. Most of these methods can be directly invoked (and therefore tested) using the REST API endpoints of the OMAG server.

A logical order of implementation might be:

Read operations


… which you can test through GET to



… which you can test through GET to



… which you can test through POST to


… with a payload like the following (to retrieve all relationships):

  "class": "TypeLimitedFindRequest",
  "pageSize": 100

These are likely to require the most significant logic for any mappings / translations you’re doing between the open metadata types and your own repository. For example, with Apache Atlas this is where we translate between native types like AtlasGlossaryTerm (and its representation in the Apache Atlas Java client) and the open metadata type GlossaryTerm (and its representation through the standard OMRS interfaces).

The other main area to then implement is searching, for example:


… which you can test through POST to


… with a payload like the following (to find only those GlossaryTerms classified as SpineObjects and whose name also starts with Empl):

  "class": "EntityPropertyFindRequest",
  "typeGUID": "0db3e6ec-f5ef-4d75-ae38-b7ee6fd6ec0a",
  "pageSize": 10,
  "matchCriteria": "ALL",
  "matchProperties": {
    "class": "InstanceProperties",
    "instanceProperties": {
      "displayName": {
        "class": "PrimitivePropertyValue",
        "instancePropertyCategory": "PRIMITIVE",
        "primitiveDefCategory": "OM_PRIMITIVE_TYPE_STRING",
        "primitiveValue": "Empl*"
  "limitResultsByClassification": [ "SpineObject" ]


… which you can test through POST to


… with a payload like the following (to find only those GlossaryTerms classified as ContextDefinitions where the scope of the context definition contains local — note to change the classification type you change the end of the URL path, above):

  "class": "EntityPropertyFindRequest",
  "typeGUID": "0db3e6ec-f5ef-4d75-ae38-b7ee6fd6ec0a",
  "pageSize": 100,
  "matchClassificationCriteria": "ALL",
  "matchClassificationProperties": {
    "class": "InstanceProperties",
    "instanceProperties": {
      "scope": {
        "class": "PrimitivePropertyValue",
        "instancePropertyCategory": "PRIMITIVE",
        "primitiveDefCategory": "OM_PRIMITIVE_TYPE_STRING",
        "primitiveValue": "*local*"


… which you can test through POST to


… with a payload like the following (to find only those GlossaryTerms that contain address somewhere in one of their textual properties):

  "class": "EntityPropertyFindRequest",
  "typeGUID": "0db3e6ec-f5ef-4d75-ae38-b7ee6fd6ec0a",
  "pageSize": 10

and so on.

You hopefully have access to a search API for your repository so that you can efficiently fulfil these requests. You want to avoid pulling back a large portion of your metadata and looping through it in memory to find specific objects; instead, push the search down to your repository itself as much as possible.
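The push-down idea can be sketched as follows, with an invented query syntax standing in for whatever your repository’s real search API accepts. The point is that the OMRS match properties (including the "Empl*"-style wildcards seen in the payloads above) become predicates in the native query, rather than filters applied in memory:

```java
import java.util.Map;
import java.util.StringJoiner;
import java.util.TreeMap;

// Illustrative push-down: build a native query from OMRS-style match properties
// instead of retrieving everything and filtering in memory. The query syntax is
// invented; substitute whatever your repository's search API accepts.
class SearchPushdown {

    static String toNativeQuery(String typeName, Map<String, String> matchProperties) {
        StringJoiner where = new StringJoiner(" AND ");
        for (Map.Entry<String, String> property : matchProperties.entrySet()) {
            // "Empl*"-style wildcards become a native "like" predicate
            where.add(property.getKey() + " like '" + property.getValue().replace("*", "%") + "'");
        }
        return "from " + typeName + (matchProperties.isEmpty() ? "" : " where " + where);
    }

    public static void main(String[] args) {
        Map<String, String> match = new TreeMap<>();
        match.put("displayName", "Empl*");
        System.out.println(toNativeQuery("GlossaryTerm", match));
        // prints "from GlossaryTerm where displayName like 'Empl%'"
    }
}
```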

Once you have those working, it should be relatively easy to go back and fill in areas like the other TypeDef-related methods, to ensure your connector can participate appropriately in a broader open metadata cohort.

Write operations

While the ordering above is necessary for all connectors, if you’ve decided to also implement write operations for your repository there are further methods to override. These include:

  • creation operations like addEntity,
  • update operations like updateEntityProperties,
  • and reference copy-related operations like saveEntityReferenceCopy.

If you are only implementing a read-only connector, these methods can be left as-is and the base class will indicate they are not supported by your connector.
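A minimal model of that default behaviour is sketched below. It is illustrative only: the real base class throws Egeria’s checked FunctionNotSupportedException, for which an unchecked exception stands in here.

```java
// Illustrative model of the base-class behaviour for write operations: every
// write method fails as unsupported until a subclass chooses to override it.
class ReadOnlyCollection {
    String addEntity(String typeName) {
        throw new UnsupportedOperationException("addEntity is not supported by this connector");
    }
}

// A read-write connector overrides only the operations it supports.
class ReadWriteCollection extends ReadOnlyCollection {
    @Override
    String addEntity(String typeName) {
        return "created-" + typeName; // a real connector would call its repository here
    }
}

class ReadOnlyDemo {
    public static void main(String[] args) {
        System.out.println(new ReadWriteCollection().addEntity("GlossaryTerm")); // prints "created-GlossaryTerm"
    }
}
```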

7. Add the event mapper connector

Adding the event mapper

The event mapper connector enables events from an existing metadata repository to distribute changes to metadata to the other metadata repositories that are members of the same OMRS cohort. It is not a mandatory component: as long as your connector can “speak” Egeria through an OMRSMetadataCollection it can participate in an open metadata cohort via the Enterprise Connector. However, if your metadata repository already has some kind of event or notification mechanism, the event mapper can be an efficient way to participate in the broader open metadata cohort.

Within the same adapter Maven module, perhaps under a new sub-package like ...eventmapper, implement the following:

Implement an OMRSRepositoryEventMapperProvider

Start by writing an OMRSRepositoryEventMapperProvider specific to your connector, which extends OMRSRepositoryConnectorProviderBase. The connector provider is a factory for its corresponding connector. Much of the logic needed is coded in the base class, and therefore your implementation really only involves defining the connector class and setting this in the constructor.

For example, the following illustrates this for the Apache Atlas Repository Connector:

package org.odpi.egeria.connectors.apache.atlas.eventmapper;

import org.odpi.openmetadata.frameworks.connectors.properties.beans.ConnectorType;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.repositoryconnector.OMRSRepositoryConnectorProviderBase;

public class ApacheAtlasOMRSRepositoryEventMapperProvider extends OMRSRepositoryConnectorProviderBase {

    static final String connectorTypeGUID = "daeca2f1-9d23-46f4-a380-19a1b6943746";
    static final String connectorTypeName = "OMRS Apache Atlas Event Mapper Connector";
    static final String connectorTypeDescription = "OMRS Apache Atlas Event Mapper Connector that processes events from the Apache Atlas repository store.";

    public ApacheAtlasOMRSRepositoryEventMapperProvider() {
        Class connectorClass = ApacheAtlasOMRSRepositoryEventMapper.class;
        super.setConnectorClassName(connectorClass.getName());

        ConnectorType connectorType = new ConnectorType();
        connectorType.setGUID(connectorTypeGUID);
        connectorType.setQualifiedName(connectorTypeName);
        connectorType.setDisplayName(connectorTypeName);
        connectorType.setDescription(connectorTypeDescription);
        connectorType.setConnectorProviderClassName(this.getClass().getName());

        super.connectorTypeBean = connectorType;
    }

}


Note that you’ll need to define a unique GUID for the connector type, and a meaningful name and description. Really all you then need to implement is the constructor, which can largely be a copy / paste for most adapters. Just remember to change the connectorClass to your own, which you’ll implement in the next step (below).

Implement an OMRSRepositoryEventMapper

Next, write an OMRSRepositoryEventMapper specific to your connector, which extends OMRSRepositoryEventMapperBase and implements VirtualConnectorExtension and OpenMetadataTopicListener. This defines the logic to pickup and process events or notifications from your repository and produce corresponding OMRS events. As such the main logic of this class will be implemented by:

  • Overriding the initialize() method to define any logic for initializing your event mapper. For example, this could be connecting to an existing event bus for your repository, or some other mechanism through which events should be sourced.
  • Overriding the start() method to define how to start up the processing of such events.
  • Implementing the initializeEmbeddedConnectors() method to register as a listener to any OpenMetadataTopicConnectors that are passed as embedded connectors.
  • Implementing the processEvent() method to define how to process each event received from your repository’s event / notification mechanism.

The bulk of the logic in the event mapper should be called from this processEvent() method: defining how events that are received from your repository are processed (translated) into OMRS events that deal with Entities, Classifications and Relationships.

Typically you would want to construct such instances by calling into your OMRSMetadataCollection, ensuring you produce the same payloads of information for these instances both through API connectivity and the events.

Once you have the appropriate OMRS object, you can make use of the methods provided by the repositoryEventProcessor, configured by the base class, to publish these to the cohort. For example:

  • repositoryEventProcessor.processNewEntityEvent(...) to publish a new entity instance (EntityDetail)
  • repositoryEventProcessor.processUpdatedRelationshipEvent(...) to publish an updated relationship instance (Relationship)
  • and so on
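The dispatch shape of processEvent() can be sketched as follows. The "TYPE|guid" native event format and the small interface are invented stand-ins for your repository’s events and for Egeria’s repositoryEventProcessor; a real repository would deliver JSON or a typed bean, and a real event mapper would publish full OMRS instances.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative shape of processEvent(): parse the repository's native event,
// decide what changed, and hand a translated instance to the event processor.
class EventMapperSketch {

    interface EventProcessor {
        void processNewEntityEvent(String guid);
        void processDeletedEntityEvent(String guid);
    }

    private final EventProcessor processor;

    EventMapperSketch(EventProcessor processor) {
        this.processor = processor;
    }

    void processEvent(String nativeEvent) {
        String[] parts = nativeEvent.split("\\|");
        switch (parts[0]) {
            case "ENTITY_CREATE":
                processor.processNewEntityEvent(parts[1]);   // would publish a full EntityDetail
                break;
            case "ENTITY_DELETE":
                processor.processDeletedEntityEvent(parts[1]);
                break;
            default:
                break; // event types we do not map are simply ignored
        }
    }

    public static void main(String[] args) {
        List<String> published = new ArrayList<>();
        EventMapperSketch mapper = new EventMapperSketch(new EventProcessor() {
            public void processNewEntityEvent(String guid)     { published.add("new:" + guid); }
            public void processDeletedEntityEvent(String guid) { published.add("del:" + guid); }
        });
        mapper.processEvent("ENTITY_CREATE|abc");
        mapper.processEvent("SOMETHING_ELSE|xyz");
        System.out.println(published); // prints "[new:abc]"
    }
}
```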

To add the event mapper configuration to the OMAG Server Platform configuration you started with above, add:

Configure the cohort event bus

This should be done first, before any of the other configuration steps above, by POSTing to:


… with a payload like the following:

  "producer": {
  "consumer": {

Configure the event mapper

This can be done nearly last, after all of the other configuration steps above but still before the start of the server instance. Specify your canonical OMRSRepositoryEventMapperProvider class name for the connectorProvider={javaClassName} parameter and connection details to your repository’s event source in the eventSource parameter by POSTing to:


For example, in our Apache Atlas example we would POST to:


8. Test your connector’s conformance

Components for testing your connector's conformance

Aside from the API-based testing you might do as part of the on-going implementation of your OMRSMetadataCollection class, once you are in a position where you have most of the methods implemented it is a good idea to test your connector against the Egeria Conformance Suite.

This will provide guidance on what features you may still need to implement in order to conform to the open metadata standards.

Once your connector conforms, you should also have the necessary output to apply for use of the ODPi Egeria Conformant mark.

ODPi Member Spotlight: Interview with Ferd Scheepers, Chief Information Architect, ING


The success that ODPi has achieved as a nonprofit organization committed to simplification and standardization of the big data ecosystem is driven by the dedication of our member organizations and individuals. The ODPi Member Spotlight series interviews key ODPi contributors for a conversation exploring why they participate in ODPi, seeking to learn more about the individuals whose efforts are accelerating the development of today’s Big Data ecosystem, standards and solutions.

We recently spoke with Ferd Scheepers, Chief Information Architect for ING, to discuss his involvement with ODPi. In his role as ING’s global Chief Information Architect, Ferd has driven ING’s journey to becoming a data driven company for the last 5 years, defining ING’s Data Lake architecture for information management. He is championing the Apache Atlas and ODPi open metadata initiatives, and took time to share his vision, ideas and what motivates his contributions to ODPi–along with insight on how ING benefits from being an active member.

Tell me about your job position and what you are responsible for at ING?

For the last five years, I have been working as the global Chief Information Architect of ING. In this role, I am responsible for creating the Information Architecture for ING, which is becoming more and more important as we pursue the ambition of becoming a true data-driven organisation. We have created the ING data lake architecture, which is the main vehicle for ING to implement a fully metadata-driven data landscape, where all the data in the organisation is known. By known we mean not only where the data is, but also the data quality, the meaning, the owner of the data, and the full lineage from where the data comes to life, to any place the data is consumed, either by ING employees or by external parties like our regulators.

What is your involvement with ODPi? Tell us about the role you’ve played, your contributions, goals, and interests.

We got involved with ODPi in early 2017. At that time we had started together with IBM and Hortonworks to drive an Open Metadata initiative to define a set of open metadata standards, and build both a reference implementation for an Open Metadata compliant Metadata repository and the Open Metadata Highway. The Open Metadata Highway is a set of (Open Metadata Repository) Services that let different metadata repositories talk to each other in order to exchange metadata. On top of OMRS there is a set of (Open Metadata Access) Services, that enable dedicated applications or UIs specific for different personas in the organisation to consume services from the entire metadata landscape.

ODPi, as an existing vendor-neutral organisation, came into the picture as the most logical home for this open standard. Apache Atlas was chosen for the reference implementation of an Open Metadata compliant metadata repository, and the Open Metadata Highway was developed as the Egeria project within ODPi.

Why does ING see value in this work that ODPi is providing a vendor-neutral home for?

When ING got involved in driving this Open Metadata initiative, we knew that making such an initiative succeed requires several things: a willingness of several vendors to join together to make it a success; at least one company (preferably more) that represents the consumers of these vendor solutions, to explain the need for such a standard from a consumer perspective; and an open, vendor-neutral and respected community to be a home for the standard.

IBM and Hortonworks were involved from day one, representing the vendors. ING took on the role of catalyst to bring them together: not just as a voice of the customer, we decided to sit in the driving seat and have a full team contribute to developing this open standard. ODPi, already a very active group in steering the standardisation around Hadoop distributions, seemed the logical choice for a home for the work we were doing, both because ODPi already had most of the facilities we needed, and because many of the vendors we wanted to join in this initiative were already members.

ING also became a full member of ODPi in 2017 to support the valuable work ODPi is doing. We very much value the platform ODPi offers for developing the open standard, but more importantly, we value the community of vendors it brings us, and the exposure we get from ODPi to get the open standard known within a bigger community of both vendors and consumers.

What benefit has ING recognized from its membership with ODPi? What value do you expect to see from your participation?

Our participation in ODPi has already given us the platform to develop the ODPi Egeria open metadata standard. A full team from ING has been actively building the standard on the ODPi infrastructure. As a member, we also get to co-steer the direction of the open metadata initiative, and we benefit from the marketing initiatives from ODPi.

Through the community, we have now also involved SAS in the open metadata initiative, and we are talking to others. We expect ODPi to help us get this initiative known even more, both within the vendor community and with the consumer community.

Once the standard is mature, we see a role for ODPi in validating compliance to the standard, by delivering a test suite. ODPi will also deliver a set of value packs on top of the standard, like a GDPR pack, something we also see a lot of value in.

ODPi Egeria - Project Objectives


Tell us what excites you the most personally in regards to the technical work being done in ODPi?

Being a real nerd, I love developing a new standard by really building it from scratch. Unfortunately, I can’t spend all my days coding anymore, so I am limited to reading some of the code that was developed and to helping drive the architecture and design for Egeria.

Building this standard, in my opinion, will be a game changer for the data industry. Once we have a way to govern all data in all systems through the metadata, it will take the maturity of data management and governance to a whole new level. Imagine banks like ING delivering data to our regulators through a set of open formats, with the open metadata format on top, and our regulators having full lineage on where the data originated. It would solve all the challenges companies have today in proving that they are in control.

Companies exchanging data will be able to see where their data is being used, and supply usage agreements with that data in an open format. Data being available everywhere with the full metadata, every data consumer understanding what data they look at, the quality, the definitions, in any technology they use. Imagine customers being able to see exactly where their data is, who has access to it, what consent they have given.

Data privacy by design will truly become feasible through such a standard. And we will not stop at the traditional data landscape; it also extends to APIs, events, and all the other ways data is made accessible. I believe this standard is the beginning of a transformation in data management, and I think it is a very exciting project to work on.

Project Frontier: Shaping the Next Generation Hadoop Build Framework of Apache Bigtop


By Evans Ye, Yahoo Taiwan

As a mature Apache top-level project, Apache Bigtop has now been around for 6 years, serving as a critical component for building Hadoop distributions running in production. From on-premises, to big data solution vendors, to cloud providers—Bigtop has been widely leveraged in the big data world.

Yet today that world is growing even more complex. Bigtop started with only a handful of components (HBase, Hive, Pig, Oozie, etc.); its latest release now includes more than 30. To handle such complexity, developers need to make sure a patch won’t break components that are integrated together, and release engineers need to ensure features are fully functional. This is why we initiated Project Frontier, funded by ODPi.

Project Frontier focuses on extending and hardening the feature Bigtop was originally designed for: building Hadoop distributions. Bigtop can only produce high-quality distributions by working closely with upstream projects to solve integration problems across multiple Hadoop ecosystem projects.

Based on observations of the existing Bigtop build framework, we set the following goals for Project Frontier:

  1.  Provide a one-stop seamlessly integrated build pipeline
  2.  Document examples as reference implementations
  3.  Create better documentation for iTest, Smoke Tests and other components

These goals all serve one core mission of Project Frontier: make Bigtop extremely friendly to use. The industry needs a simplified integration test framework for Apache Bigtop, and a better way for Apache Bigtop to work with other Hadoop ecosystem projects, with release and integration tests to ensure that versions of different projects work properly with one another.

For example, one scenario we’d like to support is that developers simply submit a commit SHA-1 containing a newly developed feature, and the framework handles all the rest to craft an integration test report. That’s how simple it is.

Project Frontier Feature Preview

To tackle these ambitious goals, we will develop the features and functionality of Project Frontier in phases. The initial phase focuses on improvements to building components in Bigtop. Let’s preview a feature that will be available in the upcoming Bigtop 1.3 release. In Bigtop’s master branch, users will now be able to run the following commands under the Bigtop repository to build components.

Let’s say Hadoop:

$ git clone

$ cd bigtop

$ ./gradlew hadoop-pkg-ind

That’s it. Bigtop will take care of the full build environment and dependencies for you. The advantages of this new feature are:

  1.  It abstracts away the tedious work that would otherwise require direct user attention
  2.  Gradle targets can now be streamlined, for example:

$ ./gradlew hadoop-pkg-ind docker-provisioner

which builds Hadoop and deploys it as a testing cluster.

We’re still polishing the feature to support more customizations, for example building packages with Nexus server support. Many more features are under development, so share your input and get involved. The Bigtop community welcomes all kinds of contributions, from code to documentation, testing and discussion. Learn more by visiting our page on GitHub, and join us now to shape the way we are building and integrating the Big Data ecosystem!


Evans Ye is a PMC member and former Chair of Apache Bigtop, and leads the Project Frontier initiative for ODPi. He works at Yahoo Taiwan to develop E-Commerce data solutions. Ye loves to code, automate things, and develop big data applications.           

Managing Privacy in the GDPR-era



Now that the EU General Data Protection Regulation (GDPR) is in full effect, businesses both large and small have made changes to be fully compliant, regardless of where they are located. The changes include more regulation for how companies collect data, how they store it, how they keep it safe from hackers, and how they use it in their day-to-day activities. Some people think of GDPR as ‘giving the power over data back to the user’. GDPR replaced old data privacy laws that were set up in 1995 and had been obsolete for some time.

But what does this mean for the consumer?

According to this Marketing Week article, consumers don’t understand how brands use their data. In fact, 48% of consumers still don’t understand where and how organizations use their personal data. This is up from 31% when the research was last conducted two years ago.

Only 7% feel they have a good understanding of how companies use their data, with 45% saying they “somewhat understand,” and just 18% believing businesses treat people’s personal data in an honest and transparent way.

This is where ODPi comes in. ODPi’s Data Governance initiative aims to create an open data governance ecosystem through collaboration with data governance subject matter experts and data platform and tools vendors. On Thursday, July 12, ODPi is hosting a webinar focused on managing privacy.

Mandy Chessell, distinguished engineer and master inventor at IBM, will share best practices for how IBM manages data that keeps individuals’ privacy respected and is compliant with new regulations on data privacy such as the EU GDPR.

Attendees will learn:

  • The life cycle of a digital service as it is developed, sold, enhanced and used. This life cycle breaks the work into six stages. Each stage describes the roles and the activities involved to ensure data privacy.
  • The types of artifacts that need to be collected about a digital service and the methods used to develop it.
  • How these artifacts link together in an open metadata repository (data catalog).

Click to learn more or to register for the webinar.

The state of open source and big data – three years later


Originally posted on DataWorks Summit blog

ODPi turns 3 this year, having first been announced at the spring Strata+Hadoop World and brought under the auspices of the Linux Foundation later in the year at the fall Strata+Hadoop World. Hadoop then turned 10 the following year, and was proclaimed dead, then alive, and then seemingly scrubbed from the world. One might think this meant the nail in the coffin for an organization centered on Hadoop standardization.

The Linux Foundation looks at open source projects in a life cycle, driven by the market needs. A common chart used to describe this is shown below.

In essence, open source foundations such as ODPi invest in developer communities, whose work enables accelerated delivery of new products to the marketplace and R&D cost savings for organizations. As this produces profits for these organizations, they push investment back into the projects and foundations that support this work. In present-day open source parlance, this practice is known as “Managing your Software Supply Chain”. An active cycle here is able to react and adapt to market demands, as well as take inputs from all stakeholders: developers, implementers, administrators, and end-users.

So, as ODPi started to hit its stride in 2016, we talked with people across the data landscape. From these conversations, we quickly saw that enterprise production adoption numbers for big data technology were skewed – mostly because of the lack of a solid definition. To better baseline the discussion, we came up with this maturity model for how big data technologies are adopted in the enterprise.

Using this model showed that in 2017, nearly three-quarters of organizations were still not deploying big data enterprise-wide. What’s blocking this? Data governance: a broad and under-invested area, but one growing more critical by the day as new regulations come into play alongside breakdowns in managing data privacy.

ODPi’s belief is that tackling an issue as broad as data governance can only be done with all members of the data ecosystem participating – platform vendors, ISVs, end users, and data governance and privacy experts. This collaboration can only happen in a vendor-neutral space, which is why ODPi has launched a PMC focused solely on this area.

During Dataworks Summit Berlin, there will be numerous sessions and Meetups around this effort to help you learn more:

We will also be active in the community showcase, where you can chat directly with the experts in this area and learn how to participate in this effort.

Bringing it back to the original question: we are three years into this journey of creating sustainability in big data. We’ve had successes in reducing the number of disparate platforms and in bringing market awareness to the issues enterprises face in adopting these tools. Now the community is poised to take the lessons learned and build a strong community around governance to solidify this practice. Are the challenges different than three years ago? Absolutely. However, the goal of enterprise adoption remains the same, and with that, we see that big data is becoming more mature, more inclusive, and is building a more collaborative community.