
ODPi Egeria Release 1.2


It’s only one month since the ODPi Egeria team completed its first release cycle, and the time has flown by. Today, as planned, the team is releasing version 1.2, which will be available to download here.

This release has four key features:

  • Conformance test suite
  • Asset cataloging and consumption
  • Governance zones and metadata security
  • Open metadata archives

Let’s find out what these new features are:

Conformance Test Suite

If you want to test your metadata repository with Egeria, this is the tool to use! The Conformance Test Suite (CTS) checks how a repository works with Egeria. The CTS runs a number of tests to ensure the repository can interact with Egeria as expected.

We have written two blogs on the Conformance Test Suite:

What is Egeria’s Conformance Test Suite for Metadata Repositories? (Overview blog)

The ODPi Egeria Open Metadata Conformance Suite – Repository Workbench (Technical deep dive blog)

Why not check if your repository is conformant?

Asset cataloging and consumption

These are a set of new access services to support the cataloging of assets:

  • The Asset Consumer OMAS provides services for an individual who wants to work with an asset, enabling them to manage the connection to the asset, provide feedback, create informal tags, and more.
  • The Asset Owner OMAS supports the manual cataloging of new assets through APIs and notifications. This is key for asset owners, who must ensure the classification of assets and the assignment of connection(s) to the asset.

If you want to try out the new asset cataloging features, check out our tutorials, hands-on labs, and samples.

Governance zones and metadata security

The team has added metadata-centric security to the Open Metadata and Governance (OMAG) Platform! This provides support for the Egeria metadata servers and future governance servers. Metadata security is controlled by a connector that can validate access to individual servers, services, and assets based on the identity of the caller and the service or asset they wish to access.

The security interfaces supported by this connector in this release are:

  • OpenMetadataServerSecurity
  • OpenMetadataServiceSecurity
  • OpenMetadataRepositorySecurity
  • OpenMetadataAssetSecurity
  • OpenMetadataConnectionSecurity

Open metadata archives

This last feature is an enhancement to support dynamic types and type patching in the Open Metadata Repository Services (OMRS). The OMRS enables metadata repositories to exchange metadata irrespective of the technology or technology supplier.

Support has also been added for loading archives of metadata instances.

Summary

The ODPi Egeria team hopes you enjoy the new features! If you want to try them out, check out the links in this blog.

To join the Egeria project or chat with the team on Slack, follow the links below.

  • Contribute to ODPi Egeria 
  • Contact the team via Slack – join here and go to #egeria-discussions. We’d love to hear what you think!

The ODPi Egeria Open Metadata Conformance Suite – Repository Workbench


The Open Metadata Conformance Suite is used to verify the behaviour of the Egeria platform and repository services. The Conformance Suite is the basis of the Egeria Conformance Program.

This document describes the tests run by the Repository Workbench. This workbench defines a set of tests that verify the behaviour of the repository services and is used to verify that a repository connector fulfils the requirements of the metadata collection REST API and the OMRS event exchange.

The Conformance Suite is augmented with each Egeria release, with more tests being added and existing tests being extended or refined. A repository is tested and certified as conformant to a specific version of the Conformance Suite. At the time of writing, Egeria release 1.2 is about to be made available. The built-in Egeria in-memory repository and graph repository are both fully conformant with Conformance Suite release 1.2. Details of the results of testing the built-in connectors are included below.

What the Conformance Suite does

The Conformance Suite uses a pair of servers – one is the server under test and the other is the server that drives the test suite. The server under test should be configured to use the repository you need to test. For read/write repository connectors you can start with an empty repository. For a read-only connector (one that does not support creation of instances) connect to a repository that already contains instances of the types supported. The conformance tests are quite thorough and will drive a connector into many states (both valid and invalid), so you should not use a repository containing valuable metadata.

The servers need to be in the same cohort. There could be other servers in the cohort, but for simplicity you may want to run with just the two servers: the server under test and the test driver. The servers can be run on the same OMAG Server Platform, or they could be run on separate platforms.

The Conformance Suite repository workbench consists of a number of testcases that exercise the repository under test. The testcases invoke the OMRS REST interface and perform operations that generate OMRS events that the server under test reacts to. The Conformance Suite therefore tests more than just the repository under test. It is also testing the function of the OMRS cohort and the local connectors.

The functions performed by the metadata collection API are a combination of mandatory functions that a conformant repository must support, plus optional functions that a repository should support if possible. A repository is declared as “Conformant” if it supports all the mandatory requirements and behaves properly for any optional requirements that it does not support. When asked if it can perform an optional requirement that it does not support, the repository should respond indicating that the function is not supported.
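
To make the expected behaviour concrete, here is a minimal sketch in Java of how a caller experiences an unsupported optional function (the wrapper class and method are illustrative; undoEntityUpdate() is one of the optional metadata collection operations):

import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.OMRSMetadataCollection;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.properties.instances.EntityDetail;
import org.odpi.openmetadata.repositoryservices.ffdc.exception.FunctionNotSupportedException;

public class OptionalFunctionExample
{
    // Attempt an optional operation; a conformant repository that does not
    // implement it signals this cleanly with FunctionNotSupportedException.
    public EntityDetail tryUndo(OMRSMetadataCollection metadataCollection,
                                String                 userId,
                                String                 entityGUID) throws Exception
    {
        try
        {
            // undoEntityUpdate() belongs to the optional UNDO_UPDATE profile.
            return metadataCollection.undoEntityUpdate(userId, entityGUID);
        }
        catch (FunctionNotSupportedException notSupported)
        {
            // Expected, conformant behaviour for a repository that has not
            // implemented this optional function.
            return null;
        }
    }
}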

The Conformance Suite specifies sets of functional requirements and divides them into profiles. A profile can be either mandatory or optional. Profiles and requirements are described in more detail a little further on. Before that, the next section describes how to get started with running the Conformance Suite.

To run the Conformance Suite Repository Workbench

Before you run the workbench, you need to decide a few things:

  • the repository that you want to test;
  • a name for the server under test;
  • a name for the cohort to which both the server under test and the CTS server will register.

The following steps can be used to configure and run the workbench.

Configure the Server Under Test

Configure the server that is to be the subject of the test. There is nothing special about the configuration of this server – except that it needs to join the cohort and should be configured to use the repository you want to test. For example, the server may be configured to use a local graph repository or an in-memory repository.

In the following examples, the user name is ‘user1’ and the server is called ‘test1’, but you should replace these with whatever names you choose.

Set its server type name to ‘Metadata Repository Server’:

POST http://localhost:8080/open-metadata/admin-services/users/user1/servers/test1/server-type?typeName=Metadata Repository Server

Set the serverURLRoot:

POST http://localhost:8080/open-metadata/admin-services/users/user1/servers/test1/server-url-root?url=https://localhost:8080

Set the EventBus:

POST http://localhost:8080/open-metadata/admin-services/users/user1/servers/test1/event-bus

with request body:

{
  "producer": { "bootstrap.servers": "localhost:9092" },
  "consumer": { "bootstrap.servers": "localhost:9092" }
}

The above assumes you are running Kafka on its default ports. If this is not the case, insert your preferred port numbers.

In addition, set the repository mode to whichever repository you need to test. For example:

POST http://localhost:8080/open-metadata/admin-services/users/user1/servers/test1/local-repository/mode/in-memory-repository

The server under test must be configured to join the cohort you chose above:

POST http://localhost:8080/open-metadata/admin-services/users/user1/servers/test1/cohorts/myCohort

Configure the CTS Server

Then configure a second server – referred to as the ‘CTS server’. This is the server that actually initiates the workbenches and testcases. The CTS server’s configuration is largely the default configuration.

The CTS server will by default use a local in-memory repository, so you don’t need to set a repository mode.

In the following examples, the user name is ‘user1’ and the server is called ‘CTS1’, but you should replace these with whatever names you choose.

Set its server type name to ‘Conformance Test Server’:

POST http://localhost:8080/open-metadata/admin-services/users/user1/servers/CTS1/server-type?typeName=Conformance Test Server

Set its serverURLRoot:

POST http://localhost:8080/open-metadata/admin-services/users/user1/servers/CTS1/server-url-root?url=https://localhost:8080

Set its EventBus:

POST http://localhost:8080/open-metadata/admin-services/users/user1/servers/CTS1/event-bus

with request body:

{
  "producer": { "bootstrap.servers": "localhost:9092" },
  "consumer": { "bootstrap.servers": "localhost:9092" }
}

The above assumes you are running Kafka on its default ports. If this is not the case, insert your preferred port numbers.

The CTS server must be configured to join the same cohort as the server under test:

POST http://localhost:8080/open-metadata/admin-services/users/user1/servers/CTS1/cohorts/myCohort

The key difference in the configuration of the CTS server is that the repository test workbench is enabled:

POST http://localhost:8080/open-metadata/admin-services/users/user1/servers/CTS1/conformance-suite-workbenches/repository-workbench/repositories/test1

Starting the Servers

Start the CTS server:

POST http://localhost:8080/open-metadata/admin-services/users/user1/servers/CTS1/instance

The CTS server starts, loads the Egeria types, and then waits for the server under test (‘test1’) to register with the cohort.

Start the server under test:

POST http://localhost:8080/open-metadata/admin-services/users/user1/servers/test1/instance

When the server under test starts, it registers with the cohort. When the CTS server sees the registration of the server under test, it will start the workbench.

The workbench runs the repository conformance testcases which will query the set of types supported by the server under test. For each discovered type, the workbench will run a set of lifecycle tests and other tests that exercise the OMRS interface and repository in the server under test, to determine which of the conformance requirements it supports.

A full run of the workbench includes a lot of tests (several thousand) so it can take quite a long time to run – for example between 30 minutes and an hour.

The results of the testing are collected in the memory of the CTS server. On completion of the tests, the results can be harvested and inspected by issuing a REST request to the CTS server:

GET http://localhost:8080/servers/CTS1/open-metadata/conformance-suite/users/user1/report

The results will be returned as JSON. If the conformance test run was fairly small (only ran over a subset of profiles or types), or you use some of the reporting options to request a subset of the results, the results can be formatted and displayed in a tool like Postman. However, a full run of the test suite will generate a very large report and you may find that Postman cannot handle it. For a full run of the workbench it is advisable to use a different means of retrieving the results. For example, you may want to use the httpie tool (available in homebrew), e.g. from a bash command line:

http --json --pretty format  GET localhost:8080/servers/CTS1/open-metadata/conformance-suite/users/user1/report

You may additionally want to ‘tee’ this output to a file for subsequent browsing.

The last part of the test report provides a summary of the number of tests that were run and the number that passed or failed. The key things to check are that there are no failed or skipped tests. If you browse the earlier sections of the report, you can find per-profile and per-requirement results as well as details of which assertions succeeded and which failed.

Using the CTS Notebook

There is a Jupyter notebook in the Egeria release; the latest version is in the master branch under open-metadata-resources/open-metadata-labs/conformance-testing-labs and is called run-conformance-test-suite.ipynb. The notebook has cells for configuring and running the conformance suite as described above. At the end of the notebook there are cells that retrieve the workbench results and summarise them.

With release 1.2 of the conformance suite, the profile summary for a full-function, conformant repository should show every profile as conformant, with the exception of two profiles reported as having “unknown status”. These two profiles are not explicitly tested in the 1.2 release of the conformance suite; tests for them are expected to be added soon.

How the Conformance Suite Repository Workbench works

Profiles

The repository conformance workbench contains a set of test profiles, listed below. The METADATA_SHARING profile is mandatory – so a repository must support it in order for that repository to be certified as conformant. The other profiles are optional. The repository can either support the optional profiles or respond appropriately to indicate to a caller that the function they want to perform is not supported by the repository.

  • METADATA_SHARING (mandatory) – the ability to share metadata with other members of the cohort
  • REFERENCE_COPIES (optional) – the ability to save, lock and purge reference copies of metadata from other members of the cohort
  • METADATA_MAINTENANCE (optional) – the ability to support requests to create, update and purge metadata instances
  • DYNAMIC_TYPES (optional) – the ability to support changes to the list of its supported types while it is running
  • GRAPH_QUERIES (optional) – the ability to support graph-like queries that return collections of metadata instances
  • HISTORICAL_SEARCH (optional) – the ability to support search for the state of the metadata instances at a specific time in the past
  • ENTITY_PROXIES (optional) – the ability to store stubs for entities to use on relationships when the full entity is not available
  • SOFT_DELETE_RESTORE (optional) – the ability for an instance to be soft-deleted and restored
  • UNDO_UPDATE (optional) – the ability to restore an instance to its previous version (although the version number is updated)
  • REIDENTIFY_INSTANCE (optional) – the ability to change the unique identifier (guid) of a metadata instance
  • RETYPE_INSTANCE (optional) – the ability to change the type of a metadata instance to either its super type or a subtype
  • REHOME_INSTANCE (optional) – the ability to update the metadata collection id for a metadata instance
  • ADVANCED_SEARCH (optional) – the ability to support the use of regular expressions to search for metadata instances

There are some notes on each of these profiles in the following sections.

METADATA SHARING

This mandatory profile contains a broad set of functions that every repository connector must support. The functions include fundamental OMRS capabilities such as joining a cohort, being able to connect to and identify a metadata collection, supporting the open metadata type system and related type queries and notification events. It also includes the ability to retrieve and search for metadata instances, to manage version numbers appropriately and to generate and respond to OMRS events associated with changes to instances.

Requirements in this profile:

  • COHORT_REGISTRATION
  • REPOSITORY_CONNECTOR
  • METADATA_COLLECTION_ID
  • SUPPORTED_TYPE_QUERIES
  • SUPPORTED_TYPE_NOTIFICATIONS
  • CONSISTENT_TYPES
  • METADATA_INSTANCE_ACCESS
  • CURRENT_PROPERTY_SEARCH
  • CURRENT_VALUE_SEARCH
  • INSTANCE_NOTIFICATIONS
  • INSTANCE_VERSIONING
  • TYPE_ENFORCEMENT
  • UNSUPPORTED_TYPE_ERRORS
  • TYPEDEF_CONFLICT_MANAGEMENT

REFERENCE COPIES

This optional profile tests that the repository can save and retrieve reference copies of instances homed elsewhere. It tests that the repository will only allow valid operations to be performed on a saved reference copy. For example, it must not allow a caller to modify the status, properties, type or identity of a reference copy – all these operations can only be performed on the master copy. This profile also tests that reference copies can be deleted.

Requirements in this profile:

  • REFERENCE_COPY_STORAGE
  • REFERENCE_COPY_LOCKING
  • REFERENCE_COPY_DELETE

METADATA MAINTENANCE

This optional profile tests that the repository can manage the lifecycle of metadata instances, including create, update, and purge of instances.

Requirements in this profile:

  • ENTITY_LIFECYCLE
  • CLASSIFICATION_LIFECYCLE
  • RELATIONSHIP_LIFECYCLE

DYNAMIC_TYPES

This profile defines operations that enable the addition and update of type definitions.

Requirements in this profile:

  • TYPEDEF_ADD
  • TYPEDEF_MAINTENANCE

This profile is not yet tested in the Conformance Suite v1.2 release of the repository workbench.

GRAPH_QUERIES

This profile defines a set of operations that enable the caller to navigate parts of the metadata instance graph. These operations can return the entities and relationships attached to and surrounding a specified entity (to a range of depths) or return the entities that are connected (directly or indirectly) to the specified entity. Another operation enables the caller to ask for all entities and relationships on all paths between a start entity and end entity, allowing the caller to ascertain how the two entities are related.

Requirements in this profile:

  • ENTITY_NEIGHBORHOOD
  • CONNECTED_ENTITIES
  • LINKED_ENTITIES

Support for this profile is tested by ascertaining what entity and relationship types the repository supports. The testcase then constructs an in-memory graph using a variety of relationship types and directions and a variety of entity types, all of which the repository should be able to support. In parallel with the construction of this in-memory instance graph, operations are dispatched to the repository so that it constructs and stores a matching instance graph. The three operations listed above are then tested, and for each operation the expected result is computed from the in-memory graph and compared to the result returned by the repository connector.

HISTORICAL_SEARCH

This optional profile describes the ability to store historic state of the repository and to search for and retrieve instances as they existed at an earlier time.

Requirements in this profile:

  • HISTORICAL_PROPERTY_SEARCH
  • HISTORICAL_VALUE_SEARCH

Support for this profile is tested as part of the entity and relationship lifecycle testcases. During the running of these testcases, a note is made of the time just prior to an instance being deleted from the repository. On completion of the delete tests, the testcase performs a getEntityDetail() or getRelationship() operation specifying the earlier time. Because the time specified was prior to the delete, the operation should return the version of the instance that existed prior to being deleted.
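
As a sketch in Java (assuming a ‘beforeDelete’ timestamp captured as described above; the wrapper class is illustrative):

import java.util.Date;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.OMRSMetadataCollection;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.properties.instances.EntityDetail;

public class HistoricalSearchExample
{
    // Retrieve the state of an entity as it existed at an earlier time,
    // e.g. a timestamp captured just before the entity was deleted.
    public EntityDetail getPreDeleteVersion(OMRSMetadataCollection metadataCollection,
                                            String                 userId,
                                            String                 entityGUID,
                                            Date                   beforeDelete) throws Exception
    {
        // Passing an asOfTime asks the repository for the instance's state
        // at that moment; null would mean the current state.
        return metadataCollection.getEntityDetail(userId, entityGUID, beforeDelete);
    }
}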

ENTITY_PROXIES

This optional profile describes the ability to store a proxy instance of an entity, in lieu of a full copy of the entity.

Requirements in this profile:

  • STORE_ENTITY_PROXIES
  • RETRIEVE_ENTITY_PROXIES

Support for this profile will be tested as part of the relationship lifecycle testcase. During the running of this testcase, instances of relationships are created, mostly alongside local instances of the entities at the ends of the relationship. It is also possible to create a relationship instance when there is no local entity instance (neither a locally homed instance nor a reference copy).

SOFT_DELETE_RESTORE

This optional profile includes the ability to perform a ‘soft’ delete of an instance, such that the instance can subsequently be restored. Following a soft delete, the instance remains in the repository but is marked as being in deleted state and is not returned in response to searches or read requests.

Requirements in this profile:

  • SOFT_DELETE_INSTANCE
  • UNDELETE_INSTANCE
  • NEW_VERSION_NUMBER_ON_RESTORE

To test for support of this profile, the testcases attempt to delete entity and relationship instances. If the delete operation is supported (i.e. it does not respond with FunctionNotSupported), further tests are performed to ensure that the deleted instances are not returned in response to requests to ‘get’ or ‘find’ the instances. Further tests are also run to restore instances and to purge deleted instances.

UNDO_UPDATE

This optional profile includes the ability to revert changes previously made to an instance. The result of an undo operation should be that the instance is returned to its previous state, with the exception that its version number will have moved forward.

Requirements in this profile:

  • RETURN_PREVIOUS_VERSION
  • NEW_VERSION_NUMBER_ON_UNDO

The ability to undo an update is tested as part of the entity and relationship lifecycle testcases.

REIDENTIFY_INSTANCE

This optional profile includes the ability to change the identity (i.e. the GUID) of an instance. This operation can only be performed by the metadata repository that is the home of the instance (i.e. holds the master copy of the instance).

Requirements in this profile:

  • UPDATE_INSTANCE_IDENTIFIER
  • SEND_REIDENTIFIED_EVENT
  • PROCESS_REIDENTIFIED_EVENT

The TestSupportedEntityReidentify and TestSupportedRelationshipReidentify testcases test support for this profile. These testcases create entity and relationship instances and then change their GUIDs. This is only a valid operation on the home repository for the instance. If the repository reports that the reidentify operation is supported, further tests are performed to ensure that the reidentified instances can be retrieved using the new GUID and that they are not returned in response to requests using the original GUID.

RETYPE_INSTANCE

This optional profile includes the ability to change the type of an instance. This operation can only be performed by the metadata repository that is the home of the instance (i.e. holds the master copy of the instance).

Requirements in this profile:

  • UPDATE_INSTANCE_TYPE
  • SEND_RETYPED_EVENT
  • PROCESS_RETYPED_EVENT

The TestSupportedEntityRetype testcase tests support for this profile. This testcase creates entity instances and then retypes them to each of their subtypes and back again to their original type. The reason this is limited to entity instances is that retyping to a subtype and then reverting to the original type means that all properties of the original instance remain valid throughout the testcase, which is a convenient condition to verify. It would be possible to add further tests for alternative type changes to entities and to test retyping of relationship instances.

REHOME_INSTANCE 

This optional profile includes the ability to change the home (i.e. the metadataCollectionId) of an instance. This is a ‘pull’ operation – i.e. the operation can only be performed by the metadata repository that is to become the new home of the instance. Such a repository must currently be in possession of a reference copy of the instance. If the operation is successful, that reference copy will be promoted to become the new master copy.

Requirements in this profile:

  • UPDATE_INSTANCE_HOME
  • SEND_REHOMED_EVENT
  • PROCESS_REHOMED_EVENT

The TestSupportedEntityReferenceCopyLifecycle and TestSupportedRelationshipReferenceCopyLifecycle testcases test support for this profile. These testcases create entity and relationship instances at the CTS server, which triggers the creation of reference copy instances in the repository under test. The testcases then instruct the server under test to become the new home repository for each instance. This has to be performed prior to the delete of the instance in the CTS repository, otherwise the reference copy would also be purged in the repository under test – making the test impossible. Because this is not a natural sequence of events (the instance in the CTS still exists) no further tests are performed after this in each of the testcases.

ADVANCED_SEARCH

This optional profile includes the ability to search for entity and relationship instances using property values or search criteria that contain regular expressions. This is an optional capability of the findEntitiesByProperty and findEntitiesByPropertyValue methods of the metadata collection interface, and the corresponding methods for relationships. It is also an optional capability of the findEntitiesByClassification method. The mandatory search capabilities of the METADATA_SHARING profile do not need to include support for general regular expressions. Instead, it is sufficient for that profile to support literal values and the limited range of regular expressions formulated by the Repository Helper methods. In contrast, the ADVANCED_SEARCH profile includes support for generic Java regular expression syntax.

Requirements in this profile:

  • ADVANCED_PROPERTY_SEARCH
  • ADVANCED_VALUE_SEARCH

The TestSupportedEntityPropertyAdvancedSearch and TestSupportedRelationshipPropertyAdvancedSearch testcases test support for this profile. These testcases create entity and relationship instances and then issue find requests to search for them, using regular expressions that will match different subsets of the known property values of the instances. The result of each search operation is compared to a computed expected result.

Testcases

There are currently 18 testcases in the repository conformance workbench. Each testcase focuses on a set of requirements that are generally associated with one profile, but some testcases test function across multiple profiles.

Most tests are associated with a type – such as a particular entity type, relationship type or classification type. In these cases, a test case (such as TestSupportedEntityLifecycle) will be run for each entity type supported by the repository being tested. A test case such as TestGraphQueries is not concerned with a particular type – instead it discovers the set of types supported by the repository under test, and constructs a test graph of instances of types within the set of supported types.

Most testcases create instances by invoking the metadata collection interface of the repository; then they update, search for or delete those instances. Each testcase is self-contained so that it can be run in isolation if needed. So testcases that create instances also provide a cleanup method that removes the entities on completion of the test.

Some repositories do not support creation of instances – they are effectively used in a read-only capacity – so to test them, there are testcases that search for existing instances in the repository. Two testcases of this kind test the findXXXByProperty and findXXXByPropertyValue methods that search for instances. This capability is part of the mandatory profile for metadata sharing, which does not require that a repository supports creation or maintenance of instances. These testcases perform a broad search to retrieve an initial set of results, then use that known set to issue further, finer-grain searches and check that the results are consistent with the initial set and the query performed.

Some testcases are ‘single-phase’ – meaning they are created and invoked just once for a given type – within that invocation the testcase will create, test and destroy whatever instances it uses. Other testcases are ‘multi-phase’ – meaning they are created and invoked multiple times. The search test cases are multi-phase testcases. They are called with a phase indicator which is set to CREATE, EXECUTE and finally CLEAN. During the CREATE phase the testcase creates the instances it will need to test its given entity, relationship or classification type. The EXECUTE phase is when the tests are run, such as performing ‘find’ operations against the repository. The CLEAN phase is used to clean up the instances created by this testcase. The benefit of multi-phase testcases is that all the testcases’ CREATE invocations can be performed first, so during the EXECUTE phase the ‘finds’ are run against a repository that is populated with instances of all supported types. This allows testing of features like sub-type retrieval and type filtering by specifying the typeGUID.

Conformance of ODPi Egeria 1.2 Repository Connectors

As we get ready to finalise release 1.2 of Egeria, we’ve been testing the built-in repository connectors against the 1.2 release of the Conformance Suite. The results are shown below.

  • FULL means the repository fully supports this profile
  • PARTIAL means the repository supports some of this profile
  • NONE means the repository does not support this profile
  • NA means this profile is not tested in the current release
  • METADATA_SHARING (share metadata with other members of the cohort) – in-memory: FULL; local graph: FULL
  • REFERENCE_COPIES (save, lock and purge reference copies of metadata from other members of the cohort) – in-memory: FULL; local graph: FULL
  • METADATA_MAINTENANCE (create, update and purge metadata instances) – in-memory: FULL; local graph: FULL
  • DYNAMIC_TYPES (change the list of supported types while running) – in-memory: NA; local graph: NA
  • GRAPH_QUERIES (graph-like queries that return collections of metadata instances) – in-memory: PARTIAL; local graph: FULL
  • HISTORICAL_SEARCH (search for the state of metadata instances at a specific time in the past) – in-memory: FULL; local graph: NONE
  • ENTITY_PROXIES (store stubs for entities to use on relationships when the full entity is not available) – in-memory: NA; local graph: NA
  • SOFT_DELETE_RESTORE (soft-delete and restore an instance) – in-memory: FULL; local graph: FULL
  • UNDO_UPDATE (restore an instance to its previous version) – in-memory: FULL; local graph: NONE
  • REIDENTIFY_INSTANCE (change the unique identifier (guid) of a metadata instance) – in-memory: FULL; local graph: FULL
  • RETYPE_INSTANCE (change the type of a metadata instance to its super type or a subtype) – in-memory: FULL; local graph: FULL
  • REHOME_INSTANCE (update the metadata collection id for a metadata instance) – in-memory: FULL; local graph: FULL
  • ADVANCED_SEARCH (use of regular expressions to search for metadata instances) – in-memory: FULL; local graph: FULL

Going Forward

As new Egeria releases are developed and new function is added, additional test cases will be added to the Conformance Suite and we expect to add refinements to existing test cases to improve the scope and granularity. Tests for dynamic types and entity proxies will be added.

Repository connectors are certified as conformant at a particular version of the Conformance Suite. If you are developing or testing a repository connector, ensure that you test with the relevant release of the conformance suite.

For more information:

Please refer to the Open Metadata Conformance Suite page in the ODPi Egeria git repository:

https://github.com/odpi/egeria/blob/master/open-metadata-conformance-suite/docs/README.md

Webinar: New Approaches to Managing Access to Sensitive Data


Time and Date: Thursday, December 5, 2019, 10:00am US Eastern Time

Link and Meeting ID: https://zoom.us/j/449431462  449 431 462

Abstract

What happens when you need your data scientist to repeatedly work with your most valuable and sensitive data? How do you prevent them from seeing more than they need, whilst ensuring that they have a productive and enabling work environment? In this webinar we look at three different approaches to managing secure access to data sets that include the most personal and sensitive data.

In each approach we use increasingly automated means to create selective access to an employee data set that includes correlated personal, performance and financial information. The technology involved is all open source and includes ODPi Egeria, Apache Avro and Palisade. Together they will change the way you think about access control.

About the Presenter 

Mandy Chessell CBE FREng CEng FBCS is an IBM Distinguished Engineer, Master Inventor and Fellow of the Royal Academy of Engineering. Mandy is a trusted advisor to executives from large organisations, working with them to develop their strategy and architecture relating to the governance, integration and management of information. She is also driving IBM’s strategic move to open metadata and governance through participation in the Egeria open source project. She also serves as the ODPi Technical Steering Committee chairperson.

About ODPi Egeria

Egeria is the world’s first open source metadata standard. It provides open APIs, event formats, types and integration logic so organizations can share data management and governance across the entire enterprise without reformatting or restricting the data to a single format, platform, or vendor product.

Data governance and security are critical concerns for data-driven organizations. And increasing government regulations ensure that the use and management of metadata will continue to be an important focus for anyone who uses and stores important data.

Full details on the ODPi Egeria project page: https://www.odpi.org/projects/egeria

ODPi Egeria: How to find entities and relationships


How should an Egeria OMAS find entities and relationships?

An Open Metadata Access Service (OMAS) is a specialized set of APIs and events intended to make using open metadata easier for a specific community of developers. New OMASs can be contributed directly to the Egeria project, or developed/distributed independently. This blog post should be of interest to anyone writing an OMAS.

An OMAS often needs to create or retrieve an entity or relationship.

The following patterns are common:

  • The OMAS creates an entity then continues to work with it. The addEntity() method of the Metadata Collection interface returns the EntityDetail object. The OMAS can keep that object around and operate on the entity. If the OMAS retains knowledge of the entity GUID it can later use the getEntity() method to retrieve the same entity again. The same pattern is possible with addRelationship() and getRelationship(). If the GUID is available, getting the instance is straightforward.
  • The OMAS needs to retrieve an entity or relationship that was created earlier. In this case, the OMAS does not know the entity or relationship GUID. In this case the OMAS can use one of the ‘find’ methods to search for the entity or relationship instance. The OMAS may expect to get back exactly one instance, or it may expect a set of instances. In either case, if a set of instances found, the OMAS may filter it to identify the particular entity or relationship it needs.
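
Here is a minimal sketch in Java of the first pattern, assuming the repositoryHelper and metadataCollection objects have been obtained from the OMAS’s enterprise connector; the type name, property value and userId are illustrative, and the exact method signatures should be checked against your Egeria version:

import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.OMRSMetadataCollection;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.properties.instances.EntityDetail;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.properties.instances.InstanceProperties;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.properties.instances.InstanceStatus;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.repositoryconnector.OMRSRepositoryHelper;

public class CreateThenRetrieveExample
{
    public EntityDetail createAndRetrieve(OMRSRepositoryHelper   repositoryHelper,
                                          OMRSMetadataCollection metadataCollection,
                                          String                 userId) throws Exception
    {
        final String sourceName = "MyOMAS";            // illustrative diagnostic label
        final String methodName = "createAndRetrieve";

        // Look up the GUID of the open metadata type to create.
        String typeGUID = repositoryHelper.getTypeDefByName(sourceName, "GlossaryTerm").getGUID();

        // Build the initial properties for the new entity.
        InstanceProperties properties = repositoryHelper.addStringPropertyToInstance(
                sourceName, null, "qualifiedName", "gt-employee-first-name", methodName);

        // addEntity() returns the full EntityDetail, including its new GUID.
        EntityDetail newEntity = metadataCollection.addEntity(userId,
                                                              typeGUID,
                                                              properties,
                                                              null,                  // no initial classifications
                                                              InstanceStatus.ACTIVE);

        // Later, the entity can be retrieved again using its GUID.
        return metadataCollection.getEntityDetail(userId, newEntity.getGUID());
    }
}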

To keep things brief, the remainder of this article focuses on entities. Working with relationships is similar.

Finding things in a metadata repository

If an OMAS needs to find an entity or relationship, and the instance GUID is not known, the OMAS can use one of the find methods to search for it in the metadata repositories. The find methods are on the Metadata Collection interface supported by OMRS repositories.

To be precise, we are referring to the OMRSMetadataCollection class. This class is extended by the EnterpriseOMRSMetadataCollection class that the OMAS has access to via its Enterprise Connector.

The Metadata Collection interface provides the findEntitiesByProperty() and findEntitiesByPropertyValue() methods. The first method accepts a ‘match properties’ object which can be used to specify a ‘match’ value for each property. The second method accepts a search string which is compared to all string properties. There are similar methods for finding relationships.

Exact match or regular expression?

A string used as a match property or search string can be used as an exact match or as a regular expression.  The author of an OMAS needs to consider what the end-user is expecting when the OMAS performs a search. The author can then decide whether a string should be matched exactly or treated as a regular expression (‘regex’).

The Metadata Collection interface treats all search strings as regular expressions.

If the author expects a string to be regex-matched, they should compose the string as a regular expression and call the Metadata Collection interface.

If an OMAS author wants an exact match there is a set of helper methods in the OMRSRepositoryHelper. These methods support escaping of relatively simple search strings in a manner that is supported by most OMRS repository connectors, including those for repositories that do not support full regular expression syntax. An OMAS author should always use the repository helper methods when they can. For more complex searches, beyond the level supported by the helper methods, an OMAS author should implement their own regular expression, but it is important to be aware that not all repositories will support all regular expressions. The regular expressions provided by the helper are a minimal set that most repositories are able to support. More complex expressions can be used with repositories that have full regex processing, such as the in-memory repository or graph repository.

For example, if an OMAS author wants an exact match of a string, they should call OMRSRepositoryHelper.getExactMatchRegex(), which will ‘escape’ the whole string, regardless of content. This helper method frames the whole string with \Q and \E escape sequences. It’s OK to call getExactMatchRegex() even if the string value only contains alphanumeric characters and has no regex special characters. However, it should only be used for escaping a single, simple string – don’t use it for a string that already contains either of these escape sequences. Also, don’t use it to build up complex regular expressions.

Here’s an example. Metadata objects frequently have compound names composed of multiple fields with separators. For example, an OMAS may need to retrieve the entity with qualifiedName equal to ‘[table]EMPLOYEE.[column]FNAME’. Some of these characters are special characters in a regex. If the OMAS needs an exact match, it can call OMRSRepositoryHelper.getExactMatchRegex() to escape the search string. Although the string contains regex special characters, the search will only return an entity with the exact value.

Exact match of a substring

The OMRSRepositoryHelper also provides helper methods that will escape a string and build a regex around it so it will match values that contain, start or end with the original string value. These methods combine exact match processing with relatively simple regex substring expressions. If an OMAS needs a more complicated regex the author should code it directly instead of using the OMRSRepositoryHelper methods.
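
For example, continuing with the repositoryHelper from the earlier sketch (these method names are from recent releases of OMRSRepositoryHelper; treat the exact names as an assumption for your version):

// Build escaped 'contains', 'starts with' and 'ends with' expressions around
// a literal fragment; each helper escapes the fragment and then wraps it in
// the appropriate simple regex form.
String contains   = repositoryHelper.getContainsRegex("EMPLOYEE");  // values containing EMPLOYEE
String startsWith = repositoryHelper.getStartsWithRegex("[table]"); // values starting with [table]
String endsWith   = repositoryHelper.getEndsWithRegex("FNAME");     // values ending with FNAME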

Getting back more than you expected

Using an exact match doesn’t guarantee you will get only one entity, as there may be multiple entities that have a matching property value. Egeria is designed to be distributed and eventually consistent, so the repositories do not enforce uniqueness. Even if a property is ‘unique’ there may be more than one instance with that value within a cohort.

Filtering a search result

If an OMAS searches and gets back a set of entities, it may need to filter the set to identify an individual entity. The filtering might compare each search match property with the instance properties of each returned entity. However, if the OMRSRepositoryHelper methods were used to escape any match properties prior to the search, the OMAS would need to ‘unescape’ those match properties. It could do this by calling the OMRSRepositoryHelper.getUnqualifiedLiteralString() method. Alternatively, the OMAS could construct a pair of match properties objects: one is never escaped, and the other is identical except that it is escaped just prior to the search. The OMAS would then use the unescaped object for post-search filtering. A small sketch of the first option follows.
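
Continuing with the repositoryHelper from the earlier examples:

// Recover the original literal from an escaped match value so it can be
// compared directly with the instance properties of each returned entity.
String escaped = repositoryHelper.getExactMatchRegex("[table]EMPLOYEE.[column]FNAME");
String literal = repositoryHelper.getUnqualifiedLiteralString(escaped);
// literal is now "[table]EMPLOYEE.[column]FNAME" again, suitable for
// post-search filtering comparisons.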

Using Egeria to Integrate with a Data Virtualization Tool


What we want to achieve

For a business user, what is relevant are the business concepts and the connections between them. What we want to achieve is a consistent and meaningful representation of data resources, along with policies for handling them. We want the end user to have access only to governed, rich metadata.

How to obtain it?

We will use the CocoPharmaceuticals database as a sample resource. As a result, the end user will be able to see a business view on top of the underlying resource and, similarly, a technical view.
By assigning a glossary term to a relational column, we will trigger the creation of two views on top of the parent table:

  • a business view, with all the columns that have business terms associated, using the business term name as the column name
  • a technical view, with all the columns that have business terms associated, using the technical name as the column name

Solution Overview Components

  1. Oracle database containing Coco Pharmaceuticals sample data. We will use the table EMPSALARYANALYSIS as an example. Link: https://github.com/odpi/egeria/tree/master/open-metadata-resources/open-metadata-deployment/sample-data/coco-pharmaceuticals
  2. IBM Information Governance Catalog (IGC) as the repository containing the glossary terms and the metadata for the above database. This repository is integrated into the Open Metadata world using the proxy pattern: https://github.com/odpi/egeria/blob/master/open-metadata-publication/website/open-metadata-integration-patterns/adapter-integration-pattern.md. The proxy is used to translate proprietary formats, events and APIs to Open Metadata standards (and the other way around). Link: https://github.com/odpi/egeria-connector-ibm-information-server
  3. Kafka as the event bus.
  4. Open Metadata server configured with Open Metadata Access Services (OMAS) enabled.
    Open Metadata Access Services are sets of APIs targeting certain types of consumers and use cases. For more general details see: https://github.com/odpi/egeria/blob/master/open-metadata-implementation/access-services/README.md
    In the current example we will use two OMASs:
    1. Information View OMAS is responsible for integrating with virtualization and BI tools. For this flow we use it because we need the structure and context (location) of the table.
    2. Data Platform OMAS is responsible for integrating with tools that create new data assets. In this case its responsibility is to integrate with the data virtualization tool used in this setup, GaianDB, and to model the views created as new data assets.
  5. Atlas repository as a repository integrated with Open Metadata natively, since it implements the Open Metadata APIs, protocols and types. Using the Atlas built-in UI we will explore the entities created to represent the views.
    More details about the native pattern can be found here: https://github.com/odpi/egeria/blob/master/open-metadata-publication/website/open-metadata-integration-patterns/native-integration-pattern.md
  6. Virtualizer: a component designed to integrate with the data virtualization tool, and therefore responsible for running the actual queries and statements that create the views. It also produces events describing the view structures, along with details about the host and platform creating the views (GaianDB in this case).
    Link: https://github.com/odpi/egeria/blob/master/open-metadata-implementation/governance-servers/virtualization-services/README.md
  7. GaianDB as the data virtualization tool. GaianDB is a lightweight federated database over multiple sources, providing a single centralized view over multiple, heterogeneous back-end data sources. More details can be found here: https://github.com/gaiandb/gaiandb.
    Our setup includes a front-end Gaian node to which the virtualizer connects, and a back-end Gaian node connected to the Oracle database. Views will be created in the front-end Gaian node and linked to the real database tables from the back-end Gaian node.

All Open Metadata servers (the IGC proxy, the OMAS server and Atlas) are configured to be part of the same cohort, therefore they receive and publish events on the same cohort topic.

Configuring the environment

  1. Setting up the cohort.
    At the centre of the setup is the concept of a cohort. A cohort is a collection of metadata repositories sharing metadata using the Open Metadata Repository Services (OMRS).
    We’ve configured the three members as part of the same cohort. This implies that they listen and publish OMRS events on the same cohort topic, and therefore each holds registration details about all the other members of the cohort in its local registry store. As a result, the repositories can act as one single virtual repository.
  2. Virtualization layer: Virtualizer, the data virtualization tool and the underlying database. Virtualizer is the component designed to interact with GaianDB; its responsibility is to connect to the front-end node (Gaian Node 2) and create the views in this node. The data for these views is located in the Oracle database, but is made available through the back-end node (Gaian Node 1) to the other connected nodes (Gaian Node 2 in this scenario).

Flow

  1. First step, triggering the creation of the views: a business user assigns a business term to a column in IGC
  2. As a result, a new event is published on InfosphereEvent (an internal IGC topic)
  3. The IGC event is consumed by the proxy and translated to an OMRS event and types. A new OMRS event containing details about the business term and column is published on the cohort topic, thus reaching all the members of the cohort.

The event will include:

  • Details about the type of event. In this case the event is a NEW_RELATIONSHIP_EVENT with type SemanticAssignment
  • Mandatory properties for the two entities between which the relationship is created. These are necessary to identify the entities uniquely in the Open Metadata (OM) world. This includes the type of each entity (RelationalColumn and GlossaryTerm), the guid of each entity as its unique identifier in the repository, and the qualifiedName as a unique identifier at the entity type level
  • Other details such as event provenance, originator, auditing and versioning info

Link: https://github.com/odpi/egeria/blob/master/open-metadata-publication/website/java-events/new-semantic-assignment-OMRS-Event.json

4. The event is picked up by both Atlas and the OMAS server, because they are cohort members and not originators of the initial event. As a result of processing this event, the Atlas cohort member will create a SEMANTIC_ASSIGNMENT relationship between the entities representing the column and the glossary term.

The OMAS server will also pick up the event. Because it is configured with access services enabled, all enabled access service listeners will receive this event and either process it or discard it, based on their own logic and use cases. Starting from the column guid as a unique identifier in the OM world, Information View OMAS will retrieve all entities describing the table, thus building the full context. This includes host details, connector type, database name, schema name, table name, columns and business terms linked to the columns, and column constraints such as primary and foreign keys.

This event containing the full context is published to Information View Out topic as input for virtualizer component.

Link: https://github.com/odpi/egeria/blob/master/open-metadata-publication/website/java-events/table-context-EMPSALANALYSIS.json

5. Virtualizer processes the event and creates the two views in the front-end node (Gaian Node 2).

The business view includes only the columns that have business terms assigned, and the column names are the business term names: a business user is not interested in a technical name (like FNAME); what is relevant is the actual meaning, the business term name ‘First Name’.

6. Virtualizer publishes the events describing the views to the Data Platform IN topic.

Sample event can be found here: https://github.com/odpi/egeria/blob/master/open-metadata-publication/website/java-events/information-view-EMPSALANALYSIS.json

Please note that this event also contains details about the host and platform where the view was created, along with the view columns, the business terms associated with these columns, and the underlying database columns.

7. Data Platform OMAS consumes the events from the Data Platform IN topic.

As a result, Data Platform OMAS issues calls to the enterprise connector to create the entities and relationships modelling the views. The enterprise connector triggers a federated request by calling the connectors stored in the server’s registry store, thereby creating the entities and relationships modelling the view.

Because the current IGC integration doesn’t support creation of entities and relationships, the entities and relationships defining the view are created only in Atlas. The business and technical views are represented in Open Metadata as RelationalTable entities, and every view column has a relationship to its business term and to the real database column. The data virtualization tool (GaianDB) and the database asset (Gaian database) are also modelled by the entities SoftwareServer, Endpoint, Connection, ConnectorType and Database.

In Atlas, all of the view columns are linked to a RelationalTableType, each view column has a ‘semantic assignment’ relationship to its glossary term, and each view column is connected to the actual database source column through a ‘queryTarget’ relationship.

Getting started with Egeria notebooks using docker


Do you like learning a new technology hands-on, yet also want to understand the concepts? Concerned it will take too long to get started?

Wait no longer! You can now experiment with Egeria by making use of our new Jupyter notebooks, installed via Docker. Within minutes (plus download time) you’ll be happily running REST API calls against a live Egeria environment, and gaining an understanding of Egeria’s concepts.

In this first Blog post I’ll take you through getting set up with a lab environment and running your first notebook.

Prerequisites

Before we get started on setting up Egeria, you’ll need access to a few things:

  • docker – the environment in which to run Egeria
  • git – the source code control tool to get files needed

Setting up docker

Docker makes it easy to run pre-created environments in ‘containers’ which are isolated from the host machine such as your laptop. The instructions here were tested with ‘Docker for Mac’, but you can also use ‘Docker for Windows’, or docker installed on linux.

Note: The containers are linux containers built for Intel 64 bit architecture, so they won’t work on ARM, nor will they work in Windows containers …

Once you’ve installed docker, make sure it’s running as covered in the docs above. If using windows or mac, you should see a docker icon (a whale) on the toolbar.

Setting up git

git is the tool we use to manage our code. If you don’t have it installed, install it from the git website (easiest), or else from your linux distribution or Homebrew. No special configuration is needed.

Retrieving the Egeria code

You’re now ready to retrieve the Egeria code. Whilst we only need a few files for the docker work, this will be useful for further exercises and for following along with other blog posts.

Open up a command window (mac, windows or linux), switch to a suitable directory and type:

git clone https://github.com/odpi/egeria 

This will pull down the egeria code locally to your machine.

Running the notebooks

We’re now ready to run the notebook. To do this we will use a feature of docker called ‘docker-compose’. This is a simple approach to running multiple containers (think of these as applications or services) together.

For this example we are running several containers together, including a Jupyter notebook server, Egeria OMAG Server Platforms, and Apache Kafka.

To get started with the docker compose environment, run the following (the cd path is all one line – and replace / with \ for Windows):


cd egeria/open-metadata-resources/open-metadata-deployment/compose/tutorials
docker-compose -f egeria-tutorial.yaml up

At this point you’ll notice a lot of activity. Once it has settled down, go to a web browser and open http://localhost:18888 . You should see a Jupyter notebook environment open, with a list of our current labs shown in the left-hand folder tree.

If you don’t see the UI appear, press CTRL-C and retry the docker compose command. Sometimes a slower network download can cause things not to start properly the first time.

Running the notebooks

In the Jupyter UI navigate to ‘administration’ and open up the `read-me-first` notebook. This introduces you to setting up an Egeria environment for a fictional company, ‘Coco Pharmaceuticals’.

The large blue bar is effectively a cursor: it shows where you are in the notebook. Read each paragraph in turn and then hit the ‘play’ button to progress through the notebook. You can also press SHIFT-ENTER to run the current step and move to the next one. As well as text, some paragraphs contain code which is executed live against a real Egeria server in your docker environment.

Once you’ve worked through this notebook try ‘managing-servers’ which goes into more specifics of how to start and stop servers. Other tutorials get into topics such as accessing assets.

Shutting down the environment

docker-compose -f egeria-tutorial.yaml down

Updating the environment

Each time the environment is started the same code will be run, since the container is only downloaded the first time it’s used.

In order to refresh the containers and run the latest code (recommended), run:

docker-compose -f egeria-tutorial.yaml pull

Further information

If you have any problems running the notebooks, reach out to the team on the #egeria-discussions Slack channel.

The containers we used above can be used in other ways too – stay tuned to the blog to find out more.

Implementing an Open Metadata Connector


Eager to integrate your own metadata repository into the Egeria ecosystem, but not sure where to start? This article walks through how to do just that: implementing an open metadata repository connector according to the standards of ODPi Egeria.

The following sections outline the steps involved.

Introduction

Integrating a metadata repository into the Open Metadata ecosystem involves coding an Open Metadata Collection Store Connector. These are Open Connector Framework (OCF) connectors that define how to connect to and interact with a metadata repository.

Open Metadata Collection Store Connectors are typically comprised of two parts:

  1. The repository connector: which provides a standard repository interface that communicates using the Open Metadata Repository Services (OMRS) API and payloads.
  2. The event mapper connector: which captures events when metadata has changed in the metadata repository and passes these along to the Open Metadata Repository Services (OMRS) cohort.

The event mapper connector often calls the repository connector to translate repository-native events into Egeria’s OMRS events.

While various patterns can be used to implement these, perhaps the simplest and most loosely-coupled is the adapter. The adapter approach wraps the proprietary interface(s) of the metadata repository to translate these into OMRS calls and payloads. In this way, the metadata repository can communicate as if it were an open metadata repository.
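
As a rough illustration, here is a skeleton of an adapter-style repository connector. This is a sketch only: MyMetadataCollection is a hypothetical class that would wrap your repository’s proprietary API behind the OMRSMetadataCollection interface, and the base-class fields and signatures shown are those of recent Egeria releases, so verify them against your version.

import org.odpi.openmetadata.frameworks.connectors.ffdc.ConnectorCheckedException;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.OMRSMetadataCollection;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.repositoryconnector.OMRSRepositoryConnector;
import org.odpi.openmetadata.repositoryservices.ffdc.exception.RepositoryErrorException;

public class MyRepositoryConnector extends OMRSRepositoryConnector
{
    @Override
    public void start() throws ConnectorCheckedException
    {
        super.start();
        // Open the connection to the proprietary repository here, using the
        // endpoint details available from the connection properties.
    }

    @Override
    public OMRSMetadataCollection getMetadataCollection() throws RepositoryErrorException
    {
        if (metadataCollection == null)
        {
            // MyMetadataCollection (hypothetical) translates OMRS metadata
            // collection calls into the repository's proprietary API calls.
            metadataCollection = new MyMetadataCollection(this,
                                                          serverName,
                                                          repositoryHelper,
                                                          repositoryValidator,
                                                          metadataCollectionId);
        }
        return metadataCollection;
    }

    @Override
    public void disconnect() throws ConnectorCheckedException
    {
        super.disconnect();
        // Close the connection to the proprietary repository here.
    }
}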

The remainder of this article will walk through:

  • implementing such an adapter pattern as a connector, and
  • using the resulting connector through the proxy capabilities provided by the core of Egeria.

1. Design work


Before delving straight into the implementation of a connector, you really need to start with a level of design work. Fundamentally this will involve two steps:

  1. Mapping to the meta-model concepts of Egeria: in particular Entities, Classifications and Relationships.
  2. Mapping to the actual open metadata types of Egeria: e.g. GlossaryTerm, GlossaryCategory, RelationalColumn, and so on.

Map to the Egeria meta-model concepts

The best place to start with the design work is to understand the meta-model of Egeria itself. Consider how your metadata repository will map to the fundamental Egeria metadata concepts: Entities, Classifications, and Relationships.

When implementing the code described in the remainder of this article, you’ll be making use of and mapping to these fundamental Egeria concepts. Therefore, it is well worth your time now understanding them in some detail. This is before even considering specific instances of these types like GlossaryTerm or GlossaryCategory.

Meta-model mapping may be quite a straightforward conceptual mapping for some repositories. For example, Apache Atlas has the same concepts of Entities, Classifications and Relationships all as first-class objects.

On the other hand, not all repositories do. For example, IBM Information Governance Catalog (IGC) has Entities, and a level of Relationships and Classifications — but the latter two are not really first-class objects (i.e. properties and values cannot exist on them).

Therefore you may need to consider:

  • whether to attempt to support these constructs in your mappings, and
  • if so, how to prescriptively represent them (if they are not first-class objects).

For example, in the implementation of the sample IGC connector we suggest using categories with specific names in IGC to represent certain classifications. Additionally, one of the reasons for implementing a read-only connector is that we can still handle relationships without any properties: the Egeria relationships we translate from IGC simply carry empty properties.

Map to the Egeria open metadata types

Once you have some idea of how to handle the mapping to the meta-model concepts, check your thinking by working through a few examples. Pick a few of the open metadata types and work out on paper how they map to your metadata repository’s pre-existing model. Common areas to start with are GlossaryTerm and GlossaryCategory for glossary (business vocabulary) content, RelationalColumn and related types for relational database structures, and so on. A small sketch of capturing such a mapping follows below.

Most of these should be fairly straightforward after you have an approach for mapping to the fundamental meta-model concepts.
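One lightweight way to capture the outcome of this paper exercise is a simple lookup from open metadata type names to your repository’s native type names. In this sketch only the open metadata names are real Egeria types; the native names are hypothetical, and the code is kept Java 8-compatible per the build prerequisites below:

import java.util.HashMap;
import java.util.Map;

public class TypeNameMapping {

    // Open metadata type name -> hypothetical native type name.
    static final Map<String, String> OMRS_TO_NATIVE = new HashMap<>();
    static {
        OMRS_TO_NATIVE.put("GlossaryTerm",     "term");            // business vocabulary entry
        OMRS_TO_NATIVE.put("GlossaryCategory", "category");        // vocabulary grouping
        OMRS_TO_NATIVE.put("RelationalColumn", "database_column"); // relational structure
    }

    static boolean isSupported(String omrsTypeName) {
        return OMRS_TO_NATIVE.containsKey(omrsTypeName);
    }
}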

Then you’ll also want to decide how to handle any differences in types between the open metadata types and your repository’s pre-existing types:

  • Can your metadata repository be extended with new types?
  • Can your metadata repository’s pre-existing types be extended with new properties?
  • What impacts might be caused to repositories (and metadata instances) that already exist if you add to or extend the types?
  • What impacts will this have on your UI or how users interact with these extensions?

Your answers to these questions will inevitably depend on your specific metadata repository, but should help you decide on what approach you’d like to take:

  • Ignore any open metadata types that do not map to your pre-existing types.
  • Add any Egeria open metadata types that do not exist in your repository.
  • Add Egeria open metadata properties to your pre-existing types when Egeria has additional properties that do not yet exist in your type(s).
  • Implement a read-only connector (possibly with some hard-coding of property values) for types that are partially mappable, but not easily extended to support the full set of properties defined on the open metadata type.
  • and so on.

2. Pre-requisites


Implementing an adapter can be greatly accelerated by using the pre-built base classes of Egeria. Therefore building a connector using Java is likely the easiest way to start.

This requires an appropriate build environment with both Java (v1.8 or later) and Maven.

Set up a project

Egeria has been designed to allow connectors to be developed in projects independently from the core itself. Some examples have already been implemented, such as the Apache Atlas and IBM Information Governance Catalog connectors referenced throughout this article, which could provide a useful reference point as you proceed through this walkthrough.

Start by defining a new Maven project in your IDE of choice. In the root-level POM be sure to include the following:

<properties>
    <open-metadata.version>1.1-SNAPSHOT</open-metadata.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.odpi.egeria</groupId>
        <artifactId>repository-services-apis</artifactId>
        <version>${open-metadata.version}</version>
        <scope>compile</scope>
    </dependency>
    <dependency>
        <groupId>org.odpi.egeria</groupId>
        <artifactId>open-connector-framework</artifactId>
        <version>${open-metadata.version}</version>
        <scope>compile</scope>
    </dependency>
</dependencies>

Naturally, change the version to whichever version of Egeria you’d like to build against. The dependencies listed pull in the portions of Egeria needed to build your connector.

3. Implement the repository connector


The repository connector exposes the ability to search, query, create, update and delete metadata in an existing metadata repository. As such, it will be the core of your adapter.

You can start to build this within your new project by creating a new Maven module called something like adapter. Within this adapter module implement the following:

Implement an OMRSRepositoryConnectorProvider

Start by writing an OMRSRepositoryConnectorProvider specific to your connector, which extends OMRSRepositoryConnectorProviderBase. The connector provider is a factory for its corresponding connector. Much of the logic needed is coded in the base class, and therefore your implementation really only involves defining the connector class and setting this in the constructor.

For example, the following illustrates this for the Apache Atlas Repository Connector:

package org.odpi.egeria.connectors.apache.atlas.repositoryconnector;

import org.odpi.openmetadata.frameworks.connectors.properties.beans.ConnectorType;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.repositoryconnector.OMRSRepositoryConnectorProviderBase;

public class ApacheAtlasOMRSRepositoryConnectorProvider extends OMRSRepositoryConnectorProviderBase {

    static final String  connectorTypeGUID = "7b200ca2-655b-4106-917b-abddf2ec3aa4";
    static final String  connectorTypeName = "OMRS Apache Atlas Repository Connector";
    static final String  connectorTypeDescription = "OMRS Apache Atlas Repository Connector that processes events from the Apache Atlas repository store.";

    public ApacheAtlasOMRSRepositoryConnectorProvider() {
        Class connectorClass = ApacheAtlasOMRSRepositoryConnector.class;
        super.setConnectorClassName(connectorClass.getName());
        ConnectorType connectorType = new ConnectorType();
        connectorType.setType(ConnectorType.getConnectorTypeType());
        connectorType.setGUID(connectorTypeGUID);
        connectorType.setQualifiedName(connectorTypeName);
        connectorType.setDisplayName(connectorTypeName);
        connectorType.setDescription(connectorTypeDescription);
        connectorType.setConnectorProviderClassName(this.getClass().getName());
        super.connectorTypeBean = connectorType;
    }
}

Note that you’ll need to define a unique GUID for the connector type, and a meaningful name and description. Really all you then need to implement is the constructor, which can largely be a copy / paste for most adapters. Just remember to change the connectorClass to your own, which you’ll implement in the next step (below).

Implement an OMRSRepositoryConnector

Next, write an OMRSRepositoryConnector specific to your connector, which extends OMRSRepositoryConnector. This defines the logic to connect to and disconnect from your metadata repository. As such the main logic of this class will be implemented by:

  • Overriding the initialize() method to define any logic for initializing the connection: for example, connecting to an underlying database, starting a REST API session, etc.
  • Overriding the setMetadataCollectionId() method to create an OMRSMetadataCollection for your repository (see next step below).
  • Overriding the disconnect() method to properly clean up / close such resources.

Whenever possible, it makes sense to try to re-use any existing client library that might exist for your repository. For example, Apache Atlas provides a client through Maven that we can use directly. Re-using it saves us from needing to implement and maintain various beans for the (de)serialization of REST API calls.

The following illustrates the start of such an implementation for the Apache Atlas Repository Connector:

package org.odpi.egeria.connectors.apache.atlas.repositoryconnector;

import org.apache.atlas.AtlasClientV2;
import org.apache.atlas.AtlasServiceException;
import org.apache.atlas.model.SearchFilter;
import org.apache.atlas.model.typedef.AtlasTypesDef;
import org.odpi.openmetadata.frameworks.connectors.properties.ConnectionProperties;
import org.odpi.openmetadata.frameworks.connectors.properties.EndpointProperties;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.repositoryconnector.OMRSRepositoryConnector;
import org.odpi.openmetadata.repositoryservices.ffdc.exception.OMRSRuntimeException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ApacheAtlasOMRSRepositoryConnector extends OMRSRepositoryConnector {

    private static final Logger log = LoggerFactory.getLogger(ApacheAtlasOMRSRepositoryConnector.class);

    private String url;
    private AtlasClientV2 atlasClient;
    private boolean successfulInit = false;

    public ApacheAtlasOMRSRepositoryConnector() { }

    @Override
    public void initialize(String               connectorInstanceId,
                           ConnectionProperties connectionProperties) {
        super.initialize(connectorInstanceId, connectionProperties);

        final String methodName = "initialize";

        // Retrieve connection details
        EndpointProperties endpointProperties = connectionProperties.getEndpoint();
        // ... check for null and handle ...
        this.url = endpointProperties.getProtocol() + "://" + endpointProperties.getAddress();
        String username = connectionProperties.getUserId();
        String password = connectionProperties.getClearPassword();

        this.atlasClient = new AtlasClientV2(new String[]{ this.url }, new String[]{ username, password });

        // Test REST API connection by attempting to retrieve types list
        try {
            AtlasTypesDef atlasTypes = atlasClient.getAllTypeDefs(new SearchFilter());
            successfulInit = (atlasTypes != null && atlasTypes.hasEntityDef("Referenceable"));
        } catch (AtlasServiceException e) {
            log.error("Unable to retrieve types from Apache Atlas.", e);
        }

        if (!successfulInit) {
            ApacheAtlasOMRSErrorCode errorCode = ApacheAtlasOMRSErrorCode.REST_CLIENT_FAILURE;
            String errorMessage = errorCode.getErrorMessageId() + errorCode.getFormattedErrorMessage(this.url);
            throw new OMRSRuntimeException(
                    errorCode.getHTTPErrorCode(),
                    this.getClass().getName(),
                    methodName,
                    errorMessage,
                    errorCode.getSystemAction(),
                    errorCode.getUserAction()
            );
        }

    }

    @Override
    public void setMetadataCollectionId(String metadataCollectionId) {
        this.metadataCollectionId = metadataCollectionId;
        if (successfulInit) {
            metadataCollection = new ApacheAtlasOMRSMetadataCollection(this,
                    serverName,
                    repositoryHelper,
                    repositoryValidator,
                    metadataCollectionId);
        }
    }

}

This has been abbreviated from the actual class for simplicity; however, note that as part of initialize() it may make sense to test the parameters received for configuring the connection, to make sure that a connection to your repository can actually be established before proceeding any further.

(This is also used in this example to set up a successfulInit flag indicating whether connectivity was possible, so that if it was not, we do not proceed any further with setting up the metadata collection and instead allow the connector to fail immediately, with a meaningful error.)

You may want to wrap the metadata repository client’s methods with your own methods in this class as well. Generally think of this class as “speaking the language” of your proprietary metadata repository, while the next class “speaks” Egeria.
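For example, a wrapper might look like the following fragment inside the connector class above. It assumes the getEntityByGuid() call of the Apache Atlas AtlasClientV2 client (which also requires importing org.apache.atlas.model.instance.AtlasEntity); the method name is our own:

// Wrap the native client call so that the rest of the connector (in particular
// the OMRSMetadataCollection) never needs to talk to AtlasClientV2 directly.
public AtlasEntity.AtlasEntityWithExtInfo getAtlasEntityByGUID(String guid) throws AtlasServiceException {
    return atlasClient.getEntityByGuid(guid);
}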

Implement an OMRSMetadataCollection

Finally, write an OMRSMetadataCollection specific to your repository, which extends OMRSMetadataCollectionBase. This can grow to be quite a large class, with many methods, but is essential for the participation of your metadata repository in a broader cohort. In particular, it is heavily leveraged by Egeria’s Enterprise Connector to federate actions against your metadata repository. As such, this is how your connector “speaks” Egeria (open metadata).

Ideally your implementation should override each of the methods defined in the base class. To get started:

  1. Override the addTypeDef() method. For each TypeDef this method should either extend your metadata repository to include the TypeDef, configure the mapping from your repository’s types to the open metadata types, or throw a TypeDefNotSupportedException. (For those that are implemented, it may be helpful to store them in a class member for comparison in the next step.)
  2. Override the verifyTypeDef() method, which checks that the types you have implemented (above) conform to the open metadata TypeDef received (i.e. that all properties are available, of the same data type, etc.), and returns false if a type has not yet been listed as implemented (this causes addTypeDef() above to be called automatically).
  3. Override the getEntityDetail() method that retrieves an entity by its GUID. (A minimal sketch of these three methods follows this list.)
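The following is a minimal sketch of such a starting point. The class and helper names (MyOMRSMetadataCollection, haveMappingFor(), translateToEntityDetail()) are hypothetical, the signatures follow the 1.x OMRSMetadataCollectionBase, and the exception message text is illustrative only:

package org.example.connector; // hypothetical package

import java.util.HashSet;
import java.util.Set;

import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.OMRSMetadataCollectionBase;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.properties.instances.EntityDetail;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.properties.typedefs.TypeDef;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.repositoryconnector.OMRSRepositoryConnector;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.repositoryconnector.OMRSRepositoryHelper;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.repositoryconnector.OMRSRepositoryValidator;
import org.odpi.openmetadata.repositoryservices.ffdc.exception.EntityNotKnownException;
import org.odpi.openmetadata.repositoryservices.ffdc.exception.RepositoryErrorException;
import org.odpi.openmetadata.repositoryservices.ffdc.exception.TypeDefNotSupportedException;

public class MyOMRSMetadataCollection extends OMRSMetadataCollectionBase {

    // Open metadata types this connector has a mapping for (see step 1).
    private final Set<String> implementedTypes = new HashSet<>();

    public MyOMRSMetadataCollection(OMRSRepositoryConnector parentConnector,
                                    String repositoryName,
                                    OMRSRepositoryHelper repositoryHelper,
                                    OMRSRepositoryValidator repositoryValidator,
                                    String metadataCollectionId) {
        super(parentConnector, repositoryName, repositoryHelper, repositoryValidator, metadataCollectionId);
    }

    @Override
    public void addTypeDef(String userId, TypeDef newTypeDef) throws TypeDefNotSupportedException {
        final String methodName = "addTypeDef";
        if (haveMappingFor(newTypeDef)) {
            implementedTypes.add(newTypeDef.getName()); // remembered for verifyTypeDef()
        } else {
            throw new TypeDefNotSupportedException(501, this.getClass().getName(), methodName,
                    newTypeDef.getName() + " is not supported.",   // illustrative message text
                    "The TypeDef cannot be mapped to this repository.",
                    "Raise an issue if you need this type supported.");
        }
    }

    @Override
    public boolean verifyTypeDef(String userId, TypeDef typeDef) {
        // Returning false for an unknown type triggers addTypeDef() to be called.
        return implementedTypes.contains(typeDef.getName());
    }

    @Override
    public EntityDetail getEntityDetail(String userId, String guid)
            throws RepositoryErrorException, EntityNotKnownException {
        // Retrieve from the native repository and translate to open metadata form.
        return translateToEntityDetail(guid);
    }

    private boolean haveMappingFor(TypeDef typeDef) { /* hypothetical helper */ return false; }
    private EntityDetail translateToEntityDetail(String guid) { /* hypothetical helper */ return null; }
}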

Note that there are various options for implementing each of these. Which route to take will depend on the particulars of your specific metadata repository:

  • In the sample IBM InfoSphere Information Governance Catalog Repository Connector, the mappings are defined in code. This approach was used because IGC does not have first-class Relationship or Classification objects, so some complex logic is needed in places to achieve an appropriate mapping. Furthermore, if a user wants to extend the logic or mappings used for their particular implementation of IGC, this approach allows complete flexibility to do so. (A developer simply needs to override the appropriate method(s) with custom logic.)
  • The sample Apache Atlas Repository Connector illustrates a different approach. Because its TypeDefs are quite similar to those of Egeria, it is easier to map more directly through configuration files. A generic set of classes can be implemented that use these configuration files to drive the specifics of each mapping. In this case, simple JSON files were used to define the OMRS name of a particular object or property and the corresponding Atlas entity / property name to which it should be mapped. While this allows new mappings for new object types to be added much more quickly, it is far less flexible than the code-based approach used for IGC. (It is only capable of handling very simple mappings: anything complex would either require a complicated configuration file or still resort to code.)

Once these minimal starting points are implemented, you should be able to configure the OMAG Server Platform as a proxy to your repository connector by following the instructions in the next step.

Important: this will not necessarily be the end-state pattern you intend to use for your repository connector. Nonetheless, it can provide a quick way to start testing its functionality.

This very basic, initial scaffold of an implementation allows:

  • a connection to be instantiated to your repository, and
  • translation between your repository’s representation of metadata and the open metadata standard types.

4. Package your connector


To make your connector available to run within the OMAG Server Platform, you can package it into a distributable .jar file using another Maven module, something like distribution.

In this module’s POM file include your adapter module (by artifactId) as a dependency, and consider using the maven-shade-plugin to define just the necessary components for your .jar file. Since it should only ever be executed as part of an Egeria OMAG Server Platform, your .jar file does not need to re-include all of the underlying Egeria dependencies.

For example, in our Apache Atlas Repository Connector we only need to include the adapter module itself and the base dependencies for Apache Atlas’s Java client (all other dependencies like Egeria core itself, the Spring framework, etc will already be available through the Egeria OMAG Server Platform):

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>egeria-connector-apache-atlas</artifactId>
        <groupId>org.odpi.egeria</groupId>
        <version>1.1-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>egeria-connector-apache-atlas-package</artifactId>

    <dependencies>
        <dependency>
            <groupId>org.odpi.egeria</groupId>
            <artifactId>egeria-connector-apache-atlas-adapter</artifactId>
            <version>${open-metadata.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>${maven-shade.version}</version>
                <executions>
                    <execution>
                        <id>assemble</id>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <artifactSet>
                                <includes>
                                    <include>org.odpi.egeria:egeria-connector-apache-atlas-adapter</include>
                                    <include>org.apache.atlas:atlas-client-common</include>
                                    <include>org.apache.atlas:atlas-client-v1</include>
                                    <include>org.apache.atlas:atlas-client-v2</include>
                                    <include>org.apache.atlas:atlas-intg</include>
                                    <include>org.apache.hadoop:hadoop-auth</include>
                                    <include>org.apache.hadoop:hadoop-common</include>
                                    <include>com.fasterxml.jackson.jaxrs:jackson-jaxrs-base</include>
                                    <include>com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider</include>
                                    <include>com.fasterxml.jackson.module:jackson-module-jaxb-annotations</include>
                                    <include>com.sun.jersey:jersey-client</include>
                                    <include>com.sun.jersey:jersey-core</include>
                                    <include>com.sun.jersey:jersey-json</include>
                                    <include>com.sun.jersey.contribs:jersey-multipart</include>
                                    <include>javax.ws.rs:jsr311-api</include>
                                </includes>
                            </artifactSet>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

</project>

Of course, you do not need to use the maven-shade-plugin to accomplish such bundling: feel free to use a Maven assembly or other Maven techniques instead.

Building and packaging your connector should then be as simple as running the following from the root of your project tree:

$ mvn clean install

Working out exactly which dependencies to include when you are using an external client like Apache Atlas’s can be a little tricky. Starting small will inevitably result in various errors about classes not being found: when building, you’ll see a list of all the classes considered by the shade plugin, showing which were included and excluded. You can use this to make educated guesses as to which artifacts may still need to be included if you are running into such errors. (Ideally you’ll have a simple, single jar file / dependency you can include directly instead of needing to work through this, but that won’t always be the case.)

Again, since we will just be using this connector alongside the existing OMAG Server Platform, this approach avoids ending up with a .jar file that includes the entirety of the Egeria OMAG Server Platform (and its dependencies). Instead, your minimal .jar file can be loaded at startup of the core OMAG Server Platform and configured through the REST calls covered in the following steps.

Of course, if you intend to embed or otherwise implement your own server, the packaging mechanism will likely be different. However, as mentioned in the previous step this should provide a quick and easy initial way of testing the functionality of the connector against the core of Egeria.

5. Start up the OMAG Server Platform with your connector


Assuming you’ve built your connector .jar file using the approach outlined above, you’ll now have a .jar file under the distribution/target/ directory of your project: for the Apache Atlas example, this would be distribution/target/egeria-connector-apache-atlas-package-1.1-SNAPSHOT.jar.

When starting up the OMAG Server Platform of Egeria, we need to point to this .jar file using either the LOADER_PATH environment variable or a -Dloader.path= command-line argument to the server start command:

$ export LOADER_PATH=..../distribution/target/egeria-connector-apache-atlas-package-1.1-SNAPSHOT.jar
$ java -jar server-chassis-spring-1.1-SNAPSHOT.jar

or

$ java -Dloader.path=..../distribution/target/egeria-connector-apache-atlas-package-1.1-SNAPSHOT.jar -jar server-chassis-spring-1.1-SNAPSHOT.jar

Either startup should ensure your connector is now available to the OMAG Server Platform for connecting to your metadata repository. You may also want to set the LOGGING_LEVEL_ROOT environment variable to define a more granular logging level for your initial testing, e.g. export LOGGING_LEVEL_ROOT=INFO before running the startup command above, to receive deeper information during startup. (You can also set a similar variable to get even deeper information for just your portion of code by using your unique package name, e.g. export LOGGING_LEVEL_ORG_ODPI_EGERIA_CONNECTOR_X_Y_Z=DEBUG.)

Then configure the OMAG Server Platform to use your connector. Note that the configuration and startup sequence is important.

Start with just the following:

Enable the OMAG Server as a repository proxy

Enable the OMAG Server as a repository proxy by specifying your canonical OMRSRepositoryConnectorProvider class name for the connectorProvider={javaClassName} parameter and POSTing to:

http://egeriahost:8080/open-metadata/admin-services/users/myself/servers/test/local-repository/mode/repository-proxy/connection

For example, in our Apache Atlas example we would POST with a payload like the following:

{
  "class": "Connection",
  "connectorType": {
    "class": "ConnectorType",
    "connectorProviderClassName": "org.odpi.egeria.connectors.apache.atlas.repositoryconnector.ApacheAtlasOMRSRepositoryConnectorProvider"
  },
  "endpoint": {
    "class": "Endpoint",
    "address": "{{atlas_host}}:{{atlas_port}}",
    "protocol": "http"
  },
  "userId": "{{atlas_user}}",
  "clearPassword": "{{atlas_password}}"
}


Start the server instance

Start the OMAG Server instance by POSTing to:

http://egeriahost:8080/open-metadata/admin-services/users/myself/servers/test/instance

During server startup you should then see various messages related to the metadata type registration process as the open metadata types are checked against your repository. (These in turn call the methods you’ve implemented in your OMRSMetadataCollection.) You might naturally need to iron out a few bugs in those methods before proceeding further…

6. Test your connector’s basic operations


Each time you change your connector code, you’ll naturally want to re-build it (mvn clean install) and restart the OMAG Server Platform. If you are not changing any of the configuration, you can simply restart the OMAG Server Platform and re-run the POST to start the server instance (the last step above). If you need to change something in the configuration itself, it will be best to:

  1. Stop the OMAG Server Platform.
  2. Delete the configuration document (a file named something like omag.server.test.config).
  3. Start the OMAG Server Platform again.
  4. Re-run both steps above (enabling the OMAG Server as a proxy, and starting the instance).

From there you can continue to override other methods of the OMRSMetadataCollectionBase class to implement the other metadata functionality for searching, updating and deleting as well as retrieving other instances of metadata like relationships. Most of these methods can be directly invoked (and therefore tested) using the REST API endpoints of the OMAG server.

A logical order of implementation might be:

Read operations

getEntitySummary()

… which you can test through GET to

http://egeriahost:8080/servers/test/open-metadata/repository-services/users/myself/instances/entity/{{guidOfEntity}}/summary

getEntityDetail()

… which you can test through GET to

http://egeriahost:8080/servers/test/open-metadata/repository-services/users/myself/instances/entity/{{guidOfEntity}}

getRelationshipsForEntity()

… which you can test through POST to

http://egeriahost:8080/servers/test/open-metadata/repository-services/users/myself/instances/entity/{{guidOfEntity}}/relationships

… with a payload like the following (to retrieve all relationships):

{
  "class": "TypeLimitedFindRequest",
  "pageSize": 100
}

These are likely to require the most significant logic for any mappings / translations you’re doing between the open metadata types and your own repository. For example, with Apache Atlas these are where we translate between native types like AtlasGlossaryTerm (and its representation in the Apache Atlas Java client) and the open metadata type GlossaryTerm (and its representation through the standard OMRS interfaces).

The other main area to then implement is searching, for example:

findEntitiesByProperty()

… which you can test through POST to

http://egeriahost:8080/servers/test/open-metadata/repository-services/users/myself/instances/entities/by-property

… with a payload like the following (to find only those GlossaryTerms classified as SpineObjects and whose name also starts with Empl):

{
  "class": "EntityPropertyFindRequest",
  "typeGUID": "0db3e6ec-f5ef-4d75-ae38-b7ee6fd6ec0a",
  "pageSize": 10,
  "matchCriteria": "ALL",
  "matchProperties": {
    "class": "InstanceProperties",
    "instanceProperties": {
      "displayName": {
        "class": "PrimitivePropertyValue",
        "instancePropertyCategory": "PRIMITIVE",
        "primitiveDefCategory": "OM_PRIMITIVE_TYPE_STRING",
        "primitiveValue": "Empl*"
      }
    }
  },
  "limitResultsByClassification": [ "SpineObject" ]
}

findEntitiesByClassification()

… which you can test through POST to

http://egeriahost:8080/servers/test/open-metadata/repository-services/users/myself/instances/entities/by-classification/ContextDefinition

… with a payload like the following (to find only those GlossaryTerms classified as ContextDefinitions where the scope of the context definition contains local; note that to change the classification type, you change the end of the URL path above):

{
  "class": "EntityPropertyFindRequest",
  "typeGUID": "0db3e6ec-f5ef-4d75-ae38-b7ee6fd6ec0a",
  "pageSize": 100,
  "matchClassificationCriteria": "ALL",
  "matchClassificationProperties": {
    "class": "InstanceProperties",
    "instanceProperties": {
      "scope": {
        "class": "PrimitivePropertyValue",
        "instancePropertyCategory": "PRIMITIVE",
        "primitiveDefCategory": "OM_PRIMITIVE_TYPE_STRING",
        "primitiveValue": "*local*"
      }
    }
  }
}

findEntitiesByPropertyValue()

… which you can test through POST to

http://egeriahost:8080/servers/test/open-metadata/repository-services/users/myself/instances/entities/by-property-value?searchCriteria=address

… with a payload like the following (to find only those GlossaryTerms that contain address somewhere in one of their textual properties):

{
  "class": "EntityPropertyFindRequest",
  "typeGUID": "0db3e6ec-f5ef-4d75-ae38-b7ee6fd6ec0a",
  "pageSize": 10
}

and so on.

You hopefully have access to a search API for your repository so that you can efficiently fulfil these requests. You want to avoid pulling back a large portion of your metadata and looping through it in memory to find specific objects; instead, push the search down to your repository itself as much as possible, as in the sketch below.
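As a sketch of the difference, using hypothetical stand-ins for a native asset and a native client that exposes its own search capability:

import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for your repository's native asset and client.
class SearchableAsset {
    String name;
}

interface SearchableClient {
    List<SearchableAsset> search(String query, int pageSize); // native search (hypothetical)
    List<SearchableAsset> getAllAssets();                     // full export (avoid!)
}

public class SearchPushDown {

    // Preferred: push the filtering down to the repository's own search engine
    // (the query syntax here is hypothetical).
    static List<SearchableAsset> findByNamePrefix(SearchableClient client, String prefix, int pageSize) {
        return client.search("name:" + prefix + "*", pageSize);
    }

    // Anti-pattern: pull back everything and filter in memory.
    static List<SearchableAsset> findByNamePrefixSlowly(SearchableClient client, String prefix) {
        List<SearchableAsset> hits = new ArrayList<>();
        for (SearchableAsset asset : client.getAllAssets()) {
            if (asset.name != null && asset.name.startsWith(prefix)) {
                hits.add(asset);
            }
        }
        return hits;
    }
}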

Once you have those working, it should be relatively easy to go back and fill in areas like the other TypeDef-related methods, to ensure your connector can participate appropriately in a broader open metadata cohort.

Write operations

While the ordering above is necessary for all connectors, if you’ve decided to also implement write operations for your repository there are further methods to override. These include:

  • creation operations like addEntity,
  • update operations like updateEntityProperties,
  • and reference copy-related operations like saveEntityReferenceCopy.

If you are only implementing a read-only connector, these methods can be left as-is, and the base class will indicate that they are not supported by your connector.
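If you do implement write operations, the pattern mirrors the read side: translate the open metadata request into a native call, then return the result in open metadata form. A fragment along these lines, inside your OMRSMetadataCollection subclass, might look like the following (the addEntity() signature follows the 1.x OMRS interface; nativeCreate() and translateToEntityDetail() are hypothetical helpers):

@Override
public EntityDetail addEntity(String userId,
                              String entityTypeGUID,
                              InstanceProperties initialProperties,
                              List<Classification> initialClassifications,
                              InstanceStatus initialStatus) throws RepositoryErrorException {
    // Translate the open metadata request into a native create call...
    String nativeGuid = nativeCreate(entityTypeGUID, initialProperties); // hypothetical helper
    // ...then retrieve and return the new entity in open metadata form.
    return translateToEntityDetail(nativeGuid);                          // hypothetical helper
}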

7. Add the event mapper connector


The event mapper connector enables events from an existing metadata repository to distribute metadata changes to the other metadata repositories that are members of the same OMRS cohort. It is not a mandatory component: as long as your connector can “speak” Egeria through an OMRSMetadataCollection, it can participate in an open metadata cohort via the Enterprise Connector. However, if your metadata repository already has some kind of event or notification mechanism, the event mapper can be an efficient way to participate in the broader open metadata cohort.

Within the same adapter Maven module, perhaps under a new sub-package like ...eventmapper, implement the following:

Implement an OMRSRepositoryEventMapperProvider

Start by writing an OMRSRepositoryEventMapperProvider specific to your connector, which extends OMRSRepositoryConnectorProviderBase. The connector provider is a factory for its corresponding connector. Much of the logic needed is coded in the base class, and therefore your implementation really only involves defining the connector class and setting this in the constructor.

For example, the following illustrates this for the Apache Atlas Repository Connector:

package org.odpi.egeria.connectors.apache.atlas.eventmapper;

import org.odpi.openmetadata.frameworks.connectors.properties.beans.ConnectorType;
import org.odpi.openmetadata.repositoryservices.connectors.stores.metadatacollectionstore.repositoryconnector.OMRSRepositoryConnectorProviderBase;

public class ApacheAtlasOMRSRepositoryEventMapperProvider extends OMRSRepositoryConnectorProviderBase {

    static final String  connectorTypeGUID = "daeca2f1-9d23-46f4-a380-19a1b6943746";
    static final String  connectorTypeName = "OMRS Apache Atlas Event Mapper Connector";
    static final String  connectorTypeDescription = "OMRS Apache Atlas Event Mapper Connector that processes events from the Apache Atlas repository store.";

    public ApacheAtlasOMRSRepositoryEventMapperProvider() {
        Class connectorClass = ApacheAtlasOMRSRepositoryEventMapper.class;
        super.setConnectorClassName(connectorClass.getName());
        ConnectorType connectorType = new ConnectorType();
        connectorType.setType(ConnectorType.getConnectorTypeType());
        connectorType.setGUID(connectorTypeGUID);
        connectorType.setQualifiedName(connectorTypeName);
        connectorType.setDisplayName(connectorTypeName);
        connectorType.setDescription(connectorTypeDescription);
        connectorType.setConnectorProviderClassName(this.getClass().getName());
        super.setConnectorTypeProperties(connectorType);
    }

}

Note that you’ll need to define a unique GUID for the connector type, and a meaningful name and description. Really all you then need to implement is the constructor, which can largely be a copy / paste for most adapters. Just remember to change the connectorClass to your own, which you’ll implement in the next step (below).

Implement an OMRSRepositoryEventMapper

Next, write an OMRSRepositoryEventMapper specific to your connector, which extends OMRSRepositoryEventMapperBase and implements VirtualConnectorExtension and OpenMetadataTopicListener. This defines the logic to pick up and process events or notifications from your repository and produce corresponding OMRS events. As such, the main logic of this class will be implemented by:

  • Overriding the initialize() method to define how you will initialize your event mapper. For example, this could be connecting to an existing event bus for your repository, or some other mechanism through which events should be sourced.
  • Overriding the start() method to define how to start up the processing of such events.
  • Implementing the initializeEmbeddedConnectors() method to register as a listener to any OpenMetadataTopicConnectors that are passed as embedded connectors.
  • Implementing the processEvent() method to define how to process each event received from your repository’s event / notification mechanism.

The bulk of the logic in the event mapper should be called from this processEvent() method: defining how events that are received from your repository are processed (translated) into OMRS events that deal with Entities, Classifications and Relationships.

Typically you would want to construct such instances by calling into your OMRSMetadataCollection, ensuring you produce the same payloads of information for these instances both through API connectivity and the events.

Once you have the appropriate OMRS object, you can make use of the methods provided by the repositoryEventProcessor, configured by the base class, to publish these to the cohort. For example:

  • repositoryEventProcessor.processNewEntityEvent(...) to publish a new entity instance (EntityDetail)
  • repositoryEventProcessor.processUpdatedRelationshipEvent(...) to publish an updated relationship instance (Relationship)
  • and so on (a short sketch of processEvent() follows this list)
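Pulling this together, a processEvent() implementation might look like the following fragment. The parsing helpers (parseNativeEvent(), buildEntityDetail()) and the NativeEvent type are hypothetical, the source name is illustrative, and the local* fields are assumed to be the protected members populated by the event mapper base class:

@Override
public void processEvent(String event) {
    // 1. Parse the repository-native event (the format is repository-specific).
    NativeEvent nativeEvent = parseNativeEvent(event);        // hypothetical helper

    if (nativeEvent.isNewEntity()) {
        // 2. Build the OMRS instance, ideally via your OMRSMetadataCollection so
        //    that event payloads match what the API returns for the same entity.
        EntityDetail entity = buildEntityDetail(nativeEvent); // hypothetical helper

        // 3. Publish to the cohort via the processor configured by the base class.
        repositoryEventProcessor.processNewEntityEvent(
                "MyRepositoryEventMapper",  // source name (illustrative)
                localMetadataCollectionId,  // assumed protected fields inherited
                localServerName,            //   from the event mapper base class
                localServerType,
                localOrganizationName,
                entity);
    }
}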

To add the event mapper to the OMAG Server Platform configuration you started with above, two further configuration steps are needed:

Configure the cohort event bus

This should be done first, before any of the other configuration steps above, by POSTing to:

http://egeriahost:8080/open-metadata/admin-services/users/myself/servers/test/event-bus?connectorProvider=org.odpi.openmetadata.adapters.eventbus.topic.kafka.KafkaOpenMetadataTopicProvider&topicURLRoot=OMRSTopic

… with a payload like the following:

{
  "producer": {
    "bootstrap.servers":"kafkahost:9092"
  },
  "consumer": {
    "bootstrap.servers":"kafkahost:9092"
  }
}

Configure the event mapper

This can be done nearly last, after all of the other configuration steps above but still before the start of the server instance. Specify your canonical OMRSRepositoryEventMapperProvider class name for the connectorProvider={javaClassName} parameter and connection details to your repository’s event source in the eventSource parameter by POSTing to:

http://egeriahost:8080/open-metadata/admin-services/users/myself/servers/test/local-repository/event-mapper-details

For example, in our Apache Atlas example we would POST to:

http://egeriahost:8080/open-metadata/admin-services/users/myself/servers/test/local-repository/event-mapper-details?connectorProvider=org.odpi.egeria.connectors.apache.atlas.eventmapper.ApacheAtlasOMRSRepositoryEventMapperProvider&eventSource=atlashost:9027

8. Test your connector’s conformance


Aside from the API-based testing you might do as part of the on-going implementation of your OMRSMetadataCollection class, once you are in a position where you have most of the methods implemented it is a good idea to test your connector against the Egeria Conformance Suite.

This will provide guidance on what features you may still need to implement in order to conform to the open metadata standards.

Once your connector conforms, you should also have the necessary output to apply for use of the ODPi Egeria Conformant mark.
