In almost every integration project, you’ll find that at some point you need to map one set of representative values to another. It doesn’t take long to think of a few common examples. Let’s take two hypothetical systems named Xup and Yonder. How does each represent countries in addresses?
There is almost always some variance between these values, if not a completely different set of values altogether. The differences are often inconsistent, so there is no simple algorithmic way to convert from Xup’s value to Yonder’s and back again. If we write the mapping into our integration solution, then when a new country is unexpectedly formed, a change to the solution code will be required. That isn’t a desirable way to handle these sorts of changes, especially since many other code types change much more often than countries do (book or music genres, team codes, product categories). Maintaining these code mappings becomes expensive, requires additional testing and development time, and can introduce mapping defects.
The Solution: Value Cross Referencing
The best solution here is to manage these mappings in some form of data store that is simpler to modify, and then look up the mapping each time you need it (with a reasonable amount of caching of course). Your integration component can then reference the mappings based on the input data from Xup, look up the correct value for Yonder, and then include that value in the output message. If the mappings need to change, this can be managed in the data store without ever disabling your integration component, and the change management for the values can be maintained as distinct from the code. This is especially useful because the team that maintains the values is often a different team from those who are developing the integration.
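As a minimal sketch of the lookup-with-caching idea, the snippet below stands in for the data store with an in-memory dictionary (the store contents and function names are my own, purely illustrative):

```python
from functools import lru_cache

# Hypothetical stand-in for the mapping data store. In practice this would
# be a database table or configuration service maintained outside the code.
MAPPINGS = {
    ("Country", "Xup", "AU"): "AUS",
    ("Country", "Xup", "US"): "USA",
}

@lru_cache(maxsize=1024)
def lookup(entity, source_participant, source_value):
    """Resolve a source value to its target equivalent, caching results
    so repeated lookups don't hit the data store every time."""
    return MAPPINGS[(entity, source_participant, source_value)]
```

When the mappings change, only the store (and the cache lifetime) is affected; the integration code stays untouched.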
Because we’ll be running into “values” and “code” and other often reused terms, I’ll provide a definition for the important components in this solution:
| Term | Definition | Example |
|---|---|---|
| Entity | A grouping of related terms that is independent from any other set of code values. | Country |
| Subject | The discrete thing or relationship that a set of values represents. | The country Australia |
| Participant | A system, a group of systems, or an endpoint that uses one set of values. | Xup, Yonder |
| Attribute | Some systems may represent the same set of values in different ways; these representations are labelled as attributes. | Name, Code |
| Value | The actual value expected by one participant, or provided by one participant. | AU, AUS |
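These elements can be sketched as a minimal data model (the class and field names here are my own, not part of any standard):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class XrefValue:
    entity: str        # e.g. "Country"
    participant: str   # e.g. "Xup"
    attribute: str     # e.g. "Code"
    value: str         # e.g. "AU"

# Two values that share a subject (the country Australia) are linked by
# storing them under the same subject key:
subjects = {
    "country-au": [
        XrefValue("Country", "Xup", "Code", "AU"),
        XrefValue("Country", "Yonder", "Code", "AUS"),
    ],
}
```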
The relationships between these conceptual elements are outlined in the diagram below:
How this might look in a service Request/Response:
Source to Target
All that a Cross Reference service needs to do is provide a mechanism for looking up the target value based on the source value. The developer will know most of the required information at development time, and the source system will provide the source value at run time. So your Cross Reference service should provide an interface that looks like this:
```
INPUT:  entity, sourceValue, sourceAttribute, sourceParticipant,
        targetParticipant, targetAttribute
OUTPUT: targetValue

e.g.
INPUT:  "orderStatus", "submitted", "name", "Xup", "Yonder", "code"
OUTPUT: "SUB"
```
Side note: notice that we don’t need the subject to be specified here. Even if the Cross Reference service represents the subject, it’s only a link between the two values that the integration code is interested in. The integration code doesn’t care about meaning, only syntax.
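To make the interface concrete, here is one possible sketch of the lookup in Python. The flat in-memory store and the subject ids are hypothetical; the point is that the subject only ever acts as an internal link between two values:

```python
# Hypothetical backing store: each row links one participant's value to a
# shared subject id. The integration code never sees the subject itself.
STORE = [
    # (entity, participant, attribute, value, subject_id)
    ("orderStatus", "Xup",    "name", "submitted", "status-submitted"),
    ("orderStatus", "Yonder", "code", "SUB",       "status-submitted"),
]

def xref(entity, source_value, source_attribute, source_participant,
         target_participant, target_attribute):
    """Find the subject linked to the source value, then return the
    target participant's value for that same subject."""
    subject = next(s for (e, p, a, v, s) in STORE
                   if (e, p, a, v) == (entity, source_participant,
                                       source_attribute, source_value))
    return next(v for (e, p, a, v, s) in STORE
                if (e, p, a, s) == (entity, target_participant,
                                    target_attribute, subject))
```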
So let’s take the country example from earlier. First, without the service:
```
IF (sourceValue == "AU") THEN "AUS"
IF (sourceValue == "NZ") THEN "NZ"
IF (sourceValue == "US") THEN "USA"
IF (sourceValue == "CA") THEN "CAN"
// and so on for every country.
```
And now with the service:
```
XREF("Country", sourceValue, "CODE", "Xup", "Yonder", "ShortName")
```
Now consider what would be required if the possible country values change in either system: a code change versus a data change! The service can also offer a bulk lookup function, so that all your code values can be mapped at once.
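A bulk lookup might be used like this: fetch the whole mapping for an entity once, then translate every record locally. This sketch assumes the mapping has already been retrieved from the service; the names are illustrative:

```python
# Hypothetical mapping for one entity, fetched once from the service.
COUNTRY_CODES = {"AU": "AUS", "US": "USA", "CA": "CAN"}

def xref_bulk(records, field, mapping):
    """Translate one field on every record using a mapping fetched in a
    single service call, avoiding a round trip per record."""
    return [{**r, field: mapping[r[field]]} for r in records]

orders = [{"id": 1, "country": "AU"}, {"id": 2, "country": "US"}]
translated = xref_bulk(orders, "country", COUNTRY_CODES)
```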
Building a Solution: Challenges
There are a number of challenges when it comes to actually implementing this system. Here are a few that are likely to come up, and some suggested solutions for how to proceed.
Problem: 1 -> N mapping rules (e.g. Xup has a “processing” status, and the equivalent in Yonder could be any one of “pending”, “backorder” and “partially shipped”). How do you map from Xup to Yonder? What about the other way?
The answer here lies in talking to the key people who understand both systems, and identifying which mapping is appropriate in the different circumstances. It could be that other fields in the message play a part in the mapping. This will definitely require business rules to be defined by the stakeholders of each system before the mapping can be defined.
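Once the stakeholders have defined those rules, they tend to reduce to conditions on other fields in the message. A sketch, using the order-status example above (the field names and the rule itself are invented for illustration):

```python
def map_processing_status(order):
    """Map Xup's single "processing" status to one of Yonder's three
    statuses, using other fields in the message to disambiguate.
    The rule shown here is hypothetical and would come from the
    business stakeholders."""
    if order["items_shipped"] > 0:
        return "partially shipped"
    if order["in_stock"]:
        return "pending"
    return "backorder"
```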
Problem: Missing values (e.g. Yonder has an optional field, making blank/null a valid value, whereas Xup’s equivalent field requires a value). This is a one-way problem in most cases, but it still needs to be resolved.
Again, you’ll need to talk to the people who manage both systems and identify a default value that fits those circumstances. Once that is agreed upon, it can be built into the mapping so that a blank or default value is returned when appropriate.
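In code, the agreed default becomes a simple fallback around the lookup (a sketch; the function name and default value are my own):

```python
def xref_with_default(mapping, source_value, default):
    """Return the agreed default when the source field is blank/null,
    or when no mapping exists for the supplied value."""
    if source_value in (None, ""):
        return default
    return mapping.get(source_value, default)
```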
Problem: Time frames. Development has already started on the integration component, but the design of the Cross Reference solution is still up in the air. What database will be used, and what format?
Like most of these situations, the answer is an interface, or API, defined as the common point between the integration component (the Cross Reference client) and the Cross Reference implementation (the Cross Reference service). As long as the interface can be agreed upon, the actual implementation doesn’t matter much, and the service can be stubbed out for testing purposes until the real service becomes available.
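A minimal sketch of that arrangement: an abstract interface that both sides agree on, plus a hard-coded stub the integration team can develop against (the class names and the stub’s contents are hypothetical):

```python
from abc import ABC, abstractmethod

class CrossReferenceService(ABC):
    """The agreed contract between the integration component and any
    Cross Reference implementation, defined before either is built."""
    @abstractmethod
    def xref(self, entity, source_value, source_attribute,
             source_participant, target_participant, target_attribute):
        ...

class StubCrossReferenceService(CrossReferenceService):
    """Hard-coded stand-in used for development and testing until the
    real service is available."""
    def xref(self, entity, source_value, source_attribute,
             source_participant, target_participant, target_attribute):
        return {"submitted": "SUB"}.get(source_value, "UNKNOWN")
```

When the real service arrives, it implements the same interface and the stub is swapped out without touching the integration code.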