Back, oooh, ages ago, I wrote about the top five OSS platform architecture things. Those were things (issues, concerns, designs, etc.) that needed to be considered when delivering a new OSS product or project to ensure scalability and performance.
Rarely does a new OSS application live in isolation. Integration with other systems in the OSS and BSS environment could easily be as much work as the development of a new application.
It’s raining today, making me feel a bit low, so I’m going to give you five anti-patterns for how not to integrate your OSS/BSS applications.
1. Opening APIs to Remote Calls
2. The EAI Silver Bullet
3. Avoiding Asynchronous Interfaces for Performance Reasons
4. Data Without Context
5. Implementing Standards
Opening APIs to Remote Calls
It’s now very easy, almost automatic, to take a reasonably well designed application and expose its APIs to remote calls for integration purposes. For example, a Java-based application may have a set of internal interfaces to create/read/update/delete objects in its database. These Java Beans were designed for use by the application’s UI or business logic. Wouldn’t it be nice to offer them as an integration API? Easy. Just point your development tool at the Beans, press a button, and you’ve got a remote API using RMI, SOAP, CORBA or whatever.
Java EJB makes it so easy that it almost implies it should be done: that any piece of code could reasonably be used either locally (internally) or remotely. There are lots of reasons why doing so would be a very Bad Thing.
Such APIs are too fine-grained to be useful. An interface specification that comprises dozens of separate Java classes isn’t going to be popular with your system integrators, particularly five years down the line when some other SI has to work out how to use the API calls to actually achieve something meaningful. A fine-grained interface does not encapsulate any business logic. It doesn’t add any value.
The API will be ‘chatty’. Many API calls would be needed to achieve something useful, so many messages have to pass between the integrated systems to get a relatively simple job done. Remote interfaces, even ‘low-level’ ones like CORBA and RMI, are *very* slow: an order of magnitude slower than the internal calls the application makes to its own local interface. Chatty, slow, remote interfaces are obviously to be avoided.
System integration will be too tightly-coupled. The external system, or adapter, will need to use very specific, low-level API calls executed in precisely the right order to achieve something meaningful. If the application’s data model or internal processes ever change (and they will) then the low-level API will most likely also change, affecting any external systems that used its remote interface. The cost of maintaining the integration over time will be higher than the initial development cost.
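To make the fine-grained versus coarse-grained distinction concrete, here is a minimal sketch. All class, method, and parameter names are invented for illustration; they are not from any real product. The fine-grained style (shown in the comments) exposes the internal model and forces the caller to know the call order; the coarse-grained facade hides that behind one business-level operation:

```java
// Hypothetical illustration: fine-grained CRUD calls vs. a coarse-grained facade.
//
// The fine-grained style forces the remote caller to know the internal model
// and the exact call order, with one slow remote round-trip per call:
//   Port p = portDao.read(deviceId, portId);
//   p.setState("ENABLED");
//   portDao.update(p);
//   vlanDao.create(new Vlan(p, vlanId));
//   ...and so on.
//
// A coarse-grained facade encapsulates that sequence as one business operation:
final class ProvisioningFacade {
    /** One remote call that performs the whole business operation. */
    static String provisionVlanOnPort(String deviceId, int portId, int vlanId) {
        // Internally this may read, validate, update and create several objects,
        // but the caller sees a single, stable, business-level contract.
        if (vlanId < 1 || vlanId > 4094) {
            throw new IllegalArgumentException("VLAN out of range: " + vlanId);
        }
        return "vlan " + vlanId + " provisioned on " + deviceId + "/" + portId;
    }
}
```

The facade can change its internal data model without breaking the remote contract, which is exactly the coupling problem the fine-grained API cannot avoid.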
The EAI Silver Bullet
Seven years ago, or thereabouts, people were struggling with nasty, low-level CORBA-based integrations. They were even being promoted as industry standards. They weren’t particularly successful.
Then, a little later, middleware vendors provided the answer, as they so often do! They gave us enterprise integration in the form of various TLAs (ESB, MOM, but I’ll use the more general Enterprise Application Integration, EAI, name). The plan was to migrate away from all the multiple, low-level, tightly coupled integrations and move all system integration on to a reliable, manageable, homogeneous hub or bus architecture. It did look good on paper. Spaghetti-like system diagrams became clean and lasagne-like. But, if people ever thought ‘we’ll use EAI for everything’ (and they did) they were making a big mistake.
Most integration is still point-to-point. Asynchronous, publish-subscribe integration has its benefits, but the reality of most OSS/BSS environments is that the applications are not optimized for this sort of integration. Legacy applications, and many new ones, still require the EAI to implement spaghetti-like point-to-point integration.
All those nice enterprise features come with a performance hit. I like all that guaranteed-message-delivery, management consoles, XML payloads, message transformation, and stuff. But it all uses up CPU time and/or network bandwidth. Often it’s worth it. But for very quick, point-to-point, low-latency, high-volume integration EAI is not fit for purpose.
It can become a thorny issue when a project mandates use of EAI, but the delivery team is forced to bypass it. Occasionally, one has to accept that there are good (demonstrable, measurable) reasons to use something other than the incumbent EAI.
Oh, by the way, SOA isn’t this decade’s silver bullet either.
Avoiding Asynchronous Interfaces for Performance Reasons
Did I just say that EAI asynchronous messaging buses offer poor performance? I don’t think I said that. Just sometimes you need an integration technology that’s an order of magnitude faster.
People who have been stung by trying to use too much EAI go to the other extreme and assume all asynchronous interfaces should be avoided when performance is critical. This removes a useful option from the system integrator’s toolbox.
Here’s an edge-case where this could cripple overall system performance: When evolving a complex OSS/BSS environment some legacy systems expect very quick acknowledgement to their external requests. If one replaces the external system with a newfangled application that maybe doesn’t respond so quickly (but meets its own performance requirements) and uses a slightly slower interface technology (say, replacing system-level pipes with SOAP) you significantly increase the request-acknowledge latency. This may be acceptable, except for the impact it has on the legacy system’s concurrency: If the system locks any internal resources while waiting for an acknowledgement of its request, any increase in latency can be a serious problem.
Using asynchronous messaging easily solves this problem. The request message reaches the external system, gets an immediate acknowledgement, and is placed on a queue to be processed at the external system’s leisure. Asynchronous messaging can de-couple decoding and processing from message delivery. When you just need to know the message got there, asynchronicity gives you a fast response time.
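The decoupling described above can be sketched in a few lines. This is a deliberately minimal, in-process illustration (a real deployment would use a messaging product, not a local queue), and the class and method names are invented:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of acknowledge-then-process: the caller gets an immediate
// acknowledgement once the message is queued; decoding and processing
// happen later, at the receiver's leisure.
final class AsyncReceiver {
    private final BlockingQueue<String> inbox = new LinkedBlockingQueue<>();

    /** Fast path: enqueue and acknowledge immediately; no processing here. */
    boolean submit(String request) {
        return inbox.offer(request); // returns as soon as the message is queued
    }

    /** Slow path: drain and process messages whenever the receiver is ready. */
    int processPending() {
        int handled = 0;
        while (inbox.poll() != null) {
            // ...decode and apply the request here...
            handled++;
        }
        return handled;
    }
}
```

The legacy system’s lock is held only for the duration of `submit`, not for the full processing time, which is the concurrency win described above.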
Data Without Context
Knowledge and wisdom, data and information. There are plenty of clichés on this topic, yet it’s easily overlooked when it comes to designing OSS/BSS system integration. The assumption seems to be that if you get the data integration or synchronization right, then that’s 80% of the job done. No. Data is 20% of the job, and the context and meaning of the data is the 80%.
There is no single ‘correct’ view of the network that is universally useful. Yes, data reconciliation to an inventory, for example, is useful for activities such as service provisioning, fault management, and so on. But, my point is, an overall OSS/BSS system design has to recognize that different applications need different data and information.
There’s little value in putting the integration effort into creating one view of the OSS/BSS world. Take a network device, for example. An activation application may see a device with logical and physical interfaces. A provisioning inventory might have this plus a simple shelf-slot-card data model. An inside-plant tool would have a complex physical data model, but no logical data. While in theory the common elements modelled by these systems should be identical, and therefore candidates for inclusion in a centralized repository, they are rarely easy to recognize. Differences in naming conventions are common, causing significant integration headaches, and also usability issues: which is the ‘right’ name for the centralized repository? Then there are technical issues like finding unique keys to align the data and managing mapping between subtly different device models (maybe tricky issues like card-in-card or aggregate ports are modelled very differently in each system).
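The naming-convention headache is easy to illustrate. In this sketch two hypothetical systems name the same device as “LON01-EDGE-3” and “lon01/edge/3”; the synchronization layer needs an explicit normalization rule before it can even find a unique key to align the records on. The conventions and class name here are invented:

```java
import java.util.Locale;

// Sketch: reduce two systems' naming conventions to one comparable key
// before attempting any data alignment or reconciliation.
final class DeviceKeyNormalizer {
    /** Lower-case the name and unify the separator characters. */
    static String normalize(String name) {
        return name.toLowerCase(Locale.ROOT).replaceAll("[-/]", ".");
    }
}
```

Real reconciliation rules are rarely this mechanical, of course; the point is that the rule has to be written down and owned by the synchronization process, not left implicit.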
All things considered, when is a single centralized data repository worth the effort? Umm… Can’t think of one. That’s why the idea put forward five years ago by EAI vendors for data-on-the-hub came to nothing.
Data needs context to be useful, so inevitably several OSS/BSS applications will maintain their own view of the world. This view will need some synchronization, which requires the context to be understood and maintained by the synchronization process. How does context vary between applications?
Data time frame is a good example of context. Each application considers ‘current’ to be something subtly different. For an activation or alarm application, ‘current’ is the last message from the network. For an inventory system, such transitory states may be irrelevant, and a network device’s ‘current’ state is a record of its design, as implemented in the network. For planning, ‘current’ may encompass the state of the network after the current month-long roll-out, as this is the design that ‘future’ builds are based on.
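The time-frame differences above can be made explicit in code, which is roughly what a synchronization process has to do anyway. The application names, enum values, and the mapping below are invented for the sketch:

```java
// Sketch: each application answers "what is current?" differently, so a sync
// process must state which time frame it is comparing before diffing data.
final class TimeFrameContext {
    enum TimeFrame { LIVE_FROM_NETWORK, AS_BUILT_DESIGN, PLANNED_ROLLOUT }

    static TimeFrame currentFor(String application) {
        switch (application) {
            case "alarm":     return TimeFrame.LIVE_FROM_NETWORK; // last message from the network
            case "inventory": return TimeFrame.AS_BUILT_DESIGN;   // design as implemented
            case "planning":  return TimeFrame.PLANNED_ROLLOUT;   // includes the in-flight build
            default: throw new IllegalArgumentException("unknown application: " + application);
        }
    }
}
```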
Data integration is therefore not a case of comparing two XML files for differences and updating a database accordingly. It is only of any value if, first of all, a well understood process integration is taking place.
Implementing Standards
Standards organizations are A Good Thing, but their output isn’t always as useful as it first appears. People have a tendency to misinterpret the intentions of standards bodies and that’s when bad design decisions are made.
Back in the day, when I was a sales engineer, I was cross-examined by a prospective customer regarding the standards we had used to implement our inventory. I had discussed at some length the design of the database schema (a bit low level, yes, but it gives them an idea of its sophistication and integration potential) but a serious question was raised: “Why doesn’t your database implement the SID standard?”. Assuming the true question was lost in translation I responded by explaining they certainly could have SID-type objects in the inventory, and import/export adapters could be offered. “No, why aren’t your database tables based on SID?” And at that point I realized any discussion about good application design, flexible modelling, and data modelling IDEs, was probably going to get me nowhere.
SID has its uses, I’ve seen successful trials of 3GPP network inventory data standards, and I quite like MTOSI. But none of them tell you how to build an application. They are contracts, file formats, concepts and common reference points. Applications are not standards-compliant, but their interfaces and business processes can be. When implementing standards do so in a light-weight way. Use the standard as a target requirement (‘we need to store and exchange MTOSI data’) but not a strict blueprint for implementing data models and code.
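The light-weight approach amounts to meeting the standard at the boundary rather than in the schema. In this sketch the internal model stays whatever the application needs, and an adapter produces a standards-flavoured exchange document on the way out. The field names and XML shape are invented; a real SID or MTOSI mapping is far richer:

```java
// Sketch: keep the internal model proprietary, and satisfy the standard with
// an export adapter at the interface, not by basing database tables on it.
final class StandardsExportAdapter {
    /** Map an internal record to a standards-flavoured exchange document. */
    static String toExchangeXml(String internalId, String vendorModel) {
        return "<Resource><id>" + internalId + "</id>"
             + "<model>" + vendorModel + "</model></Resource>";
    }
}
```

If the standard changes, only the adapter changes; the application’s own data model, and everything built on it, is untouched.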
Anti-Pattern Number 6: Listening to Other People
The biggest mistake in any IT project is taking people’s opinions at face value, particularly when the person offering them claims to be some sort of expert (or blogger).
With OSS/BSS integration you’ll find yourself in unique situations where integration of the latest technology has to co-exist with legacy system integration that’s been in place for 10+ years. This means there is no single answer, no single pattern for integration. You might be able to implement SOAP, REST or EAI in some areas; in others you’ll have to make a strategic decision to use the existing string-and-chewing-gum integration, no matter how architecturally impure that may be. If it works, it works.