OSS and BSS are data-centric systems, as I am sure you appreciate. Billing records, CRM records and inventory data immediately spring to mind as the classic data silos found in any service provider. Despite the growing sophistication of both OSS and BSS solutions, the problem of managing and accessing this data has got worse in the last ten years, not better.
There are more types of service being sold, each often built around its own silo of OSS systems. This is particularly true since the spate of acquisitions, which has left parts of a service provider's geography, or sections of its network technology, with their own separate data silos. If you take a look at the OSS/BSS of a large national quad-play service provider you will probably see:
> Silos of OSS and BSS stacks under one or more of the quad-play services.
> Swivel-chair CRM and physically separate technical support.
> Product and service catalogs distributed around OSS and BSS silos.
> Multiple legacy inventories and next-gen commercial inventories.
> Multiple activation platforms distributed by region and technology.
For the last couple of years OSS vendors have promoted ‘transformation’: Migration away from legacy silos to a unified environment that supports all their next-gen quad-play service delivery needs. This would be lovely. Some providers are even doing it, but they are in the minority. A tier-2 or tier-3 provider might achieve transformation due to the relatively small scale of their requirements. A tier-0 may undertake the project because they’ve got deep pockets and massive operational pain from their acquisitions and mergers. Not every telco has the stomach for transformation and its significant costs, and those that are on the road to unified OSS have to accept it’s a long journey.
What then for the untransformed? And for those unified-OSS butterflies, still in the chrysalis for another three years? How do they get at their fragmented data?
Other than transformation, two other approaches to fixing the data silo issue have emerged recently: Federation and Enterprise Search.
Federated Inventory
Federation is the hard-core solution. It deploys a data-access, and sometimes process-access, layer across multiple silos. It then either aggregates data into its own repository through synchronization, or stores only local references to external objects and queries the target data sources on demand for detail. A commercial OSS inventory system can then be deployed on top of the federation layer. To the end user, and for the most part to the inventory application, the appearance is that of a single OSS data source. Under the hood the federation layer is managing read/write activity across the multiple data silos. To allow an OSS application to operate on external data sources as if they were local, significant integration plumbing and data transformation are needed.
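To make the distinction between the two federation strategies concrete, here is a minimal sketch in Python. Everything in it (the adapter interface, class and method names) is hypothetical, invented for illustration rather than taken from any vendor's API: a synchronizing federator copies records into its own repository, while a referencing federator keeps only pointers and queries the source silo on demand.

```python
# Hypothetical sketch of a federation layer, not a real product's API.
from abc import ABC, abstractmethod

class SiloAdapter(ABC):
    """Wraps one legacy data silo behind a common interface."""

    @abstractmethod
    def list_ids(self) -> list[str]: ...

    @abstractmethod
    def fetch(self, object_id: str) -> dict: ...

class FederatedInventory:
    def __init__(self, adapters: dict[str, SiloAdapter], synchronize: bool):
        self.adapters = adapters
        self.synchronize = synchronize
        self.repository = {}   # local copies (synchronization mode)
        self.references = {}   # (silo, id) pointers (reference mode)

    def build(self):
        for silo, adapter in self.adapters.items():
            for oid in adapter.list_ids():
                if self.synchronize:
                    # aggregate a full copy into the local repository
                    self.repository[(silo, oid)] = adapter.fetch(oid)
                else:
                    # store only a lightweight reference to the object
                    self.references[(silo, oid)] = True

    def get(self, silo: str, oid: str) -> dict:
        if self.synchronize:
            return self.repository[(silo, oid)]
        # reference mode: query the target data source on demand
        return self.adapters[silo].fetch(oid)
```

The trade-off is the classic one: synchronization gives fast local reads at the cost of staleness and storage, while on-demand references stay current but inherit every silo's availability and latency.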
Are there any commercial OSS federation products out there today? Sort-of-ish. There are a few products that are more like classic data synchronization/reconciliation (e.g. Telcordia’s Data Federation & Analysis offering, and PSI’s MCCM tool for mobile data reconciliation). And there are federation products that work with a narrow set of data rather than all OSS/BSS data sets (e.g. Amdocs Unified Service Manager). Grand-scale federation is the reserve of ambitious service provider projects, for the time being at least.
Enterprise Search
Enterprise search is the light-touch solution. Effectively, a data-access layer is deployed across all data sources, accessed through its own UI and API. A single UI provides ad-hoc browsing of data and the ability to build reusable views appropriate to particular business activities. Source data is ‘crawled’, indexed and cached to make the search process much faster than it would be if the data were accessed directly for each user operation. Enterprise search does not attempt to replicate other applications’ functionality. As a result its data model is more relaxed, and integration effort is reduced by reusing gateways that know how to crawl and relate data in common repositories such as databases, spreadsheets and XML files.
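As a rough illustration of why the integration effort is lighter, here is a toy crawl-and-index sketch (my own, not any vendor's code): records from any 'gateway' are flattened, tokenized into an inverted index, and cached, so searches never touch the source systems directly.

```python
# Toy crawl/index/cache illustration -- assumed structure, for flavour only.
from collections import defaultdict

def crawl(gateways):
    """Each gateway yields (source, record_id, record_dict) tuples."""
    index = defaultdict(set)   # token -> {(source, record_id)}
    cache = {}                 # (source, record_id) -> record
    for gateway in gateways:
        for source, rid, record in gateway():
            cache[(source, rid)] = record
            for value in record.values():
                for token in str(value).lower().split():
                    index[token].add((source, rid))
    return index, cache

def search(index, cache, term):
    # served entirely from the index and cache, never from the sources
    return [cache[key] for key in index.get(term.lower(), ())]

# Two 'gateways' standing in for a database table and a spreadsheet.
db_rows = lambda: [("crm", "c1", {"name": "Acme Ltd", "city": "Leeds"})]
xls_rows = lambda: [("billing", "b9", {"account": "Acme Ltd", "balance": 120})]
index, cache = crawl([db_rows, xls_rows])
print(search(index, cache, "acme"))   # hits both silos from the cached index
```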
The key distinction is obvious: Federation attempts to deliver OSS functionality across multiple data sources, while Enterprise Search aims to provide a unified view of the data through an additional application.
NexGenData: Enterprise Search and Web 2.0 for Telcos?
Scott Kelly and George Schaefer from NexGenData gave me the opportunity to check out their Navigator enterprise data integration product. NexGenData is a start-up with a strong telco and OSS heritage, having contributors with backgrounds in Visionael and EDS, among other organizations. So while their enterprise Web 2.0 technology is essentially data-agnostic, it is being designed with OSS/BSS applications in mind.
The primary value proposition is a significant reduction in the ‘integration tax’ compared with middleware, federation or data warehouse solutions. Once pointed at a few data sources, NexGen Navigator is able to crawl structured (e.g. database, XML) and unstructured (e.g. text, HTML) content and build up relationships. For example, a customer ID or simply an address could be used to tie together records from multiple CRM, billing and support sources. And that’s the key benefit of search technology: Data integration is part of the out-of-the-box algorithm, rather than something that has to be explicitly coded on a case-by-case basis.
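To show the kind of key-based linking being described, here is a hedged sketch (field names invented for illustration, not Navigator's actual logic): records from different sources are grouped whenever they share a candidate key such as a customer ID or an address.

```python
# Illustrative key-based record linkage; all field names are hypothetical.
from collections import defaultdict

KEY_FIELDS = ("customer_id", "address")   # candidate join keys

def link(records):
    """records: iterable of (source, record) pairs -> key -> linked records."""
    linked = defaultdict(list)
    for source, record in records:
        for field in KEY_FIELDS:
            if field in record:
                linked[(field, record[field])].append((source, record))
    # keep only keys that actually tie records together across silos
    return {k: v for k, v in linked.items() if len(v) > 1}

rows = [
    ("crm",     {"customer_id": "42", "name": "Acme"}),
    ("billing", {"customer_id": "42", "invoice": "INV-7"}),
    ("support", {"address": "1 High St", "ticket": "T-9"}),
    ("crm",     {"address": "1 High St", "site": "Leeds POP"}),
]
for key, matches in link(rows).items():
    print(key, "->", [src for src, _ in matches])
```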
With the crawl of multiple data sources completed, the user has access to what looks a lot like a typical OSS inventory tree browser. Data can be searched and then ‘drilled down’ to uncover hierarchies and relationships.
A couple of other Web 2.0 principles are employed to further enrich the data and user experience:
> Mashups allow users to merge a collection of data feeds in interesting ways, sharing the results as standard web report URLs or as input into more mashups.
> The grandly named ‘collective intelligence’ feature is like del.icio.us for enterprise data. Users can tag useful and important data streams to promote their ‘usefulness’ to other users. For example, data streams that relate to an important customer could be tagged, promoting them when presented to other users searching for data on that customer (a rough sketch of this promotion idea follows the list).
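That tag-driven promotion can be illustrated in a few lines. This is purely my own sketch of the idea, not NexGenData's ranking code: each user tag adds to a stream's score, so tagged streams float up the results.

```python
# Hypothetical tag-boosted ranking, illustrating the 'collective
# intelligence' idea rather than any product's implementation.
from collections import Counter

tags = Counter()                 # stream_id -> number of user tags

def tag_stream(stream_id):
    tags[stream_id] += 1         # a user flags this stream as useful

def rank(results):
    """results: list of (stream_id, base_relevance) from the search engine."""
    return sorted(results, key=lambda r: r[1] + tags[r[0]], reverse=True)

tag_stream("billing/acme-feed")  # two users flag the Acme billing feed
tag_stream("billing/acme-feed")
hits = [("crm/acme-feed", 3.0), ("billing/acme-feed", 2.5)]
print(rank(hits))                # the tagged feed now outranks the rest
```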
NexGenData are targeting large enterprise customers, government and service providers with Navigator. Generally speaking, the larger the organization, the greater the problems. But it also depends on which business processes touch the various data silos. Telecoms is more prone to data silo issues than most industries, as many of its business processes need to cut across departmental, commercial, and technology divisions. OSS has always tried to address the need for a coherent view of the entire network data source. In the BSS area, CRM attempts to provide a unified picture of the customer and services. But if OSS and BSS data sources are not joined up, and if silos of data exist in these areas, the costs to a service provider can be significant. George and Scott quoted one US tier-0 that believes misalignment of siloed data causes $30m-$50m in costs per month. Fifty million dollars per month. Crikey.
To support service providers, Navigator must offer good scalability to deal with the dozens, maybe hundreds, of data sources these organizations will have, and the several million data objects contained therein. Fortunately, we know search technology does scale; it does a pretty good job of indexing the web, with its many data sources, distributed systems, poor network latency and high user count. NexGenData have applied some proprietary IPR to simplifying the issue of scalability for their customers. They provide Navigator as an appliance, a single-box solution in the first instance. Additional boxes can then be added to create a cluster supporting a single Navigator deployment (sketched after the list below). This should satisfy two architectural requirements:
> Scalability. The ability to add more hardware to increase overall system performance.
> Distribution. The need for appliances to be located in different data centers to satisfy IT network requirements, remote-access latency issues, or physically diverse resilience.
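How the boxes share the load is not something NexGenData disclose, so the following is purely an assumption for illustration: one simple scheme hashes each object key onto a node, so adding boxes shrinks each box's share of the index and lets queries fan out in parallel.

```python
# Assumed partitioning scheme for an appliance cluster -- an illustration,
# not NexGenData's disclosed design.
import hashlib

class Cluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)          # e.g. appliance hostnames

    def node_for(self, key: str) -> str:
        # hash the object key onto one of the appliances
        digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.nodes[digest % len(self.nodes)]

cluster = Cluster(["appliance-1", "appliance-2", "appliance-3"])
for obj in ("port:LDN-7/3", "cust:42", "circuit:X99"):
    print(obj, "->", cluster.node_for(obj))
```

A production design would more likely use consistent hashing, so that adding a box does not reshuffle every key, but the principle of scaling by adding nodes is the same.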
Data Context
Enterprise search is starting to spawn a number of start-ups offering the ability to access generic corporate data. It’s easy to see why there’s interest in such a solution when it offers practical benefits without expensive, lengthy integration projects. I would suggest that for any sophisticated data access it is important to use a product that understands the context and processes of the data’s environment. Data always has some sort of context, and when context is lost the value of data is not just degraded; it can become inaccurate, irrelevant or misleading. A couple of examples for you to think about:
Temporal Context. Data source objects exist in different states, as each system in the BSS/OSS stack operates at a different point on the line between Slow-Long-Term-Planning and Real-Time. For example, in an inventory a port may be seen as actively carrying traffic. On the NEM (network element manager) the port may be inactive due to a fault or planned maintenance. In a long-term network planning tool the port might not even exist, having been replaced by new equipment. All these data sources are accurate in their own context, but at any one time there may be a significant proportion of objects whose configuration, fault state, or existence differs between systems. An enterprise search application, trying to do the right thing, may start flagging up data discrepancies or breaking associations when in reality everything is correct.
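A context-aware checker could encode the expected in-flight differences, something like the sketch below (the rules, systems and states are all invented for illustration): differences that match a known acceptable combination are never flagged.

```python
# Hypothetical temporal-context-aware discrepancy check.
# (inventory_state, other_system, other_state) combinations that are
# expected in-flight conditions rather than data errors:
ACCEPTABLE = {
    ("active", "nem", "inactive"),     # fault or planned maintenance
    ("active", "planning", "absent"),  # superseded by new equipment
}

def suspect_discrepancies(states):
    """states: {system: port_state}; inventory is the reference view."""
    base = states["inventory"]
    return [
        (system, state) for system, state in states.items()
        if system != "inventory"
        and state != base
        and (base, system, state) not in ACCEPTABLE
    ]

states = {"inventory": "active", "nem": "inactive", "planning": "absent"}
print(suspect_discrepancies(states))   # [] -- every difference is expected
```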
Process Context. Objects in BSS/OSS data sources tend to be manipulated through a trickle-down process. Resources are reserved by service fulfillment processes and recorded in inventory. Some time later, hours or maybe days, the network is activated, services can be discovered through NEMs, and faults tracked in fault management systems. This latency, as a change trickles through the systems, contributes to the differing temporal context. It also means data that could be flagged as related is not. Until the service delivery completes there may be no common identifier (IP address, MAC address, telephone number) tying the service, used port and used equipment together. That’s a shame, because it would be most useful at precisely that time to know that, for example, the target equipment (as set in the inventory) has a fault against it (as recorded in a trouble-ticket system) before the common identifier (an IP address) is activated on the device.
Context is not just an issue for enterprise search applications. Any system, whether enterprise search, federation, or synchronization, that joins data sources and also identifies discrepancies between them needs to be aware of these ‘acceptable’ discrepancies and future associations. And it’s certainly not an impossible task. The application just needs to be aware of how the underlying systems operate, then react accordingly, whether that be by not throwing a wobbly at a mismatch or by being a bit smarter about the way it builds associations between data sets.
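Those 'future associations' could be handled by recording the inventory's planned links up front, before the common identifier exists. Again, this is a hypothetical sketch of the idea rather than any product's design: the planned service-to-equipment link makes a trouble ticket on the target equipment visible pre-activation.

```python
# Hypothetical 'future association' handling; all names are invented.
planned_links = {}   # service_id -> equipment_id (from inventory reservation)
tickets = {}         # equipment_id -> open trouble tickets

def reserve(service_id, equipment_id):
    # trickle-down step 1: inventory records the planned link
    planned_links[service_id] = equipment_id

def open_ticket(equipment_id, ticket_id):
    tickets.setdefault(equipment_id, []).append(ticket_id)

def preactivation_risks(service_id):
    """Faults already recorded against the equipment this service will use."""
    equipment = planned_links.get(service_id)
    return tickets.get(equipment, [])

reserve("svc-100", "router-LDN-7")
open_ticket("router-LDN-7", "TT-5551")
print(preactivation_risks("svc-100"))   # ['TT-5551'] before the IP is live
```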
A start-up like NexGenData has an opportunity to grab some of the wider enterprise search market, but they’re going to run up against Google, SAP, and Microsoft, who all offer generic enterprise search products. Merging search with other cool Web 2.0 features is a differentiator, as is a simple model for scalability. Even so, going after a specific market would be very wise. With a product that’s pre-tuned to how service providers operate, NexGenData would be able to offer a total cost of ownership below that of existing middleware while delivering business intelligence superior to the generic enterprise search vendors.
You can find out more about NexGenData enterprise data integration on their website, which is looking pretty good for an early stage start-up. There’s a good product overview, PR, data sheets and links to other industry commentators discussing this topic.