Comptel this week issued a PR about their recent benchmark results with IBM. It's a bit different from the Cramer OSS benchmark I wrote about a few weeks back: this time we're looking at a BSS benchmark, also on IBM kit. For starters, Comptel's transactions are more billing and mediation than service design and provisioning. Secondly, the transaction volumes are huge: billions of transactions per hour rather than the thousands or millions one would expect in 'traditional' service provisioning.
The reason for this difference is Comptel's focus on serving large numbers of customer-initiated real-time transactions. This usage pattern, they claim, reflects the trend in 'lifestyle' services. I assume what they mean is that if you're offering, say, quad-play, there's a lot of billing and activation work to get it all up and running. Then you also hope to sell lots of little add-ons or on-demand services, which results in lots of little configuration and charging changes. The same goes if you allow users to regrade their services on demand. And at any one time all those services, add-ons and subscriptions will require more billing and mediation, whether or not the service configuration is actually changing.
Comptel and IBM have achieved these figures using an in-memory database. Most popular modern system architectures, with their layers of web, Java and database components, just can't deliver real-time transactions. I use the term real-time loosely to mean a transaction with latency acceptable to an impatient end-customer. 'Acceptable' will vary depending on the transaction, but let's assume less than 10 seconds to do something like issuing an order for an on-demand service and that service becoming available to the customer. Sure, a classic J2EE stack can turn around a transaction, like a dynamic web page, in less than 10 seconds, but it's just one system. Service provisioning from order to activation requires several systems to complete their jobs: order management, billing, service management and activation, as a minimum set of distinct systems. Divide the 10-second budget across that chain, allow something for the integration between them, and any one system has to deliver its results consistently, while under load, in less than 2 seconds. Latency like this will only be achieved by minimizing calls between 'layers' inside the system, and by minimizing or eliminating disk I/O using in-memory databases or some very clever caching.
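To make the 'minimize layers and disk I/O' point concrete, here's a minimal sketch in Java (the stack under discussion). It keeps service state in memory so the read and write paths never touch disk, deferring persistence to a write-behind step. The class, the string-keyed state record and the persistence stub are all my own illustration, not Comptel's or IBM's design.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a service-state store that keeps the hot path
// entirely in memory, so no request waits on disk I/O.
public class InMemoryServiceStore {

    // All subscriber service state lives in memory.
    private final Map<String, String> serviceState = new ConcurrentHashMap<>();

    // Read path: a single memory lookup; no extra layers, no disk.
    public String getState(String subscriberId) {
        return serviceState.get(subscriberId);
    }

    // Write path: update memory synchronously, persist asynchronously,
    // so the caller's sub-2-second budget isn't spent on I/O.
    public void activate(String subscriberId, String newState) {
        serviceState.put(subscriberId, newState);
        persistAsync(subscriberId, newState); // write-behind, off the hot path
    }

    private void persistAsync(String subscriberId, String state) {
        // Stub: a real system would queue the change for durable storage.
    }

    public static void main(String[] args) {
        InMemoryServiceStore store = new InMemoryServiceStore();
        store.activate("subscriber-42", "broadband+iptv");
        System.out.println(store.getState("subscriber-42")); // broadband+iptv
    }
}
```

The design choice is the whole point: durability moves off the critical path, which is exactly the trade an in-memory database makes for you at much larger scale.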
With the trend in lifestyle services I expect there to be a complementary trend in OSS. Near-real-time inventory and design/assign transactions will still be needed, offering data resilience and transaction management for long-running processes. In parallel there will be real-time OSS processes that offer low-latency transactions. Today we are seeing OSS vendors promoting separate architectures for 'service management' solutions that can offer fast-track processes. However, there's no reason why these should be physically distinct applications that require integrating and synchronizing with OSS inventory. In the future, new OSS architectures will deliver more than one path from order to activation in a single system, depending on the transaction's latency requirements.
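That 'two paths, one system' idea fits in a few lines of Java. Everything here is hypothetical (the class names, the 10-second threshold, the string-keyed inventory); the sketch only shows that a fast-track path and a long-running workflow path can operate on a single shared inventory, rather than being two applications that need integrating and synchronizing.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: one system, two order-to-activation paths,
// both writing to the same inventory.
public class DualPathOrderHandler {

    // Single shared inventory: nothing to synchronize between applications.
    private final ConcurrentMap<String, String> inventory = new ConcurrentHashMap<>();

    // Queue feeding the long-running workflow path.
    private final BlockingQueue<String[]> workflowQueue = new LinkedBlockingQueue<>();

    // Route each order by its latency requirement.
    public void handleOrder(String serviceId, String config, long latencyBudgetMillis) {
        if (latencyBudgetMillis <= 10_000) {
            fastTrack(serviceId, config);                     // real-time path
        } else {
            workflowQueue.add(new String[] {serviceId, config}); // long-running path
        }
    }

    // Real-time path: immediate in-memory design/assign and activation.
    private void fastTrack(String serviceId, String config) {
        inventory.put(serviceId, config);
    }

    // Long-running path: a workflow engine would process queued orders with
    // full transaction management, updating the same inventory on completion.
    public void drainOneWorkflowStep() throws InterruptedException {
        String[] order = workflowQueue.take();
        inventory.put(order[0], order[1]);
    }

    public static void main(String[] args) throws InterruptedException {
        DualPathOrderHandler handler = new DualPathOrderHandler();
        handler.handleOrder("broadband-42", "20Mb", 5_000);   // fast track
        handler.handleOrder("vpn-7", "site-to-site", 60_000); // workflow
        handler.drainOneWorkflowStep();
        System.out.println(handler.inventory); // both orders in one inventory
    }
}
```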