I’m not in a sceptical mood today, despite the grey skies, rain and hail. So as I write about vendor benchmarks you won’t see me using idioms such as ‘take with a pinch of salt’. I have no reason to believe that OSS vendors’ benchmarks, what few have been published, are anything but rigorous and representative of real-world use. I know for a fact that some of them are just that, with many man-months of effort put in to deliver useful data for internal development as well as PR-worthy headline figures.
How ‘representative’ a benchmark is for any telco other than the one instigating or advising on it is up for debate. I talk a bit about the usefulness of benchmarks somewhere in here. Benchmarks are A Good Thing, for sure. They drive improvements to the product and encourage competitiveness between vendors. They demonstrate that the vendor has a system architecture which offers a good chance of concurrency and scalability. But they are not indicative of the performance any other telco will achieve with its different network inventory model, user profiles and business processes.
If you have an interest in an OSS vendor’s scalability and performance, then by all means look at their benchmark results. If you’re interested in a hardware vendor’s server performance, then by all means look at their benchmark results too. But in both cases be very careful to pick out what is and is not relevant to your overall system design. There are likely to be fundamental differences between the benchmark design and your own requirements, although these differences are seldom obvious from reading the results published in PRs.
Perhaps the biggest gulf between benchmark and your own OSS performance lies in the evaluation of hardware and software platforms (OS, database and application server). As stated in previous posts, OSS system characteristics are very different from billing or CRM, but these BSS applications are often the ones a server vendor will use as the basis of their benchmark scenarios.
Just how different are OSS and BSS applications, in terms of system requirements? Amdocs recently put out a PR with IBM highlighting the scalability of CRM on IBM servers. It didn’t disclose a great deal of detail, but the headline numbers illustrate my point. I’ll compare its measurements with the only published Amdocs (actually, Cramer) OSS benchmark PR I could find. It’s not exactly recent, but the size and scale of a tier-1 carrier doesn’t change a huge amount in six years.
What I want to look at is the requirements for the system, not the CPU/memory/disk-space that was actually needed (that’s not published anyway). How do OSS and BSS requirements differ? Here are the headline figures showing the scale of the tests.
Customers. 100 million and 60 million, which is rather more than the maximum number of customers most tier-1 divisions or subsidiaries would currently need to manage. For example, the total population of Germany is ~82 million and France ~65 million; AT&T Mobility has ~60m customers and T-Mobile US has ~25m.
User Count. BSS systems will have thousands of agents performing transactions throughout the day. OSS user counts are typically counted in hundreds. If there’s a very large OSS user count it’s down to field agents/engineers who make only occasional use of the application. These benchmarks highlight a big difference in the quantity of users, but don’t provide detail on the complexity of the work they are doing.
Interactions or Transactions. These are two different things. What constitutes an interaction or transaction is a later stage of discussion when sizing a system to support an OSS or BSS application. I’ve seen published Cramer OSS benchmarks with over 200,000, even 1 million, transactions per hour. Without getting into a level of detail that’s not shown in the PR, we will instead treat these figures as ‘interesting’ but draw no conclusions today.
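For a sense of scale, those transactions-per-hour headlines can be converted into sustained per-second rates. A minimal sketch, using only the hourly figures cited above; the conversion itself is plain arithmetic on my part, not anything published in the PR:

```python
# Convert headline transactions-per-hour figures into sustained rates.
# The hourly figures are the ones cited in this post; nothing else here
# comes from either benchmark PR.

def per_second(tx_per_hour: float) -> float:
    """Sustained transactions per second implied by an hourly headline."""
    return tx_per_hour / 3600

for tx_hour in (200_000, 1_000_000):
    print(f"{tx_hour:>9,} tx/hour ≈ {per_second(tx_hour):6.1f} tx/second sustained")
```

So the upper figure implies a sustained rate of a few hundred transactions per second, which is the kind of number you’d actually use when sizing hardware; without knowing what a ‘transaction’ was in that test, though, it remains just a headline.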
Database Size. The OSS database, for an equal number of customers, is larger than the BSS database due to the inclusion of detailed inventory and technical service information in addition to customer data.
At some point we have to sit down with everyone involved in system architecture design for the project and discuss user profiles, transactions and the complexity of inventory data. Before then, by using these high-level benchmark figures we can clearly highlight the differences between OSS and BSS, and justify the need for a deeper level of discussion.
I suspect I am preaching to the choir, and that as an OSS professional you’re within your rights to accuse me of blogging the obvious. But many OSS customers out there come into contact with other professionals who aren’t as enlightened as you are. Many IT engineers, many server sales people and, dare I suggest, many system integrators do not fully appreciate how different OSS platform requirements are from BSS.
By the way, if anyone knows of any other public benchmark figures from OSS and BSS vendors, I’d really appreciate being sent a link. Thanks!