Moving on from 1990s OSS

The OSS market has changed… Part 2 of 4. This series of four blog posts is a synopsis of a discussion I chipped in to on LinkedIn’s ‘OSS Gurus’ group.

Systems are going to have to change from those Big OSS products developed in the 90s.

Big OSS inventories focus on reservation and allocation of (usually) fixed resources like SDH/SONET timeslots, DWDM wavelengths, subscriber ports, DDF/ODF ports, etc. What traditional inventory is less able to do is model shared resources that vary over time (like IP and the more dynamic MPLS protocols), because in order to capacity-manage these you need more complex traffic/service models, observed utilisation data, and so on.

Traditional ‘offline’ inventories need to be augmented with additional products to support this (and I'm sure Amdocs/Ericsson/NEC/Oracle have their offerings), or you assume 'the network is the inventory' and make the OSS an analytics-centric, rather than database-centric, application.

Inventory is an essential part of an OSS stack, but the area of innovation and exciting new products/projects will not be concerned with which timeslot is allocated to which customer. The exciting stuff will be dynamic analytics of the current network state, service profiles, traffic flows, what-if growth analysis, predictive failure analysis, service feasibility/profitability analysis…

The Big OSS products developed in the 1990s and deployed in the 2000s can be categorised as 'inventory-centric fulfilment' solutions. I believe the next ten years will see a shift back to ‘the network as the inventory’ for real-time utilisation and performance data. Business-critical OSS solutions will then be driven by ‘big data’ network analytics that make sense of this data and deliver actionable, proactive insight into the network state and customer experience.