Oracle Database In-Memory – Why it Matters

OSS/BSS vendors have been saying for years that there is value in the transactional data their applications store. But so far, analysing that data has been neither easy nor affordable for many users unwilling to invest in the significant cost of add-on data warehousing or Big Data products. This month, Oracle Database In-Memory is released, promising a way to enable a standard Oracle-based OSS/BSS transactional database to deliver high-performance, real-time data analytics with the flick of a switch.

Two Types of Database

In very broad terms there are two types of database in popular use today. The first is the relational database, used by most OSS applications, which stores ‘rows’ of data representing objects such as devices, services, circuits and so on. Relational databases are great at understanding how things are related to each other (as the name suggests), such as which port a circuit uses, which services use a particular fibre, and so on. They are at the heart of most service fulfilment, inventory, and process-oriented OSS applications, and Oracle is probably the most popular relational database in use by CSPs.

The scale of data supported by popular Internet services and the requirements of Big Data have led to a different database type becoming more popular in recent years. These are column-store databases that organise data to make it easy and fast to scan and process large data sets. They’re not suited to understanding how individual objects relate to each other, but they are great for quick look-ups of simple data structures and analysis of large, relatively ‘flat’ data sets.

Relational Databases and OLTP for OSS

Relational databases are often used for what is known as On-Line Transaction Processing (OLTP), as transactions usually find, modify or create a few related objects at a time.
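As a rough sketch (the table and column names here are invented purely for illustration), a typical fulfilment-style transaction reads and updates a handful of related rows and commits:

    -- A typical OLTP transaction: read a few related rows,
    -- change one or two of them, commit.
    -- Table and column names are hypothetical.
    SELECT c.circuit_id, p.port_id, p.status
      FROM circuit c
      JOIN port p ON p.circuit_id = c.circuit_id
     WHERE c.circuit_id = 100042;

    UPDATE port
       SET status = 'ASSIGNED'
     WHERE port_id = 2001;

    COMMIT;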

If you have a database primarily used for OLTP, you also have a whole lot of data that you might want to run analysis on. An OSS inventory, used for service fulfilment transactions, is a gold mine of data that could be used for sales forecasting, capacity planning, performance analysis, and so much more.

It’s not just business intelligence or Big Data analytics. For example, both Amdocs and DonRiver have introduced modern, search-engine-style user interfaces for browsing OSS inventory systems. Even common search features require very quick scans through large amounts of data, against arbitrary data fields, matching arbitrary criteria.
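To give a feel for the kind of query involved (again with hypothetical names), a “search anything” feature ends up matching arbitrary text fields, which defeats any single pre-built index and forces wide scans:

    -- A search-style query across arbitrary fields; no single index
    -- covers it, so the database ends up scanning much of the table.
    SELECT device_id, name, site, vendor
      FROM device
     WHERE LOWER(name)   LIKE '%lon-core%'
        OR LOWER(site)   LIKE '%lon-core%'
        OR LOWER(vendor) LIKE '%lon-core%';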

There’s value in that relational data. If only you could analyse the data.

How to make your OSS Database do Big Data

Yes, you can run relational queries (using SQL) against your OSS database, but analytic queries over that relational data can take hours to complete.
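A typical culprit is an aggregate query like the sketch below (hypothetical inventory tables again): it touches millions of rows, and against a row-organised, disk-resident table with no supporting indexes it can run for a very long time.

    -- A capacity-planning style aggregate over the whole port table.
    SELECT site,
           vendor,
           COUNT(*) AS total_ports,
           SUM(CASE WHEN status = 'ASSIGNED' THEN 1 ELSE 0 END) AS ports_in_use
      FROM port
     GROUP BY site, vendor
     ORDER BY ports_in_use DESC;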

Until now, if you wanted to run analytics against a relational Oracle database, you had two options:

One. Copy the data to a separate database with different indexing, and possibly different table structures, to run the analytical queries. This lets you tune that database for analytics, and you could use existing in-memory products like Oracle TimesTen to speed things up further. The downsides are a greater integration cost and the latency in getting data from one database to the other: you may only be doing a nightly bulk update, meaning any analytics would be working on data hours out of date.

Two. Add additional indexes and/or tables to the relational database. The database is primarily structured for fast row (per-object) access and has indexes and table structures to support this. In the first instance, adding indexes to any columns that may be subject to analytics provides a performance boost. If necessary, restructuring some of the data into summary tables or dedicated column-data tables within the existing database further improves the performance of analytic queries. The big disadvantage is that this impacts the transactional performance of the database: every transaction that creates or modifies an object requires the database to maintain more indexes and more table entries, which then need to be written to disk. A high-performance OLTP database will always look to minimise such work, but adding the necessary infrastructure for analytics could double or triple the amount of work the database needs to do for every single update. A sketch of what this looks like follows below.
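Sketched with the same hypothetical names, option two amounts to extra indexes that every inventory change must now maintain, plus a summary structure that has to be refreshed:

    -- Analytics-only indexes: every INSERT/UPDATE on port now maintains
    -- these as well, which is where the OLTP overhead comes from.
    CREATE INDEX port_site_ix   ON port (site);
    CREATE INDEX port_status_ix ON port (status);

    -- A summary structure for the aggregate queries, refreshed on demand
    -- (or on a schedule), adding more work and more stale data.
    CREATE MATERIALIZED VIEW port_usage_summary
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
      AS SELECT site, vendor, status, COUNT(*) AS port_count
           FROM port
          GROUP BY site, vendor, status;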

Two Databases in One

Oracle Database In-Memory aims to solve this problem, right in your existing relational database.

Because it’s all the same data, just structured differently, Oracle have added the option to automagically maintain a column-store representation of your data. So, for comparatively little overhead (the column store is a second, in-memory representation of the same data; nothing extra is written to disk), you have the best of both worlds.

Oracle Database In-Memory also keeps the column-store data in memory, as far as possible, to give you extremely fast, real-time analytics.
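Enabling it is, broadly, a two-step exercise: reserve some memory for the column store, then flag the tables you care about. The sketch below uses the same hypothetical tables; the right sizing and compression choices will of course be specific to your system.

    -- Reserve part of the SGA for the in-memory column store
    -- (requires an instance restart).
    ALTER SYSTEM SET inmemory_size = 8G SCOPE = SPFILE;

    -- Flag the tables to be populated into the column store.
    ALTER TABLE port    INMEMORY PRIORITY HIGH;
    ALTER TABLE circuit INMEMORY MEMCOMPRESS FOR QUERY LOW;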

And the really good news is that you don’t have to change your existing applications. Analytic queries are automatically routed to the column store by the SQL query optimizer. Basically, if you run a query, Oracle is smart enough to work out which representation of your data can answer it fastest.
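One way to see this for yourself (hypothetical table again) is to look at the execution plan: a query that previously did a conventional full scan shows an in-memory access path once the table is populated in the column store.

    -- The application SQL is unchanged; only the access path differs.
    EXPLAIN PLAN FOR
      SELECT site, COUNT(*) FROM port GROUP BY site;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    -- Expect to see an operation such as:
    --   TABLE ACCESS INMEMORY FULL | PORT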

In practice, to make the most of Oracle Database In-Memory, you will need to do a bit of work. You will want to write new queries and reports that were infeasible or too slow before. It really opens up the database to new types of analytics and new applications which, of course, take work to implement.

But that’s fine. Many people are happy to do a bit of work to write some valuable new reports, without the need to go out and buy, integrate and configure a dedicated analytics database.

How Fast?

So, the big question: How much faster will this make your OLTP and analytic queries?

Most of the examples provided by Oracle talk about analytic queries that took hours running in just a few minutes or even seconds. The difference between running normal SQL against row-organised data sitting on a hard disk and analysing column-store data in memory is enormous.

At the launch event, Oracle also suggested that OLTP transactions, in a database cleared of now-unnecessary analytic-specific indexes and tables, could see a 2-3 times improvement in performance. Some OLTP queries may also benefit from the in-memory column data, as compatible queries are not uncommon in OSS/BSS transactions, particularly those that support users browsing data and running even simple reports from their workstations.

Your mileage will, of course, vary.

In fact, these two improvements mentioned by Oracle are almost mutually exclusive. You will see the greatest analytical performance improvement if you have no dedicated indexes supporting those queries today. You will see the greatest improvement in OLTP performance if you have a lot of analytics indexes in place today which can subsequently be removed.

So as usual the truth is somewhere in-between.
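Either way, if you do have analytics-only indexes and summary tables in your OLTP schema today, retiring them is straightforward once the column store covers those queries (hypothetical names from the earlier sketch):

    -- Retire analytics-only structures; OLTP transactions no longer
    -- have to maintain them on every insert or update.
    DROP INDEX port_site_ix;
    DROP INDEX port_status_ix;
    DROP MATERIALIZED VIEW port_usage_summary;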

Analytics for your OSS

Oracle probably won’t be your first choice if you’re building a dedicated analytics platform. But if you’re investing in relational database systems, then Oracle Database In-Memory becomes a natural choice for extending the system’s functionality.

Once you have invested in essential OSS systems that fill up a relational database with useful information, there’s a great desire to make more use of that data. But analytics is often still a nice-to-have capability, attracting little budget after the big OSS investment.

Many OSS experts are happy to tweak a database, write a few queries, and drive data to an affordable reporting tool. The problem to date is that this approach has resulted in poor analytics performance and has impacted business-critical OSS transaction performance.

Oracle Database In-Memory solves this problem, making Big Data style analytics on your relational data a low-effort, low-cost possibility.

This matters. Oracle Database In-Memory changes the economics of Big Data analytics for OSS, and will encourage both home-grown innovation by CSPs and product development by OSS vendors.
