Data sheet

HPE NonStop Database Analyzer

Introducing HPE NonStop Database Analyzer (NSDA), a new way to analyze, correlate, and visualize your data workload in real time. HPE NSDA is a leap forward in optimizing your database workload while seamlessly serving both transactional and analytical requirements.

Data analytics applied to your query workload

Why understanding your query workload matters
You are a DBA or IT director and you have a well-crafted vision of how to make valuable data available to different business units. Chances are you do a good job at it, but you end up a victim of your own success, with many applications accessing the data with conflicting requirements. Typically, it is a combination of transactional and analytical requirements. This means not only simple queries - inserts, updates, and deletes - but also more complex queries with joins involving multiple tables and, sometimes, very convoluted predicate clauses where you cannot tell whether the query even makes sense just by looking at it.
Can you control user queries?
If all queries on your database are programmed from an application, then you have the chance to review and optimize those queries. This is the case with embedded SQL, where queries are hard-coded into the program. However, if users run ad hoc queries on the database with business intelligence tools like Business Objects or Informatica, chances are these users will come up with all types of queries. No matter what, users will sometimes execute queries that are bad from a database efficiency or DBA point of view. There is not a lot you can do about it unless you have a way to locate and control those queries, manually or automatically, and deprioritize, queue, or even cancel them.
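To make the idea of cancelling a runaway ad hoc query concrete, here is a minimal sketch, not tied to NSDA or NonStop SQL, using Python's standard sqlite3 module. The table, the deliberately expensive Cartesian self-join, and the 0.5-second time budget are all illustrative assumptions: a watchdog gives the statement a time budget and interrupts it when the budget is exceeded.

```python
import sqlite3
import threading

# Illustrative in-memory database; check_same_thread=False lets the
# watchdog (main thread) interrupt a statement running in the worker.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(2000)])
conn.commit()

result = {}

def run_bad_query():
    # A Cartesian self-join: the kind of expensive ad hoc query
    # a DBA would want to locate and control.
    try:
        conn.execute("SELECT COUNT(*) FROM t a, t b, t c").fetchone()
        result["status"] = "completed"
    except sqlite3.OperationalError:
        # interrupt() surfaces as an OperationalError in the worker.
        result["status"] = "cancelled"

worker = threading.Thread(target=run_bad_query)
worker.start()
worker.join(timeout=0.5)   # give the query a short time budget
if worker.is_alive():
    conn.interrupt()       # cancel the in-flight statement
worker.join()
print(result["status"])    # the runaway join exceeds its budget
```

A production workload manager would of course base such decisions on richer signals than elapsed time (estimated cost, user identity, resource consumption), and might deprioritize or queue a query instead of cancelling it, but the control loop is the same shape.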
This common problem has prompted many different approaches in the industry, such as exporting online transaction processing (OLTP) data into dedicated analytical databases or, more recently, into NoSQL databases to further address scalability problems for Big Data. However, this is far from an ideal solution. Managing multiple copies of the same data has many ramifications: additional databases and tools to manage, security concerns, data synchronization, and compliance. What if the users want real-time data? What if there is a gap between the replicated databases? What if users want to use SQL tools?
A more recent trend, however, is a return to a simpler and more traditional approach. What if we could keep only one copy of the data for both OLTP and analytical purposes? More commonly known as hybrid transactional and analytical processing (HTAP), this approach avoids maintaining multiple copies of the data while leveraging the large tool ecosystem available today for relational databases. It is an attractive approach considering it can significantly reduce costs, as you do not have to pay for additional licenses or maintain multiple DBMSs, duplicated storage infrastructure, and data integration software.