Friday, 17 October 2014

Unlocking Insight – How to Extract User Experience by Complementing Splunk

Splunk is a great Operational Intelligence solution capable of processing, searching and analyzing masses of machine-generated data from a multitude of disparate sources. By complementing it with an APM solution you can deliver insights that provide value beyond the traditional log analytics Splunk is built upon:
True Operational Intelligence with dynaTrace and Splunk for Application Monitoring

Operational Intelligence: Let Your Data Drive Your Business

In a nutshell, the purpose behind Operational Intelligence is the ability to make well-informed decisions quickly based on insights gained from business activity data, with data sources ranging anywhere from applications to the infrastructure to social media platforms.
Analyze data from a multitude of disparate data sources with Splunk (courtesy of splunk.com)
Splunk’s ability to reliably process and fuse large volumes of continuously streamed and discretely pushed event data with masses of historical data at low latency helps businesses continuously improve their processes, detect anomalies and deficiencies, and discover new opportunities.
Many industries are unlocking the insight hidden in their log data. Financial services companies, for example, use Splunk to build dashboards on infrastructure log files over long periods of time; understanding these trends allows smarter decisions to be made. Such analysis has critical impact when the applications involved transmit billions of dollars a day.
Financial services companies are not the only ones taking advantage of this level of log analysis. SaaS companies are using Splunk to analyze log data from many siloed apps hosted for their customers, all with separate system profiles. Splunk allows them to set up custom views with insights and alerts on all their separate application infrastructures.
Why complement Splunk with dynaTrace?
“So, with a solution like Splunk, gaining insights from all our data will be a snap, right?” Unfortunately not. What if I told you that you are essentially building your insights on masses of machine-generated log data? Let’s discuss why this matters.
Machine-generated data in the “Big Data Pyramid” (courtesy of hadoopilluminated.com)
In Big Data parlance, machine-generated data, as opposed to human-generated data, is data generated by a computer process without human intervention, typically in large quantities. It originates from sources such as applications, application servers, web servers and firewalls, and thus often takes the form of traditional log data. However, unstructured log data is not exactly convenient for driving an analytics solution, because it requires you to:

1. Tell your Solution What Matters to You

Because log data is essentially unstructured, you cannot easily access the various bits and pieces of information encoded into a log message. You will need to teach your analytics solution the patterns by which any valuable information can be identified for later search and analysis:
Identify bits and pieces of valuable information inside log messages
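To make that concrete, here is a minimal Python sketch of the kind of extraction pattern you have to teach your analytics solution before any of these fields become searchable. The log line and its field layout are made up for illustration:

```python
import re

# Hypothetical log line; the field layout is an assumption for illustration.
log_line = ('2014-10-17 09:12:03,451 WARN [checkout-7] PaymentService - '
            'payment declined user=jdoe amount=149.90 currency=EUR code=51')

# The solution cannot search these fields until you describe where they
# live inside the message -- here expressed as a single regex.
pattern = re.compile(
    r'(?P<timestamp>\S+ \S+) (?P<level>\w+) \[(?P<thread>[^\]]+)\] '
    r'(?P<logger>\S+) - (?P<message>.+?) '
    r'user=(?P<user>\S+) amount=(?P<amount>[\d.]+) '
    r'currency=(?P<currency>\w+) code=(?P<code>\d+)'
)

fields = pattern.match(log_line).groupdict()
print(fields['user'], fields['amount'], fields['code'])
```

Every new message format means another pattern like this one, kept in sync with the code that writes the log.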

2. Reconsider your Application Logging Strategy

While there is not much you can do about how your firewall logs data, you will need to put a lot of effort into designing and maintaining a thorough logging strategy that serves all the information you will want to have monitored for your application. However, you may want to contemplate whether you really want to make this effort, for a variety of reasons:
  • Semantic Logging is, undoubtedly, a useful concept around writing log messages specifically for gathering analytics data that also emphasizes structure and readability. However, it can help improve your logging only where you own the code, and thus leaves out code from any third-party libraries.
  • Operational Intelligence solutions rely on you to provide context for your log messages, as outlined in Splunk’s Logging Best Practices. Only then will you be able to correlate events of a particular user transaction and understand the paths your users are taking through your application. Again, context cannot be retained easily once you leave your code.
  • Efforts to establish and maintain a robust logging strategy that delivers must be aligned with ongoing development activities. You would also need to make sure that what your strategy provides is kept in sync with the expectations of your Operational Intelligence solution. If in doubt, and you had better be, you will want to enforce automated testing of your strategy to verify your assumptions.
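To illustrate the context problem from the list above: correlating all events of one user transaction typically means threading a transaction ID through every single log call yourself. A minimal Python sketch, with hypothetical logger names and fields:

```python
import logging
import uuid

logging.basicConfig(format='%(asctime)s %(levelname)s txn=%(txn)s %(message)s')
log = logging.getLogger('shop')  # hypothetical application logger
log.setLevel(logging.INFO)

def handle_request(user):
    # Attach a per-transaction ID so downstream log analytics can
    # correlate all events belonging to one user transaction.
    txn_log = logging.LoggerAdapter(log, {'txn': uuid.uuid4().hex[:8]})
    txn_log.info('request started for user=%s', user)
    txn_log.info('request finished')

handle_request('jdoe')
```

Note how every call site must carry the adapter along, and how the scheme breaks down as soon as execution enters third-party code you do not own – exactly the cross-cutting concern described above.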
What would this mean for you? Establishing and maintaining an application logging strategy that delivers actionable insights to your analytics solution involves a lot of disciplined work from everyone involved:
  • Developers: need to maintain a logging strategy whose messages are scattered all over their code base: whenever functionality is added or changed, several parts of the application need to be augmented. This makes developing a thorough logging strategy a poorly maintainable, time-consuming and thus error-prone cross-cutting concern.
  • Test Automation Engineers: should enforce automated testing to assert that the assumptions of the Operational Intelligence solution on the setup of the input log data hold.
  • Product Owners and Software Architects: need to cope with a decrease in velocity when they buy into developing and maintaining a thorough logging strategy. They also need to accept that the visibility into user transactions ends where the ownership of their code ends.
  • Operations: continuously need to test and verify the correct functionality of the overall solution.
Why am I telling you all this? Because we have a lot of customers who were already using Splunk before they implemented dynaTrace. They had a really hard time correlating log messages due to the lack of context and were unable to answer one of the most important questions: “how many users were affected by this particular application failure?” We were able to address these worries by delivering these features out of the box:
  • They could keep their talent focused on critical development, testing and operations, since there is no need to change code – no logging, testing or verification effort is involved.
  • They could quickly get to the root cause of performance issues because they had full end-to-end context for all user interactions, including any third-party code. This delivers full transaction visibility, including method arguments, HTTP headers, HTTP query string parameters, etc.
  • They had analytics customized to their critical focus areas because they could decide which data needs to be captured.

Easy Steps to True Operational Intelligence with Splunk and dynaTrace

  1. Get and install Splunk
  2. Get and install the 15 Days Free Trial of dynaTrace
  3. Get and install the Compuware APM dynaTrace for Splunk App
  4. Enable the Real-Time Business Transactions Feed in dynaTrace:
    Enable the Real-Time Business Transaction Feed in dynaTrace
  5. Selectively export Business Transactions data to Splunk in dynaTrace:
    Configure a particular Business Transaction to export data
That’s it. You may refer to the documentation of our dynaTrace for Splunk App for additional information. Here is a selection of insights you could already get today:

Dashboard #1: Top Conversions by Country, Top Landing and Exit Pages

Top Conversions by Country, Top Landing and Exit Pages

Dashboard #2: Visits Across the Globe

Visits across the globe

Dashboard #3: KPIs

KPIs: Conversion Rates, Bounce Rates, Average Visit Duration, etc.

Dashboard #4: Transaction Timeline and Details

Transactions timeline and details
However, there is more to it: should you feel the need to drill down deeper into a particular transaction to understand the root cause of an issue, or precisely who was affected, you can fire up the PurePath in dynaTrace from within Splunk:
Drill down to dynaTrace from raw transactions data in Splunk
…and see deep analysis happen:
Deeply analyzing a transaction in dynaTrace

Conclusion

The road to true Operational Intelligence can be a tough one – but it does not have to be! By integrating dynaTrace with Splunk you no longer have to rely on application logging, you do not need any code changes, and nothing slows you down. Instead, the integration helps accelerate your business by providing true visibility into your applications, independent of whether it is your machine and your code or not. This level of end-user visibility enables you to communicate in terms of what matters most to your organization: customer experience.
Should you want to know more about the inherent limitations of logging, you might want to refer to one of my recent articles “Software Quality Metrics for your Continuous Delivery Pipeline – Part III – Logging”.

Monitoring Hadoop beyond Ganglia

Over the last couple of months I have been talking to more and more customers who are either bringing their Hadoop clusters into production or have already done so and are now getting serious about operations. This leads to some interesting discussions about how to monitor Hadoop properly, and one question pops up quite often: do they need anything beyond Ganglia? And if so, what should they do beyond it?

The Basics

As in every other system, monitoring in a Hadoop environment starts with the basics: system metrics – CPU, disk, memory, you know the drill. Of special importance in a Hadoop system is a well-balanced cluster; you do not want some nodes to be much more (or less) utilized than others. Besides CPU and memory utilization, disk utilization and of course I/O throughput are of high importance. After all, the most likely bottleneck in a big data system is I/O – whether during ingress (network and disk), while moving data around (e.g. the MapReduce shuffle on the network), or in straightforward reads and writes to disk.
The problem in a Hadoop system is of course its size. That is nothing new for us – some of our customers monitor well beyond 1,000 JVMs with CompuwareAPM. The “advantage” of a Hadoop system is its relative conformity: every node looks pretty much like every other. This is what Ganglia leverages!
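As a sketch of what “well-balanced” means in practice, here is a minimal imbalance check over per-node utilization samples. The node names and numbers are made up; real values would come from your monitoring system:

```python
# Hypothetical per-node CPU utilization samples (percent).
node_util = {'node01': 41.0, 'node02': 44.5, 'node03': 39.8, 'node04': 87.2}

mean = sum(node_util.values()) / len(node_util)
variance = sum((v - mean) ** 2 for v in node_util.values()) / len(node_util)
cov = (variance ** 0.5) / mean  # coefficient of variation: 0 = perfectly balanced

# Flag nodes running far above the cluster mean -- a crude imbalance check.
hot = [n for n, v in node_util.items() if v > 1.5 * mean]
print('CoV: %.2f, hot nodes: %s' % (cov, hot))
```

The same spread check applies equally to disk utilization and I/O throughput, the metrics called out above.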

Cluster Monitoring with Ganglia

What Ganglia is very good at is providing an overview of how a cluster is utilized. The load chart is particularly interesting:
This chart shows the CPU load on a 1000 Server cluster that has roughly 15.000 CPUs
It tells us the number of available cores in the system, the number of running processes (in theory a core can never handle more than one process at a time) and the 1-min load average. If the system were fully utilized, the 1-min load average would approach the total number of CPUs. Another view of this is the well-known CPU utilization chart:
CPU Utilization over the last day. While the utilization stays well below 10% we see a lot of I/O wait spikes.
While the load chart gives a good overall impression of usage, the utilization chart tells us how the CPUs are used. Typical CPU charts show a single server; Ganglia specializes in showing whole clusters (the picture shows CPU usage of a 1,000-machine cluster). In the depicted chart we see that the CPUs experience a lot of I/O wait spikes, which points towards heavy disk I/O. Basically, it seems disk I/O is the reason we cannot utilize our CPUs better at these times – but in general our cluster is well underutilized in terms of CPU.
Trends are also easy to understand, as can be seen in this memory chart over a year.
Memory capacity and usage over a year
All this looks pretty good, so what is missing? The “so what” and the “why” ;-) . If my memory demand is growing, I have no way of knowing why it is growing. If the CPU chart tells me that I spend a lot of time waiting, it does not tell me what to do about it, or why that is so. These questions are beyond the scope of Ganglia.

What about Hadoop specifics?

Ganglia also has a Hadoop plugin, which basically gives you access to all the usual Hadoop metrics (unfortunately a comprehensive list of Hadoop metrics is really hard to find – I would appreciate it if somebody posted a link in the comments). There is a good explanation of what is interesting on Edward Capriolo’s page: JoinTheGrid. Basically you can use those metrics to monitor the capacity and usage trends of HDFS and the NameNodes, and also how many jobs, mappers and reducers are running.
Capacity of the DataNodes over time
Capacity of the Name Nodes over time
The DataNode Operations give me an impression of I/O pressure on the Hadoop cluster
All these charts can of course easily be built in any modern monitoring or APM solution like CompuwareAPM, but Ganglia gives you a simple starting point; and it’s free as in beer.
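As an aside, the raw numbers behind such charts can be pulled from Hadoop's JMX JSON servlet. Here is a minimal Python sketch; the host, port and bean/field names are assumptions based on a Hadoop 1.x NameNode and should be verified against your version:

```python
import json
import urllib.request

def fetch_jmx(url):
    """Fetch the JSON bean list from a Hadoop daemon's /jmx servlet."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)['beans']

def hdfs_usage(beans):
    """Pull capacity metrics out of the FSNamesystemState bean, if present."""
    for bean in beans:
        if bean.get('name', '').endswith('FSNamesystemState'):
            return bean['CapacityTotal'], bean['CapacityUsed']
    return None

# In a live cluster you would call something like (host/port assumed):
#   beans = fetch_jmx('http://namenode.example.com:50070/jmx')
# Abridged sample payload in the shape the servlet returns:
sample = [{'name': 'Hadoop:service=NameNode,name=FSNamesystemState',
           'CapacityTotal': 8 * 2**40, 'CapacityUsed': 3 * 2**40}]
total, used = hdfs_usage(sample)
print('HDFS usage: %.1f%%' % (100.0 * used / total))
```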
What is missing, again, is the “so what”. If my jobs are running a lot longer than yesterday, what should I do? Why do they run longer? A Hadoop expert might dig into ten different charts around I/O, network and spilling, look at log files among other things, and make an educated guess as to what the problem might be. But we aren't all experts, nor do we have the time to dig into all of these metrics and log files all the time.
This is why we and our customers are moving beyond Ganglia – to answer the “why” and the “so what” within time constraints.

Beyond the Basics #1 – Understanding Cluster Utilization

A use case we often get from customers is that they want to know which users or which pools (in the case of the fair scheduler) are responsible for how much of the cluster utilization. LinkedIn just released White Elephant, a tool that parses MapReduce logs, builds some nice dashboards and shows you which of your users occupy how much of your cluster. This is of course based on log file analysis and thus fine for after-the-fact analysis, but not for monitoring. With proper tools in place we can do the same thing in near real time.
The CPU Usage in the Hadoop Cluster on per User basis
In this example I wanted to monitor which user consumed how much of my Amazon EMR cluster. If we see a user or pool that occupies a lot of the cluster, we can of course also see which jobs are running and how much of the cluster they occupy.
The CPU Usage in the Hadoop Cluster on per Job basis
And this will also tell us whether that job has always been there and just uses a lot more resources now. This would be our cue to start analyzing what has changed.
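The aggregation behind such a per-user view can be sketched in a few lines, assuming a feed of (user, job, CPU-seconds) records – the records and field layout here are made up for illustration:

```python
from collections import defaultdict

# Hypothetical job records as a monitoring feed might deliver them:
# (user, job name, CPU seconds consumed).
jobs = [
    ('etl',       'daily-import',    5400.0),
    ('etl',       'dedup',           1200.0),
    ('analytics', 'clickstream-agg', 9800.0),
    ('adhoc',     'one-off-query',    300.0),
]

# Roll CPU consumption up per user.
cpu_by_user = defaultdict(float)
for user, job, cpu in jobs:
    cpu_by_user[user] += cpu

total = sum(cpu_by_user.values())
for user, cpu in sorted(cpu_by_user.items(), key=lambda kv: -kv[1]):
    print('%-10s %7.0f cpu-s  %5.1f%%' % (user, cpu, 100.0 * cpu / total))
```

Grouping by pool or by job instead of by user is the same roll-up with a different key.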

Beyond the Basics #2 – Understanding why my jobs are slow(er)

If we want to understand why a job is slow, we need to look at a high-level breakdown first: in which phase of the MapReduce job do we spend the most time, and do we spend more time than yesterday? Understanding these timings in context with the respective job counters, like Map Input Records or Spilled Records, gives us an understanding of why a phase took longer.
Overview of the time spent in different phases and the respective input/output counters
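A baseline comparison of phase timings and counters against yesterday's run can be sketched like this (all timings and counter values are invented for illustration):

```python
# Hypothetical phase timings (seconds) and counters for the same job on
# two consecutive days; the keys mirror the counters mentioned above.
yesterday = {'map': 620, 'spill': 80,  'shuffle': 150, 'reduce': 240,
             'map_input_records': 10000000, 'spilled_records': 2000000}
today     = {'map': 640, 'spill': 310, 'shuffle': 160, 'reduce': 250,
             'map_input_records': 10100000, 'spilled_records': 9500000}

# Flag anything that moved by more than 25% relative to yesterday.
for key in yesterday:
    change = (today[key] - yesterday[key]) / yesterday[key]
    if abs(change) > 0.25:
        print('%s changed %+.0f%%' % (key, 100 * change))
```

In this made-up run only spill time and spilled records jump, while input stays flat – pointing at a code or configuration change rather than more data.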
At this point we will already have a pretty good idea of what happened. Either we simply have more data to crunch (more input data), or a portion of the MapReduce job consumes more CPU (a code change?), or we spill more records to disk (a code or Hadoop config change?). We might also detect an unbalanced cluster in the performance breakdown.
This job is executing nearly exclusively on a single node instead of distributing
In this case we want to check whether all the involved nodes processed the same amount of data…
Here we see a wide range from minimum to average to maximum on mapped input and output records – the data is not balanced
…or whether the difference can again be found in the code (different kinds of computations). If we are running against HBase, we might of course have an issue with HBase performance or distribution:
At the beginning of the job only a single HBase region Server consumes CPU while all others remain idle
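The balance check on mapped input records mentioned above can be sketched as a simple min/avg/max spread test (the per-task counts are made up for illustration):

```python
# Hypothetical per-task mapped input record counts. A wide min/max
# spread indicates the data is not balanced across nodes.
map_input_records = [120000, 118500, 2950000, 121300, 119800]

lo, hi = min(map_input_records), max(map_input_records)
avg = sum(map_input_records) / len(map_input_records)
skew = hi / avg  # how far the heaviest task sits above the average

print('min=%d avg=%.0f max=%d skew=%.1fx' % (lo, avg, hi, skew))
if skew > 3:
    print('input data is heavily skewed -- check partitioning and key distribution')
```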
On the other hand, if a lot of mapping time is spent in the garbage collector, then you should maybe invest in larger JVMs.
The Performance Breakdown of this particular job shows considerable time in GC suspension
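The remedy just mentioned – larger task JVMs – is a Hadoop configuration change. A sketch of what that might look like: mapred.child.java.opts is the classic MRv1 property (on YARN the equivalents are mapreduce.map.java.opts and mapreduce.reduce.java.opts), and the heap size shown is only an example, not a recommendation:

```xml
<!-- mapred-site.xml: raise the task JVM heap to reduce GC pressure -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
```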
If spilling data to disk is where we spend our time, we should take a closer look at that phase. It might turn out that all of our time is spent on disk wait:
If the Disk were the bottleneck we would see it on disk I/O here
Now, if disk write is our bottleneck, then really the only thing we can do is reduce the map output records. Adding a combiner will not reduce disk writes (it will actually increase them, read here). In other words, combining only optimizes the shuffle phase, i.e. the amount of data sent over the network, but not spill time!
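The combiner point can be illustrated with a toy word-count in Python – this is a simplification of Hadoop internals, meant only to show which side of the pipeline a combiner shrinks:

```python
from collections import Counter

# Toy word-count: the mapper emits one (word, 1) pair per token.
map_output = [('a', 1), ('b', 1), ('a', 1), ('a', 1), ('b', 1), ('c', 1)]

# A combiner pre-aggregates the mapper's output before the shuffle...
combined = Counter()
for word, one in map_output:
    combined[word] += one

print('records shuffled without combiner:', len(map_output))  # 6
print('records shuffled with combiner:   ', len(combined))    # 3
# ...so it shrinks the data sent over the network, but the mapper still
# produced (and spilled) the full map_output -- which is why combining
# does not reduce spill time.
```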
And at the very detailed level, we can look at single task executions and understand in detail what is really going on:
The detailed data about each Map and Reduce Task Attempt, as well as the spills and shuffles

Conclusion

Ganglia is a great tool for high-level monitoring of your Hadoop cluster utilization, but it is not enough. The fact that everybody is working on additional means to understand the Hadoop cluster (Hortonworks with Ambari, Cloudera with their Manager, LinkedIn with White Elephant, the Starfish project…) shows that a lot more is needed beyond simple monitoring. Even those more advanced monitoring tools do not always answer the “why”, though, which is what we really need. This is where the Performance Management discipline can add a lot of value and really help you get the best out of your Hadoop cluster. In other words: don't just run Hadoop jobs at scale, run them efficiently and at scale!