baseline

As every sysadmin knows, the systems we manage are subject to change. New applications are installed, existing applications see their workload change, and similar events are all part of an evolving system. I’d venture that very few systems don’t grow or evolve during their lifetime. Sometimes you know the expected evolution right at the beginning of the system’s life, but more often things aren’t planned well enough to give you that foresight. We need a way to keep up with the system’s behavior, and to use that knowledge to forecast how it will behave in a month or a year’s time.

There are a variety of tools out there, but my favorites are Nagios and Cacti. Both are well known in the industry, both are actively developed, and both can use SNMP to collect their information. For this article I’m mostly referring to Cacti, as it allows us to gather information and display it as graphs. Nagios is used to generate alarms when some of these values exceed certain thresholds, but that’s a whole different article…
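As a rough illustration of the kind of SNMP polling these tools do under the hood, here is a minimal sketch that queries a host’s one-minute load average with the net-snmp command-line tools. The hostname, community string, and choice of OID are assumptions for the example; adjust them to your environment.

```python
import subprocess

# Assumptions for this sketch: SNMP v2c, a "public" community string,
# and the UCD-SNMP-MIB one-minute load average OID (laLoad.1).
HOST = "server.example.com"                 # hypothetical host
COMMUNITY = "public"                        # hypothetical community string
LOAD1_OID = ".1.3.6.1.4.1.2021.10.1.3.1"    # 1-minute load average

def snmp_get(host, community, oid):
    """Run snmpget (from net-snmp) and return the raw value as a string."""
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    print("1-min load:", snmp_get(HOST, COMMUNITY, LOAD1_OID))
```

Cacti’s own pollers do essentially this on a schedule and store the results for graphing.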

From the very beginning of the system’s life, we should start collecting performance values. These include basic information such as CPU load, memory usage, network traffic (for every interface), and disk usage (for every partition in use), as well as information related to the system’s use. Depending on which applications we have running, this can be a mix of metrics to keep up with. For a webserver, for example, we should monitor the number of processes, the number of requests per second, the average time per request, and any other value that helps us understand the system’s performance. For a database system, we should monitor the number of queries per second, the number of open files, and the average time per query, among others. Database systems expose a large number of items we can monitor, and the more we collect, the better the picture we will have of our system.
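As a sketch of what collecting the system-level part of this baseline might look like, here is some Python that uses the psutil library to sample the basic values above and append them to a CSV file. The output path and the one-shot sampling are assumptions for illustration; in practice a tool like Cacti would poll and store these values for you.

```python
import csv
from datetime import datetime

import psutil  # third-party: pip install psutil

BASELINE_CSV = "/var/tmp/baseline.csv"   # hypothetical output path

def sample():
    """Collect one sample of the basic system metrics."""
    net = psutil.net_io_counters()
    return {
        "timestamp": datetime.now().isoformat(timespec="seconds"),
        "load1": psutil.getloadavg()[0],           # 1-minute load average
        "cpu_pct": psutil.cpu_percent(interval=1), # CPU usage over 1 second
        "mem_pct": psutil.virtual_memory().percent,
        "disk_root_pct": psutil.disk_usage("/").percent,
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
    }

if __name__ == "__main__":
    row = sample()
    with open(BASELINE_CSV, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:          # write a header on the first run
            writer.writeheader()
        writer.writerow(row)
```

Application-level metrics (requests per second, queries per second, and so on) would come from the applications themselves, but the idea is the same: sample regularly and keep the history.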

Mixing system information with application performance metrics is very useful, as we’ll see next. At first this information doesn’t seem to help much, but this is the system’s baseline, akin to its heartbeat: it shows you how the system behaves as a whole, and how it evolves as you add new clients or new functionality.

As you gather data, you will see trends emerging from the graphs. In all probability you shouldn’t expect many flat lines; you’ll see a lot more sloped ones. These allow you to forecast your system’s behavior into the future. This is invaluable information, allowing you to pinpoint when you will run into capacity problems, and giving you data to back up your request for new servers. You can also show management what will happen if you don’t get them.
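To make that kind of forecast concrete, here is a minimal sketch, assuming you have exported a series of daily disk-usage percentages from your graphs, that fits a straight line with numpy and estimates when the partition would fill up. The sample numbers are invented for illustration.

```python
import numpy as np

# Hypothetical daily disk-usage samples (percent full), e.g. exported from Cacti.
days = np.arange(10)
disk_pct = np.array([61.0, 61.4, 62.1, 62.5, 63.2, 63.8, 64.1, 64.9, 65.3, 66.0])

# Fit a straight line: usage ~ slope * day + intercept.
slope, intercept = np.polyfit(days, disk_pct, 1)

# Days until the trend line reaches 100% (only meaningful if slope > 0).
if slope > 0:
    days_to_full = (100.0 - intercept) / slope - days[-1]
    print(f"Growing ~{slope:.2f}%/day; roughly {days_to_full:.0f} days until full.")
else:
    print("No upward trend detected.")
```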
Correlating application metrics with system metrics is what really allows you to make an informed decision about capacity. You can ask questions like “what will happen if the number of requests per second increases by 20% or 40%?” This is the main advantage of baselines for management: they give you a visual representation of how your systems behave and of where you stand in terms of capacity.
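To sketch how that “what if” question can be answered from the baseline, the example below assumes you have paired samples of requests per second and CPU usage, fits a simple linear relationship, and projects CPU usage at 120% and 140% of the current load. The data is made up for illustration, and a straight-line fit is only a rough first approximation.

```python
import numpy as np

# Hypothetical paired samples from the baseline: requests/s vs CPU usage (%).
req_per_s = np.array([120, 150, 180, 210, 240, 270, 300])
cpu_pct   = np.array([22.0, 27.5, 33.0, 38.0, 44.5, 49.0, 55.0])

# Assume a roughly linear relationship over the observed range.
slope, intercept = np.polyfit(req_per_s, cpu_pct, 1)

current = req_per_s[-1]
for growth in (1.2, 1.4):
    projected = slope * current * growth + intercept
    print(f"At {growth:.0%} of current load (~{current * growth:.0f} req/s): "
          f"~{projected:.0f}% CPU")
```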

Baselines also serve another purpose, one much dearer to us sysadmins. They show us what our systems look like under “normal” usage. If there is a problem somewhere, you will probably see a deviation in the metrics you are collecting, and sometimes that will help you pinpoint the exact cause. Have you been slashdotted? That would show up as a significant increase in TCP connections, with higher network traffic leading to higher system load, and very probably using up all your memory. Having a ‘picture’ of the system under normal load is something you truly appreciate once you’re in trouble, wondering what is happening. But by then it is too late to start collecting data to analyze. So start now, start building your baseline today, and I assure you it will help you in the future.
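As a closing sketch of how a baseline helps spot that kind of deviation, here is a minimal example that compares today’s value of a metric against the mean and standard deviation of its recent history and flags anything unusually far out. The three-standard-deviation threshold and the sample data are assumptions for illustration.

```python
import statistics

def is_anomalous(history, current, n_sigma=3.0):
    """Flag `current` if it lies more than n_sigma standard deviations
    away from the mean of the historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return stdev > 0 and abs(current - mean) > n_sigma * stdev

# Hypothetical baseline: established TCP connections sampled on recent days.
tcp_conn_history = [210, 195, 220, 205, 230, 215, 200, 225]

# Today's reading -- a sudden jump like a slashdotting stands out immediately.
today = 1450
if is_anomalous(tcp_conn_history, today):
    print(f"TCP connections ({today}) deviate sharply from the baseline.")
else:
    print("Within normal range.")
```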