Introduction

One of the easiest ways to stay ahead of hard drive failures is to take advantage of SMART monitoring.

SMART, or Self-Monitoring, Analysis, and Reporting Technology, is a monitoring system built into computer hard disks that detects and reports various indicators of reliability in the hope of anticipating failures.

Note that this guide focuses on Linux servers; Windows users should instead download the SeaTools package from Seagate, which covers 99 percent of the hard drives used at Superb.

Prerequisites

You will need to have the smartmontools package installed on your server. Note that this package is included in the base installation of most Linux distributions.
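
As a quick sanity check, the following sketch reports whether the smartctl binary from smartmontools is already on your PATH; the package-manager commands mentioned in the message are examples and vary by distribution:

```shell
# Report whether smartctl (from smartmontools) is installed.
# This only checks; it does not install anything.
if command -v smartctl >/dev/null 2>&1; then
    MSG="smartmontools is already installed"
else
    MSG="smartctl not found: install the smartmontools package (e.g. apt-get or yum install smartmontools)"
fi
echo "$MSG"
```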

How To Use

To begin, run the command smartctl -a /dev/hda as root, using the correct path to your disk. If SMART is not enabled on the disk, you must first enable it with the -s on option.
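
The two steps can be combined in a short root-only script; this is a sketch, and the default device path is just an example (on modern systems SATA disks usually appear as /dev/sda rather than /dev/hda):

```shell
#!/bin/sh
# Enable SMART on the disk (harmless if it is already enabled), then
# print the full report. Must be run as root; pass your disk as the
# first argument, e.g.  ./smart-report.sh /dev/sda
DISK=${1:-/dev/hda}
smartctl -s on "$DISK" >/dev/null   # enable SMART if it is off
smartctl -a "$DISK"                 # identity, health, attributes, logs
```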

The first part of the output (Listing 1) lists model/firmware information about the disk; this one is an IBM/Hitachi GXP-180 example. Smartmontools has a database of disk types; if your disk is in the database, smartctl may be able to interpret the raw Attribute values correctly.

Listing 1. Output of smartctl -i /dev/hda

Device Model:     IC35L120AVV207-0
Serial Number:    VNVD02G4G3R72G
Firmware Version: V24OA63A
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   6
ATA Standard is:  ATA/ATAPI-6 T13 1410D revision 3a
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

The second part of the output (Listing 2) shows the results of the health status inquiry. This is the one-line Executive Summary Report of disk health; the disk shown here has passed. If your disk health status is FAILING, back up your data immediately. The remainder of this section of the output provides information about the disk’s capabilities and the estimated time to perform short and long disk self-tests.

The third part of the output (Listing 3) lists the disk’s table of up to 30 Attributes (from a maximum set of 255). Remember that Attributes are no longer part of the ATA standard, but most manufacturers still support them. Although SFF-8035i doesn’t define the meaning or interpretation of Attributes, many have a de facto standard interpretation. For example, this disk’s 13th Attribute (ID #194) tracks its internal temperature.

Studies have shown that lowering disk temperatures by as little as 5°C significantly reduces failure rates, though this is less of an issue for the latest generation of fluid-bearing drives. One of the simplest and least expensive steps you can take to improve disk reliability is to add a fan that blows cooling air directly onto or past the system's disks.

Each Attribute has a six-byte raw value (RAW_VALUE) and a one-byte normalized value (VALUE). In this case, the raw value stores three temperatures: the disk's current temperature in Celsius (29), plus its lifetime minimum (23) and maximum (33). The format of the raw data is vendor-specific and not specified by any standard. To track disk reliability, the disk's firmware converts the raw value to a normalized value ranging from 1 to 253. If this normalized value is less than or equal to the threshold (THRESH), the Attribute is said to have failed, as indicated in the WHEN_FAILED column. Here the column is empty because none of these Attributes has failed. The lowest (WORST) normalized value is also shown; it is the smallest value attained since SMART was enabled on the disk.

The TYPE of the Attribute indicates whether Attribute failure means the device has reached the end of its design life (Old_age) or signals an impending disk failure (Pre-fail). For example, disk spin-up time (ID #3) is a prefailure Attribute. If this (or any other prefail Attribute) fails, disk failure is predicted in less than 24 hours.
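
The VALUE-versus-THRESH comparison can be scripted. The sketch below applies it to two sample lines in the column layout printed by smartctl -A; the sample numbers are illustrative, and against a live disk you would pipe smartctl -A /dev/hda into the same awk:

```shell
# Flag any Attribute whose normalized VALUE has fallen to or below its
# THRESH. The two sample lines mimic smartctl -A output; the numbers
# are illustrative, not from a real disk.
SAMPLE='  3 Spin_Up_Time            0x0007   115   096   024    Pre-fail  Always       -       189
194 Temperature_Celsius     0x0022   253   253   000    Old_age   Always       -       29 (Min/Max 23/33)'

# Column 4 is VALUE, column 6 is THRESH; "+ 0" forces numeric comparison.
FAILED=$(printf '%s\n' "$SAMPLE" | awk '$4 + 0 <= $6 + 0 { print $2 }')
echo "failed attributes: ${FAILED:-none}"
```

Neither sample Attribute has reached its threshold, so the script reports none as failed.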

The names/meanings of Attributes and the interpretation of their raw values are not specified by any standard. Different manufacturers sometimes use the same Attribute ID for different purposes. For this reason, the interpretation of specific Attributes can be modified using the -v option to smartctl; please see the man page for details. For example, some disks use Attribute 9 to store the power-on time of the disk in minutes; the -v 9,minutes option to smartctl correctly modifies the Attribute's interpretation. If your disk model is in the smartmontools database, these -v options are set automatically.
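
As a concrete invocation of the option just described (run as root; the device path is an example):

```shell
# Tell smartctl that this disk's Attribute 9 counts power-on time in
# minutes rather than hours; all other Attributes are read as usual.
smartctl -v 9,minutes -a /dev/hda
```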

The next part of the smartctl -a output (Listing 4) is a log of the disk errors. This particular disk has been error-free, and the log is empty. Typically, one should worry only if disk errors start to appear in large numbers. An occasional transient error that does not recur usually is benign. The smartmontools Web page has a number of examples of smartctl -a output showing some illustrative error log entries. They are timestamped with the disk’s power-on lifetime in hours when the error occurred, and the individual ATA commands leading up to the error are timestamped with the time in milliseconds after the disk was powered on. This shows whether the errors are recent or old.

The final part of the smartctl output (Listing 5) is a report of the self-tests run on the disk. These show two types of self-tests, short and long. (ATA-6/7 disks also may have conveyance and selective self-tests.) These can be run with the commands smartctl -t short /dev/hda and smartctl -t long /dev/hda and do not corrupt data on the disk. Typically, short tests take only a minute or two to complete, and long tests take about an hour. These self-tests do not interfere with the normal functioning of the disk, so the commands may be used on mounted disks on a running system. On our computing cluster nodes, a long self-test is run from a cron job early every Sunday morning.

The entries in Listing 5 are all self-tests that completed without errors; the LifeTime column shows the power-on age of the disk when the self-test was run. If a self-test finds an error, the Logical Block Address (LBA) shows where the error occurred on the disk. The Remaining column shows the percentage of the self-test remaining when the error was found. If you suspect that something is wrong with a disk, I strongly recommend running a long self-test to look for problems.
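
A weekly long self-test like the one described above can be scheduled from cron; the file name, schedule, binary path, and device below are examples to adapt for your own system:

```shell
# /etc/cron.d/smart-longtest -- start a long self-test early every
# Sunday morning. The test runs in the background on the disk itself,
# so normal use of the system is not interrupted.
30 3 * * 0 root /usr/sbin/smartctl -t long /dev/hda >/dev/null
```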