
UNIX Disk Usage
Date: Jan 17, 2003
Sample Chapter is provided courtesy of Sams.
One of the most common problems that system administrators face in the Unix world is disk space. Whether it's running out of space, or just making sure that no one user is hogging all your resources, exploring how the hard disks are allocated and utilized on your system is a critical skill.
In this hour, you learn:
How to look at disk usage with df and du
How to simplify analysis with sort
How to identify the biggest files and use diskhogs
Physical Disks and Partitions
In the last few years, hard disks have become considerably bigger than most operating systems can comfortably manage. Indeed, most file systems have a minimum size for files and a maximum number of files and/or directories that can be on a single physical device, and it's those constraints that slam up against the larger devices.
As a result, most modern operating systems support taking a single physical disk and splitting it into multiple virtual disks, or partitions. Windows and Macintosh systems have supported this for a few years, but usually on a personal desktop system you don't have to worry about disks that are too big, or worse, running out of disk space and having the system crash.
Unix is another beast entirely. In the world of Unix, you can have hundreds of different virtual disks and not even know it; even your home directory might be spread across two or three partitions.
One reason for this strategy in Unix is that running programs tend to leave log files, temp files, and other detritus behind, and they can add up and eat a disk alive.
For example, on my main Web server, I have a log file that's currently growing about 140K/day and is 19MB. Doesn't sound too large when you think about 50GB disks for $100 at the local electronics store, but having big disks at the store doesn't mean that they're installed in your server!
In fact, Unix is very poorly behaved when it runs out of disk space, and can become sufficiently corrupted that it essentially stops and requires an expert sysadmin to resurrect it. To avoid this horrible fate, it's crucial to keep an eye on how full your partitions are getting, and to know how to prune large files before they become a serious problem.
Task 3.1: Exploring Partitions
Enough chatter, let's get down to business, shall we?
-
The command we'll be exploring in this section is df, a command that reports disk space usage. Without any arguments at all, it offers lots of useful information:
# df
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/sda5               380791    108116    253015  30% /
/dev/sda1                49558      7797     39202  17% /boot
/dev/sda3             16033712     62616  15156608   1% /home
none                    256436         0    256436   0% /dev/shm
/dev/sdb1             17245524   1290460  15079036   8% /usr
/dev/sdb2               253871     88384    152380  37% /var
Upon first glance, it appears that I have five different disks connected to this system. In fact, I have two.
-
I'm sure you already know this, but it's worth pointing out that all devices hooked up to a computer, whether for input or output, require a specialized piece of code called a device driver to work properly. In the Windows world, they're typically hidden away, and you have no idea what they're even called.
Device drivers in Unix, however, are files. They're special files, but they show up as part of the file system along with your e-mail archive and login scripts.
That's what the /dev/sda5 is on the first line, for example. We can have a look at this file with ls to see what it is:
# ls -l /dev/sda5
brw-rw----    1 root     disk       8,   5 Aug 30 13:30 /dev/sda5
The leading b is something you probably haven't seen before. It denotes that this device is a block-special device.
Here's a nice thing to know: Device names in Unix have meaning. The sd prefix typically denotes a SCSI disk, the next letter identifies the physical drive (in this case a, the first drive), and the trailing digit identifies the partition on that drive (5). The 8, 5 shown in the ls listing are the actual major and minor device numbers.
From this information, we can glean that three of the partitions (sda1, sda3, and sda5) live on one drive, and two (sdb1 and sdb2) live on a second drive.
In other words, the first three are partitions on the same hard disk, and the last two are partitions on a different disk.
TIP
If you ever have problems with a device, use ls -l to make sure it's configured properly. If the listing doesn't begin with a c (for a character special device) or a b (for a block-special device), something's gone wrong and you need to delete it and rebuild it with mknod.
-
How big is the disk? Well, in some sense it doesn't really matter in the world of Unix, because Unix only cares about the partitions that are assigned to it. If the second disk is 75GB, but we only have a 50MB partition that's available to Unix, the vast majority of the disk is untouchable and therefore doesn't matter.
If you really want to figure it out, you could add up the size of each partition (the 1k-blocks column), but let's dissect a single line of output first, so you can see what's what:
/dev/sda5 380791 108116 253015 30% /
Here you're shown the device ID (sda5), then the size of the partition (in 1K blocks within Linux). This partition is 380,791KB, or about 380MB. The second number shows how much of the partition is used (108,116KB), and the next how much is available (253,015KB). This translates to 30% of the partition in use and 70% available.
The last value is perhaps the most important because it indicates where the partition has been connected to the Unix file system. Partition sda5 is the root partition, as can be seen by the /.
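By the way, if you do want to total up the partitions on a particular drive, a little awk arithmetic saves you the manual addition. Here's a minimal sketch, assuming the Linux df layout shown above (the second column is the size in 1K blocks), that sums the two sdb partitions:
# df | grep '/dev/sdb' | awk '{ sum += $2 } END { print sum }'
17499395
That works out to roughly 16.7GB for the second disk.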
NOTE
Purists will notice an error in the 380MB figure above: converting 380,791KB to megabytes means dividing by 1,024, not 1,000. To keep everyone happy, doing it properly reveals that this partition is actually 371.8MB.
-
Let's look at another line from the df output:
/dev/sda3 16033712 62616 15156608 1% /home
Notice here that the partition is considerably bigger! In fact, it's 16,033,712KB, or roughly 16GB (15.3GB for purists). Unsurprisingly, very little of this is used (less than 1%), and it's mounted on the system as the /home directory.
In fact, look at the mount points for all the partitions for just a moment:
# df
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/sda5               380791    108116    253015  30% /
/dev/sda1                49558      7797     39202  17% /boot
/dev/sda3             16033712     62616  15156608   1% /home
none                    256436         0    256436   0% /dev/shm
/dev/sdb1             17245524   1290460  15079036   8% /usr
/dev/sdb2               253871     88389    152375  37% /var
We have the topmost root partition (sda5); then we have additional small partitions for /boot and /var. The two really big spaces are /home, where all the individual user files will live, and /usr, where I have all the Web sites on this server stored.
This is a very common configuration, where each area of Unix has its own sandbox to play in, as it were. This lets you, the sysadmin, manage file usage quite easily, ensuring that running out of space in one directory (say, /home) doesn't affect the overall system.
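Incidentally, df also accepts a directory as an argument and reports only on the partition that holds it, which is a quick way to answer "which disk is this directory on, and how full is it?" A small sketch, using the same system shown above:
# df /home
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/sda3             16033712     62616  15156608   1% /home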
-
Solaris 8 has a df command that offers very different information, focused more on files and the file system than on disks and disk space used:
# df
/                  (/dev/dsk/c0d0s0   ):   827600 blocks   276355 files
/boot              (/dev/dsk/c0d0p0:boot):    17584 blocks       -1 files
/proc              (/proc             ):        0 blocks     1888 files
/dev/fd            (fd                ):        0 blocks        0 files
/etc/mnttab        (mnttab            ):        0 blocks        0 files
/var/run           (swap              ):  1179992 blocks    21263 files
/tmp               (swap              ):  1179992 blocks    21263 files
/export/home       (/dev/dsk/c0d0s7   ):  4590890 blocks   387772 files
It's harder to see what's going on, but notice that the order of information presented on each line is the mount point, the device identifier, the number of free disk blocks, and the number of free files on that device.
There's no way to see how big each file system is or what percentage of it is in use, so the default df output isn't very helpful for a system administrator.
Fortunately, there's the -t totals option that offers considerably more helpful information:
# df -t
/                  (/dev/dsk/c0d0s0   ):   827600 blocks   276355 files
                                  total:  2539116 blocks   320128 files
/boot              (/dev/dsk/c0d0p0:boot):    17584 blocks       -1 files
                                  total:    20969 blocks       -1 files
/proc              (/proc             ):        0 blocks     1888 files
                                  total:        0 blocks     1932 files
/dev/fd            (fd                ):        0 blocks        0 files
                                  total:        0 blocks      258 files
/etc/mnttab        (mnttab            ):        0 blocks        0 files
                                  total:        0 blocks        1 files
/var/run           (swap              ):  1180000 blocks    21263 files
                                  total:  1180008 blocks    21279 files
/tmp               (swap              ):  1180000 blocks    21263 files
                                  total:  1180024 blocks    21279 files
/export/home       (/dev/dsk/c0d0s7   ):  4590890 blocks   387772 files
                                  total:  4590908 blocks   387776 files
Indeed, when I've administered Solaris systems, I've usually set up an alias df="df -t" to always have this more informative output.
NOTE
If you're trying to analyze the df output programmatically so you can flag when disks start to get tight, you'll immediately notice that there's no percentage-used summary in the df output in Solaris. Extracting just the relevant fields of information is quite tricky too, because you want to glean the number of free blocks from one line, then the total number of blocks from the next. It's a job for Perl or awk (or even a small C program).
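To give you a flavor of what that job looks like, here's a minimal sketch that pairs each mount line with its total: line and prints a percentage-used figure. It assumes the df -t layout shown above, and uses nawk, the more capable awk that ships with Solaris:
df -t | nawk '
  $1 == "total:" {
      # second line of each pair: total blocks for this file system
      if ($2 > 0)
          printf "%-16s %3.0f%% used\n", mount, 100 * ($2 - free) / $2
      next
  }
  {
      # first line of each pair: mount point, then free blocks (the field just before "blocks")
      mount = $1
      for (i = 1; i <= NF; i++)
          if ($i == "blocks") { free = $(i - 1); break }
  }'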
-
By way of contrast, Darwin has a very different output for the df command:
# df
Filesystem               512-blocks     Used    Avail Capacity  Mounted on
/dev/disk1s9               78157200 29955056 48202144    38%    /
devfs                            73       73        0   100%    /dev
fdesc                             2        2        0   100%    /dev
<volfs>                        1024     1024        0   100%    /.vol
/dev/disk0s8               53458608 25971048 27487560    48%    /Volumes/Macintosh HD
automount -fstab [244]            0        0        0   100%    /Network/Servers
automount -static [244]           0        0        0   100%    /automount
About as different as it could be, and notice that it suggests that just about everything is at 100% capacity. Uh oh!
A closer look, however, reveals that the devices at 100% capacity are devfs, fdesc, <volfs>, and two automounted services. In fact, they're related to the Mac OS running within Darwin, and really the only lines of interest in this output are the two proper /dev/ devices:
/dev/disk1s9               78157200 29955056 48202144    38%    /
/dev/disk0s8               53458608 25971048 27487560    48%    /Volumes/Macintosh HD
The first of these, identified as /dev/disk1s9, is the hard disk where Mac OS X is installed, and it has 78,157,200 blocks. However, they're not 1K blocks as in Linux, they're 512-byte blocks, so you need to factor that in when you calculate the size in GB:
78,157,200 ÷ 2 = 39,078,600 1K blocks
39,078,600 ÷ 1024 = 38,162.69MB
38,162.69MB ÷ 1024 = 37.26GB
In fact, this is a 40GB disk, so we're right on with our calculations, and we can see that 38% of the disk is in use, leaving us with 48202144 ÷ (2 x 1024 x 1024) = 22.9GB.
Using the same math, you can calculate that the second disk is 25GB, of which about half (48%) is in use.
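If you'd rather not do the 512-byte-block arithmetic by hand, bc will happily do it for you. A quick sketch, reproducing the calculation above:
# echo 'scale=2; 78157200 / 2 / 1024 / 1024' | bc
37.26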
TIP
Wondering what happened to the 2.74GB of space that is the difference between the manufacturer's claim of a 40GB disk and the reality of my only having 37.26GB? The answer is that there's always a small percentage of disk space consumed by formatting and disk overhead. That's why manufacturers talk about "unformatted capacity."
-
Linux has a very nice flag with the df command worth mentioning: Use -h and you get:
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5             372M  106M  247M  30% /
/dev/sda1              48M  7.7M   38M  17% /boot
/dev/sda3              15G   62M   14G   1% /home
none                  250M     0  250M   0% /dev/shm
/dev/sdb1              16G  1.3G   14G   8% /usr
/dev/sdb2             248M   87M  148M  37% /var
A much more human-readable format. Here you can see that /home and /usr both have 14GB unused. Lots of space!
This section has given you a taste of the df command, but we haven't spent too much time analyzing the output and digging around trying to ascertain where the biggest files live. That's what we'll consider next.
A Closer Look with du
The df command is one you'll use often as you get into the groove of system administration work. In fact, some sysadmins have df e-mailed to them every morning from cron so they can keep a close eye on things. Others have it as a command in their .login or .profile configuration file so they see the output every time they connect.
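If you'd like to try the cron approach yourself, a root crontab entry along these lines would do it. This is just a sketch: the 6:05 a.m. schedule is arbitrary, and -h is the Linux flag, so substitute df -k on other systems:
# mail a disk space report to root at 6:05 every morning
5 6 * * * df -h | mail -s "Daily disk space report" root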
Once you're familiar with how the disks are being utilized in your Unix system, however, it's time to dig a bit deeper into the system and ascertain where the space is going.
Task 3.2: Using du to Ascertain Directory Sizes
The du command shows you disk usage, helpfully enough, and it has a variety of flags that are critical to using this tool effectively.
-
There won't be a quiz on this, but see if you can figure out what the default output of du is here when I use the command while in my home directory:
# du
12      ./.kde/Autostart
16      ./.kde
412     ./bin
36      ./CraigsList
32      ./DEMO/Src
196     ./DEMO
48      ./elance
16      ./Exchange
1232    ./Gator/Lists
4       ./Gator/Old-Stuff/Adverts
8       ./Gator/Old-Stuff
1848    ./Gator/Snapshots
3092    ./Gator
160     ./IBM/i
136     ./IBM/images
10464   ./IBM
76      ./CBO_MAIL
52      ./Lynx/WWW/Library/vms
2792    ./Lynx/WWW/Library/Implementation
24      ./Lynx/WWW/Library/djgpp
2872    ./Lynx/WWW/Library
2880    ./Lynx/WWW
556     ./Lynx/docs
184     ./Lynx/intl
16      ./Lynx/lib
140     ./Lynx/lynx_help/keystrokes
360     ./Lynx/lynx_help
196     ./Lynx/po
88      ./Lynx/samples
20      ./Lynx/scripts
1112    ./Lynx/src/chrtrans
6848    ./Lynx/src
192     ./Lynx/test
13984   ./Lynx
28484   .
If you guessed that it's the size of each directory, you're right! Notice that the sizes are cumulative because they sum up the size of all files and directories within a given directory. So the Lynx directory is 13,984 somethings, which includes the subdirectory Lynx/src (6,848), which itself contains Lynx/src/chrtrans (1112).
The last line is a summary of the entire current directory (.), which has a combined size of 28484.
And what is that pesky unit of measure? Unfortunately, it varies between Unix implementations, so I always check the man page before answering this question. Within Red Hat Linux 7.2, the du man page frustratingly never states the unit of measure explicitly. However, it does show a -k flag that forces the output into 1KB blocks, so a quick check
# du -k | tail -1
28484   .
produces the same number as the preceding, so we can safely conclude that the unit in question is a 1KB block. Therefore, you can see that Lynx takes up 13.6MB of space, and that the entire contents of my home directory consume 27.8MB. A tiny fraction of the 15GB /home partition!
NOTE
Of course, I can recall when I splurged and bought myself a 20MB external hard disk for an early computer. I couldn't imagine that I could even fill it, and it cost more than $200 too! But I'll try not to bore you with the reminiscence of an old-timer, okay?
-
The recursive listing of subdirectories is useful information, but the higher up you go in the file system, the less helpful that information proves to be. Imagine if you were to type du / and wade through the output:
# du / | wc -l
6077
That's a lot of output!
Fortunately, one of the most useful flags to du is -s, which summarizes disk usage by only reporting the files and directories that are specified, or . if none are specified:
# du -s
28484   .
# du -s *
4       badjoke
4       badjoke.rot13
412     bin
4       browse.sh
4       buckaroo
76      CBO_MAIL
36      CraigsList
196     DEMO
48      elance
84      etcpasswd
16      Exchange
3092    Gator
4       getmodemdriver.sh
4       getstocks.sh
4       gettermsheet.sh
0       gif.gif
10464   IBM
13984   Lynx
Note in the latter case that because I used the * wildcard, it matched directories and files in my home directory. When given the name of a file, du dutifully reports the size of that file in 1KB blocks. You can force this behavior with the -a flag if you want.
TIP
The summary vanishes from the bottom of the du output when I specify directories as parameters, and that's too bad, because it's very helpful. To request a summary at the end, simply specify the -c flag.
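For instance, here's a quick sketch using two of the directories from the earlier listing; note the grand total tacked onto the end (76 + 36 = 112):
# du -sc CBO_MAIL CraigsList
76      CBO_MAIL
36      CraigsList
112     total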
-
While we're looking at the allocation of disk space, don't forget to check the root level, too. The results are interesting:
# du -s /
1471202 /
Oops! We don't want just a one-line summary, but rather all the directories contained at the topmost level of the file system. Oh, and do make sure that you're running these as root, or you'll see all sorts of odd errors. Indeed, even as root the /proc file system will sporadically generate errors as du tries to calculate the size of a fleeting process table entry or similar. You can ignore errors in /proc in any case.
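One other trick worth knowing: if the error chatter bothers you, you can discard the error stream entirely. Keep in mind that this hides all errors, not just the /proc complaints:
# du -s /* 2>/dev/null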
One more try:
# du -s /*
5529    /bin
3683    /boot
244     /dev
4384    /etc
29808   /home
1       /initrd
67107   /lib
12      /lost+found
1       /misc
2       /mnt
1       /opt
1       /proc
1468    /root
8514    /sbin
12619   /tmp
1257652 /usr
80175   /var
0       /web
That's what I seek. Here you can see that the largest directory by a significant margin is /usr, weighing in at 1,257,652KB.
Rather than calculate sizes, I'm going to use another du flag (-h) to ask for human-readable output:
# du -sh /*
5.4M    /bin
3.6M    /boot
244k    /dev
4.3M    /etc
30M     /home
1.0k    /initrd
66M     /lib
12k     /lost+found
1.0k    /misc
2.0k    /mnt
1.0k    /opt
1.0k    /proc
1.5M    /root
8.4M    /sbin
13M     /tmp
1.2G    /usr
79M     /var
0       /web
Much easier. Now you can see that /usr is 1.2GB in size, which is quite a lot!
-
Let's use du to dig into the /usr directory and see what's so amazingly big, shall we?
# du -sh /usr/*
121M    /usr/bin
4.0k    /usr/dict
4.0k    /usr/etc
40k     /usr/games
30M     /usr/include
3.6M    /usr/kerberos
427M    /usr/lib
2.7M    /usr/libexec
224k    /usr/local
16k     /usr/lost+found
13M     /usr/sbin
531M    /usr/share
52k     /usr/src
0       /usr/tmp
4.0k    /usr/web
103M    /usr/X11R6
It looks to me like /usr/share is responsible for more than half the disk space consumed in /usr, with /usr/bin and /usr/X11R6 the next largest directories.
You can easily step into /usr/share and run du again to see what's inside, but before we do, it will prove quite useful to take a short break and talk about sort and how it can make the analysis of du output considerably easier.
-
Before we leave this section to talk about sort, though, let's have a quick peek at du within the Darwin environment:
# du -sk *
5888    Desktop
396760  Documents
84688   Library
0       Movies
0       Music
31648   Pictures
0       Public
32      Sites
Notice that I've specified the -k flag here to force 1KB blocks (similar to df, the default for du is 512-byte blocks). Otherwise, it's identical to Linux.
The du output on Solaris is reported in 512-byte blocks unless, like Darwin, you force 1KB blocks with the -k flag:
# du -sk *
1       bin
1689    boot
4       cdrom
372     dev
13      devices
2363    etc
10      export
0       home
8242    kernel
1       lib
8       lost+found
1       mnt
0       net
155306  opt
1771    platform
245587  proc
5777    sbin
32      tmp
25      TT_DB
3206    users
667265  usr
9268    var
0       vol
9       xfn
This section has demonstrated the helpful du command, showing how -a, -s, and -h can be combined to produce a variety of different output. You've also seen how successive du commands can help you zero in on disk space hogs, foreshadowing the diskhogs shell script we'll be developing later in this hour.
Simplifying Analysis with sort
The output of du has been very informative, but it's difficult to scan a listing to ascertain the four or five largest directories, particularly as more and more directories and files are included in the output. The good news is that the Unix sort utility is just the tool we need to sidestep this problem.
Task 3.3: Piping Output to sort
Why should we have to go through all the work of eyeballing page after page of listings when there are Unix tools to easily let us ascertain the biggest and smallest? One of the great analysis tools in Unix is sort, even though you rarely see it mentioned in other Unix system administration books.
-
At its most obvious, sort alphabetizes output:
# cat names
Linda
Ashley
Gareth
Jasmine
Karma
# sort names
Ashley
Gareth
Jasmine
Karma
Linda
No rocket science about that! However, what happens if the output of du is fed to sort?
# du -s * | sort
0       gif.gif
10464   IBM
13984   Lynx
16      Exchange
196     DEMO
3092    Gator
36      CraigsList
412     bin
48      elance
4       badjoke
4       badjoke.rot13
4       browse.sh
4       buckaroo
4       getmodemdriver.sh
4       getstocks.sh
4       gettermsheet.sh
76      CBO_MAIL
84      etcpasswd
Sure enough, it's sorted. But probably not as you expected: it's sorted by the ASCII characters of the leading digits, not by numeric value! Not good.
-
That's where the -n flag is a vital addition: With -n specified, sort will assume that the lines contain numeric information and sort them numerically:
# du -s * | sort -n
0       gif.gif
4       badjoke
4       badjoke.rot13
4       browse.sh
4       buckaroo
4       getmodemdriver.sh
4       getstocks.sh
4       gettermsheet.sh
16      Exchange
36      CraigsList
48      elance
76      CBO_MAIL
84      etcpasswd
196     DEMO
412     bin
3092    Gator
10464   IBM
13984   Lynx
A much more useful result, if I say so myself!
-
The only thing I'd like to change in the sorting here is that I'd like to have the largest directory listed first, and the smallest listed last.
The order of a sort can be reversed with the -r flag, and that's the magic needed:
# du -s * | sort -nr
13984   Lynx
10464   IBM
3092    Gator
412     bin
196     DEMO
84      etcpasswd
76      CBO_MAIL
48      elance
36      CraigsList
16      Exchange
4       gettermsheet.sh
4       getstocks.sh
4       getmodemdriver.sh
4       buckaroo
4       browse.sh
4       badjoke.rot13
4       badjoke
0       gif.gif
One final concept and we're ready to move along. If you want to see only the five largest files or directories in a specific directory, all you need to do is pipe the command sequence to head:
# du -s * | sort -nr | head -5
13984   Lynx
10464   IBM
3092    Gator
412     bin
196     DEMO
This sequence of sort|head will prove very useful later in this hour.
A key concept with Unix is understanding how the commands are all essentially Lego pieces, and that you can combine them in any number of ways to get exactly the results you seek. In this vein, sort -rn is a terrific piece, and you'll find yourself using it again and again as you learn more about system administration.
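As one small example of snapping pieces together, the same sort -rn trick can rank the df output by how full each partition is. This is a sketch that assumes the Linux df layout shown earlier, where Use% is the fifth column:
# df | sort -rn -k5 | head -3
/dev/sdb2               253871     88384    152380  37% /var
/dev/sda5               380791    108116    253015  30% /
/dev/sda1                49558      7797     39202  17% /boot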
Identifying the Biggest Files
We've explored the du command, sprinkled in a wee bit of sort for zest, and now it's time to accomplish a typical sysadmin task: Find the biggest files and directories in a given area of the system.
Task 3.4: Finding Big Files
The du command offers the capability to either find the largest directories, or the combination of the largest files and directories, but it doesn't offer a way to examine just files. Let's see what we can do to solve this problem.
-
First off, it should be clear that the following command will produce a list of the five largest directories in my home directory:
# du | sort -rn | head -5
28484   .
13984   ./Lynx
10464   ./IBM
6848    ./Lynx/src
3092    ./Gator
In a similar manner, the five largest directories in /usr/share and in the overall file system (ignoring the likely /proc errors):
# du /usr/share | sort -rn | head -5
543584  /usr/share
200812  /usr/share/doc
53024   /usr/share/gnome
48028   /usr/share/gnome/help
31024   /usr/share/apps
# du / | sort -rn | head -5
1471213 /
1257652 /usr
543584  /usr/share
436648  /usr/lib
200812  /usr/share/doc
All well and good, but how do you find and test just the files?
-
The easiest solution is to use the find command. find will be covered in greater detail later in the book, but for now, just remember that find lets you quickly search through the entire file system, and performs the action you specify on all files that match your selection criteria.
For this task, we want to isolate our choices to all regular files, which will omit directories, device drivers, and other unusual file system entries. That's done with -type f.
In addition, we're going to use the -printf option of find to produce exactly the output that we want from the matched files. In this instance, we'd like the file size in kilobytes, and the fully qualified filename. That's surprisingly easy to accomplish with a printf format string of %k %p.
Put all these together and you end up with the command
find . -type f -printf "%k %p\n"
The two additions here are the ., which tells find to start its search in the current directory, and the \n sequence in the format string, which is translated into a newline after each entry.
TIP
Don't worry too much if this all seems like Greek to you right now. Hour 12, "Managing Disk Quotas," will talk about the many wonderful features of find. For now, just type in what you see here in the book.
-
Let's see it in action:
# find . -type f -printf "%k %p\n" | head
4 ./.kde/Autostart/Autorun.desktop
4 ./.kde/Autostart/.directory
4 ./.emacs
4 ./.bash_logout
4 ./.bash_profile
4 ./.bashrc
4 ./.gtkrc
4 ./.screenrc
4 ./.bash_history
4 ./badjoke
You can see where the sort command is going to prove helpful! In fact, let's slip a sort -rn in before head to identify the ten largest files in the current directory and below:
# find . -type f -printf "%k %p\n" | sort -rn | head
8488 ./IBM/j2sdk-1_3_0_02-solx86.tar
1812 ./Gator/Snapshots/MAILOUT.tar.Z
1208 ./IBM/fop.jar
1076 ./Lynx/src/lynx
1076 ./Lynx/lynx
628 ./Gator/Lists/Inactive-NonAOL-list.txt
496 ./Lynx/WWW/Library/Implementation/libwww.a
480 ./Gator/Lists/Active-NonAOL-list.txt
380 ./Lynx/src/GridText.c
372 ./Lynx/configure
Very interesting information to be able to ascertain, and it'll even work across the entire file system (though it might take a few minutes, and, as usual, you might see some /proc hiccups):
# find / -type f -printf "%k %p\n" | sort -rn | head
26700 /usr/lib/libc.a
19240 /var/log/cron
14233 /var/lib/rpm/Packages
13496 /usr/lib/netscape/netscape-communicator
12611 /tmp/partypages.tar
9124 /usr/lib/librpmdb.a
8488 /home/taylor/IBM/j2sdk-1_3_0_02-solx86.tar
5660 /lib/i686/libc-2.2.4.so
5608 /usr/lib/qt-2.3.1/lib/libqt-mt.so.2.3.1
5588 /usr/lib/qt-2.3.1/lib/libqt.so.2.3.1
Recall that the output is in 1KB blocks, so libc.a is pretty huge at more than 26MB!
-
You might find that your version of find doesn't include the snazzy new GNU find -printf flag (neither Solaris nor Darwin does, for example). If that's the case, you can at least fake it in Darwin with the somewhat more convoluted
# find . -type f -print0 | xargs -0 ls -s | sort -rn | head
781112 ./Documents/Microsoft User Data/Office X Identities/Main Identity/Database
27712 ./Library/Preferences/Explorer/Download Cache
20824 ./.Trash/palmdesktop40maceng.sit
20568 ./Library/Preferences/America Online/Browser Cache/IE Cache.waf
20504 ./Library/Caches/MS Internet Cache/IE Cache.waf
20496 ./Library/Preferences/America Online/Browser Cache/IE Control Cache.waf
20496 ./Library/Caches/MS Internet Cache/IE Control Cache.waf
20488 ./Library/Preferences/America Online/Browser Cache/cache.waf
20488 ./Library/Caches/MS Internet Cache/cache.waf
18952 ./.Trash/Palm Desktop Installer/Contents/MacOSClassic/Installer
Here we not only have to print the filenames and feed them to the xargs command, we also have to compensate for the fact that many of the filenames contain spaces, which would break the normal whitespace-delimited pipe. Instead, find has a -print0 option that terminates each filename with a null character, and the -0 flag tells xargs that it's receiving null-terminated filenames.
CAUTION
Actually, Darwin doesn't really like this kind of command at all. If you want to ascertain the largest files, you'd be better served to explore the -ls option to find and then an awk to chop out the file size:
find /home -type f -ls | awk '{ print $7" "$11 }'
Of course, this is a slower alternative that'll work on any Unix system, if you really want.
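Putting the CAUTION's pieces together with sort and head, a portable (if slow) sketch of the whole pipeline might look like the following. Keep in mind that find -ls reports sizes in bytes rather than 1KB blocks, and that filenames containing spaces will still trip it up:
find /home -type f -ls | awk '{ print $7" "$11 }' | sort -rn | head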
-
To calculate the sizes of all files on a Solaris system, you can't use -printf or -print0, but if you set aside the concern about filenames with embedded spaces (considerably less likely in a more traditional Unix environment like Solaris anyway), you'll find that the following works fine:
# find / -type f -print | xargs ls -s | sort -rn | head
55528 /proc/929/as
26896 /proc/809/as
26832 /usr/j2se/jre/lib/rt.jar
21888 /usr/dt/appconfig/netscape/.netscape.bin
21488 /usr/java1.2/jre/lib/rt.jar
20736 /usr/openwin/lib/locale/zh_TW.BIG5/X11/fonts/TT/ming.ttf
18064 /usr/java1.1/lib/classes.zip
16880 /usr/sadm/lib/wbem/store
16112 /opt/answerbooks/english/solaris_8/SUNWaman/books/REFMAN3B/index/index.dat
15832 /proc/256/as
Actually, you can see that the memory allocation space for a couple of running processes has snuck into the listing (the /proc directory). We'll need to screen those out with a simple grep -v:
# find / -type f -print | xargs ls -s | sort -rn | grep -v '/proc' | head
26832 /usr/j2se/jre/lib/rt.jar
21888 /usr/dt/appconfig/netscape/.netscape.bin
21488 /usr/java1.2/jre/lib/rt.jar
20736 /usr/openwin/lib/locale/zh_TW.BIG5/X11/fonts/TT/ming.ttf
18064 /usr/java1.1/lib/classes.zip
16880 /usr/sadm/lib/wbem/store
16112 /opt/answerbooks/english/solaris_8/SUNWaman/books/REFMAN3B/index/index.dat
12496 /usr/openwin/lib/llib-lX11.ln
12160 /opt/answerbooks/english/solaris_8/SUNWaman/books/REFMAN3B/ebt/REFMAN3B.edr
9888 /usr/j2se/src.jar
The find command is somewhat like a Swiss army knife. It can do hundreds of different tasks in the world of Unix. For our use here, however, it's perfect for analyzing disk usage on a per-file basis.
Keeping Track of Users: diskhogs
Let's put all the information in this hour together and create an administrative script called diskhogs. When run, this script will report the users with the largest /home directories, and then report the five largest files in each of their homes.
Task 3.5: This Little Piggy Stayed Home?
This is the first shell script presented in the book, so a quick rule of thumb: Write your shell scripts in sh rather than csh. It's easier, more universally recognized, and most shell scripts you'll encounter are also written in sh. Also, keep in mind that just about every shell script discussed in this book will expect you to be running as root, since they'll need access to the entire file system for any meaningful or useful system administration functions.
In this book, all shell scripts will be written in sh, which is easily verified by the fact that they all have
#!/bin/sh
as their first line.
-
Let's put all this together. To find the five largest home directories, you can use
du -s /home/* | sort -rn | cut -f2 | head -5
For each directory, you can find the largest files within by using
find /home/loginID -type f -printf "%k %p\n" | sort -rn | head
Therefore, we should be able to identify the top home directories, then step one-by-one into those directories to identify the largest files in each. Here's how that code should look:
for dirname in `du -s /home/* | sort -rn | cut -f2- | head -5`
do
    echo ""
    echo Big directory: $dirname
    echo Four largest files in that directory are:
    find $dirname -type f -printf "%k %p\n" | sort -rn | head -4
done
exit 0
-
This is a good first stab at this shell script. Let's save it as diskhogs.sh, run it and see what we find:
# sh diskhogs.sh

Big directory: /home/staging
Four largest files in that directory are:
423 /home/staging/waldorf/big/DSCF0165.jpg
410 /home/staging/waldorf/big/DSCF0176.jpg
402 /home/staging/waldorf/big/DSCF0166.jpg
395 /home/staging/waldorf/big/DSCF0161.jpg

Big directory: /home/chatter
Four largest files in that directory are:
1076 /home/chatter/comics/lynx
388 /home/chatter/logs/access_log
90 /home/chatter/logs/error_log
64 /home/chatter/responding.cgi

Big directory: /home/cbo
Four largest files in that directory are:
568 /home/cbo/financing.pdf
464 /home/cbo/investors/CBO-plan.pdf
179 /home/cbo/Archive/cbofinancial-modified-files/CBO Website.zip
77 /home/cbo/Archive/cbofinancial-modified-files/CBO Financial Incorporated.doc

Big directory: /home/sherlockworld
Four largest files in that directory are:
565 /home/sherlockworld/originals-from gutenberg.txt
56 /home/sherlockworld/speckled-band.html
56 /home/sherlockworld/copper-beeches.html
54 /home/sherlockworld/boscombe-valley.html

Big directory: /home/launchline
Four largest files in that directory are:
151 /home/launchline/logs/access_log
71 /home/launchline/x/submit.cgi
71 /home/launchline/x/admin/managesubs.cgi
64 /home/launchline/x/status.cgi
As you can see, the results are good, but the order of the output fields is perhaps less than we'd like. Ideally, I'd like to have all the disk hogs listed, then their largest files listed. To do this, we'll have to either store all the directory names in a variable that we then parse subsequently, or we'd have to write the information to a temporary file.
Because it shouldn't be too much information (five directory names), we'll save the directory names as a variable. To do this, we'll use the nifty backquote notation.
Here's how things will change. First off, let's load the directory names into the new variable:
bigdirs="´du s /home/* | sort rn | cut f2- | head 5´"
Then we'll need to change the for loop to reflect this change, which is easy:
for dirname in $bigdirs ; do
Notice I've also pulled the do line up to shorten the script. Recall that a semicolon indicates the end of a command in a shell script, so we can then pull the next line up without any further ado.
TIP
Unix old-timers often refer to backquotes as backticks, so a wizened Unix admin might well say "stick the dee-ewe in backticks" at this juncture.
-
Now let's not forget to output the list of big directories before we list the big files per directory. In total, our script now looks like this:
echo "Disk Hogs Report for System ´hostname´" bigdirs="´du -s /home/* | sort -rn | cut -f2- | head -5´" echo "The Five biggest home directories are:" echo $bigdirs for dirname in $bigdirs ; do echo "" echo Big directory: $dirname echo Four largest files in that directory are: find $dirname -type f -printf "%k %p\n" | sort -rn | head -4 done exit 0
This is quite a bit closer to the finished product, as you can see from its output:
Disk Hogs Report for System staging.intuitive.com
The Five biggest home directories are:
/home/staging /home/chatter /home/cbo /home/sherlockworld /home/launchline

Big directory: /home/staging
Four largest files in that directory are:
423 /home/staging/waldorf/big/DSCF0165.jpg
410 /home/staging/waldorf/big/DSCF0176.jpg
402 /home/staging/waldorf/big/DSCF0166.jpg
395 /home/staging/waldorf/big/DSCF0161.jpg

Big directory: /home/chatter
Four largest files in that directory are:
1076 /home/chatter/comics/lynx
388 /home/chatter/logs/access_log
90 /home/chatter/logs/error_log
64 /home/chatter/responding.cgi

Big directory: /home/cbo
Four largest files in that directory are:
568 /home/cbo/financing.pdf
464 /home/cbo/investors/CBO-plan.pdf
179 /home/cbo/Archive/cbofinancial-modified-files/CBO Website.zip
77 /home/cbo/Archive/cbofinancial-modified-files/CBO Financial Incorporated.doc

Big directory: /home/sherlockworld
Four largest files in that directory are:
565 /home/sherlockworld/originals-from gutenberg.txt
56 /home/sherlockworld/speckled-band.html
56 /home/sherlockworld/copper-beeches.html
54 /home/sherlockworld/boscombe-valley.html

Big directory: /home/launchline
Four largest files in that directory are:
151 /home/launchline/logs/access_log
71 /home/launchline/x/submit.cgi
71 /home/launchline/x/admin/managesubs.cgi
64 /home/launchline/x/status.cgi
This is a script you could easily run every morning in the wee hours with a line in cron (which we'll explore in great detail in Hour 15, "Running Jobs in the Future"), or you can even put it in your .profile to run automatically each time you log in.
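For example, a root crontab entry along these lines would run the report at 4:15 every morning (the path to the script is hypothetical; use wherever you've saved your copy):
15 4 * * * sh /usr/local/bin/diskhogs.sh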
-
One final nuance: To have the output e-mailed to you, simply append the following:
| mail s "Disk Hogs Report" your-mailaddr
If you've named this script diskhogs.sh like I have, you could have the output e-mailed to you (as root) with
sh diskhogs.sh | mail -s "Disk Hogs Report" root
Try that, then check root's mailbox to see if the report made it.
-
For those of you using Solaris, Darwin, or another Unix, the nifty -printf option probably isn't available with your version of find. As a result, the more generic version of this script is rather more complex, because we not only have to sidestep the lack of -printf, we also have to address the challenge of embedded spaces in many directory names (on Darwin). To accomplish the latter, we use sed to change all spaces to underscores, then change them back when each directory name is handed to the find command, and awk to extract the size and name from the find -ls output:
#!/bin/sh echo "Disk Hogs Report for System ´hostname´" bigdir2="´du -s /Library/* | sed 's/ /_/g' | sort -rn | cut -f2- | head -5´" echo "The Five biggest library directories are:" echo $bigdir2 for dirname in $bigdir2 ; do echo "" echo Big directory: $dirname echo Four largest files in that directory are: find "´echo $dirname | sed 's/_/ /g'´" -type f -ls | \ awk '{ print $7" "$11 }' | sort -rn | head -4 done exit 0
The good news is that the output ends up being almost identical, which you can verify if you have an OS X or other BSD system available.
Of course, it would be smart to replace the native version of find with the more sophisticated GNU version, but changing essential system tools is more than most Unix users want!
TIP
If you want to explore upgrading some of the Unix tools in Darwin to take advantage of the sophisticated GNU enhancements, then you'd do well to start by looking at http://www.osxgnu.org/ for ported code. The site also includes download instructions.
If you're on Solaris or another flavor of Unix that isn't Mac OS X, check out the main GNU site for tool upgrades at http://www.gnu.org/.
This shell script evolved in a manner that's quite common for Unix tools: it started out life as a simple command line; then, as the sophistication of the tool increased, the command sequence became too tedious to type directly, so it was dropped into a shell script. Shell variables then offered the capability to save interim output, fine-tune the presentation, and more, so we exploited them to build a more powerful tool. Finally, the tool itself can be added to the system as an automated monitoring task via the root cron job.
Summary
This hour has not only shown you two of the basic Unix commands for analyzing disk usage and utilization, but it's also demonstrated the evolution and development of a useful administrative shell script, diskhogs.
This progression from command, to multistage command sequence, to shell script will be repeated again and again as you learn how to become a powerful system administrator.
Q&A
Why are some Unix systems built around 512-byte blocks, whereas others are built around 1024-byte blocks?
This is all because of the history and evolution of Unix systems. When Unix was first deployed, disks were small, and it was important to squeeze as many bytes out of the disk as possible. As a result, the file system was developed with a fundamental block size of 512 bytes (that is, the space allocated for files was always in 512-byte chunks). As disks became bigger, millions of 512-byte blocks began to prove more difficult to manage than their benefit of allowing more effective utilization of the disk. As a result, the block size doubled to 1KB and has remained there to this day. Some Unix systems have stayed with the 512-byte historical block size, whereas others are on the more modern 1KB block size.
Do all device names have meaning?
As much as possible, yes. Sometimes you can't help but end up with a /dev/fd13x4s3, but even then there's probably a logical explanation behind the naming convention.
If there's a flag to du that causes it to report results in 1KB blocks on a system that defaults to 512-byte blocks, why isn't there a flag on 1KB systems to report in 512-byte blocks?
Ah, you expect everything to make sense? Maybe you're in the wrong field after all....
Workshop
Quiz
Why do most Unix installations organize disks into lots of partitions, rather than a smaller number of huge physical devices?
When you add up the size of all the partitions on a large hard disk, there's always some missing space. Why?
If you see devices /dev/sdb3, /dev/sdb4, and /dev/sdc1, what's a likely guess about how many physical hard disks are referenced?
Both Solaris and Darwin offer the very helpful -k flag to the df command. What does it do, and why would it be useful?
Using the -s flag to ls, the -rn flags to sort, and the -5 flag to head, construct a command line that shows you the five largest files in your home directory.
What do you think would happen to our script if a very large file was accidentally left in the /home directory overnight?
Answers
By dividing a disk into multiple partitions, you have created a more robust system because one partition can fill without affecting the others.
The missing space is typically allocated for low-level disk format information. On a typical 10GB disk, perhaps as much as two to four percent of the disk space might not be available after the drive is formatted.
This probably represents two drives: /dev/sdb and /dev/sdc.
The -k flag makes a system that defaults to 512-byte blocks report file sizes in 1KB block sizes.
ls -s $HOME | sort -rn | head -5.
The script as written would flag the very large file as one of the largest home directories, then would fail when it tried to analyze the files within. It's an excellent example of the need for lots of error condition code and some creative thought while programming.
The next hour will continue to build the foundations of sysadmin knowledge with the oft-convoluted file ownership model. This will include digging into both the passwd and groups files and learning how to safely change them to create a variety of different permission scenarios.