
Windows PowerShell Unleashed: An Introduction to Shells

This chapter looks at what a shell is and demonstrates the power that can be harnessed by interacting with one, walking through some basic shell commands and then building a shell script from them.

Shells are a necessity for using nearly every operating system, because they make it possible to perform arbitrary actions such as traversing the file system, running commands, and launching applications. As such, every computer user has interacted with a shell, whether by typing commands at a prompt or by clicking an icon to start an application. Shells are an ever-present component of modern computing, frequently providing functionality that is not available anywhere else on a computer system.

In this chapter, you take a look at what a shell is and see the power that can be harnessed by interacting with a shell. To do this, you walk through some basic shell commands, and then build a shell script from those basic commands to see how they can become more powerful via scripting. Next, you take a brief tour of how shells have evolved over the past 35 years. Finally, you learn why PowerShell was created, why there was a need for PowerShell, what its inception means to scripters and system administrators, and what some of the differences between PowerShell 1.0 and PowerShell 2.0 CTP2 are.

What Is a Shell?

A shell is an interface that enables users to interact with the operating system. A shell isn’t considered an application because of its inescapable nature, but it’s the same as any other process that runs on a system. The difference between a shell and an application is that a shell’s purpose is to enable users to run other applications. In some operating systems (such as UNIX, Linux, and VMS), the shell is a command-line interface (CLI); in other operating systems (such as Windows and Mac OS X), the shell is a graphical user interface (GUI).

In addition, two types of systems in wide use are often neglected in discussions of shells: networking equipment and kiosks. Networking equipment usually has a GUI shell (mostly a Web interface on consumer-grade equipment) or a CLI shell (in commercial-grade equipment). Kiosks are a completely different animal; because many kiosks are built from applications running atop a more robust operating system, often kiosk interfaces aren’t shells. However, if the kiosk is built with an operating system that serves only to run the kiosk, the interface is accurately described as a shell. Unfortunately, kiosk interfaces continue to be referred to generically as shells because of the difficulty in explaining the difference to nontechnical users.

Both CLI and GUI shells have benefits and drawbacks. For example, most CLI shells allow powerful command chaining (using commands that feed their output into other commands for further processing, a mechanism commonly referred to as the pipeline). GUI shells, by contrast, require commands to be completely self-contained and generally provide no native method for directing one command's output into another. Furthermore, most GUI shells are easy to navigate, whereas CLI shells lack an intuitive interface and require preexisting knowledge of the system to complete automation tasks successfully. Your choice of shell depends on what you're comfortable with and what's best suited to the task at hand.
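To make the idea of command chaining concrete, here is a minimal sketch of a pipeline: one command's output becomes the next command's input. (The 1 KB threshold and the use of ls -l's fifth column as a byte size are illustrative assumptions; exact ls output columns vary slightly by platform.)

```shell
# Count how many entries in the current directory are larger than 1 KB.
# ls -l writes one line per entry; awk reads those lines, compares the
# size column ($5, assumed here), and counts the matches.
big_count=$(ls -l | awk '$5 > 1024 { n++ } END { print n+0 }')
echo "$big_count entries larger than 1 KB"
```

Neither command knows anything about the other; the shell simply connects ls's output stream to awk's input stream, which is what makes pipelines composable.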

Even though GUI shells exist, the term "shell" is used almost exclusively to describe a command-line environment, not a task you perform with a GUI application, such as Windows Explorer. Likewise, shell scripting refers to collecting commands normally entered on the command line into an executable file.

As you can see, historically there has been a distinction between graphical and nongraphical shells. An interesting development in PowerShell 2.0 CTP2 is the introduction of an alpha version of Graphical PowerShell, which provides a CLI and a script editor in the same window. Although this type of interface has been available for many years in IDE (Integrated Development Environment) editors for programming languages such as C, this alpha version of Graphical PowerShell gives a sense of the direction from the PowerShell team on where they see PowerShell going in the future—a fully featured CLI shell with the added benefits of a natively supported GUI interface.

Basic Shell Use

Many shell commands, such as listing the contents of the current working directory, are simple. However, shells can quickly become complex when more powerful results are required. The following example uses the Bash shell to list the contents of the current working directory.

$ ls
apache2 bin     etc     include lib     libexec man     sbin    share   var

However, seeing just the filenames often isn't enough. The following command passes a command-line argument to ls to get more detailed information about each file.

$ ls -l
total 8
drwxr-xr-x    13 root  admin   442 Sep 18 20:50 apache2
drwxrwxr-x    57 root  admin  1938 Sep 19 22:35 bin
drwxrwxr-x     5 root  admin   170 Sep 18 20:50 etc
drwxrwxr-x    30 root  admin  1020 Sep 19 22:30 include
drwxrwxr-x   102 root  admin  3468 Sep 19 22:30 lib
drwxrwxr-x     3 root  admin   102 Sep 18 20:11 libexec
lrwxr-xr-x     1 root  admin     9 Sep 18 20:12 man -> share/man
drwxrwxr-x     3 root  admin   102 Sep 18 20:11 sbin
drwxrwxr-x    13 root  admin   442 Sep 19 22:35 share
drwxrwxr-x     3 root  admin   102 Jul 30 21:05 var

Now you need to decide what to do with this information. As you can see, directories are interspersed with files, making it difficult to tell them apart. If you want to view only directories, you have to pare down the output by piping the ls command output into the grep command. In the following example, the output has been filtered to display only lines starting with the letter d, which signifies that the file is a directory.

$ ls -l | grep '^d'
drwxr-xr-x    13 root  admin   442 Sep 18 20:50 apache2
drwxrwxr-x    57 root  admin  1938 Sep 19 22:35 bin
drwxrwxr-x     5 root  admin   170 Sep 18 20:50 etc
drwxrwxr-x    30 root  admin  1020 Sep 19 22:30 include
drwxrwxr-x   102 root  admin  3468 Sep 19 22:30 lib
drwxrwxr-x     3 root  admin   102 Sep 18 20:11 libexec
drwxrwxr-x     3 root  admin   102 Sep 18 20:11 sbin
drwxrwxr-x    13 root  admin   442 Sep 19 22:35 share
drwxrwxr-x     3 root  admin   102 Jul 30 21:05 var

However, now that you have only directories listed, the other information such as date, permissions, size, and so on is superfluous because only the directory names are needed. So in this next example, you use the awk command to print only the last column of output shown in the previous example.

$ ls -l | grep '^d' | awk '{ print $NF }'
apache2
bin
etc
include
lib
libexec
sbin
share
var
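The awk expression deserves a brief aside: NF is awk's built-in count of fields on the current line, so $NF is always the last field, which for ls -l output is the file or directory name. A one-line sketch shows the behavior in isolation (the sample line mimics the listing above):

```shell
# NF is the number of whitespace-separated fields on the line; $NF is the
# last one. Feeding awk a sample ls -l line yields just the name column.
last=$(echo "drwxr-xr-x 13 root admin 442 Sep 18 20:50 apache2" | awk '{ print $NF }')
echo "$last"
```

One caveat: because $NF is the last whitespace-separated field, this technique breaks on names that contain spaces, which is an accepted limitation of this style of parsing.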

The result is a simple list of directories in the current working directory. This command is fairly straightforward, but it's not something you want to type every time you want to see a list of directories. Instead, we can create an alias, or command shortcut, for the command we just executed.

$ alias lsd="ls -l | grep '^d' | awk '{ print \$NF }'"

Then, by using the lsd alias, you can get a list of directories in the current working directory without having to retype the command from the previous examples.

$ lsd
apache2
bin
etc
include
lib
libexec
sbin
share
var
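One caveat worth knowing: an alias defined at the prompt lasts only for the current session. To keep lsd available in future sessions, its definition is typically appended to the shell's startup file; ~/.bashrc is a common location for Bash, though the exact file varies by system and shell. A sketch, using a throwaway file in place of the real ~/.bashrc:

```shell
# A throwaway file stands in for ~/.bashrc in this sketch; in practice you
# would append to the real startup file. The quoted heredoc delimiter
# ('EOF') keeps the alias definition from being expanded as it is written.
rcfile=./demo_bashrc
cat >> "$rcfile" <<'EOF'
alias lsd="ls -l | grep '^d' | awk '{ print \$NF }'"
EOF
```

New interactive sessions read the startup file on launch, so the alias is recreated automatically each time.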

As you can see, using a CLI shell offers the potential for serious power when you’re automating simple, repetitive tasks.

Basic Shell Scripts

Working in a shell typically consists of typing each command, interpreting the output, deciding how to put that data to work, and then combining the commands into a single, streamlined process. Anyone who has gone through dozens of files, manually adding a single line at the end of each one, will agree that scripting the process is far more efficient than editing each file by hand, and that it greatly reduces the potential for data entry errors. In many ways, scripting makes as much sense as breathing.
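That add-a-line-to-every-file chore can be scripted in a few lines. This is a sketch with hypothetical file names; the point is that the loop performs the identical edit on every file without a chance of skipping one or mistyping the line:

```shell
# Append the same line to every .txt file in a directory in one pass.
# The demo directory, file names, and the "# reviewed" line are all
# illustrative stand-ins.
mkdir -p demo
printf 'first file\n' > demo/a.txt
printf 'second file\n' > demo/b.txt
for f in demo/*.txt; do
    echo "# reviewed" >> "$f"
done
```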

You’ve seen how commands can be chained together in a pipeline to manipulate output from the preceding command, and how a command can be aliased to minimize typing. Command aliasing is the younger sibling of shell scripting and gives the command line some of the power of shell scripts. However, shell scripts can harness even more power than aliases.

Collecting single-line commands and pipelines into files for later execution is a powerful technique. Putting output into variables for further manipulation and reference later in the script takes the power to the next level. Wrapping any combination of commands into recursive loops and flow control constructs takes scripting to the same level of sophistication as programming.
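These three levels of power can be seen together in a small sketch: output captured into a variable, a loop over that output, and a flow-control test inside the loop, which are exactly the building blocks the scripts later in this chapter rely on.

```shell
# Capture a command's output in a variable for later reference, then
# iterate over it with a for loop and apply a condition to each entry.
entries=$(ls)          # capture output once, reuse it below
count=0
for e in $entries; do  # word-split the captured output, one entry at a time
    if [ -e "$e" ]; then
        count=$((count + 1))
    fi
done
echo "counted $count entries"
```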

Some may say that scripting isn’t programming, but this distinction is quickly becoming blurred with the growing variety and power of scripting languages these days. With this in mind, let’s try developing the one-line Bash command from the previous section into something more useful.

The lsd command alias from the previous example (referencing the Bash command ls -l | grep '^d' | awk '{ print $NF }') produces a listing of each directory in the current working directory. Now, suppose you want to expand this functionality to show how much space each directory uses on the disk. The Bash utility that reports on disk usage, du, reports either on a specified directory's entire contents or, with the -s option, on a directory's overall usage as a single summary line. It also reports disk usage in raw block counts by default rather than in bytes, which is why the script that follows passes the -k option to get kilobytes. With all that in mind, if you want to know each directory's disk usage as a freestanding entity, you need to get and display information for each directory, one by one. The following examples show what this process looks like as a script.
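The du behavior can be seen in isolation before it is embedded in a loop. A sketch, using a sample directory created on the spot so the command has predictable input:

```shell
# -s prints one summary line per argument instead of descending into every
# subdirectory, and -k reports the size in kilobytes. cut -f 1 isolates
# the numeric size from du's tab-separated "size<TAB>name" output.
mkdir -p sample
printf 'hello\n' > sample/file.txt
usage=$(du -sk sample | cut -f 1)
echo "sample uses ${usage} KB"
```

This size-then-name output shape is also why the later script uses cut -f 1 to extract just the number.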

Notice that the script reuses the command line you built in the previous section. The for loop parses the directory list that command returns, assigning each directory name to the DIR variable and executing the code between the do and done keywords once per directory.

#!/bin/bash

for DIR in $(ls -l | grep '^d' | awk '{ print $NF }'); do
    du -sk ${DIR}
done

Saving the previous code as a script file named big_directory.sh, marking it executable (for example, with chmod +x big_directory.sh), and then running it in a Bash session produces the following output.

$ big_directory.sh
17988   apache2
5900    bin
72      etc
2652    include
82264   lib
0       libexec
0       sbin
35648   share
166768  var

Initially, this output doesn't seem especially helpful. With a few additions, you can build something considerably more useful. In this example, we add a requirement to report the names of all directories using more than a certain amount of disk space. To achieve this, modify the big_directory.sh script file as shown in this next example.

#!/bin/bash

PRINT_DIR_MIN=35000

for DIR in $(ls -l | grep '^d' | awk '{ print $NF }'); do
    DIR_SIZE=$(du -sk ${DIR} | cut -f 1)
    if [ ${DIR_SIZE} -ge ${PRINT_DIR_MIN} ];then
        echo ${DIR}
    fi
done

One of the first things you'll notice about this version of big_directory.sh is the addition of variables. PRINT_DIR_MIN holds the minimum size, in kilobytes, that a directory must reach before its name is printed. This value could change fairly regularly, so we want to keep it as easy to edit as possible; defining it once also means it can be reused elsewhere in the script without having to change the amount in multiple places.

You might be thinking the find command would be easier to use. find is terrific for walking through directory hierarchies, but it is overkill for simply inspecting the current directory. Because we are only looking for directories in the current directory, the ls command remains the best tool for the job in this situation; if we were searching for files throughout the hierarchy, find would be the more appropriate choice.
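For comparison, here is what the find version would look like. This is a sketch, and note that -maxdepth is a GNU/BSD extension rather than strict POSIX; the proj_a and proj_b directories are sample names created for the demonstration:

```shell
# List only the current directory's subdirectories with find.
# -maxdepth 1 stops recursion, ! -name '.' drops the current directory
# itself, and sed strips the leading ./ from each result.
mkdir -p proj_a proj_b
dirs=$(find . -maxdepth 1 -type d ! -name '.' | sed 's|^\./||')
echo "$dirs"
```

One genuine advantage of this form is that it does not parse ls output, so it copes with unusual file names; the chapter sticks with ls because it builds directly on the pipeline already developed.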

The following is an example of the output rendered by the script so far.

$ big_directory.sh
lib
share
var

This output can be used in a number of ways. For example, systems administrators might use this script to watch user directories for disk usage thresholds if they want to notify users when they have reached a certain level of disk space. For this purpose, knowing when a certain percentage of users reaches or crosses the threshold would be useful.

In our next Bash scripting example, we modify the big_directory.sh script to display a message when a certain percentage of directories exceed a specified size.

#!/bin/bash

DIR_MIN_SIZE=35000
DIR_PERCENT_BIG_MAX=23

DIR_COUNTER=0
BIG_DIR_COUNTER=0

for DIR in $(ls -l | grep '^d' | awk '{ print $NF }'); do
    DIR_COUNTER=$(expr ${DIR_COUNTER} + 1)
    DIR_SIZE=$(du -sk ${DIR} | cut -f 1)
    if [ ${DIR_SIZE} -ge ${DIR_MIN_SIZE} ];then
        BIG_DIR_COUNTER=$(expr ${BIG_DIR_COUNTER} + 1)
    fi
done

if [ ${BIG_DIR_COUNTER} -gt 0 ]; then
    DIR_PERCENT_BIG=$(expr $(expr ${BIG_DIR_COUNTER} \* 100) / ${DIR_COUNTER})
    if [ ${DIR_PERCENT_BIG} -gt ${DIR_PERCENT_BIG_MAX} ]; then
        echo "${DIR_PERCENT_BIG} percent of the directories are larger than ${DIR_MIN_SIZE} kilobytes."
    fi
fi

Now, the preceding example barely looks like what we started with. The variable name PRINT_DIR_MIN has been changed to DIR_MIN_SIZE because we’re not printing anything as a direct result of meeting the minimum size. The DIR_PERCENT_BIG_MAX variable has been added to indicate the maximum allowable percentage of directories at or above the minimum size. Also, two counters have been added: one (DIR_COUNTER) to count the directories and one (BIG_DIR_COUNTER) to count the directories exceeding the minimum size.

Inside the for loop, DIR_COUNTER is incremented, and the if statement in the for loop now simply increments BIG_DIR_COUNTER instead of printing the directory’s name. An if statement has been added after the for loop to do additional processing, figure out the percentage of directories exceeding the minimum size, and then print the message if necessary. With these changes, the script now produces the following output.

$ big_directory.sh
33 percent of the directories are larger than 35000 kilobytes.

The output shows that 33 percent of the directories are 35MB or more. By modifying the echo line in the script to feed a pipeline into a mail delivery command and tweaking the size and percentage thresholds for the environment, systems administrators can schedule this shell script to run at specified intervals and produce directory size reports easily. If administrators want to get fancy, they can make the size and percentage thresholds configurable via command-line parameters.
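A sketch of what that reporting wrapper might look like. The mailx command name, the recipient address, and the cron schedule are all assumptions about the local environment, so the mail pipeline is shown as a comment rather than executed:

```shell
# Build the report message, then hand it off. To mail it instead of
# printing it (mailx and the address below are assumptions):
#   echo "$report" | mailx -s "Directory size report" admin@example.com
# Example crontab entry to run the script daily at 6 a.m. (also an
# assumption about where the script is installed):
#   0 6 * * * /usr/local/bin/big_directory.sh
report="33 percent of the directories are larger than 35000 kilobytes."
echo "$report"
```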

As you can see, even a basic shell script can be powerful. With a mere 22 lines of code, we have a useful shell script. Some quirks of the script might seem inconvenient (using the expr command for simple math can be tedious, for example), but every programming language has its strengths and weaknesses. As a rule, some tasks you need to do are convoluted to perform, no matter what language you’re using.
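As an aside on the expr quirk: Bash's built-in arithmetic expansion, $(( )), performs the same integer math without spawning an external command. A sketch using the counts from the sample run above (3 of 9 directories over the limit):

```shell
# Replace the nested expr calls with arithmetic expansion; like expr,
# $(( )) does integer division, so the result is truncated to 33.
BIG_DIR_COUNTER=3
DIR_COUNTER=9
DIR_PERCENT_BIG=$(( BIG_DIR_COUNTER * 100 / DIR_COUNTER ))
echo "$DIR_PERCENT_BIG"
```

Arithmetic expansion is a Bash feature rather than part of the original Bourne shell, which is one reason older scripts and books reach for expr.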

The moral of this story is that shell scripting, or scripting in general, can make life much easier. For example, say your company merges with another company. As part of that merger, you have to create 1,000 user accounts in Active Directory or another authentication system. Usually, a systems administrator grabs the list, sits down with a cup of coffee, and starts clicking or typing away. If an administrator manages to get a migration budget, he can hire an intern or consultants to do the work or purchase migration software. But why bother performing repetitive tasks or spending money that could be put to better use (such as a bigger salary)?

Instead, the answer should be to automate those iterative tasks by using scripting. Automation is the purpose of scripting. As a systems administrator, you should take advantage of scripting with CLI shells or command interpreters to gain access to the same functionality developers have when coding the systems you manage. However, scripting tools tend to be more open, flexible, and focused on the tasks that you as an IT professional need to perform, as opposed to development tools that provide a framework for building an entire application from a blank canvas.
