
Packet Filtering


Problems with Packet Filters

Despite the many positive uses of packet filters, problems exist due to inherent limitations in the way packet filters work. Spoofed and fragmented traffic can bypass the packet filter if protections aren't properly implemented. In addition, because a "permit" rule in a static packet filter is always open, opening such a "hole" raises issues of its own. Finally, allowing return traffic is difficult with a technology that cannot track the state of the current traffic flow. To successfully defend a network with packet filtering, these weaknesses must be understood.

Spoofing and Source Routing

Spoofing means sending a packet that is addressed with false information, so it appears to come from somewhere other than where it did. A packet can be addressed as if it came from an internal host on the target network, one of the private address ranges, or even another network entirely. Of course, a packet doesn't do this on its own; the packet has to be crafted or created with special packet-crafting software.

If your defense isn't set up correctly and the packet gets through, an internal host could believe the packet came from a "trusted" host that has rights to private information, and could in turn reply to the spoofed address! You might be asking yourself, "If the packet appeared to come from a station other than the one that sent it, where will the response go?" In typical TCP/IP communication, the answer is to the host that actually owns the spoofed address, which wouldn't know what to do with the packet, and would drop it and send a reset to the originator. However, if source routing is enabled, the imposter packet could carry source-routing information that tells each station along the way how to route the response back to the real sender.

Source routing allows a packet to carry information that tells a router the "correct" or a better way for it to get back to where it came from, allowing it to override the router's prescribed routing rules for the packet. This could allow a devious user to guide return traffic wherever he wants. For this reason, it is imperative to have source routing disabled. It is easily disabled in a Cisco router with the following command typed at the global configuration prompt:

router(config)#no ip source-route

Beyond disabling source routing, blocking any packet that claims an unusable source address before it can enter also helps remove the problem. This is where ingress filters come into play. The best place to cut off such packets is where they enter: on the perimeter router's interface that connects your network to the Internet.
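The ingress-filter decision itself is simple to sketch. The following Python fragment is a minimal model of that decision, not router code; the 203.0.113.0/24 internal block is an assumption made for the example, and a real filter would be expressed as ACL entries on the perimeter router:

```python
import ipaddress

# Example internal block; 203.0.113.0/24 is an illustrative assumption.
INTERNAL_NET = ipaddress.ip_network("203.0.113.0/24")

# Source ranges that should never arrive from the Internet side:
# our own addresses (spoofed), the RFC 1918 private ranges, and loopback.
BOGON_NETS = [
    INTERNAL_NET,
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("127.0.0.0/8"),
]

def ingress_permits(src_ip: str) -> bool:
    """Return True if a packet claiming this source address may enter."""
    addr = ipaddress.ip_address(src_ip)
    return not any(addr in net for net in BOGON_NETS)
```

A packet arriving from outside with a source of 10.1.2.3, or with one of your own internal addresses, is dropped before it can reach any host that might trust it.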

Fragments

Many fragmentation attacks were originally designed to defeat packet-filtering technology. Originally, some packet-filtering technologies allowed all fragments to pass, which wasn't good. After this was recognized as a security concern, many systems began checking the first fragment to verify that the header information passed the tests set forth by the ACLs. If this initial fragment failed the tests and didn't pass through the router, the rest of the fragments could never be reassembled on the other side, in theory solving the problem.1

Because of the way packet filtering examines header information, it could be defeated by splitting a packet into pieces so small that the header containing the TCP or UDP port information was divided across fragments. Because the first fragment was often the only one that many popular packet-filtering systems checked, and its IP address information would pass, the remaining fragments, and with them the entire reassembled packet, were allowed through. In addition, packet filtering was discovered to be vulnerable to other fragmentation attacks, including attacks that allowed a second fragment to overlap a seemingly harmless TCP or UDP port in the initial fragment with deviously chosen port information.2 Many clever ways were devised to bypass the packet filter's inspection capabilities.

As time went by, packet-filtering product manufacturers advanced their technology, and solutions were proposed for many of the common fragment attack methods. RFC 1858 defined methods to deter such attacks, including dropping initial fragments that are smaller than a defined size and dropping a second fragment based on information found in it.3
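The two RFC 1858 checks can be sketched in a few lines. This is an illustrative Python model of the filter decision, not router code; the 20-byte minimum is an assumption standing in for the "defined size" mentioned above (the smallest complete TCP header):

```python
TCP = 6            # IP protocol number for TCP
MIN_TRANSPORT = 20  # assumed minimum: a full 20-byte TCP header in fragment 0

def drop_fragment(protocol: int, frag_offset: int, payload_len: int) -> bool:
    """RFC 1858-style checks; frag_offset is in 8-byte units, as in the
    IP header.

    - Tiny-fragment rule: an initial TCP fragment must carry the whole
      transport header, or it is dropped.
    - Overlap rule: a TCP fragment with offset 1 could overwrite part of
      the header carried in fragment 0, so it is always dropped.
    """
    if protocol != TCP:
        return False
    if frag_offset == 0 and payload_len < MIN_TRANSPORT:
        return True   # tiny initial fragment: port numbers are missing
    if frag_offset == 1:
        return True   # possible header-overlap attack
    return False
```

Note that the second rule drops the offset-1 fragment even though it looks harmless on its own; it is the only fragment position that can rewrite the ports the filter already ruled on.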

The most important point in using a packet-filtering defense to protect your network from fragment attacks is to verify that you have the latest firmware and security patches (or in the case of Cisco routers, the latest IOS software). These updates reflect the changes made to defend against fragment attacks such as those mentioned. For more complete fragment protection, some firewall technologies include methods such as reassembling fragments before ruling on packets, building tables that track decisions regarding initial fragments, and basing the fate of noninitial fragments on the decision made for their predecessors. These capabilities are not inherent in packet-filtering systems, so you must check for them when purchasing an individual product.

Cisco access lists can disallow fragmented traffic by using the following entry as the first in an ACL series:

router(config)# access-list 111 deny ip any any fragments

This access list denies any noninitial fragments with matching IP address information, but it allows non-fragments or initial fragments to continue to the next access list entry because of the fragments keyword at the end of the ACL. The initial fragments or non-fragments are then denied or allowed based on the access list entries that follow this example. However, fragmented traffic is a normal part of some environments, and a statement like this one denies that normal traffic as well as maliciously fragmented traffic. This example should only be used in an environment that warrants the highest protection against fragmentation attacks, without fear of the loss of potential usability.

Opening a "Hole" in a Static Packet Filter

One of the great flaws of static packet filtering is that to allow a protocol into a network, you need to open a "hole." It is referred to as a hole because no further, more intelligent inspection takes place on the traffic allowed in or out. All you can do is open an individual port in your protective wall; as with a bullet hole through a three-foot wall, you can't shoot anywhere else on the other side, but you can fire straight through the existing hole repeatedly. The importance of this analogy is that something must be listening on the other side at the port in question; otherwise, you won't be able to hit it.

When opening a port with an access list of this type, it is recommended that you limit the target hosts as much as possible. Then, if the server you are allowing to service this port is fully patched and free of vulnerabilities (found about as often as elves and four-leaf clovers), this isn't such a bad thing. However, if your host system is exploitable through whatever port number you have open, any traffic can potentially be sent through that "hole," not just the protocol that was running on the inside host.

Two-way Traffic and the established Keyword

When we communicate with another host, it's not just us connecting to the host, but also the host connecting to us—a two-way connection. This presents a problem when it comes to preventing unwanted access with a packet filter. If we try to block all incoming traffic, we prevent the return connection from hosts we are trying to contact.

How can we allow only return traffic? The original answer that Cisco came up with was the established keyword for extended access lists. With the word established added to an access list, any traffic other than return traffic is blocked, theoretically. The established keyword checks which flags are set on incoming packets. Packets with the ACK flag (or RST flag) set would pass, so only response traffic of the type specified could ever get through, right? Wrong! The combination of certain software and sneaky, nefarious users results in what's known as a crafted packet: a packet that the communicating host does not create in the normal way, but that is built Frankenstein-style by software residing on a host. Users can set any flag they want.
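The flag test behind the established keyword can be modeled in a few lines. This is an illustrative Python sketch of the decision logic only; the bit values are those of the flags byte in the TCP header:

```python
# TCP flag bits as they appear in the flags byte of the TCP header.
FIN, SYN, RST, PSH, ACK = 0x01, 0x02, 0x04, 0x08, 0x10

def passes_established(flags: int) -> bool:
    """Approximate the established test: permit only packets with ACK
    or RST set, on the theory that such packets must be replies."""
    return bool(flags & (ACK | RST))
```

A new connection attempt (bare SYN) is blocked, while a SYN/ACK or data segment passes. The catch, as the text explains, is that nothing stops an attacker from crafting a packet with the ACK bit set on what is really a brand-new probe.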

What happens if a packet that was crafted with malicious intent appears with the ACK flag set in an attempt to sneak by the router's filters? The established keyword access list lets it go through, which isn't good. The good news is that an internal system that is listening for a new connection (initiated by a SYN packet) would not accept the ACK packet that is passed. It would be so offended by the packet that it would send a reset back to the originator, telling it to try again.

This sounds like a good thing, but it has two flaws. First, it proves that a station exists at the address to which the packet was sent. If a station didn't exist there, a reset packet wouldn't be returned. This scanning technique works and is pretty stealthy as well. Second, because it is eliciting a response from a private system, this technique might be used successfully for a denial of service attack. Internal systems could be repeatedly hit with scores of ACK packets, causing those systems to attempt reply after reply with RST packets. This is further accentuated by spoofing the source address on the ACK packets, so the targeted network would be feverishly firing resets back to another innocent network. Fortunately, the innocent network does not respond to the resets, preventing a second volley from being thrown at the target network.

Despite the drawbacks of the established keyword, it is one of the only static means by which a Cisco router can allow only return traffic back into your network. The following is an example of an established access list:

router(config)#access-list 101 permit tcp any any est log

This basic extended access list allows any TCP traffic that has the ACK bit set, meaning that it allows only return traffic to pass. It is applied inbound on the outside router interface, and it can log matches with the appended log keyword. It also allows RST packets to enter (by definition) to help facilitate proper TCP communication. A more secure version of this same list would be this:

router(config)#access-list 101 permit tcp any eq 80 192.168.1.0 0.0.0.255 gt 1023 est log
router(config)#access-list 101 permit tcp any eq 23 192.168.1.0 0.0.0.255 gt 1023 est log
router(config)#access-list 101 permit tcp any eq 25 192.168.1.0 0.0.0.255 gt 1023 est log
router(config)#access-list 101 permit tcp any eq 110 192.168.1.0 0.0.0.255 gt 1023 est log

In this case, the inside network address is 192.168.1.0–255. These access lists are applied inbound on the external router interface. By writing your access list this way, you allow traffic only from approved protocol port numbers (web traffic, Telnet, email, and so on) to your internal network addresses, and only to ephemeral ports on your systems. However, an access list of this type still has problems: it does not support FTP, for reasons we will go over in an upcoming section, and it handles only TCP traffic.
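The match logic of the four-entry list above can be sketched in Python. This is an illustration only; the addresses and ports come straight from the example ACL:

```python
import ipaddress

INSIDE = ipaddress.ip_network("192.168.1.0/24")  # inside network from the ACL
ALLOWED_SERVER_PORTS = {80, 23, 25, 110}         # web, Telnet, SMTP, POP3
ACK = 0x10                                       # TCP ACK flag bit

def acl_permits(src_port: int, dst_ip: str, dst_port: int, flags: int) -> bool:
    """Mirror the four-line established ACL: the source must be an
    approved service port, the destination an inside ephemeral port,
    and the ACK bit must be set."""
    return (src_port in ALLOWED_SERVER_PORTS
            and ipaddress.ip_address(dst_ip) in INSIDE
            and dst_port > 1023
            and bool(flags & ACK))
```

A web server's reply from port 80 to an inside ephemeral port passes; a bare SYN, a packet sourced from an unapproved port, or traffic aimed at a low inside port does not.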

The established Keyword and the Problem of DNS

Remember that the previous ACL did not allow UDP traffic or ICMP traffic. The established (or est) keyword is valid only for TCP access lists. Access lists allowing needed ICMP and UDP traffic would have to be included alongside this established access list to form a comprehensive filter set. Without UDP, outside DNS is a real problem, disabling Internet functionality. This shows one of the biggest flaws of the est keyword as an effective defense mechanism. To facilitate Internet access with the est keyword, a UDP access list must be included, allowing any DNS return traffic. Remember that return traffic comes to a randomly chosen port above 1023, which means that to effectively allow any DNS responses, you need an access list like this:

access-list 101 permit udp host 192.168.1.1 eq 53 172.16.100.0 0.0.0.255 gt 1023 log

This ACL assumes that the external DNS server's address is 192.168.1.1 and that your internal network is 172.16.100.0–255. By adding this line to your existing access list 101, you allow DNS responses to your network. However, you also leave yourself open to outside access on ports greater than 1023 from that external DNS server. Your security red alert should be going off about now! This would be a great argument for bringing DNS inside your perimeter; however, that DNS server would then need to be able to access outside DNS servers for queries and zone transfers. To allow the DNS server to make outbound DNS queries, a similar access list would need to be added to the router:

access-list 101 permit tcp any host 172.16.100.3 eq 53
access-list 101 permit udp any host 172.16.100.3 eq 53

This allows all traffic through port 53 to your inside (and hopefully well-hardened) DNS server. Ideally, such a public access server would be on a separate screened subnet for maximum security.

Remember that neither solution provides for additional UDP or ICMP support. If access to either is needed in your specific environment, more "holes" have to be opened.
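To see why the security red alert should be going off, consider a Python model of that UDP entry's match logic (an illustration only; the addresses come from the example). Any datagram sourced from the DNS server's port 53 passes, whether or not it is actually a DNS response, including traffic aimed at some unrelated UDP service listening above 1023:

```python
import ipaddress

DNS_SERVER = "192.168.1.1"                        # external DNS server in the example
INSIDE = ipaddress.ip_network("172.16.100.0/24")  # internal network in the example

def dns_acl_permits(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> bool:
    """Mirror the UDP return-traffic ACL: any datagram sourced from the
    DNS server's port 53 to an inside port above 1023 is let in. Nothing
    checks that the payload is really a DNS answer."""
    return (src_ip == DNS_SERVER and src_port == 53
            and ipaddress.ip_address(dst_ip) in INSIDE
            and dst_port > 1023)
```

A compromised (or impersonated) DNS server can source arbitrary UDP traffic from port 53 at any inside port above 1023 and the filter will pass it.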

Protocol Problems: Extended Access Lists and FTP

File Transfer Protocol (FTP) is a popular means to move files back and forth between remote systems. You need to be careful of outside FTP access because it could allow a malicious user to pull company information or server information (including password files) from inside servers. A user could upload files in an attempt to fill a hard drive and crash a server, upload a Trojan, or overwrite important server configuration files with ones that allow compromise of the server.

FTP is also one of the more complicated services to secure because of the way it works. Securing (or blocking) an incoming connection is relatively easy, but securing outgoing FTP connections is considerably more difficult. Let's take a look at a trace that shows standard FTP communication between a client and a server.

First is the outgoing connection with TCP/IP's three-way handshake:

client.com.4567 > server.com.21: S 1234567890:1234567890(0)
server.com.21 > client.com.4567: S 3242456789:3242456789(0) ack 1234567891
client.com.4567 > server.com.21: . ack 1

Next is the incoming connection when establishing data channel:

server.com.20 > client.com.4568: S 3612244896:3612244896(0)
client.com.4568 > server.com.20: S 1810169911:1810169911(0) ack 3612244897
server.com.20 > client.com.4568: . ack 1

The first part of the communication is a normal three-way handshake, but when the data channel is established, things become complicated. The server starts a connection session from a different port (TCP 20) than the one the client originally contacted (TCP 21), to a port greater than 1023 on the client that differs from the one the client originally used. Because the server starts the connection, it is not considered return traffic and won't pass through extended access lists with the established keyword or dynamic reflexive access lists. In turn, to open the router for standard FTP, you must allow any traffic with a destination TCP port greater than 1023 and a source port of 20, which is a significant security hole.
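A quick model shows why the data channel never comes up through an established filter: the server's opening segment is a bare SYN, and the flag test rejects anything without ACK or RST set (illustrative Python, not router code):

```python
SYN, RST, ACK = 0x02, 0x04, 0x10  # TCP flag bits

def est_permits(flags: int) -> bool:
    # The established test: only packets with ACK or RST set pass.
    return bool(flags & (ACK | RST))

# Active FTP: the server opens the data channel from port 20 with a
# bare SYN. No ACK bit means the filter drops it, so the handshake
# never completes unless ports above 1023 are opened wide.
data_channel_syn = SYN
```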

One way to get around this problem is to use passive (PASV) FTP. PASV FTP works like standard FTP until the data connection is made. Instead of connecting to the client from port 20 to a random port greater than 1023, the FTP server tells the client (through the port that the client last used to connect to it) which greater-than-1023 port it wants to use to transfer data. With this port number, the client establishes a connection back to the FTP server. Now let's look at a trace of our previous example's data connection, this time using PASV FTP:

client.com.4568 > server.com.3456: S 1810169911:1810169911(0)
server.com.3456 > client.com.4568: S 3612244896:3612244896(0) ack 1810169912
client.com.4568 > server.com.3456: . ack 1

All traffic that comes from the server is established traffic, permitting extended lists with the established keyword to function correctly. Using PASV mode FTP requires both the FTP server and client to support PASV mode transfers. Changing to passive FTP clients isn't a problem for most sites because most popular FTP clients support PASV mode. Most of the major web browsers support PASV mode FTP as well; however, this might require some minor setup, such as going to a preferences section and selecting PASV or passive FTP mode support. Using an ACL like the following example would be one way to handle inbound return PASV FTP traffic:

router(config)#access-list 101 permit tcp any gt 1023 192.168.1.0 0.0.0.255 gt 1023 est log

This ACL assumes that our internal network addresses are 192.168.1.0–255 and that they are part of a more complete access list allowing other, more standard traffic types. The problem with this access list is that despite the fact that only return traffic is allowed (in theory), you must leave open all greater-than 1023 TCP ports for return access because you don't know what data channel port the FTP server you are contacting will choose. Although this ACL is more secure than some of the previous options, it still isn't a strong security stance. Wouldn't it be nice if it were possible to find out what port number you were using to contact the PASV FTP server every time, and use that information to allow the traffic back in?
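How does the client learn a data port like 3456 in the first place? Per the FTP specification (RFC 959), the server's PASV reply encodes the address and port as six decimal numbers, with the port split into a high and a low byte. A small parser shows the arithmetic (illustrative Python; the reply string in the test is a hypothetical example whose port matches the earlier trace):

```python
import re

def parse_pasv(reply: str) -> tuple[str, int]:
    """Parse an RFC 959 '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)'
    reply and return the (host, port) the client should connect to
    for the data channel. The port is p1 * 256 + p2."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not m:
        raise ValueError("not a PASV reply")
    h1, h2, h3, h4, p1, p2 = map(int, m.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2
```

Because the port is chosen fresh for each transfer, a static filter cannot know it in advance; it can only leave the whole greater-than-1023 range open, which is exactly the weakness the question above points at.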
