First and foremost, you want to prevent an inside user from forging the IP address of an outside machine. This can be solved with a straightforward rule in the firewall. If each dial-in modem is assigned a static IP address and the modems are attached to serial ports on a router, you could even restrict the IP source address on a modem-by-modem basis.
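Such an egress rule can be sketched as follows. The address pool, the per-modem table, and the port names are made-up examples, not real ISP assignments; a real implementation would live in the router's filter rules rather than in Python.

```python
import ipaddress

# Hypothetical dial-up address pool; anything outside it is a forgery.
DIALUP_POOL = ipaddress.ip_network("10.20.0.0/16")

# Optional per-modem restriction: serial port -> its one assigned static IP.
MODEM_ASSIGNMENTS = {"ttyS0": "10.20.0.5", "ttyS1": "10.20.0.6"}

def egress_allowed(src_ip, modem=None):
    """Drop outbound packets whose source address the sender shouldn't own."""
    addr = ipaddress.ip_address(src_ip)
    if addr not in DIALUP_POOL:
        return False  # user is forging an outside machine's address
    if modem is not None:
        # Modem-by-modem check: the packet must carry that modem's address.
        return MODEM_ASSIGNMENTS.get(modem) == src_ip
    return True
```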
To prevent smurfing attacks, you could similarly block ping packets addressed to broadcast addresses. Once you've stopped IP spoofing, SYN flooding attacks can at least be traced back to your users. A stateful packet-filtering firewall could additionally limit the number of outstanding half-open connections originating at any given source address.
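The half-open connection limit could be sketched like this; the cap of 32 pending SYNs per source is an arbitrary example value, and a real stateful firewall would also age entries out over time.

```python
from collections import defaultdict

MAX_HALF_OPEN = 32  # illustrative per-source cap

class SynLimiter:
    def __init__(self, limit=MAX_HALF_OPEN):
        self.limit = limit
        self.half_open = defaultdict(int)  # source IP -> pending SYN count

    def on_syn(self, src):
        """Return True if the SYN may pass, False if the source hit its cap."""
        if self.half_open[src] >= self.limit:
            return False
        self.half_open[src] += 1
        return True

    def on_complete_or_reset(self, src):
        """Handshake finished (or connection reset): one fewer half-open entry."""
        if self.half_open[src] > 0:
            self.half_open[src] -= 1
```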
To prevent application-level attacks, you would program your packet filter to reject outgoing packets to certain well-known ports (mail, Web, etc.) and instead require users to go through proxy servers. This could be annoying for Web traffic, but users would have little grounds to complain that, for example, they can only send mail through your mail server.
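A minimal sketch of that outbound port policy: well-known service ports are blocked unless the destination is one of the ISP's own proxy or mail hosts. The port set and server addresses are illustrative assumptions.

```python
# Ports forced through the ISP's servers (SMTP, HTTP, HTTPS as examples).
BLOCKED_PORTS = {25, 80, 443}

# Hypothetical addresses of the ISP's proxy and mail relay.
ISP_SERVERS = {"10.20.255.10", "10.20.255.11"}

def outbound_allowed(dst_ip, dst_port):
    """Direct connections to well-known ports must target the ISP's servers."""
    if dst_port in BLOCKED_PORTS:
        return dst_ip in ISP_SERVERS
    return True  # everything else passes unfiltered in this sketch
```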
You also need to protect the ISP's operations center from its own users. The obvious solution is a ring-structured system. On the inside are the untrusted dial-up users, with a firewall around them. In the next ring are the ISP's trusted machines, with a firewall around them. Finally, there is the outside world. The outer firewall can then be more permissive than the inner one.
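The ring structure can be summarized as a small zone-crossing policy. The zone names and the specific permit/filter choices below are illustrative assumptions; "filter" means traffic is subject to the stricter inner firewall's rules (anti-spoofing, port restrictions, and so on) rather than passed freely.

```python
# (source zone, destination zone) -> how the crossing is treated.
POLICY = {
    ("dialup", "ops"):     "filter",   # inner firewall: strict
    ("dialup", "outside"): "filter",
    ("ops", "dialup"):     "permit",   # trusted machines reach inward freely
    ("ops", "outside"):    "permit",   # outer firewall: more permissive
    ("outside", "ops"):    "filter",
    ("outside", "dialup"): "filter",
}

def crossing_policy(src_zone, dst_zone):
    """Look up how traffic between two rings is handled."""
    if src_zone == dst_zone:
        return "permit"  # no firewall between machines in the same ring
    return POLICY[(src_zone, dst_zone)]
```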
| | Application-Level | Packet-Filtering |
|---|---|---|
| Speed (Throughput) | Relatively slow (70 MBytes/sec would be high), because application-level firewalls are usually built from PCs or workstations with limited network bandwidth. Also, application-level firewalls spend more time processing any given byte, going all the way up and down the TCP/IP protocol stack. | Extremely fast (gigabits/sec are common), because packet-filtering firewalls are usually built on routers, where moving packets around is what they do. Also, many packets can be passed along without inspection. |
| Protection Against Low-Level Protocol Attacks | Very good. Raw packets do not pass through application-level firewalls. | Relatively weak. While some kinds of attacks can be special-cased, raw packets can travel unmolested through the firewall, creating opportunities for future attacks. |
| Protection Against Application-Level Attacks | Very good. Just as with raw IP packets, outsiders speak the application protocol only to the firewall. The firewall then speaks to internal machines and tries to conform closely to the protocol standards. | Very little protection. Packet-filters do not have enough information to recognize application-level attacks. |
| Resistance to Low-Level Protocol Attacks (aimed at the firewall) | Relatively weak. Application firewalls often run ordinary Unix operating systems and can be vulnerable to the same attacks as any other Unix machine. | Relatively good. Since packet-filters are little more than routers, strange packets aimed at them are simply forwarded or dropped rather than processed by a full protocol stack, leaving little to exploit. |
| Resistance to Application-Level Attacks (aimed at the firewall) | An interesting question. Hopefully, the application proxies have been carefully written to have no bugs. Of course, bugs can still happen. | Good. Since routers don't run much in the way of applications, there aren't many applications to attack! |
| Ease of Supporting New Applications | Weak. When Microsoft releases its latest new toy, somebody has to write a proxy for it. This could require reverse-engineering the Microsoft product, and if the traffic is encrypted, that gets much harder. | Good, but... A packet-filter can always let more packets through, but that isn't always safe if application-level vulnerabilities exist. |
| Traditional Applications that Won't Work out of the Box | Weak. If you block direct IP connectivity, traditional utilities like finger and talk simply break; they need to be extended to know about SOCKS or other kinds of proxy servers. | Good, but... As above, it's easy to support any application, but hard to deal with its vulnerabilities. |
If the spam filter were also installed on interior mail servers, it could additionally catch spam forwarded from one employee to another, perhaps sending back a polite note reminding them that spam (or chain letters, or whatever) is against company policy.
So, how should a message be classified as spam? Certain fake e-mail accounts could be created whose mail goes only to the censors. Likewise, a mechanism could be created for an employee to forward a questionable message to a spam censor. Having censors read each and every message would be unrealistic, because the volume of e-mail is simply too high.
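The honeypot-account idea can be sketched as follows: any message whose recipient list touches one of the fake accounts is flagged as spam (nobody legitimate mails those addresses), and everything else is left to employee reports and the censors. The addresses are made up for illustration.

```python
# Hypothetical decoy accounts that exist only to attract spam.
HONEYPOT_ADDRESSES = {"jdoe1872@example.com", "old.intern@example.com"}

def classify(message):
    """message: dict with a 'to' list of recipient addresses."""
    if any(rcpt in HONEYPOT_ADDRESSES for rcpt in message["to"]):
        return "spam"     # a bulk mailing hit a decoy account
    return "unknown"      # defer to employee reports / the censors
```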
The easiest way to address account sharing is to require authentication to include something you have, usually a smart card, SecurID card, iButton, or other physical device that is hard to copy. Your student ID with the magnetic stripe on the back doesn't usually count, because magnetic stripes are extremely easy to copy (as are credit cards).
To address unauthorized account usage, you need stronger authentication and you need protection against session-hijacking attacks. SSH, Kerberos, and other systems that use cryptography to protect the TCP/IP stream can guarantee that only the authenticated user is speaking on a connection, although they make no guarantees against denial-of-service attacks (e.g., TCP reset packets). Note: even though smart cards use cryptography in their inner workings, they do not encrypt the TCP/IP stream. A generic smart card might talk at about 1200 baud, which would be pretty slow for TCP/IP.
The other issue for Owlnet is the practical cost of distributing authentication tokens to every student on campus. At $30-$100 per token, the cost could add up quickly. Additionally, tokens like the iButton or smart cards require every computer to have a reader installed, usually attached to a serial port. The benefit of challenge-response systems or SecurID cards is that they work with unmodified hardware. That's a big win for students who want to log in from a cyber-cafe while on vacation.
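A challenge-response login of the kind that works from unmodified hardware can be sketched with an HMAC: the server sends a random nonce, and the user (or their token) answers with a keyed hash over it, so the shared secret never crosses the wire. This is only an illustrative scheme, not SecurID's actual algorithm, and the key handling is deliberately simplistic.

```python
import hashlib
import hmac
import secrets

def make_challenge():
    """Server side: a fresh random nonce for each login attempt."""
    return secrets.token_hex(16)

def respond(shared_key, challenge):
    """Client side: prove knowledge of the key without revealing it."""
    return hmac.new(shared_key, challenge.encode(), hashlib.sha256).hexdigest()

def verify(shared_key, challenge, response):
    """Server side: recompute the expected answer and compare in constant time."""
    expected = respond(shared_key, challenge)
    return hmac.compare_digest(expected, response)
```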
The last issue is how much damage can be done by a Trojan Horse computer. Maybe a student from one residential college wants to play a prank and hacks the computers of their rival. It could happen. Let's say Alice sits down at the Trojan computer, installed by Bob, and logs into her mail account on Owlnet. At this point, Bob is acting as a man-in-the-middle for the duration of Alice's session. Bob can observe all the traffic and can impersonate Alice to Owlnet with his own commands. Even if Alice closes the session, Bob might show Alice the appropriate logout screen and keep the session alive, doing further damage. However, once Alice leaves and takes her smart token with her, Bob cannot authenticate as Alice again. If Owlnet servers require the user to re-authenticate every eight hours, and Alice is long gone, Bob's session will then be limited in time, as well. If re-authentication is a continuous process, then when Alice removes her token, Bob will not be able to continue the session at all.
Once Bob's attack on Alice has been discovered, the damage must still be painstakingly undone. Still, by limiting the duration of Bob's session, hopefully the damage done by Bob can be limited as well.
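The time-limiting idea above can be sketched as a session object that expires unless re-authentication occurs: once Alice and her token are gone, Bob's hijacked session dies at the next deadline. The eight-hour figure comes from the text; the clock is passed in explicitly so the logic is easy to test.

```python
REAUTH_INTERVAL = 8 * 3600  # seconds; the eight-hour policy from the text

class Session:
    def __init__(self, start_time):
        self.last_auth = start_time

    def reauthenticate(self, now):
        """Requires the physical token; Bob can't do this once Alice leaves."""
        self.last_auth = now

    def is_alive(self, now):
        """The session survives only if re-auth happened within the interval."""
        return (now - self.last_auth) < REAUTH_INTERVAL
```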