COMMAND
lynx
SYSTEMS AFFECTED
lynx 2.8.x
PROBLEM
Michal Zalewski found the following. Since the 2.7 releases (?), lynx has
had mechanisms to avoid spoofed 'special URLs'. They are designed to
protect lusers from malicious internal pseudo-protocols like
LYNXDIRED://, LYNXDOWNLOAD://, LYNXPRINT:// etc., inserted by those
little, evil, hacking boys and girls into external html
documents. These protocols should be allowed only within internal
lynx pages, like 'Download Options', 'File Management Options' and
so on - these pages are rendered in /tmp (unless $LYNX_TEMP_SPACE
is set) and displayed just like normal html (not for the first
time, this solution brings several security problems, but it is
probably quite convenient).
Unfortunately, the mechanism offered to classify html as 'external'
(where special URLs are not allowed) or 'internal' (where special
URLs *are* allowed) is rather... funny. Take a look at this code
in LYMainLoop.c:
[...]
(!strncmp(links[curdoc.link].lname,
          "LYNXDOWNLOAD:", 13) &&
 strcmp((curdoc.title ? curdoc.title : ""),
        DOWNLOAD_OPTIONS_TITLE)) ||
(!strncmp(links[curdoc.link].lname,
          "LYNXHIST:", 9) &&
 strcmp((curdoc.title ? curdoc.title : ""),
        HISTORY_PAGE_TITLE) &&
[...]
Hmm?! Classification is done by... verifying the title of the web
page! Aghrr... Somewhat better checks are done for LYNXDIRED://
(the location of the displayed file is verified as well - secure as
long as $LYNX_SAVE_SPACE isn't set), but good luck. Fortunately,
most LYNX*:// requests require user interaction/confirmation, but
it's pretty dangerous anyway, as we can access internal mechanisms
not designed to be called from anywhere except internal pages (hmm,
what about overflows and missing security checks? at least
NULL-pointer SEGVs are possible).
Another issue is the LYNXOPTIONS:// protocol. In the sources, we
can read about 'paranoid security' when verifying a form submitted
to a LYNXOPTIONS:// location. This form contains the complete lynx
runtime setup, usually configured within the 'Lynx Options' page -
you can invoke it with the "O" key. The 'paranoid security' is done
by inserting a hidden value called "secure" into this form. The
value is calculated in a very-special-and-secure way - by calling
time(NULL) (*SIGH!*).
The attack is quite easy with local access (other lusers and root
are possible victims). All you have to do is create an evil webpage
(in our example, A.html). In it, ask the websurfer to check
something in his config (e.g. by putting in text like 'Please make
sure you have the TagSoup html parser set in your config (press
"O") before continuing'). The victim presses 'O', and a temporary
file of approx. 8-9 kB (the rendered html config interface) is
created in /tmp. Its name should be in the format
/tmp/Lxxxxx-yTMP.html, where xxxxx is the pid of the browser and y
is a unique, small integer (starting from 1) - for example,
/tmp/L1829-1TMP.html. All you have to do is read this file's
modification time (e.g. with stat(...)) to determine the 'secure'
value. If you can't see what we're talking about, stop now and
read the manpages for time(...) and stat(...).
Then, you have to create the next webpage, B.html (referenced with
'CLICK HERE WHEN DONE' from A.html), containing a form with hidden
fields holding your favourite configuration for the victim's
browser, and, of course, the 'secure' field. For the configuration
form fields, take a look at the /tmp file created by your own
browser. Another 'CLICK HERE' and the form will be submitted to
LYNXOPTIONS://, silently modifying the client's configuration.
Just a suggestion: change 'editor' to your favourite shell combo
('rm -rf /' for kiddies, 'cat /tmp/mykey >>~/.ssh/authorized_keys #'
for script adults), set 'Save options to disk', then put a mailto:
link in subsequent webpages - the editor will be spawned
automatically when new mail is edited within lynx.
As you can see, the direct implications of these missing security
checks aren't deadly, but just a little bit of inadvertence
combined with trivial psychological tricks might turn them into
something quite harmful.
Another detail on LYNXOPTIONS:// and passing evil configuration
options to the victim's browser - the attack scheme could be even
easier and can be done remotely. First of all, ask the user to
check his/her configuration, as described above (let's call this
webpage A.html). Then, supply a link to another webpage containing
the evil configuration form (B.html, see above for details). The
value of the "secure" field can be guessed easily - it's
incremented every second (huh, that's the way clocks work). The
victim's system time can be precisely estimated with the help of
its MTA subsystem, so you can synchronize your clock with a little
bit of shrewdness. Wait for the GET request on A.html from the
victim, and assume e.g. +4 seconds to read and understand the text
(and to press "O"; this time is a blind assumption, and some
real-life tests would probably help... but it will be constant for
maybe 95% of requests if the webpage is designed properly and the
user doesn't need too much time to understand what to do). Now, the
time difference (in seconds) between your and their system clock +
the time(0) return value at the time of the GET request + your
estimate (the 4 secs mentioned above) is the "secure" value.
Rebuild B.html by inserting the proper "secure" field. The form
fields should be hidden; some bogus text with a big, good-looking
'submit' button will help.
Now, the most interesting thing: by putting funny 'preferred
charset', 'preferred language' and 'user agent' fields into the
form (tried with >64 kB of 'A's, but it could probably be much
smaller), you'll cause a beautifully exploitable stack overflow
while viewing the next webpage after pressing the Big Button on
B.html. After submitting the configuration, the last webpage is
automatically reloaded - that's enough. No need to modify 'editor'
or anything else and wait.
Program received signal SIGSEGV, Segmentation fault.
0x4009ab97 in strcpy ()
(gdb) info stack
#0 0x4009ab97 in strcpy ()
#1 0x80b802b in _start ()
#2 0x41414141 in ?? ()
Cannot access memory at address 0x41414141.
Yes, it's much more social (reverse) engineering than hacking; all
of these steps have to be automated and you still don't have 100%
certainty - but those hacks where user reactions play a critical
role are the most interesting.
SOLUTION
Nothing yet.