g_get_current_dir SIGSEGV on long path
Submitted by Robby Griffin
Link to original bug (#447935)
Description
I'm using the SUNWGlib package (VERSION = 11.10.0,REV=2005.01.08.05.16), but I believe this bug would still be present in SVN trunk glib.
In gnome-terminal on Solaris 10, doing the following:
gnome-terminal --command tcsh
cd /tmp
@ i=0
while ($i < 101)
mkdir ten_chars
cd ten_chars
@ i=($i + 1)
end
mkdir ten_chars
cd ten_chars/
soon results in a gnome-terminal crash during g_get_current_dir(). I'm not absolutely certain this will always reproduce the crash; it seems to depend on some property of the shell as well.
That said, Solaris getcwd() will set errno=ERANGE and return NULL whenever the working directory is equal to or longer than PATH_MAX (1024), regardless of the available buffer size. This means the loop in gutils.c:
while (max_len < G_MAXULONG / 2)
  {
    buffer = g_new (gchar, max_len + 1);
    *buffer = 0;
    dir = getcwd (buffer, max_len);
    if (dir || errno != ERANGE)
      break;
    g_free (buffer);
    max_len *= 2;
  }
goes all the way up to max_len >= G_MAXULONG / 2 without succeeding and terminates without break. In a debugger I observed that max_len == 0x80000000 at the time of the crash. When the loop terminates without break, buffer is either NULL (if the previous g_new() failed) or points to a freed 1GB buffer. dir is definitely NULL.
This means the next bit of code writes to unallocated memory:
if (!dir || !*buffer)
  {
    /* hm, should we g_error() out here?
     * this can happen if e.g. "./" has mode \0000
     */
    buffer[0] = G_DIR_SEPARATOR;
    buffer[1] = 0;
  }
As a quick fix, I would suggest checking (!buffer) as well, and maybe allocating space for two characters if buffer happens to be NULL.
But ultimately, it's not useful to keep running the loop above, allocating ever-larger buffers that getcwd() will never use, so a compile-time feature test for this getcwd() behavior is probably called for.
Version: 2.6.x