g_socket_set_timeout(): behaviour is unclear, and the timeout is not reset after starting a new operation
From https://discourse.gnome.org/t/g-socket-set-timeout-clarification/16615
It is not clear whether the "timeout" argument to g_socket_set_timeout() is a request timeout or an idle timeout.
Request timeout:
E.g. an HTTP client sends a request (at time T1) to a web server. With g_socket_set_timeout (client_socket, 10), if the client does not receive a response before T1 + 10 seconds, the client GSocket is marked as timed out.
Idle timeout:
E.g. an HTTP client sends a request (at time T1) to a web server and receives a response (at time T2). With g_socket_set_timeout (client_socket, 10), the client GSocket is marked as timed out at T2 + 10 seconds.
I think the following happens in the code, considering an example HTTP client:
[1] The client creates a GSocket connection with a timeout of 10 seconds.
[2] The client creates a GSocketInputStream and attaches a GSource (G_IO_IN) to the GSocket.
[3] The client sends an HTTP request (R1) to the web server.
[4] The GSocket receives a G_IO_IN read event and GSocket->GSource->dispatch() is called.
[5] GSource->dispatch() invokes the HTTP client's callback fn to read the data (refer to the code below).
[6] GSource->dispatch() updates the timeout for the GSource after [5] completes (refer to the code below).
I think [6] is the issue here.
Here is the relevant code in gsocket.c:
if (timeout >= 0 && timeout < g_source_get_time (source) &&
    !g_socket_is_closed (socket_source->socket))
  {
    socket->priv->timed_out = TRUE;
    events |= (G_IO_IN | G_IO_OUT);
  }

/* [5]: the user callback reads the data */
ret = (*func) (socket, events & socket_source->condition, user_data);

/* [6]: the ready time is unconditionally re-armed to now + timeout */
if (socket->priv->timeout && !g_socket_is_closed (socket_source->socket))
  g_source_set_ready_time (source, g_get_monotonic_time () + socket->priv->timeout * 1000000);
else
  g_source_set_ready_time (source, -1);
Here, suppose we have received the full HTTP response in [5]; the timer is still pushed to current + 10 seconds. So if we want to use the same GSocket to send a request (R2) after current + 10 seconds, the send will fail, because the GSource->dispatch() that was triggered at current + 10 seconds has already marked the GSocket as timed out (as shown in the code above).
So the following function will return FALSE for HTTP request R2 on the GSocket:
static gboolean
check_timeout (GSocket  *socket,
               GError  **error)
{
  if (socket->priv->timed_out)
    {
      socket->priv->timed_out = FALSE;
      g_set_error_literal (error, G_IO_ERROR, G_IO_ERROR_TIMED_OUT,
                           _("Socket I/O timed out"));
      return FALSE;
    }

  return TRUE;
}