Commit a1e85eda authored by Ondrej Holy

gdbus: Add workaround for deadlocks when cancelling jobs

GVfs calls gvfs_dbus_daemon_proxy_new() in a "cancelled" signal handler,
which internally needs CONNECTION_LOCK(connection). Unfortunately, that
lock can be held by the gdbus worker thread, which may itself be calling
g_cancellable_disconnect(); since g_cancellable_disconnect() waits for a
running handler to finish, the two threads deadlock. I don't see any
reason why g_cancellable_disconnect() has to block on
gvfs_dbus_daemon_proxy_new(), or rather gvfs_dbus_daemon_call_cancel().
Let's do that work from an idle source instead, so that the "cancelled"
signal handler doesn't block and the deadlocks are avoided.

It would be better to fix this issue directly in the gdbus code;
however, it is not fully clear to me what the proper fix would be.

glib#1023
parent f581a0ae
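
For context, here is the general shape of the workaround as a self-contained sketch: the "cancelled" handler only snapshots what it needs and schedules an idle source, so g_cancellable_disconnect() never waits on gdbus locks. All names here (CancelIdleData, on_cancelled, on_cancel_idle, cancel_idle_data_free) are hypothetical stand-ins, not the gvfs ones used in the patch below:

#include <gio/gio.h>

/* Hypothetical per-call data; the real patch uses AsyncCallCancelData. */
typedef struct {
  GDBusConnection *connection;
  guint32 serial;
} CancelIdleData;

static void
cancel_idle_data_free (gpointer _data)
{
  CancelIdleData *data = _data;

  g_object_unref (data->connection);
  g_free (data);
}

/* Runs later in the main context, outside the "cancelled" emission,
 * so taking the connection lock indirectly (e.g. by creating a proxy)
 * can no longer deadlock against g_cancellable_disconnect(). */
static gboolean
on_cancel_idle (gpointer _data)
{
  CancelIdleData *data = _data;

  /* ... issue the D-Bus cancel call for data->serial here ... */

  return G_SOURCE_REMOVE;
}

/* May run on the gdbus worker thread while it holds the connection
 * lock; must therefore return quickly and take no gdbus locks. */
static void
on_cancelled (GCancellable *cancellable,
              gpointer      user_data)
{
  GDBusConnection *connection = user_data;
  CancelIdleData *data;

  data = g_new0 (CancelIdleData, 1);
  data->connection = g_object_ref (connection);
  data->serial = 42;  /* hypothetical: the serial of the call to cancel */

  /* Hand the real work to an idle source; the GDestroyNotify frees
   * the data whether or not the callback ever runs. */
  g_idle_add_full (G_PRIORITY_DEFAULT_IDLE,
                   on_cancel_idle,
                   data,
                   cancel_idle_data_free);
}

The key point is that the handler itself never touches the connection beyond taking a reference, which is lock-free.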
@@ -365,10 +365,8 @@ cancelled_got_proxy (GObject *source_object,
   g_object_unref (proxy);
 }
 
-/* Might be called on another thread */
-static void
-async_call_cancelled_cb (GCancellable *cancellable,
-                         gpointer _data)
+static gboolean
+async_call_cancelled_cb_on_idle (gpointer _data)
 {
   AsyncCallCancelData *data = _data;
@@ -380,6 +378,29 @@ async_call_cancelled_cb (GCancellable *cancellable,
                            NULL,
                            cancelled_got_proxy,
                            GUINT_TO_POINTER (data->serial)); /* not passing "data" in as long it may not exist anymore between async calls */
+
+  return FALSE;
+}
+
+/* Might be called on another thread */
+static void
+async_call_cancelled_cb (GCancellable *cancellable,
+                         gpointer _data)
+{
+  AsyncCallCancelData *data = _data;
+  AsyncCallCancelData *idle_data;
+
+  idle_data = g_new0 (AsyncCallCancelData, 1);
+  idle_data->connection = g_object_ref (data->connection);
+  idle_data->serial = data->serial;
+
+  /* Call on idle to not block g_cancellable_disconnect() as it causes deadlocks
+   * in gdbus codes, see: https://gitlab.gnome.org/GNOME/glib/issues/1023.
+   */
+  g_idle_add_full (G_PRIORITY_DEFAULT_IDLE,
+                   async_call_cancelled_cb_on_idle,
+                   idle_data,
+                   async_call_cancel_data_free);
+}
 
 gulong
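
Note that ownership of idle_data passes to the idle source: the existing async_call_cancel_data_free() is installed as the GDestroyNotify, so the connection reference is dropped whether or not the idle callback ever runs. Its body is outside this hunk; given the fields set above, it presumably amounts to something like:

static void
async_call_cancel_data_free (gpointer _data)
{
  AsyncCallCancelData *data = _data;

  /* Release the reference taken in async_call_cancelled_cb(). */
  g_object_unref (data->connection);
  g_free (data);
}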