(g_socket_receive_message | g_socket_send_message) performance
Submitted by Miguel París Díaz
Link to original bug (#752769)
Description
Created attachment 307978 Callgrind graph
Hello, I have found that most of the CPU time in g_socket_receive_message is spent on error management, which happens every time there are no more messages to read. Specifically, the main cost is building the error string, which may be useless to the caller, who often only inspects the error type.
I attach a callgrind graph where you can see the percentage of the CPU used by each function.
To fix this, I think one approach is:
- If the user is not interested in the error, no error handling should be done at all.
- If the user is interested in the error:
  - If they only care about the error type, the error string should not be built.
  - If they also want the error string, there are 2 alternatives:
    - The string is built internally, as it is now.
    - We provide a mechanism for the user to build the error string themselves (as recvmsg does, via errno).
REFS: man recvmsg(2): "If no messages are available at the socket, the receive calls wait for a message to arrive, unless the socket is nonblocking (see fcntl(2)), in which case the value -1 is returned and the external variable errno is set to EAGAIN or EWOULDBLOCK."
Source code: g_socket_receive_message: https://git.gnome.org/browse/glib/tree/gio/gsocket.c?id=2.42.2#n4123
recvmsg call: https://git.gnome.org/browse/glib/tree/gio/gsocket.c?id=2.42.2#n4235
g_set_error and socket_strerror calls: https://git.gnome.org/browse/glib/tree/gio/gsocket.c?id=2.42.2#n4257
g_strerror (from socket_strerror): https://git.gnome.org/browse/glib/tree/gio/gsocket.c?id=2.42.2#n219
Version: 2.42.x