
Reduce overload on filesystem events

Rastersoft requested to merge reduce_overload_on_filesystem_events into master

When a large number of files are added to, or removed from, the desktop or the trash, the system generates a correspondingly large number of filesystem events. With the current code, each event received cancels the in-progress operation of reading the disk info and launches a new one. This imposes a significant overhead on the system, to the point of completely hanging the whole desktop while the operation is running when the events happen in the desktop folder.

As an example, run this script inside the desktop folder: `while true ; do touch testfile ; rm -f testfile ; done`. The desktop will freeze and become unusable. It won't even be possible to stop the script itself from the graphical environment; you have to switch to a virtual text terminal and kill the process from there.

This patch changes the way events on the desktop and the trash are handled. The first event still launches the operation, as before. But if an event arrives while an operation is in progress, it only sets a flag indicating that the current operation is stale and must be repeated. When the current operation ends, if the flag is not set, the code knows that nothing changed during the operation and can safely return; if the flag is set, the result of the operation is invalid, so the code waits 200 milliseconds (to avoid overloading the system) and retries the operation. This way, the amount of code executed for each event received after the first one is greatly reduced.
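Below is a minimal sketch of this coalescing pattern. It is not the extension's actual code; the names (`onFileEvent`, `refreshDesktop`, `readDesktopFolder`) and the use of TypeScript are illustrative assumptions, chosen only to show the flag-plus-retry logic described above.

```typescript
// Illustrative sketch of the flag-based event coalescing described above.
// Assumed names: onFileEvent, refreshDesktop, readDesktopFolder.

let refreshRunning = false;  // an operation is currently in progress
let refreshPending = false;  // an event arrived while the operation was running

const RETRY_DELAY_MS = 200;

const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

// Called for every filesystem event on the desktop or the trash.
function onFileEvent(): void {
    if (refreshRunning) {
        // Cheap path: just mark the running operation as stale.
        refreshPending = true;
        return;
    }
    // First event: launch the operation.
    void refreshDesktop();
}

async function refreshDesktop(): Promise<void> {
    refreshRunning = true;
    do {
        refreshPending = false;
        await readDesktopFolder();  // the expensive disk-info operation
        if (refreshPending) {
            // Something changed while we were reading: wait a bit to avoid
            // overloading the system, then repeat the whole operation.
            await sleep(RETRY_DELAY_MS);
        }
    } while (refreshPending);
    refreshRunning = false;
}

// Placeholder for the actual directory enumeration and metadata reads.
async function readDesktopFolder(): Promise<void> {
    /* enumerate files, read their info, update the view ... */
}
```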

Fix #120 (closed)

