Sorry for the poor answer.
My OS is Ubuntu 20.04.1 Desktop with KDE. I was using Gnome until a recent switch.
The issue only occurs when launching meld from Python using `subprocess.Popen(cmd, shell=False)`; `os.system(<cmd_str>)` works fine.
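For reference, a minimal sketch of the two launch paths (using `echo` as a stand-in command so it's self-contained; in my case `cmd` is the meld invocation that zfsvc builds):

```python
import os
import subprocess

# Stand-in command so the sketch runs anywhere; the real cmd is a meld invocation.
cmd = ["echo", "meld", "left", "right"]

# The form that triggers the hang for me: argument list, no shell.
proc = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE)
out, _ = proc.communicate()

# The form that works fine for me: a command string run through a shell.
status = os.system("echo meld left right > /dev/null")
```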
I did not know that you can have two `--label` parameters. That's awesome!
I removed the package that provides `trash-empty` (`trash-cli`), and the issue persisted.
I followed https://wiki.gnome.org/Projects/gvfs/debugging and no issues show up in the output. The `gio monitor trash:///` output shows the detection of files under the snapshots sub-directory. E.g.:
~snip~
trash:///: trash:///%5Cdpool%5Cvcmain%5C.zfs%5Csnapshot%5Cautosnap_2022-04-18_00:00:00_daily%5C.Trash-1000%5Cfiles%5C--exclude=*.tar: created
trash:///: trash:///%5Cdpool%5Cvcmain%5C.zfs%5Csnapshot%5Cautosnap_2022-04-18_00:00:00_daily%5C.Trash-1000%5Cfiles%5C.2.vscode: created
trash:///: trash:///%5Cdpool%5Cvcmain%5C.zfs%5Csnapshot%5Cautosnap_2022-04-18_00:00:00_daily%5C.Trash-1000%5Cfiles%5Cbdist.linux-x86_64: created
~snip~
I can reproduce this scanning of the `.Trash-<uid>` folder by simply changing directory to a snapshot folder in the terminal: `/dpool/other/.zfs/snapshot/autosnap_2022-05-13_00:00:01_daily`. Once I do this, `gio monitor trash:///` outputs something similar to the snip above.
I uninstalled `trash-cli`, the package that provides `trash-empty`, and the issue persisted. When I had Nautilus open, I could see the contents of the `.zfs/snapshot/<snapshot_id>/.Trash-<uid>/files` folders. The version of gio on Ubuntu 20.04.1 LTS is 2.64.6 and it does not have `--list`.
I ran `gio trash --empty` and it's been at 100% CPU for a very long time. Nautilus is also open on the Trash and it's at 100% CPU as well. I'll try to let it run through...
Note: To mitigate this issue, I made some changes on my local system to automatically clean up the trash on the frequently snapshotted volumes using autotrash. This has reduced the problem significantly, as expected, since there are far fewer files to enumerate. I will let the trash build up again and reproduce this after a few days of snapshots to get the logs for you.
Regarding mounting of the `.zfs/snapshot` directory, here is the response from Richard Elling on zfsonlinux:

> The .zfs/snapshot directory isn't mounted; it is not a separate filesystem.
> If snapdir=hidden, then the VFS interface to ZFS (function zfs_readdir() in the ZFS source) will
> elide the .zfs/snapshot directory from readdir() system calls. The directory still exists and you can
> `cd` into it, `ls .zfs/snapshot`, etc., but it isn't in the parent's readdir() results.
I posted a question to the ZFSOnLinux forum regarding the mechanics of the `.zfs` mount. I suspect it's handled at the VFS layer.
I don't see any mounts but here is the mount output for your reference:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,noexec,relatime,size=49385912k,nr_inodes=12346478,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=9888720k,mode=755,inode64)
rpool/ROOT/ubuntu_2nth3o on / type zfs (rw,relatime,xattr,posixacl)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755,inode64)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/misc type cgroup (rw,nosuid,nodev,noexec,relatime,misc)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=42137)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
rpool/USERDATA/jbloggs_stvoqb on /home/jbloggs type zfs (rw,relatime,xattr,posixacl)
rpool/USERDATA/root_stvoqb on /root type zfs (rw,relatime,xattr,posixacl)
rpool/ROOT/ubuntu_2nth3o/srv on /srv type zfs (rw,relatime,xattr,posixacl)
rpool/ROOT/ubuntu_2nth3o/usr/local on /usr/local type zfs (rw,relatime,xattr,posixacl)
rpool/ROOT/ubuntu_2nth3o/var/games on /var/games type zfs (rw,relatime,xattr,posixacl)
rpool/ROOT/ubuntu_2nth3o/var/lib on /var/lib type zfs (rw,relatime,xattr,posixacl)
rpool/ROOT/ubuntu_2nth3o/var/log on /var/log type zfs (rw,relatime,xattr,posixacl)
rpool/ROOT/ubuntu_2nth3o/var/mail on /var/mail type zfs (rw,relatime,xattr,posixacl)
rpool/ROOT/ubuntu_2nth3o/var/snap on /var/snap type zfs (rw,relatime,xattr,posixacl)
rpool/ROOT/ubuntu_2nth3o/var/www on /var/www type zfs (rw,relatime,xattr,posixacl)
rpool/ROOT/ubuntu_2nth3o/var/spool on /var/spool type zfs (rw,relatime,xattr,posixacl)
rpool/ROOT/ubuntu_2nth3o/var/lib/AccountsService on /var/lib/AccountsService type zfs (rw,relatime,xattr,posixacl)
rpool/ROOT/ubuntu_2nth3o/var/lib/NetworkManager on /var/lib/NetworkManager type zfs (rw,relatime,xattr,posixacl)
rpool/ROOT/ubuntu_2nth3o/var/lib/apt on /var/lib/apt type zfs (rw,relatime,xattr,posixacl)
rpool/ROOT/ubuntu_2nth3o/var/lib/dpkg on /var/lib/dpkg type zfs (rw,relatime,xattr,posixacl)
/var/lib/snapd/snaps/gnome-3-28-1804_145.snap on /snap/gnome-3-28-1804/145 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/gtk-common-themes_1515.snap on /snap/gtk-common-themes/1515 type squashfs (ro,nodev,relatime,x-gdu.hide)
bpool/BOOT/ubuntu_2nth3o on /boot type zfs (rw,nodev,relatime,xattr,posixacl)
/dev/sdd1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
/dev/sdd1 on /boot/grub type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
dpool on /dpool type zfs (rw,xattr,noacl)
/var/lib/snapd/snaps/bare_5.snap on /snap/bare/5 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/core_12834.snap on /snap/core/12834 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/core18_2344.snap on /snap/core18/2344 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/core18_2284.snap on /snap/core18/2284 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/core_12941.snap on /snap/core/12941 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/core20_1434.snap on /snap/core20/1434 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/p7zip-desktop_220.snap on /snap/p7zip-desktop/220 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/whatsdesk_28.snap on /snap/whatsdesk/28 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/gtk-common-themes_1519.snap on /snap/gtk-common-themes/1519 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/gtk2-common-themes_13.snap on /snap/gtk2-common-themes/13 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/gnome-3-38-2004_99.snap on /snap/gnome-3-38-2004/99 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/gnome-3-28-1804_161.snap on /snap/gnome-3-28-1804/161 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/gnome-3-38-2004_87.snap on /snap/gnome-3-38-2004/87 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/core20_1405.snap on /snap/core20/1405 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/gnome-3-34-1804_72.snap on /snap/gnome-3-34-1804/72 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/snap-store_558.snap on /snap/snap-store/558 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/gnome-3-34-1804_77.snap on /snap/gnome-3-34-1804/77 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/snapd_15177.snap on /snap/snapd/15177 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/whatsdesk_25.snap on /snap/whatsdesk/25 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/snap-store_547.snap on /snap/snap-store/547 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/snapd_15534.snap on /snap/snapd/15534 type squashfs (ro,nodev,relatime,x-gdu.hide)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
dpool/vcmain on /dpool/vcmain type zfs (rw,xattr,noacl)
dpool/vccorp on /dpool/vccorp type zfs (rw,xattr,noacl)
dpool/other on /dpool/other type zfs (rw,xattr,noacl)
dpool/devz on /dpool/devz type zfs (rw,xattr,noacl)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=9888716k,mode=700,uid=1000,gid=1000,inode64)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
/dev/fuse on /run/user/1000/doc type fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
tmpfs on /run/snapd/ns type tmpfs (rw,nosuid,nodev,noexec,relatime,size=9888720k,mode=755,inode64)
Actually, one other hint for the gvfs trash scan logic: the `.Trash-<uid>` folder is not at the root of the mounted filesystem but rather in a sub-directory. Maybe in such cases it should be explicitly ignored, or a mount flag could be exposed to ignore trash folders in sub-directories, something like `x-gvfs-notrashinsub`.
It appears that the `.zfs/snapshot` folder is not separately mounted but rather exposed implicitly by zfs itself:
$ mount | grep dpool/vcmain
dpool/vcsjp on /dpool/vcsjp type zfs (rw,xattr,noacl)
I can't find any info on `x-gvfs-notrash`. Is that documented anywhere? Is `x-gvfs-notrash` a boolean flag, or can I specify a relative path? If I could specify `.zfs/snapshot`, that would do the trick.
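If `x-gvfs-notrash` behaves like the other `x-gvfs-*` hints (which gvfs reads from a mount's options), it would be a per-mount boolean rather than a path. A hypothetical, untested sketch for a dataset switched to legacy mounting so it can carry fstab options:

```
# /etc/fstab -- hypothetical; requires `zfs set mountpoint=legacy dpool/vcmain` first
dpool/vcmain  /dpool/vcmain  zfs  defaults,x-gvfs-notrash  0  0
```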
I believe the only way to distinguish the snapshot directory is that its name is `<mount_root>/.zfs/snapshot`, it's allocated 0 blocks with size 0, and it is hidden (it will not show up in an `ls` of the mount root).
$ stat .zfs
File: .zfs
Size: 0 Blocks: 0 IO Block: 512 directory
Device: 4ch/76d Inode: 281474976710655 Links: 1
Access: (0777/drwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2022-05-03 15:30:58.779300553 -0400
Modify: 2022-05-02 11:12:52.056822801 -0400
Change: 2022-05-02 11:12:52.056822801 -0400
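A minimal Python sketch of that detection heuristic. Note the `st_size`/`st_blocks` check is an assumption from the stat output above and may match directories on other filesystems too (e.g. tmpfs), so it's a hint, not a guarantee:

```python
import os

def looks_like_zfs_ctldir(path):
    """Heuristic from the stat output above: the ZFS control directory is
    named .zfs and reports size 0 with 0 allocated blocks."""
    if os.path.basename(path) != ".zfs":
        return False
    try:
        st = os.stat(path)
    except OSError:
        return False
    return st.st_size == 0 and st.st_blocks == 0

def walk_skipping_snapshots(root):
    """os.walk that prunes any directory matching the heuristic, so the
    snapshot trees (and their .Trash-<uid> folders) are never entered."""
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames
                       if not looks_like_zfs_ctldir(os.path.join(dirpath, d))]
        yield dirpath, filenames
```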
[UPDATE] I was able to narrow this issue down to `gvfs-trash` and read-only zfs snapshot directories. I raised a ticket in the gvfs project: #622
This issue does not appear to impact snapshots of users' home directories (`/home/<user>/.zfs/snapshot/`) but instead impacts other datasets created and used by the user. These user datasets will contain a `.Trash-1000` directory in their root.
I've attached a video of this issue happening live. As you can see from the clock, Gnome completely hangs at the 44-second mark. I limited the number of file checks in the test to 100, but if I let it go to the full 224, my system will be hung for a very long time, often requiring a reboot to be usable again.
To summarize the issue: I wrote a Python toolkit called zfsvc which leverages zfs snapshot and diff capabilities to do SVC-style diff / history reports without the need for an SVC.
It appears that when my program scans the snapshot folders (`<dataset_root>/.zfs/snapshot`), the Gnome VFS detects the `.Trash-1000` folders contained therein and starts enumerating them. After a fresh reboot my system was running great, but as soon as I ran `zfsvc meld -D 12h -p .`, my system hung. I hit Ctrl-Alt-F3 and ran top; `gvfs-trash` was pegging my CPU, and shortly after `gnome-shell` was doing the same. Now when I run `trash-empty`, I see the error relating to Trash folders under `.zfs/snapshot`.
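For reference, a stripped-down sketch of the kind of walk zfsvc does over a snapshot tree (the dataset path is illustrative, not zfsvc's actual code); on my system, merely traversing these paths appears to be enough to wake the gvfs trash monitor:

```python
import os

def scan_snapshots(snapdir):
    """Walk every snapshot under <dataset_root>/.zfs/snapshot and count
    the files seen; this traversal alone seems to trigger gvfs."""
    count = 0
    if not os.path.isdir(snapdir):
        return count
    for snap in sorted(os.listdir(snapdir)):
        for _dirpath, _dirnames, filenames in os.walk(os.path.join(snapdir, snap)):
            count += len(filenames)
    return count

# Illustrative dataset root; zfsvc derives the real one from the -p argument.
snapdir = os.path.join("/dpool/vcmain", ".zfs", "snapshot")
```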
After letting my Gnome churn as I wrote this post, the `.zfs/snapshot` folders are no longer showing in the `trash-empty` output. But when I run `zfsvc meld -D 12H -p`, my system hangs again and `gvfs-trash` is thrashing my CPU.
I'll start trying to simplify the test...
When doing scans of snapshot directories, Gnome will start enumerating `.Trash-<uid>` directories within the snapshot directories, which can cause Gnome to hang. See the follow-up posts for more details.
FYI - Speed was much snappier after reboot. Any suggestions to debug the issue next time it occurs?
Meld has been getting very slow to load on my development desktop. Testing just now, it takes 28 seconds to load the UI with no parameters.
$ time meld
real 0m28.195s
user 0m1.381s
sys 0m0.175s
My environment:
There is no CPU or Disk churn happening during the wait and there is no network traffic triggered.
This has been an ongoing issue. I will do a reboot to see if that helps...
I just hit this issue today.
For those who hit this issue: it is likely resolved by rebooting the machine. Unless you feel like digging deeper by debugging, just reboot and you should be good to go.
Same issue here and same resolution: a reboot and the issue went away.