large file throughput via gvfs-smb is over 2x slower than smbclient or fstab mount.cifs
Submitted by Oliver Schonrock
Assigned to gvf..@..e.bugs
Similar issue to this:
But at higher speeds: full gigabit network => expect ~100MB/s throughput
Server: FreeBSD 10.3, Samba 4.3, fast ZFS pool capable of > 150MB/s reads/writes
Client: Ubuntu 16.04 LTS, fast RAID-0 disk array capable of > 150MB/s reads/writes
Network: Client => Switch => Server (all 1Gbps rated with Intel NICs etc)
testfile: 619.98 MB mp4 file => incompressible garbage
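For reference, all timings below were taken with plain `cp`/`time` by hand; a sketch of the methodology as a reusable helper (the `bench` function name and output format are mine, not a tool from this report):

```shell
# bench SRC DST -- copy SRC to DST and report elapsed time and MB/s.
# Hypothetical helper; it just packages the "time cp + divide" method
# used for every measurement in this report.
bench() {
    src=$1 dst=$2
    size_mb=$(du -m "$src" | cut -f1)       # file size in MB
    start=$(date +%s.%N)
    cp "$src" "$dst"
    end=$(date +%s.%N)
    awk -v s="$start" -v e="$end" -v mb="$size_mb" \
        'BEGIN { printf "%.1f MB in %.2f s => %.1f MB/s\n", mb, e - s, mb / (e - s) }'
}
# e.g. bench /mnt/share/testfile ./testfile
```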
WORKING CONTROL CASE #1: mount.cifs via fstab
//server/share /mnt/share cifs uid=oliver,username=...,password=...,iocharset=utf8,sec=ntlm 0 0
mount -a
cp /mnt/share/testfile .
(620MB file in 6.2s => 100MB/s, totally consistent over multiple runs)
WORKING CONTROL CASE #2: smbclient
smbclient //server/share pw.. -U ... -c 'get testfile'
(620MB file in 5.6s => 110.7MB/s, totally consistent over multiple runs)
SLOW GVFS CASE:
create gvfs mount using nautilus
cp /var/run/user/1000/gvfs/smb-share:server=server,share=share/testfile .
(620MB file in 14.0s => 44.2MB/s, quite consistent over multiple runs, 0.5s variation)
(I also tested write (i.e. upload to the samba server) performance. Results for the working and slow gvfs cases are very similar to read/download.)
I understand that the gvfs-smb backend uses libsmbclient and that smbget also uses that lib. So let's try smbget:
smbget -u ... -p ... 'smb://server/share/testfile'
(620MB file in 13.5s => 46MB/s .. SURPRISINGLY SIMILAR TO GVFS)
from smbget man page:
-b, --blocksize Number of bytes to download in a block. Defaults to 64000.
try a bigger blocksize (multiple trials show performance tops out at a 16MB block size):
smbget --blocksize 16777216 -u ... -p ... 'smb://server/share/testfile'
(620MB file in 6.2s => 100MB/s .. performance now roughly equivalent to the control cases)
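The sweep that found the 16MB plateau looked roughly like this; the loop below only prints the smbget invocations rather than executing them, since they need the server above and the credentials are elided as in the rest of this report:

```shell
# Print one smbget run per power-of-two blocksize, from the 64 kB
# default region up to 16 MB (9 invocations in total).
bs=65536
while [ "$bs" -le 16777216 ]; do
    echo "smbget --blocksize $bs -u ... -p ... 'smb://server/share/testfile'"
    bs=$((bs * 2))
done
```

Timing each of these (and deleting testfile between runs) shows throughput climbing with blocksize and flattening out at 16MB.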
There was this commit in 2013, which added the "-o big_writes" option
confirm we are using that:
$ ps waux | grep fuse
.. /usr/lib/gvfs/gvfsd-fuse /run/user/1000/gvfs -f -o big_writes
yes, we are.
The above commit hard-codes "conn->max_write" to 64kB when -o big_writes is passed.
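Back-of-envelope, assuming one request round trip per block (my assumption, not verified in the gvfs code): the 64kB cap versus a 16MB blocksize means a very different request count for the 620MB test file.

```shell
# Requests needed to move the 620 MB test file at each block size,
# under the one-request-per-block assumption.
awk 'BEGIN {
    file = 620 * 1024 * 1024                               # bytes
    printf "64 kB blocks: %d requests\n", file / 65536     # ~9900
    printf "16 MB blocks: %d requests\n", file / 16777216  # ~40
}'
```

~9900 round trips versus ~40 would plausibly explain the 2x gap if per-request overhead dominates.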
I DO NOT KNOW whether conn->max_write is related to "smbget --blocksize" via the underlying libsmbclient.
If they are related, then making conn->max_write even bigger (16MB?), or making it end-user tunable via a command-line option to gvfsd-fuse, might resolve the issue.
Please consider. I am available for testing/compiling/feedback...