GNOME / gvfs · Issues · #292 (Closed)

Created Dec 21, 2016 by Bugzilla migration (Reporter)

large file throughput via gvfs-smb is over 2x slower than smbclient or fstab mount.cifs

Submitted by Oliver Schonrock

Assigned to gvf..@..e.bugs

Link to original bug (#776339)

Description

Similar issue to this:

https://bugzilla.gnome.org/show_bug.cgi?id=688079

But at higher speeds. Full gigabit network => expect ~100MB/s throughput

Server: FreeBSD 10.3, Samba 4.3, fast ZFS pool capable of > 150 MB/s reads/writes
Client: Ubuntu 16.04 LTS, fast RAID-0 disk array capable of > 150 MB/s reads/writes
Network: Client => Switch => Server (all 1 Gbps rated, with Intel NICs etc.)
Test file: 619.98 MB mp4 file => incompressible garbage

WORKING CONTROL CASE #1

fstab:

    //server/share /mnt/share cifs uid=oliver,username=...,password=...,iocharset=utf8,sec=ntlm 0 0

    mount -a
    cp /mnt/share/testfile .

(620 MB file in 6.2 s => 100 MB/s; totally consistent over multiple runs)

WORKING CONTROL CASE #2

    smbclient //server/share pw.. -U ... -c 'get testfile'

(620 MB file in 5.6 s => 110.7 MB/s; totally consistent over multiple runs)

SLOW GVFS CASE:

Create a gvfs mount using Nautilus, then:

    cp /var/run/user/1000/gvfs/smb-share:server=server,share=share/testfile .

(620 MB file in 14.0 s => 44.2 MB/s; quite consistent over multiple runs, 0.5 s variation)

(I also tested write performance, i.e. upload to the Samba server. Results for the working and slow gvfs cases are very similar to the read/download results.)
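As a sanity check on the figures above, the quoted throughputs follow directly from the 619.98 MB file size and the measured wall times (plain arithmetic, nothing gvfs-specific):

```python
# Recompute the throughput figures quoted above from file size and wall time.
size_mb = 619.98  # test file size in MB

for label, seconds in [("mount.cifs", 6.2), ("smbclient", 5.6), ("gvfs", 14.0)]:
    print(f"{label:10s} {size_mb / seconds:6.1f} MB/s")
```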

THEORY: BLOCKSIZE

I understand that the gvfs-smb backend uses libsmbclient, and that smbget also uses that library. So let's try smbget:

    smbget -u ... -p ... 'smb://server/share/testfile'

(620 MB file in 13.5 s => 46 MB/s .. SURPRISINGLY SIMILAR TO GVFS)

From the smbget man page:

   -b, --blocksize
       Number of bytes to download in a block. Defaults to 64000.

Try a bigger block size (multiple trials show performance tops out at a 16 MB block size):

    smbget --blocksize 16777216 -u ... -p ... 'smb://server/share/testfile'

(620 MB file in 6.2 s => 100 MB/s .. performance now roughly equivalent to the control cases)
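One way to make sense of the blocksize effect: if every block request pays a fixed stall t on top of its wire time, effective throughput is bs / (bs/R + t). The numbers R and t below are assumptions, not measurements: R ≈ 110 MB/s is taken from the smbclient control case, and t ≈ 0.8 ms is back-fitted so that 64 kB blocks give the observed ~46 MB/s. Under those assumptions the model also reproduces the 16 MB result:

```python
# Simple latency model: each block pays a fixed per-request overhead
# on top of the raw wire time. R and t are assumptions fitted to the
# smbget numbers reported above, not measured quantities.
R = 110e6    # assumed wire rate, bytes/s (from the smbclient control case)
t = 0.8e-3   # assumed fixed overhead per block request, seconds

def throughput(blocksize):
    """Effective rate when each block of `blocksize` bytes costs
    blocksize/R seconds on the wire plus t seconds of overhead."""
    return blocksize / (blocksize / R + t)

for bs in (64_000, 1_048_576, 16_777_216):
    print(f"{bs:>10d} B blocks -> {throughput(bs) / 1e6:5.1f} MB/s")
```

With a 64 kB block the overhead dominates (~46 MB/s); by 16 MB the overhead is amortized away and the model sits just under the wire rate, which matches why larger blocksizes stopped helping past 16 MB.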

There was this commit in 2013, which added the "-o big_writes" option

https://git.gnome.org/browse/gvfs/commit/?id=8835238

Confirm we are using that:

    $ ps waux | grep fuse
    .. /usr/lib/gvfs/gvfsd-fuse /run/user/1000/gvfs -f -o big_writes

Yes, we are.

The above commit hard-codes "conn->max_write" to 64 kB when -o big_writes is passed.

I do not know whether conn->max_write is related to "smbget --blocksize" via the underlying libsmbclient.

If they are related, then making conn->max_write even bigger (16 MB?), or making it end-user tunable via a command-line option to gvfsd-fuse, might resolve the issue.
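Continuing with the same assumed per-request-overhead model (illustrative numbers, not measurements): to reach a fraction f of the wire rate R with fixed overhead t per request, the block size must satisfy bs >= f·R·t / (1 - f). That gives a rough feel for how much bigger than 64 kB the limit would need to be:

```python
# Minimum block size needed to reach a fraction f of wire rate R when
# each request costs a fixed overhead t (same assumed numbers as above).
R = 110e6    # assumed wire rate, bytes/s
t = 0.8e-3   # assumed per-request overhead, seconds

def min_blocksize(f):
    """Solve bs / (bs/R + t) >= f*R for bs."""
    return f * R * t / (1 - f)

for f in (0.50, 0.90, 0.95, 0.99):
    print(f"{f:.0%} of wire rate needs >= {min_blocksize(f) / 1e6:6.2f} MB blocks")
```

Under these assumptions, 95% of wire speed needs block sizes on the order of a couple of MB, so the hard-coded 64 kB would be well below the knee of the curve while 16 MB is comfortably past it.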

Please consider. I am available for testing/compiling/feedback...

Thank you.

Version: 1.28.x

Depends on

  • Bug 771022