Could Files be more usable over high-latency links?
When opening a remotely mounted directory over a high-latency link, Nautilus/Files is very slow. Two suggestions:
- An option so that, when opening such a directory, Files does not try to fill in the number of items in each subdirectory. (Yes, I know this happens in parallel, but it still uses bandwidth; it should be "easyish" to implement?) How a high-latency mount is detected, I leave as an exercise to the reader :-)
- Before getting that far, I often get a blank screen for several (tens of) seconds. a) A progress bar would be useful. b) It is probably slowed by the check on each name to see whether it is a directory or not. Could this check be done in parallel with showing a "rough-and-ready" filename list, that perhaps only allows "open"? (Use case: when this directory is just intermediate to the one I'm trying to get to, I don't need this metadata.)
- You might think of others...
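To illustrate the second suggestion, here is a minimal sketch (in Python, not Nautilus's actual GIO/C code) of the split I have in mind: return the bare name list from a single directory read immediately, and resolve the expensive per-entry "is it a directory?" checks in the background. The function names are my own invention, purely for illustration.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def quick_listing(path):
    """Return entry names with no per-entry stat calls.

    On a high-latency mount this is roughly one readdir round trip,
    so a file manager could render a rough-and-ready list right away.
    """
    return os.listdir(path)

def resolve_types_async(path, names, pool):
    """Fetch per-entry metadata (directory or not) in the background.

    The UI could fill in icons and child counts as results arrive,
    instead of blocking the whole view on them.
    """
    def check(name):
        return name, os.path.isdir(os.path.join(path, name))
    return [pool.submit(check, n) for n in names]

if __name__ == "__main__":
    names = quick_listing(".")          # usable immediately
    with ThreadPoolExecutor(max_workers=4) as pool:
        for future in resolve_types_async(".", names, pool):
            name, is_dir = future.result()
            print(name, "dir" if is_dir else "file")
```

The point is just the ordering: the cheap name list is available before any of the slow metadata lookups complete.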
BTW, this behaviour is different between NFSv3 over VPN and sshfs - I don't understand why! (I'm finding NFS over VPN takes longer to get started, but is then faster and caches better.)
Otherwise, Nautilus is my favourite file manager! Thanks!