Slice up your life — Large File Download Optimization
We’ve covered upload speed hiccups for large files, but what about when it’s time to download/read?
In this blog, you’d better get your reading glasses because we’re giving you the download on downloads!
We’ve all been there, staring at our watch, wondering how much longer it’s going to take for that large file to move from Cloud Storage to the virtual machine where computation occurs.
Even if it’s only two minutes, that’s an eternity as far as we’re concerned. Especially if we’re paying for that computing power during download — noooo thank you.
The Problem at Hand
When looking at our options here, we might recall that gsutil saved the day for our uploads, but that doesn’t seem to be the case for our downloads.
We even found that a plain HTTP transfer with curl outperforms gsutil here — but why?
The reason behind this performance gap is that gsutil’s default settings don’t work particularly well for large file downloads (that’s the bad news).
The good news, however, is that we can adjust these settings with various command line options and pick a sweet spot for our performance needs.
Fewer threads, more processes
The default settings for gsutil will spread the file download across 4 threads, but use only 1 process.
(Quick recap: a process is an independent instance of a running program, while threads are lighter-weight units of execution that run inside a process. Here we want more processes, each focused on only one task.)
For copying a single file on a powerful GCE VM, we can improve the performance by limiting the number of threads, forcing the system to use multiple processes instead.
To do this, we can specify a command line option to reduce the thread count to 1:
time gsutil -o 'GSUtil:parallel_thread_count=1' cp gs://bukket/fileSRC.dat ./localDST.bin
With this small change, we improved performance by a significant amount, but we’re still not at the level curl reaches when fetching over plain HTTP.
Slice and dice
To get our download time down to where we want it, we’re going to have a throwback moment to the previous article, where we discussed parallel composite uploads.
It turns out gsutil also supports parallel “sliced” downloads, which it performs using HTTP Range GET requests. This process works by pre-allocating disk space for the file; slices within the file are then downloaded and filled in in parallel. Once all slices have finished downloading, the temporary file is renamed to the destination file. No additional local disk space is required for this operation.
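To make the pre-allocate, slice, and rename dance concrete, here’s a minimal shell sketch that simulates it locally. The file names, the four-slice count, and the use of dd are illustrative assumptions, not gsutil internals; gsutil itself fetches each slice over the network with an HTTP Range GET.

```shell
set -e

# Make a 4 MiB stand-in for the "remote" source object.
dd if=/dev/urandom of=fileSRC.dat bs=1M count=4 2>/dev/null

SIZE=$(wc -c < fileSRC.dat)
SLICES=4
CHUNK=$(( (SIZE + SLICES - 1) / SLICES ))

# Pre-allocate the temporary destination file at its full size.
truncate -s "$SIZE" localDST.tmp

# Fill each slice in its own background process, writing into place
# at the correct offset (skip/seek are in units of CHUNK-sized blocks).
for i in $(seq 0 $((SLICES - 1))); do
  dd if=fileSRC.dat of=localDST.tmp bs=$CHUNK skip=$i seek=$i count=1 conv=notrunc 2>/dev/null &
done
wait

# Once every slice has landed, rename the temp file to the destination.
mv localDST.tmp localDST.bin
cmp fileSRC.dat localDST.bin && echo "slices reassembled OK"
```

Each background dd is its own OS process, which mirrors the “more processes, fewer threads” approach we took above.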
Along with setting the thread count to 1, we will also set the maximum number of components to 8.
time gsutil -o 'GSUtil:parallel_thread_count=1' -o 'GSUtil:sliced_object_download_max_components=8' cp gs://bukket/fileSRC.dat ./localDST.bin
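Once you’ve settled on values you like, gsutil can also read them from its boto configuration file (typically ~/.boto), so you don’t have to pass the -o flags on every invocation. A sketch, assuming the same settings as the command above:

```ini
; In your boto config file (e.g. ~/.boto), under the [GSUtil] section:
[GSUtil]
parallel_thread_count = 1
sliced_object_download_max_components = 8
```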
This gives us a much improved download speed.
Download à la mode!
And there you have it! There’s more to discuss about optimizing performance, but we’ll get to that later!