Parallel, Disk-Efficient .zip to .gz Conversion

Similar to my last post about needing to merge shapefiles using PostGIS, I recently downloaded a bunch of energy data from the federal government. 13,370 files to be exact. While the data itself isn't that large (~8GB, compressed), an open-source tool I wanted to evaluate only supports gzip compression, not the zip-compressed files I actually had.

While I could've used this opportunity to merge the files into one and do all the data cleaning up front, I became obsessed with figuring out how to just switch the compression scheme. Here's the one-liner that emerged:

$ find . -type f -name '*.zip' | parallel "unzip -p -q {} | gzip > {.}.gz && rm {}"

find . -type f -name '*.zip'
 - find all zip files in the current directory, including subdirectories

| parallel
 - take the list of files passed in by find and run a command against each one in parallel

"unzip -p -q {} | gzip > {.}.gz && rm {}"
 - unzip a file, with flags -p to pass data to STDOUT and -q for quiet mode
 - {} represents the input file, which comes from list passed by find
 - gzip takes STDOUT as its input, writes to a file whose name is determined by {.}
 (the input file name going into parallel, where the . removes the file extension)
 - && rm {} is run after the gzip process finishes, removing the original .zip file
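If that caveat worries you, one option is to turn on pipefail inside the command that parallel runs, so the pipeline fails (and rm is skipped) whenever unzip fails. A minimal sketch, assuming parallel hands the command to bash (pipefail isn't guaranteed in every POSIX sh):

$ find . -type f -name '*.zip' | parallel 'set -o pipefail; unzip -p -q {} | gzip > {.}.gz && rm {}'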

As a one-liner, it's not the hardest to comprehend, but it's also not the most intuitive. The key idea here is that once we find all of the zip files, we can unzip/gzip them in parallel. This works because each conversion is independent of the others: any single file is unzipped and then gzipped sequentially; we're not unzipping and gzipping one file in parallel. Rather, multiple single-threaded pipelines are kicked off at once instead of leaving the other CPU cores idle.
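If you want to sanity-check what those pipelines will look like before anything runs, GNU parallel's --dry-run flag prints each generated command without executing it:

$ find . -type f -name '*.zip' | parallel --dry-run "unzip -p -q {} | gzip > {.}.gz && rm {}"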

Once a file has been converted from zip to gzip, the original zip file is deleted. So for the most part, this process takes roughly constant extra disk space (if you ignore the 4-8 files in flight at any moment; GNU parallel defaults to one job per CPU core).
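Since the originals are removed as we go, it's worth being a little paranoid about the output. One variant (a sketch, not something I've battle-tested) uses gzip -t to verify the integrity of the new .gz file before rm runs, and -j4 to cap concurrency if you want a tighter bound on the number of files in flight:

$ find . -type f -name '*.zip' | parallel -j4 "unzip -p -q {} | gzip > {.}.gz && gzip -t {.}.gz && rm {}"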

Like many one-liners, this took longer to figure out than was actually worth the time savings. But such is life, and now it’s available in the wild for the next person who wonders how to do something like this!