This command copies a folder tree (keeping the parent folders) through ssh. It will:
- compress the data
- stream the compressed data through ssh
- decompress the data into the local folder
It takes no additional space on the host machine (no need to create a compressed tar file on the host, transfer it and then delete it). There are situations (like mirroring a remote machine) where you simply can't wait for a hugely time-consuming scp, or can't compress the data to a tarball on the host because of file-system space limitations, so this command does the job quite well. It performs best when a lot of data is involved; if you are copying a small amount of data, use scp instead (easier to type).
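The pattern described above looks something like this (host name and paths are placeholders):
ssh user@remotehost 'tar czf - -C / path/to/folder' | tar xzf - -C /local/destination
The remote tar compresses to stdout, ssh streams it, and the local tar decompresses on the fly, so nothing extra is ever written on the host.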
optipng and advancecomp (for the advpng and advdef tools) are the best FOSS tools for losslessly compressing PNGs. With the above tool chain, you can cut as much as 20% off a PNG's file size.
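A typical chain looks like this (the flag values are illustrative, not the only sensible choices; all three tools modify the file in place):
optipng -o7 image.png && advpng -z -4 image.png && advdef -z -4 image.png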
Ever compress a file for the web by replacing all newline characters with nothing so it becomes one nice big blob? It is a great idea, but what about when you want to edit that file? A serious pain in the butt. I ran into this today: my only copy of a CSS file had been "compressed" with no newlines. I whipped this up and it converted the file back into nice, human-readable CSS :-) It could be nicer, but it does the job.
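One way to do it, assuming GNU sed (which understands \n in the replacement); the exact break points are a matter of taste:
sed 's/{/{\n/g; s/}/}\n/g; s/;/;\n/g' compressed.css > readable.css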
Part of the "atool" package.
Find all .gz files and recompress them to bz2 on the fly. No temp files. edit: forgot the double quotes! jeez!
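A sketch of this approach using find alone (not necessarily the original command):
find . -type f -name '*.gz' -exec sh -c 'gunzip -c "$1" | bzip2 -c > "${1%.gz}.bz2" && rm -f "$1"' _ {} \;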
a - archive
m5 - compression level; 0 = lowest compression ... 5 = max compression
-v5M - split the output into 5-megabyte volumes; change to 700 for a CD, or 4200 for a DVD R
R - recursive, for directories; do not use it for files
It's better to have the output already split at compression time than to run the 'split' command after compressing, which would temporarily consume double the disk space. Found at http://www.ubuntu-unleashed.com/2008/05/howto-create-split-rar-files-in-ubuntu.html
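Put together, the invocation looks roughly like this (archive and directory names are placeholders; rar switches are case-insensitive):
rar a -m5 -v5M -R archive.rar some_directory/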
You need pxz for the actual work (http://jnovy.fedorapeople.org/pxz/). The function could be improved with better multi-file and stdin/stdout support.
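A minimal sketch of such a wrapper (the function name is made up; pxz accepts the usual xz-style options):
pxz9() { for f in "$@"; do pxz -9 "$f"; done; }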
If you have servers on a Wide Area Network (WAN), you may experience very slow transfers due to limited bandwidth and latency. To speed up your transfers you need to compress the data so there is less to send, so the solution is to use a compression tool such as gzip, bzip2 or compress before and after the data transfer. Note that ssh's "-C" option is not compatible with every ssh version (ssh2, for instance).
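For example, to push a single large file with compression on both ends (names and paths are placeholders):
gzip -c bigfile | ssh user@remotehost 'gunzip -c > /remote/path/bigfile'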
Useful if you want to reduce a PDF's file size from the command line using Ghostscript.
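A common Ghostscript invocation for this (/ebook is one of several quality presets; /screen is smaller, /prepress higher quality):
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf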
Part of the "atool" package
Preserves creation time, modification time, permissions, the directory structure, etc.
Create a compressed, encrypted backup from $source to $targetfile with password $key and exclude-file $excludefile.
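The original command isn't reproduced here; one way to get this effect with tar and gpg (symmetric encryption assumed; newer gpg versions may also need --pinentry-mode loopback) is:
tar czf - -X "$excludefile" "$source" | gpg --batch --symmetric --passphrase "$key" -o "$targetfile"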
- recompresses all gz files to bz2 files from this point and below in the directory tree
- output shows the size of the original file, and the size of the new file. Useful.
- conceptually easier to understand than playing tricks with awk and sed.
- don't like output? Use the following line:
for gz in $(find . -type f -name '*.gz'); do f=$(basename "$gz" .gz) && d=$(dirname "$gz") && gunzip -c "$gz" | bzip2 -c > "$d/$f.bz2" && rm -f "$gz"; done
The 'aunpack' command is part of atool.
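For example, 'aunpack foo.tar.gz' extracts the archive and, if it would scatter files into the current directory, moves them into a subdirectory instead.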
Works even if file name contains \n. Spawns one job per core.
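The command itself isn't shown above; a GNU parallel pattern with both properties (null-separated names, one job per core by default), assuming the gz-to-bz2 recompression task from the related entries, would be:
find . -type f -name '*.gz' -print0 | parallel -0 'gunzip -c {} | bzip2 -c > {.}.bz2 && rm -f {}'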
Create a 7zip archive named "some_directory.7z" and add the directory "some_directory" to it. The `-mhe=on` enables header encryption; basically it mangles the file names so no one knows what's inside the 7z. Without -mhe=on, a person without the password would still be able to view the file names inside the 7z, so having this option ensures confidentiality. To keep the result small, use lzma2 at compression level 9. Lzma2 fast bytes range from 5 to 272; the higher the number, the more aggressive it is at finding repetitive bytes that can be added to the dictionary. Here the fast bytes are set to 64 and the dictionary is 32 MB. Depending on your purposes (the directory size and desired file size), you can be more aggressive with these values. Lastly, `-ms=on` just says to concatenate all the individual files and treat them as a single file when compressing, which generally leads to a higher compression ratio.
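Putting the described options together (the bare -p makes 7z prompt for the password instead of passing it on the command line):
7z a -t7z -m0=lzma2 -mx=9 -mfb=64 -md=32m -ms=on -mhe=on -p some_directory.7z some_directory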
This solution is similar to [1] except that it does not have any dependency on GNU Parallel. Also, it tries to minimize the impact on the running system (using ionice and nice). [1] http://www.commandlinefu.com/commands/view/7009/recompress-all-.gz-files-in-current-directory-using-bzip2-running-1-job-per-cpu-core-in-parallel
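A rough equivalent without GNU Parallel, throttled with ionice and nice as described (a sketch, not the original command):
find . -type f -name '*.gz' | while read -r f; do ionice -c3 nice -n19 sh -c 'gunzip -c "$1" | bzip2 -c > "${1%.gz}.bz2" && rm -f "$1"' _ "$f"; done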