Commands using tar (225)


  • 70
    wget -qO - "http://www.tarball.com/tarball.gz" | tar zxvf -
    jianingy · 2009-02-08 12:22:54 7

  • 41
    tar -tf <file.tar.gz> | xargs rm -r
    prayer · 2009-07-06 22:23:11 3
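
    Note: plain xargs splits on whitespace, so the command above breaks on filenames containing spaces. A variant that splits on newlines instead (assumes GNU xargs for the -d option):

    tar -tf <file.tar.gz> | xargs -d '\n' rm -r
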
  • Sample output, showing the GNU tar version this was tested with:
    ~$ tar --version
    tar (GNU tar) 1.20


    29
    tar --exclude-vcs -cf src.tar src/
    hendry · 2010-01-20 10:16:17 3
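
    On older tars that lack --exclude-vcs, a rough equivalent with explicit patterns (a sketch covering only the most common VCS directories):

    tar --exclude='.git' --exclude='.svn' --exclude='CVS' -cf src.tar src/
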
  • What happens here: we tell tar to create ("-c") an archive of all files in the current dir "." (recursively) and write the data to stdout ("-f -"). Next we tell pv the total size ("-s") of all files in the current dir: "du -sb . | awk '{print $1}'" returns the number of bytes, which is fed to pv as the "-s" parameter. Then we gzip the whole stream and write the result to out.tgz. This way pv knows how much data is still left to be processed and can show, for example, that it will take another 4 mins 49 secs to finish. Credit: Peteris Krumins http://www.catonmat.net/blog/unix-utilities-pipe-viewer/


    26
    tar -cf - . | pv -s $(du -sb . | awk '{print $1}') | gzip > out.tgz
    opertinicy · 2009-12-18 17:09:08 3
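
    For the extraction direction no du trick is needed: given a regular file argument, pv can determine the size by itself. A minimal sketch:

    pv out.tgz | tar -xzf -
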
  • Create an AES256 encrypted and compressed tar archive. User is prompted to enter the password. Decrypt with: openssl enc -d -aes256 -in <file> | tar --extract --file - --gzip


    25
    tar --create --file - --posix --gzip -- <dir> | openssl enc -e -aes256 -out <file>
    seb1245 · 2012-11-27 15:33:45 1
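
    Note: on OpenSSL 1.1.1 and newer, the default key derivation draws a warning; adding -pbkdf2 on both the encrypt and the decrypt side should quiet it. A sketch:

    tar --create --file - --posix --gzip -- <dir> | openssl enc -e -aes256 -pbkdf2 -out <file>
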
  • This bzip2-compresses a folder and transfers it over the network to "host", throttled to 777k (note that cstream's -t takes bytes per second, not bits). cstream can do a lot more; have a look at http://www.cons.org/cracauer/cstream.html#usage for example: echo w00t, i'm 733+ | cstream -b1 -t2 hehe :)


    24
    tar -cj /backup | cstream -t 777k | ssh host 'tar -xj -C /backup'
    wires · 2009-07-02 10:05:53 6
  • At the client side: "tar c myfile | nc localhost 7000" sends file myfile to the server; "tar c mydir | nc localhost 7000" sends directory mydir.


    19
    while true; do nc -l 7000 | tar -xvf -; done
    anhpht · 2011-10-26 23:43:51 4
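
    Note: with some netcat flavors the sending side won't close the connection when tar finishes; traditional/Debian netcat uses -q 0 for that, OpenBSD nc uses -N. A sketch with the former:

    tar c mydir | nc -q 0 localhost 7000
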
  • Create a tar file in multiple parts if it's too large for a single disk, your filesystem, etc. Rejoin later with `cat <name>.tar.* | tar xf -`


    17
    tar cf - <dir>|split -b<max_size>M - <name>.tar.
    dinomite · 2009-11-11 01:53:33 0

  • 17
    wget -qO - http://example.com/path/to/blah.tar.gz | tar xzf -
    TuxOtaku · 2010-02-15 04:00:29 1
  • This uncompresses the file while it's being downloaded, which makes it much faster than downloading first and extracting afterwards.


    16
    wget http://URL/FILE.tar.gz -O - | tar xfz -
    theturingmachine · 2011-01-18 12:17:16 1
  • It compresses the files and folders to stdout, pipes that over ssh to the server's stdin, and runs tar there to extract the input into whatever destination is given with -C. If you omit "-C /destination", it extracts into the user's home folder, much like `scp file user@host:`. The "v" in the tar commands can be removed for no verbosity.


    13
    tar czv file1 file2 folder1 | ssh user@host tar zxv -C /destination
    xsawyerx · 2009-01-29 10:38:26 3
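
    A variant that lets ssh do the compressing (-C) instead of tar, with plain tar on both ends; a sketch using the same placeholder host:

    tar cv file1 file2 folder1 | ssh -C user@host tar xv -C /destination
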
  • Execute it on the source host, where the files you wish to back up reside. With the minus '-', tar delivers the compressed output to standard output, which travels through the ssh session to the remote host. On the other end, the backup host receives the stream on standard input and writes it to /path/to/backup/backupfile.tar.bz2


    13
    tar jcpf - [sourceDirs] | ssh user@host "cat > /path/to/backup/backupfile.tar.bz2"
    mack · 2010-03-24 01:29:25 0
  • Create a tgz archive of all the files containing local changes relative to a subversion repository. Add the '-q' option to only include files under version control: svn st -q | cut -c 8- | sed 's/^/\"/;s/$/\"/' | xargs tar -czvf ../backup.tgz Useful if you are not able to commit yet but want to create a quick backup of your work. Of course, if you find yourself needing this, it's probably a sign you should be using a branch, patches or distributed version control (git, mercurial, etc.)


    12
    svn st | cut -c 8- | sed 's/^/\"/;s/$/\"/' | xargs tar -czvf ../backup.tgz
    chrisdrew · 2009-02-09 11:24:31 3
  • This command will copy a folder tree (keeping the parent folders) through ssh. It will compress the data on the remote side, stream it through ssh, and decompress it into the local folder. It takes no additional space on the host machine (no need to create a compressed tar file there, transfer it and then delete it). There are situations (like mirroring a remote machine) where you simply can't wait for a huge scp to finish, or can't compress the data to a tarball on the host because of file system space limitations, so this command does the job quite well. It performs best when a lot of data is involved; if you're copying a small amount, use scp instead (easier to type).


    12
    ssh <host> 'tar -cz /<folder>/<subfolder>' | tar -xvz
    polaco · 2009-11-10 20:06:47 4
  • If the archive has a leading directory level with the same name as the archive itself and you want to strip it, this command is for you.


    12
    tar -xaf archive.tar.gz --strip-components=1
    sirex · 2011-11-29 07:38:19 0
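
    To check whether an archive really has a single leading directory before stripping it, list the first few entries:

    tar -tf archive.tar.gz | head -n 3
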
  • Leave it to a proprietary software vendor to turn a cheap and easy parlor trick into a selling point. "Hey guys, why don't we turn our _collection of multiple files_ into a *collection of multiple files*!!" Extract the above with: cat pics.tar.gz.??? | tar xzv -- works on any Unix, no need to install junkware! (If you must make proprietary software, at least make it do something *new*.) if [ -e windows ]; then use 7-Zip


    11
    tar czv Pictures | split -d -a 3 -b 16M - pics.tar.gz.
    asmoore82 · 2009-06-09 19:48:01 3
  • Useful for moving many files (thousands or millions) over ssh. Faster than scp because this way you save a lot of TCP connection establishment (syn/ack packets). On a fast LAN (I have just tested gigabit Ethernet) it is faster not to compress the data, so the command would be: tar -cf - /home/user/test | ssh user@host 'cd /tmp; tar xf -'


    11
    tar -cf - /home/user/test | gzip -c | ssh user@host 'cd /tmp; tar xfz -'
    esplinter · 2009-08-24 18:35:38 6
  • Extracts the tar contents into a particular directory.


    11
    tar xfz filename.tar.gz -C PathToDirectory
    Dhinesh · 2011-11-17 12:43:56 2
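
    Note that tar won't create the target directory for you; if it may not exist yet, create it first:

    mkdir -p PathToDirectory && tar xfz filename.tar.gz -C PathToDirectory
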
  • This Anti-TarBomb function makes it easy to unpack a .tar.gz without worrying about the possibility that it will "explode" in your current directory. I had always created a temporary folder to extract the tarball in first, but I got tired of having to reorganize the files afterwards. Just add this function to your .zshrc / .bashrc and use it like this: atb arch1.tar.gz It will create a folder for the extracted files, if they aren't already contained in a single folder. This version only works for .tar.gz, but it's very easy to edit the function to suit your needs, if you want to extract .tgz, .tar.bz2 or just .tar. More info about tarbombs at http://www.linfo.org/tarbomb.html Tested in zsh and bash. UPDATE: This function works for .tar.gz, .tar.bz2, .tgz, .tbz and .tar in zsh (not working in bash): atb() { l=$(tar tf $1); if [ $(echo "$l" | wc -l) -eq $(echo "$l" | grep $(echo "$l" | head -n1) | wc -l) ]; then tar xf $1; else mkdir ${1%.t(ar.gz||ar.bz2||gz||bz||ar)} && tar xf $1 -C ${1%.t(ar.gz||ar.bz2||gz||bz||ar)}; fi ;} UPDATE2: From the comments, bepaald came up with a variant that works for .tar.gz, .tar.bz2, .tgz, .tbz and .tar in bash: atb() { shopt -s extglob; l=$(tar tf $1); if [ $(echo "$l" | wc -l) -eq $(echo "$l" | grep $(echo "$l" | head -n1) | wc -l) ]; then tar xf $1; else mkdir ${1%.t@(ar.gz|ar.bz2|gz|bz|ar)} && tar xf $1 -C ${1%.t@(ar.gz|ar.bz2|gz|bz|ar)}; fi; shopt -u extglob; }


    10
    atb() { l=$(tar tf $1); if [ $(echo "$l" | wc -l) -eq $(echo "$l" | grep $(echo "$l" | head -n1) | wc -l) ]; then tar xf $1; else mkdir ${1%.tar.gz} && tar xf $1 -C ${1%.tar.gz}; fi ;}
    elfreak · 2010-10-16 05:50:32 5

  • 10
    tar -czvf - /src/dir | ssh remotehost "(cd /dst/dir ; tar -xzvf -)"
    AllyUnion · 2010-12-18 00:17:34 1
  • Using 7z to create archives is OK, but when you use tar, you preserve all file-specific information such as ownership, perms, etc. If that's important to you, this is a better way to do it.


    8
    tar cf - /path/to/data | 7z a -si archivename.tar.7z
    slashdot · 2009-07-14 14:21:30 2
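
    To unpack it later, stream the tar back out of the 7z container (-so writes to stdout):

    7z x -so archivename.tar.7z | tar xf -
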

  • 8
    curl http://example.com/a.gz | tar xz
    psykotron · 2009-12-20 18:47:49 0
  • Here's how to recover the remote backup over ssh.


    8
    ssh user@host "cat /path/to/backup/backupfile.tar.bz2" | tar jpxf -
    mack · 2010-03-24 01:35:28 2
  • This is freaking sweet!!! Here is the full alias (I didn't want to cause display problems on commandlinefu.com's homepage):

    alias tarred='( ( D=`builtin pwd`; F=$(date +$HOME/`sed "s,[/ ],#,g" <<< ${D/${HOME}/}`#-%F.tgz); S=$SECONDS; tar --ignore-failed-read --transform "s,^${D%/*},`date +${D%/*}.%F`,S" -czPf "$F" "$D" && logger -s "Tarred $D to $F in $(($SECONDS-$S)) seconds" ) & )'

    Creates a .tgz archive of whatever directory it is run from, in the background, detached from the current shell, so it will still complete if you log out. Also, you can run this as many times as you want: if the archive .tgz already exists, it just moves it to a numbered backup ('--backup=numbered'). The coolest part is the transformation performed by tar and sed so that the archive file names are created automatically, and when you extract the archive file it is completely safe thanks to the transform expression.

    If you archive, say, /home/tombdigger/new-stuff-to-backup/, it will create the archive /home/#home#tombdigger#new-stuff-to-backup#-2010-11-18.tgz. Then when you extract it, like tar -xvzf #home#tombdigger#new-stuff-to-backup#-2010-11-18.tgz, instead of overwriting an existing /home/tombdigger/new-stuff-to-backup/ directory, it extracts to /home/tombdigger/new-stuff-to-backup.2010-11-18/. Basically, the tar archive filename is the PWD with all '/' replaced with '#', and the date is appended to the name so that multiple archives are easily managed. This example saves all archives to your $HOME/archive-name.tgz, but I have a $BKDIR variable with my backup location for each shell user, so I just replaced HOME with BKDIR in the alias. So when I ran this in /opt/askapache/SOURCE/lockfile-progs-0.1.11/ the archive was created at /askapache-bk/#opt#askapache#SOURCE#lockfile-progs-0.1.11#-2010-11-18.tgz

    Upon completion, it uses the universal logger tool to report its completion to syslog and stderr (printed to your terminal); just remove that part if you don't want it, or remove the '-s' option from logger to keep the logs only in syslog and not on your terminal. Here's how my syslog server recorded this:

    2010-11-18T00:44:13-05:00 gravedigger.askapache.com (127.0.0.5) [user] [notice] (logger:) Tarred /opt/askapache/SOURCE/lockfile-progs-0.1.11 to /askapache-bk/tarred/#opt#SOURCE#lockfile-progs-0.1.11#-2010-11-18.tgz in 4 seconds

    Caveats: Really this is very robust and foolproof; the only issue I ever have with it (I've been using this for years on my web servers) is if you run it in a directory and then a file changes in that directory while tar is running: you get a warning message and your archive might have a problem for the changed file. This happens when running this in a logs directory, a temp dir, etc. That's the only issue I've ever had, really nothing more than a heads up.

    Advanced: This is a simple alias, and very useful, as it works on basically every Linux box with semi-current tar, GNU coreutils, bash and sed. But if you want to customize it or pass parameters (like a dir to backup instead of pwd), check out this function I use; this is what I created the alias from, BTW, replacing my aa_status function with logger, and adding $SECONDS runtime instead of using tar's --totals:

    function tarred () { local GZIP='--fast' PWD=${1:-`pwd`} F=$(date +${BKDIR}/%m-%d-%g-%H%M-`sed -u 's/[\/\ ]/#/g' <<< ${PWD/${HOME}/}`.tgz); [[ ! -r "$PWD" ]] && echo "Bad permissions for $PWD" 1>&2 && return 2; ( ( tar --totals --ignore-failed-read --transform "s@^${PWD%/*}@`date +${PWD%/*}.%m-%d-%g`@S" -czPf $F $PWD && aa_status "Completed Tarp of $PWD to $F" ) & ) } #From my .bash_profile http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html


    8
    alias tarred='( ( D=`builtin pwd`; F=$(date +$HOME/`sed "s,[/ ],#,g" <<< ${D/${HOME}/}`#-%F.tgz); tar --ignore-failed-read --transform "s,^${D%/*},`date +${D%/*}.%F`,S" -czPf "$F" "$D" &>/dev/null ) & )'
    AskApache · 2010-11-18 06:24:34 0

  • 7
    curl http://example.com/foo.tar.gz | tar zxvf -
    anarchivist · 2009-02-18 13:02:05 0