
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.
Terminal - Commands using tar - 203 results
for file in $(find /var/backup -name "backup*" -type f |sort -r | tail -n +10); do rm -f $file; done ; tar czf /var/backup/backup-system-$(date "+\%Y\%m\%d\%H\%M-\%N").tgz --exclude /home/dummy /etc /home /opt 2>&- && echo "system backup ok"
2014-09-24 14:04:11
User: akiuni
Functions: date echo file find rm sort tail tar
Tags: backup Linux cron
0

This command can be added to a crontab to run a nightly backup of the listed directories while keeping only the 10 most recent backup files.
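Note the \% escapes in the date format: cron treats a bare % as end-of-command, so they must stay escaped inside a crontab. A sketch of a nightly entry, assuming a 2 a.m. schedule:

0 2 * * * for file in $(find /var/backup -name "backup*" -type f |sort -r | tail -n +10); do rm -f $file; done ; tar czf /var/backup/backup-system-$(date "+\%Y\%m\%d\%H\%M-\%N").tgz --exclude /home/dummy /etc /home /opt 2>&- && echo "system backup ok"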

tar xvzf <file.tar.gz>
tar -cpf - ./sourceFile | tar -xpf - -C ./targetDir/
curl -L https://gist.github.com/westonruter/ea038141e46e017d280b/download | tar -xvz --strip-components=1
2014-08-19 22:19:44
User: westonruter
Functions: tar
Tags: gist
0

https://twitter.com/westonruter/status/501855721172922369

tar -vzc /path/to/cool/directory | ssh -q my_server 'tar -vzx -C /'
2014-07-31 18:42:57
User: regulatre
Functions: ssh tar
0

Copy a local directory to a remote server using ssh+tar (assuming the server is lame and does not have rsync).

tar cvzf - dir | ssh my_server 'tar xzf -'
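Pulling in the other direction (remote to local) is a similar sketch; here ./localDir is a hypothetical destination directory:

ssh my_server 'tar czf - /path/to/cool/directory' | tar xzf - -C ./localDir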
bkname="test"; tobk="*" ; totalsize=$(du -csb $tobk | tail -1 | cut -f1) ; tar cvf - $tobk | tee >(sha512sum > $bkname.sha512) >(tar -tv > $bkname.lst) | mbuffer -m 4G -P 100% | pv -s $totalsize -w 100 | dd of=/dev/nst0 bs=256k
2014-07-22 15:47:50
User: johnr
Functions: cut dd du tail tar tee
0

This will write a backup of files/folders to TAPE (LTO3-4 in my case). Could be changed to write to DVD/Blu-ray.

Go to the directory where you want to write the output files: cd /bklogs

Enter a name in bkname="Backup1" and the folders/files to back up in tobk="/home /var/www".

It will create a tar and write it to the tape drive on /dev/nst0.

In the process, it will

1) generate a sha512 sum of the tar to $bkname.sha512, so you can validate that your data is intact

2) generate a filelist of the content of the tar with filesize to $bkname.lst

3) buffer the tar stream to prevent shoe-shining the tape (I use 4 GB for LTO3 (80 MB/s) and 8 GB for LTO4 (120 MB/s); 3 TB USB3 disks support those speeds, otherwise I use a 3x2 TB raidz)

4) show buffer in/out speed and used space in the buffer

5) show progress bar with time approximation using pv

ADD:

To eject the tape:

; sleep 75; mt-st -f /dev/nst0 rewoffl

TODO:

1) When using old tapes, if the buffer fills up and the drive slows down, it means the tape is worn and should be replaced rather than wiped and recycled for another backup. Logging where and when it slows down could provide good information on the wear of the tape. I don't know how to get that information from the mbuffer output and trigger a message like "This tape slowed down X times at Y1 GB, Y2 GB, Y3 GB, down to Z MB/s for a total of 30 sec. It would be wise to replace this tape next time you want to write to it."

2) Fix filesize approximation

3) Save all the output to $bkname.log with progress updates as new lines. (Anyone have an idea?)

4) Support spanning across multiple tapes.

5) Replace the tar format with something else (dar?); looking at xar right now (https://code.google.com/p/xar/), whose xml metadata could contain per-file checksums, compression algorithm (bzip2, xz, gzip), gnupg encryption, thumbnails, video previews, image EXIF... But that's another project.

TIP:

1) You can specify the width of the progress bar of pv. If it's longer than the terminal, each refresh is written to a new line; that way you can see whether there were speed slowdowns during writing.

2) Remove the v in the tar argument cvf to prevent listing all files added to the archive.

3) You can get tarsum (http://www.guyrutenberg.com/2009/04/29/tarsum-02-a-read-only-version-of-tarsum/) and add >(tarsum --checksum sha256 > $bkname_list.sha256) after the tee to generate checksums of individual files!
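For illustration, the tee stage with that extra tarsum branch might then read (a sketch, assuming tarsum is installed and accepts these options):

tar cvf - $tobk | tee >(sha512sum > $bkname.sha512) >(tar -tv > $bkname.lst) >(tarsum --checksum sha256 > $bkname_list.sha256) | mbuffer -m 4G -P 100% | pv -s $totalsize -w 100 | dd of=/dev/nst0 bs=256k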

tar -cj / -X /tmp/exclude.txt | cstream -v 1 -c 3 -T 10 | ssh user@host 'tar -xj -C /backupDestination'
2014-07-21 18:52:19
User: fantleas
Functions: ssh tar
0

The files are automatically uncompressed when they reach the destination machine. This is a fast way to back up your server to your local computer while it's running (shutting down services is recommended).

A file named "exclude.txt" is needed at /tmp/, containing the following (a sketch for creating it follows the list):

/dev/*
/media/*
/mnt/*
/proc/*
/sys/*
/tmp/*
/home/*/.local/share/Trash
/home/*/.gvfs
/home/*/.cache
/home/*/.thumbnails
/etc/fstab
/lib/modules/*/volatile/.mounted
/var/run/*
/var/lock/*
/var/tmp/*
/var/cache/apt/archives/*
/lost+found/*
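A minimal sketch for creating that file in one go with a heredoc (same contents as above):

cat > /tmp/exclude.txt <<'EOF'
/dev/*
/media/*
/mnt/*
/proc/*
/sys/*
/tmp/*
/home/*/.local/share/Trash
/home/*/.gvfs
/home/*/.cache
/home/*/.thumbnails
/etc/fstab
/lib/modules/*/volatile/.mounted
/var/run/*
/var/lock/*
/var/tmp/*
/var/cache/apt/archives/*
/lost+found/*
EOF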

tar cvf my_txt_files.tar `find . -type f -name "*.txt"`
2014-06-03 01:08:39
Functions: tar
0

I prefer using the backquotes `...` command-substitution method to capture the output of a shell command. I'm interested to hear thoughts/opinions on doing it this way. I have never had problems with this method.
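One caveat with any form of command substitution here: filenames containing whitespace get split into separate arguments. A null-delimited sketch (assuming GNU tar, whose --null makes -T read NUL-terminated names) avoids that:

find . -type f -name "*.txt" -print0 | tar --null -T - -cvf my_txt_files.tar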

tar -xvpf file.tar.gz
2014-04-25 10:23:03
User: shajeen
Functions: tar
1

-x, --extract, --get : extract files from an archive

-p, --preserve-permissions, --same-permissions : extract information about file permissions (default for superuser)

-f, --file=ARCHIVE : use archive file or device ARCHIVE

-v, --verbose : verbosely list files processed

tar -cf - file1 dir1/ dir2/ | md5sum
2014-04-17 14:33:44
User: snipertyler
Functions: tar
-3

Doesn't create a file

Make sure to list the files / directories in the same order every time.
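To make the ordering deterministic regardless of how the shell or filesystem enumerates entries, recent GNU tar (1.28+) can sort members itself; note the sum still changes if file metadata such as mtimes changes, since tar archives include it:

tar --sort=name -cf - file1 dir1/ dir2/ | md5sum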

tar -cvf - /path/to/tar/up | xz - > myTarArchive.tar.xz
2014-03-18 19:51:50
User: razerwolf
Functions: tar
1

Compress a directory into an xz archive when tar doesn't have the -J option (OS X tar doesn't have -J).
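Extraction on such a tar works the same way in reverse; a sketch that pipes xz the other way:

xz -dc myTarArchive.tar.xz | tar -xvf -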

tar -cJf myarchive.tar.xz /path/to/archive/
2014-03-13 03:34:18
User: Sepero
Functions: tar
1

Compress files or a directory to xz format. XZ has better and faster compression than bzip2 in most cases. A tar.xz is superior to the 7zip format because tar preserves file permissions and other metadata.

git log origin/master..master --name-only --pretty="format:" | sort | uniq | xargs tar -rf mytarfile.tar
git diff-tree -r --no-commit-id --name-only --diff-filter=ACMRT COMMID_HASH | xargs tar -rf mytarfile.tar
2014-03-04 12:16:07
Functions: tar xargs
1

################################################################################
# get all modified files since last commit and zip them to upload to live server
################################################################################

# delete previous tar output file
rm mytarfile.tar -rf
#rm c:/tarOutput/*.* -rf

# get last commit id and store in variable
declare RESULT=$(git log --format="%H" | head -n1)

# generate file list and export to tar file
git diff-tree -r --no-commit-id --name-only --diff-filter=ACMRT $RESULT | xargs tar -rf mytarfile.tar

# extract tar files to specified location
tar -xf mytarfile.tar -C c:/tarOutput

tar --exclude='patternToExclude' --use-compress-program=pbzip2 -cf 'my-archive.tar.bz2' directoryToZip/
tar -axf fileNameHere.tgz
2014-02-01 16:14:22
User: toro
Functions: tar
Tags: tar
1

With -a, tar chooses the (de)compression program from the archive suffix, so you don't care about the file type (bz2, gzip, etc.).

tar zxvf fileNameHere.tgz
2014-01-28 10:33:51
User: Jonas_E
Functions: tar
Tags: tar unpack
-2

tar command options:

-z : Uncompress the resulting archive with the gzip command.

-x : Extract to disk from the archive.

-v : Produce verbose output, i.e. show progress and file names while extracting files.

-f backup.tgz : Read the archive from the specified file called backup.tgz.

-C /tmp/data : Unpack/extract files in /tmp/data instead of the default current directory.

tar --exclude-from=$excludefile -zcvp "$source" | openssl aes-128-cbc -salt -out $targetfile -k $key
2013-12-13 19:35:20
User: klausman
Functions: tar
0

Create a compressed, encrypted backup from $source to $targetfile with password $key and exclude-file $excludefile.
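A matching restore sketch, assuming the same $key (openssl reads the salt from the file header, so -salt is not needed for decryption):

openssl aes-128-cbc -d -in $targetfile -k $key | tar -zxvpf -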

find <PATH> -maxdepth 1 -type f -name "server.log*" -exec tar czPf '{}'.tar.gz --transform='s|.*/||' '{}' --remove-files \;
find /mnt/storage/profiles/ -maxdepth 1 -mindepth 1 -type d | while read d; do tarfile=`echo "$d" | cut -d "/" -f5`; destdir="/local/backupdir/"; tar -g "$destdir"/"$tarfile".snar -czf "$destdir"/"$tarfile"_`date +%F`.tgz -P $d; done
find /mnt/storage/profiles/ -maxdepth 1 -mindepth 1 -type d | while read d; do tarfile=`echo "$d" | cut -d "/" -f5`; destdir="/local/backupdir"; tar -czvf "$destdir"/"$tarfile"_`date +%F`.tgz -P $d; done
2013-12-05 19:18:03
User: jaimerosario
Functions: cut find read tar
1

Problem: I wanted to back up user data individually, using an incremental method. In this example, all user data is located in "/mnt/storage/profiles", with about 25 folders inside, each named after a user ( /mnt/storage/profiles/mike; /mnt/storage/profiles/lucy ...)

I need each individual folder backed up, not the whole "/mnt/storage/profiles". So, using find to limit directory depth and creating two variables (tarfile=username & destdir=destination), tar will create a .tgz file for each folder, resulting in a "mike_2013-12-05.tgz" and "lucy_2013-12-05.tgz".
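To restore one of these, a GNU tar sketch: extract the full (level-0) archive first, then each incremental in date order, passing /dev/null as the snapshot file so tar applies them as increments (and -P to match the absolute paths used at creation):

tar -xzPf /local/backupdir/mike_2013-12-05.tgz -g /dev/null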

find /mnt/storage/profiles/ -maxdepth 1 -mindepth 1 -type d | while read d; do tarfile=`echo "$d" | cut -d "/" -f5`; destdir="/local/backupdir/"; tar -czf $destdir/"$tarfile"_full.tgz -P $d; done
2013-12-05 19:07:17
User: jaimerosario
Functions: cut find read tar
1

Problem: I wanted to back up user data individually. In this example, all user data is located in "/mnt/storage/profiles", with about 25 folders inside, each named after a user ( /mnt/storage/profiles/mike; /mnt/storage/profiles/lucy ...)

I need each individual folder backed up, not the whole "/mnt/storage/profiles". So, using find to limit directory depth and creating two variables (tarfile=username & destdir=destination), tar will create a .tgz file for each folder, resulting in a "mike_full.tgz" and "lucy_full.tgz".

wget --no-check-certificate https://www.kernel.org/$(wget -qO- --no-check-certificate https://www.kernel.org | grep tar | head -n1 | cut -d\" -f2)
tar -cf "../${PWD##*/}.tar" .
2013-11-06 11:15:38
User: joedhon
Functions: tar
-1

Should do the same as command #12875, just shorter: ${PWD##*/} strips everything up to the last slash, so the archive lands in the parent directory, named after the current one.