Commands tagged backup - 59 results
buf() { f=${1%%.*};e=${1/$f/};cp -v $1 $f-$(date +"%Y%m%d_%H%M%S")$e;}
2010-12-15 09:50:04
User: unefunge
Functions: cp date
0

"infix" version in bash (4.x+)

Remove -v to make it silent.

BTW: The OP forgot to use "cat" and "nmap" ;-) I had a good laugh though.
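For illustration: since f=${1%%.*} cuts at the first dot, the timestamp lands between the basename and the full extension, so a run might look like this (filename and timestamp made up):

$ buf data.tar.gz

'data.tar.gz' -> 'data-20101215_095004.tar.gz'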

buf () {oldname=$1; if [ "$oldname" != "" ]; then datepart=$(date +%Y-%m-%d); firstpart=`echo $oldname | cut -d "." -f 1`; newname=`echo $oldname | sed s/$firstpart/$firstpart.$datepart/`; cp -i ${oldname} ${newname}; fi }
2010-12-14 19:58:34
User: Seebi
Functions: cp cut date sed
-3

This backup function preserves the file suffix, allowing zsh suffix aliases and desktop default actions to work with the backup file too.
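For illustration, the date goes in before the suffix here, so a run might look like this (filename and date made up):

$ buf report.pdf

# quietly creates report.2010-12-14.pdf (cp -i only prompts if the target already exists)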

buf () { cp $1{,$(date +%Y%m%d_%H%M%S)}; }
2010-12-14 14:02:03
User: unefunge
Functions: cp date
2

1. You don't need to prepend the year with 20 - just use %Y instead of %y.

2. you may want to make your function a bit more secure:

buf () { cp ${1?filename not specified}{,$(date +%Y%m%d_%H%M%S)}; }
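The whole trick is bash brace expansion: cp file{,suffix} expands to cp file filesuffix, and the ${1?...} form makes the shell abort with that message when no argument is given. A made-up example:

$ buf notes.txt

# expands to: cp notes.txt notes.txt20101214_140203

$ buf

bash: 1: filename not specified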

alias tarred='( ( D=`builtin pwd`; F=$(date +$HOME/`sed "s,[/ ],#,g" <<< ${D/${HOME}/}`#-%F.tgz); tar --ignore-failed-read --transform "s,^${D%/*},`date +${D%/*}.%F`,S" -czPf "$F" "$D" &>/dev/null ) & )'
2010-11-18 06:24:34
User: AskApache
Functions: alias date tar
7

This is freaking sweet!!! Here is the full alias (I didn't want to cause display problems on commandlinefu.com's homepage):

alias tarred='( ( D=`builtin pwd`; F=$(date +$HOME/`sed "s,[/ ],#,g" <<< ${D/${HOME}/}`#-%F.tgz); S=$SECONDS; tar --ignore-failed-read --transform "s,^${D%/*},`date +${D%/*}.%F`,S" -czPf "$F" "$D" && logger -s "Tarred $D to $F in $(($SECONDS-$S)) seconds" ) & )'

Creates a .tgz archive of whatever directory it is run from, in the background, detached from the current shell, so it still completes even if you log out. You can also run this as many times as you want: if the .tgz archive already exists, it is just moved to a numbered backup (tar's '--backup=numbered'). The coolest part is the transformation performed by tar and sed, so that the archive file name is created automatically and extracting the archive is completely safe thanks to the transform expression.

If you archive, let's say, /home/tombdigger/new-stuff-to-backup/, it will create the archive /home/#home#tombdigger#new-stuff-to-backup#-2010-11-18.tgz. Then when you extract it with tar -xvzf #home#tombdigger#new-stuff-to-backup#-2010-11-18.tgz, instead of overwriting an existing /home/tombdigger/new-stuff-to-backup/ directory, it extracts to /home/tombdigger/new-stuff-to-backup.2010-11-18/.

Basically, the tar archive filename is the PWD with every '/' replaced by '#', and the date is appended to the name so that multiple archives are easily managed. This example saves all archives to $HOME/archive-name.tgz, but I have a $BKDIR variable holding the backup location for each shell user, so I just replaced HOME with BKDIR in the alias.

So when I ran this in /opt/askapache/SOURCE/lockfile-progs-0.1.11/ the archive was created at /askapache-bk/#opt#askapache#SOURCE#lockfile-progs-0.1.11#-2010-11-18.tgz

Upon completion, it uses the standard logger tool to report completion to syslog and stderr (printed to your terminal); just remove that part if you don't want it, or drop the '-s' option from logger to keep the logs only in syslog and off your terminal.

Here's how my syslog server recorded this..

2010-11-18T00:44:13-05:00 gravedigger.askapache.com (127.0.0.5) [user] [notice] (logger:) Tarred /opt/askapache/SOURCE/lockfile-progs-0.1.11 to /askapache-bk/tarred/#opt#SOURCE#lockfile-progs-0.1.11#-2010-11-18.tgz in 4 seconds

Caveats

Really, this is very robust and nearly foolproof. The only issue I ever have with it (and I've been using it for years on my web servers) is when a file changes in the directory while it is being archived: you get a warning message, and the archive may have a problem for the changed file. That happens when running it in a logs directory, a temp dir, and so on; it's the only issue I've ever had, and really nothing more than a heads up.

Advanced:

This is a simple alias, and very useful, as it works on basically every Linux box with a semi-current tar, GNU coreutils, bash, and sed. But if you want to customize it or pass parameters (like a directory to back up instead of the pwd), check out the function below. It is what I created the alias from, by the way, replacing my aa_status function with logger and adding a $SECONDS runtime instead of using tar's --totals.

function tarred ()
{
    # NOTE: the F= line was truncated in the original posting; the tail
    # (the <<< herestring and the .tgz suffix) is reconstructed from the alias above.
    local GZIP='--fast' PWD=${1:-`pwd`} F=$(date +${BKDIR}/%m-%d-%g-%H%M-`sed -u 's/[\/\ ]/#/g' <<< ${PWD/${HOME}/}`.tgz)
    [[ ! -r "$PWD" ]] && echo "Bad permissions for $PWD" 1>&2 && return 2;
    ( ( tar --totals --ignore-failed-read --transform "s@^${PWD%/*}@`date +${PWD%/*}.%m-%d-%g`@S" -czPf $F $PWD && aa_status "Completed Tarp of $PWD to $F" ) & )
}

#From my .bash_profile http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html

7zr a -mx=9 -ms=on -mhc=on -mtc=off db_backup.sql.7z db_dump.sql
2010-10-22 21:05:58
User: mandx
1

This is the best option combination I've found for compressing my database dumps. It's possible that other options or values might improve the compression ratio, but these are the ones that worked; the syntax for 7zr is a little messy...
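To unpack such an archive later, 7zr's extract command is simply (same filename as above):

7zr x db_backup.sql.7z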

mksnap_ffs /var /var/.snap/snap_var_`date "+%Y-%m-%d"` ; mdconfig -a -t vnode -f /var/.snap/snap_var_`date "+%Y-%m-%d"` -u 1; mount -r /dev/md1 /mnt
2010-09-18 11:37:03
User: bugmenot
Functions: mount
0

(FreeBSD)

Once you've made the snapshot you can resume any stopped services and then back up the file system (using the snapshot) without having to worry about changed files.

When finished, the snapshot can be removed :

umount /mnt

mdconfig -d -u 1

rm /var/.snap/snap_var_`date "+%Y-%m-%d"`
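Since the date-stamped snapshot path appears three times, here is the same workflow sketched with the name held in a variable (nothing new, just factored):

SNAP=/var/.snap/snap_var_$(date "+%Y-%m-%d")

mksnap_ffs /var "$SNAP"

mdconfig -a -t vnode -f "$SNAP" -u 1

mount -r /dev/md1 /mnt

# ... back up /mnt here, then clean up ...

umount /mnt ; mdconfig -d -u 1 ; rm "$SNAP"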

rsync -av --link-dest=$(ls -1d /backup/*/ | tail -1) /data/ /backup/$(date +%Y%m%d%H%M)/
2010-08-05 19:36:24
User: dooblem
Functions: date ls rsync tail
Tags: backup rsync
1

'data' is the directory to back up; 'backup' is the directory storing the snapshots.

Back up files on a regular basis using hard links. Very efficient and quick, and the backup data is directly accessible.

Same as explained here:

http://blog.interlinked.org/tutorials/rsync_time_machine.html

in one line.

Use du to check the size of your backups: the first backup accounts for all the space, while each later backup accounts only for files that have changed.
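That works because GNU du counts each hard-linked inode only once per invocation, so a quick check (paths as in the description) might be:

du -sh /backup/*/

# the oldest snapshot shows the full size; newer ones only their changed files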

dump -0Lauf - /dev/adXsYz | gzip > /path/to/adXsYz.dump.gz
2010-07-19 00:54:40
Functions: dump gzip
2

Opens a snapshot of a live UFS2 filesystem and runs dump to generate a full filesystem backup, which is piped through gzip. The filesystem must support snapshots and have a .snap directory in the filesystem root.

To restore the backup, one can do

zcat /path/to/adXsYz.dump.gz | restore -rf -
rsync --delete -az -e 'ssh -c blowfish -i /your/.ssh/backup_key -ax' /path/to/backup remote-host:/dest/path/
split -b4m file.tgz file.tgz. ; for i in file.tgz.*; do SUBJ="Backup Archive"; MSG="Archive File Attached"; echo $MSG | mutt -a $i -s "$SUBJ" YourEmail@(E)mail.com; done
2010-03-20 16:49:19
User: tboulay
Functions: echo split
-1

This is just a little snippet to split a large file into smaller chunks (4MB in this example) and then send the chunks off to (e)mail for archival using mutt.

I usually encrypt the file before splitting it using openssl:

openssl des3 -salt -k <password> -in file.tgz -out file.tgz.des3

To restore, simply save attachments and rejoin them using:

cat file.tgz.* > output_name.tgz

and if encrypted, decrypt using:

openssl des3 -d -salt -k <password> -in file.tgz.des3 -out file.tgz

edit: (changed "g" to "e" for political correctness)

(svnadmin dump /path/to/repo | gzip --best > /tmp/svn-backup.gz) 2>&1 | mutt -s "SVN backup `date +\%m/\%d/\%Y`" -a /tmp/svn-backup.gz emailaddress
2010-03-08 05:49:01
User: max
Functions: dump gzip
1

Dumps a compressed svn backup to a file and emails it, with any error messages included as the body of the email.
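The backslash-escaped % in the date format hints this was written for cron, where a bare % starts a new line; a hypothetical weekly crontab entry (schedule assumed) would be:

0 3 * * 0 (svnadmin dump /path/to/repo | gzip --best > /tmp/svn-backup.gz) 2>&1 | mutt -s "SVN backup `date +\%m/\%d/\%Y`" -a /tmp/svn-backup.gz emailaddress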

tar -zcvpf backup_`date +"%Y%m%d_%H%M%S"`.tar.gz `find <target> -atime +5 -type f` 2> /dev/null | parallel -X rm -f
2010-01-28 12:41:41
Functions: rm tar
-3

This deals nicely with file names containing special characters (space, ' or ").

Parallel is from https://savannah.nongnu.org/projects/parallel/

find . -iname "*.jpg" -print0 | tr '[A-Z]' '[a-z]' | xargs -0 cp --backup=numbered -dp -u --target-directory {location} &
2009-12-10 08:47:04
User: oracular
Functions: cp find tr xargs
4

Use if you have pictures all over the place and you want to copy them to a central location

Synopsis:

Find jpg files

translate all file names to lowercase

back up existing files, don't overwrite, preserve mode, ownership and timestamps

copy to a central location

backup() { for i in "$@"; do cp -va $i $i.$(date +%Y%m%d-%H%M%S); done }
2009-11-10 20:59:45
User: polaco
Functions: cp date
Tags: backup copy date
4

This function creates date-based backups of the given files. It copies each file to the same place as the original, adding an extension with the timestamp of the copy in the format YearMonthDay-HourMinuteSecond.
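Because it loops over "$@", several files can be backed up in one call, and cp -va prints each copy; a run might look like this (names and timestamps made up):

$ backup notes.txt todo.txt

'notes.txt' -> 'notes.txt.20091110-205945'

'todo.txt' -> 'todo.txt.20091110-205945'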

tar pzcvf /result_path/result.tar.gz /target_path/target_folder
2009-11-10 11:17:00
User: CafeNinja
Functions: tar
0

The command as given would create the file "/result_path/result.tar.gz" with the contents of the target folder, including permissions and sub-folder structure.

mysqldump -uUSERNAME -pPASSWORD database | gzip > /path/to/db/files/db-backup-`date +%Y-%m-%d`.sql.gz ;find /path/to/db/files/* -mtime +5 -exec rm {} \;
python fsrecovery.py -P 0 -f <path-to-instance>/Data.fs <path-to-instance-destination>/Data.fs.packed
tar czf /path/archive_of_foo.`date -I`.tgz /path/foo
2009-09-07 05:45:33
Functions: tar
Tags: backup tar
1

Creates a compressed tar archive of the files in /path/foo and writes it to a timestamped filename in /path.

tar --create --file /path/$HOSTNAME-my_name_file-$(date -I).tar.gz --atime-preserve -p -P --same-owner -z /path/
2009-09-07 04:52:12
User: Odin_sv
Functions: date tar
Tags: backup tar
1

Use tar to create a backup whose filename records the host and date of creation.

dd if=/dev/cdrom of=whatever.iso
2009-09-05 09:19:41
User: 0disse0
Functions: dd
Tags: backup dd iso dvd
8

A dear friend of mine asked me how to copy a DVD to a hard drive. If you want to make a copy of the ISO image that was burned to a CD or DVD, insert that medium into your CD/DVD drive and (assuming /dev/cdrom is associated with your computer's CD drive) type the following command
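To spot-check the resulting image, one option (not part of the original tip; mount point assumed) is to loop-mount it read-only:

mount -o loop,ro whatever.iso /mnt/iso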

sudo dd if=/dev/hda1 of=/dev/hdb2
2009-09-05 09:16:52
User: 0disse0
Functions: dd sudo
5

This command clones the first partition of the primary master IDE drive to the second partition of the primary slave IDE drive (!!! back up all data before trying anything like this !!!)

eval $(sed -n "s/^d[^D]*DB_\([NUPH]\)[ASO].*',[^']*'\([^']*\)'.*/_\1='\2'/p" wp-config.php) && mysqldump --opt --add-drop-table -u$_U -p$_P -h$_H $_N | gpg -er AskApache >`date +%m%d%y-%H%M.$_N.sqls`
2009-08-18 07:03:08
User: AskApache
Functions: eval gpg sed
3

The coolest way I've found to back up a WordPress MySQL database using encryption, with local variables created directly from the wp-config.php file so that you don't have to type the credentials, which would let someone sniffing your terminal or viewing your shell history see your info.

I use a variation of this on my servers that host hundreds of WordPress installs and databases, using a find command to locate each wp-config.php and passing them through xargs to my function.
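That find/xargs variation isn't shown, but a sketch might look like the following; the function name, the /var/www web root, and the export -f plumbing are assumptions, while the extraction, dump, and gpg recipient stay as in the one-liner:

wpdump () {
  # pull DB_NAME/USER/PASSWORD/HOST out of the given wp-config.php, then dump and encrypt
  eval $(sed -n "s/^d[^D]*DB_\([NUPH]\)[ASO].*',[^']*'\([^']*\)'.*/_\1='\2'/p" "$1") &&
  mysqldump --opt --add-drop-table -u$_U -p$_P -h$_H $_N | gpg -er AskApache >`date +%m%d%y-%H%M.$_N.sqls`
}

export -f wpdump   # so the shells spawned by xargs can see the function

find /var/www -name wp-config.php -print0 | xargs -0 -n1 bash -c 'wpdump "$1"' _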

curl http://www.commandlinefu.com/commands/by/<your username>/rss|gzip ->commandlinefu-contribs-backup-$(date +%Y-%m-%d-%H.%M.%S).rss.gz
2009-08-10 12:43:33
Functions: date gzip
10

Use `zless` to read the content of your *rss.gz file:

zless commandlinefu-contribs-backup-2009-08-10-07.40.39.rss.gz
lomount -diskimage /path/to/your/backup.img -partition 1 /mnt/foo
4

Instead of calculating the offset and providing an offset option to mount, let lomount do the job for you by just providing the partition number you would like to loop mount.
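For comparison, the manual route the description avoids would be something like this; the 32256-byte offset (63 sectors x 512 bytes, the classic start of a first partition) is an assumption about the image layout:

mount -o loop,offset=32256 /path/to/your/backup.img /mnt/foo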

/opt/psa/bin/pleskbackup server -v --output-file=plesk_server.bak