Commands tagged backup (67)

  • Remember to back up everything before changing it, so you can restore it all to normal.


    16
    cp httpd.conf{,.bk}
    ideivid · 2011-08-15 16:43:53 4
  • This will create an exact duplicate image of your hard drive that you can then restore by simply reversing the "if" and "of" locations: sudo dd if=/media/disk/backup/sda.backup of=/dev/sda Alternatively, you can use an SSH connection to do your backups: dd if=/dev/sda | ssh user@ssh.server.com dd of=~/backup/sda.backup


    15
    sudo dd if=/dev/sda of=/media/disk/backup/sda.backup
    bandit36 · 2009-02-27 20:23:37 2
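    A matching restore over SSH (a sketch, not part of the original post; be certain of the target device before writing to it):
    ssh user@ssh.server.com "dd if=~/backup/sda.backup" | sudo dd of=/dev/sda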

  • 14
    curl -u username -o bookmarks.xml https://api.del.icio.us/v1/posts/all
    avi4now · 2009-04-06 13:54:15 3
  • This will back up the _contents_ of /media/SOURCE to /media/TARGET, where TARGET is formatted with NTFS. The --modify-window option lets rsync ignore the less accurate timestamps of NTFS.


    13
    rsync -rtvu --modify-window=1 --progress /media/SOURCE/ /media/TARGET/
    0x2142 · 2009-07-05 07:40:10 0
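    To preview what would be transferred before committing, rsync's standard -n (--dry-run) flag can be added first:
    rsync -rtvun --modify-window=1 /media/SOURCE/ /media/TARGET/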
  • Use `zless` to read the content of your *rss.gz file: zless commandlinefu-contribs-backup-2009-08-10-07.40.39.rss.gz


    10
    curl http://www.commandlinefu.com/commands/by/<your username>/rss|gzip ->commandlinefu-contribs-backup-$(date +%Y-%m-%d-%H.%M.%S).rss.gz
    linuxrawkstar · 2009-08-10 12:43:33 0
  • Connect to a remote server over FTP via a FUSE filesystem, rsync the remote folder to a local one, then unmount the remote FTP server (the FUSE mount). It can be split into 3 separate commands, as shown below; you need curlftpfs and rsync installed.


    9
    curlftpfs ftp://YourUsername:YourPassword@YourFTPServerURL /tmp/remote-website/ && rsync -av /tmp/remote-website/* /usr/local/data_latest && umount /tmp/remote-website
    nadavkav · 2009-03-31 18:01:00 3
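    Split into its 3 separate steps:
    curlftpfs ftp://YourUsername:YourPassword@YourFTPServerURL /tmp/remote-website/
    rsync -av /tmp/remote-website/* /usr/local/data_latest
    umount /tmp/remote-website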
  • A dear friend of mine asked me how to copy a DVD to your hard drive. If you want to make a copy of the ISO image that was burned to a CD or DVD, insert that medium into your CD/DVD drive and (assuming /dev/cdrom is associated with your computer's CD drive) type the following command.


    9
    dd if=/dev/cdrom of=whatever.iso
    0disse0 · 2009-09-05 09:19:41 3
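    To verify the copy afterwards, compare the disc against the image, limited to the image's size (a sketch; -n and -c%s are GNU cmp/stat options):
    cmp -n "$(stat -c%s whatever.iso)" /dev/cdrom whatever.iso && echo "copy verified"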
  • Suppose you made a backup of your hard disk with dd: dd if=/dev/sda of=/mnt/disk/backup.img This command enables you to mount a partition from inside that image, so you can access your files directly. Substitute PARTITION=1 with the number of the partition you want to mount (as returned by sfdisk -d yourfile.img).


    8
    INFILE=/path/to/your/backup.img; MOUNTPT=/mnt/foo; PARTITION=1; mount "$INFILE" "$MOUNTPT" -o loop,offset=$[ `/sbin/sfdisk -d "$INFILE" | grep "start=" | head -n $PARTITION | tail -n1 | sed 's/.*start=[ ]*//' | sed 's/,.*//'` * 512 ]
    Alanceil · 2009-03-06 21:29:13 3
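    On a current util-linux, losetup can compute the offsets itself (an alternative sketch, not the author's method, reusing the INFILE and MOUNTPT variables from the command above):
    LOOPDEV=$(sudo losetup -fP --show "$INFILE"); sudo mount "${LOOPDEV}p1" "$MOUNTPT"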
  • This is freaking sweet!!! Here is the full alias (I didn't want to cause display problems on commandlinefu.com's homepage):

    alias tarred='( ( D=`builtin pwd`; F=$(date +$HOME/`sed "s,[/ ],#,g" <<< ${D/${HOME}/}`#-%F.tgz); S=$SECONDS; tar --ignore-failed-read --transform "s,^${D%/*},`date +${D%/*}.%F`,S" -czPf "$F" "$D" && logger -s "Tarred $D to $F in $(($SECONDS-$S)) seconds" ) & )'

    It creates a .tgz archive of whatever directory it is run from, in the background, detached from the current shell, so if you log out it will still complete. You can also run it as many times as you want: if the archive .tgz already exists, it is simply moved to a numbered backup ('--backup=numbered').

    The coolest part of this is the transformation performed by tar and sed, so that the archive file names are created automatically, and extracting the archive file is completely safe thanks to the transform command. If you archive, let's say, /home/tombdigger/new-stuff-to-backup/, it will create the archive /home/#home#tombdigger#new-stuff-to-backup#-2010-11-18.tgz. Then when you extract it, like tar -xvzf #home#tombdigger#new-stuff-to-backup#-2010-11-18.tgz, instead of overwriting an existing /home/tombdigger/new-stuff-to-backup/ directory it extracts to /home/tombdigger/new-stuff-to-backup.2010-11-18/. Basically, the tar archive filename is the PWD with all '/' replaced with '#', and the date is appended to the name so that multiple archives are easily managed.

    This example saves all archives to $HOME/archive-name.tgz, but I have a $BKDIR variable with my backup location for each shell user, so I just replaced HOME with BKDIR in the alias. So when I ran this in /opt/askapache/SOURCE/lockfile-progs-0.1.11/ the archive was created at /askapache-bk/#opt#askapache#SOURCE#lockfile-progs-0.1.11#-2010-11-18.tgz

    Upon completion, it uses the universal logger tool to report completion to syslog and stderr (printed to your terminal); just remove that part if you don't want it, or remove the '-s' option from logger to keep the logs only in syslog and off your terminal. Here's how my syslog server recorded this:

    2010-11-18T00:44:13-05:00 gravedigger.askapache.com (127.0.0.5) [user] [notice] (logger:) Tarred /opt/askapache/SOURCE/lockfile-progs-0.1.11 to /askapache-bk/tarred/#opt#SOURCE#lockfile-progs-0.1.11#-2010-11-18.tgz in 4 seconds

    Caveats: Really this is very robust and foolproof; the only issue I ever have with it (I've been using this for years on my web servers) is if you run it in a directory and then a file changes in that directory, you get a warning message and your archive might have a problem for the changed file. This happens when running it in a logs directory, a temp dir, etc. That's the only issue I've ever had, really nothing more than a heads-up.

    Advanced: This is a simple alias, and very useful as it works on basically every Linux box with semi-current tar, GNU coreutils, bash, and sed. But if you want to customize it or pass parameters (like a dir to back up instead of the pwd), check out the function I created the alias from (I replaced its aa_status call with logger and added $SECONDS runtime instead of using tar's --totals):

    tarred () { local GZIP='--fast' PWD=${1:-`pwd`} F=$(date +${BKDIR}/%m-%d-%g-%H%M-`sed -u 's/[\/\ ]/#/g' <<< ${PWD/${HOME}/}`.tgz); [[ ! -r "$PWD" ]] && echo "Bad permissions for $PWD" 1>&2 && return 2; ( ( tar --totals --ignore-failed-read --transform "s@^${PWD%/*}@`date +${PWD%/*}.%m-%d-%g`@S" -czPf "$F" "$PWD" && aa_status "Completed Tarp of $PWD to $F" ) & ) } #From my .bash_profile http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html


    8
    alias tarred='( ( D=`builtin pwd`; F=$(date +$HOME/`sed "s,[/ ],#,g" <<< ${D/${HOME}/}`#-%F.tgz); tar --ignore-failed-read --transform "s,^${D%/*},`date +${D%/*}.%F`,S" -czPf "$F" "$D" &>/dev/null ) & )'
    AskApache · 2010-11-18 06:24:34 0
  • Create an archive of files with access time older than 5 days, and remove the original files (a whitespace-safe variant is sketched below).


    7
    tar -zcvpf backup_`date +"%Y%m%d_%H%M%S"`.tar.gz `find <target> -atime +5` 2> /dev/null | xargs rm -fr ;
    angleto · 2009-05-26 17:15:52 2
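    A whitespace-safe variant using GNU tar's --null/-T and --remove-files (a sketch, not the original submission):
    find <target> -atime +5 -print0 | tar --null -T - --remove-files -zcvpf backup_$(date +%Y%m%d_%H%M%S).tar.gz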
  • This is a Bash feature (brace expansion). The command will create a backup of "filename" called "filename.DATE", where DATE is the current day in %Y%m%d format (year, month, and day numbers run together).


    6
    cp filename{,.`date +%Y%m%d`}
    fernandomerces · 2011-04-02 06:41:26 0
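    For example, on 2011-04-02 the brace expansion turns this into:
    cp filename filename.20110402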
  • This will connect to your hosted website service through the cPanel interface and use its backup tool to back up and download the entire website locally. (Do not forget to replace YourUsername, YourPassword, and YourWebsiteUrl for it to work.)


    5
    wget --http-user=YourUsername --http-password=YourPassword http://YourWebsiteUrl:2082/getbackup/backup-YourWebsiteUrl-`date +"%-m-%d-%Y"`.tar.gz
    nadavkav · 2009-03-31 17:50:41 4
  • This command clones the first partition of the primary master IDE drive to the second partition of the primary slave IDE drive (!!! back up all data before trying anything like this !!!)


    5
    sudo dd if=/dev/hda1 of=/dev/hdb2
    0disse0 · 2009-09-05 09:16:52 0

  • 5
    mysqldump -uUSERNAME -pPASSWORD database | gzip > /path/to/db/files/db-backup-`date +%Y-%m-%d`.sql.gz ;find /path/to/db/files/* -mtime +5 -exec rm {} \;
    nadavkav · 2009-10-28 19:49:39 1
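    To restore one of these dumps (a sketch, using the same placeholder credentials and an illustrative file date):
    gunzip < /path/to/db/files/db-backup-2009-10-28.sql.gz | mysql -uUSERNAME -pPASSWORD database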
  • This function creates date-based backups of the given files. It copies each file to the same place as the original, but with an additional extension that is the timestamp of the copy, in the format YearMonthDay-HourMinuteSecond.


    5
    backup() { for i in "$@"; do cp -va $i $i.$(date +%Y%m%d-%H%M%S); done }
    polaco · 2009-11-10 20:59:45 2
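    Usage example (hypothetical file name):
    backup config.php    # copies config.php to config.php.20091110-205945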
  • Sometimes in a hurry you may move or copy a file using an already existing file name. If you have aliased the cp and mv commands with the -i option, you are prompted for confirmation before overwriting, but if your aliases aren't there you will lose the target file! The -b option forces the mv command to check whether the destination file already exists; if it is already there, a backup copy ending in ~ is created.


    5
    mv -b old_file_name new_and_already_existent_file_name
    ztank1013 · 2011-09-08 23:57:15 0
  • Apart from an exact copy of your recent contents, this also keeps all earlier versions of files and folders that were modified or deleted. Inspired by the excellent EVACopy http://evacopy.sourceforge.net


    5
    backup() { source=$1 ; rsync --relative --force --ignore-errors --no-perms --chmod=ugo=rwX --delete --backup --backup-dir=$(date +%Y%m%d-%H%M%S)_Backup --whole-file -a -v $source/ ~/Backup ; } ; backup /source_folder_to_backup
    pascalv · 2018-08-02 21:27:29 0
  • Instead of calculating the offset and providing an offset option to mount, let lomount do the job for you by just providing the partition number you would like to loop mount.


    4
    lomount -diskimage /path/to/your/backup.img -partition 1 /mnt/foo
    olorin · 2009-07-22 11:32:52 1
  • The coolest way I've found to back up a WordPress MySQL database using encryption, with local variables created directly from the wp-config.php file so that you don't have to type them (typing them would allow someone sniffing your terminal or viewing your shell history to see your info). I use a variation of this for my servers that have hundreds of WordPress installs and databases, by using a find command for the wp-config.php file and passing the results through xargs to my function.


    4
    eval $(sed -n "s/^d[^D]*DB_\([NUPH]\)[ASO].*',[^']*'\([^']*\)'.*/_\1='\2'/p" wp-config.php) && mysqldump --opt --add-drop-table -u$_U -p$_P -h$_H $_N | gpg -er AskApache >`date +%m%d%y-%H%M.$_N.sqls`
    AskApache · 2009-08-18 07:03:08 1
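    To restore, decrypt with gpg and feed the SQL back to mysql (a sketch; the file name follows the %m%d%y-%H%M date format above with a hypothetical DBNAME, and the _U/_P/_H/_N variables come from the same eval):
    gpg -d 081809-0703.DBNAME.sqls | mysql -u$_U -p$_P -h$_H $_N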
  • Use if you have pictures all over the place and you want to copy them to a central location. Synopsis: find jpg files; translate all file names to lowercase; back up existing files, don't overwrite, preserve mode, ownership, and timestamps; copy to a central location.


    4
    find . -iname "*.jpg" -print0 | tr '[A-Z]' '[a-z]' | xargs -0 cp --backup=numbered -dp -u --target-directory {location} &
    oracular · 2009-12-10 08:47:04 0
  • Prior to working on/modifying a file, use the 'install -m' command, which can copy files, create directories, and set their permissions at the same time. Useful when you are working in the public_html folder and need to keep the cp'd file hidden.


    4
    install -m 0400 foo bar/
    op4 · 2015-03-02 13:20:38 0
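    install can also create the destination directories in the same step; -D is GNU install's flag for that (the path here is illustrative, not from the original):
    install -m 0400 -D foo public_html/.hidden/foo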
  • First, use find to locate directories exactly one level below the current directory, then create a tar file using each directory as the basename.


    4
    find . -maxdepth 1 -mindepth 1 -type d -exec tar cvf {}.tar {} \;
    unixmonkey64021 · 2020-06-21 14:34:44 30
  • 1. You don't need to prepend the year with 20 - just use Y instead of y. 2. You may want to make your function a bit more secure: buf () { cp ${1?filename not specified}{,$(date +%Y%m%d_%H%M%S)}; }


    3
    buf () { cp $1{,$(date +%Y%m%d_%H%M%S)}; }
    unefunge · 2010-12-14 14:02:03 0
  • This will compress the root directory to an external hard drive and split it into parts once it reaches the 4 GB filesystem limit. You can simply restore it with: restore ivf /media/My\ Passport/Fedora10bckup/root_dump_fedora


    2
    dump -0 -M -B 4000000 -f /media/My\ Passport/Fedora10bckup/root_dump_fedora -z2 /
    luqmanux · 2009-07-02 20:25:22 0
  • Opens a snapshot of a live UFS2 filesystem and runs dump to generate a full filesystem backup, which is piped through gzip. The filesystem must support snapshots and have a .snap directory in the filesystem root. To restore the backup, one can do: zcat /path/to/adXsYz.dump.gz | restore -rf -


    2
    dump -0Lauf - /dev/adXsYz | gzip > /path/to/adXsYz.dump.gz
    tensorpudding · 2010-07-19 00:54:40 2
