Back up the /etc directory to an archive named after the current date and the machine's hostname, then chown the file to the current user so it can be used without root.
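A minimal sketch along those lines (the archive path and naming pattern are assumptions, not the original entry):
sudo tar -czf "/tmp/etc-$(hostname)-$(date +%F).tar.gz" /etc          # archive /etc, stamped with hostname and date
sudo chown "$USER": "/tmp/etc-$(hostname)-$(date +%F).tar.gz"         # hand the archive over to the invoking user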
Use -xzvf instead of -czvf to decompress/extract; the x (extract) and c (create) operations cannot be combined in one tar invocation.
Access www.kernel.org and download the latest stable Linux kernel release.
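A hedged sketch of one way to do this (the releases.json endpoint and the jq tool are assumptions about the environment, not necessarily what the original command used):
url=$(curl -s https://www.kernel.org/releases.json | jq -r '.releases[] | select(.moniker=="stable") | .source' | head -n1)
wget "$url"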
Create compressed, encrypted backup from $source to $targetfile with password $key and exclude-file $excludefile
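A minimal sketch matching that description, assuming GNU tar and openssl are available; the variable names are taken from the description above:
tar -czf - -X "$excludefile" "$source" | openssl enc -aes-256-cbc -pbkdf2 -pass pass:"$key" -out "$targetfile"
# decrypt and unpack later with: openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:"$key" -in "$targetfile" | tar -xzf -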
I prefer the backquotes `...` command-substitution method to capture the output of a shell command. I'm interested to hear thoughts/opinions on doing it this way; I have never had problems with this method.
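For comparison, both forms below capture command output; which to prefer is largely a matter of style (the $(...) form nests without escaping, which is the usual argument against backquotes):
host=`hostname`        # backquote form
host=$(hostname)       # POSIX $(...) form; nests more cleanly, e.g. $(basename $(pwd))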
The files are automatically uncompressed when they reach the destination machine. This is a fast way to back up your server to your local computer while it's running (shutting down services is recommended). A file named "exclude.txt" is needed at /tmp/, containing the following:
/dev/*
/media/*
/mnt/*
/proc/*
/sys/*
/tmp/*
/home/*/.local/share/Trash
/home/*/.gvfs
/home/*/.cache
/home/*/.thumbnails
/etc/fstab
/lib/modules/*/volatile/.mounted
/var/run/*
/var/lock/*
/var/tmp/*
/var/cache/apt/archives/*
/lost+found/*
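A hedged reconstruction of that kind of transfer (not necessarily the original command): stream a compressed tar of the remote root over ssh, honouring the exclude file on the server, and unpack it locally so the files arrive uncompressed:
mkdir -p ./server-backup
ssh root@server 'tar -czpf - -X /tmp/exclude.txt /' | tar -xzpf - -C ./server-backup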
This will write to TAPE (LTO3-4 in my case) a backup of files/folders. It could be changed to write to DVD/Blu-ray.
Go to the directory where you want to write the output files: cd /bklogs. Enter a name in bkname="Backup1" and the folders/files in tobk="/home /var/www". It will create a tar and write it to the tape drive on /dev/nst0. In the process, it will:
1) generate a sha512 sum of the tar to $bkname.sha512, so you can validate that your data is intact
2) generate a file list of the contents of the tar, with file sizes, to $bkname.lst
3) buffer the tar stream to prevent shoe-shining the tape (I use 4GB for LTO3 (80MB/s), 8GB for LTO4 (120MB/s); 3TB USB3 disks support those speeds, otherwise I use 3x2TB raidz)
4) show buffer in/out speed and used space in the buffer
5) show a progress bar with a time estimate, using pv
ADD: To eject the tape: ; sleep 75; mt-st -f /dev/nst0 rewoffl
TODO:
1) When using old tapes, if the buffer is full and the drive slows down, it means the tape is old and should be replaced instead of wiped and recycled for another backup. Logging where and when it slows down could provide good information on the wear of the tape. I don't know how to get that information from the mbuffer output and to trigger a "This tape slowed down X times at Y1gb, Y2gb, Y3gb down to Zmb/s for a total of 30sec. It would be wise to replace this tape next time you want to write to it."
2) Fix the filesize approximation.
3) Save all the output to $bkname.log, with progress updates as new lines. (Anyone have an idea?)
4) Support spanning over multiple tapes.
5) Replace the tar format with something else (dar?); looking at xar right now (https://code.google.com/p/xar/): XML metadata could contain a per-file checksum, compression algorithm (bzip2, xz, gzip), GnuPG encryption, thumbnails, video previews, image EXIF... But that's another project.
TIP:
1) You can specify the width of the progress bar of pv. If it is longer than the terminal, each line refresh will be written to a new line. That way you can see whether there was a speed slowdown during writing.
2) Remove the v in the tar argument cvf to prevent listing all files added to the archive.
3) You can get tarsum (http://www.guyrutenberg.com/2009/04/29/tarsum-02-a-read-only-version-of-tarsum/) and add >(tarsum --checksum sha256 > $bkname_list.sha256) after the tee to generate checksums of individual files!
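A hedged sketch of the pipeline described above (not the author's exact command; the buffer size and device name are taken from the description, and pv is left without a size estimate):
cd /bklogs
bkname="Backup1"
tobk="/home /var/www"
tar -cpf - $tobk \
  | tee >(sha512sum > "$bkname.sha512") >(tar -tvf - > "$bkname.lst") \
  | pv \
  | mbuffer -m 4G \
  > /dev/nst0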
Copy a local directory to a remote server using ssh+tar (assume server is lame and does not have rsync).
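For instance, a minimal sketch of the technique (host and paths are placeholders):
tar -czf - localdir | ssh user@server 'tar -xzf - -C /destination/path'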
https://twitter.com/westonruter/status/501855721172922369
tar with -p option preserves file permissions
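For example (hedged; file names are placeholders), -p matters mainly on extraction, where non-root users would otherwise get umask-modified permissions:
tar -xpzf backup.tar.gz -C /restore   # -p restores the permission bits stored in the archive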
This command can be added to crontab to run a nightly backup of directories and keep only the last 10 backup files.
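A hedged illustration of such a crontab entry (the paths, schedule, and rotation one-liner are assumptions, not the original command; note the % signs must be escaped inside crontab):
0 2 * * * tar -czf /backups/home-$(date +\%Y\%m\%d).tar.gz /home && ls -1t /backups/home-*.tar.gz | tail -n +11 | xargs -r rm --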
You can create a backup of a directory that excludes the distracting .svn and similar directories with this command.
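With GNU tar, for instance, something along these lines would do it (hedged sketch; --exclude-vcs covers .svn, .git, CVS and friends):
tar --exclude-vcs -czf project.tar.gz project/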
This version uses subshells.
This command tars/gzips all the folders contained in a directory (top-level directories only). Very handy when you have to transfer many folders containing lots of stuff. It can also work with tar only, zip, etc.
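One hedged way to write it, as a bash loop over the top-level directories (names are placeholders):
for d in */; do tar -czf "${d%/}.tar.gz" "$d"; done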
Useful for giving the directory inside the archive a descriptive name instead of the name it has in the project tree.
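With GNU tar this can be done with --transform, for example (hedged sketch; the names are placeholders):
tar -czf project-1.0.tar.gz --transform 's,^trunk,project-1.0,' trunk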
On the receiving machine: netcat -lp 6666 > work.tar.gz
On the sending machine: tar -cvzf - work | nc -q0 192.168.1.102 6666
The function had to be cut down to meet the maximum command length requirements. The full version of the function is:
extract()
{
    if [ -f "$1" ]; then
        case "$1" in
            *.tar.bz2) tar xvjf "$1" ;;
            *.tar.gz)  tar xvzf "$1" ;;
            *.bz2)     bunzip2 "$1" ;;
            *.rar)     unrar x "$1" ;;
            *.gz)      gunzip "$1" ;;
            *.tar)     tar xvf "$1" ;;
            *.tbz2)    tar xvjf "$1" ;;
            *.tgz)     tar xvzf "$1" ;;
            *.zip)     unzip "$1" ;;
            *.Z)       uncompress "$1" ;;
            *.7z)      7z x "$1" ;;
            *)         echo "'$1' cannot be extracted via >extract<" ;;
        esac
    else
        echo "'$1' is not a valid file!"
    fi
}
Note: This is not my original code. I came across it in a forum somewhere a while ago, and it's been such a useful addition to my .bashrc file, that I thought it worth sharing.
Removes the files and directories listed by the tar tf command. Errors may occur because directories can be deleted before their children.
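A hedged sketch of the idea (not the original entry); reversing the listing with tac removes files before their parent directories, which avoids most of those errors (assumes GNU xargs for -d):
tar tf archive.tar | tac | xargs -d '\n' rm -rf --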
Pack a directory into a tar.gz archive.
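A minimal, hedged example of that (the names are placeholders):
tar -czvf archive.tar.gz directory/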