Clone a root filesystem without being vulnerable to interruptions: because rsync can resume, a partial or interrupted copy can simply be re-run. Useful when moving a running system to a new partition. Also works as a solid backup solution.
-a: keep file permissions and other attributes
--no-whole-file: use rsync's delta-transfer algorithm
--inplace: write the updated data directly to the destination file
Optional: add --remove-source-files to move instead of copy
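A minimal sketch of such a clone using the options above (the paths are placeholders; /mnt/newroot is a hypothetical mount point for the new partition, and the trailing slashes make rsync copy directory contents rather than the directories themselves):

```shell
# Clone a root filesystem to a mounted target partition.
# -a preserves permissions, ownership and timestamps; --no-whole-file and
# --inplace update only changed blocks, so an interrupted run can resume.
rsync -a --no-whole-file --inplace /source/ /mnt/newroot/
```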
Useful when upgrading my Linux distro and trying to copy only "settings" from the old home folder to the new one.
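A sketch of that kind of selective copy (the paths and the particular dotfiles chosen are just illustrative examples, not part of the original tip):

```shell
# Copy only selected "settings" from the old home to the new one.
# /mnt/oldhome and /home/user are hypothetical paths.
rsync -a /mnt/oldhome/.config /mnt/oldhome/.bashrc /home/user/
```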
The script has to be made executable! SRC and DEST are relative Unix paths; olds and news are the terms to be replaced. Very helpful for syncing source folders kept in different SCMs. If you don't like this one, just use rsync...
rsync will copy the source directory into the destination, and any subsequent run will synchronize only the changes from the source.
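For example (directory names are placeholders; the trailing slash on the source means "contents of"):

```shell
# First run copies everything; later runs transfer only what changed.
rsync -av source_dir/ destination_dir/
```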
Copy files and directories in parallel using GNU parallel. It is faster.
It has 2 commands:
- First - Create dir structure in /BKP
find Files/ -type d | parallel 'mkdir -p /BKP/{}'
- Second - Copy files into the structure created
find Files/ -type f | parallel 'rsync -a {} /BKP/$(dirname {})'
- Great for backups!
- Can use "rsync" or "cp".
- Compare with a simple "rsync" or "cp"!
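The simple single-process equivalent mentioned above, for comparison (same Files/ and /BKP paths as in the tip):

```shell
# One rsync does both steps: creates the directory tree and copies files.
rsync -a Files/ /BKP/Files/
```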
This line unbuffers the interactive output of rsync's --progress flag, creating a new line for every update. The output can then be consumed by a script to trigger actions (or possibly piped into a GUI generator for a progress bar).
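One common way to do this (a sketch, with placeholder paths): rsync redraws its progress line in place using carriage returns, so translating \r into \n yields one line per update.

```shell
# Turn rsync's in-place progress updates into separate lines.
# stdbuf -oL forces line buffering so a consuming script sees updates live.
rsync -a --progress source_dir/ destination_dir/ | stdbuf -oL tr '\r' '\n'
```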
While editing a project under git, it is sometimes nice to sync changes immediately to a test machine. This command takes care of that if you have inotifywait installed on the development machine. Note the -R (relative) option to rsync: with rsync -R foo/bar/baz user@host:dest/dir/ it will put 'baz' in dest/dir/foo/bar/, which is what we want. This can be turned into a function for additional flexibility:
function gitwatch() {
    if [ -z "$1" ]; then
        echo "You must provide a rsync destination"
        return
    fi
    while true; do
        rsync -vR $(git ls-files | inotifywait -q -e modify -e attrib -e close_write --fromfile - --format '%w') "$1"
    done
}
This command allows you to mirror folders or files with rsync using a secure SSH channel with a forced HMAC integrity algorithm. Use this if you are absolutely adamant about preserving data integrity while mirroring a set of files. --partial is for resumability.
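A sketch of such an invocation (the host, paths, and the specific MAC name are assumptions; check `ssh -Q mac` for what your OpenSSH build supports):

```shell
# Force a specific HMAC on the SSH transport used by rsync.
# user@mirrorhost and both paths are hypothetical.
RSYNC_RSH='ssh -o MACs=hmac-sha2-512-etm@openssh.com'
rsync -av --partial -e "$RSYNC_RSH" /data/ user@mirrorhost:/mirror/
```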
You forgot to use pv or rsync and want to know how much has been copied.
Copy a local directory to a remote server using ssh+tar (assume server is lame and does not have rsync).
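A sketch of the tar-over-ssh pipeline (user, host, and paths are placeholders):

```shell
# Stream a gzipped tar over ssh; the remote side only needs tar, not rsync.
tar czf - -C /local/parent mydir | ssh user@host 'tar xzf - -C /remote/dest'
```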
Useful when you need to backup/copy/sync a folder over ssh with a non-standard port number.
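For example, with sshd listening on port 2222 (the port, host, and paths are placeholders):

```shell
# Tell rsync's ssh transport to use the non-standard port.
rsync -av -e 'ssh -p 2222' /local/folder/ user@host:/remote/folder/
```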
Functionally the same as the Microsoft Robocopy (https://en.wikipedia.org/wiki/Robocopy) command below but with the benefits of compression and optionally specifying a user.
robocopy /e [//host]/source/path [//host]/destination/path
Options:
-a: archive mode - recursive, copy symlinks as symlinks, preserve permissions, preserve modification times, preserve group, preserve owner, preserve device files and special files
-hh: Numbers in human-readable K=1024 format. Single "h" will produce human-readable K=1000 format
-m: don't copy empty directories
-z: use compression (if both source and destination are local it's faster to omit this)
--progress: Shows progress during the transfer and implies --verbose (verbose output)
--stats: Summary after the transfer stops
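Putting those options together, the rsync side looks like this (host and paths are placeholders mirroring the robocopy form above; drop the `[user@]host:` prefix for a purely local copy):

```shell
# rsync counterpart to the robocopy call: recursive archive copy,
# human-readable K=1024 sizes, skip empty dirs, compress, progress + summary.
rsync -a -hh -m -z --progress --stats [user@]host:/source/path /destination/path
```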
Include in your .bashrc
To help store and keep important but rarely used commands, I resorted to this: a basic for loop which, when fed separate commands as input, searches the history, and any references to that command or string get appended to a file named [command name]_hist.txt.
Revising it to the following to include root/sudo'd commands is probably critical; the output above reflects the change, shown here:
for i in docker elinks ufw fail2ban awk sed grep diff nginx apt bash for function bower github rsync sshfs who scp sftp tugboat aws pip npm ssh mysql php 8000 8080 3000 python serve s3ql s3cmd s3api s3 bash init wget; do
    grep -i "$i" /home/ray/.bash_history >> /home/ray/histories/"${i}"_hist.txt
    sudo cat /root/.bash_history | grep -i "$i" >> /home/ray/histories/"${i}"_sudo_hist.txt
done
then a simple more to look for a particular result
more -s -40 -p -f -d tugboat*txt
Simple, solved my problem, and alerted me to a lack of certain appearances of commands that signals a bit of an issue. Not so sold on the usefulness as to warrant a bash function or further convenience or logic; we shall see. Could use some tweaking, but what commands don't!
With these options rsync won't waste time copying over files that are already present and will show you what's going on. Optionally you can add --dry-run to see what would be changed without actually changing anything.
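For example, a sketch of such a preview (paths are placeholders; -u is one way to skip files already up to date at the destination, and -n is the short form of --dry-run):

```shell
# Report what an update-only sync would transfer, without changing anything.
rsync -avu --dry-run source_dir/ destination_dir/
```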
Built-in command to see the last command executed by some users.
Run rsync over ssh to a machine where root login is not allowed. You need "ForwardAgent yes" in your ssh config, and you must run ssh-add before you log in to the machine you want to sync the files to.
Especially useful while syncing to an Amazon EC2 instance. -avz stands for archive, verbose, compress.
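A sketch for the EC2 case (the key file, user, host, and paths are placeholders; EC2 instances typically require key-based login):

```shell
# -a archive, -v verbose, -z compress over the wire.
rsync -avz -e 'ssh -i ~/.ssh/mykey.pem' project/ ec2-user@ec2-host:/home/ec2-user/project/
```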
This command will back up the entire / directory, excluding the /dev, /proc, /sys, /tmp, /run, /mnt, /media and /lost+found directories. Let us break down the command and see what each argument does.
rsync: a fast, versatile, local and remote file-copying utility
-aAXv: the files are transferred in "archive" mode, which ensures that symbolic links, devices, permissions, ownerships, modification times, ACLs, and extended attributes are preserved
/: the source directory
--exclude: excludes the given directories from the backup
/mnt: the backup destination folder
Please be mindful that you must exclude the destination directory if it exists on the local system; this avoids an infinite loop. To restore the backup, just reverse the source and destination paths in the command.
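The command being described looks like this as a sketch (run as root; bash brace expansion generates one --exclude per pattern, and /mnt is the assumed destination mount):

```shell
# Back up / to /mnt, skipping pseudo-filesystems and the destination itself.
rsync -aAXv / \
  --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
  /mnt
```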