commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
Create a persistent SSH connection to the host in the background. Combine this with settings in your ~/.ssh/config:
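A sketch of those settings; the host alias, hostname, and socket path here are assumptions rather than anything from the original:

# ~/.ssh/config -- names are placeholders
Host myserver
    HostName myserver.example.com
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p

With ControlMaster auto, the first connection creates the socket (e.g. ssh -MNf myserver to park a master in the background) and later connections reuse it.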
All SSH connections to the machine will then go through the persistent SSH socket. This is very useful if you synchronize files over SSH (using rsync/sftp/cvs/svn) on a regular basis, because it won't set up a new connection each time you open an SSH session.
It can resume a failed secure copy (useful when you transfer big files like DB dumps through a VPN) using rsync.
It requires rsync installed on both hosts.
rsync --partial --progress --rsh=ssh $file_source $user@$host:$destination_file    # local -> remote
rsync --partial --progress --rsh=ssh $user@$host:$remote_file $destination_file    # remote -> local
If your user has sudo on the remote box, you can rsync data as root without needing to log in as root. This is very helpful if the remote box does not allow root to log in over SSH (a common security restriction).
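A sketch of the pattern, with the host, paths, and flags as placeholders; it also assumes remote sudo doesn't prompt for a password:

# pull /etc from the remote box as root, authenticating as a normal sudo-capable user
rsync -av -e ssh --rsync-path='sudo rsync' user@remotehost:/etc/ ./etc-backup/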
The command copies a file from a remote SSH host on port 8322 with a bandwidth limit of 100 KB/s:
--progress shows a progress bar
--partial keeps partially transferred files, so you can resume the process if something goes wrong
--bwlimit limits bandwidth to the given KB/s
--ipv4 prefers IPv4
I find it useful to create the following alias:
alias myscp='rsync --progress --partial --rsh="ssh -p 8322" --bwlimit=100 --ipv4'
in ~/.bash_aliases, ~/.bash_profile, ~/.bash_login or ~/.bashrc where appropriate.
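With the alias in place, a resumable, rate-limited copy then looks just like scp (file names hypothetical):

myscp user@remotehost:/var/backups/db.dump .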
Put it into your shell startup script (I use
alias scpresume='rsync --partial --progress --rsh=ssh'
in bash). When a file transfer via scp has aborted, just use scpresume instead of scp; rsync will copy only the parts of the file that haven't been transmitted yet.
Copy files to an SSH server with gzip compression.
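A sketch with hypothetical paths and host; -z is the flag that compresses data in transit:

rsync -avz -e ssh /local/dir/ user@remotehost:/remote/dir/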
Say you just typed a long command like this:
rsync -navupogz --delete /long/path/to/dir_a /very/long/path/to/dir_b
but you really want to sync dir_b to dir_a. Instead of retyping the whole command line, just type … followed by …, and your command line will read
rsync -navupogz --delete /very/long/path/to/dir_b /long/path/to/dir_a
This will back up the _contents_ of /media/SOURCE to /media/TARGET, where TARGET is formatted with NTFS. The --modify-window option lets rsync ignore NTFS's less accurate timestamps.
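A sketch under those assumptions; the flag set is illustrative, and --modify-window=1 is a common value for filesystems with coarse timestamps:

# trailing slash on SOURCE/ copies its contents rather than the directory itself
rsync -rtv --modify-window=1 /media/SOURCE/ /media/TARGET/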
Use find with rsync.
This command works by rsyncing the target directory (containing the files you want to delete) with an empty directory. The '--delete' switch instructs rsync to remove files that are not present in the source directory. Since there are no files there, all the files will be deleted.
I'm not clear on why it's faster than 'find -delete', but it is.
Benchmarks here: https://web.archive.org/web/20130929001850/http://linuxnote.net/jianingy/en/linux/a-fast-way-to-remove-huge-number-of-files.html
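A sketch of the trick with hypothetical directory names:

mkdir -p /tmp/empty
rsync -a --delete /tmp/empty/ /path/to/huge_dir/    # huge_dir ends up empty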
-r for recursive (if you want to copy entire directories)
src for the source file (or wildcards)
dst for the destination
--progress to show a progress bar
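Putting those flags together, a sketch with hypothetical paths:

rsync -r --progress /path/to/src user@remotehost:/path/to/dst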
This creates an archive that does the following:
(Everyone seems to like -z, but it is much slower for me)
-a: archive mode - recursive; preserves owner, permissions, modification times, and group; copies symlinks as symlinks; preserves device files.
-H: preserves hard-links
-A: preserves ACLs
-X: preserves extended attributes
-x: don't cross file-system boundaries
-v: increase verbosity
--numeric-ids: don't map uid/gid values by user/group name
--delete: delete extraneous files from dest dirs (differential clean-up during sync)
--progress: show progress during transfer
-T: turn off pseudo-tty allocation to decrease CPU load on the destination.
-c arcfour: use the weakest but fastest SSH encryption. Must specify "Ciphers arcfour" in sshd_config on destination.
-o Compression=no: Turn off SSH compression.
-x: turn off X forwarding if it is on by default.
Flip: rsync -aHAXxv --numeric-ids --delete --progress -e "ssh -T -c arcfour -o Compression=no -x" [source_dir] [dest_host:/dest_dir]
Assuming dirs A, B, and C are subdirs of the current dir.
Exact syntax of the command is:
rsync -v -r --size-only --compare-dest=/path_to_A/A/ /path_to_B/B/ /path_to_C/C/
(do not omit the trailing slashes, since that would copy only the names and not the contents of dir B's subdirs to dir C)
You can replace --size-only with --checksum for more thorough file differences validation
-n, --dry-run perform a trial run with no changes made
Connect to a remote server using the FTP protocol over a FUSE filesystem, rsync the remote folder to a local one, and then unmount the remote FTP server (FUSE FS).
This can be split into three separate commands; you need curlftpfs and rsync installed.
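A sketch of the three steps; the credentials, host, and mount points are placeholders:

curlftpfs ftp://user:password@ftp.example.com /mnt/ftp    # mount remote FTP via FUSE
rsync -av /mnt/ftp/ /local/copy/                          # sync the remote folder to a local one
fusermount -u /mnt/ftp                                    # unmount the FUSE filesystem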
Yes, rsync(1) supports local directories. And, should anything change, it's trivial to run the command again and grab only the changes instead of the full directory.
Transfer files from localhost to a remote host.
Create an exact mirror of the local folder "/root/files" on the remote server 'remote_server' over SSH (listening on port 22).
(All files and folders on the destination server/folder will be deleted.)
Copies the complete root dir of a Linux server to another one, where the new hard disks are formatted and mounted. Very useful for migrating a root server to another machine.
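One plausible shape of such a command, offered as a sketch; the target host and mount point are placeholders:

# -x stays on the root filesystem, so separate mounts like /proc and /sys are skipped;
# the new disks are assumed to be mounted at /mnt/newroot on the target
rsync -aHAXxv --numeric-ids -e ssh / root@newserver:/mnt/newroot/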
rsync from source to dest all between >30
dname is a directory named something like 20090803 for Aug 3, 2009. lastbackup is a soft link to the last backup made, say 20090802. $folder is the folder being backed up. Because this uses hard linking, files that already exist and haven't changed take up almost no space, yet each date directory has a kind of "snapshot" of that day's files. Naturally, lastbackup needs to be updated after this operation (see the sketch below). I can't take credit for this gem; I picked it up somewhere on the net so long ago I don't remember where from anymore. Ah, well...
Systems that are only somewhat slicker than this cost hundreds or even thousands of dollars - but we're HACKERS! We don't need no steenkin' commercial software... :)
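A sketch of the pattern described above, with a hypothetical backup root:

dname=$(date +%Y%m%d)                              # e.g. 20090803
rsync -a --link-dest=../lastbackup "$folder/" "/backups/$dname/"
ln -sfn "$dname" /backups/lastbackup               # update the soft link afterwards

--link-dest is resolved relative to the destination directory, so ../lastbackup points at the previous snapshot sitting next to the new one.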
If you have lots of remote hosts sitting "behind" an SSH proxy host, there is a special-case use of rsync that lets you copy directories and files across the proxy host without having to do two explicit copies: the '-e' option allows for a replacement "rsh" command. We use this option to specify an SSH tunnel command with the '-A' option, which causes authentication-agent requests to be forwarded back to the local host. If you have ssh set up correctly, the copy can be done without entering any passwords, as in the sketch below.
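A sketch, with the proxy and target host names hypothetical:

# rsync ends up invoking: ssh -A proxyhost ssh remotehost rsync --server ...
rsync -av -e 'ssh -A proxyhost ssh' remotehost:/remote/dir/ /local/dir/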
With this cron job, rsync synchronizes the contents of the local directory /[VIPdirectory] with the directory /backup/[VIPdirectory] on the remote server X.X.X.X. Beforehand, we need to set up public/private SSH keys to guarantee passwordless access to the remote server at X.X.X.X.
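A hypothetical crontab entry for this setup; the schedule and rsync flags are assumptions:

# every night at 02:00, push the local dir to /backup/[VIPdirectory] on X.X.X.X over SSH
0 2 * * * rsync -a -e ssh /[VIPdirectory]/ user@X.X.X.X:/backup/[VIPdirectory]/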