commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that reach a minimum of 3 and 10 votes, so that only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta, not in prime time just yet. It's being hosted over at UpGuard (link) and you are more than welcome to give it a shot. A couple of things:
gpg's compression is just as suitable as gzip's; the difference is that your backups can now be encrypted.
To extract, use:
gpg < folder.tpg | tar -xf -
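The backup side isn't shown above; a minimal sketch that pairs with the extract command, assuming a directory named folder and the .tpg extension used above (gpg -c prompts for a passphrase and compresses by default):
tar -cf - folder | gpg -c > folder.tpg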
An easy one but nice to keep in mind.
Clone a partition with tar.
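The command itself isn't reproduced here; a minimal sketch, assuming the source partition is mounted at /mnt/src and the target at /mnt/dst:
(cd /mnt/src && tar cpf - .) | (cd /mnt/dst && tar xpf -)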
Creates a tar.gz of a given directory, with a timestamp embedded in the filename; the example file was made 10 April 2009 at 5:30:53pm.
See date's man page to customize the timestamp format.
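A minimal sketch of the idea, assuming a directory named mydir and a %Y-%m-%d_%H%M%S timestamp format:
tar czf mydir_$(date +%Y-%m-%d_%H%M%S).tar.gz mydir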
This command will copy files and directories from a remote machine to the local one.
Ensure you are in the local directory you want to populate with the remote files before running the command.
To copy a directory and its contents, you could:
ssh user@remotehost "(cd /path/to/a/directory ; tar cvf - ./targetdir)" | tar xvf -
This is especially useful on *nixes that don't have scp installed by default.
This is useful for sending data between two computers that you have shell access to. It uses tar compression during the transfer; files are compressed and uncompressed automatically. Note the trailing dash on the listening side: it tells tar to read the archive from stdin.
on the listening side:
sudo nc -lp 2022 | sudo tar -xvf -
Explanation: netcat is told to -l listen on -p port 2022, and the data stream is piped to tar, where -x extracts, -v is verbose, and -f - reads the archive from the filename "-", which means stdin.
on the sending side:
tar -cvzf - ./* | nc -w 3 name_of_listening_host 2022
Explanation: all files in the current directory are archived with tar (-c create, -v verbose, -z gzip compression, -f - write to the filename "-", which here means stdout since we are creating rather than extracting). The output is piped to netcat, which sends it to name_of_listening_host on port 2022; -w 3 waits 3 seconds after the stream ends and then closes the connection.
I recently found myself with a filesystem I couldn't write to and a bunch of files I had to get the hell out of dodge, preferably not one at a time. This command makes it possible to pack a bunch of files into a single archive and write it to a remote server.
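The command isn't reproduced above; a minimal sketch of the idea, assuming ssh access to a host called rescuehost and files under /mnt/stuck:
tar czf - /mnt/stuck | ssh user@rescuehost 'cat > stuck-files.tar.gz'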
Create a backup (.tar.gz) of each first-level directory under the current directory.
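One way to do it, as a sketch (the loop and archive naming are assumptions, not the original command):
for d in */ ; do tar czf "${d%/}.tar.gz" "$d" ; done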
Create a tar.bz2 package from regular files (-type f) modified today (-mtime -1) in ~/project.
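The original command isn't shown here; a sketch, assuming GNU tar reading a null-delimited file list from find and an archive name of project_today.tar.bz2:
find ~/project -type f -mtime -1 -print0 | tar -cjf project_today.tar.bz2 --null -T -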
Add z to the flags to enable compression.
* Adjust the find command to your own filters.
* The -P flag keeps absolute paths in the tarball, so you can be sure that the exact same file hierarchy will be recreated on the second machine (a full sketch follows below).
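The full pipeline isn't included above; a sketch under those assumptions, with files selected by find, absolute paths kept via -P, and the stream sent over ssh to a hypothetical host otherhost:
find /var/log -name '*.log' -print0 | tar -cPf - --null -T - | ssh user@otherhost 'tar -xPf -'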
These days, most software distributed in tar files will just contain a directory at the top level, but some tar files don't have this and can leave you with a mess of files in the current folder if you blindly execute
tar zxvf something.tar.gz
This command can help you clean up after such a mistake. However, note that this has the potential to do bad things if someone has been *really* nasty with filenames.
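The cleanup command itself isn't reproduced here; one common approach, assuming GNU tools and no newlines or other hostile characters in the filenames, is to list the archive contents and delete them:
tar tzf something.tar.gz | xargs -d '\n' rm -rf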
You can exclude additional system folders or individual files that are not necessary for the backup and can be recreated after the restore procedure, such as /lost+found, /mnt, /media, /tmp, /usr ...
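The backup command itself isn't reproduced here; a sketch, assuming a full-system archive written to /backup.tgz with the usual exclusions (the archive name matches the restore command below):
tar czpf /backup.tgz --exclude=/backup.tgz --exclude=/proc --exclude=/sys --exclude=/lost+found --exclude=/mnt --exclude=/media --exclude=/tmp /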
Restoring the above backup is as simple as becoming root and typing:
tar zxpf backup.tgz -C /
You can extract any file or directory out of the backup.tgz file for recovery. For instance, if you have a corrupt or misconfigured fstab file, you could simply issue the command:
tar zxpf backup.tgz -C / etc/fstab
Add the v (verbose) option to see the files being processed.
A far safer solution is to restore the desired files under a different directory, and then compare, move, or update the files to their original locations afterward.
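For example, a sketch reusing the fstab case above and a scratch directory:
mkdir /tmp/restore && tar zxpf backup.tgz -C /tmp/restore etc/fstab
diff /tmp/restore/etc/fstab /etc/fstab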
This command tars up a directory and sends the output to gzip, showing a rate of 223MB/s.
This may require installing the pv command.
For Debian-based users out there:
sudo aptitude install pv
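The pipeline itself isn't shown above; a minimal sketch, assuming pv sits between tar and gzip and a directory named mydir:
tar cf - mydir | pv | gzip > mydir.tar.gz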
This is a little bash script that will take all files matching the *gz pattern in the current directory and run tar -zxvf on each of them.
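The script isn't included here; a one-line sketch of the same idea:
for f in *gz ; do tar -zxvf "$f" ; done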
I use this all the time for taking manual backups of stuff I want to keep but that isn't important enough to back up regularly.
tar options may change ;)
c creates the tar archive, z adds gzip compression (j for bzip2); see man tar.
-print0 and -0t are useful for names with spaces, backslashes, etc.
Uses tar to dump files from /orignl/path to /dst/dir. I find tar's output more readable than cp's, and it doesn't mess with modification dates.
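The command isn't reproduced here; the usual tar-to-tar copy pipeline, sketched with the paths mentioned above, looks something like:
tar -C /orignl/path -cf - . | tar -C /dst/dir -xpf -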