commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
tar doesn't support wildcards for unpacking (so you can't run tar -xf *.tar), and this is shorter and simpler than
for i in *.tar;do tar -xf $i;done (or even 'for i in *.tar;tar -xf $i' in the case of zsh)
The -i flag tells tar not to stop after the first end-of-archive (EOF) marker.
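The command itself isn't reproduced here; a minimal sketch of the cat-to-tar pipeline this describes, assuming GNU tar (-i is the short form of --ignore-zeros):
cat *.tar | tar -xf - -i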
Setting the COPYFILE_DISABLE=true environment variable prevents tar from adding ._* AppleDouble files to your tar file on Mac OS X.
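For example (the archive name and path are illustrative):
COPYFILE_DISABLE=true tar -czf archive.tar.gz /path/to/dir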
The following command finds all files not modified in the last 5 days under the /protocollo/paflow directory and creates an archive file under /var/dump-protocollo named ddmmyyyy_archive.tar
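A sketch of one plausible form, assuming GNU tar's -T option (filenames containing spaces would need the --null variant shown in the next example):
find /protocollo/paflow -type f -mtime +5 | tar -cf /var/dump-protocollo/$(date +%d%m%Y)_archive.tar -T -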
This is a shortcut for tarring up all files matching a wildcard, since tar doesn't have an --include option (apparently).
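A hypothetical equivalent using find and GNU tar's --null/-T options (the *.log pattern is illustrative):
find . -name '*.log' -print0 | tar -czf logs.tar.gz --null -T -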
You don't need to create an intermediate file: just pipe the output directly to the tar command and use stdin as the file (put a dash after the f flag).
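For example, extracting a downloaded tarball without saving it first (the URL is illustrative):
curl -sL http://example.com/archive.tar.gz | tar -xzf -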
Requires GNU tar's --ignore-zeros option. http://www.gnu.org/software/tar/manual/html_section/Blocking.html
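A sketch of extracting concatenated archives with the long option (the archive names are illustrative):
cat part1.tar part2.tar | tar -xf - --ignore-zeros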
Tar a directory and compress it while showing progress and limiting disk I/O. Pipe Viewer (pv) can be used to view the progress of the task; it can also limit the disk I/O, which is especially useful on busy servers.
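A sketch, assuming pv is installed (the path and the 5 MB/s limit are illustrative; pv's -L flag sets the rate limit):
tar -cf - /path/to/dir | pv -L 5m | gzip > dir.tar.gz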
Compresses each file individually, creating a $filename.tar.gz, and removes the uncompressed version; useful if you have lots of files and don't want 1 huge archive containing them all. You could replace ls with ls *.pdf to perform the action only on PDFs, for example.
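One way to write the loop this describes (note that parsing ls breaks on filenames containing newlines):
ls | while read filename; do tar -czf "$filename.tar.gz" "$filename" && rm "$filename"; done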
Execute this from the source host, where the files you wish to back up reside. With the minus '-', tar delivers the compressed output to standard output, which travels over the ssh session to the remote host. The backup host, in turn, receives the stream on standard input and writes it to /path/to/backup/backupfile.tar.bz2
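A sketch of the pipeline described (the hostname and source path are illustrative; -j selects bzip2 to match the .tar.bz2 name):
tar -cjf - /path/to/files | ssh user@backuphost 'cat > /path/to/backup/backupfile.tar.bz2'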
Create a tarball on the client and send it across the network with netcat on port 1234, where it's extracted on the server in the current directory.
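A sketch of both ends (the listen syntax varies between netcat implementations; some need -l -p 1234):
On the server: nc -l 1234 | tar -xf -
On the client: tar -cf - /path/to/dir | nc server 1234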
Create a tarball on stdout, which is piped to a tar reading from stdin on the far side, all over ssh.
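A sketch, assuming the destination directory already exists on the remote host (the names are illustrative):
tar -cf - dir | ssh user@host 'cd /destination && tar -xf -'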
This deals nicely with files that have special characters in the file name (space, ' or ").
Parallel is from https://savannah.nongnu.org/projects/parallel/
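A sketch of the pattern, reusing the tar-extraction example from above as the illustrative action and assuming Parallel is installed:
ls *.tar | parallel tar -xf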
xargs deals badly with special characters (such as space, ' and "). In this case, a file called '12" record' would break the command.
Parallel https://savannah.nongnu.org/projects/parallel/ does not have this problem.
Both solutions work badly if the number of files exceeds the maximum command-line length allowed by the shell.
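A common workaround for the xargs half of the problem is null-delimited input (a sketch, again with tar extraction as the illustrative action):
find . -name '*.tar' -print0 | xargs -0 -n1 tar -xf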
xargs deals badly with special characters (such as space, ' and "). To see the problem, try this:
touch important_file
touch 'not important_file'
ls not* | xargs rm
This removes important_file as well, because xargs splits 'not important_file' at the space.
Parallel https://savannah.nongnu.org/projects/parallel/ does not have this problem.
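The safe variant (a sketch, assuming Parallel is installed; it treats each input line as a single argument):
ls not* | parallel rm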