commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
USAGE: gate listening_port host port
Creates a listening socket and connects to the remote device at host:port. It uses pipes to connect the two sockets; traffic that passes through the pipes is written to stdout. I use it for debugging network scripts.
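The function itself isn't reproduced above; a minimal sketch of the idea, assuming traditional netcat (`nc -l -p`) and a named pipe, with the traffic copied to the terminal via /dev/tty because the pipeline's own stdout feeds the fifo:
gate () {
  mkfifo backpipe
  # listen on port $1 and relay to $2:$3; each tee sends a copy of one direction of traffic to the terminal
  nc -l -p "$1" < backpipe | tee /dev/tty | nc "$2" "$3" | tee /dev/tty > backpipe
  rm backpipe
}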
Sends both stdout and stderr to the pipe which captures the data in the file 'out.test' and sends to stdout of tee (likely /dev/tty unless redirected). Works on Bourne, Korn and Bash shells.
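For example (some_command is a placeholder for whatever you are running):
some_command 2>&1 | tee out.test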
Locally watch the MySQL process list on a remote host, updating every 5 seconds, and pipe what you watch to a file as well. The file output is messy, but at least you have a history of what you saw.
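The exact command isn't shown above; one plausible shape, with the host and log file name as placeholders and tee run inside the watched command so every refresh is also appended to the log:
watch -n 5 "mysqladmin -h remotehost -u root processlist | tee -a processlist.log"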
The command `cat file >> file` fails with the following error message:
cat: file: input file is output file
`tee` is a nice workaround without using any temporary files.
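The workaround looks roughly like this (only safe for files smaller than the pipe buffer, otherwise cat can start re-reading what tee has already appended):
cat file | tee -a file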
When plumbers use pipes, they sometimes need a T-joint. The Unix equivalent to this is 'tee'. The -a flag tells 'tee' to append to the file, rather than clobbering it.
Tested on bash and tcsh.
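For example, to keep appending a command's output to a log while still seeing it on screen (the file name is arbitrary):
some_command | tee -a output.log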
PRIVATEKEY - Of course the full path to the private key
HOST - The host to get the backup from
SOURCE - The directory you wish to back up
DESTINATION - The destination for the backup on your local machine
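The backup command itself isn't reproduced above; one way the placeholders could fit together, assuming an ssh/tar pipeline (the original may well differ):
ssh -i PRIVATEKEY HOST "tar czf - SOURCE" > DESTINATION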
Can display the commands and their output to another user who is connected to another terminal, for example pts/3.
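A hedged sketch, where /dev/pts/3 is whatever terminal the other user sits on and some_command stands in for what you run:
some_command 2>&1 | tee /dev/pts/3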
Optionally, you can create a new function to do this with a custom command. Edit $HOME/.bashrc and add:
myssh () { ssh "$1" | tee sshlog ; }
Save it.
At command prompt:
myssh user@remotehost
In the above example 'muspi merol' (the output of the first rev command) is sent to stderr and 'lorem ipsum' (the output of the second rev command) is sent to stdout. rev reverses the characters on each line of its input. This use of tee lets you test whether a program handles its input correctly without using files to hold the data.
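A command consistent with that description (reconstructed here as an illustration, not necessarily the original):
echo "lorem ipsum" | rev | tee /dev/stderr | rev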
This will get the job done in the most efficient way -
spawning only one `rm` process.
"On-the-fly" find data is displayed through `tee` and
you should have plenty of time to ctrl-c if needed before it's too late.
You may need to re-run this after major Software Updates.
To leave more languages in, add more ``-and \! -iname "lang*"'' statements:
sudo find / -iname "*.lproj" -and \! -iname "en*" -and \! -iname "spanish*" -print0 | tee /dev/stderr | sudo xargs -0 rm -rfv
**Edit: note the 2nd sudo near the end of the pipeline - this is necessary.
only for sudo-style systems.
Use this construct instead of I/O re-directors ``>'' or ``>>'' because
sudo only elevates the commands and *not* the re-directors.
***warning: remember that the `tee` command will clobber
file contents unless it is given the ``-a'' argument
Also, for extra security, the "left" command is still run unprivileged.
[re]verify those burned CDs early and often - better safe than sorry -
at a bare minimum you need the good old `dd` and `md5sum` commands,
but why not throw in a super "user-friendly" progress gauge with the `pv` command -
adjust the ``-s'' "size" argument to your needs - 700 MB in this case,
and capture that checksum in a "test.md5" file with `tee` - just in case, for near-future reference.
*uber-bonus* ability - positively identify those unlabeled mystery discs -
for extra credit, what disc was used for this sample output?
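Putting those pieces together gives something along these lines (assuming the disc shows up as /dev/cdrom):
dd if=/dev/cdrom | pv -s 700m | md5sum | tee test.md5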
The large context number (-C 1000) is a bit of a hack, but in most of my use cases, it makes sure I'll see the whole log output.
Useful for cron jobs -- all output will be logged but only errors will cause email to be sent. NB the order of "2>&1" and ">> logfile" is important, it doesn't work if you reverse them (everything goes to the logfile, nothing left for tee).
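The shape implied by that note, with cron_job and logfile as placeholders:
cron_job 2>&1 >> logfile | tee -a logfile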
Forwards localhost:1234 to machine:port, running all data through your chain of piped commands. The above command logs inbound and outbound traffic to two files.
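The forwarding command itself isn't shown; a rough sketch with netcat (traditional `nc -l -p` syntax; machine, port and the log file names are placeholders):
mkfifo backpipe
nc -l -p 1234 < backpipe | tee inbound.log | nc machine port | tee outbound.log > backpipe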
Tip: replace tee with sed to manipulate the data in real time (use "sed -e 's/400 Bad Request/200 OK/'" to tweak a web server's responses ;-) Limitless possibilities.
This is the solution to the common mistake made by sudo newbies, since
sudo echo "foo bar" >> /path/to/some/file
does NOT add to the file as root.
Alternatively,
sudo echo "foo bar" > /path/to/some/file
should be replaced by
echo "foo bar" | sudo tee /path/to/some/file
And you can add a >/dev/null at the end if you're not interested in the stdout of tee:
echo "foo bar" | sudo tee -a /path/to/some/file >/dev/null
Using process substitution, we can 'trick' tee into sending a command's STDOUT to an arbitrary number of commands. The last command (command4) in this example will get its input from the pipe.
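For example (command1 through command3 are placeholder commands; this needs a shell with process substitution, such as bash or zsh):
some_command | tee >(command1) >(command2) >(command3) | command4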
Find all files that contain string XXX in them, change the string from XXX to YYY, make a backup copy of the file and save a list of files changed in /tmp/fileschanged.
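One plausible rendering (reconstructed; the .bak suffix and GNU-style sed -i are assumptions, and plain xargs breaks on file names containing whitespace):
grep -rl 'XXX' . | tee /tmp/fileschanged | xargs sed -i.bak 's/XXX/YYY/g'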
The tee (as in "T" junction) command is very useful for redirecting output to two places.
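For instance, to watch a listing on screen while also saving it to a file:
ls -l | tee listing.txt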