commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that receive a minimum of 3 and of 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Uses tail to follow the logfile and plain Perl to count and print the lines per second (lps) as new lines are written to it.
You can actually do the same thing with a combination of head and tail. For example, in a file of four lines, if you just want the middle two lines:
head -n3 sample.txt | tail -n2
Line 1 --\
Line 2    } These three lines are selected by head -n3,
Line 3 --/ which feeds the following filtered list to tail:

Line 2 \__ These two lines are filtered by tail -n2,
Line 3 /   which results in:

Line 2
Line 3

being printed to screen (or wherever you redirect it).
Uses history to get the last n+1 commands (since this command itself will appear as the most recent), then strips out the line numbers and this command using sed, and appends the remaining commands to a file.
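A hedged sketch of the same idea. A canned listing stands in for bash's numbered `history` output so the stripping step can be shown end to end; interactively you would pipe `history` itself and end the pipeline with `>> yourfile`:

```shell
printf '  501  ls -l\n  502  git status\n  503  history\n' \
  | sed '$d' \
  | sed 's/^ *[0-9]* *//'
# drops the trailing `history` entry, then strips the leading numbers,
# leaving:
#   ls -l
#   git status
```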
Returns the directory depth.
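One hedged way to compute such a depth (the original command isn't shown): count the slashes in a path with awk. The path below is an example; use "$PWD" for the current directory:

```shell
path=/usr/local/share/doc       # example path; normally "$PWD"
echo "$path" | awk -F/ '{ print NF - 1 }'   # prints 4 for this path
```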
You can listen to your computer, but don't get carried away.
When using named pipes, only one reader is given the output by default. Also, when grep's output goes to a pipe rather than a terminal it is block-buffered, so matches are held back until the buffer fills - not convenient while tail -f is still running. Here, using a combination of tee, sub-processes and grep's --line-buffered switch, we can work around the problem.
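A hedged sketch of the pattern (file and pipe names are examples): a named pipe plus tee gives the stream a second reader, and --line-buffered makes grep emit matches line by line instead of holding them in a block buffer. printf stands in for `tail -f logfile` so the example terminates:

```shell
mkfifo errpipe
grep --line-buffered error < errpipe > errors.txt &   # second reader
printf 'error one\nwarning two\nerror three\n' \
  | tee errpipe \
  | grep --line-buffered warning        # prints: warning two
wait
rm errpipe
```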
Changed wget to curl, so it no longer creates a file.
Substitute that 724349691704 with a UPC of a CD you have at hand, and (hopefully) this one-liner should return the $Artist - $Title, querying discogs.com.
Yes, I know, all that head/tail/grep crap can be improved with a single sed command, feel free to send "patches" :D
make, find and a lot of other programs can take a lot of time - or none at all. Suppose you write a long, complicated command and wonder whether it will be done in 3 seconds or in 20 minutes. Just add the suffix "R" (without quotes) and you can get on with other things: zsh will inform you when you can see the results.
You can replace zenity with any other X Window dialog program.
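A guess at how such a suffix could be defined - a zsh global alias in ~/.zshrc (the alias name R and the zenity message are assumptions, not the original definition):

```shell
# zsh global aliases expand anywhere on the line, so `make R` becomes
# `make ; zenity --info --text "Command finished"`
alias -g R='; zenity --info --text "Command finished"'
```

Usage: type `make R` (or any long command followed by R) and a dialog pops up when it finishes.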
du -x option to not cross mount points (you usually run this command to find what to destroy in that partition)
-a option to also count files, not just directories
-k to display in kilobytes
sort -n to sort in numerical order, biggest files last
tail -10 to only display biggest 10
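The options above combined into one pipeline, demonstrated on a throwaway directory so the output is predictable (an assumption for illustration); on a real system you would point du at the mount point you want to clean:

```shell
dir=$(mktemp -d)                     # stand-in for the partition to clean
head -c 4096 /dev/zero > "$dir/big"
head -c 1024 /dev/zero > "$dir/mid"
du -axk "$dir" | sort -n | tail -10  # biggest entries listed last
rm -r "$dir"
```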
Reverse the sorting of ls to get the newest file:
ls -1tr --group-directories-first /path/to/dir/ | tail -n 1
If there are no files in the directory, you will get a directory name or nothing at all.
Takes IPs from web logs and pipes them to iptables; use grep to whitelist IPs. Use this if a particular file is being requested by many different addresses.
Sure, the request is already down the pipe and your bandwidth may suffer, but that isn't the concern. This one-liner saved me from all that traffic hitting the server a second time; reconfigure your system so it works with blog-post-1.php or the like, and legitimate users can continue working while the botnet kills itself.
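A hedged sketch of the idea (the log format, file names and the dry run are assumptions, not the original one-liner): pull the offending IPs out of the access log and emit one iptables DROP rule per address. A two-line sample stands in for the real log; drop the `echo` to actually apply the rules (as root):

```shell
printf '%s\n' \
  '10.0.0.1 - - [01/Jan/2024] "GET /blog-post-1.php HTTP/1.1" 200' \
  '10.0.0.2 - - [01/Jan/2024] "GET /index.html HTTP/1.1" 200' > sample.log
grep 'blog-post-1.php' sample.log \
  | awk '{ print $1 }' | sort -u \
  | while read -r ip; do
      echo iptables -A INPUT -s "$ip" -j DROP
    done
# prints: iptables -A INPUT -s 10.0.0.1 -j DROP
rm sample.log
```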
Should be a bit more portable, since echo -e/-n and date's -Ins are not.
This is useful when watching a log file that does not contain timestamps itself.
If the file already has content when the command is started, the first lines will carry the "wrong" timestamp - the time the command was started, not the time the lines were originally written.
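A portable sketch of the timestamping loop using only POSIX printf, read and date (no `echo -e`, no `date -Ins`). printf stands in for `tail -f logfile` here so the example terminates:

```shell
printf 'first line\nsecond line\n' | while IFS= read -r line; do
  printf '%s %s\n' "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" "$line"
done
# each incoming line is prefixed with the UTC time it arrived
```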