commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions,…):
It grabs the top resource-consuming PIDs with $(ps -eo pid,pmem,pcpu | sort -k 3 -r | grep -v PID | head -10)
The sort -k 3 sorts by the third field, which is CPU; change the 3 to 2 and it will sort by memory instead.
The rest of the command just uses diff to display the output of 2 commands side by side (the -y flag). I chose some good ones for ps.
pidstat comes with the sysstat package (sar, mpstat, iostat, pidstat), so if you don't have it, you should.
Maybe I should take off the timestamp... :|
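The full command isn't quoted in this excerpt, so here is a sketch of the side-by-side idea. Pairing ps with a second ps snapshot is my assumption for portability; the original compared against pidstat.

```shell
# Hypothetical reconstruction: top-CPU processes next to top-memory
# processes, laid out side by side with diff -y.
diff -y \
  <(ps -eo pid,pmem,pcpu | sort -k 3 -r | grep -v PID | head -10) \
  <(ps -eo pid,pmem,pcpu | sort -k 2 -r | grep -v PID | head -10) \
  || true   # diff exits non-zero when the two sides differ, as expected here
```

The process substitutions <(...) require bash; each one feeds a command's output to diff as if it were a file.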
Returns the most recently modified file in the current (or specified) directory. You can also get the oldest file, via:
ls -t1 $* | tail -1 ;
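Wrapped as a pair of functions (the names newest and oldest are mine; "$@" forwards any directory argument to ls, as $* did above):

```shell
# Most recently / least recently modified file in the given (or current) dir.
# ls -t sorts by modification time, newest first, one name per line (-1).
newest() { ls -t1 "$@" | head -n 1; }
oldest() { ls -t1 "$@" | tail -n 1; }
```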
Use the excellent sensiblepasswords.com to generate a random (yet easy-to-remember) password every second, and copy it to the clipboard. Useful for generating a list of passwords and pasting them into a spreadsheet.
This script uses "madebynathan"'s "cb" function (http://madebynathan.com/2011/10/04/a-nicer-way-to-use-xclip/); you could also replace "cb" with
xclip -selection c
Remove "while true; do" and "; done" to generate and copy only 1 password.
Use this command if your file may contain empty lines and you need to obtain the first non-empty line.
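The command itself isn't quoted in this excerpt; a common stand-in (not necessarily the original) is grep, where . matches any line containing at least one character and -m 1 stops after the first hit:

```shell
# Print the first non-empty line of a file and stop.
f=$(mktemp)
printf '\n\nfirst\nsecond\n' > "$f"
grep -m 1 . "$f"   # → first
```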
Using urandom to get random data, deleting non-letters with tr, and printing the first $1 bytes.
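As a sketch, wrapped in a function (the name randletters is mine; the original takes the length as $1):

```shell
# Random-letter password: strip everything but letters out of /dev/urandom
# and keep the first $1 characters. LC_ALL=C keeps tr from choking on
# arbitrary bytes under a UTF-8 locale.
randletters() { LC_ALL=C tr -dc 'A-Za-z' < /dev/urandom | head -c "$1"; echo; }
randletters 12
```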
Probably more trouble than it's worth, but it worked for an obscure need.
wrap it in a function if you like...
lastfile () { ls -ltp | sed '1 d' | head -n 1; }
Enhancement of the 'busy' command originally posted by busybee: fewer characters, no escaping issues, and most importantly it excludes small files (opening a 5-line file isn't that persuasive, I think ;) )
This makes an alias for a command named 'busy'. The 'busy' command opens a random file in /usr/include to a random line with vim.
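The alias itself isn't quoted here; as a guess at its shape based on the description (the size threshold, line range, and *.h filter are all my assumptions):

```shell
# Open a random header from /usr/include at a random line in vim,
# skipping files smaller than ~2 KB, since tiny files aren't persuasive.
# Single quotes defer the $(...) and $((...)) expansions until 'busy' runs.
alias busy='vim +$((RANDOM % 100 + 1)) "$(find /usr/include -name "*.h" -size +2k | shuf -n 1)"'
```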
Tail is much faster than sed or awk because it doesn't check for regular expressions.
Useful for situations where you have word lists or dictionaries that range from hundreds of megabytes to several gigabytes in size. Replace file.lst with your wordlist, replace 50000 with however many lines you want the resulting list to be in total. The result will be redirected to output.txt in the current working directory. It may be helpful to run wc -l file.lst to find out how many lines the word list is first, then divide that in half to figure out what value to put for the head -n part of the command.
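The workflow above, demonstrated on a tiny generated list standing in for a multi-gigabyte file.lst:

```shell
# Demo wordlist in a scratch directory (the real file.lst would be huge).
cd "$(mktemp -d)"
printf '%s\n' alpha bravo charlie delta > file.lst
wc -l < file.lst                 # counts 4 lines, so half is 2
head -n 2 file.lst > output.txt  # keep the first half of the list
```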
Not perfect, but working (at least on the project I wrote it for ;) )
Specify what you want to search for in the search variable; it then greps the folder and shows one result at a time.
Press Enter to show the next result.
It can behave badly on results in the first lines, and it could be improved to allow going back.
But in my case (a large project; I was checking whether a value was being used without its corresponding constant, and the value was "1000", so there were a lot of results...) it was perfect ;)
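The submitted command isn't quoted in this excerpt; a plausible reconstruction, wrapped as a function (reading from /dev/tty is what lets read wait for Enter while stdin is occupied by the pipe):

```shell
# Show grep hits one at a time; press Enter for the next.
# This is a guess at the shape of the command, not the original.
step_grep() {
  search=$1
  grep -rn "$search" . | while IFS= read -r match; do
    printf '%s\n' "$match"
    read -r _ < /dev/tty   # waits for Enter from the keyboard, not the pipe
  done
}
```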
This does the same thing as the command 'j_melis' submitted, but does it a lot quicker.
That command takes 43 seconds to complete on my system, while the command I submitted takes 6 seconds.
Replace the head -1 with head -n, where n is the n-th item you want to go to.
Replace the head with tail to go to the last dir you listed.
You can also change the parameters of ls.
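A sketch of the idea in a scratch directory (the directory names are demo assumptions):

```shell
# Demo: make three directories, then cd into the first one ls lists.
cd "$(mktemp -d)"
mkdir aaa bbb ccc
cd "$(ls -d */ | head -n 1)"   # swap head for tail -1 to enter the last one
pwd
```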
Specify the size in bytes using the 'c' suffix on the -size flag. The + sign reads as "bigger than". Then execute du on the list, sort in reverse order, and show the first 10 entries.
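Sketched end to end on a demo file (the 1 MiB threshold stands in for whatever byte count you pass to -size):

```shell
# Demo: create a 2 MiB file, then run the described pipeline with a
# 1 MiB (1048576-byte) threshold. du sizes each match, sort -rn puts
# the biggest first, and head keeps the top 10.
d=$(mktemp -d)
dd if=/dev/zero of="$d/big" bs=1024 count=2048 2>/dev/null
find "$d" -type f -size +1048576c -exec du {} + | sort -rn | head -10
```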