
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that get a minimum of 3 and 10 votes respectively - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using sort - 606 results
ps aux | sort -n -k2 | awk '{if ($2 < 300) print($0)}'
2013-05-09 13:09:58
User: lili
Functions: awk ps sort
0

Display info for all processes with a PID below 300.
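
A minimal equivalent sketch (assuming the standard `ps aux` layout, where the PID is field 2; awk's default action prints matching lines):

# filter on PID alone; the sort is only needed for ordered output
ps aux | awk '$2 < 300' | sort -n -k2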

for i in `cat /proc/mounts | awk '{print $2}' | grep ${CDIR} |sort -r` ; do umount $i; done
curl -k https://Username:Password@api.del.icio.us/v1/posts/all?red=api | xml2| \grep '@href' | cut -d\= -f 2- | sort | uniq | linkchecker -r0 --stdin --complete -v -t 50 -F blacklist
2013-05-04 17:43:21
User: bbelt16ag
Functions: cut sort uniq
-1

This command queries the delicious API, runs the XML through xml2, grabs the URLs, cuts out the first two columns, passes the result through uniq to remove any duplicates, and then hands it to linkchecker, which checks the links. The links go into the blacklist in ~/.linkchecker/blacklist. Please see the manual pages for further info. It took me a few days to figure this one out; I hope you enjoy it. Also, don't run this API more than once every few seconds or you can get banned by delicious - see their site for info. (Updated to be non-recursive.)
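
A commented sketch of the same pipeline (the credentials are placeholders; xml2 and linkchecker must be installed):

curl -k 'https://Username:Password@api.del.icio.us/v1/posts/all?red=api' |  # fetch all bookmarks as XML
  xml2 |                 # flatten the XML into line-oriented path=value pairs
  grep '@href' |         # keep only the href attributes
  cut -d= -f2- |         # strip the attribute path, leaving the URL
  sort | uniq |          # remove duplicates
  linkchecker -r0 --stdin --complete -v -t 50 -F blacklist  # check each URL, 50 threads, no recursion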

awk '{print $1}' ~/.bash_history | sort | uniq -c | sort -rn | head -n 10
svn ls -R | egrep -v -e "\/$" | xargs svn blame | awk '{count[$2]++}END{for(j in count) print count[j] "\t" j}' | sort -rn
2013-05-03 01:45:12
User: kurzum
Functions: awk egrep ls sort xargs
Tags: svn count
0

This one performs better, as it does a one-pass count with awk. For this script it might not matter, but for others it is a good optimization.
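
The one-pass idea in isolation (a minimal sketch; the filename is a placeholder):

# tally field 1 in a single pass, then sort the count/value pairs once at the end
awk '{count[$1]++} END {for (k in count) print count[k] "\t" k}' access.log | sort -rn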

awk '{ print $9 }' access.log | sort | uniq -c | sort -nr | head -n 10
awk '/Dec\/2012/ {print $1,$8}' logfile | grep -ivE '(.gif|.jpg|.png|favicon|.css|.js|robots.txt|wp-l|wp-term)' | sort | uniq -c | sort -rn | head -n 20
du -h --time --max-depth=1 | sort -hr
parallel -j 50 ssh {} "ls" ::: host1 host2 hostn | sort | uniq -c
2013-04-12 11:56:41
User: macoda
Functions: sort ssh uniq
1

parallel can be installed on your central node and used to run a command multiple times.

In this example, multiple ssh connections are used to run the command on several hosts (-j is the number of jobs to run at the same time). The result can then be piped to further commands to perform the "reduce" stage (sort then uniq in this example).

This example assumes "keyless ssh login" has been set up between the central node and all machines in the cluster.

bashreduce may also do what you want.
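
A hypothetical map/reduce run under those assumptions (the hostnames and log path are placeholders; keyless ssh is assumed):

# map: count sshd lines in each host's auth log; reduce: sum the per-host counts centrally
parallel -j 50 ssh {} "grep -c sshd /var/log/auth.log" ::: host1 host2 hostn | awk '{sum += $1} END {print sum}'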

svn ls -R | egrep -v -e "\/$" | tr '\n' '\0' | xargs -0 svn blame | awk '{print $2}' | sort | uniq -c | sort -nr
2013-04-10 19:37:53
User: rymo
Functions: awk egrep ls sort tr uniq xargs
Tags: svn count
1

Makes this usable on OS X with filenames containing spaces. Note: it will still break if filenames contain newlines... possible, but who does that?!
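
The NUL-delimiter trick in isolation (a sketch: tr turns each newline into a NUL byte, so xargs -0 treats each whole filename, spaces included, as a single argument):

# without -0, 'my file.txt' would be split into two arguments
printf 'my file.txt\nanother file.js\n' | tr '\n' '\0' | xargs -0 -n1 echo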

cut -d',' -f6 file.csv | sort | uniq
netstat -antu | awk '{print $5}' | awk -F: '{print $1}' | sort | uniq -c | sort -n
2013-04-08 19:46:41
User: wejn
Functions: awk netstat sort uniq
-1

The output also contains some garbage (text from netstat's headers), but it's good enough for a quick check of who's overloading your server.
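
A variant that drops the garbage before counting (a sketch, assuming you only care about IPv4 peers):

# keep only fields that already look like dotted-quad addresses, so netstat's header text never reaches the count
netstat -antu | awk '{print $5}' | awk -F: '{print $1}' | grep -E '^[0-9]+(\.[0-9]+){3}$' | sort | uniq -c | sort -n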

find . -name '*js' -type f | xargs yardstick | sort -k6 -n
2013-04-06 00:19:46
User: noah
Functions: find sort xargs
0

The number on the far right is the ratio of comments to code, expressed as a percentage. For the rest of the Yardstick documentation see https://github.com/calmh/yardstick/blob/master/README.md#reported-metrics

cat /sys/block/md1/holders/dm*/dm/name | awk -F- '{print $1}' | sort -u
cat /sys/block/{*,*/*}/holders/dm*/dm/name | awk -F- '{print $1}' | sort -u
for a in $(seq 5 8); do cat twit.txt | cut -d " " -f$a | grep "^@" | sort -u; done > followlst.txt
2013-03-29 21:07:09
User: xmuda
Functions: cat cut grep seq sort
-6

Go to "https://twitter.com/search/realtime?q=%23TeamFollowBack&src=hash" and then copy al the text on the page. If you scroll down the page will be bigger. Then put al the text in a text file called twit.txt

If you follow the user there is a high probability the users give you follow back.

To follow all the users you can use an iMacros script.
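
A simpler sketch that grabs every @handle regardless of which column it landed in (assumes the same twit.txt):

# match @ followed by Twitter's allowed handle characters, then dedupe
grep -oE '@[A-Za-z0-9_]+' twit.txt | sort -u > followlst.txt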

rpm -qa --queryformat '%{size} %{name}-%{version}-%{release}\n' | sort -k 1,1 -rn | nl | head -16
2013-03-19 21:10:54
User: mpb
Functions: head nl rpm sort
1

Interesting to see which packages are larger than the kernel package.

Useful to understand which RPMs might be candidates to remove if drive space is restricted.
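
To compare against the kernel directly, a sketch (the package name varies by distribution):

# size of the installed kernel package(s), in bytes
rpm -q --queryformat '%{size} %{name}-%{version}-%{release}\n' kernel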

count=0;while IFS= read -r -d '' line; do echo "${line#* }"; ((++count==5)) && break; done < <(find . -type f -printf '%s %p\0' | sort -znr)
2013-03-19 17:19:26
User: sharfah
Functions: echo find read sort
Tags: sort find head
-4

This command is more robust because it handles spaces, newlines and control characters in filenames. It uses printf, not ls, to determine file size.
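
The same command spelled out with comments (a sketch; -printf and sort -z need GNU findutils and coreutils):

count=0
while IFS= read -r -d '' line; do    # read NUL-terminated records; IFS= and -r preserve whitespace and backslashes
  echo "${line#* }"                  # strip the leading "size " prefix, printing just the path
  ((++count == 5)) && break          # stop after the top five
done < <(find . -type f -printf '%s %p\0' | sort -znr)  # emit "size path" records, NUL-separated, largest first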

find . -type f -exec ls -s {} \; | sort -n -r | head -5
find /some/path -type f -printf '%f\n' | grep -o '\..\+$' | sort | uniq -c | sort -rn
2013-03-18 14:42:29
User: skkzsh
Functions: find grep sort uniq
2

Gets the longest match for the file extension (e.g. for 'foo.tar.gz' you get '.tar.gz' instead of '.gz').
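
A quick way to see the greedy match in action (grep -o prints only the matched text):

echo 'foo.tar.gz' | grep -o '\..\+$'   # prints ".tar.gz", not ".gz"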

find /some/path -type f | gawk -F/ '{print $NF}' | gawk -F. '/\./{print $NF}' | sort | uniq -c | sort -rn
2013-03-18 14:40:26
User: skkzsh
Functions: find gawk sort uniq
0

If you have GNU findutils, you can get only the file name with

find /some/path -type f -printf '%f\n'

instead of

find /some/path -type f | gawk -F/ '{print $NF}'
for i in `gpg --list-sigs | perl -ne 'if(/User ID not found/){s/^.+([a-fA-F0-9]{8}).*/\1/; print}' | sort | uniq`; do gpg --keyserver-options no-auto-key-retrieve --recv-keys $i; done
2013-03-10 09:15:15
User: hank
Functions: gpg perl sort
Tags: bash GPG sed fetch
0

The original command doesn't work for me - it does something weird with sed (-r) and xargs (-i), with underscores all over...

This one works on OS X Lion. I haven't tested it anywhere else, but if you have bash, gpg and perl, it should work.
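
A commented sketch of the same loop (assumes gpg output where unknown signatures are reported as "User ID not found" with an 8-hex-digit key ID):

for i in $(gpg --list-sigs | perl -ne 'if(/User ID not found/){s/^.+([a-fA-F0-9]{8}).*/\1/; print}' | sort | uniq); do
  # fetch each unknown key explicitly, without auto-retrieving anything else
  gpg --keyserver-options no-auto-key-retrieve --recv-keys "$i"
done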

prlimit --cpu=10 sort -u hugefile
2013-02-27 15:59:11
User: mhs
Functions: sort
Tags: cpu util-linux
4

Similar to `cpulimit`, although `prlimit` ships with recent versions of util-linux.

Example: limit CPU consumption to 10% for a math problem which ordinarily takes up 100% CPU:

Before:

bc -l <(echo "1234123412341234^12341234")

See the difference `prlimit` makes:

prlimit --cpu=10 bc -l <(echo "1234123412341234^12341234")

To actually monitor the CPU usage, use `top`, `sar`, etc.. or:

pidstat -C 'bc' -hur -p ALL 1
find . -type f -size +0 -printf "%-25s%p\n" | sort -n | uniq -D -w 25 | sed 's/^\w* *\(.*\)/md5sum "\1"/' | sh | sort | uniq -w32 --all-repeated=separate
2013-02-23 20:44:20
User: jimetc
Functions: find sed sh sort uniq
0

Avoids the nested 'find' commands but doesn't seem to run any faster than syssyphus's solution.
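
How the two-stage filter works, as a commented sketch (group by size first, which is cheap, then checksum only the size collisions; assumes GNU find, uniq and md5sum):

find . -type f -size +0 -printf "%-25s%p\n" |  # print the size padded to a fixed 25-char column, then the path
  sort -n |                                    # bring files of equal size together
  uniq -D -w 25 |                              # keep only lines whose 25-char size column repeats
  sed 's/^\w* *\(.*\)/md5sum "\1"/' | sh |     # build and run an md5sum command for each candidate
  sort | uniq -w32 --all-repeated=separate     # group files whose 32-char hash repeats, separated by blank lines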