What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.
Terminal - Commands using uniq - 207 results
ls | tr '[:upper:]' '[:lower:]' | grep -oP '\.[^\.]+$' | sort | uniq -c | sort -n
2014-01-30 11:37:27
User: icefyre
Functions: grep ls sort tr uniq
1

Displays a list of all file extensions in the current directory and how many files there are of each extension, in ascending order (case insensitive).
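
For instance, in a directory with a handful of mixed-case files, the output might look like this (hypothetical names and counts):

$ ls | tr '[:upper:]' '[:lower:]' | grep -oP '\.[^\.]+$' | sort | uniq -c | sort -n
      1 .png
      3 .txt
      7 .jpg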

find . -type f -not -empty -printf "%-25s%p\n"|sort -n|uniq -D -w25|cut -b26-|xargs -d"\n" -n1 md5sum|sed "s/  /\x0/"|uniq -D -w32|awk -F"\0" 'BEGIN{l="";}{if(l!=$1||l==""){printf "\n%s\0",$1}printf "\0%s",$2;l=$1}END{printf "\n"}'|sed "/^$/d"
2013-10-22 13:34:19
User: alafrosty
Functions: awk cut find sed sort uniq xargs
0

* Find all file sizes and file names from the current directory down (replace "." with a target directory as needed).

* sort the file sizes in numeric order

* List only the duplicated file sizes

* drop the file sizes so there are simply a list of files (retain order)

* calculate md5sums on all of the files

* replace the first instance of two spaces (md5sum output) with a \0

* drop the unique md5sums so only duplicate files remain listed

* Use AWK to aggregate identical files on one line.

* Remove the blank line from the beginning (This was done more efficiently by putting another "IF" into the AWK command, but then the whole line exceeded the 255 char limit).

>>>> Each output line contains the md5sum and then all of the files that have that identical md5sum. All fields are \0 delimited. All records are \n delimited. (A line-by-line sketch of the pipeline follows below.)
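
For readability, here is the same pipeline with one stage per line (a sketch only; same tools, same flags, unchanged behavior):

find . -type f -not -empty -printf "%-25s%p\n" |  # print size padded to 25 columns, then the path
  sort -n |                                       # sort numerically so equal sizes are adjacent
  uniq -D -w25 |                                  # keep only lines whose 25-char size column repeats
  cut -b26- |                                     # drop the size column, leaving just the paths
  xargs -d "\n" -n1 md5sum |                      # hash each remaining candidate file
  sed "s/  /\x0/" |                               # turn md5sum's double space into a NUL separator
  uniq -D -w32 |                                  # keep only repeated 32-character hashes
  awk -F"\0" 'BEGIN{l=""}
    { if (l != $1 || l == "") printf "\n%s\0", $1  # start a new record for each new hash
      printf "\0%s", $2; l = $1 }                  # append every file sharing that hash
    END { printf "\n" }' |
  sed "/^$/d"                                     # remove the leading blank line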

grep cwd /var/log/exim_mainlog | grep -v /var/spool | awk -F"cwd=" '{print $2}' | awk '{print $1}' | sort | uniq -c | sort -n
find . -type f | perl -ne 'chop(); $ext = substr($_, rindex($_, ".") + 1); print "$ext\n";' | sort | uniq --count | sort -n
2013-09-26 07:45:19
User: t3o
Functions: find perl sort uniq
0

When trying to find an error in a hosted project it's interesting to find out how the source is organized: Are there .inc files? Or .php files only? Or .xml files that probably contain translated texts?
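
For a typical PHP project, the tail of the output might look like this (hypothetical counts; note the extensions are printed without the leading dot):

$ find . -type f | perl -ne 'chop(); $ext = substr($_, rindex($_, ".") + 1); print "$ext\n";' | sort | uniq --count | sort -n
      3 xml
     12 inc
     87 php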

cut -d " " -f1,4 access_log | sort | uniq -c | sort -rn | head
2013-09-20 21:29:32
User: zlemini
Functions: cut sort uniq
1

Analyze an Apache access log for the time period with the most activity and display the hit count, requesting IP and the timestamp. May help detect a brute-force DoS attack.
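
Sample output (hypothetical counts, with addresses from the 203.0.113.0/24 and 198.51.100.0/24 documentation ranges):

$ cut -d " " -f1,4 access_log | sort | uniq -c | sort -rn | head -3
    512 203.0.113.7 [20/Sep/2013:21:10:01
    498 203.0.113.7 [20/Sep/2013:21:10:02
     41 198.51.100.23 [20/Sep/2013:20:55:13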

cut -d ' ' -f 1 /var/log/apache2/access_logs | uniq -c | sort -n
2013-09-17 20:05:03
User: BorneBjoern
Functions: cut sort uniq
0

Avoiding UUOC!

cut can handle files as well. No need for a cat.
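
The difference in a nutshell; both produce the same list of first fields, but the second spawns one process fewer:

cat /var/log/apache2/access_logs | cut -d ' ' -f 1   # useless use of cat
cut -d ' ' -f 1 /var/log/apache2/access_logs         # cut reads the file itself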

grep "10/Sep/2013" access.log| cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":"$3}' | sort -nk1 -nk2 | uniq -c | awk '{ if ($1 > 10) print $0}'
netstat -ntu | awk ' $5 ~ /^(::ffff:|[0-9|])/ { gsub("::ffff:","",$5); print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr
2013-09-10 19:28:06
User: mrwulf
Functions: awk cut netstat sort uniq
1

Same as the rest, but handles IPv4-mapped IPv6 addresses (the ::ffff: prefix). Also sorts in the order that you're probably looking for.
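
The gsub is what strips the IPv4-mapped prefix; in isolation (hypothetical address from the documentation range):

$ echo "::ffff:203.0.113.7:443" | awk '{ gsub("::ffff:","",$1); print $1 }' | cut -d: -f1
203.0.113.7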

cat /var/log/apache2/access_logs | cut -d' ' -f1 | sort | uniq -c | sort -n
2013-09-07 23:57:31
User: while0pass
Functions: cat cut sort uniq
1

The first sort is necessary for the IPs in the list to actually be made unique, since uniq only collapses adjacent duplicates.
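
A quick illustration of the difference:

$ printf 'a\nb\na\n' | uniq -c
      1 a
      1 b
      1 a
$ printf 'a\nb\na\n' | sort | uniq -c
      2 a
      1 b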

cat /var/log/apache2/access_logs | cut -d ' ' -f 1 | uniq -c | sort -n
2013-09-02 13:04:47
User: basvdburg
Functions: cat cut sort uniq
1

Shows, per IP, how many requests were made to the Apache web server.

cut -d, -f1 /var/opt/example/dumpfile.130610_subscriber.csv | cut -c3-5 | sort | uniq -c | sed -e 's/^ *//;/^$/d' | awk -F" " '{print $2 "," $1}' > SubsxPrefix.csv
2013-07-17 07:58:56
User: neomefistox
Functions: awk cut sed sort uniq
Tags: Linux UNIX
0

dumpfile is a CSV file whose first field is a phone number in the format CC+10 digits.

Empty lines are deleted before the output, which is in the format "prefix,occurrences".
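
For example, with a hypothetical three-line dump where the country code is 34 (so characters 3-5 hold the prefix):

$ cat dumpfile.csv
346121234567,Alice,active
346129876543,Bob,active
349110000001,Carol,idle
$ cut -d, -f1 dumpfile.csv | cut -c3-5 | sort | uniq -c | sed -e 's/^ *//;/^$/d' | awk -F" " '{print $2 "," $1}'
612,2
911,1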

sudo /usr/sbin/exim -bp | sed -n '/\*\*\* frozen \*\*\*/,+1!p' | awk '{print $1}' | tr -d '[:blank:]' | grep @ | sort | uniq -c | sort -n
more restart_weblogic.log | grep "LISTEN" | awk '{ print $7 }' | uniq | wc -l
tail -F some.log | perl -ne 'print time(), "\n";' | uniq -c
curl -k https://Username:Password@api.del.icio.us/v1/posts/all?red=api | xml2 | \grep '@href' | cut -d\= -f 2- | sort | uniq | linkchecker -r0 --stdin --complete -v -t 50 -F blacklist
2013-05-04 17:43:21
User: bbelt16ag
Functions: cut sort uniq
-1

This command queries the delicious API, runs the XML through xml2, grabs the URLs, drops the first '='-delimited column with cut, passes the list through uniq to remove any duplicates, and then hands it to linkchecker, which checks the links. The broken links go into the blacklist in ~/.linkchecker/blacklist. Please see the manual pages for further info. It took me a few days to figure this one out; I hope you enjoy it. Also, don't hit the API more than once every few seconds or you can get banned by delicious; see their site for info. Updated: no longer recursive (-r0).

awk '{print $1}' ~/.bash_history | sort | uniq -c | sort -rn | head -n 10
awk '{ print $9 }' access.log | sort | uniq -c | sort -nr | head -n 10
awk '/Dec\/2012/ {print $1,$8}' logfile | grep -ivE '(.gif|.jpg|.png|favicon|.css|.js|robots.txt|wp-l|wp-term)' | sort | uniq -c | sort -rn | head -n 20
uniq -c | sed -r 's/([0-9]+)\s(.*)/"\2": \1,/;$s/,/\n}/;1i{'
parallel -j 50 ssh {} "ls" ::: host1 host2 hostn | sort | uniq -c
2013-04-12 11:56:41
User: macoda
Functions: sort ssh uniq
1

parallel can be installed on your central node and can be used to run a command multiple times.

In this example, multiple ssh connections are used to run commands. (-j is the number of jobs to run at the same time). The result can then be piped to commands to perform the "reduce" stage. (sort then uniq in this example).

This example assumes "keyless ssh login" has been set up between the central node and all machines in the cluster.

bashreduce may also do what you want.
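
A minimal sketch of the one-time keyless-login setup this relies on (hypothetical user and host names):

ssh-keygen -t rsa            # generate a key pair; accept the defaults
ssh-copy-id user@host1       # install the public key on each cluster node
ssh-copy-id user@host2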

svn ls -R | egrep -v -e "\/$" | tr '\n' '\0' | xargs -0 svn blame | awk '{print $2}' | sort | uniq -c | sort -nr
2013-04-10 19:37:53
User: rymo
Functions: awk egrep ls sort tr uniq xargs
Tags: svn count
1

Makes this usable on OS X with filenames containing spaces. Note: it will still break if filenames contain newlines... possible, but who does that?!
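
The tr '\n' '\0' | xargs -0 trick in isolation; spaces survive, though an embedded newline would still split a name:

$ printf 'a b\nc d\n' | tr '\n' '\0' | xargs -0 -n1 echo
a b
c d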

netstat -antu | awk '{print $5}' | awk -F: '{print $1}' | sort | uniq -c | sort -n
2013-04-08 19:46:41
User: wejn
Functions: awk netstat sort uniq
-1

The output also contains some garbage (text fragments from netstat's headers), but it's good enough for a quick check of who's overloading your server.

git log | grep Date | awk '{print " : "$4" "$3" "$6}' | uniq -c
find /some/path -type f -printf '%f\n' | grep -o '\..\+$' | sort | uniq -c | sort -rn
2013-03-18 14:42:29
User: skkzsh
Functions: find grep sort uniq
2

Gets the longest match for the file extension (e.g. for 'foo.tar.gz' you get '.tar.gz' instead of '.gz').
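
The greedy match next to the usual last-extension match:

$ printf 'foo.tar.gz\n' | grep -o '\..\+$'     # greedy: everything from the first dot
.tar.gz
$ printf 'foo.tar.gz\n' | grep -o '\.[^.]\+$'  # only the final extension
.gz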

find /some/path -type f | gawk -F/ '{print $NF}' | gawk -F. '/\./{print $NF}' | sort | uniq -c | sort -rn
2013-03-18 14:40:26
User: skkzsh
Functions: find gawk sort uniq
0

If you have GNU findutils, you can get only the file name with

find /some/path -type f -printf '%f\n'

instead of

find /some/path -type f | gawk -F/ '{print $NF}'