
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…).



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using uniq - 207 results
sort file1 file2 | uniq -d
2010-05-28 10:25:31
User: emacs
Functions: sort uniq
Votes: -1

Print the members that appear in both file1 and file2.
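
As a small illustration (file names are just placeholders): if file1 contains alice and bob, and file2 contains bob and carol, then

sort file1 file2 | uniq -d
bob

This assumes neither file contains internal duplicates; if one might, run sort -u on each file first so a line repeated within a single file is not reported as a member of both.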

cut -d'/' -f3 file | sort | uniq -c
2010-05-23 16:02:51
User: rubenmoran
Functions: cut file sort uniq
Votes: 2

Count the number of times each domain appears in a file whose lines are URLs of the form http://domain/resource.
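
A quick sketch with made-up data: given a urls.txt containing http://example.com/a, http://example.com/b and http://example.org/c,

cut -d'/' -f3 urls.txt | sort | uniq -c
      2 example.com
      1 example.org

This works because splitting http://domain/resource on '/' leaves the domain in the third field (the first is "http:" and the second is empty).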

grep -R usepackage * | cut -d']' -f2 | cut -s -d'{' -f 2 | sed s/"}"/.sty"}"/g | cut -d'}' -f1 | sort | uniq | xargs dpkg -S | cut -d':' -f1 | sort | uniq
tail -n2000 /var/www/domains/*/*/logs/access_log | awk '{print $1}' | sort | uniq -c | sort -n | awk '{ if ($1 > 20)print $1,$2}'
netstat -an | awk '/tcp/ {print $6}' | sort | uniq -c
2010-05-06 17:04:37
User: Kered557
Functions: awk netstat sort uniq
Votes: 1

Counts TCP states from netstat output and displays them as an ordered list.
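
Illustrative output (counts are made up):

netstat -an | awk '/tcp/ {print $6}' | sort | uniq -c
     12 ESTABLISHED
      3 LISTEN
      5 TIME_WAIT

Appending a further | sort -rn orders the list by count rather than by state name.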

grep <something> logfile | cut -c2-18 | uniq -c
2010-04-29 11:26:09
User: buzzy
Functions: cut grep uniq
Tags: uniq grep cut
Votes: 2

The cut range should match the relevant timestamp part of the logfile; uniq then counts the number of occurrences within each time interval.
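
As a sketch, assuming an Apache-style log where each line starts with a timestamp such as [29/Apr/2010:11:26:09, and with an illustrative search term in place of <something>: cut -c2-18 keeps 29/Apr/2010:11:26, so uniq -c gives one count per minute:

grep "GET /index" access_log | cut -c2-18 | uniq -c
     42 29/Apr/2010:11:26
     37 29/Apr/2010:11:27

Since uniq only collapses adjacent duplicates, this relies on the log being in chronological order, which it normally is.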

tail -f access_log | cut -c2-21 | uniq -c
2010-04-29 11:16:54
User: buzzy
Functions: cut tail uniq
Tags: uniq tail cut
Votes: 4

Change the cut range to count hits per 10 seconds, per minute and so on. grep can be used to filter on URL or source IP.
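
A concrete sketch, assuming the standard [29/Apr/2010:11:16:54 timestamp prefix:

tail -f access_log | cut -c2-21 | uniq -c    # hits per second
tail -f access_log | cut -c2-20 | uniq -c    # hits per 10 seconds
tail -f access_log | cut -c2-18 | uniq -c    # hits per minute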

ps hax -o user | sort | uniq -c
printf "\n%25s%10sTOTAL\n" 'FILE TYPE' ' '; for ext in $(find . -iname \*.* | egrep -o '\.[^[:space:].]+$' | egrep -v '\.svn*' | sort -f | uniq -i); do count=$(find . -iname \*$ext | wc -l); printf "%25s%10s%d\n" $ext ' ' $count; done
2010-04-16 21:12:11
User: rkulla
Functions: egrep find printf sort uniq wc
Votes: 0

I created this command to give me a quick overview of how many file types a directory, and all its subdirectories, contains. It works based off file extension, rather than file(1)'s magic output, because it ended up being more accurate and less confusing.

Files that don't have an extension (e.g. README) are generally not important for me to count, but you're free to customize this to fit your needs.
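
Roughly what the output looks like (extensions and counts are made up):

                FILE TYPE          TOTAL
                      .py          14
                      .css          3
                     .html          9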

for i in emerg alert crit error warn ; do awk '$6 ~ /^\['$i'/ {print substr($0, index($0,$6)) }' error_log | sort | uniq -c | sort -n | tail -1; done
2010-04-15 21:47:18
User: zlemini
Functions: awk sort tail uniq
Votes: 4

This searches the Apache error_log for each of the five most significant Apache error levels; if any are found, the date is cut from the output so the entries can be sorted, and the most common occurrence of each error level is printed.
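
Illustrative output, one line per error level that was found (messages and counts are made up):

     37 [error] [client 10.0.0.1] File does not exist: /var/www/html/favicon.ico
      4 [warn] module mod_ssl is already loaded, skipping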

sudo awk '($9 ~ /404/)' /var/log/httpd/www.domain-access_log | awk '{print $2,$9,$7,$11}' | sort | uniq -c
2010-04-09 10:31:50
User: ninjasys
Functions: awk sort sudo uniq
Tags: log error apache
Votes: 1

This command will return a full list of Error 404 pages in the given access log. The following fields are passed to awk:

Hostname ($2), ERROR Code ($9), Missing Item ($7), Referrer ($11)

You can then redirect the output to a file (>> /path/to/file) and open it with OpenOffice as a CSV.
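
Usage sketch (the output path is just an example):

sudo awk '($9 ~ /404/)' /var/log/httpd/www.domain-access_log | awk '{print $2,$9,$7,$11}' | sort | uniq -c >> /tmp/404-report.txt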

awk '$9 == 404 {print $7}' access_log | sort | uniq -c | sort -rn | head
2010-04-08 21:40:53
User: zlemini
Functions: awk sort uniq
Votes: 8

Finds the top ten pages returning an HTTP 404 response code in an Apache log.
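
Illustrative output (paths and counts are made up):

     88 /favicon.ico
     61 /old/page.html
     23 /robots.txt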

history | perl -F"\||<\(|;|\`|\\$\(" -alne 'foreach (@F) { print $1 if /\b((?!do)[a-z]+)\b/i }' | sort | uniq -c | sort -nr | head
2010-04-08 13:46:09
User: alperyilmaz
Functions: perl sort uniq
Votes: 4

Most "most used commands" approaches do not consider pipes and other complexities.

This approach handles pipes, process substitution via backticks or $(), and multiple commands separated by ;.

The Perl regular expression breaks each line up on |, <(, ;, ` or $(, and picks the first word of each fragment (excluding "do", to cope with for loops).

Note: if you use lots of Perl one-liners, the embedded Perl commands will be counted as well, since the semicolon is used as a separator.
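
A worked example of the splitting: a history line such as

  501  sort file | uniq -c; echo done

is broken into the fragments "sort file", "uniq -c" and "echo done", so the counted words are sort, uniq and echo.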

cat /etc/apache2/sites-enabled/* | egrep 'ServerAlias|ServerName' | tr -s " " | sed 's/^[ ]//g' | uniq | cut -d ' ' -f 2 | sed 's/www.//g' | sort | uniq
2010-04-08 08:51:17
User: chronosMark
Functions: cat cut egrep sed sort tr uniq
Votes: 0

Get a list of all the unique hostnames from the Apache configuration files. Handy for seeing what sites are running on a server.
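
A small sketch: given a vhost file containing

ServerName www.example.com
ServerAlias blog.example.com

the output is example.com and blog.example.com (the final sed strips the leading www.). Note that cut -f 2 only keeps the first name on a ServerAlias line that lists several.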

cut -d\ -f 1 ~/.bash_history | sort | uniq -c | sort -rn | head -n 10 | sed 's/.*/ &/g'
svn log -r {`date +"%Y-%m-%d" -d "1 month ago"`}:HEAD|grep '^r[0-9]' |cut -d\| -f2|sort|uniq -c
LC_ALL=C sort file | uniq -c | sort -n -k1 -r
grep current_state= /var/log/nagios/status.dat|sort|uniq -c|sed -e "s/[\t ]*\([0-9]*\).*current_state=\([0-9]*\)/\2:\1/"|tr "\n" " "
svn log -r {`date "+%Y-%m-%d"`}:HEAD|grep '^r[0-9]' |cut -d\| -f2|sort|uniq -c
find . -type f |sed "s#.*/##g" |sort |uniq -c -d
2010-02-17 11:59:54
User: shadycraig
Functions: find sed sort uniq
Votes: 0

Useful for C projects where header file names must be unique (e.g. when using autoconf/automake), or when diagnosing whether the wrong header file is being used because of duplicate file names.
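
Illustrative output (names and counts are made up); only basenames that occur more than once are listed, thanks to uniq -c -d:

      2 config.h
      3 util.h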

find -type d -name ".svn" -prune -o -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type d -name ".svn" -prune -o -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
2010-01-28 09:45:29
User: 2chg
Functions: find md5sum sort uniq xargs
Votes: 2

Improvement of the command "Find Duplicate Files (based on size first, then MD5 hash)" for searching for duplicate files in a directory that contains a Subversion working copy. This way the multiple duplicates in the meta-information directories are ignored.

It can easily be adapted for other VCSs as well; for CVS, for example, change ".svn" into "CVS":

find -type d -name "CVS" -prune -o -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type d -name "CVS" -prune -o -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
find -not -empty -type f -printf "%s\n" | sort | uniq -d | parallel find -type f -size {}c | parallel md5sum | sort | uniq -w32 --all-repeated=separate
2010-01-28 08:40:18
Functions: find md5sum sort uniq
Tags: xargs parallel
Votes: -1

A bit shorter and parallelized. Depending on the speed of your CPU and your disk, this may run faster.

Parallel is from https://savannah.nongnu.org/projects/parallel/

nmap -sP <subnet>.* | egrep -o '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' > results.txt ; for IP in {1..254} ; do echo "<subnet>.${IP}" ; done >> results.txt ; cat results.txt | sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 | uniq -u
cat -n <file> | sort -k 2 | uniq -f 1 | sort -n | cut -f 2-
2010-01-21 18:55:58
User: fpunktk
Functions: cat cut sort uniq
Votes: 4

I wanted to delete all duplicate lines from .bash_history while keeping the order of the remaining lines.

The command cats the file and adds line numbers, then sorts by the second field onwards. uniq then omits repeated lines while skipping the first field (the line number). Finally it sorts by the line numbers again and cuts the numbers off.
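
Usage sketch (writing to a temporary file first, since redirecting straight back onto the file being read would truncate it):

cat -n ~/.bash_history | sort -k 2 | uniq -f 1 | sort -n | cut -f 2- > /tmp/bash_history.dedup && mv /tmp/bash_history.dedup ~/.bash_history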

grep -e `date +%Y-%m-%d` /var/log/dpkg.log | awk '/install / {print $4}' | uniq | xargs apt-get -y remove