What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

Remove duplicate entries in a file without sorting.

awk '!x[$0]++' <file>
2009-12-20 02:33:21
User: din7
Functions: awk

Using awk, remove duplicate lines from a file without sorting it first (sorting would reorder the contents). awk keeps the lines in their original order, printing only the first occurrence of each; redirect the output into another file to save the de-duplicated result.
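What makes this work: x is an associative array keyed by the whole line ($0). The first time a line appears, x[$0] is 0, so !x[$0]++ is true and awk's default action prints the line; every later occurrence sees a non-zero count and is suppressed. A quick demonstration with made-up input:

```shell
# x[$0] counts how many times each line has been seen; the pattern
# !x[$0]++ is true only on the first sighting, so awk prints each
# distinct line once, in the order it first appeared.
printf 'beta\nalpha\nbeta\ngamma\nalpha\n' | awk '!x[$0]++'
# prints:
# beta
# alpha
# gamma
```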


Alternatives
perl -ne 'print if !$a{$_}++'
2011-02-17 02:18:44
User: doherty
Functions: perl

Reads stdin, and outputs each line only once - without sorting ahead of time. This does use more memory than your system's sort utility.

export PATH=`echo -n "$PATH" | awk -v RS=":" '{ if (!x[$0]++) {printf s $0; s=":"} }'`
awk '!NF || !seen[$0]++'
2015-02-25 17:03:13
User: Soubsoub
Functions: awk

Remove duplicate lines whilst keeping order and empty lines
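Here !NF is true for lines with no fields (empty lines), and || short-circuits, so blank lines are always printed and never counted, while non-blank lines are de-duplicated as before:

```shell
# Blank lines pass the !NF test and are kept every time;
# the duplicate "a" is still removed.
printf 'a\n\nb\na\n\n' | awk '!NF || !seen[$0]++'
# prints: a, a blank line, b, a blank line
```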

glu() { (local IFS="$1"; shift && echo "$*") }; repath() { ( _E=`echo "${PATH//:/$'\n'}" | awk '!x[$0]++'`; glu ":" $_E ) ; } ; PATH=`repath` ; export PATH
2011-06-09 12:11:18
User: Timothy
Functions: awk echo export shift

Thanks to the authors of:

awk '!x[$0]++' <file>

and the author of:

joinargs() { (local IFS="$1"; shift && echo "$*") }

and others, we can have a fast Linux or Android.

IMPORTANT: if you find a priority-order problem in PATH, you can push a directory to the front without duplication as follows:


then ...
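The snippet for that step isn't shown above; a sketch of the idea, reusing the glu/repath helpers from the one-liner (with tr in place of the bash-only ${PATH//:/$'\n'} expansion for portability, and /opt/tools as a purely illustrative directory):

```shell
# Join the remaining arguments with the separator given as the first argument.
glu() { (local IFS="$1"; shift && echo "$*"); }
# Remove duplicate PATH entries, keeping first-seen order.
repath() { (_E=$(echo "$PATH" | tr ':' '\n' | awk '!x[$0]++'); glu ":" $_E); }

PATH="/opt/tools:$PATH"   # prepend the directory that should win
PATH=$(repath)            # any later copy of it is now dropped
export PATH
```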

Check duplication with:

echo $PATH|tr : '\n'|sort|uniq -d

Finally, get a neat line-by-line listing of $PATH:

echo "${PATH//:/$'\n'}"

The speed-up is very noticeable on Android, and builds on Ubuntu Linux are much faster with make and scripts.

I will update the command on request. Timothy from SONY


What others think

sort | uniq


sort -u

Comment by KevinM 362 weeks and 4 days ago

Yes, but that sorts all the rest of the data in as well. awk will leave the rest of the data alone.

Comment by din7 362 weeks and 4 days ago
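The difference KevinM and din7 are discussing is easy to demonstrate:

```shell
# sort -u sorts the surviving lines; awk keeps them in first-seen order.
printf 'pear\napple\npear\nbanana\n' | sort -u
# prints:
# apple
# banana
# pear
printf 'pear\napple\npear\nbanana\n' | awk '!x[$0]++'
# prints:
# pear
# apple
# banana
```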

It's very clever, din7, but you need to describe it better. It doesn't FIND the duplicates in a file, it REMOVES them.

Comment by flatcap 362 weeks and 4 days ago

I generally pass stdout to this command then redirect into another file so I can just see duplicates. The command above is in its original context. Even so, having used this several times in its original context I haven't seen where it actually removes duplicates without further modification. It seems to me that it just prints the duplicates.

Comment by din7 362 weeks and 4 days ago

It prints the lines that aren't duplicated, too. That's why what it's doing is removing the duplicates.

echo -e "aaa\nbbb\naaa" | awk '!x[$0]++'

You'll get:

aaa
bbb

not just "aaa"

Comment by dennisw 362 weeks and 3 days ago

I see what you mean now.

Comment by din7 362 weeks and 2 days ago

I thought flatcap was saying that it modifies the file when the command is executed.

Comment by din7 362 weeks and 2 days ago

Both solutions are very elegant and easily replicated in Unix. Thanks.

Comment by csj565 285 weeks and 3 days ago
