What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the three Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Efficient count files in directory (no recursion)
perl -e 'if(opendir D,"."){@a=readdir D;print $#a-1,"\n"}'
2009-07-23 20:14:33
User: recursiverse
Functions: perl
time perl -e 'if(opendir D,"."){@a=readdir D;print $#a - 1,"\n"}'

real    0m0.497s
user    0m0.220s
sys     0m0.268s

time { ls |wc -l; }

real    0m3.776s
user    0m3.340s
sys     0m0.424s


** EDIT: turns out this perl one-liner is mostly masturbation. this is slightly faster:

find . -maxdepth 1 | wc -l

sh-3.2$ time { find . -maxdepth 1|wc -l; }

real    0m0.456s
user    0m0.116s
sys     0m0.328s
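A caveat worth flagging: find . -maxdepth 1 also prints the starting directory "." itself, so its count comes out one higher than what the perl version reports. On find implementations that support -mindepth (GNU and BSD find both do), the starting point can be skipped so the two counts match:

find . -mindepth 1 -maxdepth 1 | wc -l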

** EDIT: now a slightly faster perl version

perl -e 'if(opendir D,"."){++$c foreach readdir D}print $c-2,"\n"'

sh-3.2$ time perl -e 'if(opendir D,"."){++$c foreach readdir D}print $c-2,"\n"'

real    0m0.415s
user    0m0.176s
sys     0m0.232s
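For anyone who doesn't read perl, here is the same counting logic spelled out as a commented script - a sketch equivalent to the one-liners above (readdir also returns "." and "..", hence the subtraction):

#!/usr/bin/perl
# Count entries in the current directory without recursing.
opendir my $dh, "." or die "cannot open .: $!";  # open the directory
my @entries = readdir $dh;                       # every entry, including "." and ".."
closedir $dh;
print scalar(@entries) - 2, "\n";                # drop "." and ".." from the count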



What others think

what if you have to recurse? I don't know how to extend the perl to recurse, but it would be interesting to compare to

find . -type f | wc -l

On the speed difference, I'm guessing perl has an advantage since it really doesn't deal with the filename strings like ls must. Somehow ls|wc seems to use twice as much system call time as the perl but also spends more time in userspace presumably shuttling filenames around. Any ideas?

Comment by bwoodacre 323 weeks and 6 days ago
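One way to test that hypothesis, assuming GNU strace is available, is to let strace tally the system calls each command makes:

strace -c ls > /dev/null
strace -c perl -e 'if(opendir D,"."){@a=readdir D;print $#a-1,"\n"}' > /dev/null

The -c flag prints a per-syscall count and time summary on stderr, which should show where ls spends its extra system time.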

And if it is the filename overhead, I wonder how much faster 'ls -i | wc -l' is.

Comment by bwoodacre 323 weeks and 6 days ago

In my home, I got the following counts:

$ find . -type f | wc -l
15812
$ perl -e 'if(opendir D,"."){@a=readdir D;print $#a-1,"\n"}'
109
$ ls |wc -l
14
Comment by kzh 323 weeks and 6 days ago

@kzh you forgot to time the results - that is what is being measured! Also, your home dir is a bad place to test: it contains many hidden files and directories, which ls skips but perl counts, as well as many levels of subdirectories, which only the recursive find descends into. That is why your three counts differ.

Comment by bwoodacre 323 weeks and 6 days ago
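As an aside, for a non-recursive count that includes hidden entries - matching what the perl version counts - ls can be asked to list everything except "." and "..":

ls -A | wc -l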

~$ time perl -e 'if(opendir D,"."){@a=readdir D;print $#a - 1,"\n"}'
280

real    0m0.117s
user    0m0.010s
sys     0m0.000s

~$ time { ls |wc -l; }

real    0m0.007s
user    0m0.010s
sys     0m0.000s

Looks like your perl one-liner runs ~15 times slower than ls on my machine.

Comment by tedkozma 323 weeks and 6 days ago


you could do...

perl -e 'use File::Find;$c=0;find sub {++$c;},".";print $c,"\n"'

but it is not faster than

find . | wc -l

which made me realize that

find . -maxdepth 1 | wc -l

...is faster than my original one-liner :}

i'm going to make an edit with this comment

Comment by recursiverse 323 weeks and 6 days ago
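For comparison with the earlier find . -type f | wc -l, the File::Find version can be limited to regular files with a -f test - a sketch along the same lines as the one-liner above:

perl -MFile::Find -e '$c=0;find sub{-f and ++$c},".";print $c,"\n"'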


the perl one-liner should be faster on larger directories. my test directory had ~200k files.

Comment by recursiverse 323 weeks and 6 days ago

a slight modification brings perl slightly ahead again (though not by much and only for very large directories)...

perl -e 'if(opendir D,"."){++$c foreach readdir D}print $c-2,"\n"'

Comment by recursiverse 323 weeks and 5 days ago

ls -U1 | wc -l

is much faster because it doesn't have to read the whole directory listing into memory and sort it.

Comment by hfs 274 weeks and 6 days ago
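Along the same lines, ls -f also disables sorting (and implies -a, so its count includes ".", "..", and hidden entries):

ls -f | wc -l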
