What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Efficiently count files in a directory (no recursion)

perl -e 'if(opendir D,"."){@a=readdir D;print $#a-1,"\n"}'

2009-07-23 20:14:33
User: recursiverse
Functions: perl
Votes: 1
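
The one-liner is dense, so here is the same logic expanded with comments (a sketch of the identical logic, not a separate command):

perl -e '
  if (opendir D, ".") {     # open the current directory
      @a = readdir D;       # every entry, including "." and ".."
      print $#a - 1, "\n";  # @a has N+2 items (N entries plus "." and ".."), so $#a-1 == N
  }
'
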
time perl -e 'if(opendir D,"."){@a=readdir D;print $#a - 1,"\n"}'
205413

real    0m0.497s
user    0m0.220s
sys     0m0.268s

time { ls | wc -l; }
205413

real    0m3.776s
user    0m3.340s
sys     0m0.424s

*********

** EDIT: it turns out this perl one-liner is mostly showing off. This is slightly faster:

find . -maxdepth 1 | wc -l

sh-3.2$ time { find . -maxdepth 1 | wc -l; }
205414

real    0m0.456s
user    0m0.116s
sys     0m0.328s
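
Note that the count here is 205414, not 205413: find prints the starting point "." as well as the directory's entries. To drop it, GNU find's -mindepth can be used (a minor variation, not part of the original post):

find . -mindepth 1 -maxdepth 1 | wc -l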

** EDIT: now a slightly faster perl version

perl -e 'if(opendir D,"."){++$c foreach readdir D}print $c-1,"\n"'

sh-3.2$ time perl -e 'if(opendir D,"."){++$c foreach readdir D}print $c-1,"\n"'
205414

real    0m0.415s
user    0m0.176s
sys     0m0.232s

(Here $c counts every entry including "." and "..", so $c-1 gives the same total as find above.)

What others think

What if you have to recurse? I don't know how to extend the perl to recurse, but it would be interesting to compare against

find . -type f | wc -l

On the speed difference, I'm guessing perl has an advantage since it doesn't really deal with the filename strings the way ls must. Somehow ls|wc seems to use twice as much system-call time as the perl, but also spends more time in userspace, presumably shuttling filenames around. Any ideas?

Comment by bwoodacre 274 weeks and 1 day ago
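
One way to test that guess, assuming strace is available, is to compare the syscall tallies of the two pipelines directly; strace -c prints a per-syscall summary on exit:

strace -c ls > /dev/null
strace -c perl -e 'if(opendir D,"."){@a=readdir D;print $#a-1,"\n"}' > /dev/null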

And if it is the filename overhead, I wonder how much faster 'ls -i | wc -l' is.

Comment by bwoodacre 274 weeks and 1 day ago

In my home, I got the following counts:

$ find . -type f | wc -l
15812
$ perl -e 'if(opendir D,"."){@a=readdir D;print $#a-1,"\n"}'
109
$ ls | wc -l
14

Comment by kzh 274 weeks ago

@kzh you forgot to time the commands - the timing is what is being measured! Also, your home dir is a bad place to test: it has many hidden files and directories, which ls skips but perl's readdir counts, and find descends into subdirectories, which the other two do not. That is why your file counts vary.

Comment by bwoodacre 274 weeks ago
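
To see the discrepancy concretely, the three commands can be run side by side in the same directory (a sketch using standard options):

ls | wc -l               # visible entries only, no recursion
ls -A | wc -l            # adds hidden entries, still excluding "." and ".."
find . -type f | wc -l   # regular files only, but recursing into subdirectories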

~$ time perl -e 'if(opendir D,"."){@a=readdir D;print $#a - 1,"\n"}'
280

real    0m0.117s
user    0m0.010s
sys     0m0.000s

~$ time { ls | wc -l; }
108

real    0m0.007s
user    0m0.010s
sys     0m0.000s

Looks like your perl line is ~15 times slower than ls on my machine.

Comment by tedkozma 274 weeks ago

@bwoodacre

you could do...

perl -e 'use File::Find;$c=0;find sub {++$c;},".";print $c,"\n"'

but it is not faster than

find . | wc -l

which made me realize that

find . -maxdepth 1 | wc -l

...is faster than my original one-liner :}

I'm going to add this to the post as an edit.

Comment by recursiverse 274 weeks ago
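
As a side note, the File::Find version above counts directories as well as files; to mirror find . -type f it could carry a file test (a sketch using the same module):

perl -e 'use File::Find; my $c = 0; find(sub { ++$c if -f }, "."); print "$c\n"'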

@tedkozma

The perl one-liner should be faster on larger directories - my test directory had ~200k files.

Comment by recursiverse 274 weeks ago

A slight modification brings perl slightly ahead again (though not by much, and only for very large directories)...

perl -e 'if(opendir D,"."){++$c foreach readdir D}print $c,"\n"'

Comment by recursiverse 274 weeks ago

ls -U1 | wc -l

is much faster because it doesn't have to read the whole directory listing into memory and sort it.

Comment by hfs 225 weeks and 1 day ago
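
With GNU ls the 1 in -U1 is redundant when output goes to a pipe (entries are already printed one per line), so a timing run comparable to the ones above would be:

time { ls -U | wc -l; }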
