What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts carrying only commands that receive at least 3 or 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Terminal - Commands tagged count - 32 results
egrep -v '^\s*($|#)' $(git grep -l '#!/bin/.*sh' *) | wc -l
2016-02-15 11:15:48
User: Natureshadow
Functions: egrep grep wc
Tags: git grep count code

Uses git grep for speed; relies on a valid shebang; ignores leading whitespace when stripping comments and blank lines.

find . -name '*.php' | xargs wc -l
2014-12-24 11:15:18
User: erez83
Functions: find wc xargs
Tags: count code

Counts all the lines of code in a specific directory, recursively.

In this case only *.php files are counted, but the pattern can be changed (e.g. to *.*).
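
A null-delimited variant of the same idea (my sketch, not part of the original submission) survives filenames containing spaces:

```shell
# -print0 / xargs -0 keep odd filenames intact; cat-ing everything into a
# single wc avoids the multiple "total" lines xargs can emit for long lists
find . -name '*.php' -print0 | xargs -0 cat | wc -l
```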

find . -name "*.pdf" -exec pdftk {} dump_data output \; | grep NumberOfPages | awk '{print $1,$2}'
2014-11-14 23:36:56
User: mtrgrrl
Functions: awk find grep

Using awk, I changed the line given by sucotronic in command #11733 to print the first and second columns.

for i in */; do echo $(find $i -type f -regextype posix-extended -regex ".*\.(mp3|ogg|wav|flac)" | wc -l) $i ; done
find . -type d -maxdepth 1 -print0 | xargs -0 -I{} sh -c 'find "{}" -type f | grep "\.\(ogg\|mp3\|wav\|flac\)$" | wc -l | tr -d "\n"; echo " {}"'
2013-12-22 13:40:29
User: dbrgn
Functions: echo find grep sh tr wc xargs

This lists the number of ogg/mp3/wav/flac files in each subdirectory of the current directory. The output can be sorted by piping it into "sort -n".

grep -c "search_string" /path/to/file
2013-12-10 18:13:54
User: meatflag
Functions: grep

-c will count the number of times your search matches in the file.
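
One caveat worth knowing: grep -c counts matching *lines*, not total matches. To count every occurrence, a common sketch is to print each match on its own line with -o and count those lines:

```shell
printf 'foo foo\nfoo\n' | grep -c 'foo'           # 2 matching lines
printf 'foo foo\nfoo\n' | grep -o 'foo' | wc -l   # 3 occurrences
```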

for i in `find -L /var/ -wholename \*log\* -type d`; do COUNT=`ls -1U $i | wc -l`; if [ $COUNT -gt 10 ]; then echo $i $COUNT; fi; done
find . -maxdepth 1 -type d -exec sh -c "printf '{} ' ; find '{}' -type f -ls | wc -l" \;
2013-07-29 19:46:35
User: HerbCSO
Functions: find sh

For each directory from the current one, list the counts of files in each of these directories. Change the -maxdepth to drill down further through directories.

find /usr/include/ \( -name '*.[ch]pp' -o -name '*.[ch]' \) -print0 | xargs -0 cat | grep -v "^ *$" | grep -v "^ *//" | grep -v "^ */\*.*\*/" | wc -l
2013-06-17 08:37:37
Functions: cat find grep wc xargs

Counts the lines in your source and header files, ignoring blank lines, C++-style comments, and single-line C-style comments.

It will not ignore blank lines containing tabs, or multi-line C-style comments.
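
The tab limitation can be addressed with POSIX character classes; a sketch (the file name here is hypothetical):

```shell
# [[:space:]] matches tabs as well as spaces, so tab-only "blank" lines and
# tab-indented // comments are stripped too (multi-line /* */ blocks are
# still not handled)
grep -v '^[[:space:]]*$' example.c | grep -v '^[[:space:]]*//' | wc -l
```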

svn ls -R | egrep -v -e "\/$" | xargs svn blame | awk '{count[$2]++}END{for(j in count) print count[j] "\t" j}' | sort -rn
2013-05-03 01:45:12
User: kurzum
Functions: awk egrep ls sort xargs
Tags: svn count

This one has better performance, as it is a one-pass count with awk. For this script it might not matter, but for others it is a good optimization.
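
The one-pass awk tally works on any column-oriented stream; a minimal illustration with made-up author names:

```shell
# count[$2]++ builds an associative array keyed on the second column;
# the END block then prints each tally exactly once
printf 'r1 alice\nr2 bob\nr3 alice\n' \
  | awk '{count[$2]++} END {for (j in count) print count[j] "\t" j}' \
  | sort -rn
```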

svn ls -R | egrep -v -e "\/$" | tr '\n' '\0' | xargs -0 svn blame | awk '{print $2}' | sort | uniq -c | sort -nr
2013-04-10 19:37:53
User: rymo
Functions: awk egrep ls sort tr uniq xargs
Tags: svn count

Makes the command usable on OS X with filenames containing spaces. Note: it will still break if filenames contain newlines... possible, but who does that?!

find . -name "*.pdf" -exec pdftk {} dump_data output \; | grep NumberOfPages | awk '{s+=$2} END {print s}'
grep -c "^$" filename
2012-06-26 17:43:17
User: ankush108
Functions: grep
Tags: grep count empty

This pattern matches empty lines in the file, and -c gives the count.

find /some/path -type f -and -iregex '.*\.mp3$' -and -print0 | tr -d -c '\000' |wc -c
2012-03-31 21:57:33
User: kyle0r
Functions: find tr wc

In this example, the command will recursively find files (-type f) under /some/path, where the path ends in .mp3, case insensitive (-iregex).

It will then output a single line of output (-print0), with results terminated by the null character (octal 000). Suitable for piping to xargs -0. This type of output avoids issues with garbage in paths, like unclosed quotes.

The tr command then strips away everything but the null chars, finally piping to wc -c, to get a character count.

I have found this very useful to verify one is getting the right number of results before actually processing them through xargs or similar. Yes, one can issue the find without the -print0 and use wc -l; however, if you want to be 1000% sure your find command is giving you the expected number of results, this is a simple way to check.

The approach can be made in to a function and then included in .bashrc or similar. e.g.

count_chars() { tr -d -c "$1" | wc -c; }

In this form it provides a versatile character counter of text streams :)
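
For example, counting the colons in a stream:

```shell
count_chars() { tr -d -c "$1" | wc -c; }

printf 'a:b:c:d' | count_chars ':'   # 3 (wc may pad the number with spaces)
```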

find -iname "*.pdf" -exec pdfinfo -meta {} \;|awk '{if($1=="Pages:"){s+=$2}}END{print s}'
2011-12-13 15:02:11
User: Barabbas
Functions: awk find
Tags: awk find pdf count sum

This sums up the page count of multiple pdf files without the useless use of grep and sed which other commandlinefus use.

find /usr/include/ \( -name '*.[ch]pp' -o -name '*.[ch]' \) -print0 | xargs -0 wc -l | tail -1
find /usr/include/ \( -name '*.[ch]pp' -o -name '*.[ch]' \) -exec cat {} \; | wc -l
2011-12-01 19:58:52
User: kerim
Functions: cat find wc

Counts the lines in your source and header files.

For example, for Java, change the command like this:

find . -name '*.java' -exec cat {} \;|wc -l

find /path/folder -type f -name "*.*" -print -exec rm -v {} + | wc -l;
2011-09-19 14:53:37
User: Koobiac
Functions: find rm wc

It does not work without verbose mode (-v is important): wc -l counts the filenames that rm -v prints, one per deleted file.

find . -type f | sed -n 's/..*\.//p' | sort -f | uniq -ic
2011-08-19 00:19:43
User: tyler_l
Functions: find sed sort uniq

Change "sort -f" to "sort" and "uniq -ic" to "uniq -c" to make it case sensitive.
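
A tiny demonstration of the case-insensitive counting, with made-up extensions:

```shell
# -f / -i fold case, so "txt" and "TXT" collapse into one counted group
printf 'txt\nTXT\nmd\n' | sort -f | uniq -ic
```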

awk '{printf("/* %02d */ %s\n", NR,$0)}' inputfile > outputfile
2011-01-04 19:13:55
User: lucasrangit
Functions: awk

I often find the need to number enumerations and other lists when programming. With this command, create a new file called 'inputfile' with the text you want to number. Paste the contents of 'outputfile' back into your source file and fix the tabbing if necessary. You can also change this to output hex numbering by changing the "%02d" to "%02x". If you need to start at 0 replace "NR" with "NR-1". I adapted this from http://osxdaily.com/2010/05/20/easily-add-line-numbers-to-a-text-file/.
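
The same command works on a pipeline, which makes the effect easy to see:

```shell
printf 'RED\nGREEN\nBLUE\n' | awk '{printf("/* %02d */ %s\n", NR, $0)}'
# /* 01 */ RED
# /* 02 */ GREEN
# /* 03 */ BLUE
```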

COUNTER=0; while [ $COUNTER -le 10 ]; do IFS=':'; for LINE in $(cat /tmp/list); do some_command $LINE; done; unset IFS; COUNTER=$((COUNTER+1)); done
2010-09-01 15:09:59
User: slashdot
Functions: cat

At times I find that I need to loop through a file where each value that I need to do something with is not on a separate line, but rather separated with a ":" or a ";". In this instance, I create a loop within which I define 'IFS' to be something other than a whitespace character. In this example, I iterate through a file which only has one line, and several fields separated with ":". The counter helps me define how many times I want to repeat the loop.
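
The IFS trick in isolation can be sketched like this ("process" is a hypothetical stand-in for whatever you run per field):

```shell
process() { echo "got: $1"; }

list='one:two:three'
IFS=':'                 # split unquoted expansions on colons, not whitespace
for LINE in $list; do process "$LINE"; done
unset IFS               # restore the default splitting behaviour
```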

find . \( -iname '*.[ch]' -o -iname '*.php' -o -iname '*.pl' \) -exec wc -l {} + | sort -n
2010-05-03 00:16:02
User: hackerb9
Functions: find sort wc

The same as the other two alternatives, but now less forking! Instead of using '\;' to mark the end of an -exec command in GNU find, you can simply use '+' and it'll run the command only once with all the files as arguments.

This has two benefits over the xargs version: it's easier to read, and spaces in the filenames work automatically (no -print0). [Oh, and there's one less fork, if you care about such things. But, then again, one is equal to zero for sufficiently large values of zero.]

find . \( -iname '*.[ch]' -o -iname '*.php' -o -iname '*.pl' \) | xargs wc -l | sort -n
2010-04-30 12:21:28
User: rbossy
Functions: find sort wc xargs
Tags: find count

find -exec is evil since it launches a process for each file. You get the total as a bonus.

Also, without -n sort will sort by lexical order (that is 9 after 10).
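
The lexical-versus-numeric difference is easy to demonstrate:

```shell
printf '9\n10\n' | sort      # lexical: "10" sorts before "9" ('1' < '9')
printf '9\n10\n' | sort -n   # numeric: 9 before 10
```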

find . \( -iname '*.[ch]' -o -iname '*.php' -o -iname '*.pl' \) -exec wc -l {} \; | sort
2010-04-28 07:18:21
User: rkulla
Functions: find wc
Tags: find count code

Gives you a nice quick summary of how many lines each of your files contains. (In this example, we just check .c, .h, .php and .pl files.) Since we just use wc -l to count, you'll only get a very rough estimate of how many lines of actual code there are. Use a more sophisticated tool if you need accuracy.

printf "\n%25s%10sTOTAL\n" 'FILE TYPE' ' '; for ext in $(find . -iname \*.* | egrep -o '\.[^[:space:].]+$' | egrep -v '\.svn*' | sort -f | uniq -i); do count=$(find . -iname \*$ext | wc -l); printf "%25s%10s%d\n" $ext ' ' $count; done
2010-04-16 21:12:11
User: rkulla
Functions: egrep find printf sort uniq wc

I created this command to give me a quick overview of how many of each file type a directory, and all its subdirectories, contains. It works based on file extension rather than file(1)'s magic output, because that ended up being more accurate and less confusing.

Files that don't have an extension (README) are generally not important for me to count, but you're free to customize this to fit your needs.