
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions,…).


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using wc - 151 results
ls | wc -l
2013-01-22 03:35:35
User: Sebasg
Functions: ls wc
-3

ls shows one file per line when its output is piped, so the -1 flag was not really needed (hence the update).

wc -l counts the lines received from the previous command.
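
Note that wc -l counts newlines, so a filename that itself contains a newline is counted more than once. A newline-safe sketch in bash (counting non-hidden entries via a glob array, assuming hidden files don't matter here):

f=(*); echo ${#f[@]}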

find . -type d | while read -r dir ; do num=`ls -l "$dir" | grep '^-' | wc -l` ; echo "$num $dir" ; done | sort -rnk1 | head
for host in $HOSTNAMES; do ping -q -c3 $host && ssh $host 'command' & for count in {1..15}; do sleep 1; jobs | wc -l | grep -q ^0\$ && break; done; kill %1; done &>/dev/null
for host in $MYHOSTS; do ping -q -c3 $host >/dev/null 2>&1 && ssh -o 'AllowedAuthentications publickey' $host 'command1; command2' & for count in 1 2 3 4 5; do sleep 1; jobs | wc -l | grep -q ^0\$ && break; done; kill %1; done
2012-11-13 23:12:27
User: a8ksh4
Functions: grep host jobs kill ping sleep ssh wc
0

Execute commands serially on a list of hosts. Each ssh connection is made in the background so that if, after five seconds, it hasn't closed, it will be killed and the script will go on to the next system.

Maybe there's an easier way to set a timeout in the ssh options...
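
There is, at least for the connection phase: OpenSSH's ConnectTimeout option caps how long the connection attempt may take (it does not bound the remote command's run time). A minimal sketch:

ssh -o ConnectTimeout=5 $host 'command1; command2'

On systems with GNU coreutils, timeout(1) can bound the whole session instead, e.g. timeout 10 ssh $host 'command'.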

find -maxdepth 3 -type d | while read -r dir; do printf "%s:\t" "$dir"; find "$dir" | wc -l; done
2012-10-15 15:00:09
User: brainstorm
Functions: find printf read wc
1

Counts the entries present in the different directories, recursively. Change maxdepth to get more or less insight into the directory hierarchy.

Found at unix.stackexchange.com:

http://unix.stackexchange.com/questions/4105/how-do-i-count-all-the-files-recursively-through-directories
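
Note that the inner find also counts directories and the starting directory itself. To count only regular files, a small variation of the same loop restricts the inner find:

find -maxdepth 3 -type d | while read -r dir; do printf "%s:\t" "$dir"; find "$dir" -type f | wc -l; done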

ls|wc -l
NUMCPUS=`grep ^proc /proc/cpuinfo | wc -l`; FIRST=`cat /proc/stat | awk '/^cpu / {print $5}'`; sleep 1; SECOND=`cat /proc/stat | awk '/^cpu / {print $5}'`; USED=`echo 2 k 100 $SECOND $FIRST - $NUMCPUS / - p | dc`; echo ${USED}% CPU Usage
2012-10-02 03:57:51
User: toxick
Functions: awk echo sleep wc
0

Using the output of 'ps' to determine CPU usage is misleading, as the CPU column in 'ps' shows CPU usage per process over the entire lifetime of the process. In order to get *current* CPU usage (without scraping a top screen) you need to pull some numbers from /proc/stat. Here, we take two readings, one second apart, determine how much IDLE time was spent across all CPUs, divide by the number of CPUs, and then subtract from 100 to get non-idle time.
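
The same computation in plain bash arithmetic (a sketch, integer precision only, assuming the usual USER_HZ of 100 ticks per second):

NUMCPUS=$(grep -c ^processor /proc/cpuinfo); FIRST=$(awk '/^cpu /{print $5}' /proc/stat); sleep 1; SECOND=$(awk '/^cpu /{print $5}' /proc/stat); echo "$(( 100 - (SECOND - FIRST) / NUMCPUS ))% CPU Usage"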

find . \( -iname '*.cpp' -o -iname '*.h' \) -exec wc -l {} \; | sort -n | cut --delimiter=. -f 1 | awk '{s+=$1} END {print s}'
2012-09-19 15:21:01
User: jecxjoopenid
Functions: awk cut find sort wc
0

Searches for *.cpp and *.h in directory structure, counts the number of lines for each matching file and adds the counts together.
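
An alternative sketch with the same effect lets a single wc do the summing by concatenating all matches, which avoids the cut/awk post-processing:

find . \( -iname '*.cpp' -o -iname '*.h' \) -exec cat {} + | wc -l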

netstat -an | grep 80 | wc -l
centralized(){ L=`echo -n $*|wc -c`; echo -e "\x1b[$[ ($COLUMNS / 2) - ($L / 2) ]C$*"; }
2012-08-16 18:19:26
User: xenomuta
Functions: echo wc
Tags: echo ansi
0

Echoes its arguments horizontally centered on the screen, based on the terminal width.
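
For example, in an interactive bash session (where COLUMNS is set):

centralized "Hello, world"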

git log --pretty=oneline b56b83.. | wc -l
git log --summary 223286b.. | grep 'Author:' | wc -l
ls -d1 pattern*/ | wc -l
find /some/path -type f -and -iregex '.*\.mp3$' -and -print0 | tr -d -c '\000' |wc -c
2012-03-31 21:57:33
User: kyle0r
Functions: find tr wc
1

In this example, the command will recursively find files (-type f) under /some/path, where the path ends in .mp3, case insensitive (-iregex).

It will then print the results terminated by the null character (octal 000) rather than by newlines (-print0), suitable for piping to xargs -0. This type of output avoids issues with garbage in paths, like unclosed quotes.

The tr command then strips away everything but the null chars, finally piping to wc -c, to get a character count.

I have found this very useful to verify one is getting the right number of results before actually processing them through xargs or similar. Yes, one can issue the find without the -print0 and use wc -l; however, if you want to be 1000% sure your find command is giving you the expected number of results, this is a simple way to check.

The approach can be made in to a function and then included in .bashrc or similar. e.g.

count_chars() { tr -d -c "$1" | wc -c; }

In this form it provides a versatile character counter of text streams :)
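
For example, counting the null-terminated results from the find above:

find /some/path -type f -and -iregex '.*\.mp3$' -and -print0 | count_chars '\000'

or counting lines on stdin with count_chars '\n'.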

find . -type f -name "*.*" -exec cat {} > totalLines 2> /dev/null \; && wc -l totalLines && rm totalLines
who am i | wc -l
cat z.log | cut -d ':' -f1 | sort | uniq | xargs -l1 -iFF echo 'echo FF $(cat z.log | grep -e "^FF" | grep -e Timeout | wc -l )' | bash
find . -maxdepth 1 -type f | wc -l
sayspeed() { for i in $(seq 1 `echo "$1"|wc -c`); do echo -n "`echo $1 |cut -c ${i}`"; sleep 0.1s; done; echo "";}
2012-02-11 05:51:42
User: kundan
Functions: echo seq sleep wc
0

Prints its argument one character at a time. Change the sleep value to set the print interval, and use it to say whatever you want.
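
For example:

sayspeed "this appears one character every tenth of a second"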

find . -type f -name "*.php" -exec wc -l {} +;
function xzv() { THREADS=`grep processor /proc/cpuinfo | wc -l`; for file in "$@"; do pv -s `stat -c%s "$file"` < "$file" | pxz -q -T $THREADS > "$file.xz" ; done; }
2011-12-14 08:22:08
User: oernii2
Functions: file wc
0

You need pxz for the actual work (http://jnovy.fedorapeople.org/pxz/). The function could be improved with better multi-file and stdin/stdout support.
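
Usage, with hypothetical file names:

xzv big.log backup.tar

pv draws a progress bar for each file while pxz compresses on all detected cores.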

svn info -R --xml file:///path/to/rep | grep kind=\"file\"|wc -l
2011-12-06 20:06:42
User: ijeyanthan
Functions: grep info wc
Tags: svn subversion
2

Number of files in an SVN repository

This command will output the total number of files in an SVN repository.
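
An alternative sketch with the same effect: svn ls -R lists the repository recursively and marks directories with a trailing slash, so filtering those out leaves only files:

svn ls -R file:///path/to/rep | grep -vc '/$'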

find /usr/include/ \( -name '*.[ch]pp' -o -name '*.[ch]' \) -print0 | xargs -0 wc -l | tail -1
find /usr/include/ \( -name '*.[ch]pp' -o -name '*.[ch]' \) -exec cat {} \;|wc -l
2011-12-01 19:58:52
User: kerim
Functions: cat find wc
-4

Count the lines in your source and header files.

For example, for Java, change the command like this:

find . -name '*.java' -exec cat {} \;|wc -l

for f in $(ls -A ./dir); do echo -n $f && diff original.txt ./dir/$f | wc -l ; done | perl -e 'my $h={}; while (<>) { chomp; if (/^(\S+?)\s*(\d+?)$/){$h->{$1}=$2;} } for my $k (sort { $h->{$a} <=> $h->{$b} } keys %$h ){ print "$k\t$h->{$k}\n"}'