
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.
Terminal - Commands using wc - 151 results
history | awk '{a[$'$(echo "1 2 $HISTTIMEFORMAT" | wc -w)']++}END{for(i in a){print a[i] " " i}}' | sort -rn | head
2010-05-02 21:48:53
User: bandie91
Functions: awk echo sort wc
Tags: history awk wc
0

If you use the HISTTIMEFORMAT environment variable, e.g. to timestamp typed commands, $(echo "1 2 $HISTTIMEFORMAT" | wc -w)

gives the number of leading columns per line that contain non-command parts (the history number and the timestamp).

This should make the command work regardless of the format.
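
For example, assuming a two-word timestamp format, the word count comes out to 4, which is exactly the field where the command starts in the history output:

HISTTIMEFORMAT='%F %T '
echo "1 2 $HISTTIMEFORMAT" | wc -w   # prints 4: history number, date, time, so field 4 is the first command word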

find . \( -iname '*.[ch]' -o -iname '*.php' -o -iname '*.pl' \) | xargs wc -l | sort -n
2010-04-30 12:21:28
User: rbossy
Functions: find sort wc xargs
Tags: find count
0

find -exec is evil since it launches a process for each file. You get the total as a bonus.

Also, without -n, sort uses lexical order (that is, 9 sorts after 10).
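
As a sketch, for filenames that contain spaces the pipe can be made null-delimited (assuming GNU or BSD find and xargs):

find . \( -iname '*.[ch]' -o -iname '*.php' -o -iname '*.pl' \) -print0 | xargs -0 wc -l | sort -n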

tail -n0 -f access.log>/tmp/tmp.log & sleep 10; kill $! ; wc -l /tmp/tmp.log
2010-04-29 21:23:46
User: dooblem
Functions: kill sleep tail wc
Tags: tail kill wc sleep
1

Another way of counting the line output of tail over 10 seconds, without requiring pv.

Pipe through cut to get the average per-second rate:

tail -n0 -f access.log>/tmp/tmp.log & sleep 10; kill $! ; wc -l /tmp/tmp.log | cut -c-2

You can also enclose it in a loop and send stderr to /dev/null:

while true; do tail -n0 -f access.log>/tmp/tmp.log & sleep 2; kill $! ; wc -l /tmp/tmp.log | cut -c-2; done 2>/dev/null
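
Note that cut -c-2 only drops the last digit, which approximates dividing by ten for three-digit counts; a sketch using shell arithmetic works for any count:

tail -n0 -f access.log >/tmp/tmp.log & sleep 10; kill $!; echo $(( $(wc -l < /tmp/tmp.log) / 10 ))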

find . \( -iname '*.[ch]' -o -iname '*.php' -o -iname '*.pl' \) -exec wc -l {} \; | sort
2010-04-28 07:18:21
User: rkulla
Functions: find wc
Tags: find count code
2

Gives you a nice quick summary of how many lines each of your files contains. (In this example, we just check .c, .h, .php and .pl files.) Since wc -l only counts lines, you'll get a very rough estimate of how many lines of actual code there are. Use a more sophisticated tool if you need accuracy.
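
If a proper count of code lines (excluding comments and blanks) is what you're after and the cloc tool happens to be installed, it handles this out of the box:

cloc .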

printf "\n%25s%10sTOTAL\n" 'FILE TYPE' ' '; for ext in $(find . -iname \*.* | egrep -o '\.[^[:space:].]+$' | egrep -v '\.svn*' | sort -f | uniq -i); do count=$(find . -iname \*$ext | wc -l); printf "%25s%10s%d\n" $ext ' ' $count; done
2010-04-16 21:12:11
User: rkulla
Functions: egrep find printf sort uniq wc
0

I created this command to give me a quick overview of how many files of each type a directory, and all its subdirectories, contains. It works based on file extension rather than file(1)'s magic output, because that ended up being more accurate and less confusing.

Files that don't have an extension (README) are generally not important for me to count, but you're free to customize this to fit your needs.
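
Should you also want the extension-less files, a rough companion count might be:

find . -type f ! -name '*.*' | wc -l   # files whose names contain no dot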

for i in `find /home/ -maxdepth 1 -type d`; do echo -n $i " ";find $i|wc -l; done
2010-04-07 05:33:49
User: shantanuo
Functions: echo wc
1

Finds the number of files in each folder under /home.
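
Note that the count includes directories and the folder itself; a variant (a sketch) that counts regular files only adds -type f:

for i in `find /home/ -maxdepth 1 -type d`; do echo -n "$i "; find "$i" -type f | wc -l; done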

date +%A | cut -c $(( $(date +%A | wc -c) - 1 ))
2010-04-07 00:23:15
User: DaveQB
Functions: cut date wc
Tags: bash echo cut date wc
0

A command to find out what letter the day name ends in. Can be edited slightly to find out what "any" output ends in.

NB: I haven't tested with weird and wonderful output.
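
A shorter equivalent, assuming bash, takes the last character with substring expansion:

d=$(date +%A); echo "${d: -1}"   # the space matters: ${d: -1} is the last character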

wget randomfunfacts.com -O - 2>/dev/null | grep \<strong\> | sed "s;^.*<i>\(.*\)</i>.*$;\1;" | while read FUNFACT; do notify-send -t $((1000+300*`echo -n $FUNFACT | wc -w`)) -i gtk-dialog-info "RandomFunFact" "$FUNFACT"; done
2010-04-02 09:43:32
User: mtron
Functions: grep read sed wc wget
1

An extension to tali713's random fact generator. It takes the output and sends it to notify-osd. Display time is proportional to the length of the fact.

find . -name file.txt | xargs -e grep "token" -o | wc -l
find * \( -name "*.[hc]pp" -or -name "*.py" -or -name "*.i" \) -print0 | xargs -0 wc -l | tail -n 1
2010-03-25 18:58:29
User: neologism
Functions: find tail wc xargs
Tags: find xargs wc
1

Finds all C++, Python, SWIG files in your present directory (uses "*" rather than "." to exclude invisibles) and counts how many lines are in them. Returns only the last line (the total).

ps uH p <PID_OF_U_PROCESS> | wc -l
2010-03-23 15:05:27
User: mrbyte
Functions: ps wc
Tags: java ps
1

Useful if you have thread problems in Tomcat: this counts the threads of the given process.
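
Note that the ps header line is included in the count; skipping it gives the exact thread count:

ps uH p <PID_OF_U_PROCESS> | tail -n +2 | wc -l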

svn log -v --xml file:///path/to/rep | grep kind=\"file\"|wc -l
2010-03-23 12:16:06
User: andremta
Functions: grep wc
Tags: svn subversion
3

This command will output the total number of files in an SVN repository.

more /var/log/auth.log |grep "month"|grep ipop|grep "failed"|wc -l
2010-03-12 18:48:53
User: efuoax
Functions: grep more wc
-2

Useful if you want to check whether something is running a brute-force dictionary attack against your server.
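
For example, to count failed SSH logins in March on a Debian-style auth.log (the month and the "Failed password" pattern here are illustrative; adjust them to your log format):

grep '^Mar' /var/log/auth.log | grep 'Failed password' | wc -l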

echo $(( `ulimit -u` - `find /proc -maxdepth 1 \( -user $USER -o -group $GROUPNAME \) -type d|wc -l` ))
2010-03-12 08:42:49
User: AskApache
Functions: echo wc
1

There is a limit to how many processes you can run at the same time for each user, especially with web hosts. If the maximum # of processes for your user is 200, then the following sets OPTIMUM_P to 100.

OPTIMUM_P=$(( (`ulimit -u` - `find /proc -maxdepth 1 \( -user $USER -o -group $GROUPNAME \) -type d|wc -l`) / 2 ))

This is very useful in scripts because it is a fast, low-resource way (compared to ps, who, lsof, etc.) to determine how many processes are currently running for a given user. The number of currently running processes is subtracted from the high limit set up for the account (see limits.conf, pam, initscript).

An easy-to-understand example: this searches the current directory for shell scripts, and runs up to 100 'file' commands at the same time, greatly speeding up the command.

find . -type f | xargs -P $OPTIMUM_P -iFNAME file FNAME | sed -n '/shell script text/p'

I am using it in my http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html, especially for the xargs command. xargs has a -P option that lets you specify how many processes to run at the same time. For instance, if you have 1000 URLs in a text file and wanted to download all of them quickly with curl, you could download 100 at a time (check ps output on a separate [pt]ty for proof) like this:

cat url-list.txt | xargs -I '{}' -P $OPTIMUM_P curl -O '{}'

I like to do things as fast as possible on my servers. I have several types of servers and hosting environments, some with very restrictive jailed shells with a 20-process limit, some with 200, some with 8000, so in the jailed shells my xargs -P10 would kill my shell or dump core. Using the above I can set the -P value dynamically, so xargs always works, like this:

cat url-list.txt | xargs -I '{}' -P $OPTIMUM_P curl -O '{}'

If you were building a process-killer (very common for cheap hosting) this would also be handy.

Note that if you are only allowed 20 or so processes, you should just use -P1 with xargs.

alias busy='my_file=$(find /usr/include -type f | sort -R | head -n 1); my_len=$(wc -l $my_file | awk "{print $1}"); let "r = $RANDOM % $my_len" 2>/dev/null; vim +$r $my_file'
2010-03-09 21:48:41
User: busybee
Functions: alias awk find head sort vim wc
22

This makes an alias for a command named 'busy'. The 'busy' command opens a random file in /usr/include at a random line with vim. Drop this in your .bash_aliases and make sure that file is sourced from your .bashrc.
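
One caveat: because of the double quotes, the shell (not awk) expands $1 at run time, so my_len ends up holding both the count and the filename (hence the 2>/dev/null). A sketch that reads the count cleanly uses input redirection, which makes wc print the number alone:

alias busy='my_file=$(find /usr/include -type f | sort -R | head -n 1); my_len=$(wc -l < "$my_file"); let "r = $RANDOM % $my_len" 2>/dev/null; vim +$r "$my_file"'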

libquery=/lib32/libgcc_s.so.1; if [ `nm -D $libquery | sed -n '/[0-9A-Fa-f]\{8,\}/ {p; q;}' | grep "[0-9A-Fa-f]\{16\}" | wc -l` == 1 ]; then echo "$libquery is a 64 bit library"; else echo "$libquery is a 32 bit library"; fi;
2010-03-07 04:24:08
User: birnam
Functions: echo grep sed wc
Tags: bash nm
2

Determines the flavor of a shared library by looking at the addresses of its exposed functions and seeing whether they are 16 or 8 hex digits long. The command is written so the library you are querying is assigned to a variable up front -- it would be simple to convert this to a bash function or script using this format.
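
A simpler check, where available, is to ask file directly; it reports whether a shared object is an ELF 32-bit or 64-bit binary:

file /lib32/libgcc_s.so.1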

if [ `fuser $0|wc -w` -gt "1" ];then exit; fi
slice="-rw-r--r-- "; ls -l | cut -c $(echo "$slice" | wc -c)-
cat /path/to/some/file.txt | tee /dev/pts/0 | wc -l
2009-11-07 22:24:28
User: atoponce
Functions: cat tee wc
Tags: tee
2

This is a cool trick to view the contents of the file on /dev/pts/0 (or whatever terminal you're using), and also send the contents of that file to another program by way of an unnamed pipe. All the while, you've not bothered saving any extra data to disk, like you might be tempted to do with sed or grep to filter output.
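
If you'd rather not hard-code the pts number, /dev/tty always refers to the current terminal:

cat /path/to/some/file.txt | tee /dev/tty | wc -l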

find . -name '*.java' | xargs -L 1 cpp -fpreprocessed | grep . | wc -l
2009-10-29 09:58:43
User: rbossy
Functions: cpp find grep wc xargs
2

I chose Java to keep the find command simple and to show that it works for any language supported by cpp.

cpp is the C/C++ preprocessor (it interprets macros, removes comments, inserts includes, resolves trigraphs). The -fpreprocessed option tells cpp to assume the input has already been preprocessed, so it will only replace comments with blank lines.

The -L 1 option tells xargs to launch one process per line, since cpp can only process one file at a time.

find . -name \*.c | xargs wc -l | tail -1 | awk '{print $1}'
2009-09-08 08:25:45
User: karpoke
Functions: awk find tail wc xargs
Tags: awk find wc
0

This is really fast :)

time find . -name \*.c | xargs wc -l | tail -1 | awk '{print $1}'

204753

real 0m0.191s
user 0m0.068s
sys 0m0.116s

find . -type f -name '*.c' -exec wc -l {} \; | awk '{sum+=$1} END {print sum}'
2009-09-04 15:51:30
User: arcege
Functions: awk find wc
Tags: awk find wc
-1

Have wc work on each file, then add up the totals with awk; this gives a 43% speed increase on RHEL over using "-exec cat|wc -l" and a 67% increase on my Ubuntu laptop (with 10MB of data in 767 files).
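
With a reasonably modern find, -exec ... {} + batches arguments like xargs and avoids one process per file; a sketch combining it with the awk sum (skipping the per-batch "total" lines wc prints):

find . -type f -name '*.c' -exec wc -l {} + | awk '$2 != "total" {sum+=$1} END {print sum}'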

find / -type f -exec wc -c {} \; | sort -nr | head -100
wget -q -O- PAGE_URL | grep -o 'WORD_OR_STRING' | wc -w
file -i * | grep 'text/plain' | wc -l
2009-08-16 21:22:46
User: voyeg3r
Functions: file grep wc
0

Counts plain-text files: file -i reports both ASCII and UTF-8 as "text/plain", and it also catches files without extensions.