What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, and so on).

News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using xargs - 608 results
egrep -r '(render_message|multipart).*('`find app/views -name '*.erb' | grep mailer | sed -e 's/\..*//' -e 's/.*\///' | uniq | xargs | sed 's/ /|/g'`')' app/models
mkdir phrack66; (cd phrack66; for n in {1..17} ; do echo "http://www.phrack.org/issues.html?issue=66&id=$n&mode=txt" ; done | xargs wget)
2009-06-11 21:42:42
Functions: cd echo mkdir xargs
2

Nice reading in the morning on the way to work; sadly, the .tar.gz for the whole of issue 66 is not on phrack's website yet, so use wget to download the articles individually.
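
Since bash expands braces before running the command, the same download works without the for loop; a sketch, assuming bash (wget happily accepts multiple URLs):

# generate the 17 article URLs via brace expansion and fetch them in one wget call
mkdir phrack66; (cd phrack66 && wget "http://www.phrack.org/issues.html?issue=66&id="{1..17}"&mode=txt")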

find . -iname '*.jar' | xargs du -ks | cut -f1 | xargs echo | sed "s/ /+/g" | bc
find . -type f -print0|xargs -0 md5sum|sort|perl -ne 'chomp;$ph=$h;($h,$f)=split(/\s+/,$_,2);print "$f"."\x00" if ($h eq $ph)'|xargs -0 rm -v --
2009-06-07 03:14:06
Functions: find perl rm xargs
19

This one-liner will *delete*, without any further confirmation, all 100% duplicates but one, based on their md5 hash, in the current directory tree (i.e. including files in subdirectories).

Good for cleaning up collections of mp3 files, or pictures of your dog|cat|kids|wife present in a gazillion incarnations on your hard drive.

md5sum can be substituted with sha1sum without problems.

The actual filename is not taken into account; just the hash is used.

Whatever sort thinks is the first filename is kept.

It is assumed that the filename does not contain 0x00.

As per the good suggestion in the first comment, this one does a hard link instead:

find . -xdev -type f -print0 | xargs -0 md5sum | sort | perl -ne 'chomp; $ph=$h; ($h,$f)=split(/\s+/,$_,2); if ($h ne $ph) { $k = $f; } else { unlink($f); link($k, $f); }'
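
Before running the destructive version, you can preview what it would remove; a sketch that only prints the second and later copies of each hash, with awk standing in for perl (assumes filenames without embedded newlines):

# print every duplicate (all but the first file per md5 hash); md5sum output puts the name at column 35
find . -type f -print0 | xargs -0 md5sum | sort | awk '$1==h {print substr($0,35)} {h=$1}'
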
find ./* -ctime -1 | xargs ls -ltr --color
2009-06-05 13:53:26
User: gnuyoga
Functions: find ls xargs
3

Added as an alias in ~/.bashrc:

alias lf='find ./* -ctime -1 | xargs ls -ltr --color'

find /var/log -type f -exec file {} \; | grep 'text' | cut -d' ' -f1 | sed -e's/:$//g' | grep -v '[0-9]$' | xargs tail -f
2009-06-03 09:47:08
User: mohan43u
Functions: cut file find grep sed tail xargs
Tags: tail
5

Works in Ubuntu; I hope it will work on all Linux machines. On other Unixes, tail must be capable of handling more than one file with the '-f' option.

This command line simply takes log files which are text files and whose names don't end in a number, and continuously monitors them.

Putting an alias in .profile makes it more convenient.
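
As suggested, putting it in ~/.profile as a shell function keeps it handy; a sketch (the name 'taillogs' is just an example):

# follow all text logs under /var/log whose names don't end in a digit
taillogs() { find /var/log -type f -exec file {} \; | grep text | cut -d' ' -f1 | sed -e 's/:$//' | grep -v '[0-9]$' | xargs tail -f; }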

find /var/logs -type f | xargs tar -jcpf logs_`date +%Y-%m-%d`.tar.bz2
git grep -l "your grep string" | xargs gedit
grep -Pl "\t" -r . | grep -v ".svn" | xargs sed -i 's/\t/ /g'
2009-05-28 08:52:14
User: root
Functions: grep sed xargs
3

Note that this assumes the application is an SVN checkout and so we have to throw away all the .svn files before making the substitution.

find . -uid 0 -print0 | xargs -0 chown foo:foo
2009-05-27 19:52:13
User: abcde
Functions: chown find xargs
1

In the example, uid 0 is root. foo:foo is the user:group you want as the new owner and group. '.' means the current directory and below. -print0 and -0 indicate that filenames and directories are terminated by a null character instead of by whitespace.
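
Modern find can also do the batching itself with '-exec ... {} +', which is equally safe with unusual filenames; a sketch:

# same effect without xargs: find batches the arguments to chown
find . -uid 0 -exec chown foo:foo {} +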

tar -zcvpf backup_`date +"%Y%m%d_%H%M%S"`.tar.gz `find <target> -atime +5` 2> /dev/null | xargs rm -fr ;
2009-05-26 17:15:52
User: angleto
Functions: rm tar xargs
Tags: backup
7

Create an archive of files with access time older than 5 days, and remove the original files.
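
Both the backticks and the pipe into xargs split on whitespace, so names containing spaces would break (or delete the wrong file). A null-safe sketch of the same idea, assuming GNU find and GNU tar (--remove-files deletes each file once it is archived):

# archive files not accessed for 5+ days, deleting each one after it is stored
find <target> -atime +5 -print0 | tar --null -T - --remove-files -zcpf backup_`date +"%Y%m%d_%H%M%S"`.tar.gz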

sed -e's/%\([0-9A-F][0-9A-F]\)/\\\\\x\1/g' | xargs echo -e
2009-05-25 05:37:44
User: mohan43u
Functions: echo sed xargs
10
echo "http%3A%2F%2Fwww.google.com" | sed -e's/%\([0-9A-F][0-9A-F]\)/\\\\\x\1/g' | xargs echo -e

http://www.google.com

Works under bash on Linux. Just alter the '-e' option to its equivalent on your system so that escape characters are interpreted correctly.
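
Under bash the decoding needs neither sed nor xargs: turn each % into \x and let printf '%b' interpret the escapes. A sketch, assuming bash:

# URL-decode via parameter expansion: %3A becomes \x3A, which printf %b expands
url='http%3A%2F%2Fwww.google.com'; printf '%b\n' "${url//%/\\x}"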

find . -name \*.mp3 -printf "%C+ %h/%f\n" | sort -r | head -n20 | awk '{print "\""$2"\""}' | xargs -I {} cp {} ~/tmp
2009-05-17 07:06:10
User: bkinsey
Functions: awk cp find head sort xargs
2

Change ~/tmp to the destination directory, such as your mounted media. Change -n20 to whatever number of files to copy. It should quit when media is full. I use this to put my most recently downloaded podcasts onto my phone.
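
Note that awk's $2 is only the second whitespace-separated field, so paths with spaces get truncated. A null-delimited sketch of the same idea, assuming GNU versions of find, sort, head, cut and xargs recent enough to support -z/-0:

# copy the 20 newest mp3s, safely handling spaces in paths
find . -name '*.mp3' -printf '%C+ %p\0' | sort -rz | head -zn 20 | cut -z -d' ' -f2- | xargs -0 -I{} cp {} ~/tmp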

find . -name '*.m4a' | xargs -I audiofile mplayer -ao pcm "audiofile" -ao pcm:file="audiofile.wav"
svn st | grep '^\?' | awk '{print $2}' | xargs svn add; svn st | grep '^\!' | awk '{print $2}' | xargs svn rm
2009-05-14 14:34:50
User: stedwick
Functions: awk grep xargs
0

Automatically adds and removes files in Subversion so that you don't have to do it through the annoying svn commands anymore.

du -sb *|sort -nr|head|awk '{print $2}'|xargs du -sh
find / \( -name "*.log" -o -name "*.mylogs" \) -exec ls -lrt {} \; | sort -k6,8 | head -n1 | cut -d" " -f8- | tr -d '\n' | xargs -0 rm
2009-05-10 10:45:48
User: ghazz
Functions: cut find head ls sort tr xargs
1

This works on my ubuntu/debian machines.

I suspect other distros need some tweaking of sort and cut.

I am sure someone could provide a shorter/faster version.
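
One shorter variant, as invited: GNU find can print a numeric timestamp itself, which avoids parsing ls output entirely (a sketch; verify the output before appending a removal step):

# print the oldest matching log file by modification time
find / \( -name "*.log" -o -name "*.mylogs" \) -printf '%T@\t%p\n' | sort -n | head -n1 | cut -f2-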

for i in *jpg; do jpeginfo -c $i | grep -E "WARNING|ERROR" | cut -d " " -f 1 | xargs -I '{}' find /mnt/sourcerep -name {} -type f -print0 | xargs -0 -I '{}' cp -f {} ./ ; done
2009-05-07 00:30:36
User: vincentp
Functions: cp cut find grep xargs
0

Finds all corrupted jpegs in the current directory, looks for a file with the same name in a source directory hierarchy, and copies it over the corrupted jpeg.

Convenient to run on a large bunch of jpeg files copied from an unreliable medium.

Needs the jpeginfo tool, found in the jpeginfo package (on debian at least).

dpkg-query -l| grep -v "ii " | grep "rc " | awk '{print $2" "}' | tr -d "\n" | xargs aptitude purge -y
2009-04-28 19:25:53
User: thepicard
Functions: awk grep tr xargs
-3

This will, for an application that has already been removed but had its configuration left behind, purge that configuration from the system. To test it out first, you can remove the final -y, and it will show you what it would purge without actually doing it. I mean, it never hurts to check first, "just in case." ;)
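
The selection can also be written more compactly, since dpkg's status column is exactly 'rc' for removed-but-configured packages; a sketch (-r is a GNU xargs extension that skips the run when nothing matches):

# purge config leftovers of removed packages
dpkg-query -l | awk '/^rc/ {print $2}' | xargs -r aptitude purge -y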

find -type f -printf '%P\000' | egrep -iz '\.(avi|mpg|mov|flv|wmv|asf|mpeg|m4v|divx|mp4|mkv)$' | sort -z | xargs -0 ls -1
xmms2 mlib search NOT +rating | grep -r '^[0-9]' | sed -r 's/^([0-9]+).*/\1/' | sort -R | head | xargs -L 1 xmms2 addid
2009-04-16 20:27:30
Functions: grep head sed sort xargs
3

If you're like me and want to keep all your music rated, and you use xmms2, you might like this command.

It takes 10 random songs from your xmms2 library that don't have any rating, and adds them to your current playlist. You can then rate them in another xmms2 client that supports rating (I like kuechenstation).

I'm pretty sure there's a better way to do the grep ... | sed ... part, probably with awk, but I don't know awk, so I'd welcome any suggestions.
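
For the requested awk version: the grep and sed stages collapse into one filter, since the id is just the first field of lines starting with a digit. A sketch that should behave the same:

# queue 10 random unrated songs; awk replaces the grep|sed pair
xmms2 mlib search NOT +rating | awk '/^[0-9]/ {print $1}' | sort -R | head | xargs -L 1 xmms2 addid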

find /path/to/my/files/ -type f -name "*txt*" | xargs du -k | awk 'BEGIN{x=0}{x=x+$1}END{print x}'
2009-04-16 14:17:04
Functions: awk du find xargs
2

Use the find command to match certain files and summarise their total size in KBytes.
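
GNU du can do the adding itself with -c, which appends a grand total; a sketch using --files0-from=- so odd filenames survive (GNU du assumed):

# total size in KB of all matching files
find /path/to/my/files/ -type f -name "*txt*" -print0 | du -kc --files0-from=- | tail -n1 | cut -f1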

locate searchstring | xargs grep foo
2009-04-16 12:51:24
User: zimon
Functions: grep locate xargs
Tags: grep locate
-3

Greps located files for an expression.

Example greps all LaTeX files for 'foo':

locate '*.tex' | xargs grep foo

To avoid searching thousands of files with grep, it can be useful to test first how many files locate returns:

locate -c '*.tex'
svn status | grep "^\?" | awk '{print $2}' | xargs svn add
svn status | grep '^?' | awk '{ print $2; }' | xargs svn add
2009-04-10 21:55:37
Functions: awk grep xargs
Tags: svn awk xargs
1

Lists the local files that are not present in the remote repository (lines beginning with ?) and adds them.