Commands using xargs (770)

  • This dup finder saves time by comparing size first, then md5sum. It doesn't delete anything; it just lists the duplicates. A commented, step-by-step sketch follows the command below.


    82
    find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
    grokskookum · 2009-09-21 00:24:14 61
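
    A step-by-step sketch of the same approach, broken into inspectable stages; a minimal sketch, assuming GNU findutils/coreutils (SEARCH_DIR and the temp file are placeholders):

    SEARCH_DIR=.                     # placeholder: the tree to scan
    find "$SEARCH_DIR" -not -empty -type f -printf "%s\n" | sort -rn | uniq -d > /tmp/dup-sizes
    while read -r size; do           # re-find files of each duplicated size
        find "$SEARCH_DIR" -type f -size "${size}c" -print0
    done < /tmp/dup-sizes | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
    rm /tmp/dup-sizes                # clean up the temp file; nothing else is deleted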

  • 42
    tar -tf <file.tar.gz> | xargs rm -r
    prayer · 2009-07-06 22:23:11 20

  • 36
    ssh root@remote.host "rpm -qa" | xargs yum -y install
    BoxingOctopus · 2012-01-06 17:10:56 16
  • This one-liner will *delete*, without any further confirmation, all 100% duplicates but one, based on their md5 hash, in the current directory tree (i.e. including files in its subdirectories). Good for cleaning up collections of mp3 files or pictures of your dog|cat|kids|wife present in a gazillion incarnations on your hard disk. md5sum can be substituted with sha1sum without problems. The actual filename is not taken into account; just the hash is used. Whatever sort thinks is the first filename is kept. It is assumed that the filename does not contain 0x00. As per the good suggestion in the first comment, this one does a hard link instead:

    find . -xdev -type f -print0 | xargs -0 md5sum | sort | perl -ne 'chomp; $ph=$h; ($h,$f)=split(/\s+/,$_,2); if ($h ne $ph) { $k = $f; } else { unlink($f); link($k, $f); }'

    A dry-run variant that only lists what would be deleted follows the command below.


    19
    find . -type f -print0|xargs -0 md5sum|sort|perl -ne 'chomp;$ph=$h;($h,$f)=split(/\s+/,$_,2);print "$f"."\x00" if ($h eq $ph)'|xargs -0 rm -v --
    masterofdisaster · 2009-06-07 03:14:06 15
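
    A dry-run rewrite of the same grouping in awk rather than perl (an assumption, not the original): it prints every file whose md5 matches an earlier file in the sorted list instead of deleting it, so you can inspect the list first.

    find . -type f -print0 | xargs -0 md5sum | sort |
      awk 'seen[$1]++ { sub(/^[^ ]+ +/, ""); print }'   # strip the hash, keep the name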

  • 16
    find . -type d -name '.svn' -print0 | xargs -0 rm -rdf
    blue64 · 2009-02-05 17:47:03 34
  • I'm working on a group project currently and am annoyed at the lack of output by my teammates. Wanting hard metrics of how awesome I am and how awesome they aren't, I wrote this command up. It will print a full repository listing of all files, remove the directories (which confuse blame), run svn blame on each individual file, and tally the resulting line counts. It can be quite slow, depending on your repository location, because blame must hit the server for each individual file. You can remove the -R on the first part to print the tallies for just the current directory. A hypothetical git equivalent follows the command below.


    16
    svn ls -R | egrep -v -e "\/$" | xargs svn blame | awk '{print $2}' | sort | uniq -c | sort -r
    askedrelic · 2009-07-29 02:10:45 51
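
    A hypothetical git equivalent of the same tally, sketched under the assumption of a local git checkout (no server round-trips needed):

    git ls-files |                                      # every tracked file
      xargs -d '\n' -n1 git blame --line-porcelain 2>/dev/null |
      awk '/^author / { sub(/^author /, ""); count[$0]++ }
           END { for (a in count) print count[a], a }' |
      sort -rn                                          # biggest line count first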
  • This is a very simple and lightweight way to play DI.FM stations. For a more complete version of the command with proper strings in the menu, try (it couldn't fit in the command field above):

    zenity --list --width 500 --height 500 --title 'DI.FM' --text 'Pick a Radio' --column 'radio' --column 'url' --print-column 2 $(curl -s http://www.di.fm/ | awk -F '"' '/href="http:.*\.pls.*96k/ {print $2}' | sort | awk -F '/|\.' '{print $(NF-1) " " $0}') | xargs mplayer

    This command line parses the HTML returned from http://di.fm and displays all radio stations in a nice graphical menu. After a radio is chosen, its URL is passed to mplayer so the music can start. Dependencies: X11 with a GTK environment; zenity, a simple app for displaying GTK menus (sudo apt-get install zenity on Ubuntu); mplayer, a simple audio player (sudo apt-get install mplayer on Ubuntu).


    16
    zenity --list --width 500 --height 500 --column 'radio' --column 'url' --print-column 2 $(curl -s http://www.di.fm/ | awk -F '"' '/href="http:.*\.pls.*96k/ {print $2}' | sort | awk -F '/|\.' '{print $(NF-1) " " $0}') | xargs mplayer
    polaco · 2010-04-28 23:45:35 17
  • echo "http%3A%2F%2Fwww.google.com" | sed -e's/%\([0-9A-F][0-9A-F]\)/\\\\\x\1/g' | xargs echo -e http://www.google.com Works under bash on linux. just alter the '-e' option to its corresponding equivalence in your system to execute escape characters correctly.


    14
    sed -e's/%\([0-9A-F][0-9A-F]\)/\\\\\x\1/g' | xargs echo -e
    mohan43u · 2009-05-25 05:37:44 43
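
    The same trick wrapped as a reusable bash function; a sketch, where the function name is made up and printf %b plays the role of echo -e:

    urldecode() {
        local s="${1//+/ }"              # optionally treat + as a space
        printf '%b\n' "${s//%/\\x}"      # turn each %NN into a \xNN escape
    }
    urldecode "http%3A%2F%2Fwww.google.com"   # prints http://www.google.com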
  • Checks which files are not under version control, fetches their names, and runs them through "svn add". WARNING: doesn't work with whitespace in filenames; a whitespace-tolerant sketch follows the command below.


    13
    svn status |grep '\?' |awk '{print $2}'| xargs svn add
    xsawyerx · 2009-01-29 10:33:22 82
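
    A whitespace-tolerant sketch: strip svn's status columns and null-terminate the names. The path column offset is an assumption that can vary between svn versions (older clients may need cut -c8-).

    svn status | grep '^?' | cut -c9- | tr '\n' '\0' | xargs -0 svn add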
  • Using xargs is better than:

    find /path/to/dir -type f -exec rm -f {} \;

    as the -exec switch spawns a separate process for each remove, whereas xargs splits the streamed files into more manageable subsets, so fewer processes are required. (A batching variant of -exec is shown below the command.)


    12
    find /path/to/dir -type f -print0 | xargs -0 rm
    root · 2009-01-26 11:30:47 72
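
    For comparison, modern find can batch arguments itself: terminating -exec with '+' instead of '\;' passes many files per rm invocation, much like xargs does.

    find /path/to/dir -type f -exec rm -f {} +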
  • Create a tgz archive of all the files containing local changes relative to a subversion repository. Add the '-q' option to only include files under version control:

    svn st -q | cut -c 8- | sed 's/^/\"/;s/$/\"/' | xargs tar -czvf ../backup.tgz

    Useful if you are not able to commit yet but want to create a quick backup of your work. Of course, if you find yourself needing this, it's probably a sign you should be using a branch, patches or distributed version control (git, mercurial, etc.). A null-separated sketch follows the command below.


    12
    svn st | cut -c 8- | sed 's/^/\"/;s/$/\"/' | xargs tar -czvf ../backup.tgz
    chrisdrew · 2009-02-09 11:24:31 17
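
    A sketch that skips the quote-wrapping entirely: GNU tar can read a null-separated file list from stdin, so a single tar invocation handles everything (the column offset assumes a recent svn client).

    svn st -q | cut -c9- | tr '\n' '\0' | tar --null -T - -czvf ../backup.tgz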
  • This helped me find a botnet that had made it into my system. Of course, this is not a foolproof or guaranteed way to find all of them or even most of them. But it helped me find it. A variant without the temporary file follows the command below.


    12
    cat /var/lib/dpkg/info/*.list > /tmp/listin ; ls /proc/*/exe |xargs -l readlink | grep -xvFf /tmp/listin; rm /tmp/listin
    kamathln · 2009-09-09 18:09:14 22
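
    The same comparison without the temporary file, using bash process substitution (a sketch; the semantics are otherwise unchanged):

    ls /proc/*/exe 2>/dev/null | xargs -r -l readlink 2>/dev/null |
      grep -xvFf <(cat /var/lib/dpkg/info/*.list)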

  • 12
    find /proc -user myuser -maxdepth 1 -type d -mtime +7 -exec basename {} \; | xargs kill -9
    sharfah · 2009-10-05 14:49:51 10
  • no loop, only one call of grep, scrollable ("less is more", more or less...)


    12
    ls /usr/bin | xargs whatis | grep -v nothing | less
    michelsberg · 2010-01-26 12:59:47 39
  • xargs can be used in this manner to download multiple files at a time: here xargs runs 10 wget processes at once and starts a new one whenever the number running falls below 10. The general pattern is sketched below the command.


    11
    wget -nv http://en.wikipedia.org/wiki/Linux -O- | egrep -o "http://[^[:space:]]*.jpg" | xargs -P 10 -r -n 1 wget -nv
    grokskookum · 2009-08-31 18:37:33 62
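
    The general pattern, sketched with a hypothetical urls.txt holding one URL per line: each wget receives a single URL, and at most 10 run in parallel.

    xargs -P 10 -n 1 -r wget -nv < urls.txt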
  • Search for files and list the 20 largest. 'find . -type f' gives us a list of files, recursively, starting from here (.). '-print0 | xargs -0 du -h' separates the names of files with NULL characters, so we're not confused by spaces, and has xargs run du to find their sizes (in human-readable form: 64M, not 64123456). '| sort -hr' arranges the list in size order; sort -h knows that 1M is bigger than 9K. '| head -20' finally selects only the top twenty of the list. A shorter du-based variant follows the command below.


    11
    find . -type f -print0 | xargs -0 du -h | sort -hr | head -20
    flatcap · 2012-03-30 10:21:12 10
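
    A shorter variant, if directory totals are acceptable in the list as well (needs GNU coreutils for sort -h):

    du -ah . | sort -hr | head -20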
  • This command will find the biggest files recursively under a certain directory, no matter how many there are. If you try the usual commands ('find -type f -exec ls -laSr {} +' or 'find -type f -print0 | xargs -0 ls -laSr'), the sorting won't be correct because of the command-line argument limit. This command doesn't rely on command-line arguments to sort the files, so it displays the sorted list correctly. A null-separated alternative follows the command below.


    10
    find . -type f -printf '%20s %p\n' | sort -n | cut -b22- | tr '\n' '\000' | xargs -0 ls -laSr
    fsilveira · 2009-08-13 13:13:33 14
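
    An alternative sketch that stays null-separated end to end; it assumes reasonably recent GNU coreutils (sort -z and tail -z):

    find . -type f -printf '%s\t%p\0' |   # size, tab, path, NUL
      sort -zn | tail -z -n 20 |          # numeric sort, keep the 20 largest
      tr '\0' '\n'                        # back to lines for display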
  • Change the *.avi to whatever you want to match; you can remove it altogether if you want to check all files.


    10
    find -type f -name "*.avi" -print0 | xargs -0 mplayer -vo dummy -ao dummy -identify 2>/dev/null | perl -nle '/ID_LENGTH=([0-9\.]+)/ && ($t +=$1) && printf "%02d:%02d:%02d\n",$t/3600,$t/60%60,$t%60' | tail -n 1
    grokskookum · 2009-09-24 15:50:39 34
  • I needed a way to search all files in a web directory that contained a certain string and replace that string with another string. In the example, I am searching for "askapache" and replacing that string with "htaccess". I wanted this to happen as a cron job, and it was important that it ran as fast as possible while not hogging the CPU, since the machine is a server. So this script uses the nice command to run the sh shell with the command, which makes the whole thing run with priority 19, meaning it won't hog CPU processing. The -P5 option to the xargs command means it will run 5 separate grep and sed processes simultaneously, so this is much, much faster than running a single grep or sed. You may want to use -P0, which is unlimited, if you aren't worried about too many processes or if you don't have to deal with process killers in the background. Also, the -m1 option to grep means it stops grepping a file after the first match, which also saves time. A null-safe sketch follows the command below.


    10
    sh -c 'S=askapache R=htaccess; find . -mount -type f|xargs -P5 -iFF grep -l -m1 "$S" FF|xargs -P5 -iFF sed -i -e "s%${S}%${R}%g" FF'
    AskApache · 2009-10-02 05:03:10 8
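
    A null-safe sketch of the same pattern; S and R are placeholders exactly as in the original, and GNU grep's -Z makes -l emit NUL-terminated filenames:

    S=askapache R=htaccess
    find . -mount -type f -print0 |
      xargs -0 -P5 grep -lZ -m1 "$S" |       # files containing $S, NUL-separated
      xargs -0 -P5 sed -i "s%${S}%${R}%g"    # replace in place, 5 seds at a time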
  • This command is useful when you want to know which process is responsible for a certain GUI application and what command you need to issue to launch it in a terminal.


    9
    xprop | awk '/PID/ {print $3}' | xargs ps h -o pid,cmd
    jackhab · 2009-02-16 07:55:19 88
  • A very useful set of commands for finding out when your file system was created.


    9
    df / | awk '{print $1}' | grep dev | xargs tune2fs -l | grep create
    Kaio · 2009-02-16 18:45:03 1155
  • Run this in the directory you store your music in. mp3gain and vorbisgain applies the ReplayGain normalization routine to mp3 and ogg files (respectively) in a reversible way. ReplayGain uses psychoacoustic analysis to make all files sound about the same loudness, so you don't get knocked out of your chair by loud songs after cranking up the volume on quieter ones.


    9
    find . -iname \*.mp3 -print0 | xargs -0 mp3gain -krd 6 && vorbisgain -rfs .
    Viaken · 2009-03-09 18:11:35 13
  • 1. Find files greater than 10 MB. 2. Pipe them to xargs. 3. xargs passes them to ls as arguments.


    9
    find ./ -size +10M -type f -print0 | xargs -0 ls -Ssh1 --color
    eastwind · 2009-08-25 18:40:47 8
  • Adapted using your useful comments!


    9
    grep -e `date +%Y-%m-%d` /var/log/dpkg.log | awk '/install / {print $4}' | uniq | xargs apt-get -y remove
    skygreg · 2010-01-12 09:42:22 12
  • Maybe simpler, but again, I don't know how it will handle spaces in filenames; a whitespace-tolerant sketch follows the command below.


    9
    xargs -n 2 mv < file_with_columns_of_names
    Juluan · 2010-12-27 18:06:15 7
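
    A whitespace-tolerant sketch that reads the two columns explicitly; the destination may contain spaces, though a source with spaces would still need a real delimiter:

    while read -r src dst; do
        mv -- "$src" "$dst"
    done < file_with_columns_of_names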