Add thousand separator with sed, in a file or within pipe

sed -e :a -e 's/\(.*[0-9]\)\([0-9]\{3\}\)/\1,\2/;ta' filename
It does not require a file to process; it can be used in a pipe as well:

cat filename | sed -e :a -e 's/\(.*[0-9]\)\([0-9]\{3\}\)/\1,\2/;ta'

I don't remember where I copied this from; I wish I had credited the original author.
Sample Output
$ cat sample.txt
1234567891234567890
$ sed -e :a -e 's/\(.*[0-9]\)\([0-9]\{3\}\)/\1,\2/;ta' sample.txt
1,234,567,891,234,567,890
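The same expression works on data arriving through a pipe, so a quick sanity check without a file might look like this:

```shell
# Group a number read from stdin; the :a/ta loop keeps inserting
# a comma before the last ungrouped run of three digits until
# the substitution no longer matches.
echo 1234567 | sed -e :a -e 's/\(.*[0-9]\)\([0-9]\{3\}\)/\1,\2/;ta'
# → 1,234,567
```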

3
2009-03-24 20:06:02

These Might Interest You

  • Sets the column separator in SQL*Plus: set colsep "&TAB" for a tab separator, set colsep "|" for a pipe separator, and so on.


    -5
    set colsep "{char}"
    EBAH · 2011-04-05 10:48:48 0
  • There's no need to pipe the output into rs if you just tell jot to use an empty separator string.


    1
    jot -s '' -r -n 8 0 9
    Hal_Pomeranz · 2009-08-24 13:35:20 1
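jot is a BSD utility and is usually absent on GNU/Linux; a rough equivalent there (my substitution, not part of the original tip) can be built from coreutils' shuf:

```shell
# Eight random digits, concatenated with no separator,
# mirroring: jot -s '' -r -n 8 0 9
# shuf -r repeats values, -n 8 emits eight of them, -i 0-9 sets the range;
# tr deletes the newlines between them.
shuf -r -n 8 -i 0-9 | tr -d '\n'; echo
```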
  • The find command isn't the important bit here: it just feeds the rest of the pipe (this one looks for all PDFs less than 7 days old in an archive directory whose structure is defined by a wildcard pattern; adapt the find to your real needs). The useful part comes next: xargs stats out the byte size of each file, and awk adds them all together and prints the grand total. I use printf to override awk's tendency to switch to exponential output above a certain threshold, and specifically "%0.0f\n" because it was all I could find to force things back to plain decimal notation on Red Hat systems. The total is then passed to an optional sed, which formats it in the US/UK number style to make large numbers easier to read. Change the comma in the sed to your preferred separator character (e.g. sed -r ':L;s=\b([0-9]+)([0-9]{3})\b=\1 \2=g;t L' for most European countries). (This sed is credited to user 'archtoad6' on the LinuxQuestions forum.) This is useful for monitoring changes in storage use within large and growing archives of files, and it appears to run much faster than some alternatives I have seen (a 'for SIZE in find-command -exec du' style approach, for instance). I just ran it on a not particularly spectacular server, where a directory tree with over three thousand subdirectories, containing around 4000 files of about 4 GB in total, responded in under a second.


    0
    find /path/to/archive/?/??/??? -mtime -7 -name "*.pdf" | xargs stat -c "%s"| awk '{sum +=$1}END{printf("%0.0f\n",sum)}'|sed -r ':Label;s=\b([0-9]+)([0-9]{3})\b=\1,\2=g;t Label'
    daniel_walker · 2010-08-23 15:55:30 0
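The summing and grouping stages of that pipeline can be tried in isolation by feeding fixed byte counts in place of the stat output (the sizes below are invented for illustration):

```shell
# Sum three made-up file sizes, print the total without
# exponential notation, then comma-group it with the archtoad6 sed.
printf '%s\n' 1500000 2500000 3000000 \
  | awk '{sum += $1} END {printf("%0.0f\n", sum)}' \
  | sed -r ':L;s=\b([0-9]+)([0-9]{3})\b=\1,\2=g;t L'
# → 7,000,000
```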
  • A shorter version with proper stderr redirection.


    2
    mkfifo pipe && nc remote_server 1337 <pipe | /bin/bash &>pipe
    mikispag · 2011-08-18 19:02:09 0

What Others Think

Doesn't work well with decimals:

echo 1234.5678 | sed -e :a -e 's/\(.*[0-9]\)\([0-9]\{3\}\)/\1,\2/;ta'
1,234.5,678
pleiades · 401 weeks and 4 days ago
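One possible workaround for the decimal case (my own sketch, relying on GNU sed's \| and \+ BRE extensions, so it is not portable to every sed) is to group only digit runs that start at the beginning of the line or after a non-digit, non-dot character, leaving the fractional part alone:

```shell
# Digits immediately after a '.' never match \(^\|[^0-9.]\),
# so only the integer part gets comma-grouped (GNU sed only).
echo 1234.5678 | sed -e :a \
  -e 's/\(^\|[^0-9.]\)\([0-9]\+\)\([0-9]\{3\}\)/\1\2,\3/;ta'
# → 1,234.5678
```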
