Commands tagged sed (376)

  • This is freaking sweet!!! Here is the full alias (I didn't want to cause display problems on commandlinefu.com's homepage):

    alias tarred='( ( D=`builtin pwd`; F=$(date +$HOME/`sed "s,[/ ],#,g" <<< ${D/${HOME}/}`#-%F.tgz); S=$SECONDS; tar --ignore-failed-read --transform "s,^${D%/*},`date +${D%/*}.%F`,S" -czPf "$F" "$D" && logger -s "Tarred $D to $F in $(($SECONDS-$S)) seconds" ) & )'

    Creates a .tgz archive of whatever directory it is run from, in the background, detached from the current shell, so if you log out it will still complete. You can also run this as many times as you want: if the .tgz archive already exists, it just gets moved to a numbered backup ('--backup=numbered').

    The coolest part is the transformation performed by tar and sed: the archive file names are created automatically, and extracting the archive is completely safe thanks to the --transform option. If you archive, let's say, /home/tombdigger/new-stuff-to-backup/ it will create the archive /home/#home#tombdigger#new-stuff-to-backup#-2010-11-18.tgz. Then when you extract it, like tar -xvzf #home#tombdigger#new-stuff-to-backup#-2010-11-18.tgz, instead of overwriting an existing /home/tombdigger/new-stuff-to-backup/ directory it extracts to /home/tombdigger/new-stuff-to-backup.2010-11-18/.

    Basically, the tar archive filename is the PWD with every '/' replaced by '#', and the date is appended to the name so that multiple archives are easily managed. This example saves all archives as $HOME/archive-name.tgz, but I have a $BKDIR variable with my backup location for each shell user, so I just replaced HOME with BKDIR in the alias. So when I ran this in /opt/askapache/SOURCE/lockfile-progs-0.1.11/ the archive was created at /askapache-bk/#opt#askapache#SOURCE#lockfile-progs-0.1.11#-2010-11-18.tgz.

    Upon completion, it uses the universal logger tool to report completion to syslog and stderr (printed to your terminal). Just remove that part if you don't want it, or remove the '-s' option from logger to keep the logs only in syslog and off your terminal. Here's how my syslog server recorded this:

    2010-11-18T00:44:13-05:00 gravedigger.askapache.com (127.0.0.5) [user] [notice] (logger:) Tarred /opt/askapache/SOURCE/lockfile-progs-0.1.11 to /askapache-bk/tarred/#opt#SOURCE#lockfile-progs-0.1.11#-2010-11-18.tgz in 4 seconds

    Caveats: Really this is very robust and foolproof. The only issue I ever have with it (I've been using this for years on my web servers) is that if you run it in a directory and a file in that directory then changes, you get a warning message and the archive might have a problem for the changed file. This happens when running it in a logs directory, a temp dir, etc. That's the only issue I've ever had, really nothing more than a heads-up.

    Advanced: This is a simple alias, and very useful as it works on basically every Linux box with semi-current tar, GNU coreutils, bash, and sed. But if you want to customize it or pass parameters (like a dir to back up instead of the pwd), check out this function I use; it's what I created the alias from, BTW (for the alias I replaced my aa_status function with logger and added the $SECONDS runtime instead of using tar's --totals):

    function tarred () { local GZIP='--fast' PWD=${1:-`pwd`} F=$(date +${BKDIR}/%m-%d-%g-%H%M-`sed -u 's/[\/\ ]/#/g' [[ ! -r "$PWD" ]] && echo "Bad permissions for $PWD" 1>&2 && return 2; ( ( tar --totals --ignore-failed-read --transform "s@^${PWD%/*}@`date +${PWD%/*}.%m-%d-%g`@S" -czPf $F $PWD && aa_status "Completed Tarp of $PWD to $F" ) & ) }

    #From my .bash_profile http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html

    Show Sample Output


    8
    alias tarred='( ( D=`builtin pwd`; F=$(date +$HOME/`sed "s,[/ ],#,g" <<< ${D/${HOME}/}`#-%F.tgz); tar --ignore-failed-read --transform "s,^${D%/*},`date +${D%/*}.%F`,S" -czPf "$F" "$D" &>/dev/null ) & )'
    AskApache · 2010-11-18 06:24:34 2
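    A quick way to list the contents of the resulting archive before extracting it (filename taken from the example in the description above):
    tar -tzvf '#home#tombdigger#new-stuff-to-backup#-2010-11-18.tgz'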
  • I find it useless but definitely simpler than #9230 Show Sample Output


    8
    read -ra words <<< "<sentence>" && echo "${words[@]^}"
    RanyAlbeg · 2011-09-10 02:46:42 7
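    For example (requires bash 4 or later for the ^ expansion; the sentence is made up):
    read -ra words <<< "make me a sandwich" && echo "${words[@]^}"
    prints: Make Me A Sandwich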
  • Print all lines between two line numbers. This command uses sed(1) to print all lines between two known line numbers in a file. Useful for seeing output in a log file, where the line numbers are known. The above command will print all lines between, and including, lines 3 and 6. Show Sample Output


    8
    sed -n '3,6p' /path/to/file
    flatcap · 2011-12-14 15:09:38 8
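    The line numbers don't have to be hard-coded; shell variables work too (variable names here are just illustrative):
    first=3; last=6; sed -n "${first},${last}p" /path/to/file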
  • will purge:
    only installed apps: /^ii/!d
    avoiding current kernel stuff: /'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d
    using app names: s/^[^ ]* [^ ]* \([^ ]*\).*/\1/
    avoiding stuff without a version number: /[0-9]/!d


    7
    dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge
    plasticdoc · 2009-06-19 10:11:00 9
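    To see what would be removed without actually purging anything, a sketch using apt-get's -s (--simulate) flag in place of -y:
    dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -s purge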
  • If BREs can be used, this sed version will also get the job done.


    7
    command | sed -n '1,/regex/p'
    putnamhill · 2009-12-22 15:04:38 10
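    For example, to print a log from its first line through the first line containing ERROR (the file name is just an illustration):
    sed -n '1,/ERROR/p' app.log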
  • Will edit *.db files in the same directory with today's date. Useful for doing a mass update to domains on a nameserver, adding SPF records, etc. Looks for a string starting with 200 or 201 followed by 7 digits, and replaces it with today's date. This won't overwrite IPs, but I would still do some double-checking after running this. Make sure your server's date is correct; otherwise insert your own serial number. rndc reload should usually follow this command.


    7
    sed -i 's/20[0-1][0-9]\{7\}/'`date +%Y%m%d%I`'/g' *.db
    alf · 2010-03-24 07:28:58 10
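    If you want a safety net, GNU sed can keep a backup of each file it modifies; adding a suffix to -i leaves a .bak copy of every zone file touched:
    sed -i.bak 's/20[0-1][0-9]\{7\}/'`date +%Y%m%d%I`'/g' *.db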
  • Here is the full function (the posted one got truncated), which is much better and works for multiple queries. function cmdfu () { local t=~/cmdfu; until [[ -z $1 ]]; do echo -e "\n# $1 {{{1" >> $t; curl -s "commandlinefu.com/commands/matching/$1/`echo -n $1|base64`/plaintext" | sed '1,2d;s/^#.*/& {{{2/g' | tee -a $t > $t.c; sed -i "s/^# $1 {/# $1 - `grep -c '^#' $t.c` {/" $t; shift; done; vim -u /dev/null -c "set ft=sh fdm=marker fdl=1 noswf" -M $t; rm $t $t.c; } Searches commandlinefu for single/multiple queries and displays syntax-highlighted, folded, and numbered results in vim. Show Sample Output


    7
    cmdfu(){ local t=~/cmdfu;echo -e "\n# $1 {{{1">>$t;curl -s "commandlinefu.com/commands/matching/$1/`echo -n $1|base64`/plaintext"|sed '1,2d;s/^#.*/& {{{2/g'>$t;vim -u /dev/null -c "set ft=sh fdm=marker fdl=1 noswf" -M $t;rm $t; }
    AskApache · 2012-02-21 05:43:16 11
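    A hypothetical invocation, which opens the matching commands folded in vim (zo and zc open and close the folds):
    cmdfu tar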
  • A powerful way to rename files using sed groups. & stands for the whole matched expression. \1 refers to the first group between parentheses, \2 to the second. Show Sample Output


    6
    ls | sed -n -r 's/banana_(.*)_([0-9]*).asc/mv & banana_\2_\1.asc/gp' | sh
    log0 · 2009-04-28 17:53:58 6
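    To preview the generated mv commands before anything is renamed, drop the trailing | sh:
    ls | sed -n -r 's/banana_(.*)_([0-9]*).asc/mv & banana_\2_\1.asc/gp'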
  • I modified #4077 and marssi's command line to simplify it and skip an error when parsing the first line of lsmod (#4077). It's also more concise and compact now, and I skip xargs (not required here). This version is only for GNU sed. For those without GNU sed, use this instead: modinfo $(lsmod | awk 'NR>1 {print $1}') | sed -e '/^dep/s/$/\n/g' -e '/^file/b' -e '/^desc/b' -e '/^dep/b' -e d


    6
    modinfo $(cut -d' ' -f1 /proc/modules) | sed '/^dep/s/$/\n/; /^file\|^desc\|^dep/!d'
    sputnick · 2009-11-18 23:40:46 4
  • I took matthewbauer's cool one-liner and rewrote it as a shell function that returns all the suggestions or outputs "OK" if it doesn't find anything wrong. It should work on ksh, zsh, and bash. Users that don't have tee can leave that part off like this: spellcheck(){ typeset y=$@;curl -sd "<spellrequest><text>$y</text></spellrequest>" https://google.com/tbproxy/spell|sed -n '/s="[1-9]"/{s/<[^>]*>/ /g;s/\t/ /g;s/ *\(.*\)/Suggestions: \1\n/g;p}';} Show Sample Output


    6
    spellcheck(){ typeset y=$@;curl -sd "<spellrequest><text>$y</text></spellrequest>" https://www.google.com/tbproxy/spell|sed -n '/s="[0-9]"/{s/<[^>]*>/ /g;s/\t/ /g;s/ *\(.*\)/Suggestions: \1\n/g;p}'|tee >(grep -Eq '.*'||echo -e "OK");}
    eightmillion · 2010-02-17 08:20:48 17
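    A hypothetical invocation (note that Google's tbproxy spell endpoint may no longer respond):
    spellcheck "recieve teh package"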
  • Use curl and sed to shorten a URL using goo.gl without any other API. Show Sample Output


    6
    curl -s -d'&url=URL' http://goo.gl/api/url | sed -e 's/{"short_url":"//' -e 's/","added_to_history":false}/\n/'
    Soubsoub · 2010-10-01 23:20:08 10
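    A hypothetical invocation with example.com substituted for the URL placeholder (goo.gl has since been discontinued, so this is of historical interest only):
    curl -s -d'&url=http://example.com/' http://goo.gl/api/url | sed -e 's/{"short_url":"//' -e 's/","added_to_history":false}/\n/'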

  • 5
    sed -i 'your sed stuff here' file
    Keruspe · 2009-05-19 15:35:42 5
  • I've been auto-generating some complex GnuPlots; with multiplots the first plot of each group needs to be a 'plot' whereas the others need to be 'replots' to allow overplotting/autoscaling/etc to work properly. This is used to replace only the first instance of 'replot'. Show Sample Output


    5
    sed '/MARKER/{N;s/THIS/THAT/}'
    mungewell · 2009-06-12 02:29:50 4
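    A minimal illustration of the same pattern, with made-up marker and replacement text; only the first replot (the line right after the marker) becomes plot, the later one is left alone:
    printf 'set multiplot\nreplot sin(x)\nreplot cos(x)\n' | sed '/multiplot/{N;s/replot/plot/}'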
  • This will comment out a line, specified by line number, in a given file.


    5
    sed -i '19375 s/^/#/' file
    BoxingOctopus · 2009-10-07 17:50:40 7
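    The same idea extends to a range of lines; this (hypothetical range) comments out lines 100 through 120:
    sed -i '100,120 s/^/#/' file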
  • In this simple example the command will add a comma to the end of every line except the last. I found this really useful when programmatically constructing SQL scripts. See the sample output for an example. Show Sample Output


    5
    sed -e "$ ! s/$/,/"
    jgc · 2009-10-13 10:13:52 7
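    A quick illustration with three input lines; only the first two gain a trailing comma:
    printf 'a\nb\nc\n' | sed '$ ! s/$/,/'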
  • The original was a little bit too complicated for me. This one does not use any variables.


    5
    pronounce(){ wget -qO- $(wget -qO- "http://www.m-w.com/dictionary/$@" | grep 'return au' | sed -r "s|.*return au\('([^']*)', '([^'])[^']*'\).*|http://cougar.eb.com/soundc11/\2/\1|") | aplay -q; }
    matthewbauer · 2010-03-12 17:44:16 6
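    A hypothetical invocation (assumes ALSA's aplay is available and that the m-w.com page markup still matches the grep/sed above):
    pronounce hello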
  • If you're going to use od, here's how to suppress the labels at the beginning. od doesn't output the \x prefixes either, hence the sed command at the end; remove it for space-separated hex values instead. Show Sample Output


    5
    echo -n "text" | od -A n -t x1 |sed 's/ /\\x/g'
    camocrazed · 2010-07-14 15:31:36 8
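    If xxd is available (it ships with vim), a similar result without od's leading padding (a sketch for short input, since xxd -p wraps long lines):
    echo -n "text" | xxd -p | sed 's/../\\x&/g'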
  • This command outputs a table of sighting opportunities for the International Space Station. Find the URL for your city here: http://spaceflight.nasa.gov/realdata/sightings/ Show Sample Output


    5
    links -dump "http://spaceflight.nasa.gov/realdata/sightings/cities/view.cgi?country=United_States&region=Wisconsin&city=Portage" | sed -n '/--/,/--/p'
    eightmillion · 2011-05-03 12:15:56 7
  • Using sed -i (in-place editing), you can insert text at the beginning of the first line of a file without redirecting the output to a temporary location.


    5
    sed -i '1s/^/text to prepend\n/' file1
    xeor · 2011-06-25 12:02:11 14
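    The complementary edit, appending a line after the last line of the file (GNU sed's a command):
    sed -i '$a text to append' file1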
  • Show the current load of the CPU as a percentage. Read the load from /proc/loadavg and convert it using sed:
    Strip everything after the first whitespace: sed -e 's/ .*//'
    Delete the decimal point: sed -e 's/\.//'
    Remove leading zeroes: sed -e 's/^0*//'
    Show Sample Output


    5
    sed -e 's/ .*//' -e 's/\.//' -e 's/^0*//' /proc/loadavg
    flatcap · 2014-04-18 19:12:05 10
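    Roughly the same result with awk, as a sketch (edge cases differ slightly, e.g. a load of exactly 0.00 prints 0 rather than an empty string):
    awk '{printf "%.0f\n", $1*100}' /proc/loadavg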
  • will show:
    installed linux headers, image, or modules: /^ii/!d
    avoiding current kernel: /'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d
    only application names: s/^[^ ]* [^ ]* \([^ ]*\).*/\1/
    avoiding stuff without a version number: /[0-9]/!d
    Show Sample Output


    4
    dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d'
    plasticdoc · 2009-06-19 10:23:38 8
  • Limited but useful construct to extract text embedded in XML tags. This will only work if the <foo>...</foo> element is all on one line. If nobody posts an alternative for the multiline sed version, I'll figure it out later...


    4
    sed -n 's/.*<foo>\([^<]*\)<\/foo>.*/\1/p'
    recursiverse · 2009-07-23 07:59:30 3
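    One possible workaround when the element spans several lines (file name is hypothetical; assumes the content itself contains no '<' and that collapsing newlines is acceptable): join the lines first, for example
    tr -d '\n' < file.xml | sed -n 's/.*<foo>\([^<]*\)<\/foo>.*/\1/p'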
  • The coolest way I've found to back up a WordPress MySQL database using encryption, using local variables created directly from the wp-config.php file so that you don't have to type them, which would otherwise allow someone sniffing your terminal or viewing your shell history to see your info. I use a variation of this for my servers that have hundreds of WordPress installs and databases, using a find command for the wp-config.php file and passing the results through xargs to my function. Show Sample Output


    4
    eval $(sed -n "s/^d[^D]*DB_\([NUPH]\)[ASO].*',[^']*'\([^']*\)'.*/_\1='\2'/p" wp-config.php) && mysqldump --opt --add-drop-table -u$_U -p$_P -h$_H $_N | gpg -er AskApache >`date +%m%d%y-%H%M.$_N.sqls`
    AskApache · 2009-08-18 07:03:08 7
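    To restore such a backup (hypothetical filename; assumes the _U/_P/_H/_N variables from the eval above are still set):
    gpg -d backup-file.sqls | mysql -u$_U -p$_P -h$_H $_N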

  • 4
    awk '{print $1}' "/proc/modules" | xargs modinfo | awk '/^(filename|desc|depends)/'
    unixmonkey7109 · 2009-11-20 13:06:25 3
  • xargs deals badly with special characters (such as space, ' and "). To see the problem, try this:
    touch important_file
    touch 'not important_file'
    ls not* | xargs rm
    Parallel (https://savannah.nongnu.org/projects/parallel/) does not have this problem.


    4
    grep -rl oldstring . | parallel sed -i -e 's/oldstring/newstring/'
    unixmonkey8046 · 2010-01-28 08:44:16 6
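    If GNU parallel isn't installed, null-delimited output from grep with xargs -0 sidesteps the same special-character problem (GNU grep and xargs assumed):
    grep -rlZ oldstring . | xargs -0 sed -i -e 's/oldstring/newstring/'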
