Commands matching grep (2,286)

  • I find this terribly useful for grepping through a file when I'm looking for just a block of text. There's "grep -A # pattern file.txt" to see a specific number of lines following your pattern, but what if you want to see the whole block? Say, the output of "dmidecode" (as root): dmidecode | awk '/Battery/,/^$/' will show everything from the Battery line up to the next blank line, i.e. the whole battery block. Again, I find this extremely useful when I want to see whole blocks of text based on a pattern and don't care about the rest of the output. This could be used against the '/etc/securetty/user' file on Unix to find the block for a specific user, or against VirtualHosts or Directories in an Apache config to find specific definitions (see the sketch after this entry). The scenarios go on for any text formatted in blocks. Very handy.


    95
    awk '/start_pattern/,/stop_pattern/' file.txt
    atoponce · 2009-03-28 14:28:59 28
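
    A hedged sketch along the same lines, assuming an Apache config at /etc/apache2/sites-available/example.conf (a hypothetical path):

    # print everything from the opening <VirtualHost line through the closing tag
    awk '/<VirtualHost/,/<\/VirtualHost>/' /etc/apache2/sites-available/example.conf
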
  • As an alternative to appending an extra grep -v grep, you can use a simple regular expression in the search pattern (wrap the first letter in a single-character bracket expression ;-)) so the grep command itself drops out of the output; see the sketch after this entry.


    72
    ps aux | grep [p]rocess-name
    olorin · 2009-08-13 05:44:45 52
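    A minimal sketch of the same trick, assuming a daemon named sshd is running (substitute any process name):

    # the pattern [s]shd matches "sshd", but the literal string "[s]shd" in the
    # ps output for this grep does not match it, so grep drops itself
    ps aux | grep '[s]shd'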

  • 69
    tr -c "[:digit:]" " " < /dev/urandom | dd cbs=$COLUMNS conv=unblock | GREP_COLOR="1;32" grep --color "[^ ]"
    allinurl · 2009-06-30 17:23:49 27
  • Prints a graphical directory tree from your current directory.


    64
    ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'
    unixmonkey842 · 2009-02-15 20:43:21 33
  • This is the result of a several-week venture without X. I found myself totally happy without X (and by extension without Flash) and was able to do just about anything but watch YouTube videos... so this is the solution I came up with. I am sure this can be done better, but it does indeed work, and tends to work far better than YouTube's proprietary Flash player ;-) Replace $i with any YouTube ID you want and this will scrape the site for the _real_ URL to the full-quality .FLV file on YouTube's servers, then hand that over to mplayer (or vlc, or whatever you want) to be streamed. In some browsers you can replace $i with just a %, or put this in a shell script, so all YouTube IDs can be handed directly off to your media player of choice for true streaming without the need for Flash or a downloader like clive. (I do, however, fully recommend clive if you wish to archive videos instead of streaming them.) If any interest is shown I would be more than happy to provide similar commands for other sites; most streaming Flash players use similar logic to YouTube. Edit: 05/03/2011 - Updated the line to work with current YouTube. It could be a lot prettier, but I will probably follow up with another update when I figure out how to get rid of that pesky grep. Sed should take that syntax... but it doesn't. Original (no longer working) command: mplayer -fs $(echo "http://youtube.com/get_video.php?$(curl -s $youtube_url | sed -n "/watch_fullscreen/s;.*\(video_id.\+\)&title.*;\1;p")")


    58
    i="8uyxVmdaJ-w";mplayer -fs $(curl -s "http://www.youtube.com/get_video_info?&video_id=$i" | echo -e $(sed 's/%/\\x/g;s/.*\(v[0-9]\.lscache.*\)/http:\/\/\1/g') | grep -oP '^[^|,]*')
    lrvick · 2009-03-09 03:57:44 59

  • 57
    alias 'ps?'='ps ax | grep '
    fzero · 2009-02-05 13:36:37 52
  • Find random strings within /dev/urandom. Use grep to filter down to alphanumeric characters only, then print the first 30 and strip the line feeds. A related sketch follows this entry.


    54
    strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 30 | tr -d '\n'; echo
    jbcurtis · 2009-02-16 00:39:28 32
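
    A closely related sketch for generating a throwaway password of a given length; LEN is an assumed helper variable, not part of the original command:

    LEN=30
    # -dc deletes everything NOT in the alphanumeric set; head -c cuts at LEN bytes
    tr -dc '[:alnum:]' < /dev/urandom | head -c "$LEN"; echo
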
  • Written for Linux; the real point of the example is how to produce ASCII text graphs based on a numeric value (anything where uniq -c is useful is a good candidate). A smaller sketch follows this entry.


    54
    netstat -an | grep ESTABLISHED | awk '{print $5}' | awk -F: '{print $1}' | sort | uniq -c | awk '{ printf("%s\t%s\t",$2,$1) ; for (i = 0; i < $1; i++) {printf("*")}; print "" }'
    knassery · 2009-04-27 22:02:19 24
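
    The same uniq -c plus awk idea works for any tally; a hedged example counting login shells in /etc/passwd and drawing one star per user:

    # $1 is the count from uniq -c, $2 is the shell path
    cut -d: -f7 /etc/passwd | sort | uniq -c | \
      awk '{ printf("%-20s %4d ", $2, $1); for (i = 0; i < $1; i++) printf("*"); print "" }'
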
  • For one line per process: ss -p | cat. For established sockets only: ss -p | grep STA. For just process names: ss -p | cut -f2 -sd\" or ss -p | grep STA | cut -f2 -d\"


    53
    ss -p
    Escher · 2009-09-19 21:55:01 15
  • This is how I typically grep. -R recurse into subdirectories, -n show line numbers of matches, -i ignore case, -s suppress "doesn't exist" and "can't read" messages, -I ignore binary files (technically, process them as having no matches, important for showing inverted results with -v). I have grep aliased to "grep --color=auto" as well, but that's a matter of formatting, not function; an alias sketch follows this entry.


    50
    grep -RnisI <pattern> *
    birnam · 2009-09-22 15:09:43 33
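
    If you reach for this often, an alias is a natural fit; a sketch assuming a bash or zsh rc file (the alias name grr is arbitrary):

    # in ~/.bashrc or ~/.zshrc
    alias grr='grep --color=auto -RnisI'
    # then simply:  grr <pattern> *
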
  • Just makes some data scroll off the terminal. Wow.


    48
    cat /dev/urandom | hexdump -C | grep "ca fe"
    BOYPT · 2010-09-27 08:20:44 23
  • Ever ask yourself "How much data would be lost if I pressed the reset button?" Scary, isn't it? A watch-based sketch follows this entry.


    34
    grep ^Dirty /proc/meminfo
    h3xx · 2011-08-24 08:48:49 17
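
    To actually watch that number drain after a sync, a small sketch (assuming watch is installed; Dirty is data waiting to be written back to disk):

    # refresh every second; run `sync` in another terminal and watch Dirty fall
    watch -n1 'grep ^Dirty /proc/meminfo'
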
  • I have a bash alias for this command line and find it useful for searching C code for error messages. The -H tells grep to print the filename. You can omit the -i to match the case exactly, or keep it for case-insensitive matching. The find command finds all .c and .h files. A recursive-grep variant is sketched after this entry.


    33
    find . -name "*.[ch]" -exec grep -i -H "search phrase" {} \;
    bunedoggle · 2009-05-06 15:22:49 27
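
    A hedged alternative that avoids spawning one grep per file, relying on GNU grep's --include option:

    # recursive, case-insensitive, with filenames and line numbers, limited to .c/.h files
    grep -rinH --include='*.[ch]' "search phrase" .
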
  • Get your colorized grep output in less(1). This involves two things: forcing grep to output colors even though it's not going to a terminal, and telling less to pass those escape codes through properly. A filled-in example follows this entry.


    32
    grep --color=always | less -R
    dinomite · 2009-05-20 20:30:19 13
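
    The same pipeline with a hypothetical pattern and file glob filled in:

    # --color=always forces colors despite the pipe; less -R renders the escape codes
    grep --color=always -n 'TODO' *.c | less -R
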

  • 30
    grep . filename > newfilename
    alvinx · 2009-09-25 09:25:34 18
  • You have an external USB drive or key. Run this command (using the file path of anything on your device) and it will simulate unplugging that device. If you just want the port, type: echo $(sudo lshw -businfo | grep -B 1 -m 1 $(df "/path/to/file" | tail -1 | awk '{print $1}' | cut -c 6-8) | head -n 1 | awk '{print $1}' | cut -c 5- | tr ":" "-")


    30
    echo $(sudo lshw -businfo | grep -B 1 -m 1 $(df "/path/to/file" | tail -1 | awk '{print $1}' | cut -c 6-8) | head -n 1 | awk '{print $1}' | cut -c 5- | tr ":" "-") | sudo tee /sys/bus/usb/drivers/usb/unbind
    tweet78 · 2014-04-06 12:06:29 24
  • You got some results in two variables within your shell script and would like to find the differences? Changes in process lists, reworked file contents, ... No need to write temporary files. You can use all the diff parameters you need. Something like grep "^>" may be helpful afterwards; a sketch follows this entry.


    27
    diff <(echo "$a") <(echo "$b")
    olorin · 2009-07-15 07:26:23 10
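
    A quick sketch of the process-list use case mentioned above (variable names and the sleep are arbitrary; requires bash for the process substitution):

    a=$(ps aux)
    sleep 10
    b=$(ps aux)
    # lines starting with ">" are processes that appeared in the second snapshot
    diff <(echo "$a") <(echo "$b") | grep '^>'
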
  • PDF files are simultaneously wonderful and heinous. They are wonderful in being ubiquitous and mostly cross-platform, and heinous in being very difficult to work with from the command line: hard to search, grep, use only the text inside the PDF, or use outside of proprietary products. xpdf is a wonderful set of PDF tools; it is in many Linux distros and can be installed on OS X. While primarily an open PDF viewer for X, xpdf includes the tool "pdftotext", which can extract formatted or unformatted text from inside a PDF that contains text. This text stream can then be further processed by grep or other tools. The '-' after the file name directs output to stdout rather than to a text file with the same name as the PDF. Make sure you use version 3.02 of pdftotext or later; earlier versions clipped lines. The lines extracted from a PDF without the "-layout" option are very long, more like paragraphs; use that form just to test that a pattern exists in the file. With "-layout" the output resembles the original lines, though not perfectly. A filled-in example follows this entry. xpdf is available open source at http://www.foolabs.com/xpdf/


    27
    pdftotext [file] - | grep 'YourPattern'
    drewk · 2010-02-14 21:42:35 19
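
    A sketch with -layout and a made-up file name and pattern:

    # keep the page layout so matches resemble what the PDF shows; -i -n for case and line numbers
    pdftotext -layout report.pdf - | grep -in 'total due'
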

  • 27
    grep -Fx -f file1 file2
    zarathud · 2010-05-28 14:50:14 25

  • 26
    netstat -ant | awk '{print $NF}' | grep -v '[a-z]' | sort | uniq -c
    himynameisthor · 2009-02-05 18:02:59 43
  • Use this command to get a list of committers sorted by the frequency of their commits. A git equivalent is sketched after this entry.


    26
    svn log -q|grep "|"|awk "{print \$3}"|sort|uniq -c|sort -nr
    psytek · 2009-02-17 21:37:03 26
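
    For git repositories the same tally is built in; a one-line equivalent:

    # committers sorted by number of commits
    git shortlog -sn
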
  • I save this to bin/iptrace and run "iptrace ipaddress" to get the Country, City and State of an ip address using the http://ipadress.com service. I add the following to my script to get a tinyurl of the map as well: URL=`lynx -dump http://www.ip-adress.com/ip_tracer/?QRY=$1|grep details|awk '{print $2}'` lynx -dump http://tinyurl.com/create.php?url=$URL|grep tinyurl|grep "19. http"|awk '{print $2}'


    25
    lynx -dump http://www.ip-adress.com/ip_tracer/?QRY=$1|grep address|egrep 'city|state|country'|awk '{print $3,$4,$5,$6,$7,$8}'|sed 's\ip address flag \\'|sed 's\My\\'
    leftyfb · 2009-02-25 17:16:56 49
  • Recursively traverse the directory structure from . down, look for the string "oldstring" in all files, and replace it with "newstring" wherever found. Also: grep -rl oldstring . | xargs perl -pi~ -e 's/oldstring/newstring/'. A safer variant with backups is sketched after this entry.


    25
    grep -rl oldstring . |xargs sed -i -e 's/oldstring/newstring/'
    netfortius · 2009-03-03 20:10:19 34
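
    A slightly safer sketch that keeps a backup of every touched file and copes with spaces in filenames (GNU grep and sed assumed; the .bak suffix is an arbitrary choice):

    # -Z / -0 pass NUL-separated filenames; -i.bak edits in place but leaves .bak copies behind
    grep -rlZ oldstring . | xargs -0 sed -i.bak 's/oldstring/newstring/g'
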
  • This function displays the latest comic from xkcd.com. One of the best things about xkcd is the title text when you hover over the comic, so this function also displays that after you close the comic. To get a random xkcd comic, I also use the following: xkcdrandom(){ wget -qO- dynamic.xkcd.com/comic/random|tee >(feh $(grep -Po '(?<=")http://imgs[^/]+/comics/[^"]+\.\w{3}'))|grep -Po '(?<=(\w{3})" title=").*(?=" alt)';}


    24
    xkcd(){ wget -qO- http://xkcd.com/|tee >(feh $(grep -Po '(?<=")http://imgs[^/]+/comics/[^"]+\.\w{3}'))|grep -Po '(?<=(\w{3})" title=").*(?=" alt)';}
    eightmillion · 2009-11-27 09:11:47 25
  • grep searches through a file and prints out all the lines that match some pattern. Here, the pattern is some string that is known to be in the deleted file. The more specific this string can be, the better. The file being searched by grep (/dev/sda1) is the partition of the hard drive the deleted file used to reside in. The '-a' flag tells grep to treat the hard drive partition, which is actually a binary file, as text. Since recovering the entire file would be nice instead of just the lines that are already known, context control is used. The flags '-B 25 -A 100' tell grep to print out 25 lines before a match and 100 lines after a match. Be conservative with estimates on these numbers to ensure the entire file is included (when in doubt, guess bigger numbers). Excess data is easy to trim out of results, but if you find yourself with a truncated or incomplete file, you need to do this all over again. Finally, the '> results.txt' instructs the computer to store the output of grep in a file called results.txt. Source: http://spin.atomicobject.com/2010/08/18/undelete?utm_source=y-combinator&utm_medium=social-media&utm_campaign=technical


    24
    grep -a -B 25 -A 100 'some string in the file' /dev/sda1 > results.txt
    olalonde · 2010-08-19 20:07:42 19