As an alternative to appending an extra grep -v grep, you can use a simple regular expression in the search pattern (the first letter becomes a single-letter bracket list ;-)) so that the grep process itself drops out of the results.
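A minimal sketch, assuming a hypothetical process name firefox:

ps aux | grep '[f]irefox'

The grep process's own command line contains the literal characters [f]irefox, which the pattern [f]irefox does not match, so grep never lists itself.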
Prints a graphical directory tree from your current directory.
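If the tree utility is not installed, a rough equivalent can be sketched with find and sed (assuming GNU sed):

find . -print | sed -e 's;[^/]*/;|--;g;s;--|; |;g'

Each path component is rewritten into a |-- connector, which approximates tree's indented output.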
This is the result of a several-week venture without X. I found myself totally happy without X (and by extension without Flash) and was able to do just about anything but watch YouTube videos... so this is the solution I came up with for that. I am sure this can be done better, but this does indeed work... and tends to work far better than YouTube's ghetto proprietary Flash player ;-)

Replace $i with any YouTube ID you want and this will scrape the site for the _real_ URL to the full-quality .FLV file on YouTube's server, then hand that over to mplayer (or vlc or whatever you want) to be streamed. In some browsers you can replace $i with just a %, or put this in a shell script so all YouTube IDs can be handed directly off to your media player of choice for true streaming without the need for Flash or a downloader like clive. (I do however fully recommend clive if you wish to archive videos instead of streaming them.) If any interest is shown I would be more than happy to provide similar commands for other sites; most streaming Flash players use similar logic to YouTube.

Edit: 05/03/2011 - Updated line to work with current YouTube. It could be a lot prettier, but I will probably follow up with another update when I figure out how to get rid of that pesky grep. Sed should take that syntax... but it doesn't.

Original (no longer working) command:

mplayer -fs $(echo "http://youtube.com/get_video.php?$(curl -s $youtube_url | sed -n "/watch_fullscreen/s;.*\(video_id.\+\)&title.*;\1;p")")
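As a sketch of the "put this in a shell script" idea, here is the original (now defunct) pipeline wrapped in a function that takes the video ID as $1; YouTube no longer serves this URL format, so this only illustrates the shape:

ytplay() { mplayer -fs "$(echo "http://youtube.com/get_video.php?$(curl -s "http://youtube.com/watch?v=$1" | sed -n '/watch_fullscreen/s;.*\(video_id.\+\)&title.*;\1;p')")"; }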
Finds random strings within /dev/urandom, using grep to filter for alphanumeric characters only, then printing the first 30 of them and removing all the line feeds.
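One way to assemble the described pipeline (a sketch; grep -o emits one character per line, so head -n 30 keeps 30 characters):

strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 30 | tr -d '\n'; echo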
Written for Linux; the real point of the example is how to produce ASCII text graphs based on a numeric value (anything where uniq -c is useful is a good candidate).
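A minimal sketch of the idea, assuming input of repeated labels, one per line: uniq -c supplies the count, and awk turns each count into a bar of # characters:

sort | uniq -c | awk '{ printf "%-12s ", $2; for (i = 0; i < $1; i++) printf "#"; print "" }'

Feed it, say, a hypothetical column of HTTP status codes and each code gets one # per occurrence.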
This is how I typically grep: -R recurses into subdirectories, -n shows line numbers of matches, -i ignores case, -s suppresses "doesn't exist" and "can't read" messages, and -I ignores binary files (technically, processes them as having no matches, which is important for showing inverted results with -v). I have grep aliased to "grep --color=auto" as well, but that's a matter of formatting, not function.
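Put together (the pattern and path are placeholders):

grep -RnisI 'pattern' /path/to/tree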
just makes some data scroll off the terminal. wow.
Ever ask yourself "How much data would be lost if I pressed the reset button?" Scary, isn't it?
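One way to peek at the answer on Linux: the kernel reports not-yet-flushed data in /proc/meminfo, e.g.:

grep -E '^(Dirty|Writeback):' /proc/meminfo

Dirty is data waiting to be written out; Writeback is data being written right now.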
I have a bash alias for this command line and find it useful for searching C code for error messages. The -H tells grep to print the filename; you can omit the -i to match the case exactly, or keep the -i for case-insensitive matching. The find command finds all .c and .h files.
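One way the pieces fit together (a sketch; the message is a placeholder):

find . -name '*.[ch]' -exec grep -Hi 'error message' {} +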
Get your colorized grep output in less(1). This involves two things: forcing grep to output colors even though it's not going to a terminal and telling less to handle those properly.
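Concretely, that means --color=always on the grep side and -R on the less side (pattern and file are placeholders):

grep --color=always 'pattern' file.txt | less -R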
You have an external USB drive or key. Apply this command (using the file path of anything on your device) and it will simulate unplugging that device. If you just want the port, type: echo $(sudo lshw -businfo | grep -B 1 -m 1 $(df "/path/to/file" | tail -1 | awk '{print $1}' | cut -c 6-8) | head -n 1 | awk '{print $1}' | cut -c 5- | tr ":" "-")
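The unplug step itself is typically done by writing that port ID to the USB driver's unbind node in sysfs; a sketch, assuming the command above printed a hypothetical port 1-1:

echo '1-1' | sudo tee /sys/bus/usb/drivers/usb/unbind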
PDF files are simultaneously wonderful and heinous. They are wonderful in being ubiquitous and mostly cross-platform, and heinous in being very difficult to work with from the command line: to search, grep, use only the text inside the PDF, or use outside of proprietary products. xpdf is a wonderful set of PDF tools. It is in many Linux distros and can be installed on OS X. While primarily an open PDF viewer for X, xpdf includes the tool "pdftotext", which can extract formatted or unformatted text from inside a PDF that has text. This text stream can then be further processed by grep or another tool. The '-' after the file name directs output to stdout rather than to a text file with the same name as the PDF. Make sure you use version 3.02 of pdftotext or later; earlier versions clipped lines. Without the "-layout" option the extracted lines are very long, more like paragraphs; use that form just to test that a pattern exists in the file. With "-layout" the output resembles the original lines, though it is not perfect. xpdf is available open source at http://www.foolabs.com/xpdf/
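Putting that together (document.pdf and the pattern are placeholders):

pdftotext -layout document.pdf - | grep -i 'pattern'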
recursively traverse the directory structure from . down, look for string "oldstring" in all files, and replace it with "newstring", wherever found
also:
grep -rl oldstring . | xargs perl -pi~ -e 's/oldstring/newstring/'
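The same replacement can also be sketched with find and GNU sed's in-place flag (note this form keeps no backups):

find . -type f -exec sed -i 's/oldstring/newstring/g' {} +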
This function displays the latest comic from xkcd.com. One of the best things about xkcd is the title text when you hover over the comic, so this function also displays that after you close the comic. A sketch of one way to write it follows the random variant below.
To get a random xkcd comic, I also use the following:
xkcdrandom(){ wget -qO- dynamic.xkcd.com/comic/random|tee >(feh $(grep -Po '(?<=")http://imgs[^/]+/comics/[^"]+\.\w{3}'))|grep -Po '(?<=(\w{3})" title=").*(?=" alt)';}
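For the latest comic, a sketch along the same lines, with the same scraping assumptions as the random variant (the image URL pattern may have changed since):

xkcd(){ wget -qO- http://xkcd.com/|tee >(feh $(grep -Po '(?<=")http://imgs[^/]+/comics/[^"]+\.\w{3}'))|grep -Po '(?<=(\w{3})" title=").*(?=" alt)';}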
grep searches through a file and prints out all the lines that match some pattern. Here, the pattern is some string that is known to be in the deleted file. The more specific this string can be, the better. The file being searched by grep (/dev/sda1) is the partition of the hard drive the deleted file used to reside in. The "-a" flag tells grep to treat the hard drive partition, which is actually a binary file, as text. Since recovering the entire file would be nice instead of just the lines that are already known, context control is used. The flags "-B 25 -A 100" tell grep to print out 25 lines before a match and 100 lines after a match. Be conservative with estimates on these numbers to ensure the entire file is included (when in doubt, guess bigger numbers). Excess data is easy to trim out of results, but if you find yourself with a truncated or incomplete file, you need to do this all over again. Finally, the "> results.txt" instructs the computer to store the output of grep in a file called results.txt. Source: http://spin.atomicobject.com/2010/08/18/undelete
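Assembled from the description (the search string and partition are placeholders; run it as root, ideally against an unmounted partition):

grep -a -B 25 -A 100 'some string from the deleted file' /dev/sda1 > results.txt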
-l outputs only the file names, -i ignores the case, -r descends into subdirectories.
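Combined (the pattern is a placeholder):

grep -lir 'pattern' .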
Change the -p argument for the port number. See "man nmap" for different ways to specify address ranges.
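For example, a sketch scanning a hypothetical /24 for SSH listeners:

nmap -p 22 192.168.1.0/24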
When you fill in a form with Firefox, you see things you entered in previous forms with the same field names. This command lists everything Firefox has stored. Using a "delete from", you can remove annoying Google queries, for example ;-)
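The form history lives in an SQLite database inside the profile; a sketch, assuming a single default profile and the moz_formhistory table:

sqlite3 ~/.mozilla/firefox/*.default/formhistory.sqlite 'SELECT fieldname, value FROM moz_formhistory;'

and to prune, e.g. Google's search box (its field is named q):

sqlite3 ~/.mozilla/firefox/*.default/formhistory.sqlite "DELETE FROM moz_formhistory WHERE fieldname = 'q';"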
Tested on Linux and OS X.
Usage: flight_status airline_code flight_number [offset_of_departure_date_from_today]

So, for instance, to track a flight which departed yesterday, the optional 3rd parameter should have a value of -1, e.g. flight_status ua 3655 -1

output
---------
Status: Arrived
Departure: San Francisco, CA (SFO)
  Scheduled: 6:30 AM, Jan 3
  Takeoff: 7:18 AM, Jan 3
  Term-Gate: Term 1 - 32A
Arrival: Newark, NJ (EWR)
  Scheduled: 2:55 PM, Jan 3
  At Gate: 3:42 PM, Jan 3
  Term-Gate: Term C - C131

Note: html2text needs to be installed for this command; only tested on Ubuntu 9.10.
commandlinefu.com is the place to record those command-line gems that you return to again and again. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes respectively - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for: