commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Recursively replace a string in files, on lines matching a given string. Lines containing the string "group name" will have their first > character replaced, while > characters on other lines are ignored.
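The original command isn't reproduced here, but a minimal sketch of the idea, assuming GNU sed and find (the file set and replacement text are placeholders):

find . -type f -exec sed -i '/group name/s/>/REPLACEMENT/' {} +   # REPLACEMENT is a placeholder

The /group name/ address restricts the substitution to matching lines, and s without the g flag replaces only the first > on each of those lines.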
This is a minimalistic version of the ubiquitous Google definition screen scraper. This version was designed not only to run fast, but to work using BusyBox. BusyBox is a collection of basic Unix tools compiled into a single binary to save space on tiny Unix installations. For example, although my phone doesn't have perl or the GNU utilities, it does have BusyBox's stripped-down versions of wget, tr, and sed. It turns out those tools suffice for many tasks.
Known Bugs: This script does not handle HTML entities at all. I don't think there's an easy way to do that within BusyBox, but I'd love to see it if someone could do it. Also, this script can only define a single word, not phrases. (Well, you could if you typed in %20, but that'd be gross.) Lastly, this script does not show the URL where definitions were found. Given the randomness of the Net, that last bit of information is often key.
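As a rough sketch of the approach (not the original script; the URL and sed pattern are placeholders, since Google's markup changes frequently): fetch the results page with BusyBox wget, break HTML tags onto separate lines with tr, and pick out the definition text with sed:

wget -q -O - "http://www.google.com/search?q=define:$1" | tr '<' '\n' | sed -n 's/^li[^>]*>//p'   # pattern is illustrative only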
xargs deals badly with special characters (such as space, ' and "). To see the problem, try this:
touch important_file
touch 'not important_file'
ls not* | xargs rm
This deletes important_file, because xargs splits 'not important_file' at the space. GNU Parallel (https://savannah.nongnu.org/projects/parallel/) does not have this problem.
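If Parallel isn't available, NUL-delimited input also sidesteps the quoting problem (assuming GNU findutils):

find . -maxdepth 1 -name 'not*' -print0 | xargs -0 rm

Here find separates names with NUL bytes and xargs -0 splits on them, so spaces and quotes in file names are harmless.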
sed already has an option for editing files in place and making backup copies of the old file: -i edits a file in place, and if you give it an argument, it makes a backup file using that string as an extension.
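For example, with GNU sed (the file name and pattern here are illustrative):

sed -i.bak 's/foo/bar/g' file.txt

This rewrites file.txt in place and keeps the original as file.txt.bak. Note that the extension must be attached directly to -i, with no space.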
Slightly simpler version of the previous sed command that does the same thing. In this case, output stops at the matching line and sed terminates there as well, instead of proceeding through the whole file.
If BREs can be used, this sed version will also get the job done.
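Since neither command is reproduced here, a hedged guess at the general shape, assuming the goal is to stop reading at the first match:

sed '/pattern/q' file.txt

The q command makes sed quit immediately after printing the matching line, so the rest of the file is never read.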
Print out contents of file with line numbers.
This version will print a number for every line, and separates the numbering from the line with a tab.
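Two common ways to get this behaviour (the original may have used either):

awk '{print NR "\t" $0}' file.txt
nl -ba file.txt

awk prints the record number, a tab, then the line; nl -ba numbers blank lines too, and its default separator is already a tab.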
Require "grep -P" ( pcre ).
If you don't have grep -P, use that :
grep -Eo '"url":"[^"]+' $(ls -t ~/.mozilla/firefox/*/sessionstore.js | sed q) | cut -d'"' -f4
You only need to install the ImageMagick package.
Display an xkcd comic with its title and save it in the /tmp directory.
If you prefer to view the newest xkcd, use this command:
wget -q http://xkcd.com/ -O-| sed -n '/<img src="http:\/\/imgs.xkcd.com\/comics/{s/.*\(http:.*\)" t.*/\1/;p}' | awk '{system ("wget -q " $1 " -O- | display -title $(basename " $1") -write /tmp/$(basename " $1")");}'
This command finds all files in the current directory and subdirectories, and replaces all occurrences of "oldstring" in every file with "newstring".
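A typical form of such a command, assuming GNU sed (careful: any file matched by find is edited, including binaries):

find . -type f -exec sed -i 's/oldstring/newstring/g' {} +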
My version uses printf and command substitution ($()) instead of echo -e and xargs; it's a few characters shorter, but there's no real substantive difference.
It also supports lowercase hex letters, and a backslash (\) will make it through unescaped.
As unixmonkey7109 pointed out, the first awk pass replaces three steps.
It's not a big one-liner, and it may not work for everybody; that depends on the details of the access_log configuration in your httpd.conf. I use it as a prerotate command for logrotate in the httpd section, so it executes before access_log rotation, every day at midnight.
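A sketch of the logrotate stanza described above (the log path and the command itself are placeholders for whatever your setup uses):

/var/log/httpd/access_log {
    daily
    prerotate
        # placeholder for the actual command
        /path/to/your-command
    endscript
}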
For this example, all files in the current directory that end in '.xml.skippy' will have the '.skippy' removed from their names.
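One way to write this, assuming bash and its ${var%suffix} parameter expansion:

for f in *.xml.skippy; do mv "$f" "${f%.skippy}"; done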
I modified 4077's and marssi's command lines to simplify them and to skip an error when parsing the first line of lsmod's output (4077). The result is also more concise and compact. I skipped xargs (it's not required here). This version is for GNU sed only.
For those without GNU sed, use this:
modinfo $(lsmod | awk 'NR>1 {print $1}') | sed -e '/^dep/s/$/\n/g' -e '/^file/b' -e '/^desc/b' -e '/^dep/b' -e d
I liked command 4077, so I improved it by doing all the text manipulation with sed.
"Run this as root, it will be helpful to quickly get information about the loaded kernel modules." Thanks, mohan43u.
Strips comments from at least bash and PHP scripts: normal # and // comments as well as PHP block comments.
It removes all of the following:
empty/blank lines
lines beginning with #
lines beginning with //
lines beginning with /*
lines beginning with a space and then *
lines beginning with */
It also deletes the lines if there's whitespace before any of the above.
Add an alias to use in .bashrc like this:
alias stripcomments="sed -e '/^[[:blank:]]*#/d; s/[[:blank:]][[:blank:]]*#.*//' -e '/^$/d' -e '/^\/\/.*/d' -e '/^\/\*/d;/^ \* /d;/^ \*\//d'"
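After reloading your .bashrc, usage is simply:

stripcomments script.sh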
The -i option in sed allows in-place editing of the input file.
Replace myexpression with any regular expression.
The /expr/d syntax means: if the expression matches, delete the line.
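The command under discussion is presumably of this form (a hedged guess; it isn't reproduced here):

sed -i '/myexpression/d' /path/to/file.txt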
You can reverse the functionality to keep matching lines only by using:
sed -i -n '/myexpression/p' /path/to/file.txt
This defines a function you can put in your ~/.bashrc and then run whenever you need it, in any terminal, with an IP address as the argument.
Alternative command to retrieve the CPU model name and strip off the "model name : " labels.
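One common way to do this on Linux, assuming /proc/cpuinfo is available (this may or may not match the original command):

awk -F': ' '/model name/ {print $2; exit}' /proc/cpuinfo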