A worse alternative to Ctrl+R: grep the history, removing duplicates without sorting (case-insensitive search).
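The original command isn't shown, but a minimal sketch might look like this (the function name `hgrep` and the awk-based dedup are my own choices, not from the original):

```shell
# case-insensitive history search, dropping repeated commands
# while preserving the original order (unlike sort -u)
hgrep() {
    history | grep -i -- "$1" | awk '!seen[$0]++'
}
```

The awk expression prints a line only the first time it is seen, so ordering survives, which plain `sort -u` would destroy.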
This shell function displays a list of binaries contained in an installed package; it works on Debian-based Linux distributions.
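One possible version of such a function, assuming `dpkg` (the name `pkgbins` is hypothetical, not from the original):

```shell
# list executable paths shipped by an installed Debian package:
# dpkg -L prints every file the package owns; the grep keeps only
# paths under a bin/ or sbin/ directory
pkgbins() {
    dpkg -L "$1" | grep -E '/s?bin/'
}
```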
GNU grep's Perl-compatible regular expressions (PCRE).
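A quick illustration of what `-P` buys you over plain ERE (this example is mine, not from the original entry):

```shell
# -P enables Perl syntax such as \d and lookarounds,
# which -E (POSIX ERE) does not support
echo 'listening on port 8080' | grep -Po '(?<=port )\d+'   # prints 8080
```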
Directly download all mp3 files of the desired podcast
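Since the command itself isn't shown, here is one common way to do this with wget and grep; the helper name and feed URL are placeholders:

```shell
# hypothetical helper: fetch the RSS feed, pull out mp3 enclosure
# URLs, and download each one (-nc skips files already present)
fetch_podcast() {
    wget -q -O - "$1" | grep -o 'http[^"<]*\.mp3' | xargs -r -n1 wget -nc
}
```

Usage would be `fetch_podcast 'http://example.com/podcast.rss'`.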
Sometimes things break. You can find the most recent errors using journalctl combined with the classic tools sort and uniq.
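A sketch of that combination (exact flags are my guess at the original, not a quote of it):

```shell
# most frequent error-level messages from the current boot,
# highest count first; -o cat strips timestamps so identical
# messages collapse together under uniq -c
journalctl -p err -b --no-pager -o cat | sort | uniq -c | sort -rn | head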
In this case I'm selecting all PHP files in a directory, echoing each filename, and appending it to ~/temp/errors.txt. Then I run my alias for PHPCS (with WordPress flags), pipe its output through grep to look for GET, and append that output to the same file. The result is a list of files, with the GET security errors for each file listed under its name. Extrapolate this to run any command on any list of files and pipe the output to a file. Remove the >> ~/temp/errors.txt to send output to the screen instead of a file.
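The general pattern described above could be sketched like this; `mycmd` is a stand-in for the author's PHPCS alias, which isn't given:

```shell
# run a checker on every PHP file, logging each filename followed
# by any of its output lines mentioning GET ("mycmd" is a
# placeholder; drop the >> redirects to print to the screen)
check_files() {
    for f in *.php; do
        echo "$f" >> ~/temp/errors.txt
        mycmd "$f" | grep 'GET' >> ~/temp/errors.txt
    done
}
```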
This is a working, though probably clumsy, version of the script submitted by felix001. It works on Ubuntu and Cygwin. It would be great as a bash function defined in .bashrc; it would also work as a script placed in the path.
I had some trouble removing empty lines from a file (perhaps due to UTF-8, as it's the source of all evil); \W did the trick eventually.
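A small demonstration of the \W trick (my reconstruction; the original command isn't shown):

```shell
# keep only lines containing at least one word character;
# ^\W*$ matches empty lines as well as lines of pure
# whitespace or punctuation debris — prints "first" and "second"
printf 'first\n \n\t\nsecond\n' | grep -vP '^\W*$'
```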
Greps using only ASCII, skipping the overhead of matching UTF-8 characters. Some stats:
$ export LANG=C; time grep -c Quit /var/log/mysqld.log → 7432 (real 0m0.191s, user 0m0.112s, sys 0m0.079s)
$ export LANG=en_US.UTF-8; time grep -c Quit /var/log/mysqld.log → 7432 (real 0m13.462s, user 0m9.485s, sys 0m3.977s)
Try strace-ing grep with and without LANG=C.
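If you'd rather not export the variable into your whole session, the locale can be set for a single command; `LC_ALL` overrides `LANG` and every other locale setting:

```shell
# byte-wise matching in the C locale for this one invocation only
printf 'Quit\nConnect\nQuit\n' | LC_ALL=C grep -c Quit   # prints 2
```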
This version works a little better. The regular expression is not 100% accurate for XML parsing, but it will handle any valid XML document.
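For illustration, one regex-based extraction of this kind (my sketch, not the original command; a real XML parser is still the safer choice):

```shell
# PCRE \K discards the opening tag from the match, leaving only
# the element text — prints Hello
printf '<item><title>Hello</title></item>\n' | grep -oP '<title>\K[^<]+'
```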
This will drop you into vim to edit all files that contain your grep string.
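A minimal sketch of the idea (the function name `vgrep` is mine, and the unquoted substitution breaks on filenames containing spaces):

```shell
# grep -rl prints only the names of matching files; command
# substitution hands them to vim as arguments
vgrep() {
    vim $(grep -rl -- "$1" .)
}
```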
Saves one command, but needs GNU grep :-(
Download ack from http://betterthangrep.com/
This ensures that you don't match any broadcast or network addresses and stay between 1.1.1.1 and 254.254.254.254.
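Since the command itself isn't shown, here is one possible grep that fits the description: each octet is limited to 1-254, so the .0 and .255 octets typical of network and broadcast addresses never match (GNU grep's \b word boundary is assumed).

```shell
# hedged reconstruction: only dotted quads whose octets are all
# in 1-254 survive, so 10.10.10.255 and 0.1.2.3 are rejected
printf '192.168.1.10 10.10.10.255 0.1.2.3\n' | \
  grep -Eo '\b(([1-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-4])\.){3}([1-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-4])\b'
```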
Requires grep -P (PCRE).
If you don't have grep -P, use this:
grep -Eo '"url":"[^"]+' $(ls -t ~/.mozilla/firefox/*/sessionstore.js | sed q) | cut -d'"' -f4
This version makes uses of Bash shell expansion, so it might not work in all other shells.
The difference between this and the other alternatives here using only grep is that find will, by default, not follow a symlink. In some cases, this is definitely desirable.
Using find also allows you to exclude certain files, eg
find directory/ ! -name "*.tmp" -exec grep -ni phrase {} +
would exclude any .tmp files.
Also note that there's no need for calling grep recursively, as find passes each found file to grep.
I use this sometimes when ctags won't help.
commandlinefu.com is the place to record those command-line gems that you return to again and again. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for: