commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign in using OpenID credentials or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the three Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).
This will catch most separators in the local part of an email address (a sample pattern is sketched after the list):
dot .
dash -
underscore _
plus + (added for gmail)
... and the basic dash '-' of host names.
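As an illustration, a grep pattern covering those separators might look like the line below; this is a minimal sketch, not the original command, and mail.txt is a hypothetical input file:

grep -Eo '[A-Za-z0-9]+([._+-][A-Za-z0-9]+)*@[A-Za-z0-9]+([.-][A-Za-z0-9]+)*' mail.txt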
This command is useful for searching through a whole folder's worth of PDF files.
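One way to do this, assuming pdftotext (from poppler-utils) and GNU grep are available, with "searchterm" as a placeholder:

find . -name '*.pdf' -exec sh -c 'pdftotext "$1" - | grep -iH --label="$1" "searchterm"' sh {} \;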
I know how hard it is to find an old command when you can't for the life of you remember what it was. Here's the solution: grep the history for it. Depending on how old the command is, you can pipe the result through head or tail; if you have no idea how long ago you ran it, leave that part out and search the whole history. This is a very easy and effective way to find the command you are looking for.
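For example ("ssh" standing in for whatever fragment you remember):

history | grep ssh          # search the entire history
history | grep ssh | tail   # show only the most recent matches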
I have found that base64 encoded webshells and the like contain lots of data but hardly any newlines due to the formatting of their payloads. Checking the "width" will not catch everything, but then again, this is a fuzzy problem that relies on broad generalizations and heuristics that are never going to be perfect.
What I have done is set an arbitrary threshold (200 for example) and compare the values that are produced by this script, only displaying those above the threshold. One webshell I tested this on scored 5000+ so I know it works for at least one piece of malware.
Searched strings:
passthru, shell_exec, system, phpinfo, base64_decode, chmod, mkdir, fopen, fclose, readfile
Since some of these strings may also occur in normal text or legitimate code, you will need to adjust the threshold or the regex to suit your needs.
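A sketch of the heuristic described above, scoring each matching PHP file by average bytes per line and filtering on the threshold of 200 (the original script may well differ in detail):

# Score PHP files containing the suspicious functions by bytes per line;
# long base64 payloads with few newlines produce high scores.
find . -name '*.php' -print0 | while IFS= read -r -d '' f; do
  if grep -qE 'passthru|shell_exec|system|phpinfo|base64_decode|chmod|mkdir|fopen|fclose|readfile' "$f"; then
    bytes=$(wc -c < "$f")
    lines=$(wc -l < "$f")
    [ "$lines" -eq 0 ] && lines=1   # a file with no newlines at all is the most suspicious case
    echo "$((bytes / lines)) $f"
  fi
done | awk '$1 > 200'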
A fast and easy way to find all established TCP connections without using the netstat command.
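Two ways of getting there (not necessarily the original command): the ss utility from iproute2, or reading /proc/net/tcp directly, where a state of 01 means ESTABLISHED:

ss -t state established
awk 'NR > 1 && $4 == "01"' /proc/net/tcp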
Calculate how many lines differ between two files.
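One possible approach, assuming hypothetical file names; diff prefixes lines unique to either file with < or >:

diff old.txt new.txt | grep -c '^[<>]'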
Get the longest match of a file extension (e.g. for 'foo.tar.gz' you get '.tar.gz' instead of '.gz').
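A bash parameter-expansion sketch of the idea, assuming the file name contains at least one dot:

fn='foo.tar.gz'
echo ".${fn#*.}"    # strips the shortest leading '*.' match, printing '.tar.gz'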
`pwd` returns the current path
`grep -o` prints each slash on new line
perl generates the paths sequence: './.', './../.', ...
`readlink` canonicalizes the paths (which makes things more transparent)
`xargs -tn1` applies chmod for each of them. Each command applied is getting printed to STDERR.
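Putting those steps together, the full pipeline might look like the sketch below; the o+x mode is a placeholder assumption, and the original command may differ:

pwd | grep -o / | perl -ne 'print "./", "../" x ($. - 1), ".\n"' | xargs -n1 readlink -f | xargs -tn1 chmod o+x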
Runs a diff on two files, ignoring comments and blank lines (diff's -I RE option does not work as expected). Adapted from a post found on Stack Exchange.
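A common way to achieve this, assuming bash process substitution and #-style comments (not necessarily the adapted command itself):

diff <(grep -vE '^[[:space:]]*(#|$)' file1) <(grep -vE '^[[:space:]]*(#|$)' file2)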
ls -lhR
Lists everything using -l "long listing format", which includes the space used by each folder. Displays it in -h "human readable form" (e.g. 2.2G, 32K), and -R recurses into subfolders.
grep -e uses a regex to show only the lines containing the word "total" or ending in ":" (those with the folder's name).
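Combined, the pipeline might read (a sketch; the exact grep invocation may differ):

ls -lhR | grep -e total -e ':$'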
Substitute for #11720
Can probably be even shorter and easier.