My most used bash function without a doubt!
Finds all files recursively from your working directory that match 'aMethodName', skipping any file whose path contains 'target'. Handy for finding text without matching everything in your target or Subversion directories.
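A minimal sketch of such a function (the name "search" and the exact exclusion patterns are assumptions, since the original function body isn't shown; GNU grep assumed):

search() {
    # recursively grep for the pattern, skipping paths containing target/ or .svn/
    grep -rn "$1" . | grep -v -e '/target/' -e '/\.svn/'
}
# usage: search aMethodName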
Old SysV systems and Sun machines don't have grep's -H option. Adding /dev/null to the argument list forces grep into its multi-file output mode, so it reports the file name with each match.
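For example (a sketch; the pattern is a placeholder):

find . -type f -exec grep 'pattern' {} /dev/null \;   # /dev/null guarantees >1 file argument, so matches are prefixed with the file name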
This is a modified version of the OP's command, wrapped in a bash function. This version handles newlines and other whitespace in file names correctly; the original has problems with the (thankfully rare) case of newlines in file names. It also allows checking an arbitrary number of directories against each other, which is nice when the directories you suspect of holding duplicates don't share a convenient common ancestor.
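A minimal sketch of the idea (the function name "dupdirs" is hypothetical; GNU md5sum, sort, and uniq are assumed -- GNU md5sum backslash-escapes names containing newlines, which is how this sketch survives them):

dupdirs() {
    find "$@" -type f -print0 \
      | xargs -0 md5sum \
      | sort \
      | uniq -w32 --all-repeated=separate   # group lines sharing the 32-char hash
}
# usage: dupdirs /path/one /path/two /path/three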
Executing pfiles will return a list of all descriptors utilized by the process. We are interested in the S_IFREG entries, since they usually point to files. Each such line carries the inode number of the file, which we can use to find the filename. The only bad thing is that, to avoid searching from /, you have to guess where the file might live. Improvements are more than welcome; lsof was not available in my case.
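A sketch of the procedure on Solaris ($PID, the inode number, and /suspected/path are placeholders):

pfiles $PID | grep S_IFREG               # each matching line carries an ino: field
find /suspected/path -xdev -inum 12345   # look up the name; -xdev stays on one filesystem, since inodes are per-filesystem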
If you have GNU findutils, you can get only the file name with
find /some/path -type f -printf '%f\n'
instead of
find /some/path -type f | gawk -F/ '{print $NF}'
Example of using zsh glob qualifiers: "." matches plain files, and "f:" matches files whose access rights match the given spec; here o+w, i.e. others have write permission.
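For instance, a sketch that lists world-writable plain files in the current directory:

ls -ld -- *(.f:o+w:)   # "." = plain files, "f:o+w:" = mode matches o+w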
Searched strings: passthru, shell_exec, system, phpinfo, base64_decode, chmod, mkdir, fopen, fclose, readfile. Since some of these strings can occur in normal text or in legitimate code, you will need to adjust the command or the entire regex to suit your needs.
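One way such a search might look (a sketch; the *.php include pattern and /var/www path are assumptions, and GNU grep is assumed for -r and --include):

grep -rEl 'passthru|shell_exec|system|phpinfo|base64_decode|chmod|mkdir|fopen|fclose|readfile' --include='*.php' /var/www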
I have found that base64 encoded webshells and the like contain lots of data but hardly any newlines due to the formatting of their payloads. Checking the "width" will not catch everything, but then again, this is a fuzzy problem that relies on broad generalizations and heuristics that are never going to be perfect. What I have done is set an arbitrary threshold (200 for example) and compare the values that are produced by this script, only displaying those above the threshold. One webshell I tested this on scored 5000+ so I know it works for at least one piece of malware.
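One way to score files by average line length (a sketch; the *.php glob and the 200 threshold are arbitrary, per the above):

for f in *.php; do
    awk -v f="$f" '{t += length($0)} END {if (NR) print t/NR, f}' "$f"   # average width, then name
done | awk '$1 > 200'   # keep only files above the threshold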
This command is for producing GNU sha256sum-compatible hashes on UNIX systems that don't have sha256sum but do have OpenSSL, such as stock IBM AIX. It:
1. Saves a wrapper script for UNIX find that (a) feeds a file to openssl in SHA256 hash-calculation mode and (b) echoes the output followed by the filename.
2. Makes the wrapper executable.
3. Runs find on a directory, processing only regular files and running the wrapper on each one to calculate its SHA256 hash.
Still pending: figuring out how to verify a sha256sum file in a similar environment.
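A sketch of the three steps (the script path and search directory are placeholders; this relies on openssl dgst printing "SHA256(file)= hash", so awk's last field is the hash):

cat > /tmp/sha256.sh <<'EOF'    # step 1: the wrapper
#!/bin/sh
h=$(openssl dgst -sha256 "$1" | awk '{print $NF}')
printf '%s  %s\n' "$h" "$1"     # GNU sha256sum format: hash, two spaces, name
EOF
chmod +x /tmp/sha256.sh                            # step 2
find /some/dir -type f -exec /tmp/sha256.sh {} \;  # step 3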
* Find all file sizes and file names from the current directory down (replace "." with a target directory as needed).
* Sort the file sizes in numeric order.
* List only the duplicated file sizes.
* Drop the file sizes so only a list of files remains (retaining order).
* Calculate md5sums on all of the remaining files.
* Replace the first instance of two spaces (md5sum's output separator) with a \0.
* Drop the unique md5sums so only duplicate files remain listed.
* Use AWK to aggregate identical files onto one line.
* Remove the blank line from the beginning (this could be done more efficiently with another IF in the AWK command, but then the whole line exceeded the 255-character limit).
Each output line contains an md5sum followed by all of the files that share that md5sum. All fields are \0 delimited; all records are \n delimited.
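A newline-delimited sketch of the same idea, simplified for readability (so it breaks on names containing newlines or tabs; GNU find, awk, xargs, and coreutils assumed):

find . -type f -printf '%s\t%p\n' \
  | awk -F'\t' '{n[$1]++; f[$1] = f[$1] $2 "\n"}
                END {for (s in n) if (n[s] > 1) printf "%s", f[s]}' \
  | xargs -d '\n' md5sum \
  | sort \
  | uniq -w32 --all-repeated=separate   # only files sharing a hash survive, grouped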
Note that sed -i is non-standard, although both GNU and current BSD systems support it (BSD sed requires an explicit, possibly empty, backup suffix after -i).
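For example, the same in-place replacement in both dialects (the pattern and glob are placeholders):

find . -name '*.txt' -exec sed -i 's/old/new/g' {} +      # GNU sed
find . -name '*.txt' -exec sed -i '' 's/old/new/g' {} +   # BSD/macOS sed, empty backup suffix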
Can also be accomplished with
find . -name "*.txt" | xargs perl -pi -e 's/old/new/g'
as shown here - http://www.commandlinefu.com/commands/view/223/a-find-and-replace-within-text-based-files-to-locate-and-rewrite-text-en-mass.
Will find and list all core files from the current directory on. If you'd like to double-check before removal, delete them interactively with find's -exec rm -i (piping to xargs rm -i doesn't work well, because xargs consumes the stdin that rm -i needs for its prompts).
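For example (a sketch; matching only files named exactly "core" is an assumption):

find . -type f -name core                     # list the core files first
find . -type f -name core -exec rm -i {} \;   # rm -i can prompt here, since find leaves stdin on the terminal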
Grep can search files and directories recursively. Pairing the -Z option with xargs -0 gives you NUL-separated results that survive spaces (and other odd characters) in file names, suitable for feeding to other commands like rm.
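For example, removing every file that contains a pattern (a sketch; GNU grep assumed):

grep -rlZ 'some pattern' . | xargs -0 rm   # -l prints names only, -Z NUL-terminates them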
This find syntax seems a little easier for me to remember when I have to use -prune on AIX's find. It works with GNU find, too. Add whatever other find options you need after -prune.
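The idiom looks like this (a sketch; pruning .svn and matching *.java are assumptions):

find . -name .svn -prune -o -type f -name '*.java' -print   # skip .svn trees, print matches elsewhere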
A simple bash wrapper function around the find command. I use this much more than find itself.
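A minimal sketch of such a wrapper (the name "ff" and its behavior are assumptions, since the original function isn't shown):

ff() { find . -iname "*$1*" 2>/dev/null; }
# usage: ff report   -> finds anything with "report" in its name, case-insensitively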