commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that reach a minimum of 3 and of 10 votes; that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…).
This command recursively makes all "*.sh" files in a folder executable.
This command is useful for applying chmod recursively to a particular kind of file.
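A minimal sketch of that idea with find (the pattern and permission bits here are just placeholders):

find . -type f -name '*.sh' -exec chmod +x {} +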
Avoids the nested 'find' commands but doesn't seem to run any faster than syssyphus's solution.
Opening several files at once in Vim is easy in combination with the find command.
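For example, this hedged one-liner opens every Python file below the current directory in a single Vim session (the pattern is only an illustration):

find . -type f -name '*.py' -exec vim {} +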
This command will take the files in a directory, rename them, and number them from 1 to N.
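A rough bash sketch of such a renamer (the "img_" prefix is an assumption, and it presumes every file has an extension):

i=1
for f in *; do
  [ -f "$f" ] || continue
  mv -- "$f" "img_$((i++)).${f##*.}"   # keep the original extension
done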
Black belt stuff.
Hell of a time saver.
This checks JPEG data and metadata; grep the output as needed, maybe with -B1 Warning for the first part and -E "WARNING|ERROR" for the second.
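One hedged way to produce such output is with jpeginfo (the tool choice is an assumption; -c verifies the actual image data):

find . -iname '*.jpg' -exec jpeginfo -c {} \; | grep -E 'WARNING|ERROR'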
Linux users who want to extract text from PDF files in the current directory and its sub-directories can use this command. It requires "bash", "ps2ascii" and "par", with the PARINIT environment variable sanely set (see man par). WARNING: the file "junk.sh" will be created, run, and deleted in the current directory, so you _must_ have sufficient rights. Edit the command if you need to avoid using the file name "junk.sh".
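A temp-file-free sketch of the same idea, under the same requirements (ps2ascii, par and a sane PARINIT):

find . -type f -name '*.pdf' -exec sh -c 'ps2ascii "$1" | par > "${1%.pdf}.txt"' _ {} \;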
Let the shell handle the repetition instead of find :)
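For instance, with bash 4's globstar option the shell itself walks the tree (a sketch, not the poster's exact command):

shopt -s globstar
chmod +x -- **/*.sh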
Executing pfiles returns a list of all descriptors used by the process.
We are interested in the S_IFREG entries, since those usually point to regular files.
Each such line contains the inode number of the file, which we use to find the filename.
The only drawback is that, to avoid searching from /, you have to guess where the file might be.
Improvements more than welcome.
lsof was not available in my case
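Roughly, the approach could look like this (a sketch for Solaris; the PID, the guessed base directory and the field parsing are all assumptions):

pid=1234; base=/var
pfiles "$pid" |
  awk '/S_IFREG/ { for (i = 1; i <= NF; i++) if ($i ~ /^ino:/) { sub(/^ino:/, "", $i); print $i } }' |
  while read -r inum; do
    find "$base" -xdev -inum "$inum" 2>/dev/null   # map each inode back to a path
  done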
This is a modified version of the OP, wrapped into a bash function.
This version handles newlines and other whitespace correctly; the original has problems with the thankfully rare case of newlines in file names.
It also allows checking an arbitrary number of directories against each other, which is nice when the directories that you think might have duplicates don't have a convenient common ancestor directory.
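The shape of such a function might be (a sketch, not the poster's exact code; assumes GNU findutils/coreutils, whose md5sum escapes awkward file names itself):

finddups() {
  find "$@" -type f -print0 |            # -print0/-0 survive spaces and newlines
    xargs -0 -r md5sum |
    sort |                               # group identical hashes together
    uniq -w32 --all-repeated=separate    # print only groups sharing an MD5
}
# usage: finddups ~/photos /mnt/backup/photos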
# find assumes email files start with a number 1-9
# sed joins the lines starting with " " to the previous line
# gawk prints the received and from lines
# sort sorts on the second field (received+from)
# uniq prints the duplicated filenames
# a message is considered a duplicate if it was received at the same time, and from the same person, as another message.
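A rough reconstruction of that pipeline (the Maildir path and output format are assumptions, and folded header lines are simply ignored here instead of being joined with sed):

find ~/Maildir -type f -name '[1-9]*' -print0 |
  xargs -0 -r gawk '
    FNR == 1 { received = from = "" }
    /^Received:/ && received == "" { received = $0 }
    /^From:/     && from == ""     { from = $0 }
    received != "" && from != "" { printf "%s\t%s|%s\n", FILENAME, received, from; nextfile }' |
  sort -t$'\t' -k2 |
  awk -F'\t' '$2 == prev { print $1 } { prev = $2 }'   # print the later copies only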
The command was intended to be run under cron. If run in a terminal, mutt can be used:
mutt -e "push otD~=xq" -f $folder
Removes all *.swp files underneath the current directory. Replace "*.swp" with your file pattern(s).
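For example (GNU/BSD find):

find . -type f -name '*.swp' -delete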
Will check whether the given module is installed in @INC. It prints the path and returns 0 if found, or 1 otherwise.
Based on script from SharpyWarpy in http://www.linuxquestions.org/questions/linux-general-1/how-to-list-all-installed-perl-modules-216603/
Just added maxdepth
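A hedged sketch of such a check, wrapped as a shell function (the function name and one-liner are assumptions, not the script from the link above):

module_path() {
  # -M fails, and perl exits non-zero, if the module cannot be loaded
  perl -M"$1" -e 'my $m = shift; $m =~ s{::}{/}g; print "$INC{qq($m.pm)}\n"' "$1" 2>/dev/null
}
# usage: module_path List::Util && echo installed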
This command is adapted from http://otomaton.wordpress.com/2012/12/26/find-broken-symbolic-links/
It doesn't work when the link is a loop; an error message is printed instead.
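The underlying technique is probably along these lines (GNU find assumed):

find -L . -type l   # with -L, only broken links still test as type l

On a symlink loop, find prints an error such as "Too many levels of symbolic links" for that path rather than listing it.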
A lot of files in one directory is not so cool for the filesystem.
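One hedged way to spread such files into buckets (the two-character layout is purely an assumption):

for f in *; do
  [ -f "$f" ] || continue
  d=${f:0:2}                            # bucket by the first two characters of the name
  mkdir -p -- "$d" && mv -- "$f" "$d/"
done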
Old SysV systems and Sun machines don't have the -H option. Adding /dev/null forces grep to use its multi-file output and report the file name.
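For example (pattern and file list are placeholders):

find . -name '*.conf' -exec grep pattern {} /dev/null \;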