commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…).
This pipeline will find, sort and display all files based on mtime. This could be done with find | xargs, but that pipeline will not produce correct results if the output of find is larger than xargs' command-line buffer. If the xargs buffer fills, xargs processes the find results in more than one batch, which is not compatible with sorting.
Note the "-print0" on find and the "-0" switch for perl. This is the equivalent of using xargs. Don't you love perl?
Note that this pipeline can easily be modified to sort on any field produced by perl's stat function, e.g. size, hard links, ctime, etc. Look at stat and just change the '9' to the field you want: a '7', for example, sorts by file size, and a '3' sorts by number of hard links.
Use head and tail at the end of the pipeline to get oldest files or most recent. Use awk or perl -wnla for further processing. Since there is a tab between the two fields, it is very easy to process.
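A minimal sketch of such a pipeline (not necessarily the exact command referenced here) that sorts by mtime, oldest first, with a tab between the date and the filename:
find . -type f -print0 | perl -0ne 'chomp; push @f, [ (stat)[9], $_ ]; END { printf "%s\t%s\n", scalar localtime($_->[0]), $_->[1] for sort { $a->[0] <=> $b->[0] } @f }'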
This duplicate finder saves time by comparing file size first and then md5sum; it doesn't delete anything, it just lists the duplicates.
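The general approach could be sketched roughly like this (GNU find and uniq assumed; not necessarily the exact command): collect sizes that occur more than once, md5sum only files of those sizes, then group identical hashes:
find . -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} find . -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate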
This command dumps all SVN repositories inside the folder "repMainPath" (not recursively) to the folder "dumpPath", where one dump file will be created for each SVN repository.
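A rough sketch of such a loop, assuming every directory directly under repMainPath is a repository:
for repo in repMainPath/*/; do svnadmin dump "$repo" > "dumpPath/$(basename "$repo").dump"; done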
Same thing, only with "head" instead of grep/egrep.
Useful for backing up old files, custom logs, etc. via a cronjob.
You must be in the directory to analyse.
Reports all files and links in the current directory, not recursively.
This find command has been tested on HP-UX, Linux, AIX and Solaris.
A little bit smaller and faster, and it should handle files with special characters in the name.
This can be much faster than downloading one or both trees to a common server and comparing the files there. Afterwards, only the files that differ need to be copied down for deeper comparison, if needed.
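As a rough sketch (host and paths are placeholders), checksums can be generated on each side and compared locally:
ssh user@remotehost 'cd /remote/tree && find . -type f -exec md5sum {} +' | sort -k2 > /tmp/remote.md5
( cd /local/tree && find . -type f -exec md5sum {} + ) | sort -k2 > /tmp/local.md5
diff /tmp/local.md5 /tmp/remote.md5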
Searches for the argument in PATH.
Accepts grep expressions.
Without arguments, lists all binaries found in PATH.
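A sketch of such a helper (the function name findpath is made up here):
findpath() { echo "$PATH" | tr ':' '\n' | while read -r dir; do ls "$dir" 2>/dev/null; done | sort -u | grep -E -i "${1:-.}"; }
For example, findpath '^gcc' lists every binary in PATH whose name starts with gcc.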
OS: Debian-based (or any that use dpkg).
Equivalent to doing a dpkg -S on each file in $PATH, but way faster.
May report files generated through postinstall scripts and such. For example, it will report /usr/bin/vim, which is not a file installed directly by dpkg but a link generated by the alternatives hooks.
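The slow equivalent described above would look roughly like this, running dpkg -S once per file found in $PATH (regular files and symlinks included):
echo "$PATH" | tr ':' '\n' | xargs -I{} find {} -maxdepth 1 \( -type f -o -type l \) 2>/dev/null | xargs -L1 dpkg -S 2>/dev/null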
This is really fast :)
time find . -name \*.c | xargs wc -l | tail -1 | awk '{print $1}'
204753
real 0m0.191s
user 0m0.068s
sys 0m0.116s
Lists all classes used in all *.html files in the current directory. Useful for checking whether you have left out any style definitions, or accidentally given a class a different name than you intended. (I have an ugly habit of accidentally substituting camelCase instead of using under_scores: I would name something counterBox instead of counter_box.)
WARNING: assumes you give class names in double quotes, and that you apply only one class per element.
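Under those same assumptions, the extraction could be sketched as:
grep -oh 'class="[^"]*"' *.html | sed 's/^class="//; s/"$//' | sort -u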
This is how you can find header and cpp files at the same time.
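For example, something along these lines:
find . -type f \( -name '*.h' -o -name '*.cpp' \)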
Have wc work on each file, then add up the total with awk; this gives a 43% speed increase on RHEL over using "-exec cat|wc -l" and a 67% increase on my Ubuntu laptop (this is with 10MB of data in 767 files).
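A sketch of that approach (with -print0/-0 added for safety with odd filenames; the awk filter skips wc's per-batch "total" lines):
find . -type f -name '*.c' -print0 | xargs -0 wc -l | awk '$2 != "total" { n += $1 } END { print n }'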
Based on the MrMerry one; just adds some visuals and sorts directories and files.
Make sure that find does not touch anything other than regular files, and that it handles non-standard characters in filenames when passing them to xargs.
Needs no GNU tools, as far as I can see.
This has helped me numerous times when trying to find log files or tmp files that get created after execution of a command. It is also really eye-opening as to how active a given process really is. Play around with -anewer, -cnewer and -newerXY.
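One way to sketch that (some_command is a placeholder for whatever you are observing):
touch /tmp/before; some_command; find /var/log /tmp -type f -newer /tmp/before 2>/dev/null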
Get disk usage of files (in this case log files in /var/log) modified during the last n days:
sudo find /var/log/ -mtime -n -type f | xargs du -ch
n -> last modified n*24 hours ago
Numeric arguments can be specified as
+n for greater than n,
-n for less than n,
n for exactly n.
=> so for anything modified less than 7*24 hours (about 7 days) ago, use -7:
sudo find /var/log/ -mtime -7 -type f | xargs du -ch | tail -n1
1. Find files greater than 10 MB.
2. Pipe them to xargs.
3. xargs passes them as arguments to ls.
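A sketch of such a pipeline (GNU-style -size +10M; -print0/-0 added for safety with odd filenames):
find . -type f -size +10M -print0 | xargs -0 ls -lh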