commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).
Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta, not in prime time just yet. It's being hosted over at UpGuard and you are more than welcome to give it a shot. A couple of things:
Use find with rsync.
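The command itself isn't preserved here, but a common pattern is to feed find's matches to rsync via --files-from (the paths and the -mtime filter below are placeholders):

    # Hypothetical paths: copy only files modified in the last day.
    find /src/dir -type f -mtime -1 -printf '%P\n' \
        | rsync -av --files-from=- /src/dir/ user@host:/dest/dir/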
Counts the total (recursive) number of files in the immediate (depth 1) subdirectories, as well as in the current one, and displays them sorted.
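One hedged way to produce that output (GNU find assumed):

    # Count files recursively under "." and each depth-1 subdirectory,
    # then sort the counts numerically.
    find . -maxdepth 1 -type d | while read -r dir; do
        printf '%d\t%s\n' "$(find "$dir" -type f | wc -l)" "$dir"
    done | sort -n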
Fixed, as per ashawley's comment.
Searches for all .project files in the current folder and below, and uses "svn info" to get the last changed revision. The final sed joins every two lines.
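A sketch matching that description (the exact fields the original kept aren't shown, so the grep pattern here is an assumption):

    find . -name .project -exec svn info {} \; \
        | grep -E '^(Path|Last Changed Rev)' \
        | sed 'N;s/\n/ /'   # N pulls in the next line, so pairs get joined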
All output is placed in the file SHA1SUMS, which you can later check with 'sha1sum --check'. Works on most Linux distros where 'sha1sum' is installed.
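A minimal sketch of generating such a file (SHA1SUMS itself is excluded so the later check doesn't trip over the half-written file):

    find . -type f ! -name SHA1SUMS -exec sha1sum {} + > SHA1SUMS
    sha1sum --check SHA1SUMS   # verify later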
I needed a way to search all files in a web directory that contained a certain string, and replace that string with another string. In the example, I am searching for "askapache" and replacing that string with "htaccess". I wanted this to happen as a cron job, and it was important that this happened as fast as possible while at the same time not hogging the CPU since the machine is a server.
So this script uses the nice command to run the sh shell with the command, which makes the whole thing run at niceness 19 (the lowest priority), meaning it won't hog the CPU. And the -P5 option to the xargs command means it will run 5 separate grep and sed processes simultaneously, so this is much faster than running a single grep or sed. You may want to use -P0, which is unlimited, if you aren't worried about spawning too many processes or don't have to deal with process killers in the background.
Also, the -m1 option to grep means it stops scanning a file after the first match, which also saves time.
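Putting those pieces together, a hedged reconstruction (the directory and the *.php filter are placeholders, and GNU sed's -i is assumed):

    # 5 workers in parallel, each running sh at niceness 19; the filename
    # lands in $0 inside sh -c, and a file is only rewritten if grep
    # finds the string in it first.
    find /var/www -type f -name '*.php' -print0 \
        | xargs -0 -P5 -n1 nice -n19 \
          sh -c 'grep -m1 -q "askapache" "$0" && sed -i "s/askapache/htaccess/g" "$0"'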
Change the *.avi to whatever pattern you want to match; you can remove it altogether if you want to check all files.
Shorter to type, with no need for xargs.
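The original command isn't shown, but the usual xargs-free idiom is find's "-exec ... +" terminator, which batches arguments much like xargs does:

    # Hypothetical example: search many files without piping through xargs.
    find . -name '*.log' -exec grep -l 'ERROR' {} +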
List all text files in the current directory.
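One hedged way to do this, letting file(1) classify each entry and keeping anything it calls text (filenames containing ':' would confuse the cut):

    file -- * | grep text | cut -d: -f1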
This pipeline will find, sort, and display all files based on mtime. This could be done with find | xargs, but that pipeline will not produce correct results if the output of find exceeds xargs' command-line buffer: if the buffer fills, xargs processes the find results in more than one batch, which is not compatible with sorting.
Note the "-print0" option on find and the "-0" switch for perl; together they handle awkward filenames safely, the equivalent of using xargs -0. Don't you love perl?
Note that this pipeline can easily be modified to sort on any field produced by perl's stat operator. E.g., you could sort on size, hard links, change time (ctime), etc. Look at stat and just change the '9' to what you want. Changing the '9' to a '7', for example, will sort by file size. A '3' sorts by number of links...
Use head and tail at the end of the pipeline to get the oldest or the most recent files. Use awk or perl -wnla for further processing. Since there is a tab between the two fields, it is very easy to process.
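A sketch of the pipeline as described (field 9 of perl's stat is mtime; chomp strips the NUL so the tab and newline can be printed explicitly):

    find . -type f -print0 \
        | perl -0 -ne 'chomp; print((stat($_))[9], "\t", $_, "\n")' \
        | sort -n

Appending "| tail -5", for example, shows the five most recently modified files.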
This dup finder saves time by comparing size first, then md5sum. It doesn't delete anything, just lists the duplicates.
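A hedged sketch of the size-first idea (GNU find and uniq assumed): only files whose size is duplicated get hashed, and identical hashes are printed in blank-line-separated groups.

    find . -not -empty -type f -printf '%s\n' | sort -rn | uniq -d \
        | xargs -I{} find . -type f -size {}c -print0 \
        | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate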
This command dumps all SVN repositories inside the folder "repMainPath" (not recursively) to the folder "dumpPath", creating one dump file per repository.
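A minimal sketch, assuming every top-level entry in repMainPath is a repository:

    # One dump file per repository, named after the repo directory.
    for repo in repMainPath/*/; do
        svnadmin dump "$repo" > "dumpPath/$(basename "$repo").dump"
    done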
Same thing, only with "head" instead of grep/egrep.
Useful for backing up old files, custom logs, etc. via a cronjob.
You must be in the directory you want to analyse.
Reports all files and links in the current directory, not recursively.
This find command has been tested on HP-UX/Linux/AIX/Solaris.
A little smaller and faster, and it should handle files with special characters in the name.
This can be much faster than downloading one or both trees to a common server and comparing the files there. Afterwards, only the differing files need to be copied down for deeper comparison.
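A hedged sketch with hypothetical hosts and paths: checksum each tree in place, then diff the sorted lists.

    ssh user@remote 'cd /remote/tree && find . -type f -exec md5sum {} +' | sort > remote.md5
    ( cd /local/tree && find . -type f -exec md5sum {} + ) | sort > local.md5
    diff local.md5 remote.md5   # lines that differ point at files that differ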