commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that get a minimum of 3 votes and for those that get a minimum of 10 - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
find . = will set up your recursive search. You can narrow your search to certain files by adding -name "*.ext", or limit it by using the same but adding -prune, like -name "*.ext" -prune.
xargs = sets it up like a command line for each file find finds and will invoke the next command, which is perl.
perl = invoke perl
-p sets up a while loop
-i edits in place, and the .bak suffix will create a backup file like filename.ext.bak
-e execute the following....
's/ / /;' your basic search-and-replace substitution.
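Putting those pieces together, a command along these lines would do the in-place edit (the search pattern, replacement and extension here are placeholders, not the original poster's exact values; the -print0/-0 pair is an extra safeguard for filenames with spaces and wasn't part of the description above):
find . -name "*.ext" -print0 | xargs -0 perl -p -i.bak -e 's/search/replace/;'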
Find files recursively from the current directory, and list the unique file extensions found.
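One possible way to do this (an illustrative sketch, not necessarily the exact one-liner being described):
find . -type f -name '*.*' | sed 's/.*\.//' | sort -u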
Assumes you've cd'd to the folder in which all your git repos reside; you could run it from ~ without -maxdepth, although that might make find take quite a while longer.
If you have several processor cores, but not that much RAM, you might want to run
git config --global pack.threads 1
first, since gc-ing can eat lots of RAM.
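A sketch of the kind of command being described, assuming each repository sits directly below the current directory:
find . -maxdepth 2 -type d -name .git -execdir git gc \;
Because -execdir runs the command from the directory containing each match, git gc is executed inside every repository found.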
Sometimes when copying files from one place to another, the timestamps get lost. Maybe you forgot to add a flag to preserve timestamps in your copy command. You're sure the files are exactly the same in both locations, but the timestamps of the files in the new home are wrong and you need them to match the source.
Using this command, you will get a shell script (/tmp/retime.sh) that you can move to the new location and just execute - it will change the timestamps on all the files and directories to their previous values. Make sure you're in the right directory when you launch it, otherwise all the touch commands will create new zero-length files with those names. Since find's output includes ".", it will also change the timestamp of the current directory.
Ideally rsync would be the way to handle this - since it only sends changes by default, relatively little network traffic would result. But rsync has to read the entire file contents on both sides to be sure no bytes have changed, potentially causing a huge amount of local disk I/O on each side. This could be a problem if your files are large. My approach avoids all the comparison I/O. I've seen comments that rsync with the "--size-only" and "--times" options should do this also, but it didn't seem to do what I wanted in my test. With my approach you can review/edit the output commands before running them, so you can tell exactly what will happen.
The "tee" command both displays the output on the screen for your review, AND saves it to the file /tmp/retime.sh.
Credit: got this idea from Stone's answer at http://serverfault.com/questions/344731/rsync-copying-over-timestamps-only?rq=1, and combined it into one line.
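As a rough sketch of the approach (the stat format string here is an assumption and may differ from the original command):
find . -exec stat --format 'touch -m -d "%y" "%n"' {} \; | tee /tmp/retime.sh
Filenames containing double quotes would break the generated script, which is another reason to review /tmp/retime.sh before running it.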
Counts the files present in the different directories recursively. One only has to change maxdepth to get further insight into the directory hierarchy.
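A sketch of how this might look (not necessarily the original one-liner):
find . -maxdepth 1 -type d | while read -r dir; do printf '%s\t' "$dir"; find "$dir" -type f | wc -l; done
Raising -maxdepth in the outer find descends further into the hierarchy.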
Found at unix.stackexchange.com:
To understand why this is the equivalent of "find -L /path/to/search -type l", see http://ynform.org/w/Pub/FindBrokenSymbolicLinks or look at http://www.gnu.org/software/findutils/manual/html_mono/find.html
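For comparison, a more portable way to list broken links, shown purely as an illustration and not necessarily the command under discussion:
find /path/to/search -type l ! -exec test -e {} \; -print
Here test -e follows each symlink and fails when the target is missing, so only dangling links are printed.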
From the cwd, recursively find all rar files, extracting each rar into the directory where it was found, rather than the cwd.
A nice time saver if you've used wget or similar to mirror something, where each subdirectory contains a rar archive.
It's likely this can be tuned to work with multi-part archives where all parts use ambiguous .rar extensions, but I didn't test this. Perhaps unrar would handle this gracefully anyway?
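A sketch of how such a command could look, assuming unrar is available (the exact invocation in the original may differ):
find . -name '*.rar' -execdir unrar x {} \;
-execdir makes unrar run inside the directory containing each archive, so the contents land next to the rar file rather than in the cwd.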
Sometimes my /var/cache/pacman/pkg directory gets quite big in size. If that happens I run this command to remove old package files. Packages that were upgraded in the last N days are kept, in case you are forced to downgrade a specific package. The command is obviously Arch Linux related.
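A sketch of the sort of cleanup meant here; the 30-day threshold and the file pattern are assumptions, not the original values:
find /var/cache/pacman/pkg -type f -name '*.pkg.tar.*' -mtime +30 -delete
Swapping -delete for -print first shows what would be removed.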