Check These Out
With a couple of little commands, you'll be able to ignore .DS_Store files forever in your git repositories on the Mac!
The following command registers ~/.gitignore as the global excludes file in your git configuration:
git config --global core.excludesfile ~/.gitignore
Then the following will add .DS_Store to that ignore list:
echo .DS_Store >> ~/.gitignore
This example command fetches the 'example.com' webpage and then fetches and saves all PDF files linked from that webpage.
[*Note: of course there are no PDFs on example.com. This is just an example.]
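The one-liner itself isn't reproduced here, but a minimal sketch with standard wget options would look like this (the URL is the same placeholder as above):

wget -r -l1 -nd -A.pdf http://example.com/

Here -r -l1 recurses exactly one level deep (the page plus everything it links to), -nd drops wget's directory structure so files land in the current directory, and -A.pdf keeps only the PDFs.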
The `-q` argument stops tail from printing a header with the name of each file.
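For example, to follow two log files at once without the ==> filename <== headers tail normally inserts between them (the log paths are just placeholders):

tail -q -f /var/log/syslog /var/log/auth.log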
Emits the device names that will be printed by iostat for an LVM volume. It doesn't show the names of the underlying devices when snapshots are in use (the -cow and -real devices in /dev/mapper).
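The command isn't shown above; a sketch of the usual approach, assuming LVM's lvdisplay is available and iostat is reporting the dm-N names, is:

lvdisplay | awk '/LV Name/ {n = $3} /Block device/ {d = $3; sub(".*:", "dm-", d); print d, n}'

lvdisplay prints a 'Block device 253:0'-style line for each logical volume; the awk script rewrites that minor number into the dm-0 form iostat uses and pairs it with the LV name.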
This one-liner combines all sequentially numbered files, in this example IMG_0001.png to IMG_1121.png, by generating a shell script, making it executable, and then running it to combine the 1121 PNGs into a single PNG file named _final.png.
Tested on Mac OS X 10.6.3 with ImageMagick 6.5.8-0 2009-11-22 Q16 (http://www.imagemagick.org).
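The one-liner itself isn't shown here; a minimal sketch that gets the same result with ImageMagick (assuming the images should be stacked vertically with -append; use +append for side-by-side) is:

convert IMG_*.png -append _final.png

Because the names are zero-padded, the shell glob already expands them in numeric order; generating and running a shell script, as the original does, is presumably a workaround for argument-list length limits with that many files.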
Retrieve the top IP threats from http://isc.sans.org/sources.html and add them to the iptables OUTPUT chain.
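The command isn't reproduced here; a sketch that assumes the page lists plain dotted-quad addresses (the grep pattern and the DROP target are assumptions) would be:

wget -qO- http://isc.sans.org/sources.html | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort -u | while read ip; do iptables -A OUTPUT -d "$ip" -j DROP; done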
If you have used bash for any scripting, you've used the date command a lot. It's perfect for creating filenames dynamically within aliases, functions, and commands like the ones below. This is actually an update to my first alias, since a few commenters (below) had good observations on what was wrong with my first command.
# creating a date-based ssh-key for askapache.github.com
$ ssh-keygen -f ~/.ssh/`date +git-$USER@$HOSTNAME-%m-%d-%g` -C 'webmaster@askapache.com'
$ # /home/gpl/.ssh/git-gplnet@askapache.github.com-04-22-10
# create a tar+gzip backup of the current directory
$ tar -czf "$(date +$HOME/.backups/%m-%d-%g-%R)-$(pwd | sed 's/\//#/g').tar.gz" .
$ # e.g. /home/gpl/.backups/04-22-10-13:37-#home#gpl#project.tar.gz
This pipeline will find, sort, and display all files based on mtime. This could be done with find | xargs, but the find | xargs pipeline will not produce correct results if the output of find is larger than xargs's command-line buffer: when the buffer fills, xargs processes the find results in more than one batch, which is not compatible with sorting.
Note the "-print0" on find and "-0" switch for perl. This is the equivalent of using xargs. Don't you love perl?
Note that this pipeline can easily be modified to sort on any data produced by perl's stat function; e.g., you could sort on size, hard links, creation time, etc. Look at the stat documentation and just change the '9' to the field you want. Changing the '9' to a '7', for example, will sort by file size; a '3' sorts by number of hard links.
Use head and tail at the end of the pipeline to get the oldest or the most recent files. Use awk or perl -wnla for further processing. Since there is a tab between the two fields, the output is very easy to process.
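The pipeline itself isn't shown above; a sketch matching the description (the starting directory and the localtime-tab-filename output format are assumptions) is:

find . -type f -print0 | perl -0 -wne 'chomp; push @f, [(stat)[9], $_]; END { printf "%s\t%s\n", scalar localtime($_->[0]), $_->[1] for sort { $a->[0] <=> $b->[0] } @f }'

Each NUL-terminated name from find becomes one perl record; field 9 of stat is the mtime, the records are sorted on it, and each file is printed as a human-readable timestamp, a tab, and the filename.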