
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes respectively - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.
create an incremental backup of a directory using hard links

rsync -a --delete --link-dest=../lastbackup $folder $dname/

2009-08-04 07:08:54
User: pamirian
Functions: rsync
Votes: 6

dname is a directory named something like 20090803 for Aug 3, 2009. lastbackup is a soft link to the last backup made - say, 20090802. $folder is the folder being backed up. Because this uses hard linking, files that already exist and haven't changed take up almost no space, yet each date directory holds a kind of "snapshot" of that day's files. Naturally, lastbackup needs to be updated after this operation. I must say that I can't take credit for this gem; I picked it up from somewhere on the net so long ago that I don't remember where from anymore. Ah, well...

Systems that are only somewhat slicker than this cost hundreds or even thousands of dollars - but we're HACKERS! We don't need no steenkin' commercial software... :)
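The "lastbackup needs to be updated" step suggests wrapping the one-liner in a small rotation script. A hedged sketch of one rotation step; the `backup_rotate` name and both paths are illustrative assumptions, not part of the original command:

```shell
#!/bin/sh
# One rotation step of the hard-link backup described above.
# backup_rotate SRC BACKUPS creates BACKUPS/YYYYMMDD and repoints
# the "lastbackup" symlink at it.
backup_rotate() {
    src=$1
    backups=$2
    dname="$backups/$(date +%Y%m%d)"
    mkdir -p "$dname"
    # Unchanged files are hard-linked against the previous snapshot;
    # changed or new files get a fresh copy in today's directory.
    # (On the very first run rsync merely warns that ../lastbackup
    # does not exist and makes a full copy.)
    rsync -a --delete --link-dest=../lastbackup "$src" "$dname/"
    # Point "lastbackup" at the snapshot we just made.
    ln -sfn "$(basename "$dname")" "$backups/lastbackup"
}
```

A relative --link-dest is resolved against the destination directory, which is why ../lastbackup works from inside the dated snapshot directory.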

Alternatives

There are 2 alternatives - vote for the best!


Know a better way?

If you can do better, submit your command here.

What others think

this command is completely useless as the hardlinked "backed-up" files will change if the original file changes. The only use is to recover accidentally deleted files - but because of the "--delete" option, these will be deleted in your backup too the next time you run this ...

DOWN DOWN DOWN

Comment by sneaker 245 weeks and 1 day ago

No, removing the --delete part doesn't remove the hardlinked files, it just maintains the integrity of the backup that the snapshot reflects. The originals are still maintained in the previous backup.

sneaker's assertion - that changing a hardlinked file modifies the file it links to - is true but misleading. You see, rsync creates a new version of the file in each "incremental" backup if any changes have occurred, so there's no worry that you'll retrieve a backup of a file with modifications that happened at a later date.

The failed assumption is that you are always backing up to the same location (directory), which is false, friends. Read pamirian's explanation of the command more carefully. Each backup gets its own directory, which builds on the backup before it UNLESS there is a change to any given file in the backup in which case a new copy is created rather than hardlinked back. If a file has been deleted at the location of origin, that file is simply omitted from the incremental backup you're doing. It isn't "deleted" in all previous backups.

UP UP UP

This is the same technology used by Microsoft's Shadow Copy for their file shares, and also by NetApp SANs that have LUNs with snapshot volumes enabled.

It's a good thing, though, that sneaker pointed out his/her reservations, so that those common misconceptions could be revealed and dispelled. Otherwise the argument could never have been resolved.

Kudos to both pamirian and sneaker.

Comment by linuxrawkstar 245 weeks and 1 day ago

As long as you remember that --link-dest requires rotation and an empty target dir, you'll be fine. If you still don't get it, try using a system that hard-links and rotates for you, such as rsnapshot.

Comment by rkulla 201 weeks and 6 days ago

You'll find the concept beautifully explained, along with a script that puts those backups in a remote location, here:

http://blog.interlinked.org/tutorials/rsync_time_machine.html

Comment by joedhon 155 weeks and 5 days ago
