
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…).



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Create a symbolic link tree that shadows a directory structure

find /home/user/doc/ -type d -printf "mkdir -vp '/home/user/Dropbox%p'\n" -o -type f -printf "ln -vs '%p' '/home/user/Dropbox%p'\n" | sh
2009-03-29 09:25:12
User: jnash
Functions: find

Extremely useful for maintaining backups if you're using Dropbox. This mirrors the entire directory structure and places a symlink in each directory pointing to the original file. Instead of copying the data again into the ~/Dropbox folder, creating a symbolic-link tree is much more sensible in terms of space usage.

This has to be supplemented by another script that removes dead symlinks in the Dropbox folder which point to files that have been moved/removed.

find -L ./ -type l -delete

And then removing empty directories

find ./ -type d -exec rmdir {} \; 2>/dev/null

Actually, after some searching I found lndir, which creates symbolic link trees, but it wasn't in the Arch repos, so.. ;)

Alternatives

There are 3 alternatives - vote for the best!


Know a better way?

If you can do better, submit your command here.

What others think

Holy crap, that's complicated.

If the src and dest are on the same filesystem (and you're on a *nix box), you may as well use hard links. Your command is reduced to:

cp -al src dest

Rather than dangling symlinks, which can also be found using the "symlinks" command, you will end up with files with a hard-link count of 1.

find src -type f -links 1 -delete

Finally you can tidy the empty directory command:

find . -depth -type d ! -name . -exec rmdir --ignore-fail-on-non-empty {} \;

This version will remove deep but empty trees, and it won't complain about non-empty dirs.
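For illustration, here is an end-to-end sketch of the link-count trick described above, assuming GNU cp (whose -l flag makes hard links instead of copies) and GNU find; the directory names are made up:

```shell
#!/bin/sh
set -e
mkdir -p src
echo one > src/a
echo two > src/b
cp -al src dest              # hard-link "copy": same inodes, no extra data
rm dest/b                    # drop a file from the mirror...
find src -type f -links 1    # ...and src/b now has a link count of 1
```

Because the two trees share inodes, removing a file on either side leaves the other side's copy with a link count of 1, which is exactly what `-links 1` detects.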

Comment by flatcap 283 weeks and 1 day ago

Er, .. well I'll take that as a compliment :)

But, in all truthfulness, I think it is simple: find just branches in two (if/else) for dirs and files, and either makes a directory or a symlink to the file.

Relatively, however, I guess yours wins in terms of complexity. But hard links might be a tad more unsafe IMO, considering I'm backing up stuff..

Comment by jnash 283 weeks and 1 day ago

If you're trying to back up data, this is very much the wrong way to do it.

Symlinks are NOT a backup. All they do is point to a file. If that file is changed, the symlink reflects that. Try this:

echo foo>file;ln -s file symlink;cat file;echo whoops>file;cat symlink

This creates a file, symlinks it, edits the file, and changes your "backup".

Hard links aren't safe either. Do the same thing, only you have to look harder to find out that it's a link.

echo foo>file;ln file hardlink;cat file;echo whoops>file;cat hardlink

The only way to back up the data is to actually copy it elsewhere.

If you want to back up and save space, then rsync is your friend. It has some parameters that will compare against previous backups, copy only the files changed since the last backup, and hard-link the rest. This way, you can delete any backup you want, and all others are unaffected. I can't tell you what they are, but Google will help out:

http://www.google.com/search?q=rsync+incremental+backup

You may also be interested in Flyback, http://flyback-project.org/ , which is a gui wrapper over rsync.

Good luck!

Comment by clockworkavian 283 weeks ago

OP mentioned Dropbox; I think that copies the stuff to the server.

I am interested in expanding this idea to re-point the symlinks to other locations if the information has moved. It won't be foolproof, but something that "finds" and re-points the link. The only ways I can think of doing this involve long scripts, so one-liners would be appreciated if anyone can think of one.
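One possible sketch of such a repair loop. This is hypothetical: it assumes moved files keep their basenames, that GNU find (-print -quit) and coreutils realpath are available, and the store/ and links/ paths are purely illustrative:

```shell
#!/bin/sh
set -e
mkdir -p store/old store/new links
echo data > store/old/file.txt
ln -s ../store/old/file.txt links/file.txt
mv store/old/file.txt store/new/       # the symlink is now dangling
SEARCH_ROOT=store
# With -L, a symlink that resolves is reported as its target's type,
# so only broken symlinks still test as type l.
find -L links -type l | while read -r link; do
    name=$(basename "$(readlink "$link")")
    target=$(find "$SEARCH_ROOT" -name "$name" -type f -print -quit)
    [ -n "$target" ] && ln -sf "$(realpath "$target")" "$link"
done
```

Not foolproof, as noted: if two moved files share a basename, the first match wins.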

Comment by unixmonkey3054 283 weeks ago

Your point of view

You must be signed in to comment.
