What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that reach at least 3 or at least 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

News

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Psst. Open beta.

Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta - not in prime time just yet. It's being hosted over at UpGuard and you are more than welcome to give it a shot. A couple of things:

  • The open beta is running a copy of the database that will not carry over to the final version. Don't post anything you don't mind losing.
  • If you wish to use your user account, you will probably need to reset your password.
Your feedback is appreciated via the form on the beta page. Thanks! -Jon & CLFU Team

defragment files

find ~ -maxdepth 20 -type f -size -16M -print > t; for ((i=$(wc -l < t); i>0; i--)) do a=$(sed -n ${i}p < t); mv "$a" /dev/shm/d; mv /dev/shm/d "$a"; echo $i; done; echo DONE; rm t
2010-07-07 04:29:22
User: LinuxMan
Functions: echo find mv rm sed wc
Votes: 2

Thanks to flatcap for optimizing this command.

This command takes advantage of the ext4 filesystem's resistance to fragmentation.

By using this command, files that were previously fragmented are copied, deleted, and rewritten, essentially giving the filesystem another chance to save each file contiguously. (Unlike FAT/NTFS, the *nix filesystems always try to save a file without fragmenting it.)
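
If you want to see whether the trick actually helped a particular file, filefrag (from e2fsprogs) reports the extent count before and after the round trip. The file name below is just an example, and older kernels may require root for filefrag:

filefrag ~/video.iso                                  # e.g. "video.iso: 9 extents found"
mv ~/video.iso /dev/shm/d; mv /dev/shm/d ~/video.iso  # the same bounce the command performs
filefrag ~/video.iso                                  # ideally now "video.iso: 1 extent found"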

My command only affects the home directory, and only those files you have read/write permission on.

There are two issues with this command:

1. It really won't help much. It works, but Linux doesn't suffer much (if any) fragmentation, and even fragmented files have fast I/O.

2. It doesn't discriminate between fragmented and non-fragmented files, so a large ~/ directory with no fragments will take almost as long as an equally sized fragmented ~/ directory.

The benefits I managed to work into the command:

1. It only defragments files under 16 MB, because a large file with fragments isn't as noticeable as a small fragmented file, and copying/deleting/rewriting large files would take too long.

2. It gives a nice countdown in the terminal so you know how much progress is being made, and just like other defragmenters you can stop at any time (use Ctrl+C).

3. It's fast! I can defrag my ~/ directory in 11 seconds thanks to the RAM drive powering the command's temporary storage (a quick check for this is just below).
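
To confirm that /dev/shm really is a RAM-backed tmpfs on your system, and see how much space it offers, two stock commands are enough (output details vary by distro):

findmnt /dev/shm    # FSTYPE should read "tmpfs"
df -h /dev/shm      # by default tmpfs gets up to half of RAM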

Bottom line:

1. It's only an experiment. It's safe (I've used it several times for testing), but probably not very effective (unless you somehow have a fragmentation problem on Linux). It might be a placebo for recent Windows converts who are looking for a defrag utility on Linux and won't take no for an answer.

2. It's my first commandlinefu command.

Alternatives

There are 7 alternatives - vote for the best!

What others think

I'm impressed, but why is the chmod there at all? Why not keep the file's original permissions?

Comment by kaedenn 320 weeks and 6 days ago

I'm intrigued to know what you're doing that's SO time dependent. ext4 is pretty fast, so unless you have millions of files, or you read these files millions of times, I doubt you'd ever notice the problem of fragmentation.

As for the script...

First get rid of sudo. You're performing this operation in your own home directory.

Next, if you mv a file onto /dev/shm (another filesystem) and you mv it back you will get a new file. By mv'ing the file, you don't need to worry about chmod (or rm).
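
(You can watch this happen with a throwaway test file - the name below is just an illustration, assuming GNU stat. The inode changes after the round trip, but the mode and mtime come back intact, which is why no chmod is needed:)

stat -c 'inode=%i mode=%a mtime=%y' testfile
mv testfile /dev/shm/d && mv /dev/shm/d testfile
stat -c 'inode=%i mode=%a mtime=%y' testfile   # new inode, same mode and mtime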

Big security note, though. If someone else has created a file in /dev/shm called 'd', your script will replace all your files with it.

Next, I've used wc (word count) which is faster than grep. I only use it once, too, to initialise a for loop.

Then, instead of sed reading/altering/writing the file list every loop, I get it to give me the n'th line.

Finally, I added "rm t" to clean up the temporary file. For speed, you could put the file in /dev/shm too :-)

Here's my updated version:

find ~ -maxdepth 20 -type f -size -16M -print > t; for ((i=$(wc -l < t); i>0; i--)) do a=$(sed -n ${i}p < t); mv "$a" /dev/shm/d; mv /dev/shm/d "$a"; echo $i; done; echo DONE; rm t

74 fewer chars.
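
(For anyone following along, here is the same one-liner unrolled with comments - the behaviour should be identical under bash:)

find ~ -maxdepth 20 -type f -size -16M -print > t   # list every small file under ~, save to ./t
for (( i=$(wc -l < t); i>0; i-- )); do              # count the lines once, then count down
  a=$(sed -n "${i}p" < t)                           # grab the i'th file name from the list
  mv "$a" /dev/shm/d                                 # move it onto the tmpfs (RAM)...
  mv /dev/shm/d "$a"                                 # ...and straight back, rewritten contiguously
  echo $i                                            # countdown so you can watch progress
done
echo DONE
rm t                                                 # clean up the temporary file list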

List of commands: find, wc, sed, mv, rm, echo

Comment by flatcap 320 weeks and 6 days ago

Thank you flatcap, you've managed to squeeze my command into an even smaller space and remove the junk from it (left over from my big 2 kB script; I was so concerned with reducing the size to under 255 characters that I failed to realize some code from the original script wasn't necessary anymore). After testing your script, I would like to know if I can replace my command with yours.

@kaedenn the chmod was left over from my original script, which used "dd" and attempted to work on files outside of the home directory.

Comment by LinuxMan 320 weeks and 6 days ago

No problem, I enjoyed the challenge :-)

I didn't do much testing, but I'm /fairly/ certain it'll do the same as your script. Be careful before using it on your home directory.

Comment by flatcap 320 weeks and 6 days ago

You could use mktemp to bypass the potential /dev/shm/d file issue. The temp file will be created when the var is set.

d=$(mktemp --tmpdir=/dev/shm); find ~ -maxdepth 20 -type f -size -16M -print > t; for ((i=$(wc -l < t); i>0; i--)) do a=$(sed -n ${i}p < t); mv "$a" /dev/shm/$d; mv /dev/shm/$d "$a"; echo $i; done; echo DONE; rm t; rm $d
Comment by Vilemirth 146 weeks and 6 days ago

Sorry. Screwed that up. You don't need the path for $d, as it's already there.

d=$(mktemp --tmpdir=/dev/shm); find ~ -maxdepth 20 -type f -size -16M -print > t; for ((i=$(wc -l < t); i>0; i--)) do a=$(sed -n ${i}p < t); mv "$a" "$d"; mv "$d" "$a"; echo $i; done; echo DONE; rm t; rm $d
Comment by Vilemirth 146 weeks and 6 days ago
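
(Putting the thread's suggestions together - mktemp for a collision-safe bounce file, the file list kept in RAM as well, and rm -f so the final cleanup doesn't complain once the bounce file has been consumed by the last mv. A sketch, assuming bash and a tmpfs mounted at /dev/shm:)

d=$(mktemp --tmpdir=/dev/shm)                        # collision-safe bounce file
t=$(mktemp --tmpdir=/dev/shm)                        # file list lives in RAM too
find ~ -maxdepth 20 -type f -size -16M -print > "$t"
for (( i=$(wc -l < "$t"); i>0; i-- )); do
  a=$(sed -n "${i}p" "$t")
  mv "$a" "$d" && mv "$d" "$a"                       # bounce through the tmpfs
  echo $i
done
echo DONE
rm -f "$t" "$d"                                      # -f: "$d" is already gone after the last bounce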
