commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…).
Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta: not in prime time just yet. It's being hosted over at UpGuard (link) and you are more than welcome to give it a shot. A couple of things:
Mac OS X needs some cleanup. The first command works in Linux; Solaris just needs the "0000000"s removed.
Piping through 'pv' shows a simple progress/speed bar for dd. This is a replacement for my otherwise favorite 'while :;do killall -USR1 dd;sleep 1;done'.
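For example (a minimal sketch; the device names are hypothetical, and the block size is just a reasonable choice):
dd if=/dev/sda bs=1M | pv | dd of=/dev/sdb bs=1M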
This line removes the 300k header from a Nero image file, converting it to ISO format.
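Something along these lines (a sketch; the filenames are hypothetical, and skipping 300 blocks of 1k drops the 300k header):
dd bs=1k if=image.nrg of=image.iso skip=300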
bs = block size (basically defines the size of a "unit" used by count and skip)
count = the number of blocks to copy (16m * 32 = 1/2 gig)
skip = (32 * 2) we are grabbing piece 3... which means 2 pieces have already been written, so skip (2 * count) blocks
I will edit this later if I can to make this all more understandable.
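Putting those numbers together, the full command would look something like this (a sketch; the filenames are hypothetical, and GNU dd spells the block size 16M rather than 16m):
dd if=bigfile.iso of=piece3 bs=16M skip=64 count=32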
This is a more accurate way to watch the progress of a dd process. The DDPID=$! is needed so that you don't get the PID of the sleep. The sleep 1 is needed because, in my testing at least, if you run kill -USR1 against dd too quickly, it will kill it off instead of displaying the status. So you need to wait a second, probably so that it can set up its trap for the USR1 signal.
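The command being described is presumably along these lines (a sketch; the if/of arguments are placeholders, and the 5-second interval is arbitrary):
dd if=/dev/zero of=bigfile bs=1M count=1024 & DDPID=$!; sleep 1; while kill -USR1 $DDPID 2>/dev/null; do sleep 5; done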
The following command will clone the USB stick at /dev/sdc to /dev/sdd.
Double-check that you've got the correct USB sticks (original vs. clone) with fdisk -l.
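The command in question isn't shown here; something like this would do it (a sketch; verify the device names first, since getting them backwards destroys the original, and the block size is just a sensible choice):
dd if=/dev/sdc of=/dev/sdd bs=4M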
A bit different from some of the other submissions. Has bold and uses all C printable characters. Change the bs= value to speed it up and to increase the sizes of the bold and non-bold strings.
I wanted to create a copy of my whole laptop disk on an lvm disk of the same size.
First I created the logical volume: lvcreate -L120G -nlaptop mylvms
SOURCE: dd if=/dev/sda bs=16065b | netcat ip-target 1234
TARGET: nc -l -p 1234 | dd of=/dev/mapper/mylvms-laptop bs=16065b
To follow its progress, issue the following command in a different terminal on the target:
STATS: watch -n60 -- kill -USR1 $(pgrep dd)
If you have some drive imaging to do, you can boot into any liveCD and use a commodity machine. The drives will be written in parallel.
To improve efficiency, specify a larger block size in dd:
dd if=/dev/sda bs=64k | tee >(dd of=/dev/sdb bs=64k) | dd of=/dev/sdc bs=64k
To image more drives, insert them as additional arguments to tee:
dd if=/dev/sda | tee >(dd of=/dev/sdb) >(dd of=/dev/sdc) >(dd of=/dev/sdd) | dd of=/dev/sde
This is similar to how you would generate a file of all zeros:
dd if=/dev/zero of=allzeros bs=1024 count=2k
For disk space constraint testing. Leaves a little space available for creating temp files, etc. Easily free up the used disk space again by deleting the dummy00 file. You can tailor the testing by building smaller 'blocks' to suit your needs.
WARNING: do not do this to the '/' (root) filesystem unless you know what you are doing... on some systems it could crash the OS.
If you leave out the block size it defaults to 512 bytes. I set it to 16 Megabytes and it was much faster...
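The command being discussed would be of this shape (a sketch; the mount point and count are illustrative, though dummy00 is the filename the comment refers to):
dd if=/dev/zero of=/mnt/testfs/dummy00 bs=16M count=100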
This is a useful command for when your OS is reporting less free RAM than it actually has. If terminated processes did not free their memory correctly, the previously allocated RAM might make the system a bit sluggish over time.
This command creates a huge file made out of zeroes and then removes it, thus freeing the amount of RAM the file occupied.
In this example, the sequence will free up to 1 GB (1M * 1K) of unused RAM. It will not free memory that is genuinely in use by active processes.
See: http://imgur.com/JgjK2.png for example.
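The sequence referred to is presumably something like this (a sketch; the filename is a placeholder, and 1M blocks times a count of 1K gives the 1 GB mentioned above):
dd if=/dev/zero of=junkfile bs=1M count=1K && rm -f junkfile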
Do some serious benchmarking from the command line. For each n (increasing by 1), this writes to a data file the time it took to compress n bytes.
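The loop that produces the data files isn't shown above; as a rough sketch (the compressor set, the byte range, and the use of /dev/urandom as input are all assumptions), in bash:
for n in $(seq 1 1000); do
  for prog in gzip bzip2 lzma; do
    # time compressing n random bytes; keep the "real" seconds
    t=$( { time -p dd if=/dev/urandom bs=1 count=$n 2>/dev/null | "$prog" > /dev/null; } 2>&1 | awk '/^real/{print $2}' )
    echo "$n $t" >> "$prog"   # append "bytes seconds" to the per-compressor file
  done
done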
To see it in graph form:
gnuplot -persist <(echo "plot 'lzma' with lines, 'gzip' with lines, 'bzip2' with lines")
Intentional hash at the beginning. May run a looong time. Wipes your data for real. Was meant to be /dev/urandom - I mistyped it. :-)