commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Pressing ctrl-t will display the progress
While a dd is running in one terminal, open another and enter the while loop. The sample output will be displayed in the window running the dd and the while loop will exit when the dd is complete. It's possible that a "sudo" will need to be inserted before "pkill", depending on your setup, for example:
while pgrep ^dd; do sudo pkill -INFO dd; sleep 10; done
This example is taken from Cygwin running on Win7Ent-64. Device names will vary by platform.
Both commands resulted in identical files per the output of md5sum, and ran in the same time down to the second (2m45s), less than 100ms apart. I timed the commands with 'time', which added before 'dd' or 'readom' gives execution times after the command completes. See 'man time' for more info...it can be found on any Unix or Linux newer than 1973. Yeah, that means everywhere.
readom is supposed to guarantee good reads, and does support flags for bypassing bad blocks where dd will either fail or hang.
readom's verbosity gave more interesting output than dd.
On Cygwin, my attempt with 'readom' from the first answer actually ended up reading my hard drive. Both attempts got to 5GB before I killed them, seeing as that is past any CD or standard DVD.
'bs=1M' says "read 1MB into RAM from the source, then write that 1MB to the output." I also tested 10MB, which shaved the time down to 2m42s.
'if=/dev/scd0' selects Cygwin's representation of the first CD-ROM drive.
'of=./filename.iso' simply means "create filename.iso in the current directory."
'-v' says "be a little noisy (verbose)." The man page implies more verbosity with more 'v's, e.g. -vvv.
dev='D:' in Cygwin explicitly specifies the D-drive. I tried other entries, like '/dev/scd0' and '2,0', but both read from my hard drive instead of the CD-ROM. I imagine my LUN-foo (2,0) was off for my system, but on Cygwin 'D:' sort of "cut to the chase" and did the job.
f='./filename.iso' specifies the output file.
speed=2 simply sets the speed at which the CD is read. I also tried 4, which ran in the exact same 2m45s.
retries=8 simply means try reading a block up to 8 times before giving up. This is useful for damaged media (scratches, glue lines, etc.), allowing you to automatically "get everything that can be copied" so you at least have most of the data.
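Assembled from the flags described above, the two commands compared look roughly like this. They are kept as strings here because actually running them needs an optical drive; the device names ('/dev/scd0', 'D:') come from the Cygwin session and will differ on other platforms.

```shell
# The dd and readom invocations discussed above, assembled from the
# flags just described. Device names are from the Cygwin example and
# will vary by platform.
dd_cmd="dd bs=1M if=/dev/scd0 of=./filename.iso"
readom_cmd="readom -v dev='D:' f='./filename.iso' speed=2 retries=8"
echo "$dd_cmd"
echo "$readom_cmd"
```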
Your platform may not have pv by default. If you are using Homebrew on OS X, simply 'brew install pv'.
The comp.unix.shell posting by Stéphane Chazelas also lists the following offsets:
type 32768 (1 byte)
id 32769 (5 bytes)
version 32774 (1 byte)
system_id 32776 (32 bytes)
volume_id 32808 (32 bytes)
volume_space_size 32848 (8 bytes)
escape_sequences 32856 (32 bytes)
volume_set_size 32888 (4 bytes)
volume_sequence_number 32892 (4 bytes)
logical_block_size 32896 (4 bytes)
path_table_size 32900 (8 bytes)
type_l_path_table 32908 (4 bytes)
opt_type_l_path_table 32912 (4 bytes)
type_m_path_table 32916 (4 bytes)
opt_type_m_path_table 32920 (4 bytes)
root_directory_record 32924 (34 bytes)
volume_set_id 32958 (128 bytes)
publisher_id 33086 (128 bytes)
preparer_id 33214 (128 bytes)
application_id 33342 (128 bytes)
copyright_file_id 33470 (37 bytes)
abstract_file_id 33507 (37 bytes)
bibliographic_file_id 33544 (37 bytes)
creation_date 33581 (17 bytes)
modification_date 33598 (17 bytes)
expiration_date 33615 (17 bytes)
effective_date 33632 (17 bytes)
file_structure_version 33649 (1 byte)
application_data 33651 (512 bytes)
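As a quick sketch of using those offsets, dd with bs=1 can pull any single field out of an image. A tiny stand-in file is fabricated below so the snippet is self-contained; point 'iso' at a real image in practice.

```shell
# Read one ISO 9660 field by offset with dd (bs=1, skip=<offset>,
# count=<length>). A minimal stand-in file is created here so the
# example runs without a real image; on an actual ISO the 'id' field
# at offset 32769 contains the magic string CD001.
iso=$(mktemp)
printf 'CD001' | dd of="$iso" bs=1 seek=32769 conv=notrunc 2>/dev/null
id=$(dd if="$iso" bs=1 skip=32769 count=5 2>/dev/null)
echo "id=$id"
rm -f "$iso"
```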
If you want to delete lines fast, all you need to do is vi/vim a text file, type the number of lines you want to delete (in my example I wanted to delete 10056 lines) followed by dd (no spaces). There will be no output, so be careful with the number you type.
Sends SIGINFO to the process. This is a BSD feature OS X inherited. You must have the terminal window executing dd selected when entering CTRL + T for this to work.
Sends the "USR1" signal every 1 second (-n 1) to a process called exactly "dd".
On some systems the signal can be INFO or SIGINFO; look at the signals list in 'man kill'.
An easy method to generate ISOs from CD/DVD media.
Run this in another terminal, where xxxx is the process ID of the running dd process.
The progress will be reported on the original terminal that you ran dd in.
This version was mentioned in the comments. Credits go to flatcap.
The previously-posted one-liner didn't work for me for whatever reason, so I ended up doing this instead.
Shows running time, ETA and a progress bar.
Create an image of "device" and send it to another machine over the network ("target" and "port" set the IP and port the stream will be sent to), outputting a progress bar.
On the machine that will receive, compress and store the file, use:
nc -l -p <port> | 7z a <filename> -si -m0=lzma2 -mx=9 -ms=on
Optionally, add the -v4g switch at the end of the line in order to split the file every 4 gigabytes (or set another size: accepted suffixes are k, m and g).
The file will be compressed using 7z format, lzma2 algorithm, with maximum compression level and solid file activated.
The compression stage will be executed on the machine that will store the image. It was planned this way because the processor on that machine was faster, and being on a gigabit network, transferring the uncompressed image wasn't much of a problem.
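A sender-side sketch of the pipeline described above; <target> and <port> are placeholders. Here a scratch file stands in for the device and cat stands in for "pv | nc <target> <port>" so the pipeline can run without a network peer.

```shell
# Sender side: read the device with dd and stream it over the network.
# A zero-filled scratch file replaces the real device in this sketch.
device=$(mktemp)
dd if=/dev/zero of="$device" bs=1024 count=64 2>/dev/null
dd if="$device" bs=1024 2>/dev/null | cat > image.raw  # real use: dd if=<device> bs=1M | pv | nc <target> <port>
sent=$(wc -c < image.raw)
echo "sent $sent bytes"
rm -f "$device" image.raw
```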
Only slightly different from the previous commands. The benefit is that your "watch" should die when the dd command has completed. (Of course, this depends on /proc being available.)
Assuming we have a disk image created by dd if=/dev/sda of=image.dd, we can check the image's partition layout with fdisk -ul image.dd, then substitute "x" with the starting sector of the partition we want to mount. This example assumes that the disk uses 512-byte sectors.
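A sketch of that mount step with "x" filled in, assuming a partition that fdisk reported as starting at sector 2048 (an illustrative value). The mount itself is commented out since it needs root and a real image.

```shell
# Compute the byte offset for a partition starting at sector "x"
# (2048 here as an example from fdisk -ul output) and mount it.
start_sector=2048
offset=$((start_sector * 512))   # 512-byte sectors, as assumed above
echo "offset=$offset"
# sudo mount -o loop,offset=$offset image.dd /mnt
```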
I'm both a one-liner fan and a Haskell learner.
The speed is about 500MB/s on my machine; I think that's fast enough as long as you don't output too many bytes, while a C program may output 1GB per second on my machine.
If the size is not a multiple of 512, you may change the bs and count in dd.
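For example, to emit exactly 1000 bytes (not a multiple of 512), pick a bs that divides the size and adjust count to match:

```shell
# 1000 bytes = bs=8 * count=125; any bs that divides the target size
# works (bs=1 always does, just slower).
n=$(dd if=/dev/zero bs=8 count=125 2>/dev/null | wc -c)
echo "$n"
```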
Step#2: Create a copy of the bootloader and partition table!
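A sketch of that step: on an MBR disk, the bootloader and the partition table both live in the first 512 bytes, so one sector is enough. A zero-filled scratch file stands in for /dev/sda here so the snippet is safe to run anywhere.

```shell
# Copy the first sector (bootloader + MBR partition table). The usual
# source is the raw disk, e.g. if=/dev/sda; a scratch file is used here
# so the example runs without touching a real disk.
disk=$(mktemp)
dd if=/dev/zero of="$disk" bs=512 count=4 2>/dev/null   # fake 2 KiB "disk"
dd if="$disk" of=mbr.backup bs=512 count=1 2>/dev/null  # first 512 bytes only
size=$(wc -c < mbr.backup)
echo "backup is $size bytes"
rm -f "$disk" mbr.backup
```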