commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta, not in prime-time just yet. It's being hosted over at UpGuard (link) and you are more than welcome to give it a shot. A couple of things:
Uses the pv utility to show progress of data transfer and an ETA until completion.
You can install pv via Homebrew on macOS
Uses the wonderful 'pv' command to give a progress bar when copying one partition to another. Great for long-running dd commands.
The scope must have Rigol UltraVision technology, otherwise it won't accept the command. ImageMagick is required. The scope sends a 1.1 MB BMP file; converted to PNG it's only 18-20 KB.
With this command you can take a snapshot of your entire hard disk and create an image, which is stored directly on a remote server through SSH. Here I am creating an image of /dev/hda and saving it on 126.96.36.199 as /root/server.img.
SIZE is the number of gigabytes and the file name comes at the end. The random data is generated by encrypting /dev/zero, similar to other techniques posted here.
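A minimal sketch of the technique, shrunk to 1 MB so it runs quickly (the output name, cipher, and throwaway key are my choices for the demo, not necessarily the original command's):

```shell
# Encrypting an endless stream of zeros with a random throwaway key
# produces a cryptographically random-looking stream, far faster than
# reading /dev/random directly.
SIZE=1                # block count in MB here; the original counted gigabytes
OUT=random.img
openssl enc -aes-256-ctr -nosalt -pass pass:"$(head -c 32 /dev/urandom | base64)" \
    < /dev/zero 2>/dev/null | dd of="$OUT" bs=1M count="$SIZE" iflag=fullblock 2>/dev/null
```

iflag=fullblock matters when reading from a pipe: without it, dd may count a short read as a full block and produce an undersized file.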
This will write a backup of files/folders to TAPE (LTO3/LTO4 in my case). It could be changed to write to DVD/Blu-ray.
Go to the directory where you want to write the output files: cd /bklogs
Set a name in bkname="Backup1" and the folders/files to back up in tobk="/home /var/www".
It will create a tar archive and write it to the tape drive on /dev/nst0.
In the process, it will
1) generate a sha512 sum of the tar to $bkname.sha512; so you can validate that your data is intact
2) generate a filelist of the content of the tar with filesize to $bkname.lst
3) buffer the tar stream to prevent shoe-shining the tape (I use 4GB for LTO3 (80MB/s) and 8GB for LTO4 (120MB/s); 3TB USB3 disks sustain those speeds, otherwise I use 3x2TB raidz).
4) show buffer in/out speed and used space in the buffer
5) show progress bar with time approximation using pv
To eject the tape :
; sleep 75; mt-st -f /dev/nst0 rewoffl
1) When using old tapes, if the buffer is full and the drive slows down, it means the tape is worn and should be replaced rather than wiped and recycled for another backup. Logging where and when it slows down could provide good information on the wear of the tape. I don't know how to get that information from the mbuffer output and trigger a message like: "This tape slowed down X times at Y1GB, Y2GB, Y3GB, down to ZMB/s, for a total of 30sec. It would be wise to replace this tape next time you want to write to it."
2) Fix filesize approximation
3) Save all the output to $bkname.log with progress update being new lines. (any one have an idea?)
4) Support spanning on multiple tapes.
5) Replace the tar format with something else (dar?); looking at xar right now (https://code.google.com/p/xar/): its XML metadata could contain per-file checksums, compression algorithm (bzip2, xz, gzip), GnuPG encryption, thumbnails, video previews, image EXIF... But that's another project.
1) You can specify the width of pv's progress bar. If it's wider than the terminal, each refresh is written to a new line; that way you can see whether the speed dropped during writing.
2) Remove the v in tar argument cvf to prevent listing all files added to the archive.
3) You can get tarsum (http://www.guyrutenberg.com/2009/04/29/tarsum-02-a-read-only-version-of-tarsum/) and add >(tarsum --checksum sha256 > $bkname_list.sha256) after the tee to generate checksums of the individual files!
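The same single-pass idea can be shown with plain sha512sum instead of tarsum (tarsum is the linked third-party tool; the demo paths and names here are made up):

```shell
# Stream the archive once: tee writes the tar to disk while the pipe
# carries the identical stream on to sha512sum, so the data is read
# from the source exactly one time.
mkdir -p demo && echo "hello" > demo/a.txt
tar cf - demo | tee archive.tar | sha512sum > archive.tar.sha512
```

The checksum in archive.tar.sha512 matches the file on disk, so a later sha512sum over archive.tar validates the backup.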
I sometimes use a USB stick to distribute files to several standalone "internet" PCs. I don't trust these machines, period. The sticks I have do not have write protection, so as an added security measure I fill the unused space on the (small) USB stick with a file of randomly generated bits. Any malware that tries to write to the stick will find no space on it.
Tested on Slackware 14.
Note: you may need root access to write to the device. This depends on your mount options.
This is to overcome the issue of slow I/O by reading once and forwarding the output to several processes (e.g. 3 in the given command). One could also invoke grep or other programs to work on the read data.
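A portable sketch of the fan-out, using named pipes instead of bash process substitution and a small file standing in for the slow device:

```shell
# Read the input once and feed three checksum programs in parallel.
mkfifo p1 p2
md5sum  < p1 > blob.md5  &     # consumer 1, reads from fifo p1
sha1sum < p2 > blob.sha1 &     # consumer 2, reads from fifo p2
printf 'payload\n' > blob.bin  # stand-in for the slow source device
# tee duplicates the single read into both fifos while the pipe
# carries the same stream to a third consumer.
tee p1 p2 < blob.bin | sha256sum > blob.sha256
wait                           # let the two background readers finish
rm -f p1 p2
```

With bash, the fifo bookkeeping collapses into process substitution: tee >(md5sum > blob.md5) >(sha1sum > blob.sha1) | sha256sum.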
This example is taken from Cygwin running on Win7Ent-64. Device names will vary by platform.
Both commands resulted in identical files per the output of md5sum, and ran in the same time down to the second (2m45s), less than 100ms apart. I timed the commands with 'time', which added before 'dd' or 'readom' gives execution times after the command completes. See 'man time' for more info...it can be found on any Unix or Linux newer than 1973. Yeah, that means everywhere.
readom is supposed to guarantee good reads, and does support flags for bypassing bad blocks where dd will either fail or hang.
readom's verbosity gave more interesting output than dd.
On Cygwin, my attempt with 'readom' from the first answer actually ended up reading my hard drive. Both attempts got to 5GB before I killed them, seeing as that is past any CD or standard DVD.
'bs=1M' says "read 1MB into RAM from the source, then write that 1MB to the output." I also tested 10MB, which shaved the time down to 2m42s.
'if=/dev/scd0' selects Cygwin's representation of the first CD-ROM drive.
'of=./filename.iso' simply means "create filename.iso in the current directory."
'-v' says "be a little noisy (verbose)." The man page implies more verbosity with more 'v's, e.g. -vvv.
dev='D:' in Cygwin explicitly specifies the D-drive. I tried other entries, like '/dev/scd0' and '2,0', but both read from my hard drive instead of the CD-ROM. I imagine my LUN-foo (2,0) was off for my system, but on Cygwin 'D:' sort of "cut to the chase" and did the job.
f='./filename.iso' specifies the output file.
speed=2 simply sets the speed at which the CD is read. I also tried 4, which ran the exact same 2m45s.
retries=8 simply means try reading a block up to 8 times before giving up. This is useful for damaged media (scratches, glue lines, etc.), allowing you to automatically "get everything that can be copied" so you at least have most of the data.
Removes all files/filesystems from a hard disk. It removes EVERYTHING on your hard disk. Be careful when selecting a device; it does not prompt for a second confirmation.
Your platform may not have pv by default. If you are using Homebrew on OSX, simply 'brew install pv'.
For DVD: dd if=/dev/cdrom of=cd.iso
The MBR is the first 512 bytes of the disk and contains the partition table.
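Grabbing those 512 bytes can be sketched like this (demonstrated on a file; on a real disk the input would be something like /dev/sda and reading it needs root):

```shell
# Stand-in "disk": 4 KiB of random data in a regular file.
head -c 4096 /dev/urandom > disk.img
# bs=512 count=1 copies exactly one 512-byte sector, i.e. the MBR.
dd if=disk.img of=mbr.bin bs=512 count=1 2>/dev/null
```

The reverse (writing mbr.bin back with of=disk.img) restores the boot sector and partition table.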
The comp.unix.shell posting by Stéphane Chazelas also lists the following offsets:
type 32768 (1 byte)
id 32769 (5 bytes)
version 32774 (1 byte)
system_id 32776 (32 bytes)
volume_id 32808 (32 bytes)
volume_space_size 32848 (8 bytes)
escape_sequences 32856 (32 bytes)
volume_set_size 32888 (4 bytes)
volume_sequence_number 32892 (4 bytes)
logical_block_size 32896 (4 bytes)
path_table_size 32900 (8 bytes)
type_l_path_table 32908 (4 bytes)
opt_type_l_path_table 32912 (4 bytes)
type_m_path_table 32916 (4 bytes)
opt_type_m_path_table 32920 (4 bytes)
root_directory_record 32924 (34 bytes)
volume_set_id 32958 (128 bytes)
publisher_id 33086 (128 bytes)
preparer_id 33214 (128 bytes)
application_id 33342 (128 bytes)
copyright_file_id 33470 (37 bytes)
abstract_file_id 33507 (37 bytes)
bibliographic_file_id 33544 (37 bytes)
creation_date 33581 (17 bytes)
modification_date 33598 (17 bytes)
expiration_date 33615 (17 bytes)
effective_date 33632 (17 bytes)
file_structure_version 33649 (1 byte)
application_data 33651 (512 bytes)
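Any of those fields can be pulled out with dd's skip/count. A sketch on a fabricated file (a real .iso would work the same way, minus the setup steps):

```shell
# Fabricate a file with a known string at the volume_id offset (32808).
head -c 40000 /dev/zero > fake.iso
printf 'MYVOLUME' | dd of=fake.iso bs=1 seek=32808 conv=notrunc 2>/dev/null
# Read the 32-byte volume_id field back out; tr strips the NUL padding.
dd if=fake.iso bs=1 skip=32808 count=32 2>/dev/null | tr -d '\0'
```

bs=1 makes skip and count byte-granular, which is what these byte offsets require.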
It will produce passwords 20 printable characters long within a reasonable time.
For shorter or longer passwords just change the 20 in bs=20 to something more convenient.
To create only alpha numeric passwords change [:print:] to [:alnum:]
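One way to sketch the idea (this variant takes the length with head -c rather than dd's bs/count, but the filtering is the same):

```shell
# Keep only printable characters from the random stream, then take the
# first 20 of them as the password.
LEN=20
pass=$(tr -dc '[:print:]' < /dev/urandom | head -c "$LEN")
printf '%s\n' "$pass"
```

Swapping '[:print:]' for '[:alnum:]' restricts the output to letters and digits, exactly as the note above describes.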
This is a useful command to back up an SD card: the card's total size is passed to pv so it can display a progress bar.
This is just a proof of concept: A FILE WHICH CAN AUTOMOUNT ITSELF through a SIMPLY ENCODED script. It takes advantage of the OFFSET option of mount, and uses it as a password (see that 9191? just change it to something similar, around 9k). It works fine, mounts, gets modified, updated, and can be moved by just copying it.
USAGE: SEE SAMPLE OUTPUT
The file is composed of three parts:
a) The legible script (about 242 bytes)
b) A random text fill to reach the OFFSET size (equals PASSWORD minus 242)
c) The actual filesystem
Logically, (a)+(b) = PASSWORD, that is, the OFFSET, and mount uses that option.
PLEASE NOTE: THIS IS NOT AN ENCRYPTED FILESYSTEM. To improve it, it can be mounted with a better encryption script and used with encfs or cryptfs. The idea was just to test the concept... with one line :)
It applies the original idea of http://www.commandlinefu.com/commands/view/7382/command-for-john-cons for encrypting the file.
The embedded bash script can be grown, of course, and the offset recalculation goes fine. I have my own version with bash --init-file to startup a bashrc with a well-defined environment, aliases, variables.
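The offset trick itself can be sketched without mount (which needs root and a loop device); the values and names below are invented for the demo:

```shell
# A 9191-byte header (script + random filler) followed by the payload.
# Knowing OFFSET lets you address the payload directly, which is exactly
# what mount's offset=... option does with the embedded filesystem.
OFFSET=9191
head -c "$OFFSET" /dev/zero > container     # parts (a)+(b): the header
printf 'FILESYSTEM' >> container            # part (c): the payload
payload=$(dd if=container bs=1 skip="$OFFSET" 2>/dev/null)
```

Anyone without the OFFSET value sees only an opaque blob, which is why the post treats the number like a password (though, as noted, not like encryption).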
Blocksize (bs) is not mandatory. It's only needed when the count option is specified.
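For example (file-based, with arbitrary sizes):

```shell
# Make an 8 MiB input file, then copy only its first 3 MiB:
# bs sets the block size and count says how many such blocks to copy.
head -c 8388608 /dev/urandom > big.img
dd if=big.img of=first3.img bs=1M count=3 2>/dev/null
```

Without count, dd copies until end of input and bs only affects throughput, not the amount of data.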