/sbin/dumpe2fs /dev/hda2 | grep 'Block size'

How to Find the Block Size

Useful to know, especially when you are dealing with tools that report or configure sizes in blocks. Tested on Red Hat.

0
By: rez0r
2009-05-15 22:23:21
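
If dumpe2fs isn't available, or the filesystem isn't ext2/3/4, two alternatives report the same figure. This is a minimal sketch, assuming GNU coreutils' stat and util-linux's blockdev; substitute your own device or mount point:

    # Block size of whatever filesystem holds the current directory (via statfs)
    stat -f -c 'Block size: %s' .

    # The kernel's block size for the device; normally matches the fs block size
    sudo /sbin/blockdev --getbsz /dev/hda2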

These Might Interest You

  • The result of the command helps check the maximum file size and maximum file system size. ext3 example:

        Block size   Max file size   Max file system size
        1 KiB        16 GiB          2 TiB
        2 KiB        256 GiB         8 TiB
        4 KiB        2 TiB           16 TiB
        8 KiB        2 TiB           32 TiB


    -1
    tune2fs -l /dev/XXXX | grep -w '^Block size:'
    ncaio · 2011-02-10 16:39:14 0
  • I find this terribly useful for grepping through a file when I want to see a whole block of text. There's "grep -A N pattern file.txt" to show a fixed number of lines after a match, but what if you want the entire block, however long it is? Take the output of dmidecode (as root): dmidecode | awk '/Battery/,/^$/' prints the battery block, i.e. everything from the line matching "Battery" up to the next blank line. I find this extremely useful when I want whole blocks of text selected by a pattern and don't care about the rest of the output. It could be used against the '/etc/securetty/user' file on Unix to find the block for a specific user, or against the VirtualHosts or Directories of an Apache configuration to find specific definitions (see the sketch after this entry). The scenario repeats for any text formatted in a block fashion. Very handy.


    85
    awk '/start_pattern/,/stop_pattern/' file.txt
    atoponce · 2009-03-28 14:28:59 7
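
As a concrete instance of the Apache case mentioned above, the same range pattern pulls out a whole VirtualHost definition. A sketch; the config path and port are placeholders:

    # Print an entire <VirtualHost> block from an Apache config
    awk '/<VirtualHost \*:80>/,/<\/VirtualHost>/' /etc/httpd/conf/httpd.conf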
  • The command prints the total size of all files smaller than 1024k. This information, together with disk usage, can help determine file system parameters (e.g. block size) or choose a storage device (e.g. SSD vs. HDD). Note that dc does arbitrary-precision arithmetic, so the sum stays exact however many files there are; replacing "cut | dc" with awk means summing in floating point, which can lose precision on very large totals. A GNU-find shortcut is sketched after this entry.


    1
    find dir -size -1024k -type f | xargs -d $'\n' -n1 ls -l | cut -d ' ' -f 5 | sed -e '2,$s/$/+/' -e '$ap' | dc
    zhangweiwu · 2009-12-28 04:23:01 1
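
With GNU find you can drop ls and cut entirely, since -printf '%s\n' emits one size per line; piping through paste and bc keeps the arbitrary-precision summing. A sketch assuming GNU find, paste and bc:

    # Total size of all files under 1024k, summed with arbitrary precision
    find dir -size -1024k -type f -printf '%s\n' | paste -sd+ | bc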
  • Shows all block devices in a tree, with descriptions of what they are. (A script-friendly variant is sketched after this entry.)


    2
    sudo lsblk -o name,type,fstype,label,partlabel,model,mountpoint,size
    bugmenot · 2018-04-25 00:16:39 0
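
The same columns come out in a script-friendly form if you drop the tree drawing: -n removes the heading and -r prints raw, space-separated fields (both standard lsblk switches):

    # Raw listing, no headings: one line per device with the chosen columns
    lsblk -rno name,type,fstype,size,mountpoint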
  • This command uses 'pv' to show dd's progress. Notes on use with dd:

    -- dd's block size (bs=...) is a widely debated switch; values between 1024 and 4096 are the usual suggestion. You won't see much performance improvement beyond 4096, and regardless of the block size, dd transfers every bit of data.
    -- pv's '-s' switch should be given a value as close to the size of the data source as possible.
    -- dd's output file (of=...) can be named anything; the data inside are the same regardless of the filename or extension.

    With a recent GNU coreutils, dd can report progress by itself; see the sketch after this entry.


    19
    sudo dd if=/dev/sdc bs=4096 | pv -s 2G | sudo dd bs=4096 of=~/USB_BLACK_BACKUP.IMG
    BruceLEET · 2010-07-28 22:39:46 4
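
With GNU coreutils 8.24 or later, dd can print its own transfer statistics, making pv optional. A sketch reusing the device and file names from the entry above:

    # Same backup, letting dd report progress itself
    sudo dd if=/dev/sdc of=~/USB_BLACK_BACKUP.IMG bs=4096 status=progress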
  • It's common to want to split up large files, and the usual method is split(1). But if you have a 10GiB file, you'll need 10GiB of free space, and the OS has to read 10GiB and write 10GiB (usually on the same filesystem). This takes ages.

    This command instead uses a set of loop block devices to create fake chunks, without making any changes to the file, so the split is nearly instantaneous. The example creates a 1GiB file, then splits it into 16 x 64MiB chunks (/dev/loop0 .. loop15).

    Note: this isn't a drop-in replacement for split. The results are block devices, and tar and zip won't do what you expect when given block devices. These commands will work:

        hexdump /dev/loop4
        gzip -9 < /dev/loop6 > part6.gz
        cat /dev/loop10 > /media/usb/part10.bin

    A verification and cleanup sketch follows this entry.


    5
    FILE=file_name; CHUNK=$((64*1024*1024)); SIZE=$(stat -c "%s" $FILE); for ((i=0; i < $SIZE; i+=$CHUNK)); do losetup --find --show --offset=$i --sizelimit=$CHUNK $FILE; done
    flatcap · 2014-10-03 13:18:19 2
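
To check that the chunks really cover the whole file, and to clean up afterwards, something like the following works. A sketch assuming the loop devices came out as /dev/loop0..loop15; losetup --find may pick other numbers if some are already in use:

    # Concatenating the chunks should hash identically to the original file
    sudo cat /dev/loop{0..15} | sha256sum
    sha256sum file_name

    # Detach the loop devices when finished
    for d in /dev/loop{0..15}; do sudo losetup -d "$d"; done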

What do you think?

Any thoughts on this command? Does it work on your machine? Can you do the same thing with only 14 characters?

