Check These Out
It's common to want to split up large files, and the usual method is to use split(1).
If you have a 10GiB file, you'll need 10GiB of free space.
Then the OS has to read 10GiB and write 10GiB (usually on the same filesystem).
This takes AGES.
.
The trick uses a set of loop block devices to present fake chunks of the file, without making any changes to the file itself.
This means the file splitting is nearly instantaneous.
The example creates a 1GiB file, then splits it into 16 x 64MiB chunks (/dev/loop0 .. loop15).
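.
A minimal sketch with util-linux losetup (the original command isn't shown here; the file name, chunk size, and loop-device names follow the example above):
$ fallocate -l 1G bigfile
$ for i in $(seq 0 15); do sudo losetup -o $((i*64*1024*1024)) --sizelimit $((64*1024*1024)) /dev/loop$i bigfile; done
Each losetup call maps a 64MiB window of bigfile onto a loop device; nothing is copied. Detach with losetup -d /dev/loopN when done.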
.
Note: This isn't a drop-in replacement for using split. The results are block devices.
tar and zip won't do what you expect when given block devices.
.
These commands will work:
$ hexdump /dev/loop4
.
$ gzip -9 < /dev/loop6 > part6.gz
.
$ cat /dev/loop10 > /media/usb/part10.bin
.
Imagine you've started a long-running process that involves piping data,
but you forgot to add the progress-bar option to a command.
e.g.
$ xz -dc bigdata.xz | complicated-processing-program > summary
.
This command uses lsof to see how much data xz has read from the file.
$ lsof -o0 -o -Fo FILENAME
Display offsets (-o), in decimal (-o0), in parseable form (-Fo)
This will output something like:
.
p12607
f3
o0t45187072
.
Process ID (p), file descriptor (f), offset (o)
.
We stat the file to get its size in bytes:
$ stat -c %s FILENAME
.
Then we plug the values into awk (the assembled command is shown below).
Split the line at the letter t: -Ft
Define an awk variable holding the file's size: -v s=$(stat ...)
Only work on the offset line: /^o/
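.
Putting it together (a sketch; the percentage formatting is an assumption, not part of the original):
$ lsof -o0 -o -Fo FILENAME | awk -Ft -v s=$(stat -c %s FILENAME) '/^o/ { printf("%.0f%%\n", 100 * $2 / s) }'
The offset field looks like o0t45187072, so splitting on the letter t leaves the byte offset in $2, and dividing by the size s gives the fraction read.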
.
Note: this command was tested with the Linux version of lsof.
Because it uses lsof's field-output option (-F), it may be portable to other systems.
.
Thanks to @unhammer for the brilliant idea.
.
You can also pipe a memory-sorted process listing to tail to show the ten processes using the most memory.
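A minimal sketch, assuming GNU ps (the original command isn't shown; --sort rss orders processes by resident set size, smallest first, so tail prints the ten largest):
$ ps aux --sort rss | tail -n 10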
.
You can list a binary's shared-library dependencies even on (embedded) systems that lack ldd.
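The original command isn't shown; one common approach (assuming glibc, whose ldd is itself a small wrapper that sets this variable) is to ask the dynamic linker directly:
$ LD_TRACE_LOADED_OBJECTS=1 /bin/ls
This prints the shared objects /bin/ls would load, then exits without running it.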
.
Quick and dirty forkbomb for all flavors of Windows (sketch below).
Do not use in production. Replace start with a command of your choice; as written it just opens new command prompts, and it is pretty tricky to stop once started.
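The original one-liner isn't shown; a minimal batch-file sketch of the idea, where each copy launches another copy of itself and then loops:
:: forkbomb.bat -- each new window runs this same file, so windows multiply
:s
start %0
goto s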