If you want to quickly create a very large file for testing purposes and do not care about its content, you can use this command to create a file of arbitrary size in under a second. The file's content will read back as all zero bytes. The trick is that the data is never actually written to disk: the file is created as a sparse file, so the file system only records its size and allocates disk blocks when they are first written, which makes creation nearly instantaneous. Instead of '1G' as in the example, you can use other size suffixes, e.g. 200K for kilobytes (1024 bytes), 500M for megabytes (1024 * 1024 bytes), 20G for gigabytes (1024^3 bytes), 30T for terabytes (1024^4 bytes), and also P for petabytes, etc. Command tested under Linux.
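The command itself is not reproduced here; a minimal sketch of the usual approach, assuming GNU coreutils' truncate is available and using a hypothetical file name bigfile:

# Create a 1 GiB sparse file almost instantly; no data blocks are written.
truncate -s 1G bigfile

# Reported size is 1.0G, but actual disk usage stays at (or near) zero
# until the blocks are written to.
ls -lh bigfile
du -h bigfile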
If you're trying to create a sparse file, you can also use dd by 'seek'ing to the last block of the output file. ls -ls shows the actual allocated size vs. the reported size.
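A sketch of the dd variant, assuming a hypothetical file name sparsefile and a 1 GiB target size:

# Write no data (count=0) but seek 1G into the output first;
# the skipped region becomes a hole, so no disk blocks are allocated.
dd if=/dev/zero of=sparsefile bs=1 count=0 seek=1G

# Compare allocated blocks (first column) with the reported size.
ls -ls sparsefile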