commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…).
Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta - not in prime-time just yet. It's being hosted over at UpGuard (link) and you are more than welcome to give it a shot. A couple of things:
Get the size of an S3 bucket
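One common way to get it (a sketch assuming the AWS CLI is installed and configured; the bucket name is a placeholder):

# recursively list the bucket and keep only the object-count and total-size summary
aws s3 ls s3://my-bucket --recursive --summarize --human-readable | tail -2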
CentOS: list directories sorted by size
A more efficient way, with the order reversed to put the focus on the big ones.
Same result as with 'du -ks .[^.]* * | sort -n' but with sizes output in human-readable format (e.g., 1K, 234M, 2G).
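A minimal sketch of that variant (assuming GNU sort, whose -h flag understands the human-readable suffixes):

# du -h prints sizes like 1K/234M/2G; sort -h orders them correctly
du -hs .[^.]* * | sort -h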
This command summarizes the disk usage across the files and folders in a given directory, including hidden files and folders beginning with ".", but excluding the directories "." and ".."
It produces a sorted list with the largest files and folders at the bottom of the list
Tested on MacOS and GNU/Linux.
It works in dirs containing files starting with '-'.
It runs 'du' only once.
It sorts according to size.
It treats 1K=1000 (and not 1024)
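A hypothetical reconstruction that meets the points above (the flags are assumptions, and --block-size is GNU-specific, so the macOS invocation would differ):

# '--' protects names starting with '-'; du runs only once; the ascending sort
# puts the largest entries at the bottom; --block-size=1000 makes 1K mean 1000
du -s --block-size=1000 -- .[^.]* * | sort -n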
Thanks to GREP_COLOR, the output will highlight the first 4 digits. If all files are only a few MB, this gives a quick overview of how many powers of 10 bigger than 1MB they really are - a logarithmic scale. The same works for files larger than 1GB if you replace the "4" with a "7"; I usually use "5" in order to decide manually which files to delete...
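A hypothetical illustration of the trick (the colour code and regex are assumptions; the '|$' alternation matches every line so nothing gets filtered out):

# with sizes in KB, four leading digits mean the entry is at least ~1MB
du -sk * | GREP_COLOR='1;31' grep --color=always -E '^[0-9]{4}|$'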
This will list all the files that are a gigabyte or larger in the current working directory. Change the G in the regex to an M and you'll find all files that are a megabyte or larger, up to but not including a gigabyte.
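One plausible form of such a command (the exact regex is an assumption):

# in ls -lh output the size column ends in a unit suffix; a trailing G marks
# entries of a gigabyte or more - change G to M for the megabyte range
ls -lh | grep -E ' [0-9.]+G '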
This command makes a small graph with a histogram of size blocks (5MB in this example), not individual files. Fine-tune the 4+5*int($1/5) block for your own size jumps: jump-1+jump*int($1/jump).
Also tune the hist=hist-5 part for bigger or smaller graphs.
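A hypothetical sketch of the idea, assuming sizes in MB from du (the bucket formula and bar drawing are simplified, not the original):

# group each entry into a 5MB bucket and print one '#' per entry in the bucket
du -sm * | awk '{ n[5 * int($1 / 5)]++ }
  END { for (s in n) { printf "%5d MB: ", s
                       for (i = 0; i < n[s]; i++) printf "#"
                       print "" } }' | sort -n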
This will write a backup of files/folders to TAPE (LTO3-4 in my case). It could be changed to write to DVD/Blu-ray.
Go to the directory where you want to write the output files: cd /bklogs
Enter a name in bkname="Backup1", and the folders/files to back up in tobk="/home /var/www".
It will create a tar and write it to the tape drive on /dev/nst0.
In the process, it will (a sketch of the full pipeline follows below):
1) generate a sha512 sum of the tar to $bkname.sha512, so you can validate that your data is intact
2) generate a file list of the tar's contents, with file sizes, to $bkname.lst
3) buffer the tar stream to prevent shoe-shining the tape (I use 4GB for LTO3 (80MB/s) and 8GB for LTO4 (120MB/s); 3TB USB3 disks support those speeds, otherwise I use 3x2TB raidz)
4) show buffer in/out speed and used space in the buffer
5) show a progress bar with a time estimate using pv
To eject the tape:
; sleep 75; mt-st -f /dev/nst0 rewoffl
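Putting the pieces together, a plausible reconstruction of the pipeline (the variable names come from the description above; the exact flags are assumptions, not the author's verbatim command; bash is required for the process substitutions):

bkname="Backup1"
tobk="/home /var/www"
# tar streams the sources; tee copies the stream to sha512sum (integrity check)
# and to a second tar that only lists the contents with sizes; pv draws the
# progress bar against the estimated total size; mbuffer smooths the stream
# out to the tape drive on /dev/nst0 ($tobk is left unquoted so it splits
# into the separate paths)
tar -cvf - $tobk \
  | tee >(sha512sum > "$bkname.sha512") >(tar -tvf - > "$bkname.lst") \
  | pv -s "$(du -csb $tobk | awk 'END {print $1}')" \
  | mbuffer -m 4G -o /dev/nst0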
1) When using old tapes, if the buffer is full and the drive slows down, it means the tape is old and should be replaced instead of being wiped and recycled for another backup. Logging where and when it slows down could provide good information on the wear of the tape. I don't know how to get that information from the mbuffer output and trigger something like: "This tape slowed down X times at Y1GB, Y2GB, Y3GB, down to Z MB/s for a total of 30 sec. It would be wise to replace this tape next time you want to write to it."
2) Fix the file-size approximation.
3) Save all the output to $bkname.log, with progress updates as new lines. (Anyone have an idea?)
4) Support spanning over multiple tapes.
5) Replace the tar format with something else (dar?); I'm looking at xar right now (https://code.google.com/p/xar/): the XML metadata could contain per-file checksums, the compression algorithm (bzip2, xz, gzip), GnuPG encryption, thumbnails, video previews, image EXIF data... But that's another project.
1) You can specify the width of pv's progress bar. If it's wider than the terminal, each refresh will be written to a new line. That way you can see whether there were slowdowns during writing.
2) Remove the v from the tar argument cvf to prevent listing all files added to the archive.
3) You can get tarsum (http://www.guyrutenberg.com/2009/04/29/tarsum-02-a-read-only-version-of-tarsum/) and add >(tarsum --checksum sha256 > ${bkname}_list.sha256) after the tee to generate checksums of the individual files!
This command sorts the contents of the current directory, including hidden files, with sizes in human-readable format.
Add date and time to the output within the current directory
Very quick! It is based only on the content sizes and the character counts of the filenames. If both numbers are equal, the two (or more) directories are most likely identical.
If in doubt, apply:
diff -rq path_to_dir1 path_to_dir2
AWK function taken from here:
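For reference, a minimal sketch of the heuristic itself (hypothetical - not the original AWK function - and it assumes GNU find and du):

# print total content size in bytes plus the character count of all filenames;
# matching number pairs suggest the directories are identical
for d in path_to_dir1 path_to_dir2; do
  printf '%s: %s bytes, %s filename chars\n' "$d" \
    "$(du -sb "$d" | cut -f1)" \
    "$(find "$d" -printf '%f' | wc -c)"
done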