commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that receive a minimum of 3 and of 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
It's common to want to split up large files and the usual method is to use split(1).
If you have a 10GiB file, you'll need 10GiB of free space.
Then the OS has to read 10GiB and write 10GiB (usually on the same filesystem).
This takes AGES.
The command uses a set of loop block devices to create fake chunks, but without making any changes to the file.
This means the file splitting is nearly instantaneous.
The example creates a 1GiB file, then splits it into 16 x 64MiB chunks (/dev/loop0 .. loop15).
Note: This isn't a drop-in replacement for using split. The results are block devices.
tar and zip won't do what you expect when given block devices.
These commands will work:
gzip -9 < /dev/loop6 > part6.gz
cat /dev/loop10 > /media/usb/part10.bin
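The loop devices here presumably map fixed-size windows of the file (as `losetup --offset`/`--sizelimit` can do). The same offset arithmetic can be sketched without root using dd's skip/count, scaled down to 4 KiB chunks on a demo file - note that unlike the loop-device trick, dd actually copies the chunk, but it shows how a chunk is addressed:

```shell
# demo file of 4 chunks, 4 KiB each (real use: 64 MiB chunks on a 1 GiB file)
CHUNK=4096
head -c $((CHUNK * 4)) /dev/urandom > demo.bin
# stream chunk 2 (0-based) without touching the rest of the file
dd if=demo.bin bs=$CHUNK skip=2 count=1 2>/dev/null > part2.bin
```

From here, `gzip -9 < part2.bin > part2.gz` works just like the loop-device example above.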
Copies file.org to file.copy1 ... file.copyn
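A hedged guess at the technique (the original command isn't shown here): tee plus brace expansion writes all the copies in a single pass over the source.

```shell
# make n copies of file.org in one read (n=5 shown; names are illustrative)
echo "some data" > file.org
tee file.copy{1..5} < file.org > /dev/null
```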
Gives the DNS listed IP for the host you're on... or replace `hostname` with any other host
I occasionally need to see if a machine is hitting its ulimit for threads, and which process is responsible. This gives me the per-process thread counts, sorted low to high so the worst offender is at the end, followed by the total number of threads for convenience.
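A hedged reconstruction (the original command isn't shown) using ps's nlwp column, the number of lightweight processes, i.e. threads:

```shell
# per-process thread counts, sorted so the worst offender ends up last
ps -eo nlwp,pid,comm --no-headers | sort -n
# grand total of threads on the machine
ps -eo nlwp --no-headers | awk '{s += $1} END {print "total threads:", s}'
```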
Some PDF viewers don't handle form fields correctly when printing: instead of treating them as transparent, they print them as black shapes.
This is based on __unixmonkey73469__'s answer. You will need to supply the `--multiline 1` option to the JSON importer if your .json is multiline (i.e. it was prettified).
You will also need Catmandu installed, via `cpanm Catmandu`.
This modifies the output of ls so that the file size has commas every three digits. It makes room for the commas by destructively eating any characters to the left of the size, which is probably okay since that's just the "group".
Note that I did not write this, I merely cleaned it up and shortened it with extended regular expressions. The original shell script, entitled "sl", came with this description:
: For tired eyes (sigh), do an ls -lF plus whatever other flags you give
: but expand the file size with commas every 3 digits. Really helps me
: distinguish megabytes from hundreds of kbytes...
: Corey Satten, email@example.com, 11/8/89
Of course, some may suggest that fancy new "human friendly" options, like "ls -Shrl", have made Corey's script obsolete. They are probably right. Yet, at times, I still find it handy. The new-fangled "human-readable" numbers can be annoying when I have to glance at the letter at the end to figure out what order of magnitude is even being talked about. (There's a big difference between 386M and 386P!) But with this nifty script, the number itself acts like a histogram, a quick visual indicator of "bigness" for tired eyes. :-)
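The core trick can be pulled out as a standalone filter (a sketch, not Corey's script itself): repeatedly insert a comma before each trailing group of three digits until no more groups match.

```shell
# insert commas every three digits in any numbers on stdin
comma() { sed -E -e ':a' -e 's/([0-9])([0-9]{3})($|[^0-9])/\1,\2\3/' -e 'ta'; }
echo 386000000 | comma    # -> 386,000,000
```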
This is the most straightforward approach: the first regexp limits the dictionary file to words of thirteen or more characters, and the second regexp discards any word that has a repeated letter. (Bonus challenge: try doing it in a single regexp!)
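The two-step approach looks something like this (shown here on a tiny inline word list instead of a real dictionary file):

```shell
# step 1: keep words of 13+ characters; step 2: drop words with any repeated letter
printf '%s\n' ambidextrously considerations short \
  | grep -E '^.{13,}$' \
  | grep -v '\(.\).*\1'
# -> ambidextrously
```

In real use you would feed it `/usr/share/dict/words` (or your system's equivalent) instead of the printf.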
System monitoring in one line:
DISK: disk space
MEM: memory (mem, swap, total)
CPU: all information about CPU activity
LOAD: load average
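A hedged reconstruction of what such a one-liner might chain together (the actual command isn't shown); wrap the whole thing in `watch -n 5 '...'` to turn it into a refreshing monitor:

```shell
echo '== DISK =='; df -h       # disk space
echo '== MEM ==';  free -m     # memory: mem, swap, totals
echo '== CPU ==';  vmstat 1 2  # CPU activity
echo '== LOAD =='; uptime      # load average
```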
Top 30 commands from your shell history, with a histogram display.
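A hedged reconstruction of the idea, demonstrated on a few sample history lines (in a live shell you would replace the printf with `history`): count command names, take the most frequent, and draw a bar of `#` per occurrence.

```shell
printf '%s\n' '  1  ls -l' '  2  git status' '  3  ls' '  4  git push' '  5  git pull' \
  | awk '{print $2}' | sort | uniq -c | sort -rn | head -30 \
  | awk '{printf "%3d %-8s ", $1, $2; for (i = 0; i < $1; i++) printf "#"; print ""}'
# first line: git, with a bar of three #
```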
Useful when you need to back up, copy, or sync a folder over ssh with a non-standard port number.
Captures 2000 packets and prints the top 10 talkers.
This does not require you to know the partition offset: kpartx finds all partitions in the image and creates loopback devices for them automatically. This works for all types of images (dd images of hard drives, img files, etc.), not just vmdk. You can also activate LVM volumes in the image by running
vgchange -a y
and then you can mount the LV inside the image.
To unmount the image: umount the partition/LV, deactivate the VG for the image with
vgchange -a n <volume_group>
kpartx -dv <image-flat.vmdk>
to remove the partition mappings.
When booting a VM through OpenStack and managed through cloud-init, the hosts file gets a line written to it similar to
127.0.1.1 ns0.novalocal ns0
This command proved useful while installing a configuration manager such as Salt Stack (or Puppet, or Ansible) and needing the node name.
I used this (along with a modified version replacing `mkv` with `srt`) to remove the slight differences in the provider of the video / matching subtitle (as they have the same contents and the subs match anyway).
So now VLC (and other video players) can easily guess the subtitle file.
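The original command isn't shown, but the renaming idea can be sketched like this (hypothetical filenames; the pattern strips a trailing "-PROVIDER" tag so video and subtitle share a base name):

```shell
# demo files with mismatched provider tags
touch 'Show.S01E01-GROUP.mkv' 'Show.S01E01-OTHER.srt'
# drop everything from the last "-" up to the extension, keeping the extension
for f in *-*.mkv *-*.srt; do
  mv -- "$f" "${f%-*}.${f##*.}"
done
```

This simple version assumes the only dash is the one before the provider tag; titles containing dashes would need a stricter pattern.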
A nice way to interrupt a sleep with a signal.
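A minimal sketch of the pattern (the original command isn't shown): sleep in the background, `wait` for it, and let a trapped signal cut the wait short. Here we signal ourselves after 0.2 s for demonstration.

```shell
trap 'echo interrupted' USR1
sleep 10 &
sleep_pid=$!
( sleep 0.2; kill -USR1 $$ ) &   # stand-in for an external "kill -USR1 <pid>"
wait "$sleep_pid"                # returns early when USR1 arrives
kill "$sleep_pid" 2>/dev/null    # clean up the still-running sleep
```

This works because bash's `wait` builtin returns immediately (with status > 128) when a trapped signal is received.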
This command compresses the "tmp" directory into an initrd file.
This command extracts an initrd file into the "tmp" directory.
This command can be added to crontab to run a nightly backup of directories, storing only the 10 most recent backup files.
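A hedged sketch of such a cron job (paths and archive names are hypothetical): make a timestamped tar archive, then delete all but the 10 newest.

```shell
mkdir -p data backups && echo x > data/file            # demo directory to back up
tar czf "backups/data-$(date +%Y%m%d-%H%M%S).tar.gz" data
# list newest first, skip the first 10, remove the rest (xargs -r: GNU, no-op if empty)
ls -1t backups/data-*.tar.gz | tail -n +11 | xargs -r rm --
```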
macOS (BSD `date`) has a direct conversion of seconds (epoch time), e.g. `date -r 1234567890`.