Commands by amiga500 (2)

What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

Check These Out

ASCII webcam live stream video using mplayer
Mplayer displays a live webcam capture as ASCII art; only mplayer is required.
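
The command itself isn't reproduced in this listing; a typical invocation (assuming a V4L2 webcam at /dev/video0 and an mplayer build with the libcaca output driver; swap -vo caca for -vo aa for plain monochrome ASCII) would be:

$ mplayer -vo caca tv:// -tv driver=v4l2:device=/dev/video0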

Delete newline
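The command isn't shown on this listing page; the usual idiom is tr (input.txt here is a placeholder file name):

$ tr -d '\n' < input.txt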

Remove the execute bit only from files, recursively
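The command isn't shown here; a portable sketch that touches only regular files would be:

$ find . -type f -exec chmod a-x {} +

A common chmod-only idiom is a-x followed by a+X, which strips execute everywhere and then re-adds it only to directories:

$ chmod -R a-x,a+X .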

for loop with leading zero in bash 3
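The command isn't reproduced here. bash 3 can't zero-pad brace expansions ({01..10} only pads in bash 4+), so the usual workaround is GNU seq -w, which pads to equal width:

$ for i in $(seq -w 1 10); do echo "$i"; done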

Create package dependency graph
Create a Debian package dependency graph using GraphViz
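
The exact command isn't reproduced here; a minimal sketch using apt-cache's GraphViz output (the package name bash is just an example, and dot requires the graphviz package) would be:

$ apt-cache dotty bash | dot -Tpng -o bash-deps.png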

Export log to HTML file
Logtool is a nice tool that can export log files to various formats, but its strength lies in its ability to colorize logs. This command takes a log as input, colorizes it, and exports it to an HTML file for more comfortable viewing. Logtool is part of the logtool package. Tested on Debian.
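
As a sketch (the -o HTML output flag is an assumption from memory; check logtool(1) for the exact syntax on your system, and the log path is just an example):

$ cat /var/log/syslog | logtool -o HTML > syslog.html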

Which processes are listening on a specific port (e.g. port 80)
Swap out "80" for your port of interest. You can use a port number or a named port, e.g. "http".
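
The command isn't shown in this listing; one widely available approach is lsof (run as root to see other users' processes):

$ lsof -i :80

or, with a named port:

$ lsof -i :http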

Fastest Sort. Sort Faster, Max Speed

sort is way slow by default. This tells sort to use a buffer equal to half of the available free memory, and to run multiple sort processes equal to the number of CPUs on your machine (if greater than 1). For me, it is orders of magnitude faster. If you put this in your bash_profile or startup file, the alias will be set correctly whenever bash starts.

$ sort -S1 --parallel=2 /dev/null && alias sortfast='sort -S$(($(sed '\''/MemF/!d;s/[^0-9]*//g'\'' /proc/meminfo)/2048)) $([ `nproc` -gt 1 ]&&echo -n --parallel=`nproc`)'

Alternative:

$ echo|sort -S10M --parallel=2 &>/dev/null && alias sortfast="command sort -S$(($(sed '/MemT/!d;s/[^0-9]*//g' /proc/meminfo)/1024-200)) --parallel=$(($(command grep -c ^proc /proc/cpuinfo)*2))"
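
Once either alias is defined, use sortfast exactly like sort (the file name is illustrative):

$ sortfast -u hugefile.txt > sorted.txt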

New Maintainer for CommandLineFu
Welcome to Jon H. (@fart), the new maintainer of CommandLineFu.

In the absence of a forum, I encourage people to welcome him here, in the comments.

Also... what would you like to improve/change about the site?

Split a large file, without wasting disk space
It's common to want to split up large files, and the usual method is split(1). If you have a 10GiB file, you'll need 10GiB of free space, and then the OS has to read 10GiB and write 10GiB (usually on the same filesystem). This takes ages.

This command instead uses a set of loop block devices to present fake chunks, without making any changes to the file, so the split is nearly instantaneous. The example creates a 1GiB file, then splits it into 16 x 64MiB chunks (/dev/loop0 .. /dev/loop15).

Note: this isn't a drop-in replacement for split. The results are block devices; tar and zip won't do what you expect when given block devices.

These commands will work:

$ hexdump /dev/loop4

$ gzip -9 < /dev/loop6 > part6.gz

$ cat /dev/loop10 > /media/usb/part10.bin
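
The loop-device setup itself isn't reproduced in this listing; a sketch of the technique using losetup's --offset and --sizelimit options (the file name, 64MiB chunk size and loop numbers follow the example above; 67108864 = 64*1024*1024) might look like:

$ dd if=/dev/zero of=large.bin bs=1M count=1024

$ for i in $(seq 0 15); do sudo losetup -o $((i*67108864)) --sizelimit 67108864 /dev/loop$i large.bin; done

Each /dev/loopN then exposes one 64MiB window into large.bin without copying any data.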


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that reach a minimum of 3 and 10 votes, so only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
