Commands by h3xx (20)

  • Real gurus don't need fancy tools like iftop or jnettop.

    tcpdump -w - |pv -bert >/dev/null
    h3xx · 2011-12-14 00:24:02 7
  • For when you need a quick spell check.

    aspell -a <<< '<WORDS>'
    h3xx · 2011-11-30 01:47:46 6
  • This causes cp to detect and omit large blocks of nulls. Sparse files are useful for implying a lot of disk space without actually having to write it all out. You can use it in a pipe too: dd if=/dev/zero bs=1M count=5 |cp --sparse=always /dev/stdin SPARSE_FILE

    cp --sparse=always <SRC> <DST>
    h3xx · 2011-09-07 08:02:50 46
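
    A quick way to see the effect (the file name sparse_demo is hypothetical): pipe 5 MiB of zeros through cp --sparse=always; the result reports a 5 MiB size but occupies almost no disk blocks.

    ```shell
    # Create a sparse file from a stream of zeros.
    workdir=$(mktemp -d)
    dd if=/dev/zero bs=1M count=5 2>/dev/null |
      cp --sparse=always /dev/stdin "$workdir/sparse_demo"

    stat -c 'apparent size: %s bytes' "$workdir/sparse_demo"
    du -k "$workdir/sparse_demo"   # actual blocks used: close to 0
    ```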
  • This is shorter and actually much faster than >/dev/null. Plus, it looks like a disappointed face emoticon.

    <COMMAND> |:
    h3xx · 2011-08-28 23:48:29 22
  • Ever ask yourself "How much data would be lost if I pressed the reset button?" Scary, isn't it?

    grep ^Dirty /proc/meminfo
    h3xx · 2011-08-24 08:48:49 13
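
    A hedged way to see the number move: sync forces the kernel to flush its write-back cache, so the Dirty figure should drop sharply afterwards (Linux-only, since it reads /proc).

    ```shell
    # Dirty = data sitting in RAM that hasn't hit the disk yet.
    grep ^Dirty /proc/meminfo    # e.g. "Dirty:     12345 kB"
    sync                         # force a flush of dirty pages
    grep ^Dirty /proc/meminfo    # the value should now be much smaller
    ```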
  • Tells you everything you could ever want to know about all files and subdirectories. Great for package creators. Totally secure, too. On my Slackware box, this gets set upon login: LS_OPTIONS='-F -b -T 0 --color=auto' and alias ls='/bin/ls $LS_OPTIONS', which works great.

    lsr() { find "${@:-.}" -print0 |sort -z |xargs -0 ls $LS_OPTIONS -dla; }
    h3xx · 2011-08-15 03:10:58 3

  • Shows how many more processes you can spawn before hitting your per-user limit (ulimit -u minus the processes already running).

    echo $(($(ulimit -u)-$(pgrep -u $USER|wc -l)))
    h3xx · 2011-07-30 05:03:36 3
  • These are way better than fortune(6).

    grep -2riP '\b(fuck|shit|bitch|tits|ass\b)' /usr/src/linux/
    h3xx · 2011-07-27 23:11:02 7
  • For instance: find . -type f -name '*.wav' -print0 |xargs -0 -P 3 -n 1 flac -V8 will encode all .wav files into FLAC in parallel. Explanation of xargs flags: -P [max-procs]: Max number of invocations to run at once. Set to 0 to run all at once [potentially dangerous re: excessive RAM usage]. -n [max-args]: Max number of arguments from the list to send to each invocation. -0: Stdin is a null-terminated list. I use xargs to build parallel-processing frameworks into my scripts like the one here:

    xargs -P 3 -n 1 <COMMAND> < <FILE_LIST>
    h3xx · 2011-07-25 22:53:32 34
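
    A sketch of such a framework (the worker name and file layout here are made up): export a shell function so each xargs slot can call back into it, one file per invocation.

    ```shell
    # Set up a throwaway work area with two files to process.
    workdir=$(mktemp -d)
    printf 'alpha\n' > "$workdir/a.log"
    printf 'beta\n'  > "$workdir/b.log"

    process() { gzip -9 "$1"; }        # hypothetical per-file worker
    export -f process

    # -P 4: up to four workers at once; -n 1: one file per invocation.
    # The single appended argument lands in $0 inside bash -c.
    find "$workdir" -type f -name '*.log' -print0 |
      xargs -0 -P 4 -n 1 bash -c 'process "$0"'

    ls "$workdir"
    ```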
  • Crash Override, man! Apparently the exec call tricks Bash into setting the output buffer size to 0, on the assumption that the system (or the calling shell) will handle the output buffering. Trapping ERR stops the subshell from dying and passing the failure up to the main script, which would terminate immediately when the program fails. The only problem is that once the process segfaults, the kernel dumps a bunch of stack-trace garbage directly to the console device, and there's no way [that I know of] to prevent that output.

    (trap 'true' ERR; exec <SEGFAULT_PRONE_PROGRAM>)
    h3xx · 2011-07-25 02:30:52 4
  • Converts a string to a hex dump; even adds a newline. Reverse it with xxd -r -p.

    xxd -p <<< <STRING>
    h3xx · 2011-07-24 19:16:32 3

  • Splits a whitespace-separated string into individual variables. The here-string needs quotes; otherwise everything after the first word is passed as extra arguments to read.

    read VAR1 VAR2 VAR3 <<< 'aa bb cc'; echo $VAR2
    h3xx · 2011-07-24 18:56:30 3
  • You can also save EXIF information by copying it to temp.jpg: jpegtran -optimize -outfile temp.jpg <JPEG> && jhead -te temp.jpg "$_" && mv temp.jpg "$_"

    jpegtran -optimize -outfile temp.jpg <JPEG> && mv temp.jpg "$_"
    h3xx · 2011-07-24 08:55:46 3
  • Or, to turn it down: aumix -v -5. Map these to key combinations in your window manager, and who needs special buttons?

    aumix -v +5
    h3xx · 2011-07-24 07:41:40 3
  • This forces X back to its highest configured resolution. To see the list of modes, type `xrandr'.

    xrandr -s 0
    h3xx · 2011-07-24 07:38:01 3
  • Works really well for playing DVDs, which have the volume turned way down for some reason. The `2' method is better IMHO because it will adjust to changing loud/soft parts. If you want to make it permanent, add this to your ~/.mplayer/config:

    # format: volnorm[=method:target]
    # method: 1 = use single sample (default); 2 = multiple samples
    # target: default is 0.25
    af-add=volnorm=2:0.75

    mplayer -af volnorm=2:0.75 dvd://
    h3xx · 2011-07-24 07:26:51 4
  • Make sure the file contents can't be retrieved if anyone gets ahold of your physical hard drive. With hard drive partition: gpg --default-recipient-self -o /path/to/encrypted_backup.gpg -e /dev/sdb1 && shred -z /dev/sdb1 WARNING/disclaimer: Be sure you... F&%k it--just don't try this.

    gpg -e --default-recipient-self <SENSITIVE_FILE> && shred -zu "$_"
    h3xx · 2011-07-24 05:51:47 3
  • Skip forward and back using the < and > keys. Display the file title with I.

    mplayer -playlist <(find "$PWD" -type f)
    h3xx · 2011-07-24 03:27:03 6
  • Checks whether a kernel option was compiled in (requires a kernel built with CONFIG_IKCONFIG_PROC). To browse the whole config: zless /proc/config.gz

    zgrep CONFIG_MAGIC_SYSRQ /proc/config.gz
    h3xx · 2011-07-24 02:06:09 4
  • This will affect all invocations of grep, even when it is called from inside a script.

    export GREP_OPTIONS='--color=auto'
    h3xx · 2011-07-24 01:32:10 3
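
    That global reach is exactly why newer GNU grep releases deprecate GREP_OPTIONS and warn when it is set. The usual replacement is an alias, which only affects interactive use and leaves scripts alone:

    ```shell
    # Only fires when you type grep at a prompt; scripts call plain grep.
    alias grep='grep --color=auto'
    ```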

What's this?

This is the place to record those command-line gems that you return to again and again. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


Check These Out

Instant mirror from your laptop + webcam

Put split files back together, without a for loop
After splitting a file, put the pieces back together a lot faster than $ cat file1 file2 file3 file4 file5 > mainfile or $ for i in {0..5}; do cat file$i >> mainfile; done. When splitting, be sure to use split -d to get numeric suffixes instead of letters.
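
A short demo with hypothetical file names: split a file into numbered 1 KiB pieces, then rejoin them with a single cat; the glob sorts piece_00, piece_01, ... back into order.

```shell
# Round-trip a file through split and a single cat.
workdir=$(mktemp -d)
seq 1 1000 > "$workdir/mainfile"
( cd "$workdir" &&
  split -d -b 1k mainfile piece_ &&   # piece_00, piece_01, ...
  cat piece_* > rejoined )            # glob order == numeric order
cmp "$workdir/mainfile" "$workdir/rejoined" && echo identical
```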

Write the output of a command to /var/log/user.log; each line will contain $USER, making it easy to grep for.
This command is useful if you want to copy the output of a series of commands to a file, for example if you want to pastebin the output from 'uname -a', 'lspci -vvv' and 'lsmod' for video-driver troubleshooting on your favorite Linux forum. 'log' takes all the following arguments as a command to execute, with STDOUT sent to /var/log/user.log. The command is echoed to the log before it is executed.

The advantages of using logger (as opposed to appending output from commands to a file) are: 1) commands are always appended to the logs, so you don't have to worry about clobbering your log file accidentally by using '>' rather than '>>'; 2) logs are automatically cleaned up by logrotate.

The following functions allow you to mark the start and end of a section of /var/log/user.log: $ startlog() { export LOGMARK=$(date +%Y.%m.%d_%H:%M:%S); echo "$LOGMARK.START" | logger -t $USER; } then $ endlog() { echo "$LOGMARK.END" | logger -t $USER; }

printlog will print all lines between $LOGMARK.START and $LOGMARK.END, removing everything that logger prepends to each line: $ printlog() { sudo sed -n -e "/$LOGMARK.START/,/$LOGMARK.END/p" /var/log/user.log | sed "s/.*$USER: //"; }

The following command should dump just about all the information that you could possibly want about your Linux configuration into the clipboard: $ startlog; for cmd in 'uname -a' 'cat /etc/issue' 'dmesg' 'lsusb' 'lspci' 'sudo lshw' 'lsmod'; do log $cmd; done; endlog; printlog | xsel --clipboard

This is ready for a trip to your favorite Linux forum, and you don't have to worry about leaving temporary files lying around cluttering up $HOME.

Caveats: I'm sure that startlog, endlog, and printlog could use some cleanup and error checking; there are unchecked dependencies between printlog and endlog, as well as between endlog and startlog. It might be useful for 'log' to send stderr to logger as well.
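
The 'log' helper itself isn't shown above, so here is a minimal sketch of what it likely looks like (an assumption, not the original): echo the command line to syslog, then run it with stdout piped to logger under your username's tag.

```shell
# Hypothetical reconstruction of the 'log' function described above.
log() {
    echo "COMMAND: $*" | logger -t "$USER"   # record what was run
    "$@" | logger -t "$USER"                 # record its stdout
}

log uname -a    # on a typical syslog setup, both lines land in /var/log/user.log
```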

Poor man's nmap for a class C network from rfc1918
What do you do when nmap is not available and you want to see which hosts respond to an ICMP echo request? This one-liner prints the IPv4 address of every host that answers.
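
A hedged sketch of the idea (the 192.168.1.0/24 prefix is an assumption; substitute your own RFC 1918 subnet): one echo request per host, one-second timeout, all probes backgrounded so the sweep finishes in about a second.

```shell
# Ping every host in a /24 in parallel; print the ones that answer.
net=192.168.1
for i in $(seq 1 254); do
    ping -c 1 -W 1 "$net.$i" >/dev/null 2>&1 && echo "$net.$i" &
done
wait
```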

get colorful side-by-side diffs of files in svn with vim
This will diff your local version of the file with the latest version in svn. I put this in a shell function like so: $svd() { vimdiff

GRUB2: set Super Mario as startup tune
I'll let Slayer handle that. Raining Blood for your pleasure.

Keep track of diff progress
You're running a program that reads LOTS of files and takes a long time. But it doesn't tell you about its progress. First, run a command in the background, e.g. $ find /usr/share/doc -type f -exec cat {} + > output_file.txt Then run the watch command. "watch -d" highlights the changes as they happen In bash: $! is the process id (pid) of the last command run in the background. You can change this to $(pidof my_command) to watch something in particular.
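
One way to see what the backgrounded job is up to (a sketch; sleep stands in for the long-running program): inspect its open file descriptors via /proc, which is Linux-specific.

```shell
# Start a stand-in job with a file open on stdin, then peek at its fds.
sleep 5 < /etc/passwd &
pid=$!
ls -l "/proc/$pid/fd"    # fd 0 is a symlink to /etc/passwd
kill "$pid" 2>/dev/null
```

For a live view, wrap the listing in watch: watch -d "ls -l /proc/$pid/fd" highlights whichever descriptor changes as the program moves from file to file.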

df output, sorted by Use% and correctly maintaining header row
Show disk space info, grepping out the uninteresting ones beginning with ^none while we're at it. The main point of this submission is the way it maintains the header row with the command grouping, by removing it from the pipeline before it gets fed into the sort command. (I'm surprised sort doesn't have an option to skip a header row, actually..) It took me a while to work out how to do this, I thought of it as I was drifting off to sleep last night!
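
The command grouping described above can be sketched like this (the exact original isn't shown; -P keeps each filesystem on one line, and column 5 is Use% in POSIX df output): read consumes the header inside the group, echoes it, and sort only ever sees data rows.

```shell
# Header stays on top; data rows are sorted by Use%, descending.
df -hP | grep -v ^none | { IFS= read -r header; echo "$header"; sort -rnk5; }
```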

Extract public key from private
This will extract the public key that is stored in the private key using openssl.
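
A demo with a throwaway key (the file names are hypothetical): generate an RSA private key, then extract the public half with -pubout.

```shell
# Generate a private key, then derive the matching public key.
workdir=$(mktemp -d)
openssl genrsa -out "$workdir/key.pem" 2048 2>/dev/null
openssl rsa -in "$workdir/key.pem" -pubout -out "$workdir/pub.pem" 2>/dev/null
head -n 1 "$workdir/pub.pem"    # -----BEGIN PUBLIC KEY-----
```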

Annotate tail -f with timestamps
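
One hedged way to do this: prefix each incoming line with a timestamp in a read loop. Shown here with a finite producer so it terminates; substitute tail -f <LOGFILE> for printf in real use.

```shell
# Timestamp each line as it arrives; printf/date run once per line.
printf 'first line\nsecond line\n' | while IFS= read -r line; do
    printf '%s %s\n' "$(date '+%F %T')" "$line"
done
```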
