Commands by mechmind (7)

  • This is useful mostly for custom scripts that live on a specific host, when you are tired of ssh'ing in every time you need one simple command (I use it to update a remote apt repository whenever a new package has to be pulled from another host). Set up key-based authentication for maximum comfort. A sketch of the generated wrapper follows after this entry.


    -3
    echo -e '#!/bin/bash\nssh remote-user@remote-host "$(basename "$0")" "$@"' > /usr/local/bin/ssh-rpc; chmod +x /usr/local/bin/ssh-rpc; ln -s ssh-rpc /usr/local/bin/hostname; hostname
    mechmind · 2011-12-28 17:43:34 5
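    With the command above (a sketch; remote-user, remote-host and the extra uptime symlink are only illustrative), /usr/local/bin/ssh-rpc ends up containing roughly:

    #!/bin/bash
    ssh remote-user@remote-host "$(basename "$0")" "$@"

    Each additional symlink then exposes another remote command locally:

    ln -s ssh-rpc /usr/local/bin/uptime   # hypothetical second command
    uptime                                # now runs uptime on remote-host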
  • There are plenty of commands that start your player at a specified time, but I prefer not to jump out of bed the moment the alarm starts playing. Instead, this one gradually raises mpd's volume, which is much more pleasant when you have just woken up :) A spelled-out version follows after this entry.


    7
    at 8:30 <<<'mpc volume 20; mpc play; for i in `seq 1 16`; do sleep 2; mpc volume +5; done'
    mechmind · 2011-11-30 17:51:27 1
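    The same alarm written out readably (a sketch assuming at and mpc are available; with 2-second sleeps the ramp only takes about half a minute, so lengthen the sleep for a gentler wake-up):

    # type `at 8:30`, enter these lines, then press Ctrl-D:
    mpc volume 20            # start quiet
    mpc play
    for i in $(seq 1 16); do
        sleep 2              # e.g. sleep 60 for a slower ramp
        mpc volume +5        # reaches roughly 100 after 16 steps
    done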
  • This one-liner uses make and its jobserver for parallel execution. The '-j' flag sets how many jobs run in parallel, and '-f -' tells make to read the makefile from stdin instead of a Makefile. make also has the neat '-l' flag, which "specifies that no new jobs (commands) should be started if there are other jobs running and the load average is at least load (a floating-point number)"; a load-limited invocation is sketched after this entry. For readability you can also use a plain Makefile (note that the convert line must start with a tab):

    targets = $(subst .png,.jpg,$(wildcard *.png))

    $(targets):
    	convert $(subst .jpg,.png,$@) $@

    all : $(targets)


    5
    echo -ne 'targets = $(subst .png,.jpg,$(wildcard *.png))\n$(targets):\n\tconvert $(subst .jpg,.png,$@) $@\nall : $(targets)' | make -j 4 -f - all
    mechmind · 2010-07-15 07:19:17 1
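    On a shared machine you can combine '-j' with the '-l' load cap mentioned above (a sketch; the numbers are illustrative):

    echo -ne 'targets = $(subst .png,.jpg,$(wildcard *.png))\n$(targets):\n\tconvert $(subst .jpg,.png,$@) $@\nall : $(targets)' | make -j 8 -l 4 -f - all   # at most 8 jobs, none started while load average >= 4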
  • USAGE: $ sudor your command. This relies on a dirty hack with history, so make sure you haven't turned history off. WARNING! This command behaves differently from normal commands: it is more like a text macro (the alias ends in '# ', so the shell treats the rest of the typed line as a comment, and the function then re-reads the full line from history and hands everything after 'sudor' to sudo sh -c). Because of that, you shouldn't use it in subshells, non-interactive sessions, other functions/aliases and so on. You also shouldn't pipe into sudor (anything on the line before sudor is discarded), but if you really want to, use this extended version; a usage sketch follows after this entry:

    proceed_sudo () { sudor_command="`HISTTIMEFORMAT=\"\" history 1 | sed -r -e 's/^.*?sudor//' -e 's/\"/\\\"/g'`" ; pre_sudor_command="`history 1 | cut -d ' ' -f 5- | sed -r -e 's/sudor.*$//' -e 's/\"/\\\"/g'`"; if [ -n "${pre_sudor_command/ */}" ] ; then eval "${pre_sudor_command%| *}" | sudo sh -c "$sudor_command"; else sudo sh -c "$sudor_command" ;fi ;}; alias sudor="proceed_sudo # "


    3
    proceed_sudo () { sudor_command="`HISTTIMEFORMAT=\"\" history 1 | sed -r -e 's/^.*?sudor//' -e 's/\"/\\\"/g'`" ; sudo sh -c "$sudor_command"; }; alias sudor="proceed_sudo # "
    mechmind · 2010-06-29 14:56:29 0
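    A hedged usage sketch (the sysctl path is only an illustrative target): since the whole line after sudor is re-run via sudo sh -c, redirections are performed by the root shell, which plain sudo cannot do:

    sudor echo 1 > /proc/sys/net/ipv4/ip_forward   # redirection happens as root
    # compare: `sudo echo 1 > /proc/sys/net/ipv4/ip_forward` fails with "Permission denied",
    # because that redirection is done by your unprivileged shell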
  • For this hack you need the following function and alias (inspired by CMake progress counters); the expected output of the example is shown after this entry:

    finit() { count=$#; current=1; for i in "$@" ; do echo $current $count; echo $i; current=$((current + 1)); done; }
    alias fnext='read cur total && echo -n "[$cur/$total] " && read'


    2
    finit "1 2 3" 3 2 1 | while fnext i ; do echo $i; done;
    mechmind · 2010-06-17 10:20:49 0
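    With the function and alias above, the example pipeline prefixes each item with a CMake-style progress counter; the output should look roughly like this:

    [1/4] 1 2 3
    [2/4] 3
    [3/4] 2
    [4/4] 1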
  • When you start screen as `ssh-agent screen`, the agent dies after you detach. If you don't want to bother with the files that store the agent's pid/socket/etc., use this command instead; a variant that also cleans the agent up is sketched after this entry.


    4
    eval `ssh-agent`; screen
    mechmind · 2010-03-07 14:58:54 0
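    A hedged sketch of the same idea with the key loaded up front; because the agent runs as a detached daemon rather than as screen's parent, it keeps serving the screen windows across detach/reattach (verify with `ssh-add -l` inside any window):

    eval `ssh-agent`; ssh-add; screen
    # later: `screen -r` reattaches and the agent is still usable;
    # kill it with `ssh-agent -k` (in a shell that has SSH_AGENT_PID set) when you are done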
  • With this form you don't need to filter the target directory out of the listing with grep/sed/etc., because subdir is created only after ls has already run. A whitespace-safe variant is sketched after this entry.


    4
    (ls; mkdir subdir; echo subdir) | xargs mv
    mechmind · 2009-11-08 11:40:55 8
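    The ls | xargs form breaks on filenames containing whitespace; a hedged whitespace-safe sketch using GNU extensions (-print0/-0 and mv -t):

    mkdir subdir && find . -mindepth 1 -maxdepth 1 ! -name subdir -print0 | xargs -0 mv -t subdir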
