Commands tagged Linux (266)

  • The large context number (-C 1000) is a bit of a hack, but in most of my use cases, it makes sure I'll see the whole log output.


    -1
    some_cronjobed_script.sh 2>&1 | tee -a output.log | grep -C 1000 ERROR
    DEinspanjer · 2009-03-06 17:51:13 7
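    A less hacky variant (a sketch, not part of the original entry): write the current run to a temporary file and print it in full only when ERROR appears, instead of relying on a huge -C value:
      out=$(mktemp); some_cronjobed_script.sh 2>&1 | tee -a output.log > "$out"; grep -q ERROR "$out" && cat "$out"; rm -f "$out"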
  • NOT MINE! Taken from the hackzine.com blog. It creates a tree-style listing of all the (sub)folders and (sub)files from the current folder on down. Quoting some of hackzine's words: "Murphy Mac sent us a link to a handy find/sed command that simulates the DOS tree command that you might be missing on your Mac or Linux box. [..split...] Like most things I've seen sed do, it does quite a bit in a single line of code and is completely impossible to read. Sure it's just a couple of substitutions, but like a jack in the box, it remains a surprise every time I run it."


    -1
    find . -print | sed -e 's;[^/]*/;|____;g;s;____|; |;g'
    JesusSuperstar · 2009-03-12 22:25:26 8
  • Here $HOME/shots must exist and have appropriate access rights, and sitecopy must be correctly set up to upload new screen shots to the remote site. Example .sitecopyrc (for illustration purposes only):

        site shots
          server ftp.example.com
          username user
          password antabakadesuka
          local /home/penpen/shots
          remote public_html/shots
          permissions ignore

    The command uses scrot to create a screen shot, moves it to the screen shot directory, uploads it using sitecopy, uses xsel to copy the URL to the paste buffer (so that you can paste it with a middle click) and finally uses feh to display a preview of the screen shot. Note that $BASE stands for the base URL for the screen shots on the remote server; replace it by the actual location, in the example http://www.example.com/~user/shots would be fitting. Assign this command to a key combination or an icon in whatever panel you use.


    -1
    scrot -e 'mv $f \$HOME/shots/; sitecopy -u shots; echo "\$BASE/$f" | xsel -i; feh `xsel -o`'
    penpen · 2009-03-26 12:08:39 4
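    For readers who prefer an explicit script over the one-liner, here is a sketch of the same flow; the shots directory, the sitecopy site name "shots" and the BASE URL are assumptions taken from the description above:
      #!/bin/sh
      # sketch: take a screen shot, upload it, copy its URL, preview it
      BASE="http://www.example.com/~user/shots"
      f=$(scrot "$HOME/shots/%Y-%m-%d_%H%M%S.png" -e 'echo $f')  # scrot prints the saved file's path
      sitecopy -u shots                                          # upload new files for the "shots" site
      echo "$BASE/$(basename "$f")" | xsel -i                    # copy the URL to the paste buffer
      feh "$f"                                                   # preview the screen shot locally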
  • Without the -dump option the header is displayed in lynx. You can also use w3m; the command is then: w3m -dump_head http://www.example.com/


    -1
    lynx -dump -head http://www.example.com/
    penpen · 2009-03-31 18:41:36 6
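    If neither lynx nor w3m is at hand, curl can fetch just the headers as well (assuming curl is installed; -I issues a HEAD request, so a few servers may answer differently than for GET):
      curl -I http://www.example.com/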
  • This is a bit hacky, but if you're setting up a bunch of new LUNs it can save a lot of time. Also check out sfdisk (a scripted sfdisk sketch follows below). fdisk will fail if, for example, a partition table already exists.


    -1
    echo -e "n\np\n1\n\n\nt\n8e\nw" | fdisk /dev/sdX
    sud0er · 2009-10-20 16:21:54 5
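    Since the description mentions sfdisk, here is a sketch of the same layout scripted with a recent util-linux sfdisk (one partition spanning the disk, MBR type 8e for Linux LVM; /dev/sdX remains a placeholder, and older sfdisk versions need the ',,8e' input syntax instead):
      echo 'type=8e' | sfdisk /dev/sdX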
  • Bash script to test whether a server is up; you can use this before wget'ing a file to make sure a blank one isn't downloaded.


    -1
    if [ "$(ping -q -c1 google.com)" ];then wget -mnd -q http://www.google.com/intl/en_ALL/images/logo.gif ;fi
    alf · 2010-03-23 04:15:03 10
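    A slightly more direct variant (a sketch using the same hosts as above): test ping's exit status instead of capturing its output:
      if ping -q -c1 google.com >/dev/null 2>&1; then wget -mnd -q http://www.google.com/intl/en_ALL/images/logo.gif; fi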
  • Tail all logs that are opened by all java processes. This is helpful when you are in a new environment and do not know where the logs are located. Instead of java you can put any process name. This command works only on Linux. To list all log files opened by java processes: sudo ls -l $(eval echo "/proc/{$(echo $(pgrep java)|sed 's/ /,/')}/fd/")|grep log|sed 's/[^/]* //g'


    -1
    sudo ls -l $(eval echo "/proc/{$(echo $(pgrep java)|sed 's/ /,/')}/fd/")|grep log|sed 's/[^/]* //g'|xargs -r tail -f
    vutcovici · 2010-07-30 18:20:00 3
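    If lsof is installed, a similar list can be built without poking /proc by hand (a sketch; the awk filter keeps paths containing "log", just like the grep above):
      sudo lsof -p "$(pgrep -d, java)" 2>/dev/null | awk '$NF ~ /log/ {print $NF}' | sort -u | xargs -r tail -f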

  • -1
    wget -qO - http://i18n.counter.li.org/ | grep 'users registered' | sed 's/.*\<font size=7\>//g' | tr '\>' ' ' | sed 's/<br.*//g' | tr ' ' '\0'
    hunterm · 2010-10-07 03:19:17 4
  • The command was too long for the command box, so here it is:

        echo $(( `wget -qO - http://i18n.counter.li.org/ | grep 'users registered' | sed 's/.*\<font size=7\>//g' | tr '\>' ' ' | sed 's/<br.*//g' | tr ' ' '\0'` + `curl --silent http://www.dudalibre.com/gnulinuxcounter?lang=en | grep users | head -2 | tail -1 | sed 's/.*<strong>//g' | sed 's/<\/strong>.*//g'` ))

    This took me about an hour to do. It uses both wget and curl because dudalibre.com blocks wget, while wget worked nicely for the other site.


    -1
    Check the Description below.
    hunterm · 2010-10-07 04:22:32 3
  • Same as 7272, but that one was too dangerous, so I added -P to prompt the user to continue or cancel. Note the double space: "...^ii␣␣linux-image-2...". Like 5813, but fixes two bugs: [1] it leaves the meta-packages 'linux-headers-generic' and 'linux-image-generic' alone so that automatic upgrades work correctly in the future; [2] kernels newer than the currently running one are left alone (this can happen if you didn't reboot after installing a new kernel).


    -1
    sudo aptitude remove -P $(dpkg -l|awk '/^ii linux-image-2/{print $2}'|sed 's/linux-image-//'|awk -v v=`uname -r` 'v>$0'|sed 's/-generic//'|awk '{printf("linux-headers-%s\nlinux-headers-%s-generic\nlinux-image-%s-generic\n",$0,$0,$0)}')
    Bonster · 2011-04-25 05:19:57 3
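    Before letting aptitude remove anything, it can be worth printing the package list the pipeline generates; this is just the inner part of the command above (note the double space after ^ii), with nothing destructive in it:
      dpkg -l|awk '/^ii  linux-image-2/{print $2}'|sed 's/linux-image-//'|awk -v v=`uname -r` 'v>$0'|sed 's/-generic//'|awk '{printf("linux-headers-%s\nlinux-headers-%s-generic\nlinux-image-%s-generic\n",$0,$0,$0)}'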
  • By default, slot 1 is used for OTP purposes and validates against the Yubikey servers. Running the command above will replace that. If you have a Yubikey 2 or above, you can change -1 to -2 and program the second slot with a strong static key.


    -1
    ykpersonalize -1 -ostatic-ticket -ostrong-pw1 -ostrong-pw2
    switchrodeo720 · 2011-05-13 04:00:52 4

  • -1
    sed 's/.$//' Win-file.txt
    totti · 2011-09-15 19:07:51 4
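    The command above chops the last character off every line, presumably to strip the trailing carriage return from DOS-format text; a sketch of a variant that only removes an actual CR (assuming GNU sed, which understands \r):
      sed 's/\r$//' Win-file.txt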
  • Requires mencoder.


    -1
    mencoder FILENAME.3gp -ovc lavc -lavcopts vcodec=msmpeg4v2 -oac mp3lame -lameopts vbr=3 -o FILENAME.avi
    o0110o · 2013-03-25 23:30:15 4
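    If mencoder is unavailable, a roughly equivalent conversion with ffmpeg might look like this (a sketch; the codec and quality choices are assumptions and require an ffmpeg build with libmp3lame):
      ffmpeg -i FILENAME.3gp -c:v msmpeg4v2 -c:a libmp3lame -q:a 3 FILENAME.avi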
  • Batch resize all images to a width of 'X' pixels while maintaining the aspect ratio. This makes use of ImageMagick to make life easier.


    -1
    mogrify -resize SIZE_IN_PIXELS *.jpg
    o0110o · 2013-07-05 14:14:04 6
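    For example, to shrink every JPEG in the current directory to 800 pixels wide while keeping the aspect ratio (800 is just an illustrative value); note that mogrify overwrites the originals, whereas ImageMagick's convert writes to a separate output file:
      mogrify -resize 800 *.jpg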
  • This example is taken from Cygwin running on Win7Ent-64. Device names will vary by platform. Both commands resulted in identical files per the output of md5sum, and ran in the same time down to the second (2m45s), less than 100ms apart. I timed the commands with 'time', which added before 'dd' or 'readom' gives execution times after the command completes. See 'man time' for more info... it can be found on any Unix or Linux newer than 1973. Yeah, that means everywhere.

    readom is supposed to guarantee good reads, and does support flags for bypassing bad blocks where dd will either fail or hang. readom's verbosity gave more interesting output than dd. On Cygwin, my attempt with 'readom' from the first answer actually ended up reading my hard drive. Both attempts got to 5GB before I killed them, seeing as that is past any CD or standard DVD.

    dd: 'bs=1M' says "read 1MB into RAM from source, then write that 1MB to output". I also tested 10MB, which shaved the time down to 2m42s. 'if=/dev/scd0' selects Cygwin's representation of the first CD-ROM drive. 'of=./filename.iso' simply means "create filename.iso in the current directory."

    readom: '-v' says "be a little noisy (verbose)." The man page implies more verbosity with more 'v's, e.g. -vvv. dev='D:' in Cygwin explicitly specifies the D-drive. I tried other entries, like '/dev/scd0' and '2,0', but both read from my hard drive instead of the CD-ROM. I imagine my LUN-foo (2,0) was off for my system, but on Cygwin 'D:' sort of "cut to the chase" and did the job. f='./filename.iso' specifies the output file. speed=2 simply sets the speed at which the CD is read. I also tried 4, which ran the exact same 2m45s. retries=8 simply means try reading a block up to 8 times before giving up. This is useful for damaged media (scratches, glue lines, etc.), allowing you to automatically "get everything that can be copied" so you at least have most of the data.


    -1
    dd bs=1M if=/dev/scd0 of=./filename.iso OR readom -v dev='D:' f='./filename.iso' speed=2 retries=8
    scotharkins · 2013-10-23 15:53:27 6

  • -1
    mtr www.google.com
    slu4aen · 2014-10-08 06:55:32 9
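    mtr combines ping and traceroute in one live display; for a one-shot, scriptable report instead of the interactive view, report mode can be used (a sketch; -c sets the number of probe cycles):
      mtr -r -c 10 www.google.com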
  • Needs to be run in a battery sysfs dir, e.g. /sys/class/power_supply/BAT0 on my system. Displays the battery's current charge and the per-second rate at which energy is {dis,}charging. All values are displayed as percentages of "full" charge.

    The first column is the current charge. The second is the rate of change averaged over the entire lifetime of the command (or since the AC cable was {un,}plugged), and the third column is the rate of change averaged over the last minute (controlled by the C=60 variable passed to awk).

    The sample output captures a scenario where I ran 'yes' in another terminal to max out a CPU. My battery was at 76% charge and you can see the energy drain starts to rise above 0.01% per second as the CPU starts working and the fan kicks in, etc. While idle it was more like 0.005% per second.

    I tried to use this to estimate the remaining battery life/time until fully charged, but found it to be pretty useless... As my battery gets more charged it starts to charge slower, which meant the estimate was always wrong. Not sure if that's common for batteries or not.


    -1
    while cat energy_now; do sleep 1; done |awk -v F=$(cat energy_full) -v C=60 'NR==1{P=B=$1;p=100/F} {d=$1-P; if(d!=0&&d*D<=0){D=d;n=1;A[0]=B=P}; if(n>0){r=g=($1-B)/n;if(n>C){r=($1-A[n%C])/C}}; A[n++%C]=P=$1; printf "%3d %+09.5f %+09.5f\n", p*$1, p*g, p*r}'
    sqweek · 2015-09-19 15:45:40 11
  • 'watch' repeatedly runs a command (by default every 2 seconds; -n 1 means every second), displays its output (here nothing, since ':' is a shorthand for 'true') and the date and time of the last run. I thought it to be obvious, but seemingly it is not: to exit, use Ctrl-C.


    -2
    watch -n 1 :
    penpen · 2009-03-25 23:00:28 6
  • An improved version of http://www.commandlinefu.com/commands/view/1772/simple-countdown-from-a-given-date that uses Perl to pretty-print the output. Note that the GNU-style '--no-title' option has been replaced by its one-letter counterpart '-t'.


    -2
    watch -tn1 'bc<<<"`date -d'\''friday 21:00'\'' +%s`-`date +%s`"|perl -ne'\''@p=gmtime($_);printf("%dd %02d:%02d:%02d\n",@p[7,2,1,0]);'\'
    penpen · 2009-03-29 19:53:36 8
  • Depending on the installation, only some of these man pages are available. 12 is left out on purpose because ISO/IEC 8859-12 does not exist. To also access the manpages that are not installed, use opera (or any other browser that supports all the character sets involved) to display the online versions hosted at kernel.org: for i in $(seq 1 11) 13 14 15 16; do opera http://www.kernel.org/doc/man-pages/online/pages/man7/iso_8859-$i.7.html; done


    -2
    for i in $(seq 1 11) 13 14 15 16; do man iso-8859-$i; done
    penpen · 2009-03-31 19:40:15 5
  • This is the alias command that I discussed in my prior release, which you can add to your ~/.bashrc. It asks for the station name and then connects to SomaFM. Great for those who have Linux home entertainment boxes with ssh enabled on them, and for the CLI fiends out there (I know I'm one of them ;). You can find future releases of this and many more scripts at the teachings of master denzuko - denzuko.co.cc.


    -2
    alias somafm='read -p "Which station? "; mplayer --reallyquiet -vo none -ao sdl http://somafm.com/startstream=${REPLY}.pls'
    denzuko · 2009-05-05 12:13:46 3

  • -2
    ifconfig | grep "inet addr" | cut -d: -f2 | cut -d' ' -f1
    trungtm · 2009-07-24 09:39:19 5
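    The output parsed above comes from the old net-tools ifconfig; on systems that only ship iproute2, a sketch of an equivalent IPv4 address list:
      ip -4 -o addr show | awk '{print $4}' | cut -d/ -f1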
  • Use this if you can't type repeated killall commands fast enough to kill rapidly spawning processes. If a process keeps spawning copies of itself too rapidly, it can do so faster than a single killall can catch them and kill them. Retyping the command at the prompt can be too slow too, even with command history retrieval. Chaining a few killalls on a single command line starts the next killall more quickly: the first killall gets most of the processes, except for some that were starting up in the meanwhile, the second gets most of the rest, and the third mops up.


    -2
    killall rapidly_spawning_process ; killall rapidly_spawning_process ; killall rapidly_spawning_process
    unixmonkey7434 · 2010-05-20 00:26:10 5
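    Another common trick for the same problem (a sketch, not part of the original entry): freeze the processes first so they cannot fork replacements while being killed:
      killall -STOP rapidly_spawning_process; killall -KILL rapidly_spawning_process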
  • Linux specific, requires iproute2 (but most distros have that by default now)


    -2
    ip addr show eth0 | grep ether | awk '{print $2}'
    ryanc · 2011-05-01 19:54:46 4
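    The same MAC address can also be read straight from sysfs, with no parsing at all (eth0 is the same placeholder interface name):
      cat /sys/class/net/eth0/address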

  • -2
    for FILE in `ls -1`; do if [ -L "$FILE" ]; then cp $(readlink "$FILE") ${FILE}_rf; rm -f $FILE; mv ${FILE}_rf "$FILE"; fi; done
    unixmonkey1907 · 2012-01-16 16:39:41 6
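    The loop above replaces every symlink in the current directory with a copy of its target. A sketch of a variant that avoids parsing ls output and copes with spaces in file names (assuming GNU cp for --remove-destination):
      for FILE in *; do [ -L "$FILE" ] && cp --remove-destination "$(readlink -f "$FILE")" "$FILE"; done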




