
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that get a minimum of 3 and of 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Commands using cut - 469 results
cut -f2 -d`echo -e '\x01'` file
alias restoremod='chgrp users -R .;chmod u=rwX,g=rX,o=rX -R .;chown $(pwd |cut -d / -f 3) -R .'
2010-12-28 11:42:43
User: Juluan
Functions: alias chmod chown cut pwd users
2

I often use this at work, on an OVH server with root SSH access, where I often need to fix permissions after finishing an operation.

This command replaces the user, group and mode with the ones Apache requires.
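The `X` in `u=rwX,g=rX,o=rX` grants execute permission only to directories and to files that are already executable, so plain files end up 644. A quick check on a throwaway file (GNU coreutils stat assumed):

```shell
# Demonstrate u=rwX,g=rX,o=rX on a plain file: X does not add execute
# to a non-executable regular file, so the mode becomes 644.
d=$(mktemp -d)
touch "$d/plain"
chmod u=rwX,g=rX,o=rX "$d/plain"
perms=$(stat -c %a "$d/plain")   # GNU stat: print the octal mode
echo "$perms"
```

Directories under the same tree would instead get 755, which is what Apache typically needs to traverse them.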

diff <(md5sum render_pack.zip| cut -d " " -f 1) <(md5sum /media/green/render_pack.zip| cut -d " " -f 1);echo $?
2010-12-27 18:29:00
User: schmiddim
Functions: cut diff echo md5sum
1

I had a problem where the MD5 sum of a file changed after copying it to my external disk.

This unwieldy command helped me to track down the problem.
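The same comparison works without diff by comparing the two digest strings directly; a sketch with two temporary files standing in for the original and the copy:

```shell
# Create two identical files and compare their MD5 sums directly.
src=$(mktemp); dst=$(mktemp)
echo "same contents" > "$src"
cp "$src" "$dst"
a=$(md5sum "$src" | cut -d' ' -f1)   # digest is the first space-separated field
b=$(md5sum "$dst" | cut -d' ' -f1)
[ "$a" = "$b" ] && result=identical || result=different
echo "$result"
```

A string comparison avoids spawning diff and works in any POSIX shell.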

ping -q -c 1 www.google.com|tail -1|cut -d/ -f5
buf () {oldname=$1; if [ "$oldname" != "" ]; then datepart=$(date +%Y-%m-%d); firstpart=`echo $oldname | cut -d "." -f 1`; newname=`echo $oldname | sed s/$firstpart/$firstpart.$datepart/`; cp -i ${oldname} ${newname}; fi }
2010-12-14 19:58:34
User: Seebi
Functions: cp cut date sed
-3

This backup function preserves the file suffix, allowing zsh suffix aliases and desktop default actions to work with the backup file too.
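A sketch of the same idea on a throwaway file, using shell parameter expansion instead of cut and sed (the file name is illustrative):

```shell
# Back up notes.txt as notes.YYYY-MM-DD.txt, keeping the .txt suffix.
cd "$(mktemp -d)"
echo hello > notes.txt
oldname=notes.txt
datepart=$(date +%Y-%m-%d)
firstpart=${oldname%%.*}          # "notes" - everything before the first dot
suffix=${oldname#"$firstpart"}    # ".txt"  - the rest of the name
newname="$firstpart.$datepart$suffix"
cp "$oldname" "$newname"
ls "$newname"
```

Parameter expansion avoids the two subshells the original spends on cut and sed.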

for arptable in `arp | grep "eth1" | cut -d " " -f1`; do arp -d $arptable; done
2010-12-14 13:47:47
User: jaimerosario
Functions: arp cut grep
1

Clears the "arp" table without entering addresses manually (tested on Ubuntu).

for file in $(find ~/ -iname "*.mp3");do c=$(mp3info $file|grep Genre|cut -f 3 -d :|cut -f 2 -d " ");if [ -z "$c" ];then c="Uncategorized";fi;if [ ! -e $c ];then touch $c.m3u;fi;echo "$file">>$c.m3u;done
sitepass2() {salt="this_salt";pass=`echo -n "$@"`;for i in {1..500};do pass=`echo -n $pass$salt|sha512sum`;done;echo $pass|gzip -|strings -n 1|tr -d "[:space:]"|tr -s '[:print:]' |tr '!-~' 'P-~!-O'|rev|cut -b 2-15;history -d $(($HISTCMD-1));}
2010-12-09 08:42:24
User: Soubsoub
Functions: cut gzip strings tr
Tags: Security
-4

This is a safer variation of the "sitepass" function that adds a salt and a long sha512sum hashing loop.
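The heart of the function is key stretching: feeding the previous digest plus a fixed salt back into the hash for many rounds. A minimal sketch of just that loop (coreutils sha512sum assumed; the input is illustrative, not the full function above):

```shell
# Iterate a salted hash; each round feeds the previous digest back in.
pass="example-password"   # illustrative input
salt="this_salt"
for i in $(seq 1 500); do
    pass=$(printf '%s%s' "$pass" "$salt" | sha512sum | cut -d' ' -f1)
done
final=$(printf '%s' "$pass" | cut -c1-14)   # truncate, as the original does
echo "$final"
```

The 500 rounds make brute-forcing each candidate password 500 times more expensive while the legitimate user pays the cost only once.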

function where(){ COUNT=0; while [ `where_arg $1~$COUNT | wc -w` == 0 ]; do let COUNT=COUNT+1; done; echo "$1 is ahead of "; where_arg $1~$COUNT; echo "by $COUNT commits";};function where_arg(){ git log $@ --decorate -1 | head -n1 | cut -d ' ' -f3- ;}
2010-12-08 15:41:39
User: noisy
Functions: cut echo head wc
Tags: git
0

usage:

where COMMIT

for instance:

where 1178c5950d321a8c5cd8294cd67535157e296554

where HEAD~5

ifconfig | grep -o "inet [^ ]*" | cut -d: -f2
2010-12-06 10:36:52
User: dooblem
Functions: cut grep ifconfig
Tags: ifconfig grep cut
-2

This is what we use.

You can grep -v 127.0.0.1 if you wish.
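The grep -o/cut pair depends on the `inet addr:` format printed by older Linux ifconfig; the extraction can be sketched on a literal line of that output:

```shell
# A sample line in the old Linux ifconfig format (address is illustrative).
line="inet addr:192.0.2.1  Bcast:192.0.2.255  Mask:255.255.255.0"
# "inet [^ ]*" matches "inet addr:192.0.2.1"; cut then splits on ":".
addr=$(echo "$line" | grep -o "inet [^ ]*" | cut -d: -f2)
echo "$addr"
```

Newer net-tools releases print `inet 192.0.2.1` without the `addr:` label, in which case the cut step yields nothing.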

ifconfig | grep "inet\ " | grep -v "127.0" | sed -e 's/inet\ addr://g' | sed -e 's/Bcast:/\ \ \ \ \ \ \ /g' | cut -c 1-29 | sed -e 's/\ //g'
inet_ip=`ifconfig wlan0 | grep inet | cut -d: -f2 | cut -d ' ' -f1` && echo $inet_ip
2010-11-28 23:06:38
User: fabri8bit
Functions: cut echo grep
2

can be used within a script to configure iptables, for example:

iface=$2

inet_ip=`ifconfig "$iface" | grep inet | cut -d: -f2 | cut -d ' ' -f1`

ipt="sudo /sbin/iptables"

...

$ipt -A INPUT -i $iface ! -f -p tcp -s $UL -d $inet_ip --sport 1023: --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT

$ipt -A OUTPUT -o $iface -p tcp -s $inet_ip -d $UL --sport 3306 --dport 1023: -m state --state ESTABLISHED,RELATED -j ACCEPT

for a in $(date +"%H%M"|cut -b1,2,3,4 --output-delimiter=" ");do case "$a" in 0)echo "....";;1)echo "...*";;2)echo "..*.";;3)echo "..**";;4)echo ".*..";;5)echo ".*.*";;6)echo ".**.";;7)echo ".***";;8)echo "*...";;9)echo "*..*";;esac;done
server=8.8.8.8; host="apple.com"; queries=128; for i in `seq $queries`; do let x+=`dig @${server} $host | grep "Query time" | cut -f 4 -d " "`; done && echo "scale=3;($x/${queries})" | bc
dpkg -l | grep ^rc | cut -d' ' -f3 | xargs dpkg -P
svn add `svn status | grep ? | cut -c9-80`
slow2() { ionice -c3 renice -n 20 $(pstree `pidof $1` -p -a -u -A|gawk 'BEGIN{FS=","}{print $2}'|cut -f1 -d " ") ; }
deadlib() { lsof | grep 'DEL.*lib' | cut -f 1 -d ' ' | sort -u; }
2010-11-17 12:53:30
User: Naib
Functions: cut grep sort
6

emerge, apt-get, yum... all update your system, and at some point an update will replace a shared library or a binary belonging to a process that is still running.

This one-liner lists the processes that need to be restarted.

ps ax -L -o pid,tid,psr,pcpu,args | sort -nr -k4| head -15 | cut -c 1-90
echo `lcg-infosites --vo lhcb ce | cut -f 1| grep [[:digit:]]| tr '\n' '+' |sed -e 's/\ //g' -e 's/+$//'`|bc -l
2010-11-10 15:06:00
User: kbat
Functions: bc cut echo grep sed tr
-2

Of course, this command must be executed on a GRID User Interface.

lhcb is the name of the VO; substitute the one you are interested in.

find / -type f -size +100M -exec du {} \; | sort -n | tail -10 | cut -f 2
find / -type f 2>/dev/null | xargs du 2>/dev/null | sort -n | tail -n 10 | cut -f 2 | xargs -n 1 du -h
2010-11-09 13:45:11
User: mxc
Functions: cut du find sort tail xargs
Tags: disk usage
1

Often you need to find the files that are taking up the most disk space in order to free up space as soon as possible. This command can be run on the entire filesystem as root, or on a home directory, to find the largest files.
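The pipeline can be tried safely on a throwaway directory with files of known size; du | sort -n | tail keeps the largest entries and cut -f 2 keeps only their paths:

```shell
# Build three files of known size and list the two largest by disk usage.
d=$(mktemp -d)
dd if=/dev/zero of="$d/small"  bs=1024 count=10  2>/dev/null
dd if=/dev/zero of="$d/medium" bs=1024 count=100 2>/dev/null
dd if=/dev/zero of="$d/big"    bs=1024 count=500 2>/dev/null
# du prints "size<TAB>path"; sort -n orders by size, cut keeps the path.
largest=$(find "$d" -type f | xargs du | sort -n | tail -n 2 | cut -f 2)
echo "$largest"
```

The trailing `xargs -n 1 du -h` in the original then re-prints those paths with human-readable sizes.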

let NOW=`date +%s`/86400 ; PASS_LAST_CHANGE=`grep $USER /etc/shadow | cut -d: -f3` ; PASS_LIFE=`grep $USER /etc/shadow | cut -d: -f5`; DAYS_LEFT=$(( PASS_LAST_CHANGE + PASS_LIFE - NOW)) ; echo $DAYS_LEFT
2010-11-05 23:03:48
User: EBAH
Functions: cut echo
0

This works only with GNU date.

On Solaris the command

date +%s

doesn't work. You can try the following instead:

nawk 'BEGIN {print srand()}'

which should give the same output as date +%s does under Solaris.
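The arithmetic itself can be sketched with hypothetical shadow values (GNU date assumed; field 3 of /etc/shadow is the day-of-epoch of the last password change, field 5 the maximum password age in days):

```shell
# Days left = last-change-day + max-age - today (all in days since epoch).
NOW=$(( $(date +%s) / 86400 ))          # today as days since 1970-01-01
PASS_LAST_CHANGE=$(( NOW - 30 ))        # hypothetical: changed 30 days ago
PASS_LIFE=90                            # hypothetical: 90-day maximum age
DAYS_LEFT=$(( PASS_LAST_CHANGE + PASS_LIFE - NOW ))
echo "$DAYS_LEFT"
```

With these example values the password was changed 30 days into a 90-day lifetime, so 60 days remain.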

for i in $(file * | grep broken | cut -d : -f 1); do rm $i; done
curl -s -O http://s3.amazonaws.com/alexa-static/top-1m.csv.zip ; unzip -q -o top-1m.csv.zip top-1m.csv ; head -1000 top-1m.csv | cut -d, -f2 | cut -d/ -f1 > topsites.txt
2010-11-01 01:25:53
User: chrismccoy
Functions: cut head
Tags: curl unzip cut
-4

This will dump a list of domains, one per line, into a text file.