Commands using cut (586)

  • Better to use iproute2! (An awk-based variant is sketched below.)


    2
    ip route list match 0.0.0.0/0 | cut -d " " -f 3
    BaS · 2010-03-07 15:41:18 3
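
    A hedged alternative that avoids counting space-separated fields: iproute2 accepts a "default" selector and awk copes with variable spacing. A sketch, assuming a single default route in the usual "default via GATEWAY dev IFACE" form:

        # print the gateway of the default route
        ip route show default | awk '/^default/ {print $3; exit}'
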

  • 2
    aptitude search '~i!~E' | grep -v "i A" | cut -d " " -f 4
    XORwell · 2010-03-25 00:40:51 6
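
    The aptitude pattern above selects installed, non-essential packages and drops the automatically installed ones. On Debian/Ubuntu systems with apt, a hedged near-equivalent is apt-mark, which tracks the manual/auto flag itself (output may differ slightly from the aptitude query):

        # list packages marked as manually installed
        apt-mark showmanual | sort
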
  • Get a list of all the unique hostnames from the Apache configuration files. Handy for seeing what sites are running on a server. A slightly shorter version. (An awk-only sketch follows below.)


    2
    cat /etc/apache2/sites-enabled/* | egrep 'ServerAlias|ServerName' | tr -s ' ' | sed 's/^\s//' | cut -d ' ' -f 2 | sed 's/www.//' | sort | uniq
    chronosMark · 2010-04-08 15:50:34 2
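
    As referenced above, roughly the same extraction fits in a single awk invocation; a sketch that assumes the same sites-enabled layout and that the hostname is the second whitespace-separated field of each directive:

        # unique hostnames from ServerName/ServerAlias directives, leading "www." stripped
        awk '/ServerName|ServerAlias/ {sub(/^www\./, "", $2); print $2}' /etc/apache2/sites-enabled/* | sort -u
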
  • Just a quick hack to give reasonable filenames to TrueType and OpenType fonts. I'd accumulated a big bunch of bizarrely and inconsistently named font files in my ~/.fonts directory. I wanted to copy some, but not all, of them over to my new machine, but I had no idea what many of them were. This script renames .ttf files based on the name embedded inside the font. It will also work for .otf files, but make sure you change the mv part so it gives them the proper extension. REQUIREMENTS: Bash (for extended pattern globbing), showttf (Debian has it in the fontforge-extras package), GNU grep (for context), and rev (because it's hilarious). BUGS: Well, like I said, this is a quick hack. It grew piece by piece on the command line. I only needed to do this once and spent hardly any time on it, so it's a bit goofy. For example, I find 'rev | cut -f1 | rev' pleasantly amusing --- it seems so clearly wrong, and yet it works to print the last argument. I think flexibility in expressiveness like this is part of the beauty of Unix shell scripting. One-off tasks can be written quickly, built up as a person is "thinking aloud" at the command line. That's why Unix is such a huge boost to productivity: it allows each person to think their own way instead of enforcing some "right way". On a tangent: one of the things I wish commandlinefu would show is the command-line HISTORY of the person as they developed the script. I think that conversation between programmer and computer, as the pipeline is built piece by piece, is a more valuable lesson than any canned script. (A fontconfig-based alternative is sketched below.)


    2
    shopt -s extglob; for f in *.ttf *.TTF; do g=$(showttf "$f" 2>/dev/null | grep -A1 "language=0.*FullName" | tail -1 | rev | cut -f1 | rev); g=${g##+( )}; mv -i "$f" "$g".ttf; done
    hackerb9 · 2010-04-30 09:46:45 6
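
    The fontconfig-based alternative mentioned above: where fc-scan is installed, it can print the embedded full name directly and replaces the showttf/grep/rev pipeline. A hedged sketch; it assumes fc-scan's %{fullname} format key and, like the original, hard-codes the .ttf extension:

        # rename TrueType fonts after their embedded full name, prompting before overwrites
        for f in *.ttf; do
            name=$(fc-scan --format '%{fullname}\n' "$f" | head -n1)
            [ -n "$name" ] && mv -i "$f" "$name.ttf"
        done
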
  • Count how many times each domain appears in a file whose lines are URLs of the form http://domain/resource. (An awk equivalent is sketched below.)


    2
    cut -d'/' -f3 file | sort | uniq -c
    rubenmoran · 2010-05-23 16:02:51 8
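
    The awk equivalent mentioned above, assuming the same http://domain/resource layout (the domain is field 3 when splitting on "/"):

        # occurrences of each domain, one count per line
        awk -F/ '{count[$3]++} END {for (d in count) print count[d], d}' file | sort -rn
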
  • Same as the original, but works in bash. (A more readable sketch, with the row count taken from tput lines, follows below.)


    2
    while [ 1 -lt 2 ]; do i=0; COL=$((RANDOM%$(tput cols)));ROW=$((RANDOM%$(tput cols)));while [ $i -lt $COL ]; do tput cup $i $ROW;echo -e "\033[1;34m" $(cat /dev/urandom | head -1 | cut -c1-1) 2>/dev/null ; i=$(expr $i + 1); done; done
    dave1010 · 2010-05-28 16:07:56 5
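
    A more readable sketch of the same effect, as mentioned above. Note the original derives both coordinates from "tput cols"; this version assumes the row count should come from "tput lines" instead, and picks a printable character with tr:

        while true; do
            col=$((RANDOM % $(tput cols)))
            rows=$(tput lines)
            for ((i = 0; i < rows; i++)); do
                tput cup "$i" "$col"    # move to row i of a random column
                printf '\033[1;34m%s' "$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c1)"
            done
        done
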

  • 2
    curl -s "http://ajax.googleapis.com/ajax/services/language/translate?langpair=|en&v=1.0&q=`xsel`" |cut -d \" -f 6
    eneko · 2010-06-11 21:38:26 3
  • Run this command when you are physically at the computer you wish to send pop-up messages to. Then when you ssh into it, you can do this: echo "guess who?" > commander. The message "guess who?" will then pop up on the screen for a few moments, then disappear. You will need to create the commander file first. I mess with my wife all the time with this, e.g. echo "You have given the computer a virus. Computer will be rendered useless in 10 seconds." > commander lol (A named-pipe variant that avoids the busy loop is sketched below.)


    2
    while : ; do if [ ! $(ls -l commander | cut -d ' ' -f5) -eq 0 ]; then notify-send "$(less commander)"; > commander; fi; done
    evil · 2010-06-13 18:45:02 13
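
    The named-pipe variant mentioned above: instead of re-reading the file in a tight loop, read blocks on a FIFO until something is written to it. A sketch, reusing the commander name and assuming notify-send is available:

        # create the pipe once, then pop up each message written to it
        [ -p commander ] || mkfifo commander
        while true; do
            if read -r msg < commander; then
                notify-send "$msg"
            fi
        done

    Sending messages works the same way: echo "guess who?" > commander
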
  • Simple and easy. No regex, no search and replace. Just clean, built-in tools. (An iproute2 equivalent is sketched below.)


    2
    ifconfig eth0 | grep 'inet addr' | cut -d ':' -f 2 | cut -d ' ' -f 1
    atoponce · 2010-06-26 22:36:21 3
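
    The iproute2 equivalent mentioned above, for systems where ifconfig is deprecated; a hedged sketch using the one-line (-o) output, where the address/prefix is the fourth field:

        # first IPv4 address on eth0, without the prefix length
        ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1 | head -n1
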

  • 2
    svn log 2>&1 | egrep '^r[0-9]+' | cut -d "|" -f2 | sort | uniq -c
    cicatriz · 2010-09-06 15:13:48 4
  • While going through the source code of the well-known ps command, I read about some interesting things. Namely, that there are a bunch of different fields that ps can try to enumerate for you. These are fields I was not able to find in the man pages or documentation, only in the source. Here is a longer function that goes through each of the formats recognized by the ps on your machine, executes it, and then prompts you whether you would like to add it or not. Adding it simply appends it to an array that is printed when you ctrl-c or at the end of the function run. This lets you save your favorite ones and then see the command to put in your .bash_profile, like mine at http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html. Note that I had to do the exec method below in order to pause with read. t () { local r l a P f=/tmp/ps c='command ps wwo pid:6,user:8,vsize:8,comm:20' IFS=' '; trap 'exec 66 exec 66 $f && command ps L | tr -s ' ' >&$f; while read -u66 l >&/dev/null; do a=${l/% */}; $c,$a k -${a//%/} -A; yn "Add $a" && P[$SECONDS]=$a; done }


    2
    for p in `ps L|cut -d' ' -f1`;do echo -e "`tput clear;read -p$p -n1 p`";ps wwo pid:6,user:8,comm:10,$p kpid -A;done
    AskApache · 2010-10-12 06:42:10 7
  • Can be used within a script to configure iptables, for example: iface=$2; inet_ip=`ifconfig "$iface" | grep inet | cut -d: -f2 | cut -d ' ' -f1`; ipt="sudo /sbin/iptables"; then: $ipt -A INPUT -i $iface ! -f -p tcp -s $UL -d $inet_ip --sport 1023: --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT and $ipt -A OUTPUT -o $iface -p tcp -s $inet_ip -d $UL --sport 3306 --dport 1023: -m state --state ESTABLISHED,RELATED -j ACCEPT


    2
    inet_ip=`ifconfig wlan0 | grep inet | cut -d: -f2 | cut -d ' ' -f1` && echo $inet_ip
    fabri8bit · 2010-11-28 23:06:38 4
  • I often use it at work, on an OVH server with root SSH access, where I often have to fix permissions after finishing an operation. This command replaces the user, group and mode with the ones Apache needs to work. (A variant that takes the owner from the directory itself is sketched below.)


    2
    alias restoremod='chgrp users -R .;chmod u=rwX,g=rX,o=rX -R .;chown $(pwd |cut -d / -f 3) -R .'
    Juluan · 2010-12-28 11:42:43 17
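
    The "pwd | cut -d / -f 3" part assumes the current directory sits under /home/<user>/, so field 3 of the path is the user name. The variant mentioned above asks the filesystem for the owner of the current directory instead (GNU stat assumed):

        # restore ownership to whoever owns the current directory, rather than guessing from the path
        alias restoremod='chgrp users -R .; chmod u=rwX,g=rX,o=rX -R .; chown "$(stat -c %U .)" -R .'
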
  • List the top committers (and the number of their commits) of an SVN repository. In this example it counts revisions of the current directory.


    2
    svn log -q | grep '^r[0-9]' | cut -f2 -d "|" | sort | uniq -c | sort -nr
    kkapron · 2011-01-03 15:23:08 4

  • 2
    alias cd1='cd $( ls -lt | grep ^d | head -1 | cut -b 51- )'
    soulonfire · 2011-06-22 11:45:15 12
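
    The "cut -b 51-" part depends on the exact column widths of ls -l output, which shift with user/group name lengths. A hedged, more robust sketch that sorts directories by modification time directly:

        # cd into the most recently modified subdirectory of the current directory
        alias cd1='cd "$(ls -td -- */ | head -n1)"'
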
  • This command shows a sorted list of the IP addresses from which there have been authentication errors via SSH (possibly script kiddies trying to gain access to your server). It eliminates duplicates so it's easier to read, but you can remove the "uniq" command at the end, or even use "uniq -c" to count how many times each IP address shows up in the log (the path to the log may vary from system to system); the counting variant is sketched below.


    2
    cat /var/log/auth.log | grep -i "pam_unix(sshd:auth): authentication failure;" | cut -d' ' -f14,15 | cut -d= -f2 | sort | uniq
    JohnQUnknown · 2011-10-25 04:58:09 8
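
    The counting variant mentioned in the description, sorted so the noisiest addresses come first (same caveat as the original: the field positions depend on your syslog format):

        grep -i "pam_unix(sshd:auth): authentication failure;" /var/log/auth.log | cut -d' ' -f14,15 | cut -d= -f2 | sort | uniq -c | sort -rn
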

  • 2
    while read l; do echo -e "$RANDOM\t$l"; done | sort -n | cut -f 2
    unixmonkey28183 · 2011-12-09 01:02:27 4
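
    The command above decorates each input line with a random key, sorts on it, then strips the key off. Where GNU coreutils is available, shuf does the same shuffle in one step (a hedged equivalent; input.txt stands in for whatever you are shuffling):

        shuf input.txt          # or: some-command | shuf
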

  • 2
    cut -c 2- < <file>
    seb1245 · 2012-12-08 09:11:29 5
  • Checks that an SSH private key matches its public key. Tested in bash 4.


    2
    diff <(ssh-keygen -y -f ~/.ssh/id_rsa) <(cut -d' ' -f1,2 ~/.ssh/id_rsa.pub)
    fernandomerces · 2013-08-31 14:01:33 7
  • Analyze an Apache access log for the time period with the most activity and display the hit count, requesting IP and the timestamp. May help detect a brute-force DoS attack. (A per-minute variant is sketched below.)


    2
    cut -d " " -f1,4 access_log | sort | uniq -c | sort -rn | head
    zlemini · 2013-09-20 21:29:32 8
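
    The per-minute variant mentioned above: field 4 of a combined-format log includes seconds, so the original counts hits per IP per second. Trimming the seconds off the timestamp buckets the hits per minute instead (a sketch, same assumptions as the original):

        cut -d " " -f1,4 access_log | cut -d: -f1-3 | sort | uniq -c | sort -rn | head
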
  • It finds out the mic recording level at the moment the command is run, and if the noise level rises above that, it starts recording to an MP3 file. The resulting file will contain only the sounds, not the silences.


    2
    arecord -q -f cd -d 1 recvol.wav;sox recvol.wav -n stat 2>&1|grep RMS|grep amplitude|cut -d"." -f2|cut -c 1-2>recvol;echo $((`cat recvol`+1))>recvol;rec -t wav - silence 1 0.1 `cat recvol` -1 1.0 `cat recvol`%|lame -s 44.1 -a -v - >record.mp3
    geaplanet · 2014-02-27 23:23:55 8
  • IP addresses and the number of connections they have to port 80. (An ss-based equivalent is sketched below.)


    2
    netstat -tn 2>/dev/null | grep ':80 ' | awk '{print $5}' |sed -e 's/::ffff://' | cut -f1 -d: | sort | uniq -c | sort -rn | head
    copocaneta · 2014-03-12 12:43:07 5
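
    The ss-based equivalent mentioned above, for systems where netstat is deprecated; a hedged sketch that assumes ss's default column layout (peer address in field 5) and plain IPv4 peers:

        # established connections with local port 80, counted per peer address
        ss -tn 'sport = :80' | awk 'NR>1 {split($5, a, ":"); print a[1]}' | sort | uniq -c | sort -rn | head
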
  • Finds duplicates based on MD5 sum. Compares only files with the same size. A performance improvement on: find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate The new version takes around 3 seconds, where the old version took around 17 minutes. The bottleneck in the old command was the second find: it searches for the files with the specified file size. The new version keeps the file path and size from the beginning. (A packaged alternative is noted below.)


    2
    find -not -empty -type f -printf "%-30s'\t\"%h/%f\"\n" | sort -rn -t$'\t' | uniq -w30 -D | cut -f 2 -d $'\t' | xargs md5sum | sort | uniq -w32 --all-repeated=separate
    fobos3 · 2014-10-19 02:00:55 10
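
    The packaged alternative mentioned above: where installing a tool is an option, fdupes applies the same size-first, hash-second strategy:

        # group duplicate files under the current directory, groups separated by blank lines
        fdupes -r .
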

  • 2
    grep -xFf <(groups user1|cut -f3- -d\ |sed 's/ /\n/g') <(groups user2|cut -f3- -d\ |sed 's/ /\n/g')
    psykotron · 2015-03-04 18:23:59 8
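
    The command above prints the group names two users have in common. A hedged alternative with comm and id that avoids counting fields in the groups output (user1/user2 are placeholders, as above):

        # group names common to both users
        comm -12 <(id -Gn user1 | tr ' ' '\n' | sort) <(id -Gn user2 | tr ' ' '\n' | sort)
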
  • This logs the titles of the active windows, so you can monitor what you were doing at which times. (It is not hard to also log the executable name, but then the log gets too long.) An xdotool-based sketch follows below.


    2
    while true; do (echo -n $(date +"%F %T"):\ ; xwininfo -id $(xprop -root|grep "ACTIVE_WINDOW("|cut -d\ -f 5) | grep "Window id" | cut -d\" -f 2 ) >> logfile; sleep 60; done
    BeniBela · 2015-09-23 23:00:14 24
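
    The xdotool-based sketch mentioned above: where xdotool is installed, the xprop/xwininfo pair collapses into a single call (logfile is the same output file as in the original):

        # log the active window title once a minute
        while true; do
            printf '%s: %s\n' "$(date +'%F %T')" "$(xdotool getactivewindow getwindowname)" >> logfile
            sleep 60
        done
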