
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).


News

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!
Terminal - Commands using cut - 485 results
find /dev/disk/by-id -type l -printf "%l\t%f\n" | cut -b7- | sort
tr -s ' ' | cut -d' ' -f2-
wget -q -O- https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/ | grep Linux/7/pdf | cut -d \" -f 2 | awk '{print "https://access.redhat.com"$1}' | xargs wget
input=a.pdf ; pages=`pdftk $input dump_data | grep -i numberofpages | cut -d" " -f 2`; pdftk A=$input shuffle A1-$[$pages/2] A$pages-$[$pages/2+1] output "${input%.*}.rearranged.${input##*.}"
2015-04-26 20:05:20
User: kobayashison
Functions: cut grep
0

Rearranges a PDF document coming from a simplex document-feed scanner that is fed first with the odd pages, then with the even pages from the end. Needs pdftk >1.44 with shuffle support.

Similar to http://www.commandlinefu.com/commands/view/7965/pdf-simplex-to-duplex-merge where there are 2 separate documents, odd and even
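
As a hedged illustration of what the shuffle does, assume a hypothetical 6-page original scanned as described (odd sides first, then even sides from the end):

# scanned page order inside a.pdf: 1,3,5,6,4,2 ; pages=6, so the command runs:
pdftk A=a.pdf shuffle A1-3 A6-4 output a.rearranged.pdf
# shuffle interleaves the two ranges as A1,A6,A2,A5,A3,A4, restoring pages 1,2,3,4,5,6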

pgrep -lf processname | cut -d' ' -f1 | awk '{print "cat /proc/" $1 "/net/sockstat | head -n1"}' | sh | cut -d' ' -f3 | paste -sd+ | bc
groups user1 user2|cut -d: -f2|xargs -n1|sort|uniq -d
2015-03-04 19:12:27
User: swemarx
Functions: cut groups uniq xargs
3

Updated according to flatcap's suggestion, thanks!

grep -xFf <(groups user1|cut -f3- -d\ |sed 's/ /\n/g') <(groups user2|cut -f3- -d\ |sed 's/ /\n/g')
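
A minimal usage sketch, assuming two hypothetical accounts alice and bob that share a group:

groups alice bob | cut -d: -f2 | xargs -n1 | sort | uniq -d
# prints only the group names common to both users, e.g. "sudo"
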
git rev-list --all|tail -n1|xargs git show|grep -v diff|head -n1|cut -f1-3 -d' '
udevadm info -q all -n /dev/sdc | grep ID_PATH | cut -d'-' -f 2 | xargs -n 1 lspci -s
2015-01-27 15:34:02
User: mhs
Functions: cut grep info lspci xargs
Tags: lspci udevadm
1

Useful for big systems with lots of cards.

(Update: does not work with USB disks)

nmap -sP 10.0.0.0/8 | grep -v "Host" | tail -n +3 | tr '\n' ' ' | sed 's|Nmap|\nNmap|g' | grep "MAC Address" | cut -d " " -f5,8-15
2014-12-26 18:31:53
User: jaimerosario
Functions: cut grep sed tail tr
0

In the field, I needed to script a process to scan for a specific vendor's devices on the network. With the help of nmap, I got all the devices of that particular vendor and started a scripted netcat session to download configuration files from a tftp server.

This is the nmap loop (part of the script). You can, however, add another pipe with grep to filter for the vendor/manufacturer devices only, as in the sketch below. If you want to check the whole script, see http://pastebin.com/ju7h4Xf4
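
A hedged sketch of that extra filter, assuming the hypothetical vendor string "Cisco" appears in the MAC Address line:

nmap -sP 10.0.0.0/8 | grep -v "Host" | tail -n +3 | tr '\n' ' ' | sed 's|Nmap|\nNmap|g' | grep "MAC Address" | grep -i "cisco" | cut -d " " -f5,8-15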

brew outdated | cut -f1 | xargs brew upgrade
2014-11-16 09:51:50
User: wires
Functions: cut xargs
1

You probably want to run `brew update` before you run this command
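
A hedged one-liner combining the update with the upgrade (behaviour may vary across Homebrew versions):

brew update && brew outdated | cut -f1 | xargs brew upgrade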

ls *.png | cut -d . -f 1 | xargs -L1 -i convert -strip -interlace Plane -quality 80 {}.png {}.jpg
cut -d: -f1 /etc/group | sort
find -not -empty -type f -printf "%-30s'\t\"%h/%f\"\n" | sort -rn -t$'\t' | uniq -w30 -D | cut -f 2 -d $'\t' | xargs md5sum | sort | uniq -w32 --all-repeated=separate
2014-10-19 02:00:55
User: fobos3
Functions: cut find md5sum sort uniq xargs
1

Finds duplicates based on MD5 sum, comparing only files that have the same size. A performance improvement on:

find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate

The new version takes around 3 seconds where the old version took around 17 minutes. The bottleneck in the old command was the second find: it searches for files with the specified file size. The new version keeps the file path and size from the beginning.

firefox $(grep -i '^url=' file.url | cut -b 5-)
2014-10-08 05:56:27
User: nachos117
Functions: cut grep
0

This command uses grep to read the shortcut (which in the example above is file.url) and filter out all but the one important line: the one containing the website URL plus a few extra characters that need to be removed (for example, URL=http://example.com). The cut command then gets rid of the URL= at the beginning. The output is passed to Firefox, which should interpret it as a web URL to be opened. Of course, you can replace firefox with any other browser. Tested in bash and sh.
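
If you want to test this without a real shortcut, here is a minimal sketch that creates a hypothetical file.url in the usual Internet-shortcut format the command expects:

cat > file.url <<'EOF'
[InternetShortcut]
URL=http://example.com
EOF
firefox $(grep -i '^url=' file.url | cut -b 5-)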

host `hostname` | rev | cut -d' ' -f1 | rev
2014-10-01 18:55:05
Functions: cut host rev
0

Gives the DNS-listed IP for the host you're on... or replace `hostname` with any other hostname.
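
An alternative sketch that avoids the double rev by letting awk print the last field (assuming the usual "name has address x.x.x.x" output):

host $(hostname) | awk '/has address/ {print $NF}'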

finger $(whoami) | egrep -o 'Name: [a-zA-Z0-9 ]{1,}' | cut -d ':' -f 2 | xargs echo
2014-09-24 01:22:07
User: swebber
Functions: cut egrep finger xargs
0

It's possible to use a simple regex to extract the user's name from the finger command.

The final echo is optional; it just removes the leading space.

while true; do ps aux | sort -rk 3,3 | head -n 11 | cut -c -120 | netcat -l -p 8888 2>&1 >/dev/null; done &
2014-08-29 07:10:57
User: manumiu
Functions: cut head ps sort
0

If you want to see your top ten CPU-consuming processes in the browser (e.g. you don't want to ssh into your server all the time just to check the system load), you can run this command and browse to the machine's IP on port 8888, for example 192.168.0.100:8888.
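
You can also read it from another machine on the command line; a hedged example using the hypothetical IP above:

nc 192.168.0.100 8888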

grep Failed auth.log | rev | cut -d' ' -f4 | rev | sort -u
2014-08-14 14:57:41
User: supradave
Functions: cut grep rev sort
0

Find the 'Failed' lines, reverse the output because there are only 3 indicators after the IP address (i.e. port, port#, ssh2 in my file), cut to the 4th field (yes, you could use awk '{print $4}'), reverse the output back to normal, and then sort -u (for unique, or sort | uniq).
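
As hinted above, an equivalent hedged sketch without the double rev, taking the 4th field from the end of each line:

grep Failed auth.log | awk '{print $(NF-3)}' | sort -u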

tail -f LOG_FILE | grep --line-buffered SEARCH_STR | cut -d " " -f 7-
2014-08-07 10:40:45
User: pjsb
Functions: cut grep tail
Tags: grep cut tail -f
0

Outputs / monitors the content of LOG_FILE, showing only the lines that match SEARCH_STR. The output is cut on spaces (as the delimiter), keeping field 7 through the end of each line.
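
A concrete hedged example, assuming a hypothetical Apache-style access log (with the common log format, the quoted request starts around the 5th space-separated field, so adjust the field number to your log layout):

tail -f /var/log/apache2/access.log | grep --line-buffered " 404 " | cut -d " " -f 5-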

netstat -tn 2>/dev/null | grep :80 | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr | head
bkname="test"; tobk="*" ; totalsize=$(du -csb $tobk | tail -1 | cut -f1) ; tar cvf - $tobk | tee >(sha512sum > $bkname.sha512) >(tar -tv > $bkname.lst) | mbuffer -m 4G -P 100% | pv -s $totalsize -w 100 | dd of=/dev/nst0 bs=256k
2014-07-22 15:47:50
User: johnr
Functions: cut dd du tail tar tee
0

This will write a backup of files/folders to TAPE (LTO3-4 in my case). Could be changed to write to DVD/Blu-ray.

Go to the directory where you want to write the output files: cd /bklogs

Enter a name in bkname="Backup1" and the folders/files to back up in tobk="/home /var/www".

It will create a tar and write it to the tape drive on /dev/nst0.

In the process, it will

1) generate a sha512 sum of the tar to $bkname.sha512, so you can validate that your data is intact (see the verification sketch after this list)

2) generate a filelist of the content of the tar with filesize to $bkname.lst

3) buffer the tar stream to prevent shoe-shining the tape (I use 4GB for LTO3 (80MB/s) and 8GB for LTO4 (120MB/s); 3TB USB3 disks can sustain those speeds, otherwise I use 3x2TB raidz)

4) show buffer in/out speed and used space in the buffer

5) show progress bar with time approximation using pv
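
A hedged sketch of how you might later verify the tape against the stored checksum (device and block size taken from the command above; the .sha512 file holds the checksum of the raw tar stream, recorded from stdin):

mt-st -f /dev/nst0 rewind
dd if=/dev/nst0 bs=256k | sha512sum
# compare the printed hash with the one saved in $bkname.sha512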

ADD :

To eject the tape :

; sleep 75; mt-st -f /dev/nst0 rewoffl

TODO:

1) When using old tapes, if the buffer is full and the drive slows down, it means the tape is old and should be replaced instead of being wiped and recycled for another backup. Logging where and when it slows down could provide good information on the wear of the tape. I don't know how to get that information from the mbuffer output and to trigger a message like "This tape slowed down X times at Y1gb, Y2gb, Y3gb, down to Z MB/s for a total of 30 sec. It would be wise to replace this tape next time you want to write to it."

2) Fix filesize approximation

3) Save all the output to $bkname.log with progress updates as new lines. (Anyone have an idea?)

4) Support spanning over multiple tapes.

5) Replace the tar format with something else (dar?); looking at xar right now (https://code.google.com/p/xar/), whose XML metadata could contain per-file checksums, compression algorithm (bzip2, xz, gzip), gnupg encryption, thumbnail, video preview, image EXIF... But that's another project.

TIP:

1) You can specify the width of the progress bar of pv. If it's longer than the terminal, each line refresh will be written to a new line. That way you can see whether there were speed slowdowns during writing.

2) Remove the v in the tar argument cvf to prevent listing all files added to the archive.

3) You can get tarsum (http://www.guyrutenberg.com/2009/04/29/tarsum-02-a-read-only-version-of-tarsum/) and add >(tarsum --checksum sha256 > ${bkname}_list.sha256) after the tee to generate checksums of individual files!

cut -f 2 -d ':' oclHashcat.pot | egrep -oi '[a-z]{1,20}' | sort | uniq > base.pot
translate () { lang="ru"; text=`echo $* | sed 's/ /%20/g'`; curl -s -A "Mozilla/5.0" "http://translate.google.com/translate_a/t?client=t&text=$text&sl=auto&tl=$lang" | sed 's/\[\[\[\"//' | cut -d \" -f 1; }
2014-07-10 18:26:34
User: 2b
Functions: cut sed
-1

Change lang from ru to something else.

Curl version, for Mac OS etc., or any system without wget.
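
A hedged usage sketch (the output depends on Google's translate endpoint, which may have changed since this was posted):

translate Hello world
# with lang="ru", prints something like: Привет мир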