
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that reach a minimum of 3 or 10 votes respectively - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands require moderation before they will appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.
Terminal - Commands using tr - 273 results
cat /dev/urandom | tr -dc A-Za-z0-9 | head -c 32
tr -d '\\' | tr -d '\n'
cat table-mv.txt | perl -pe 's{([^;]+);([^;]+)}{tbl$1/tbl$2}' | perl -pe 's{(\S+)}{perl -i -pe #s/$1/g# xxx.sql}' | tr "#" "\'" | bash
2011-10-05 15:55:34
User: hute37
Functions: cat perl tr
0

Given a semicolon-separated map file, apply multiple replacements to a single file.
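For illustration, a hypothetical table-mv.txt might hold one old;new table-name pair per line, such as "customers;clients"; the pipeline turns each pair into an in-place substitution like:

perl -i -pe 's/tblcustomers/tblclients/g' xxx.sql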

ps ewwo command PID | tr ' ' '\n' | grep \=
cat /proc/PID/environ | tr '\0' '\n'
echo 'fOo BaR' | tr '[A-Z]' '[a-z]' | sed 's/\(^\| \)\([a-z]\)/\1\u\2/g'
tr -d "\r" < file.vcf | tr "\0" " " > file.vcf.txt
tr -d "\r" < dos.txt > linux.txt
svn st -q | cut -c 2- | tr -d ' ' | xargs tar -czvf ../backup.tgz
2011-08-31 10:30:29
User: titus
Functions: cut tar tr xargs
1

This works more reliably for me ("cut -c 8-" left one extra space, so it did not work).
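As a sketch of why the offsets matter, svn st -q prints a status letter in column 1, several more (usually blank) status columns, then the path (paths here are hypothetical):

M       src/main.c
A       docs/README.txt

cut -c 2- drops the status letter and tr -d ' ' strips the remaining padding - though note it would also mangle any path containing spaces.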

cat dirtyfile.txt | awk '{gsub(/[[:punct:]]/,"")}1' | tr A-Z a-z | sed 's/[0-9]*//g' | sed -e 's/ //g' | strings | tr -cs '[:alpha:]' '\ ' | sed -e 's/ /\n/g' | tr A-Z a-z | sort -u > cleanfile.txt
2011-08-28 01:26:04
User: purehate
Functions: awk cat sed sort strings tr
0

Using large wordlists is cumbersome. Using password-cracking programs with rules, such as Hashcat or John the Ripper, is much more effective. To do this, we often need to "clean" a wordlist, removing all numbers, special characters, spaces, whitespace and other garbage. This command will convert an entire wordlist to all lowercase with no garbage.
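A hypothetical follow-up, feeding the cleaned list to John the Ripper's wordlist mode with rules enabled (hashes.txt stands in for your actual hash file):

john --wordlist=cleanfile.txt --rules hashes.txt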

function expand_url() { curl -sI "$1" | grep Location: | cut -d " " -f 2 | tr -d "\r\n" | pbcopy; }
2011-08-21 05:30:09
User: gt
Functions: cut grep tr
0

Expand a shortened URL by doing a HEAD request and reading the Location header, then copy the resulting URL to the clipboard.
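Usage sketch (the URL is a placeholder; pbcopy is OS X-only, so on Linux you might swap the last stage for xclip, assuming it is installed):

expand_url http://example.com/short        # expanded URL lands on the clipboard
# on Linux, replace pbcopy with: xclip -selection clipboard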

sort -R /usr/share/dict/british | grep -v -m4 ^\{1,10\}$ | tr [:upper:] [:lower:] | tr "\n" " " | tr -d "'s" | xargs -0 echo
2011-08-16 10:11:21
User: takac
Functions: grep sort tr xargs
Tags: tr xkcd
-1

Doesn't use shuf; it's much faster with "shuf -n4" instead of "sort -R".
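A sketch of that suggestion applied to the command above - the same idea with shuf doing the sampling (simplified, dropping the length filter):

shuf -n4 /usr/share/dict/british | tr '[:upper:]' '[:lower:]' | tr '\n' ' '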

RANGE=`wc -l /usr/share/dict/words | sed 's/^\([0-9]*\) .*$/\1/'`; for i in {1..4}; do let "N = $RANDOM % $RANGE"; sed -n -e "${N}p" /usr/share/dict/words | tr -d '\n'; done; RANGE=100; let "N = $RANDOM % $RANGE"; echo $N
jot 4 | awk '{ print "wc -l /usr/share/dict/words | awk '"'"'{ print \"echo $[ $RANDOM * $RANDOM % \" $1 \"]\" }'"'"' | bash | awk '"'"'{ print \"sed -n \" $1 \"p /usr/share/dict/words\" }'"'"' | bash" }' | bash | tr -d '\n' | sed 's/$/\n/'
2011-08-16 00:26:56
User: fathwad
Functions: awk bash sed tr
Tags: tr xkcd
0

So I use OSX and don't have the shuf command. This is what I could come up with.

This command assumes /usr/share/dict/words does not surpass 137,817,948 lines and line selection is NOT uniformly random.
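A hedged alternative sketch for shuf-less systems: one-pass reservoir sampling in awk picks k lines uniformly however long the file is (awk's rand() is not cryptographically strong either):

awk -v k=4 'BEGIN{srand()} {if(NR<=k) r[NR]=$0; else {j=int(rand()*NR)+1; if(j<=k) r[j]=$0}} END{for(i=1;i<=k;i++) printf "%s ", r[i]; print ""}' /usr/share/dict/words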

cat /usr/share/dict/words | grep -P ^[a-z].* | grep -v "'s$" | grep -Pv ^.\{1,15\}$ | shuf -n4 | tr '\n' ' ' | sed 's/$/\n/'
2011-08-15 01:03:48
User: bugmenot
Functions: cat grep sed tr
Tags: tr xkcd shuf
-2

The first grep rejects capitalised words, since the dict has proper nouns in it that you mightn't want to use. The second grep rejects words ending in apostrophe-s, and the third rejects words of 15 characters or fewer, forcing the remaining words to be at least 16 characters long.

svn st | grep '^?' | sed -e 's/\?[[:space:]]*//' | tr '\n' '\0' | xargs -0 svn add
IFS=$'\n' ; for i in * ; do mv -v $i `echo $i|tr 'áàâéèíóôÁÀÂÉÈÍÓÔÕçÇ ' 'aaaeeiooAAAEEIOOOcC_'` ; done
shuf -n4 /usr/share/dict/words | tr -d '\n'
pi 62999 | tr 0-9 del\ l\!owrH
2011-07-29 22:47:53
User: maurol
Functions: tr
Tags: bash pi
0

Pi also says hello world!
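How it works: tr maps the digits 0-9 one-to-one onto the characters 'del l!owrH', so the right run of digits decodes to the message. A self-contained demonstration with a hand-picked digit string (no pi program required):

echo 912263768205 | tr 0-9 'del l!owrH'     # prints: Hello world!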

find src/ -name "*.java" | while read f; do echo -n "$f "; cat "$f" | tr -dc '{}'; echo; done | awk '{ print length($2), $1 }' | sort -n
svn log | tr -d '\n' | sed -E 's/-{2,}/\'$'\n/g' | sed -E 's/ \([^\)]+\)//g' | sed -E 's/^r//' | sed -E "s/[0-9]+ lines?//g" | sort -g
2011-07-15 00:00:03
User: gdevarajan
Functions: sed sort tr
0

This is a minor variation on cowboy's submission - his script worked great on Ubuntu, but the sed gave issues on OS X (which uses BSD sed). Minor tweaks (sed -E instead of sed -r, and \'$'\n to handle the newline) made it work.
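The portability issue in a nutshell: GNU sed spells the extended-regex flag -r and accepts \n in the replacement, while BSD sed uses -E (newer GNU sed accepts -E too) and needs a literal newline, which bash's $'...' quoting supplies. So GNU's

sed -r 's/-{2,}/\n/g'

becomes, for BSD sed,

sed -E 's/-{2,}/\'$'\n/g'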

grep -E '<DT><A|<DT><H3' bookmarks.html | sed 's/<DT>//' | sed '/Bookmarks bar/d' | sed 's/ ADD_DATE=\".*\"//g' | sed 's/^[ \t]*//' | tr '<A HREF' '<a href'
2011-05-26 22:21:01
User: chrismccoy
Functions: grep sed tr
Tags: sed grep chrome
-1

Chrome only lets you export bookmarks in HTML format, with a lot of table junk. This command exports just the titles of the links and the links themselves, without all that extra junk.

for i in {21..79};do echo -e "\x$i";done | tr " " "\n" | shuf | tr -d "\n"
apt-cache search pidgin* | awk '{print $1}' | tr '\n' ' ' | xargs aptitude -y install
2011-04-13 08:01:22
Functions: apt awk tr xargs
0

Command to install every package on a Debian-based system matching the prefix you indicate.
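A slightly more careful sketch: quoting an anchored pattern stops the shell from globbing pidgin* against local files and restricts the search to package names (aptitude typically needs root):

apt-cache search --names-only '^pidgin' | awk '{print $1}' | xargs sudo aptitude -y install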

tr '\0' '\377' < /dev/zero|dd count=$((<bytes>/512))
2011-04-05 14:26:02
User: cfy
Functions: dd tr
Tags: dd tr
4

The speed is about 500 MB/s on my machine; I think that's fast enough when you don't need too many bytes, though a C program may output 1 GB per second on my machine.

If the size is not a multiple of 512, you may need to change the bs and count options in dd.
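For example, to emit exactly 1 MiB of 0xFF bytes into a file (ones.bin is a hypothetical name; 1048576/512 = 2048 blocks at dd's default 512-byte block size):

tr '\0' '\377' < /dev/zero | dd of=ones.bin count=$((1048576/512))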