What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that receive at least 3 or 10 votes, so that only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using file - 141 results
find . -name "*.URL" | while read file ; do cat "$file" | sed 's/InternetShortcut/Desktop Entry/' | sed '/^\(URL\|\[\)/!d' > "$file".desktop && echo "Type=Link" >> "$file".desktop ; done
iconv -f $(file -bi filename.ext | sed -e 's/.*[ ]charset=//') -t utf8 filename.ext > filename.utf8.ext # write to a new file: redirecting onto the input would truncate it before iconv reads it
for file in *.log; do : > "$file"; done
2013-11-23 15:08:43
User: pratalife
Functions: file
0

Nice trick with the :>! This is a variant that truncates a whole batch of files (e.g. *.log) in one go.
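
A recursive variant (an assumption, not part of the original tip) can hand the same truncation to find:

find /var/log -name '*.log' -type f -exec sh -c ': > "$1"' _ {} \;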

for file in $(git ls-files | grep old_name_pattern); do git mv $file $(echo $file | sed -e 's/old_name_pattern/new_name_pattern/'); done
cat file | paste -s -d'%' - | sed 's/\(^\|$\)/"/g;s/%/","/g'
for file in "$@"; do name=$(basename "$file" .webm) echo ffmpeg -i $file -vn -c:a copy $name.ogg ffmpeg -i "$file" -vn -c:a copy "$name.ogg" done
2013-10-05 14:49:07
User: hoodie
Functions: basename echo file
0

Extracts the audio track from a webm video into an Ogg file without re-encoding. Use this in combination with clive or youtube-dl.
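
Assuming the loop is saved as an executable script (the name strip-audio.sh is hypothetical), it would be invoked with the videos as arguments:

./strip-audio.sh *.webm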

rsync -arvx --numeric-ids --stats --progress --bwlimit=1000 file server:destination_directory
2013-10-01 13:00:59
Functions: file rsync
Tags: Linux rsync
0

Useful for transferring large files over a network during operational hours: --bwlimit caps the transfer rate (here 1000 KiB/s) so the copy doesn't saturate the link.
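
As a concrete sketch (the host and file names are hypothetical), throttling a large dump to roughly 1 MB/s during business hours:

rsync -arvx --numeric-ids --stats --progress --bwlimit=1000 dump.sql.gz backup@db01:/srv/dumps/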

for i in `find . -name "*.jar"`; do jar -tvf $i | grep -v /$ | awk -v file=$i '{print file ":" $8}'; done > all_jars.txt
convert_path2uri () { echo -n 'file://'; echo -n "$1" | perl -pe 's/([^a-zA-Z0-9_\/.])/sprintf("%%%.2x", ord($1))/eg' ;} #convert2uri '/tmp/a b' ### convert file path to URI
2013-07-01 08:54:45
User: totti
Functions: echo file perl
Tags: encoding PATH url
1

Really helpful when working with files whose names contain spaces or other awkward characters. The encoded form can be stored and retrieved as a single field in a file.

This URI format is directly supported by nautilus and firefox (and other browsers).
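
For example, a path containing a space encodes like this:

convert_path2uri '/tmp/a b'
file:///tmp/a%20b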

split -l 12000 -a 5 database.sql splited_file; i=1; for file in splited_file*; do mv $file database_${i}.sql; i=$((i + 1)); done
2013-05-15 18:17:47
User: doczine
Functions: file mv split
0

For some reason split will not let you add an extension to the files it creates. Put this in a .sh script and run it with bash or sh: it splits your text file into 12000-line chunks, then renames each chunk to database_N.sql.
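
To sanity-check the result (a suggestion, not part of the original), wc confirms that no chunk exceeds 12000 lines:

wc -l database_*.sql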

for file in *.zip; do unzip -l "$file" >> archiveindex.txt ; done;
2013-05-02 01:43:26
User: chon8a
Functions: file
Tags: unzip
-2

It can be used to create an index of a backup directory, or to find a particular file inside the archives.
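
For example, to locate a particular file in the index afterwards (the filename is hypothetical):

grep -i 'invoice.pdf' archiveindex.txt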

file -i `find . -name '*.jpg' -print` | grep "application/msword"
2013-03-10 16:53:23
User: genghisdani
Functions: file grep
0

Created to deal with an overzealous batch rename on our server that renamed all files to .jpg files.
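
A possible follow-up (a sketch, assuming the matches really are Word documents and the paths contain no colons) is to rename them back to .doc:

file -i `find . -name '*.jpg' -print` | grep "application/msword" | cut -d: -f1 | while read f; do mv "$f" "${f%.jpg}.doc"; done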

for file in *; do convert $file -resize 800x600 resized-$file; done
2013-02-17 21:37:14
User: sonic
Functions: file
Tags: xargs convert
0

To ignore aspect ratio, run:

for file in *; do convert $file -resize 800x600! resized-$file; done

and all images will be exactly 800x600.

Use your shell of choice; this was done in bash.

for file in *.png; do convert $file -resize 65% new_$file; done
FOR %%c in (C:\Windows\*.*) DO (echo file %%c)
2013-01-31 15:19:54
User: jmcclosk
Functions: echo file
0

You can use a FOR loop to act on one or more files returned by the IN clause. We originally found this in order to GPG-decrypt a file whose full name we didn't know in advance (e.g. Test_File_??????.txt, where ?????? is the generation time in HHMMSS format). Since the time isn't known ahead of time, wildcards are needed, and since GPG doesn't handle wildcards itself, this is the perfect solution. Thought I would share this revelation. :-)
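
A sketch of that decryption case (the filenames are hypothetical; %%~nc is the batch modifier that strips the last extension, so the output name loses its .gpg suffix):

FOR %%c IN (Test_File_??????.txt.gpg) DO (gpg --output %%~nc --decrypt %%c)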

find /var/www/ -name file -exec cp {}{,.bak} \;
2013-01-27 01:03:28
User: joepd
Functions: cp file find
0

Let the shell handle the repetition instead of find :)

find /var/www/ -name file -exec cp {} {}.bak \;
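
The trick works because the invoking shell brace-expands {}{,.bak} into the two words {} and {}.bak before find ever runs, which echo makes visible:

echo {}{,.bak}
{} {}.bak
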
largest() { dir=${1:-"./"}; count=${2:-"10"}; echo "Getting top $count largest files in $dir"; du -sx "$dir/"* | sort -nk 1 | tail -n $count | cut -f2 | xargs -I file du -shx file; }
2013-01-21 09:45:21
User: jhyland87
Functions: cut du echo file sort tail xargs
1

You can simply run "largest", and list the top 10 files/directories in ./, or you can pass two parameters, the first being the directory, the 2nd being the limit of files to display.

You're best off putting this in your .bashrc or .bash_profile file.
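
For example, to list the 5 largest entries under /var/log:

largest /var/log 5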

find -type f | xargs file | grep ".*: .* text" | sed "s;\(.*\): .* text.*;\1;"
file -sL /dev/sda7
for file in *.jpg; do identify -format '%f %b %Q %w %h\n' "$file"; done
2012-11-16 10:06:35
User: phattmatt
Functions: file
0

Runs the identify command (from ImageMagick) on each jpg file in the current directory and returns image details according to the format parameter. The example here returns:

Filename FileSize Compression Width Height

More information about the available format options can be found here: http://www.imagemagick.org/script/escape.php

I usually redirect the output to a text file using "> listofdetails.txt" at the end. Spreadsheet magic can then be applied.
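
Spelled out, that redirection looks like:

for file in *.jpg; do identify -format '%f %b %Q %w %h\n' "$file"; done > listofdetails.txt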

for file in `ls -t \`find . -name "*.zip" -type f\``; do found=`unzip -c "$file" | grep --color=always "PATTERN"`; if [[ $found ]]; then echo -e "${file}\n${found}\n"; fi done
2012-11-12 15:43:15
User: vladfr
Functions: echo file grep
0

for file in `ls -t \`find . -name "*.zip" -type f\``; do
  found=`unzip -c "$file" | grep --color=always "PATTERN"`;
  if [[ $found ]]; then echo -e "${file}\n${found}\n"; fi
done

tail -n +56 file > newfile
2012-10-26 03:04:12
User: basic612
Functions: file tail
0

'newfile' will have the content of 'file' minus the first 55 lines (tail -n +56 starts output at line 56).

To delete only the first line:

tail -n +2 file > newfile
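
An equivalent with sed, for what it's worth:

sed '1,55d' file > newfile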

rename(){ txtToReplace=${1} ; replacementTxt=${2} ; shift 2 ; files=${@} ; for file in $files ; do mv ${file} ${file/${txtToReplace}/${replacementTxt}} ; done ; }
2012-10-03 17:03:29
Functions: file mv rename shift
2

Implementation of `rename` for systems on which I don't have access to it.
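
For example (the filenames are hypothetical), replacing an IMG_ prefix across a set of photos:

rename IMG_ holiday_ IMG_001.jpg IMG_002.jpg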

for file in `svn st | awk '{print $2}'`; do svn revert $file; done