What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 or 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using file - 146 results
for file in $(find /var/backup -name "backup*" -type f |sort -r | tail -n +10); do rm -f $file; done ; tar czf /var/backup/backup-system-$(date "+\%Y\%m\%d\%H\%M-\%N").tgz --exclude /home/dummy /etc /home /opt 2>&- && echo "system backup ok"
2014-09-24 14:04:11
User: akiuni
Functions: date echo file find rm sort tail tar
Tags: backup Linux cron
0

This command can be added to crontab to run a nightly backup of the listed directories, keeping only the 10 most recent backup archives (the %-signs in the date format are escaped because cron treats a bare % specially).
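For example, a nightly 02:30 run might look like this in the crontab (the script path below is a hypothetical stand-in for the command above; if you inline the command directly, keep the \% escapes):

30 2 * * * /usr/local/sbin/nightly-backup.sh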

for file in *.pdf; do convert -verbose -colorspace RGB -resize 800 -interlace none -density 300 -quality 80 "$file" "${file//.pdf/.jpg}"; done
2014-06-19 15:52:42
User: malathion
Functions: file
Tags: pdf convert
2

Without the bashisms and unnecessary sed dependency. Substitutions quoted so that filenames with whitespace will be handled correctly.

for file in ./data/message-snapshots/*.jpg; do cp "$file" /data/digitalcandy/ml/images/; done
2014-06-14 17:26:21
User: ferdous
Functions: cp file
Tags: cp ARG_MAX
0

helpful when you see something like this:

zsh: argument list too long: cp
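The loop works because the shell expands the glob itself and hands cp one file at a time, so no single command line exceeds ARG_MAX. A find-based sketch that batches arguments under the limit instead (assumes GNU cp for -t):

find ./data/message-snapshots -maxdepth 1 -name '*.jpg' -exec cp -t /data/digitalcandy/ml/images/ {} +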

for file in /usr/bin/*; do pacman -Qo "$file" &> /dev/null || echo "$file"; done
2014-04-22 21:57:08
User: malathion
Functions: echo file
0

In this example I am returning all the files in /usr/bin that weren't put there by pacman, so that they can be moved to /usr/local/bin where they (most likely) belong.
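A hypothetical follow-up that performs the move itself; the -i flag is an assumption added here so nothing is overwritten silently:

for file in /usr/bin/*; do pacman -Qo "$file" &> /dev/null || mv -i "$file" /usr/local/bin/; done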

for file in $(find . -name "*.mp4"); do ogv=${file%%.mp4}.ogv; if test "$file" -nt "$ogv"; then echo "$file is newer than $ogv"; ffmpeg2theora "$file"; fi; done
find . -name "*.URL" | while read file ; do cat "$file" | sed 's/InternetShortcut/Desktop Entry/' | sed '/^\(URL\|\[\)/!d' > "$file".desktop && echo "Type=Link" >> "$file".desktop ; done
iconv -f $(file -bi filename.ext | sed -e 's/.*[ ]charset=//') -t utf8 filename.ext > filename.ext.utf8
for file in *.log; do :> "$file"; done
2013-11-23 15:08:43
User: pratalife
Functions: file
0

Nice trick with :> (the : builtin produces no output, so the redirection truncates the file)! This is a variant that does a bunch of files (e.g. *.log) in one go.
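A sketch of the same idea for logs scattered across subdirectories (the find invocation is an assumption, not part of the original):

find . -type f -name '*.log' -exec sh -c ': > "$1"' _ {} \;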

for file in $(git ls-files | grep old_name_pattern); do git mv "$file" "$(echo "$file" | sed -e 's/old_name_pattern/new_name_pattern/')"; done
cat file | paste -s -d'%' - | sed 's/\(^\|$\)/"/g;s/%/","/g'
for file in "$@"; do name=$(basename "$file" .webm) echo ffmpeg -i $file -vn -c:a copy $name.ogg ffmpeg -i "$file" -vn -c:a copy "$name.ogg" done
2013-10-05 14:49:07
User: hoodie
Functions: basename echo file
0

Extracts the audio track from a webm video into an .ogg file (-vn drops the video stream, -c:a copy keeps the audio untouched). Use this in combination with clive or youtube-dl.
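A sketch of how it might be invoked, assuming the loop is saved as an executable script named strip-audio.sh (a hypothetical name):

./strip-audio.sh downloads/*.webm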

rsync -arvx --numeric-ids --stats --progress --bwlimit=1000 file server:destination_directory
2013-10-01 13:00:59
Functions: file rsync
Tags: Linux rsync
0

Useful for transferring a large file over the network during operational hours: --bwlimit=1000 caps the transfer rate at roughly 1000 KiB/s so other traffic isn't starved.

for i in $(find . -name "*.jar"); do jar -tvf "$i" | grep -v '/$' | awk -v file="$i" '{print file ":" $8}'; done > all_jars.txt
convert_path2uri () { echo -n 'file://'; echo -n "$1" | perl -pe 's/([^a-zA-Z0-9_\/.])/sprintf("%%%.2x", ord($1))/eg' ;} #convert_path2uri '/tmp/a b' ### convert file path to URI
2013-07-01 08:54:45
User: totti
Functions: echo file perl
Tags: encoding PATH url
1

Really helpful when playing with files whose names contain spaces or other awkward characters. It makes it easy to store a name and path in a single field when saving them to a file.

This format (a file:// URI) is directly supported by nautilus and firefox (and other browsers).
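For example (the %20 follows from the percent-encoding rule in the perl substitution):

convert_path2uri '/tmp/a b'    # prints file:///tmp/a%20b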

split -l 12000 -a 5 database.sql splited_file; i=1; for file in splited_file*; do mv "$file" "database_${i}.sql"; i=$(( i + 1 )); done
2013-05-15 18:17:47
User: doczine
Functions: file mv split
0

For some reason split will not let you add an extension to the files it creates. Save this in a .sh script and run it with bash or sh: it splits your text file at 12000 lines per piece, then renames each piece with a .sql extension.
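If your coreutils is recent enough, GNU split can attach the extension itself; a sketch assuming GNU coreutils 8.16 or newer, with -d for numeric suffixes:

split -d -l 12000 --additional-suffix=.sql database.sql database_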

for file in *.zip; do unzip -l "$file" >> archiveindex.txt ; done;
2013-05-02 01:43:26
User: chon8a
Functions: file
Tags: unzip
-2

It can be used to create an index of a backup directory, or to find which archive contains a particular file.
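Once archiveindex.txt exists, finding the archive that holds a file is a plain grep (the filename here is hypothetical):

grep -i 'report-2013.pdf' archiveindex.txt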

find . -name '*.jpg' -exec file -i {} + | grep "application/msword"
2013-03-10 16:53:23
User: genghisdani
Functions: file grep
0

Created to deal with an overzealous batch rename on our server that renamed all files to .jpg files.

for file in *; do convert "$file" -resize 800x600 "resized-$file"; done
2013-02-17 21:37:14
User: sonic
Functions: file
Tags: xargs convert
0

To ignore the aspect ratio, run:

for file in *; do convert "$file" -resize 800x600! "resized-$file"; done

and all images will be forced to exactly 800x600.

Use your shell of choice; this was done in bash.

for file in *.png; do convert "$file" -resize 65% "new_$file"; done
FOR %%c in (C:\Windows\*.*) DO (echo file %%c)
2013-01-31 15:19:54
User: jmcclosk
Functions: echo file
0

You can implement a FOR loop to act on one or more files returned by the IN clause. We originally found this in order to GPG-decrypt a file using wildcards (where you don't know the entire file name, e.g. Test_File_??????.txt, where ?????? is the generation time in HHMMSS format). Since we can't know when the file was generated, we need wildcards - and since GPG doesn't handle wildcards itself, letting FOR expand them is the perfect solution.

find /var/www/ -name file -exec cp {}{,.bak} \;
2013-01-27 01:03:28
User: joepd
Functions: cp file find
0

Let the shell handle the repetition instead of find: brace expansion turns cp {}{,.bak} into cp {} {}.bak before find ever runs, which is equivalent to:

find /var/www/ -name file -exec cp {} {}.bak \;
largest() { dir=${1:-"./"}; count=${2:-"10"}; echo "Getting top $count largest files in $dir"; du -sx "$dir/"* | sort -nk 1 | tail -n $count | cut -f2 | xargs -I file du -shx file; }
2013-01-21 09:45:21
User: jhyland87
Functions: cut du echo file sort tail xargs
1

You can simply run "largest" to list the top 10 files/directories in ./, or pass two parameters: the first being the directory, the second being the number of entries to display.

Best put this in your .bashrc or .bash_profile file.
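Example invocations (the directory below is just an illustration):

largest
largest /var/log 5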

find -type f | xargs file | grep ".*: .* text" | sed "s;\(.*\): .* text.*;\1;"
file -sL /dev/sda7