Avoids creating useless directory entries in the archive, and sorts files (roughly) by extension, which is likely to group similar files together for better compression: a 1%-5% improvement.
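A minimal sketch of the idea, assuming GNU find, sort and tar (the archive path is a placeholder, and file names containing newlines will break it); reversing each name before sorting groups files by suffix:
find . -type f | rev | sort | rev | tar -czf /tmp/archive.tar.gz -T -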
The original didn't use -print0, which fails on weird file names, e.g. ones with spaces. The original also parsed the output of 'ls -l', which is always a bad idea.
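A hedged illustration of the safer pattern, assuming GNU coreutils: NUL-separated names survive spaces and newlines, and the total comes from du rather than from parsed 'ls -l' output:
find . -type f -print0 | du --files0-from=- -ch | tail -n 1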
This command will recursively find all files containing the phrase entered, represented here by "searchphrase". This particular command searches all PHP files, but you could change that to just HTML files, just log files, etc.
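One possible form, with "searchphrase" as the placeholder to replace:
find . -name '*.php' -exec grep -l "searchphrase" {} +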
An improvement of the command "Find Duplicate Files (based on size first, then MD5 hash)" for searching for duplicate files in a directory containing a Subversion working copy. This way the (multiple) duplicates in the meta-information directories are ignored.
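For reference, the Subversion-aware variant described here is:
find -type d -name ".svn" -prune -o -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type d -name ".svn" -prune -o -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate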
Can easily be adapted for other VCSs as well. For CVS, for example, change ".svn" into "CVS":
find -type d -name "CVS" -prune -o -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type d -name "CVS" -prune -o -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
Just how much space are those zillions of database logs taking up? How much will you gain at a compression rate of, say, 80%? This little line gives you a good start for your calculations.
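A hedged sketch, assuming GNU find and awk; the log directory and the '*.log' pattern are placeholders:
find /var/log/db -type f -name '*.log' -printf '%s\n' | awk '{t+=$1} END {printf "total: %.1f MB, at 80%% compression: %.1f MB\n", t/1048576, t*0.2/1048576}'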
Find C/C++ source files and headers in the current directory.
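One way to write it; the exact set of extensions is an assumption:
find . \( -name '*.c' -o -name '*.cc' -o -name '*.cpp' -o -name '*.h' -o -name '*.hpp' \)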
This normalizes the volume of your mp3 library, but uses mp3gain's "album" mode. This applies the same gain change to all files in each directory (which are presumed to be from the same album), so their volume relative to one another is preserved, while the average album volume is normalized. This is done because if one track from an album is quieter or louder than the others, it was probably meant to be that way.
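A sketch of how this might look, assuming mp3gain is installed and the library lives under ~/mp3 (-a is album mode, -k avoids clipping); directories without mp3s just produce a harmless complaint:
find ~/mp3 -type d -exec sh -c 'mp3gain -a -k "$1"/*.mp3' _ {} \;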
The command first deletes any old playlist called playlist.tmp under /tmp. After that it recursively searches all directories under ~/mp3 and stores the result in /tmp/playlist.tmp. Having created the playlist, the command executes mplayer, which shuffles through the playlist. In my ~/.bashrc this is aliased to m: `rm -rf /tmp/playlist.tmp && find ~/mp3 -name '*.mp3' > /tmp/playlist.tmp && mplayer -playlist /tmp/playlist.tmp -shuffle -loop 0 | grep Playing`
Finds all C++, Python and SWIG files in the present directory (it uses "*" rather than "." to exclude invisible files) and counts how many lines they contain. Returns only the last line (the total).
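A possible rendering (the .i extension for SWIG interfaces is an assumption, and the final "total" line is only the grand total if the list fits in a single xargs batch):
find * -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.py' -o -name '*.i' \) -print0 | xargs -0 wc -l | tail -n 1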
This command searches the current directory, and all of its subdirectories, for files that have the string "foo" in their filename (foo.c, two-foo.txt, index-FOO-bar.php, etc.), and opens them in Vim. It ignores any hidden .svn directories. Change -iname to -name if you want case-sensitive matches. Files open in buffers by default, so to verify that the correct files were opened, type ":list". You can load all the files in tabs by doing ":tab ball", or use 'vim -p' on the command line to load files straight into tabs. If you get permission-denied errors, do: vim $(find . ! -path \*.svn\* -type f -iname \*foo\* 2>/dev/null) To narrow it down to a single file extension, such as .php files, use \*foo\*.php (or '*foo*.php', whichever you prefer).
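The basic command reads roughly like this:
vim $(find . ! -path '*.svn*' -type f -iname '*foo*')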
This will find all files under the path "." which are older than 10 days, and delete them. If you wish to use the "rm" command instead, replace "-delete" with "-exec rm [options] {} \;"
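For example (add -type f to leave directories alone, and try it with -print first as a dry run):
find . -type f -mtime +10 -delete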
Grab a list of MP3s (with full path) out of Firefox's cache. Ever gone to a site that has an MP3 embedded in a pesky Flash player, but no download link? Well, this one-liner will yank the full path of those tunes straight out of Firefox's cache into a clean list. A shorter and more intuitive version of the command submitted by TuxOtaku.
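A heavily hedged sketch; the cache path is an assumption and differs between Firefox versions (newer releases keep it under ~/.cache/mozilla/firefox/*/cache2/entries), and file-type detection via file(1) may miss some encodings:
find ~/.mozilla/firefox/*/Cache -type f -exec file {} + | awk -F': +' '$2 ~ /MPEG|MP3|ID3/ {print $1}'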
A possible simplification of the egrep-awk-sort command, using find with -exec and xargs.
This will search all directories and ignore the CVS ones. Then it will search all files in the resulting directories and act on them.
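The skeleton of that pattern, with grep standing in as a placeholder for whatever you want to do with the files:
find . -type d -name CVS -prune -o -type f -exec grep -l "pattern" {} +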
Gives you a nice quick summary of how many lines each of your files comprises. (In this example, we just check .c, .h, .php and .pl files.) Since we just use wc -l to count, you'll only get a very rough estimate of how many lines of actual code there are. Use a more sophisticated algorithm if you need one.
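For example:
find . -type f \( -name '*.c' -o -name '*.h' -o -name '*.php' -o -name '*.pl' \) -print0 | xargs -0 wc -l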
Works with files containing spaces and for very large directories.
Find OGG audio files on your *nix box and listen to them using your web browser.
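One hedged way to do it: generate a throwaway HTML page of file:// links and hand it to the default browser (the search root and xdg-open are assumptions, and file names containing HTML-special characters are not escaped):
find ~ -type f -name '*.ogg' -printf '<a href="file://%p">%f</a><br>\n' > /tmp/ogg.html && xdg-open /tmp/ogg.html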
In a folder with many files and folders, you want to move all files where the date is >= the file olderFilesNameToMove and
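A hedged sketch of the first half of that description (the destination is a placeholder, and find's -newer is strictly newer rather than >=):
find . -maxdepth 1 -type f -newer olderFilesNameToMove -exec mv -t /path/to/destination {} +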
My take on the original: even though I like the other's use of -exec echo, sed just feels more natural. This should also be slightly easier to improve. I expanded this into a script as an exercise, which took about 35 minutes (had to look up some docs): http://bitbucket.org/kniht/nonsense/src/7c1b46488dfc/commandlinefu/quick_image_gallery.py
Thanks to flatcap for optimizing this command. This command takes advantage of the ext4 filesystem's resistance to fragmentation. By using it, files that were previously fragmented will be copied / deleted / pasted, essentially giving the filesystem another chance at saving the file contiguously. (Unlike FAT / NTFS, *nix filesystems always try to save a file without fragmenting it.) My command only affects the home directory and only those files you have R/W (read/write) permissions on.

There are two issues with this command: 1. it really won't help much; it works, but Linux doesn't suffer much (if any) fragmentation, and even fragmented files have fast I/O; 2. it doesn't discriminate between fragmented and non-fragmented files, so a large ~/ directory with no fragments will take almost as long as an equally sized fragmented ~/ directory.

The benefits I managed to work into the command: 1. it only defragments files under 16MB, because a large file with fragments isn't as noticeable as a small file that's fragmented, and copy/delete/paste of large files would take too long; 2. it gives a nice countdown in the terminal so you know how much progress is being made, and just like other defragmenters you can stop at any time (use ctrl+c); 3. it's fast! I can defrag my ~/ directory in 11 seconds thanks to the ramdrive powering the command's temporary storage.

Bottom line: 1. it's only an experiment, safe (I've used it several times for testing) but probably not very effective (unless you somehow have a fragmentation problem on Linux); it might be a placebo for recent Windows converts looking for a defrag utility on Linux who won't accept no for an answer; 2. it's my first commandlinefu command.
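A hedged reconstruction of the idea, not the author's exact command (bash and a /dev/shm ramdisk with enough free space are assumed; the countdown is omitted, and interrupting it between the two moves leaves the current file in /dev/shm):
find ~ -type f -writable -size -16M -print0 | while IFS= read -r -d '' f; do mv -- "$f" /dev/shm/defrag.tmp && mv -- /dev/shm/defrag.tmp "$f"; done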