find -exec is evil, since it launches a separate process for each file. You get the total as a bonus. Also, without -n, sort sorts in lexical order (so "9" comes after "10").
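A quick illustration of the lexical-versus-numeric point, using a throwaway list of numbers:

```shell
# Lexical sort compares character by character, so "10" sorts before "9";
# -n compares the values numerically instead.
printf '10\n9\n2\n' | sort      # lexical: 10, 2, 9
printf '10\n9\n2\n' | sort -n   # numeric: 2, 9, 10
```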
This is an updated version that someone provided me, via another "find" command, to find files over a certain size. Keep in mind you may have to mess around with the print values depending on your system to get the output you want. This was tested on Fedora Core and CentOS-based servers. (Thanks to berta for the update.)
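A minimal sketch of the size-based search, using a throwaway directory and a hypothetical 10 KB threshold (the -printf format is GNU find; BSD find users would need a different approach):

```shell
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/big" bs=1024 count=20 2>/dev/null   # 20 KB file
: > "$dir/small"                                             # empty file

# List files larger than 10 KiB, printing size in bytes and path
# (-printf is GNU-find-specific)
find "$dir" -type f -size +10k -printf '%s\t%p\n'
```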
What *have* I been working on for the last 2 weeks...
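One way to answer that question is to list recently modified files; a sketch, assuming a 14-day window (touch -d is GNU date syntax):

```shell
dir=$(mktemp -d)
touch "$dir/recent"
touch -d '30 days ago' "$dir/old"   # GNU touch date syntax

# Files modified within the last 14 days
find "$dir" -type f -mtime -14
```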
Omit "> ~/Desktop/MyAppList`date +%s.txt`" if you don't want to print to a file on your desktop and instead only want to display to the console. Created and tested on: ProductName: Mac OS X; ProductVersion: 10.6.3; BuildVersion: 10D573.
This command allows you to stream your log files, including gzipped files, into one stream, which can be piped to awk or some other command for analysis.
Note: if your version of 'find' supports it, use:
find /var/log/apache2 -name 'access.log*gz' -exec zcat {} + -or -name 'access.log*' -exec cat {} +
Find the source file that contains the most lines in your workspace :)
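A sketch of one way to do this, assuming C sources as the file pattern. Using -exec wc -l {} \; runs wc once per file, which avoids the combined "total" line that a multi-file wc would add (and would otherwise always sort last):

```shell
dir=$(mktemp -d)
printf 'a\nb\nc\n' > "$dir/three.c"
printf 'a\n'       > "$dir/one.c"

# Count lines per file, sort numerically, keep the largest
find "$dir" -name '*.c' -exec wc -l {} \; | sort -n | tail -1
```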
Please be careful while executing the following command, as you don't want to delete files by mistake. The best practice is to execute the same command with ls -l first, to make sure you know which files will be deleted when you execute the command with rm.
The following command finds all the files not modified in the last 5 days under the /protocollo/paflow directory and creates an archive file under /var/dump-protocollo in the format ddmmyyyy_archive.tar.
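A sketch of the idea with throwaway directories standing in for the real paths (tar's -T - reads the file list from stdin; GNU tar strips leading slashes from member names):

```shell
src=$(mktemp -d)
dst=$(mktemp -d)
touch -d '10 days ago' "$src/stale"   # older than 5 days: archived
touch "$src/fresh"                    # recent: left alone

# -mtime +5 selects files not modified in the last 5 days;
# the list is piped to tar via -T - (read names from stdin)
find "$src" -type f -mtime +5 |
  tar -cf "$dst/$(date +%d%m%Y)_archive.tar" -T - 2>/dev/null
tar -tf "$dst/$(date +%d%m%Y)_archive.tar"
```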
I use this sometimes when ctags won't help.
Replace the echo command with whatever commands you want. 'read' reads a line from stdin and places the text in the variable; the stdin of the while loop comes from the find command. Note that for simple commands, an easier way is the '-exec' option of find. My command is useful if you want to execute multiple commands in the loop.
Alternate version: delete all files older than one day, with a file size other than 0 bytes, starting from the current working directory. Remove the -delete parameter to see which files it would delete.
You define your variable MYVAR with the desired search pattern: MYVAR= ...which can then be searched with the find command. This is useful in a script, where you want the arguments to be fed into the find command. The provided search is case-insensitive (-iname) and will find all files and directories containing the pattern MYVAR (not just exact matches). This may go without saying, but if you want exact matches, remove the \*, and if you want case sensitivity, use the -name argument.
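A sketch of the pattern-in-a-variable approach, with a hypothetical value for MYVAR and throwaway files:

```shell
MYVAR=report   # hypothetical search pattern
dir=$(mktemp -d)
touch "$dir/Annual_Report_2020" "$dir/notes"

# -iname is case-insensitive; the surrounding wildcards allow partial matches
find "$dir" -iname "*${MYVAR}*"
```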
This works fine too.
This file can now be played with mpg321 -@ ~/mylist
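For context, such a playlist file can be built by redirecting a find into it; a sketch with throwaway files and a hypothetical playlist location:

```shell
dir=$(mktemp -d)
touch "$dir/one.mp3" "$dir/two.mp3" "$dir/cover.jpg"

# Collect every .mp3 path, one per line, into a playlist file
find "$dir" -name '*.mp3' > "$dir/mylist"
cat "$dir/mylist"
```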
If you would like to ignore a directory and its subdirectories, for example a tmp/ directory:
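A sketch of pruning a tmp/ directory out of a find, using a throwaway tree:

```shell
dir=$(mktemp -d)
mkdir -p "$dir/tmp/sub" "$dir/src"
touch "$dir/tmp/sub/skipme" "$dir/src/keepme"

# -prune stops find from descending into tmp/;
# -print after -o applies only to everything else
find "$dir" -path "$dir/tmp" -prune -o -type f -print
```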
The find command isn't the important bit here: it's just what feeds the rest of the pipe (this one looks for all PDFs less than 7 days old, in an archive directory whose structure is defined by a wildcard pattern; modify this find to suit your real needs).

I consider the next bit the useful part. xargs stats out the byte size of each file, and this is passed to awk, which adds them all together and prints the grand total. I use printf in order to override awk's tendency to switch to exponential output above a certain threshold, and specifically "%0.0f\n", because it was all I could find to force things back to digits on Red Hat systems.

This is then passed to an optional sed, which formats the numbers in a US/UK number format, to make large numbers easier to read. Change the comma in the sed for your preferred separator character (e.g. sed -r ':L;s=\b([0-9]+)([0-9]{3})\b=\1 \2=g;t L' for most European countries). (This sed is credited to user 'archtoad6', on the LinuxQuestions forum.)

This is useful for monitoring changes in the storage use within large and growing archives of files, and appears to execute much more quickly than some options I have seen (use of a 'for SIZE in find-command -exec du' style approach, instead, for instance). I just ran it on a not particularly spectacular server, where a directory tree with over three thousand subdirectories, containing around 4000 files of about 4 Gigs total, responded in under a second.
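The core of the pipe described above can be sketched like this, with a throwaway directory standing in for the real archive (stat -c %s is GNU coreutils; the -print0/-0 pairing handles odd filenames safely):

```shell
dir=$(mktemp -d)
printf '12345' > "$dir/a.pdf"   # 5 bytes
printf '123'   > "$dir/b.pdf"   # 3 bytes

# stat -c %s prints each file's size in bytes (GNU coreutils);
# awk sums them, and printf "%0.0f\n" keeps large totals
# out of exponential notation
find "$dir" -name '*.pdf' -mtime -7 -print0 |
  xargs -0 stat -c %s |
  awk '{ total += $1 } END { printf "%0.0f\n", total }'
```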