Find files and calculate the total size of the results with stat in the shell
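A minimal sketch of the pattern named in the title; the path and name pattern below are illustrative assumptions:

```sh
# Find matching files and total their sizes with stat (GNU stat's -c '%s'
# prints size in bytes). Path and pattern are placeholders.
find /var/log -type f -name '*.log' -exec stat -c '%s' {} + \
  | awk '{ total += $1 } END { print total " bytes" }'
```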
Based on the capsule8 agent examples; not rigorously tested.
Benchmark a SQL query against MySQL Server. The example runs the query 10 times and reports the average runtime. To ensure that the query does not get cached, put `RESET QUERY CACHE;` at the top of the query file.
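A minimal sketch of the idea, assuming GNU date, a query file named query.sql, and a working `mysql` client login (database name `mydb` is a placeholder):

```sh
# Run the query 10 times and average the wall-clock time in milliseconds.
total=0
for i in $(seq 1 10); do
  start=$(date +%s%N)
  mysql mydb < query.sql > /dev/null
  total=$((total + ($(date +%s%N) - start)))
done
echo "average: $((total / 10 / 1000000)) ms"
```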
This is good for variables: if you have many script-created files, it tells you which one is the most recently created/changed one.
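A minimal sketch of the usual approach; the pattern and variable name are illustrative:

```sh
# Capture the most recently modified file matching a pattern in a variable.
# ls -t sorts by modification time, newest first.
latest=$(ls -t script-output-*.log | head -n 1)
echo "most recent: $latest"
```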
This renames a pattern-matched bunch of files by their last-modified time.
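A minimal sketch, assuming GNU date (whose -r flag reads a file's modification time); the pattern and extension are illustrative:

```sh
# Rename every matching file to its last-modified timestamp, keeping the
# extension. -n avoids clobbering if two files share a timestamp.
for f in *.log; do
  mv -n -- "$f" "$(date -r "$f" +%Y%m%d-%H%M%S).log"
done
```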
A problem arises when an ebuild gets removed from Portage and you end up with an old, unmaintained package that you cannot find the standard way. This one-liner gives you a list of those packages.
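A hedged sketch of the idea (the original one-liner is not shown): it assumes the classic /usr/portage tree location (newer systems use /var/db/repos/gentoo) and does only naive version stripping.

```sh
# List installed packages whose ebuilds no longer exist in the tree.
for d in /var/db/pkg/*/*; do
  pkg=${d#/var/db/pkg/}             # e.g. app-misc/foo-1.2.3-r1
  cat=${pkg%%/*}
  pn=${pkg##*/}; pn=${pn%%-[0-9]*}  # strip the version, roughly
  [ -d "/usr/portage/$cat/$pn" ] || echo "$cat/$pn"
done
```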
The find command isn't the important bit here: it's just what feeds the rest of the pipe (this one looks for all PDFs less than 7 days old, in an archive directory whose structure is defined by a wildcard pattern; modify this find to suit your real needs). I consider the next bit the useful part: xargs stats out the byte size of each file, and this is passed to awk, which adds them all together and prints the grand total. I use printf in order to override awk's tendency to switch to exponential output above a certain threshold, and specifically "%0.0f\n" because it was all I could find to force plain decimal output on Red Hat systems.

This is then passed to an optional sed, which formats the total in a US/UK number format, to make large numbers easier to read. Change the comma in the sed to your preferred separator character (e.g. sed -r ':L;s=\b([0-9]+)([0-9]{3})\b=\1 \2=g;t L' for most European countries). (This sed is credited to user 'archtoad6' on the LinuxQuestions forum.)

This is useful for monitoring changes in storage use within large and growing archives of files, and it appears to execute much more quickly than some alternatives I have seen (a 'for SIZE in find-command -exec du' style approach, for instance). I just ran it on a not particularly spectacular server, where a directory tree with over three thousand subdirectories, containing around 4000 files totalling about 4 GB, responded in under a second.
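A reconstruction of the described pipeline; the original find pattern is not shown, so the path and -mtime window here are illustrative:

```sh
# Total the byte size of recent PDFs, then insert thousands separators.
find /archive/*/pdf -name '*.pdf' -mtime -7 -print0 \
  | xargs -0 stat -c '%s' \
  | awk '{ total += $1 } END { printf "%0.0f\n", total }' \
  | sed -r ':L;s=\b([0-9]+)([0-9]{3})\b=\1,\2=g;t L'
```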
Cleaner, but probably less portable. Works with bash 4 and should also work with bash 3. Note that $(( )) arithmetic expansion is actually POSIX; it's (( )) that is the bashism.
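A quick illustration of the distinction:

```sh
# POSIX arithmetic expansion: works in plain sh as well as bash.
i=0
i=$((i + 1))

# Arithmetic command: a bash/ksh extension, not POSIX sh.
(( i++ ))
echo "$i"
```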
A new way to replace text in a file with dd; faster than head, sed, or awk if you do this with a big file.
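A hedged sketch of one dd technique that fits this description: overwriting bytes at a known offset in place, so the rest of the big file is never rewritten. The filename, offset, and replacement text are illustrative.

```sh
# conv=notrunc keeps dd from truncating the file; seek skips to the
# target offset. bs=1 is slow per byte but fine for a small patch.
printf 'NEW TEXT' | dd of=bigfile.txt bs=1 seek=1024 conv=notrunc
```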
Show running time, an ETA, and a progress bar.
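A hedged guess at the tool in question: pv shows elapsed time (-t), an ETA (-e), and a progress bar (-p) for data moving through a pipe. The filenames are illustrative.

```sh
# pv knows the input file's size, so it can estimate time remaining.
pv -pte large-backup.tar.gz | gzip -d > large-backup.tar
```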
Sometimes cache files or other garbage get added to your SVN repository. This is how I normally clean those up when the actual files are already gone from disk.
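A minimal sketch of the usual recipe for this situation:

```sh
# svn status marks files that are versioned but missing on disk with '!';
# strip the status column and schedule each for deletion from SVN.
svn status | grep '^!' | sed 's/^! *//' | xargs -I{} svn rm "{}"
```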
Here's a version which works on OS X.
Mac OS X doesn't come with a locate command. This will do the same thing as the locate command on a typical Linux OS. Simply add it to your ~/.bash_profile.
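A hedged sketch of one common approach (the original snippet is not shown): emulate locate with Spotlight's mdfind. The function name mirrors the Linux command for convenience.

```sh
# Add to ~/.bash_profile: search the Spotlight index by filename.
locate() {
  mdfind -name "$1"
}
```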
One of the solutions from this Stack Exchange question: http://unix.stackexchange.com/questions/71585/convert-ls-l-output-format-to-chmod-format
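A hedged sketch of the idea behind that thread: turn an ls -l mode string (e.g. -rwxr-xr--) into octal chmod digits. This version ignores setuid/setgid/sticky subtleties and assumes filenames without spaces.

```sh
# Chars 2-10 of the mode string are the rwx bits; build the 9-bit value
# and print it in octal next to the filename.
ls -l | awk '$1 ~ /^[-dl]/ {
  n = 0
  for (i = 2; i <= 10; i++)
    n = 2 * n + (substr($1, i, 1) != "-")
  printf "%03o %s\n", n, $NF
}'
```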