This one performs better, as it is a one-pass count with awk. For this script it might not matter, but for others it is a good optimization.
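As a generic illustration of the difference (input.txt is a placeholder):

  sort input.txt | uniq -c                                                 # must sort the whole input first
  awk '{count[$0]++} END {for (k in count) print count[k], k}' input.txt   # one pass, no sort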
Display info for all processes whose PID is less than 300.
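One way to express that, assuming the common ps -ef layout where the PID is the second column (NR == 1 keeps the header row):

  ps -ef | awk 'NR == 1 || $2 < 300'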
I use this one-liner to search my source code to find out where tags are named, since there's no easy way in Xcode to see which values have already been used.
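The exact pattern depends on the codebase, but the idea could look something like this hypothetical grep over Objective-C sources (GNU grep assumed for --include):

  grep -rnE 'setTag:[0-9]+|\.tag *= *[0-9]+' --include='*.m' . | sort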
I don't like doing a massive sort on all the directory names just to get a small set of them. The above shows a sorted list of all directories over 1GB; use head as well if you want. du's "-x" flag limits this to one file system. That's mostly useful when you run it on "/" but don't want "/proc", "/dev", and so forth. Remember, though, that it will also exclude "/home" or "/var" if those are separate partitions. The "-a" option is often useful too, for listing large files as well as large directories, though it might be slower.
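A minimal sketch of the filter-first idea, assuming du reporting 1K blocks (so 1048576 blocks is 1 GiB); only the small filtered set gets sorted:

  du -kx / | awk '$1 > 1048576' | sort -n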
Counts of messages by recipient, with frozen messages excluded.
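A rough sketch against Exim's queue listing (exim -bp's exact layout varies, so treat the patterns as assumptions): skip message blocks marked frozen, then tally the indented recipient lines.

  exim -bp \
    | awk '/frozen/ {skip = 1}
           /^$/    {skip = 0}
           !skip && /^ +[^<> ]+@[^<> ]+$/ {count[$1]++}
           END {for (r in count) print count[r], r}' \
    | sort -nr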
Perhaps you should use CMD[$2] instead of CMD[$4].
dumpfile is a CSV file whose first field is a phone number in the format CC+10 digits. Empty lines are deleted before the output, which is in the format "prefix,occurrences".
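A sketch of the idea, assuming the "prefix" is the first four digits of the number (adjust the substr length to match the actual country-code width); NF skips the empty lines:

  awk -F, 'NF {count[substr($1, 1, 4)]++} END {for (p in count) print p "," count[p]}' dumpfile | sort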
bash-3.2$ find /logs -ls -xdev | sort -nrk 7 | head -10
1761905 205380 -rwxrwxr-x 1 wsadmin logadmin 210095353 Jul 22 01:33 /logs/intlpymt/Trace.log
652689 187360 -rwxrwxr-x 1 wsadmin logadmin 191663182 Jul 21 23:00 /logs/websphere/wsfpp1lppwa1213omsecureServer/SystemOut_13.07.21_23.00.12.log
2380449 186536 -rwxrwxr-x 1 wsadmin logadmin 190819939 Jul 16 14:03 /logs/omset/traceIntl.log.201307161403.lppwa1213.gz
2119524 183888 -rwxrwxr-x 1 wsadmin logadmin 188110111 Jul 22 01:33 /logs/intlpymt/intlpymtria/Trace.log
652816 160332 -rwxrwxr-x 1 wsadmin logadmin 164011871 Aug 14 2012 /logs/websphere/wsfpp1lppwa1213omsecureServer/SystemOut.log_08142012.gzip
653312 128916 -rwxrwxr-x 1 wsadmin logadmin 131873943 Jul 18 10:49 /logs/websphere/heapdump.20130718.104150.27592.0006.phd.201307181406.lppwa1213.gz
653320 128916 -rwxrwxr-x 1 wsadmin logadmin 131873735 Jul 18 10:40 /logs/websphere/heapdump.20130718.104012.27592.0002.phd.201307181406.lppwa1213.gz
653309 128912 -rwxrwxr-x 1 wsadmin logadmin 131867602 Jul 18 10:46 /logs/websphere/heapdump.20130718.104008.27592.0001.phd.201307181405.lppwa1213.gz
653323 128872 -rwxrwxr-x 1 wsadmin logadmin 131828157 Jul 18 10:41 /logs/websphere/heapdump.20130718.104109.27592.0004.phd.201307181407.lppwa1213.gz
652783 120288 -rwxrwxr-x 1 wsadmin logadmin 123047750 Aug 13 2012 /logs/websphere/wsfpp1lppwa1213omsecureServer/SystemOut.log_0813.2012.gzip
bash-3.2$
A bit of a contrived example, and it plays to my OCD, but it is nice for quick scripted output of listening ports, sorted by port, IP address, and protocol.
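One plausible way to produce that kind of listing, assuming iproute2's ss and IPv4-style address:port values (IPv6 brackets would need extra handling); the columns printed are port, address, protocol:

  ss -tulnH | awk '{n = split($5, a, ":"); print a[n], a[1], $1}' | sort -k1,1n -k2,2 -k3,3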
The wajig package is not installed by default.
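On Debian-based systems it can be installed in the usual way:

  sudo apt-get install wajig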
Avoiding UUOC! cut can handle files as well. No need for a cat.
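Side by side (/etc/passwd is just an example input):

  cat /etc/passwd | cut -d: -f1   # useless use of cat
  cut -d: -f1 /etc/passwd         # cut reads the file directly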
When trying to find an error in a hosted project, it's interesting to find out how the source is organized: are there .inc files? Or .php files only? Or .xml files that probably contain translated texts?
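One way to get that overview is to count files by extension (files without a dot in their name are skipped here):

  find . -type f -name '*.*' | sed 's/.*\.//' | sort | uniq -c | sort -rn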
"find ./ ..." could be replaced with "find $PWD ..." to display absolute path instead of relative path. Show Sample Output
* Find all file sizes and file names from the current directory down (replace "." with a target directory as needed).
* Sort the file sizes in numeric order.
* List only the duplicated file sizes.
* Drop the file sizes so that only a list of files remains (retaining order).
* Calculate md5sums on all of the files.
* Replace the first instance of two spaces (md5sum output) with a \0.
* Drop the unique md5sums so only duplicate files remain listed.
* Use AWK to aggregate identical files on one line.
* Remove the blank line from the beginning (this could be done more efficiently by putting another "IF" into the AWK command, but then the whole line exceeded the 255-char limit).

Each output line contains the md5sum and then all of the files that share that identical md5sum. All fields are \0 delimited; all records are \n delimited. A loose re-creation of the pipeline is sketched below.
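This sketch approximates the steps above with GNU find, xargs, and md5sum; for readability it uses newline-delimited records instead of \0, so filenames containing newlines are not handled:

  find . -type f -printf '%s %p\n' \
    | sort -n \
    | awk '{c[$1]++; line[NR] = $0; key[NR] = $1}
           END {for (i = 1; i <= NR; i++) if (c[key[i]] > 1) print line[i]}' \
    | sed 's/^[0-9]* //' \
    | xargs -d '\n' md5sum \
    | sort \
    | awk '{c[$1]++; line[NR] = $0; key[NR] = $1}
           END {for (i = 1; i <= NR; i++) if (c[key[i]] > 1) print line[i]}' \
    | awk '{files[$1] = files[$1] " " substr($0, 35)}
           END {for (h in files) print h files[h]}'

The first awk filter keeps only lines whose size field occurs more than once; the same filter is reused on the md5sum output to keep only duplicated checksums, and the final awk gathers each checksum's files onto one line.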
Shows the 10 biggest files/dirs
This will give you the details in MB, from high to low.
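A plausible rendering of the last two comments, assuming GNU du: per-entry sizes in megabytes, largest first, top 10 only (drop head to see everything):

  du -sm -- * | sort -rn | head -10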
Remove duplicate lines in a text file.
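Two common ways to do it (file.txt is a placeholder; the awk version keeps the original line order):

  sort -u file.txt
  awk '!seen[$0]++' file.txt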