I'm currently working on a group project and am annoyed at the lack of output from my teammates. Wanting hard metrics of how awesome I am and how awesome they aren't, I wrote this command. It prints a full recursive repository listing of all files, removes the directories (which confuse blame), runs svn blame on each individual file, and tallies the resulting line counts per author. It can be quite slow, depending on your repository location, because blame must hit the server for each individual file. You can remove the -R on the first part to print the tallies for just the current directory.
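Roughly, the pipeline would look something like this (a sketch of the steps described above; the per-author tally via sort and uniq is my reconstruction, not necessarily the original command):
    svn ls -R | grep -v '/$' | while read -r f; do svn blame "$f"; done | awk '{ print $2 }' | sort | uniq -c | sort -rn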
Counts the total (recursive) number of files in the immediate (depth 1) subdirectories, as well as in the current one, and displays them sorted. Fixed, as per ashawley's comment.
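One way to get that behaviour (a sketch, not necessarily the original command):
    find . -maxdepth 1 -type d | while read -r d; do printf '%s\t%s\n' "$(find "$d" -type f | wc -l)" "$d"; done | sort -n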
The same as the other two alternatives, but now less forking! Instead of using '\;' to mark the end of an -exec command in GNU find, you can simply use '+' and it'll run the command only once with all the files as arguments. This has two benefits over the xargs version: it's easier to read and spaces in the filenames work automatically (no -print0). [Oh, and there's one less fork, if you care about such things. But, then again, one is equal to zero for sufficiently large values of zero.]
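For example, counting lines in all C sources with a single wc invocation (the file pattern is my choice for illustration):
    find . -name '*.c' -exec wc -l {} +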
Most of the "most used commands" approaches do not consider pipes and other complexities. This approach handles pipes, process substitution via backticks or $(), and multiple commands separated by ";". The Perl regular expression breaks up each line at |, <(, ;, ` or $( and picks the first word (excluding "do", in the case of for loops). Note: if you use lots of Perl one-liners, the Perl commands will be over-counted by this approach, since the semicolon is used as a separator.
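A sketch of such a pipeline over your shell history (the exact regular expression here is my reconstruction from the description, not the original):
    history | perl -ne 's/^\s*\d+\s+//; for (split /[|;`]|\$\(|<\(/) { print "$1\n" if /^\s*(?:do\s+)?(\S+)/ }' | sort | uniq -c | sort -rn | head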
This command gives you the number of lines of every file in the folder and its subfolders matching the search options specified in the find command. It also gives the total number of lines in these files. The combination of the -print0 and --files0-from options makes the whole command simple and efficient.
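For example, for all C files (adjust the find predicates to taste):
    find . -name '*.c' -print0 | wc -l --files0-from=-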
Gives you a nice quick summary of how many lines each of your files contains. (In this example, we just check .c, .h, .php and .pl files.) Since we just use wc -l to count, you'll only get a very rough estimate of how many lines of actual code there are. Use a more sophisticated algorithm instead if you need to.
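A sketch of one way to do it, grouping the -name predicates with -o (the exact form of the original may differ):
    find . \( -name '*.c' -o -name '*.h' -o -name '*.php' -o -name '*.pl' \) -print0 | xargs -0 wc -l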
For each directory under the current one, list the count of files in each of those directories. Change the -maxdepth to drill down further through directories.
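A possible form of such a command (my sketch, based on the -maxdepth reference above):
    find . -maxdepth 1 -type d | while read -r d; do echo "$d: $(find "$d" -type f | wc -l)"; done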
-L is for following symbolic links; it can be omitted, and then you can run the find over your whole / directory.
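For illustration, counting files while following symlinks (the path and predicates here are hypothetical, since the original command isn't shown):
    find -L /usr/share -type f | wc -l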
I often find the need to number enumerations and other lists when programming. With this command, create a new file called 'inputfile' with the text you want to number. Paste the contents of 'outputfile' back into your source file and fix the tabbing if necessary. You can also change this to output hex numbering by changing the "%02d" to "%02x". If you need to start at 0, replace "NR" with "NR-1". I adapted this from http://osxdaily.com/2010/05/20/easily-add-line-numbers-to-a-text-file/.
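The command being described is presumably along these lines (a sketch based on the %02d and NR references above):
    awk '{ printf "%02d\t%s\n", NR, $0 }' inputfile > outputfile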
It does not work without verbose mode (the -v flag is important).
In this example, the command will recursively find files (-type f) under /some/path, where the path ends in .mp3, case insensitive (-iregex).
It will then output a single line of results (-print0), with each result terminated by the null character (octal 000), suitable for piping to xargs -0. This type of output avoids issues with garbage in paths, like unclosed quotes.
The tr command then strips away everything but the null chars, finally piping to wc -c, to get a character count.
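Put together, the command presumably looks like this (a reconstruction from the description above):
    find /some/path -type f -iregex '.*\.mp3' -print0 | tr -d -c '\000' | wc -c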
I have found this very useful to verify one is getting the right number of results before actually processing them through xargs or similar. Yes, one can issue the find without the -print0 and use wc -l; however, if you want to be 1000% sure your find command is giving you the expected number of results, this is a simple way to check.
The approach can be made into a function and then included in .bashrc or similar. e.g.
count_chars() { tr -d -c "$1" | wc -c; }
In this form it provides a versatile character counter of text streams :)
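For example, counting NUL-terminated find results as above:
    find /some/path -type f -print0 | count_chars '\000'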
Make usable on OS X with filenames containing spaces. Note: it will still break if filenames contain newlines... possible, but who does that?!
Count your source and header files' lines. This ignores blank lines, C++-style comments and single-line C-style comments. It will not ignore blank lines containing tabs, or multi-line C-style comments.
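A rough sketch of such a filter (the exact patterns are my guess at what is described above):
    cat *.c *.h | grep -v -e '^$' -e '^[[:space:]]*//' -e '^[[:space:]]*/\*.*\*/[[:space:]]*$' | wc -l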
First, the find command finds all files in your current directory (.). This is piped to xargs so that the next shell pipeline can run in parallel. The xargs -P argument specifies how many processes to run in parallel; you can set this higher than your core count, as reading the durations is mainly I/O-bound. The -print0 and -0 arguments of find and xargs, respectively, are used to safely handle files with spaces or other special characters. xargs executes a subshell so that there is a shell pipeline for each file that find produces. This pipeline extracts the duration and converts it to a format easily parsed by awk: ffmpeg reads the file and prints a lot of information about it, and grep extracts the duration line. cut and sed cut out the time information, and tr converts the last . to a : to make it easier to split in awk. awk is a specialized programming language for use in shell scripts; here we use it to split the time elements into 4 variables and add them up.
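A sketch assembling those pieces (the file pattern, the parallelism level, and the exact field positions are assumptions on my part):
    find . -name '*.mp4' -print0 | xargs -0 -P 4 -n 1 sh -c 'ffmpeg -i "$1" 2>&1 | grep Duration | cut -d " " -f 4 | sed s/,// | tr "." ":"' _ | awk -F: '{ sec += $1*3600 + $2*60 + $3 + $4/100 } END { printf "%d:%02d:%02d\n", sec/3600, (sec%3600)/60, sec%60 }'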
I created this command to give me a quick overview of how many of each file type a directory, and all its subdirectories, contains. It works based off file extension, rather than file(1)'s magic output, because that ended up being more accurate and less confusing. Files that don't have an extension (README) are generally not important for me to count, but you're free to customize this to fit your needs.
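A minimal version of that idea (a sketch; the original may differ):
    find . -type f -name '*.*' | sed 's/.*\.//' | sort | uniq -c | sort -rn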
find -exec is evil, since it launches a process for each file. You get the total as a bonus. Also, without -n, sort sorts in lexical order (that is, 9 after 10).
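The xargs alternative being suggested presumably looks something like this (a sketch; the file pattern is mine):
    find . -name '*.c' -print0 | xargs -0 wc -l | sort -n
wc prints a total line when given multiple files, hence the bonus total.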
At times I find that I need to loop through a file where the values I need are not on separate lines, but rather separated by a ":" or a ";". In that case, I create a loop within which I define 'IFS' to be something other than a whitespace character. In this example, I iterate through a file which has only one line, and several fields separated by ":". The counter lets me define how many times I want to repeat the loop.
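A sketch of the pattern (the file name, field count and loop body here are hypothetical):
    counter=0; IFS=:; for field in $(cat data.txt); do echo "$field"; counter=$((counter+1)); [ "$counter" -ge 4 ] && break; done; unset IFS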
Change "sort -f" to "sort" and "uniq -ic" to "uniq -c" to make it case sensitive.
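The command in question is presumably a case-insensitive frequency count along these lines (my reconstruction, not the original):
    tr -s '[:space:]' '\n' < file.txt | sort -f | uniq -ic | sort -rn | head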
This sums up the page count of multiple PDF files without the useless use of grep and sed that other commandlinefu entries employ.
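Presumably something like this, summing pdfinfo's "Pages:" lines with awk alone (a sketch):
    for f in *.pdf; do pdfinfo "$f"; done | awk '/^Pages:/ { n += $2 } END { print n }'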
This pattern matches empty lines in the file and -c gives the count
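In full, that is (the file name is a placeholder):
    grep -c '^$' file.txt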
This one performs better, as it is a one-pass count with awk. For this script it might not matter, but for others it is a good optimization.
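A one-pass awk version of the same empty-line count (file name again a placeholder):
    awk '/^$/ { n++ } END { print n+0 }' file.txt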