'dpkg -S' just matches the string you supply it, so using a bare 'ls' as the argument matches any file from any package that has 'ls' anywhere in its filename. So it's usually a good idea to use an absolute path. You can see in the second example that around 12 thousand files known to dpkg match the bare string 'ls'.
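For illustration (the exact matches depend on what is installed):
dpkg -S /bin/ls   # reports only the package owning /bin/ls (coreutils)
dpkg -S ls        # reports every known file with 'ls' anywhere in its path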
This makes an alias for a command named 'busy'. The 'busy' command opens a random file in /usr/include at a random line with vim. Drop this in your .bash_aliases and make sure that file is sourced from your .bashrc.
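The alias itself isn't reproduced here; a minimal sketch of the idea (the shuf-based file pick and the 100-line cap are my assumptions, not the poster's exact code):
alias busy='vim +$((RANDOM % 100 + 1)) "$(find /usr/include -type f | shuf -n 1)"'   # vim simply stops at the last line if the file is shorter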
Works similarly to dpkg -S, but uses the locate database and is thus inarguably a lot faster - if the locatedb is current.
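The command isn't quoted above; presumably something along these lines (my guess):
locate -b '\ls'   # -b matches basenames; the leading backslash forces an exact match instead of a substring match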
Just want to post a Perl alternative. Note that it does not count hidden files ('.' ones).
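The one-liner isn't reproduced above; a sketch of how it might look (my reconstruction, not the poster's exact code):
perl -e 'print scalar(grep { -f } glob("*")), "\n"'   # glob("*") skips dotfiles; -f keeps plain files only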
A simple "ls" lists files *and* directories. So we need to "find" the files (type 'f') only. As "find" is recursive by default we must restrict it to the current directory by adding a maximum depth of "1". If you should be using the "zsh" then you can use the dot (.) as a globbing qualifier to denote plain files: zsh> ls *(.) | wc -l for more info see the zsh's manual on expansion and substitution - "man zshexpn". Show Sample Output
The same as the other two alternatives, but now less forking! Instead of using '\;' to mark the end of an -exec command in GNU find, you can simply use '+' and it'll run the command with as many files at once as possible. This has two benefits over the xargs version: it's easier to read and spaces in the filenames work automatically (no -print0). [Oh, and there's one less fork, if you care about such things. But, then again, one is equal to zero for sufficiently large values of zero.]
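For illustration (the '*.c' pattern is just an example):
find . -name '*.c' -exec wc -l {} +    # one batched invocation instead of one fork per file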
This command gives you the number of lines of every file in the folder and its subfolders matching the search options specified in the find command. It also gives the total number of lines in these files. The combination of the -print0 and --files0-from options makes the whole command simple and efficient.
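The command isn't quoted above; it presumably looks something like this (GNU find and wc assumed):
find . -name '*.c' -print0 | wc -l --files0-from=-    # '-' makes wc read NUL-terminated names from stdin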
Finds all C++, Python, and SWIG files in your present directory (uses "*" rather than "." to exclude invisibles) and counts how many lines are in them. Returns only the last line (the total).
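A hypothetical reconstruction (the extensions and exact flags are my guesses):
find * -name '*.cpp' -o -name '*.h' -o -name '*.py' -o -name '*.i' | xargs wc -l | tail -n 1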
-L is for following symbolic links; it can be omitted, and then you can search in your whole / dir.
There is a limit to how many processes you can run at the same time for each user, especially with web hosts. If the maximum # of processes for your user is 200, then the following sets OPTIMUM_P to 100.
OPTIMUM_P=$(( (`ulimit -u` - `find /proc -maxdepth 1 \( -user $USER -o -group $GROUPNAME \) -type d|wc -l`) / 2 ))
This is very useful in scripts because it is such a fast, low-resource way (compared to ps, who, lsof, etc.) to determine how many processes are currently running for whichever user. The number of currently running processes is subtracted from the upper limit set up for the account (see limits.conf, pam, initscript).
An easy-to-understand example: this searches the current directory for shell scripts, and runs up to 100 'file' commands at the same time, greatly speeding up the command.
find . -type f | xargs -P $OPTIMUM_P -iFNAME file FNAME | sed -n '/shell script text/p'
I am using it in my http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html, especially for the xargs command. xargs has a -P option that lets you specify how many processes to run at the same time. For instance, if you had 1000 URLs in a text file and wanted to download all of them fast with curl, you could download 100 at a time (check ps output on a separate [pt]ty for proof) like this:
cat url-list.txt | xargs -I '{}' -P $OPTIMUM_P curl -O '{}'
I like to do things as fast as possible on my servers. I have several types of servers and hosting environments, some with very restrictive jail shells with a 20-process limit, some with 200, some with 8000, so for the jailed shells my xargs -P10 would kill my shell or dump core. Using the above I can set the -P value dynamically, so xargs always works, as in the curl example above.
If you were building a process killer (very common for cheap hosting), this would also be handy.
Note that if you are only allowed 20 or so processes, you should just use -P1 with xargs.
Another way of counting the line output of tail over 10s, not requiring pv. Cut is used to get the average per-second rate:
tail -n0 -f access.log > /tmp/tmp.log & sleep 10; kill $!; wc -l /tmp/tmp.log | cut -c-2
You can also enclose it in a loop and send stderr to /dev/null:
while true; do tail -n0 -f access.log > /tmp/tmp.log & sleep 2; kill $!; wc -l /tmp/tmp.log | cut -c-2; done 2>/dev/null
There once was a day I needed this info.
It does not work without verbose mode (-v is important).
Count the lines of your source and header files. This ignores blank lines, C++-style comments, and single-line C-style comments. It will not ignore blank lines containing tabs or multi-line C-style comments.
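The command isn't shown above; a rough sketch matching that description (my reconstruction):
find . -name '*.[ch]' -exec grep -hv -e '^$' -e '^//' -e '^/\*.*\*/$' {} + | wc -l    # '^$' skips truly empty lines only, so tab-indented "blank" lines still count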
This is really fast :)
time find . -name \*.c | xargs wc -l | tail -1 | awk '{print $1}'
204753
real 0m0.191s
user 0m0.068s
sys 0m0.116s
A command to find out what the day ends in. It can be edited slightly to find out what "any" output ends in. NB: I haven't tested with weird and wonderful output.
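The command isn't quoted; one plausible form (an assumption on my part):
date +%A | grep -o '.$'    # prints the last character of the day name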
If you use the HISTTIMEFORMAT environment variable, e.g. for timestamping typed commands, $(echo "1 2 $HISTTIMEFORMAT" | wc -w) gives the number of columns per line that contain non-command parts. That should make this command universal.
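For example, that count can locate where the command text starts in history output (a hypothetical usage, assuming whitespace field splitting):
history | awk -v n="$(echo "1 2 $HISTTIMEFORMAT" | wc -w)" '{ for (i = n; i <= NF; i++) printf "%s%s", $i, (i < NF ? OFS : ORS) }'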
Find the source file that contains the most lines in your workspace :)
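The command isn't reproduced; presumably something like (an assumption):
find . -name '*.c' -exec wc -l {} + | sort -n | tail -n 2 | head -n 1    # tail -n 2 | head -n 1 skips wc's final "total" line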
Use bc for decimals...
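For instance:
echo "scale=2; 7 / 3" | bc    # prints 2.33, where plain shell arithmetic would give 2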
Enhancement of the 'busy' command originally posted by busybee: fewer characters, no escaping issues, and most importantly it excludes small files (opening a 5-line file isn't that persuasive, I think ;)). This makes an alias for a command named 'busy'. The 'busy' command opens a random file in /usr/include at a random line with vim.
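The enhanced alias isn't shown either; a sketch of the small-file filter (the 5k size threshold and shuf are my assumptions):
alias busy='f=$(find /usr/include -type f -size +5k | shuf -n 1); vim +$((RANDOM % $(wc -l < "$f") + 1)) "$f"'   # picks a random line within the file's actual length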