$ tail -n +5 /var/log/dmesg | head -n 1
[ 0.000000] KERNEL supported cpus:
Prints one specific line from a file. Handy when you already know which line number you want.
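The tail | head idiom generalizes to any line number. A minimal sketch, assuming a POSIX shell; the `nthline` helper name is made up for illustration:

```shell
# Hypothetical helper (the name "nthline" is illustrative):
# print line $1 of file $2 by skipping to it with tail, then taking one line.
nthline() {
  tail -n "+$1" "$2" | head -n 1
}

printf 'alpha\nbeta\ngamma\n' > /tmp/nthline_demo.txt
nthline 2 /tmp/nthline_demo.txt   # prints "beta"
```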
Just one character longer than the sed version ('FNR==5' versus -n 5p). On my system, without using "exit" or "q", the awk version is over four times faster on a ~900K file using the following timing comparison:
testfile="testfile"
for cmd in "awk 'FNR==20'" "sed -n '20p'"; do
  echo; echo "$cmd"
  eval "$cmd $testfile"
  for i in {1..3}; do
    time for j in {1..100}; do eval "$cmd $testfile" >/dev/null; done
  done
done
Adding "exit" or "q" made the difference between awk and sed negligible and produced a four-fold improvement over the awk timing without the "exit".
For long files, an exit can speed things up:
awk 'FNR==5{print;exit}' <file>
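sed can be made to stop early the same way with its q command. A small sketch, assuming GNU sed (the filename is made up for the demo):

```shell
printf 'one\ntwo\nthree\nfour\nfive\nsix\n' > /tmp/lines_demo.txt

# Both stop reading as soon as line 5 has been printed.
awk 'FNR==5{print;exit}' /tmp/lines_demo.txt   # prints "five"
sed -n '5{p;q}' /tmp/lines_demo.txt            # prints "five"
```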
I don't know if it's better but works fine :)
su root
wc -l file
394955588 file
sync && echo 1 > /proc/sys/vm/drop_caches
time sed -n -e '390955588 p' file
/075c6142-c331-4e78-8e5a-4b7b5520edc1.html

real    8m25.225s
user    1m15.680s
sys     0m10.820s

sync && echo 1 > /proc/sys/vm/drop_caches
time tail -n +390955588 file | head -n 1
/075c6142-c331-4e78-8e5a-4b7b5520edc1.html

real    2m34.209s
user    0m13.160s
sys     0m10.150s

With awk 'FNR==390955588' the wrong line was output, and it took ~11m. The file I tested the posted command on is 21 GB. 100k lines is a small file (maybe about 10 MB), in which case the difference is hard to see. Test machine specs: Intel Core i7 920, 2x 1 TB HDDs in software RAID 1.
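A smaller, reproducible version of this comparison can be run on a generated file. The file size and line number below are illustrative, nothing like the 21 GB case above, so on a file this small all three will look near-instant:

```shell
# Generate a throwaway million-line file for timing.
seq 1 1000000 > /tmp/bench_demo.txt

# Each approach should print "900000"; compare the timings.
time sed -n '900000{p;q}' /tmp/bench_demo.txt
time awk 'FNR==900000{print;exit}' /tmp/bench_demo.txt
time tail -n +900000 /tmp/bench_demo.txt | head -n 1
```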