Commands by jaymzcd (4)


  • Mounts every ISO image in the current directory: for each *.iso file it creates a directory named after the image (minus the extension), then loop-mounts the ISO there.
    2
    for file in *.iso; do mkdir `basename $file | awk -F. '{print $1}'`; sudo mount -t iso9660 -o loop $file `basename $file | awk -F. '{print $1}'`; done
    jaymzcd · 2009-10-17 20:07:31
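    A quoting-safe sketch of the same idea, using bash parameter expansion in place of the basename | awk pipeline (the mkdir -p and the quoting are additions, not the author's exact command):

    for file in *.iso; do
        dir=${file%.iso}                          # strip the .iso extension
        mkdir -p "$dir"                           # -p: no error if it already exists
        sudo mount -t iso9660 -o loop "$file" "$dir"
    done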
  • If you use newsgroups then you'll have come across split files before. Joining a whole batch of them back together can be a pain, so this does the whole folder in one go.


    2
    for file in *.001; do NAME=`echo $file | cut -d. -f1,2`; cat "$NAME."[0-9][0-9][0-9] > "$NAME"; done
    jaymzcd · 2009-07-29 10:04:26
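    The same join written out long-hand with comments (file names here are hypothetical); it relies on the shell expanding the numeric glob in sorted order, so the pieces are concatenated back in sequence:

    for file in *.001; do
        NAME=$(echo "$file" | cut -d. -f1,2)     # movie.avi.001 -> movie.avi
        cat "$NAME".[0-9][0-9][0-9] > "$NAME"    # .001 + .002 + ... -> movie.avi
    done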
  • This prints a summary of the referrers in your logs, as long as they occurred at least a certain number of times (in this case 500). The grep command excludes given terms; I add this in to remove results I'm not interested in.


    1
    awk -F\" '{print $4}' *.log | grep -v "eviljaymz\|\-" | sort | uniq -c | awk -F\ '{ if($1>500) print $1,$2;}' | sort -n
    jaymzcd · 2009-05-05 22:21:04
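    This assumes the Apache combined log format, in which the referrer is the second quoted field, so splitting each line on double quotes leaves it in awk's $4. A sketch with a made-up log line:

    echo '1.2.3.4 - - [05/May/2009:22:21:04 +0100] "GET /post/1 HTTP/1.1" 200 5120 "http://reddit.com/r/linux" "Mozilla/5.0"' \
        | awk -F\" '{print $4}'
    # prints: http://reddit.com/r/linux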
  • I use this (well, I normally just drop the F=*.log bit and put that straight into the awk command) to count how many times I get referred from another site. I know it's rough; it's to give me an idea of where any posts I make are ending up. The reason I do the Q="query" bit is that I often want to check another domain quickly, and it's quick to use CTRL+A to jump to the start of the line and then CTRL+F to move forward the three steps needed to change the grep query. (I find this easier than moving backwards, because if you group a lot of domains with the pipe your command line can get quite messy, so it's normally easier to have it all at the front where you just have to edit it and hit enter.) For people new to the shell, it does the following: the Q= and F= bits just make names we can refer to. awk -F\" '{print $4}' $F reads the file specified by $F and splits it up using double quotes, printing out the fourth column, which in the log is the referrer domain, for egrep to work on. egrep then matches our query against this list from awk. Finally, wc -l gives us the total number of lines, i.e. matches.


    0
    Q="reddit|digg"; F=*.log; awk -F\" '{print $4}' $F | egrep $Q | wc -l
    jaymzcd · 2009-05-05 21:51:16
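    The same pipeline wrapped in a small function, so the query always sits at the front without any command-line editing tricks (the function name refcount is hypothetical):

    # Usage: refcount 'reddit|digg' *.log
    refcount() {
        local query=$1; shift
        awk -F\" '{print $4}' "$@" | egrep "$query" | wc -l
    }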


Check These Out

Instant mirror from your laptop + webcam

Put split files back together, without a for loop
After splitting a file, put the pieces back together a lot faster than doing $ cat file1 file2 file3 file4 file5 > mainfile or $ for i in {0..5}; do cat file$i >> mainfile; done (note the append, >>, so each piece doesn't overwrite the last). When splitting, be sure to use split -d to get numeric suffixes instead of letters.
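A round-trip sketch of that technique (file and prefix names are hypothetical):

$ split -b 10M -d bigfile piece.     # -> piece.00, piece.01, ...
$ cat piece.* > bigfile.joined       # the glob expands in sorted order, no loop needed
$ cmp bigfile bigfile.joined         # verify the round trip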

Huh? Where did all my precious space go?
Sort ls output of all files in the current directory in ascending order. Just the 20 biggest ones:
$ ls -la | sort -k 5bn | tail -n 20
A variant for the current directory tree with subdirectories and pretty columns is:
$ find . -type f -print0 | xargs -0 ls -la | sort -k 5bn | column -t
And finding the subdirectories consuming the most space, with a displayed block size of 1k:
$ du -sk ./* | sort -k 1bn | column -t

get colorful side-by-side diffs of files in svn with vim
This will diff your local version of the file with the latest version in svn. I put this in a shell function like so: $ svd() { vimdiff

GRUB2: set Super Mario as startup tune
I'll let Slayer handle that. Raining Blood for your pleasure.

Keep track of diff progress
You're running a program that reads LOTS of files and takes a long time, but it doesn't tell you about its progress. First, run a command in the background, e.g.
$ find /usr/share/doc -type f -exec cat {} + > output_file.txt &
Then run the watch command; "watch -d" highlights the changes as they happen. In bash, $! is the process id (pid) of the last command run in the background. You can change this to $(pidof my_command) to watch something in particular.
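The watch invocation itself isn't reproduced in this excerpt; a plausible sketch along the lines the description suggests, watching which file descriptors the background job has open:

$ find /usr/share/doc -type f -exec cat {} + > output_file.txt &
$ watch -d "ls -l /proc/$!/fd"   # $! expands to the find's pid; -d highlights changes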

What is my ip?

df output, sorted by Use% and correctly maintaining header row
Show disk space info, grepping out the uninteresting ones beginning with ^none while we're at it. The main point of this submission is the way it maintains the header row with command grouping, by removing it from the pipeline before it gets fed into the sort command. (I'm surprised sort doesn't have an option to skip a header row, actually.) It took me a while to work out how to do this; I thought of it as I was drifting off to sleep last night!
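The command itself isn't shown in this excerpt; a sketch of the grouping idiom the description refers to: read consumes exactly the header line and echoes it back, so sort only ever sees the data rows (column 5 of df -P is Use%):

$ df -P | grep -v '^none' | { read -r header; echo "$header"; sort -rn -k5; }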

Annotate tail -f with timestamps

Watch memcache traffic
View all memcache traffic

