git log --pretty=oneline --abbrev-commit

Simple single-line git log: one commit per line, with abbreviated hashes.


0
By: cdillon
2013-12-23 05:47:01
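
For reference, modern Git has a built-in shorthand that is defined as exactly this combination of options, and the long form can also be saved as an alias (the alias name lg is just an illustration):

    git log --oneline
    git config --global alias.lg 'log --pretty=oneline --abbrev-commit'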

These Might Interest You

  • Converts a file with Windows-style line endings to Unix style. See http://en.wikipedia.org/wiki/Newline. Can also be used in the other direction (linux2dos): just invert \r\n and \n, as in the sketch after this list.


    0
    $ perl -pi -e 's/\r\n/\n/g' <filename>
    unixmonkey4200 · 2009-04-10 22:22:31 3
  • This command might not be useful for most of us; I just wanted to share it to show the power of the command line. It downloads the plain-text version of the novel David Copperfield from Project Gutenberg and generates a single column of words, after which the occurrences of each word are counted by the sort | uniq -c combination. Numbers and single characters are removed from the count. I'm sure you can write a shorter version; one attempt is sketched after this list.


    -4
    wget -q -O- http://www.gutenberg.org/dirs/etext96/cprfd10.txt | sed '1,419d' | tr "\n" " " | tr " " "\n" | perl -lpe 's/\W//g;$_=lc($_)' | grep "^[a-z]" | awk 'length > 1' | sort | uniq -c | awk '{print $2"\t"$1}'
    alperyilmaz · 2009-05-04 16:00:39 8
  • Replaces "650" with "999" in simple.xml. XML used: http://www.w3schools.com/xml/simple.xml. A way to verify the edit is sketched after this list.


    1
    xmlstarlet ed -u '//food[calories="650"]/calories' -v "999" simple.xml
    zlemini · 2016-10-05 18:29:01 0
  • Splits the input on commas and prints it in a nice column format. This does not work for CSV rows that have "," between quotes or contain newline characters, so use it only on simple CSV files. A self-contained example follows this list.


    18
    column -s, -t <tmp.csv
    pykler · 2009-09-24 20:57:32 0
  • As an alternative to piping through an additional grep -v grep, you can use a simple regular expression in the search pattern (the first letter wrapped in a single-letter character class) to drop the grep command itself from the output; a short demonstration of why this works follows this list.


    66
    ps aux | grep '[p]rocess-name'
    olorin · 2009-08-13 05:44:45 10
  • Something I do a lot is extract columns from input where cut is not suitable because the columns are separated not by a single character but by multiple spaces or tabs, so I often type things like ... | awk '{print $7, $8}', which is a lot of typing, slowed down further by symbols like '{}$. The simple one-line function above makes it easier and faster: ... | col 7 8. How it works: the one-liner defines a new function named col. The function runs awk and expects standard input (coming from a pipe or input redirection). The arguments are preprocessed with sed before being spliced into the awk program: every - becomes NF- (this is what lets negative indexes address columns relative to the end of the line) and every space becomes ),$( so that, for example, 7 8 becomes 7),$(8 and the resulting program is the well-formed awk '{print $(7),$(8)}'. Credit: http://www.bashoneliners.com/oneliners/oneliner/144/. Usage examples follow this list.


    0
    col() { awk '{print $('$(echo $* | sed -e s/-/NF-/g -e 's/ /),$(/g')')}'; }
    tekniq · 2014-06-05 18:01:31 0
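
A few follow-up sketches for the commands above. For the newline converter, the inverse direction (linux2dos) mentioned in its description would look like this; a minimal sketch using the same placeholder convention:

    $ perl -pi -e 's/\n/\r\n/g' <filename>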
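
For the David Copperfield word count, one possible shorter version of the same idea: lowercase everything, extract runs of two or more letters, then count. The sed '1,419d' header strip is specific to that file and is carried over unchanged:

    wget -q -O- http://www.gutenberg.org/dirs/etext96/cprfd10.txt | sed '1,419d' | tr 'A-Z' 'a-z' | grep -oE '[a-z]{2,}' | sort | uniq -c | awk '{print $2"\t"$1}'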
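
For the xmlstarlet edit, note that ed -u on its own only prints the modified document to stdout; adding -L writes the change back to the file. The second line is a sketch of how the edit could then be verified, by selecting the name of the food whose calories are now 999:

    xmlstarlet ed -L -u '//food[calories="650"]/calories' -v "999" simple.xml
    xmlstarlet sel -t -v '//food[calories="999"]/name' simple.xml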
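
For the CSV formatter, a self-contained example with made-up data:

    printf 'name,age,city\nalice,30,oslo\nbob,7,rome\n' | column -s, -t

which prints the three fields of each row aligned in padded columns.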
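
For the [p]rocess-name trick: the grep process's own command line contains the literal brackets, and the pattern, a character class matching only the letter p, does not match those brackets, so grep no longer finds itself. Quoting the pattern also stops the shell from glob-expanding it if a matching file happens to exist in the current directory. Assuming an sshd process is running:

    $ ps aux | grep 'sshd'     # matches sshd and the grep itself
    $ ps aux | grep '[s]shd'   # matches only sshd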
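
For the col function, a usage sketch; note that with this implementation -1 selects the second-to-last field, since -1 is rewritten to NF-1:

    $ echo 'one two three four' | col 1 3
    one three
    $ echo 'one two three four' | col -1
    three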
