Commands tagged HEAD (26)

  • Run the alias command, then issue ps aux | head and resize your terminal window (putty/console/hyperterm/xterm/etc), then issue the same command again and you'll understand.

    ${LINES:-`tput lines 2>/dev/null||echo -n 12`} instructs the shell that if LINES is not set or null it should use the output of `tput lines` (ncurses-based terminal access) to get the number of lines in your terminal. If that doesn't work either, it falls back to a default of 12 (minus 2 = 10).

    The default for head is to output the first 10 lines; this alias changes the default to output the first x lines instead, where x is the number of lines currently displayed on your terminal minus 2. The -2 is there so that the top line displayed is the command you ran that used head, i.e. the prompt. If your PS1 and/or PROMPT_COMMAND output more than 1 line (mine output 3), you will want to increase the offset beyond -2. With my prompt being the following, I need -7, or -5 if I only want to display the command line at the top. ( https://www.askapache.com/linux/bash-power-prompt/ )

    275MB/748MB [7995:7993 - 0:186] 06:26:49 Thu Apr 08 [askapache@n1-backbone5:/dev/pts/0 +1] ~

    In most shells the LINES variable is created automatically at login and updated when the terminal is resized (signal 28 on Linux, 23 or 20 on others for SIGWINCH) to contain the number of vertical lines that fit in your terminal window. Because the alias doesn't hard-code the current height but relies on the $LINES variable, it is dynamic and will always work on a tty device.


    27
    alias head='head -n $((${LINES:-`tput lines 2>/dev/null||echo -n 12`} - 2))'
    AskApache · 2010-04-08 22:37:06 14
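
    The fallback logic is easy to test on its own. A minimal sketch (the 12 and the -2 are the defaults from the alias above; `unset LINES` just simulates a shell that never set it):

    unset LINES    # pretend LINES was never set
    echo $((${LINES:-`tput lines 2>/dev/null||echo -n 12`} - 2))    # tput height - 2, or 10 if tput fails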

  • 11
    curl -I http://localhost
    mniskin · 2011-01-02 14:19:30 11
  • Goes through all files in the directory specified, uses `stat` to print out the last modification time, then sorts numerically in reverse, then uses cut to remove the modification epoch timestamp, and finally head to output only the 10 most recently modified files. Note that on a Mac `stat` won't work like this; you'll need to use either:

    find . -type f -print0 | xargs -0 stat -f '%m%t%Sm %12z %N' | sort -nr | cut -f2- | head

    or alternatively do a `brew install coreutils` and then replace `stat` with `gstat` in the original command.


    5
    find . -type f -print0 | xargs -0 stat -c'%Y :%y %12s %n' | sort -nr | cut -d: -f2- | head
    HerbCSO · 2013-08-03 09:53:46 13
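
    For reference, a sketch of what one record looks like before sorting (format directives per GNU stat(1); /etc/hosts is just an illustrative file): %Y is the mtime as epoch seconds and serves as the sort key, %y the human-readable mtime, %12s the size, %n the name. cut -d: -f2- then strips everything up to the first colon, i.e. the epoch key.

    stat -c'%Y :%y %12s %n' /etc/hosts    # -> <epoch> :<human mtime> <size> <name>
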
  • This pipeline will find, sort and display all files based on mtime. This could be done with find | xargs, but that pipeline will not produce correct results if the output of find is larger than xargs's command-line buffer: when the buffer fills, xargs processes the find results in more than one batch, which is not compatible with sorting. Note the "-print0" on find and the "-0" switch for perl; this is the equivalent of using xargs. Don't you love perl?

    Note that this pipeline can easily be modified to sort on any data produced by perl's stat operator, e.g. size, hard links, creation time, etc. Look at stat and just change the '9' to what you want: changing the '9' to a '7', for example, sorts by file size, and a '3' sorts by number of links (see the sketch after this entry). Use head and tail at the end of the pipeline to get the oldest files or the most recent. Use awk or perl -wnla for further processing; since there is a tab between the two fields, it is very easy to process.


    3
    find $HOME -type f -print0 | perl -0 -wn -e '@f=<>; foreach $file (@f){ (@el)=(stat($file)); push @el, $file; push @files,[ @el ];} @o=sort{$a->[9]<=>$b->[9]} @files; for $i (0..$#o){print scalar localtime($o[$i][9]), "\t$o[$i][-1]\n";}'|tail
    drewk · 2009-09-21 22:11:16 11
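
    As the note above says, swapping the stat index changes the sort key. A hedged sketch sorting by size (index 7) instead of mtime (index 9), printing the byte size rather than a date:

    find $HOME -type f -print0 | perl -0 -wn -e '@f=<>; foreach $file (@f){ (@el)=(stat($file)); push @el, $file; push @files,[ @el ];} @o=sort{$a->[7]<=>$b->[7]} @files; for $i (0..$#o){print $o[$i][7], "\t$o[$i][-1]\n";}'|tail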

  • 2
    head -c4 /dev/urandom | od -N4 -tu4 | sed -ne '1s/.* //p'
    opexxx · 2009-10-08 11:27:20 4

  • 2
    tr -c -d 0-9 < /dev/urandom | head -c 10
    hfs · 2009-10-09 11:58:30 3
  • Run the alias command, then issue ps aux | tail and resize your terminal window (putty/console/hyperterm/xterm/etc), then issue the same command again and you'll understand.

    ${LINES:-`tput lines 2>/dev/null||echo -n 80`} instructs the shell that if LINES is not set or null it should use the output of `tput lines` (ncurses-based terminal access) to get the number of lines in your terminal, and to fall back to a default of 80 if that doesn't work either.

    The default for tail is to output the last 10 lines; this alias changes the default to output the last x lines instead, where x is the number of lines currently displayed on your terminal minus 7. The offset is there so that the top line displayed is the command you ran that used tail, i.e. the prompt. With a single-line prompt, -2 is enough; since my PS1 and/or PROMPT_COMMAND output 3 lines (shown below), I need -7, or -5 if I only want to display the command line at the top. ( http://www.askapache.com/linux/bash-power-prompt.html )

    275MB/748MB [7995:7993 - 0:186] 06:26:49 Thu Apr 08 [askapache@n1-backbone5:/dev/pts/0 +1] ~

    In most shells the LINES variable is created automatically at login and updated when the terminal is resized (signal 28 on Linux, 23 or 20 on others for SIGWINCH) to contain the number of vertical lines that fit in your terminal window. Because the alias doesn't hard-code the current height but relies on the $LINES variable, it is dynamic and will always work on a tty device.


    2
    alias tail='tail -n $((${LINES:-`tput lines 2>/dev/null||echo -n 80`} - 7))'
    AskApache · 2012-03-22 02:44:11 7
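
    One standard shell detail worth noting: once the alias is defined, you can still get the stock 10-line behaviour for a single invocation by suppressing alias expansion (the file names are illustrative):

    \tail /var/log/messages        # a backslash skips the alias once
    command tail -n 10 file.txt    # `command` likewise bypasses the alias
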
  • Find the biggest files in a directory.


    1
    find . -printf '%.5m %10M %#9u %-9g %TY-%Tm-%Td+%Tr [%Y] %s %p\n'|sort -nrk8|head
    AskApache · 2014-12-10 23:48:20 9
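
    Why sort -nrk8: the %Tr 12-hour time contains a space before AM/PM, so the %TY-%Tm-%Td+%Tr stamp spans two whitespace-separated columns, making the %s byte size the 8th column. A hedged sketch of the same listing limited to /var/log, keeping the top 5 (directory and count are my illustrative choices):

    find /var/log -printf '%.5m %10M %#9u %-9g %TY-%Tm-%Td+%Tr [%Y] %s %p\n'|sort -nrk8|head -5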

  • 1
    rpm -qa --last | head -n 16
    sparsile · 2018-09-13 02:55:51 276

  • 0
    curl -i -X HEAD http://localhost/
    ethanmiller · 2009-03-25 17:34:39 7
  • Strangely enough, there is no --lines=[negative] option for tail like there is for head, so we have to use sed, which is very short and clear, as you see. Stranger still, skipping lines at the bottom with sed is neither short nor clear. From the classic sed one-liners:

    # delete the last 10 lines of a file
    sed -e :a -e '$d;N;2,10ba' -e 'P;D'    # method 1
    sed -n -e :a -e '1,10!{P;N;D;};N;ba'   # method 2


    0
    seq 1 12 | sed 1,5d ; seq 1 12 | head --lines=-5
    flux · 2009-08-01 00:41:52 8
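
    A hedged adaptation of method 1 to the 5-line case in the command above (change 2,10 to 2,5 to drop the last 5 lines):

    seq 1 12 | sed -e :a -e '$d;N;2,5ba' -e 'P;D'    # prints 1..7, same as head --lines=-5
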
  • Makes use of the shell's $RANDOM variable.


    0
    head -c10 <(echo $RANDOM$RANDOM$RANDOM)
    jgc · 2009-10-09 15:09:02 6

  • 0
    echo $RANDOM$RANDOM$RANDOM |cut -c3-12
    emmerich164 · 2009-10-13 08:08:45 4

  • 0
    curl -s -L --head -w "%{http_code}\n" URL | tail -n1
    tokland · 2010-04-02 19:42:14 6
  • You can actually do the same thing with a combination of head and tail. For example, in a file of four lines, if you just want the middle two lines:

    head -n3 sample.txt | tail -n2

    Line 1 --\
    Line 2    } These three lines are selected by head -n3,
    Line 3 --/  which feeds the following filtered list to tail:
    Line 4

    Line 1
    Line 2 \___ These two lines are filtered by tail -n2.
    Line 3 /

    This results in:

    Line 2
    Line 3

    being printed to screen (or wherever you redirect it).


    0
    head -n1 sample.txt | tail -n1
    gtcom · 2011-06-14 17:45:04 6
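
    A runnable version of the four-line walk-through above (sample.txt is created here just for the demonstration):

    printf 'Line 1\nLine 2\nLine 3\nLine 4\n' > sample.txt
    head -n3 sample.txt | tail -n2    # prints Line 2 and Line 3
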
  • Find which directories on your system contain a lot of files. Edit: much shorter and better with the -n switch.


    0
    sudo find / -type f | perl -MFile::Basename -ne '$counts{dirname($_)}++; END { foreach $d (sort keys %counts) {printf("%d\t%s\n",$counts{$d},$d);} }'|sort -rn | tee /tmp/sortedfilecount.out | head
    tamouse · 2011-09-14 19:41:19 3

  • 0
    mco ping | head -n -4 | awk '{print $1}' | sort
    mrwulf · 2014-06-24 18:20:16 7
  • Much easier for a naive user to understand. Just returns the biggest file with its size.


    0
    find . -printf '%s %p\n'|sort -nr| head -1
    ranjha · 2014-12-14 15:40:56 8

  • 0
    du -k . | sort -rn | head -11
    b0wlninja · 2014-12-29 17:12:25 8
  • This is an alternate command I like to use instead of top or htop to see which processes are taking up the most memory on a system. It shows the username, process ID, CPU usage, memory usage, thread ID, number of threads associated with the parent process, resident set size, virtual memory size, start time of the process, and command arguments. It is then sorted by memory, showing the top 10 with head. This can of course be changed to suit your needs (see the variant after this entry). I have a small system, which is why Firefox is taking so many resources.


    0
    watch -n .8 'ps -eaLo uname,pid,pcpu,pmem,lwp,nlwp,rss,vsz,start_time,args --sort -pmem| head -10'
    ubercoo · 2016-05-11 01:05:53 11
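
    Since the description notes the command can be changed to suit your needs, a hedged variant sorted by CPU instead of memory (only the --sort key changes):

    watch -n .8 'ps -eaLo uname,pid,pcpu,pmem,lwp,nlwp,rss,vsz,start_time,args --sort -pcpu| head -10'
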
  • urls.txt should have a fully qualified URL on each line. Prefix the command with rm log.txt; to clear the log. Change the curl command to curl --head $file | head -1 >> log.txt to log just the HTTP status line, as written out after this entry.


    -1
    for file in `cat urls.txt`; do echo -n "$file " >> log.txt; curl --head $file >> log.txt ; done
    Glutnix · 2010-10-19 02:54:13 5
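
    The status-only variant described above, written out in full (log.txt and urls.txt as in the entry):

    rm log.txt; for file in `cat urls.txt`; do echo -n "$file " >> log.txt; curl --head $file | head -1 >> log.txt ; done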

  • -1
    curl -s http://www.last.fm/user/$LASTFMUSER | grep -A 1 subjectCell | sed -e 's#<[^>]*>##g' | head -n2 | tail -n1 | sed 's/^[[:space:]]*//g'
    sn0opy · 2011-01-24 08:16:48 3
  • Alphanumeric "password": 20 random bytes, rendered by xxd -ps as 40 hexadecimal characters.


    -1
    head -c20 /dev/urandom | xxd -ps
    opexxx · 2013-07-16 10:14:21 8

  • -2
    n=$RANDOM$RANDOM$RANDOM; let "n %= 10000000000"; echo $n
    alset · 2009-10-15 05:10:00 4
  • Useful for situations where you have word lists or dictionaries that range from hundreds of megabytes to several gigabytes in size. Replace file.lst with your word list and 50000 with however many lines you want the resulting list to contain. The result is redirected to output.txt in the current working directory. It may be helpful to run wc -l file.lst first to find out how many lines the word list has, then divide that in half to figure out what value to give the head -n part of the command (see the sketch after this entry).


    -3
    less file.lst | head -n 50000 > output.txt
    Richie086 · 2011-09-05 05:26:04 8
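
    A sketch of the sizing step described above, computing the halfway point automatically (file.lst and output.txt as in the entry; head can read the file directly, so less isn't strictly needed):

    wc -l file.lst                                                   # total line count
    head -n $(( $(wc -l < file.lst) / 2 )) file.lst > output.txt    # keep the first half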