Commands tagged stat (33)

  • This is useful if you'd like to see the output of a script while you edit it. Each time you save the file, the command is executed. I thought for sure something like this already existed - and it probably does; I'm on an older system and tend to be missing some useful things. Examples:
    ontouchdo yourscript 'clear; yourscript somefiletoparse'
    Edit yourscript in a separate window and see new results each time you save.
    ontouchdo crufty.html 'clear; xmllint --noout crufty.html 2>&1 | head'
    Keep editing crufty.html until the xmllint window is empty.
    Note: Mac/BSD users should use stat -f%m (a variant is sketched after the command below). If you don't have stat, you can use:
    perl -e '$f=shift; @s=stat($f); print "$s[9]\n";' $1


    9
    ontouchdo(){ while :; do a=$(stat -c%Y "$1"); [ "$b" != "$a" ] && b="$a" && sh -c "$2"; sleep 1; done }
    putnamhill · 2010-10-22 23:25:12 7
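    A minimal sketch of the Mac/BSD variant mentioned in the note above (an assumption - it just swaps in stat -f%m as suggested):
    ontouchdo(){ while :; do a=$(stat -f%m "$1"); [ "$b" != "$a" ] && b="$a" && sh -c "$2"; sleep 1; done }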
  • I love this function because it tells me everything I want to know about files - more than stat, more than ls. It's very useful and infinitely expandable.
    find $PWD -maxdepth 1 -printf '%.5m %10M %#9u:%-9g %#5U:%-5G [%AD | %TD | %CD] [%Y] %p\n' | sort -rgbS 50%
    Sample output:
    00761 drwxrw---x askapache:askapache 777:666 [06/10/10 | 06/10/10 | 06/10/10] [d] /web/cg/tmp
    The key is the format string:
    -printf '%.5m %10M %#9u:%-9g %#5U:%-5G [%AD | %TD | %CD] [%Y] %p\n'
    which, believe it or not, took hundreds of tweaks before I was happy with the output. You can easily use this within a function to do whatever you want. This simple function works recursively if you call it with -r as an argument, and sorts by file permissions:
    lsl(){ O="-maxdepth 1";sed -n '/-r/!Q1'<<<$@ &&O=;find $PWD $O -printf '%.5m %10M %#9u:%-9g %#5U:%-5G [%AD | %TD | %CD] [%Y] %p\n'|sort -rgbS 50%; }
    Personally I'm using this function because it can sort by any field:
    lll () { local a KS="1 -r -g"; sed -n '/-sort=/!Q1' <<< $@ && KS=`sed 's/.*-sort=\(.*\)/\1/g'<<<$@`; find $PWD -maxdepth 1 -printf '%.5m %10M %#9u:%-9g %#5U:%-5G [%AD | %TD | %CD] [%Y] %p\n'|sort -k$KS -bS 50%; }
    # sort by user
    lll -sort=3
    # sort by group, reversed
    lll -sort=4 -r
    # sort by modification time
    lll -sort=6
    If anyone wants to help me make this function handle multiple dirs/files like ls, go for it and I would appreciate it. Something very minimal would be awesome, maybe like: for a; do lll $a; done (a minimal sketch follows the command below). Note this uses the latest version of GNU find built from source, easy to build from the GNU FTP tarball. Taken from my http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html


    8
    find $PWD -maxdepth 1 -printf '%.5m %10M %#9u:%-9g %#5U:%-5G [%AD | %TD | %CD] [%Y] %p\n'
    AskApache · 2010-06-10 22:03:08 9
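    A minimal sketch of the multi-directory handling asked for above (an assumption - it just wraps the lll function from the description, which works on $PWD; dir1..dir3 are placeholders):
    for a in dir1 dir2 dir3; do ( cd "$a" && lll ); done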
  • Imagine you've started a long-running process that involves piping data, but you forgot to add the progress-bar option to a command, e.g.
    xz -dc bigdata.xz | complicated-processing-program > summary
    This command uses lsof to see how much data xz has read from the file:
    lsof -o0 -o -Fo FILENAME
    Display offsets (-o), in decimal (-o0), in parseable form (-Fo). This will output something like:
    p12607 f3 o0t45187072
    Process id (p), File Descriptor (f), Offset (o).
    We stat the file to get its size:
    stat -c %s FILENAME
    Then we plug the values into awk: split the line at the letter t (-Ft), define a variable for the file's size (-v s=$(stat...)), and only work on the offset line (/^o/).
    Note this command was tested using the Linux version of lsof. Because it uses lsof's batch option (-F) it may be portable. A polling version is sketched after the command below.
    Thanks to @unhammer for the brilliant idea.


    7
    F=bigdata.xz; lsof -o0 -o -Fo $F | awk -Ft -v s=$(stat -c %s $F) '/^o/{printf("%d%%\n", 100*$2/s)}'
    flatcap · 2015-09-19 22:22:43 18
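    A hedged sketch of a polling version (an assumption - it simply repeats the same measurement every 2 seconds for a rough live progress readout; assumes GNU stat and Linux lsof):
    F=bigdata.xz; while sleep 2; do lsof -o0 -o -Fo $F | awk -Ft -v s=$(stat -c %s $F) '/^o/{printf("%d%%\n", 100*$2/s)}'; done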
  • Goes through all files in the directory specified, uses `stat` to print out the last modification time, then sorts numerically in reverse, then uses cut to remove the modification epoch timestamp and finally head to only output the 10 most recently modified files. Note that on a Mac `stat` won't work like this; you'll need to use either:
    find . -type f -print0 | xargs -0 stat -f '%m%t%Sm %12z %N' | sort -nr | cut -f2- | head
    or alternatively do a `brew install coreutils` and then replace `stat` with `gstat` in the original command (sketched after the command below).


    5
    find . -type f -print0 | xargs -0 stat -c'%Y :%y %12s %n' | sort -nr | cut -d: -f2- | head
    HerbCSO · 2013-08-03 09:53:46 14
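    The coreutils variant mentioned above, sketched out (an assumption - it relies on `brew install coreutils` providing gstat):
    find . -type f -print0 | xargs -0 gstat -c'%Y :%y %12s %n' | sort -nr | cut -d: -f2- | head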
  • It's common to want to split up large files, and the usual method is to use split(1). If you have a 10GiB file, you'll need 10GiB of free space. Then the OS has to read 10GiB and write 10GiB (usually on the same filesystem). This takes AGES.
    This command uses a set of loop block devices to create fake chunks, but without making any changes to the file. This means the file splitting is nearly instantaneous. The example creates a 1GiB file, then splits it into 16 x 64MiB chunks (/dev/loop0 .. loop15).
    Note: This isn't a drop-in replacement for using split. The results are block devices. tar and zip won't do what you expect when given block devices. These commands will work:
    hexdump /dev/loop4
    gzip -9 < /dev/loop6 > part6.gz
    cat /dev/loop10 > /media/usb/part10.bin
    A cleanup sketch for detaching the loop devices follows the command below.


    5
    FILE=file_name; CHUNK=$((64*1024*1024)); SIZE=$(stat -c "%s" $FILE); for ((i=0; i < $SIZE; i+=$CHUNK)); do losetup --find --show --offset=$i --sizelimit=$CHUNK $FILE; done
    flatcap · 2014-10-03 13:18:19 10
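    A hedged cleanup sketch (an assumption, not part of the original): detach the loop devices once you're done with the chunks.
    losetup -j file_name                                            # list the loop devices backed by the file
    for dev in $(losetup -j file_name | cut -d: -f1); do losetup -d "$dev"; done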
  • This pipeline will find, sort and display all files based on mtime. This could be done with find | xargs, but the find | xargs pipeline will not produce correct results if the results of find are larger than xargs' command-line buffer. If the xargs buffer fills, xargs processes the find results in more than one batch, which is not compatible with sorting. Note the "-print0" on find and the "-0" switch for perl; this is the equivalent of using xargs. Don't you love perl? Note that this pipeline can be easily modified to sort on any data produced by perl's stat operator; e.g., you could sort on size, hard links, creation time, etc. Look at stat and just change the '9' to what you want. Changing the '9' to a '7', for example, will sort by file size (sketched after the command below); a '3' sorts by number of links. Use head and tail at the end of the pipeline to get the oldest files or the most recent. Use awk or perl -wnla for further processing. Since there is a tab between the two fields, it is very easy to process.


    3
    find $HOME -type f -print0 | perl -0 -wn -e '@f=<>; foreach $file (@f){ (@el)=(stat($file)); push @el, $file; push @files,[ @el ];} @o=sort{$a->[9]<=>$b->[9]} @files; for $i (0..$#o){print scalar localtime($o[$i][9]), "\t$o[$i][-1]\n";}'|tail
    drewk · 2009-09-21 22:11:16 11
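    A sketch of the size-sorting variant described above (an assumption - it just swaps stat index 9 (mtime) for 7 (size) and prints the size instead of the date):
    find $HOME -type f -print0 | perl -0 -wn -e '@f=<>; foreach $file (@f){ (@el)=(stat($file)); push @el, $file; push @files,[ @el ];} @o=sort{$a->[7]<=>$b->[7]} @files; for $i (0..$#o){print $o[$i][7], "\t$o[$i][-1]\n";}'|tail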
  • This shows every bit of information that stat can get for any file, dir, fifo, etc. It's great because it also shows the format and explains it for each format option. If you just want stat help, create this handy alias 'stath' to display all format options with explanations:
    alias stath="stat --h|sed '/Th/,/NO/!d;/%/!d'"
    To display on 2 lines:
    ( F=/etc/screenrc N=c IFS=$'\n'; for L in $(sed 's/%Z./%Z\n/'<<<`stat --h|sed -n '/^ *%/s/^ *%\(.\).*$/\1:%\1/p'`); do G=$(echo "stat -$N '$L' \"$F\""); eval $G; N=fc;done; )
    For a similarly powerful stat-like function optimized for pretty output (and can sort by any field), check out the "lll" function http://www.commandlinefu.com/commands/view/5815/advanced-ls-output-using-find-for-formattedsortable-file-stat-info
    From my .bash_profile -> http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html


    3
    statt(){ C=c;stat --h|sed '/Th/,/NO/!d;/%/!d'|while read l;do p=${l/% */};[ $p == %Z ]&&C=fc&&echo ^FS:^;echo "`stat -$C $p \"$1\"` ^$p^${l#%* }";done|column -ts^; }
    AskApache · 2010-06-11 23:31:03 4
  • This loop will finish once a file hasn't changed in the last 10 seconds.
    It checks the file's modification timestamp against the clock. If 10 seconds have elapsed without any change to the file, the loop ends. This script will give a false positive if there's a 10 second delay between updates, e.g. due to network congestion.
    How does it work? 'date +%s' gives the current time in seconds, 'stat -c %Y' gives the file's last modification time in seconds, '$(( ))' is bash's way of doing maths, and '[ X -lt 10 ]' tests whether the result is less than 10; otherwise it sleeps for 1 second and repeats.
    Note: Clever as this script is, inotify is smarter (a sketch follows the command below).


    3
    while [ $(( $(date +%s) - $(stat -c %Y FILENAME) )) -lt 10 ]; do sleep 1; done; echo DONE
    flatcap · 2015-05-09 12:30:13 15
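    A hedged inotify sketch (an assumption - it requires inotify-tools; inotifywait exits non-zero on its 10-second timeout, which ends the loop; there is still a tiny gap between invocations):
    while inotifywait -t 10 -e modify FILENAME >/dev/null 2>&1; do :; done; echo DONE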
  • Since the original command (#1873) didn't work on FreeBSD, whose stat lacks the "-c" switch, I wrote an alternative that does. This command also shows the fourth digit of the octal-format permissions, which yields the sticky bit information.


    2
    stat -f '%Sp %p %N' * | rev | sed -E 's/^([^[:space:]]+)[[:space:]]([[:digit:]]{4})[^[:space:]]*[[:space:]]([^[:space:]]+)/\1 \2 \3/' | rev
    vwal · 2009-08-04 08:45:20 3
  • Works with files containing spaces and for very large directories.


    2
    find -type f -print0 | xargs -r0 stat -c %y\ %n | sort
    dooblem · 2010-05-29 13:40:18 11
  • Applies each file operator using the built-in test.
    testt /home/askapache/.sq
    /home/askapache/.sq
    -a True - file exists.
    -d True - file is a directory.
    -e True - file exists.
    -r True - file is readable by you.
    -s True - file exists and is not empty.
    -w True - the file is writable by you.
    -x True - the file is executable by you.
    -O True - the file is effectively owned by you.
    -G True - the file is effectively owned by your group.
    -N True - the file has been modified since it was last read.
    Full function:
    testt () { local dp; until [ -z "${1:-}" ]; do dp="$1"; [[ ! -a "$1" ]] && dp="$PWD/$dp"; command ls -w $((${COLUMNS:-80}-20)) -lA --color=tty -d "$dp"; [[ -d "$dp" ]] && find "$dp" -mount -depth -wholename "$dp" -printf '%.5m %10M %#15s %#9u %-9g %#5U %-5G %Am/%Ad/%AY %Cm/%Cd/%CY %Tm/%Td/%TY [%Y] %p\n' -a -quit 2> /dev/null; for f in a b c d e f g h L k p r s S t u w x O G N; do test -$f "$dp" && help test | sed "/-$f F/!d" | sed -e 's#^[\t ]*-\([a-zA-Z]\{1\}\) F[A-Z]*[\t ]* True if#-\1 "'$dp'" #g'; done; shift; done }


    2
    testt(){ o=abcdefghLkprsStuwxOGN;echo $@;for((i=0;i<${#o};i++));do c=${o:$i:1};test -$c $1 && help test | sed "/^ *-$c/!d;1q;s/^[^T]*/-$c /;s/ if/ -/";done; }
    AskApache · 2012-02-21 16:54:53 11
  • This alias is super-handy for me because it quickly shows the details of each file in the current directory. The output is nice because it is sortable, allowing you to expand this basic example into something amazing like showing a list of the newest files, the largest files, files with bad perms, etc. A recursive version would be:
    alias LSR='find -mount -printf "%.5m %10M %#9u:%-9g %#5U:%-5G %TF_%TR %CF_%CR %AF_%AR %#15s [%Y] %p\n" 2>/dev/null'
    From: http://www.askapache.com/linux/bash_profile-functions-advanced-shell.html


    2
    alias LS='find -mount -maxdepth 1 -printf "%.5m %10M %#9u:%-9g %#5U:%-5G %TF_%TR %CF_%CR %AF_%AR %#15s [%Y] %p\n" 2>/dev/null'
    AskApache · 2013-02-06 17:54:14 7
  • This script compares the access time (stat -c %X) of /var/lib/dpkg/info/${package}.list and of all the files it lists, to guess whether the package has been used since it was installed. It can give wrong results on noatime partitions. Here is the non-oneliner:
    #!/bin/sh
    package=$1
    list=/var/lib/dpkg/info/${package}.list
    inst=$(stat "$list" -c %X)
    cat $list | (
        while read file; do
            if [ -f "$file" ]; then
                acc=$(stat "$file" -c %X)
                if [ $inst -lt $acc ]; then
                    echo used $file
                    exit 0
                fi
            fi
        done
        exit 1
    )


    1
    package=$1; list=/var/lib/dpkg/info/${package}.list; inst=$(stat "$list" -c %X); cat $list | (while read file; do if [ -f "$file" ];then acc=$(stat "$file" -c %X); if [ $inst -lt $acc ]; then echo used $file; exit 0; fi; fi; done; exit 1)
    pipeliner · 2010-09-20 18:10:19 6
  • Increases the modification date of the files selected by the find command by a given number of seconds.


    1
    find . -type f | while read line; do NEW_TS=`date -d@$((\`stat -c '%Y' $line\` + <seconds> )) '+%Y%m%d%H%M.%S'`; touch -t $NEW_TS ${line}; done
    angleto · 2010-11-18 14:03:32 4
  • Find files matching a pattern and sum their sizes, using stat, in the shell (a GNU find variant is sketched after the command below).


    1
    find . -name "pattern" -exec stat -c%s {} \; | awk '{total += $1} END {print total}'
    Koobiac · 2014-01-15 11:07:09 9
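    A hedged alternative (an assumption - it requires GNU find): let find print the sizes itself and skip spawning stat once per file.
    find . -name "pattern" -printf '%s\n' | awk '{total += $1} END {print total}'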

  • 1
    stat -c'%s %n' **/* | sort -n
    ysangkok · 2015-08-25 18:23:55 44
  • This is good for use in variables: if you have many script-created files, it tells you which one was created/changed last (a sketch of capturing it in a variable follows the command below).


    1
    find . -type f -print0 | xargs -0 stat -c '%y %n' | sort -n -k 1,1 | awk 'END{print $NF}'
    emphazer · 2018-05-14 08:47:41 215
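    A sketch of capturing the result in a variable, as the description suggests (an assumption - note the awk step keeps only the last whitespace-separated field, so paths with spaces will be truncated):
    latest=$(find . -type f -print0 | xargs -0 stat -c '%y %n' | sort -n -k 1,1 | awk 'END{print $NF}'); echo "$latest"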

  • 0
    FILE=`ls -ltr /var/lib/pgsql/backups/daily/ | tail -n1 | awk '{print $NF}'`; TIME=`stat -c %Y /var/lib/pgsql/backups/daily/$FILE`; NOW=`date +%s`; echo $((NOW-TIME))
    allrightname · 2011-05-12 13:21:37 7

  • 0
    fn=$(find . -type f -printf "%T@\t%p\n"|sort -n|tail -1|cut -f2); echo $(date -r "$fn") "$fn"
    tzot · 2011-09-13 14:26:20 4
  • This command displays the CPU idle and used time percentages using stats from /proc/stat. Note these counters are cumulative since boot; a delta-sampling sketch follows the command below.


    0
    grep "cpu " /proc/stat | awk -F ' ' '{total = $2 + $3 + $4 + $5} END {print "idle \t used\n" $5*100/total "% " $2*100/total "%"}'
    Goez · 2012-01-21 04:12:50 4
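    A hedged delta-sampling sketch (an assumption, not part of the original - because the /proc/stat counters are cumulative since boot, sampling twice and diffing gives the current usage; it ignores iowait and the remaining fields):
    read -r _ u1 n1 s1 i1 _ < /proc/stat; sleep 1; read -r _ u2 n2 s2 i2 _ < /proc/stat; used=$(( (u2+n2+s2)-(u1+n1+s1) )); idle=$(( i2-i1 )); echo "used: $(( 100*used/(used+idle) ))%  idle: $(( 100*idle/(used+idle) ))%"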
  • Shows elapsed running time, ETA and a progress bar while writing the image to the device.


    0
    pv -petrs $(stat -c %s file.iso) file.iso | dd bs=1M oflag=sync of=/dev/sdX
    f4m8 · 2012-01-30 07:16:29 6

  • 0
    pv file.iso >/dev/sdX
    qiet72 · 2012-01-30 09:38:15 4
  • Here's a version which works on OS X.


    0
    find . -type f -exec stat -f '%m %N' {} \; | sort -n
    cellularmitosis · 2012-06-20 00:13:52 5

  • 0
    stat -f -L -c %T YOUR_FILE_OR_DIRECTORY
    Koobiac · 2013-06-14 07:27:41 8
  • This function finds the modification time (Unix time) of the given file, calculates the number of minutes between then and now, and then finds all files modified within that range.


    0
    function findOlderThan () { find . -mmin -$((($(date "+%s") - $(stat -c %Y $1))/60)) -type f ; }
    RobertDeRose · 2014-08-29 17:52:34 8