Commands using ls (517)


  • 1
    find / -perm /6000 -type f -exec ls -ld {} \; # list all setuid/setgid files (modern GNU find uses /6000; the old +6000 syntax is no longer accepted)
    aguslr · 2011-10-14 22:19:58 5
  • This will list the files in a directory, then zip each one individually, keeping the original filename: video1.wmv -> video1.zip, video2.wmv -> video2.zip. This was for zipping up large numbers of video files for upload from a Windows machine. A more robust sketch follows below.


    1
    ls -1 | awk ' { print "zip "$1".zip " $1 } ' | sh
    kaywhydub · 2011-12-14 20:30:56 6
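
    As written, the awk pipeline actually produces video1.wmv.zip (not video1.zip) and breaks on filenames containing spaces. A minimal sketch that strips the extension and survives spaces, assuming bash:

    for f in *.wmv; do zip "${f%.*}.zip" "$f"; done  # video1.wmv -> video1.zip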

  • 1
    ls -d $PWD/*
    putnamhill · 2011-12-16 19:12:55 3

  • 1
    ls -d1 $PWD/{.*,*}
    bunam · 2011-12-17 12:25:15 2

  • 1
    ls -d1 $PWD/*
    www · 2011-12-31 14:46:41 3

  • 1
    find <directory> -type f -printf "%T@\t%p\n"|sort -n|cut -f2|xargs ls -lrt
    rik · 2012-03-02 12:51:06 3
  • Here's an annotated version of the command, using full names instead of aliases. It is exactly equivalent to the short-hand version.

    # Recursively list all the files in the current directory.
    Get-ChildItem -Recurse |
        # Filter out the sub-directories themselves.
        Where-Object { return -not $_.PsIsContainer; } |
        # Group the resulting files by their extensions.
        Group-Object Extension |
        # Pluck the Name and Count properties of each group and define
        # a custom expression that calculates the average of the sizes
        # of the files in that group.
        # The back-tick is a line-continuation character.
        Select-Object `
            Name,
            Count,
            @{
                Name = 'Average';
                Expression = {
                    # Average the Length (sizes) of the files in the current group.
                    return ($_.Group | Measure-Object -Average Length).Average;
                }
            } |
        # Format the results in a tabular view, automatically adjusted to
        # the widths of the values in the columns.
        Format-Table -AutoSize `
            @{
                # Rename the Name property to something more sensible.
                Name = 'Extension';
                Expression = { return $_.Name; }
            },
            Count,
            @{
                # Format the Average property to display KB instead of bytes
                # and use a formatting string to show it rounded to two decimals.
                Name = 'Average Size (KB)';
                # The "1KB" is a built-in constant which is equal to 1024.
                Expression = { return $_.Average / 1KB };
                FormatString = '{0:N2}'
            }


    1
    ls -r | ?{-not $_.psiscontainer} | group extension | select name, count, @{n='average'; e={($_.group | measure -a length).average}} | ft -a @{n='Extension'; e={$_.name}}, count, @{n='Average Size (KB)'; e={$_.average/1kb}; f='{0:N2}'}
    brianpeiris · 2012-03-13 17:58:10 9
  • This will generate the same output without changing the current directory; the output file "filepath" is created relative to the current directory. Note: it will (still) fail if your iTunes library is in a non-standard location.


    1
    ls "~/Music/iTunes/iTunes Media/Mobile Applications" > filepath
    minnmass · 2012-05-04 09:51:59 50
  • Nothing too magical here, just uses pngcrush to losslessly compress all your PNGs! A slightly more defensive sketch follows below.


    1
    ls *.png | while read -r line; do pngcrush -brute "$line" "compressed/$line"; done
    waffleboi9 · 2012-07-17 20:20:49 5
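
    pngcrush will not create the output directory for you. A sketch that creates it first and avoids parsing ls, assuming bash:

    mkdir -p compressed && for f in *.png; do pngcrush -brute "$f" "compressed/$f"; done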
  • Sometimes I would like to see hidden files, prefixed with a period, but there are some files or folders I never want to see (and really wish I could just remove altogether). An example .hidden file follows below.


    1
    alias ls='opts=(); if [[ -f .hidden ]]; then while read -r l; do opts+=(--hide="$l"); done < .hidden; fi; ls --color=auto "${opts[@]}"'
    expelledboy · 2012-08-12 13:10:23 5
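
    For example, a hypothetical .hidden file suppressing two entries you never want to see, one name per line (the names here are just examples):

    printf '%s\n' '.DS_Store' '.cache' > .hidden  # example entries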

  • 1
    ln -s /base/* /target && ls -l /target
    mattcen · 2012-08-22 11:27:40 5
  • Show the UUID-based alternate device names of ZEVO-related partitions on Darwin/OS X. Adapted from the lines by dbrady at http://zevo.getgreenbytes.com/forum/viewtopic.php?p=700#p700 and following the disk device naming scheme at http://zevo.getgreenbytes.com/wiki/pmwiki.php?n=Site.DiskDeviceNames


    1
    ls /dev/disk* | xargs -n 1 -t sudo zdb -l | grep GPTE_
    grahamperrin · 2012-10-06 20:19:45 5
  • A substitute for #11720. Can probably be made even shorter and simpler. A looped sketch covering all disks follows below.


    1
    ls -l /dev/disk/by-id/ | grep '/sda$' | grep -o 'ata[^ ]*'
    michelsberg · 2013-01-16 17:28:11 7
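
    A sketch applying the same lookup to every sd? device in one pass (same approach, just looped; assuming bash and GNU grep):

    for dev in /dev/sd?; do printf '%s: ' "$dev"; ls -l /dev/disk/by-id/ | grep "/${dev##*/}\$" | grep -o 'ata[^ ]*'; done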
  • I find it useful, when cleaning up and deleting unwanted files to make more space, to list files in size order so I can delete the largest first. Note that "q" shows files with non-printing characters in the name. In the sample output, I found two copies of the same iso file, both of which were immediate "delete candidates" for me. A du-based alternative follows below.


    1
    ls -qahlSr # list all files in size order - largest last
    mpb · 2013-03-13 09:52:07 29
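
    A commonly used alternative that also descends into subdirectories, assuming GNU sort's -h flag (human-readable numeric sort):

    du -ah . | sort -h | tail -20  # 20 largest files and directories, largest last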
  • zsh: list files greater than 100 MB, sorted by size. '**/*' is recursive, and the glob qualifiers select: '.' = regular file, 'L' = size, followed by 'm' = megabytes, and '+100' = a value over 100. Since ls -S sorts largest first, tail -5 shows the five smallest of the matches; a pure-zsh variant that keeps the five largest follows below.


    1
    ls -Sh **/*(.Lm+100) | tail -5
    khayyam · 2013-03-21 20:22:11 4
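
    To take the five largest instead, zsh can do the ordering and slicing itself; a sketch assuming zsh's 'OL' qualifier (order by size, descending) and a [1,5] slice:

    ls -lh **/*(.Lm+100OL[1,5])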
  • Makes this usable on OS X with filenames containing spaces. Note: it will still break if filenames contain newlines... possible, but who does that?!


    1
    svn ls -R | egrep -v -e "\/$" | tr '\n' '\0' | xargs -0 svn blame | awk '{print $2}' | sort | uniq -c | sort -nr
    rymo · 2013-04-10 19:37:53 5
  • Like top, but for files


    1
    watch -d -n 2 'df; ls -FlAt;'
    G2G · 2013-09-17 05:44:47 6
  • Lists the 20 most recently modified PDFs, searching the current directory and all subdirectories. A single-sort variant follows below.


    1
    find . -name '*pdf*' -print0 | xargs -0 ls -lt | head -20
    fuats · 2013-10-03 21:58:51 9
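
    If there are enough matches that xargs splits them across several ls invocations, each batch gets sorted separately. A sketch that sorts once, assuming GNU find's -printf:

    find . -name '*pdf*' -printf '%T@\t%p\n' | sort -rn | head -20 | cut -f2-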
  • Displays a list of all file extensions in the current directory and how many files there are of each, in ascending order of count (case insensitive). A recursive variant follows below.


    1
    ls | tr '[:upper:]' '[:lower:]' | grep -oP '\.[^\.]+$' | sort | uniq -c | sort -n
    icefyre · 2014-01-30 11:37:27 10
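
    A recursive variant of the same idea that avoids parsing ls, assuming GNU find:

    find . -type f -name '*.*' | sed 's/.*\.//' | tr '[:upper:]' '[:lower:]' | sort | uniq -c | sort -n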

  • 1
    npm ls -g | grep "^[├└]\(.\+\)\?[┬─] "
    lucasmezencio · 2014-02-03 21:50:39 7
  • Very quick! Based only on the content sizes and the character counts of filenames. If both numbers are equal, then two (or more) directories are most likely identical. If in doubt, apply: diff -rq path_to_dir1 path_to_dir2. A checksum-based double check follows below. AWK function taken from here: http://stackoverflow.com/questions/2912224/find-duplicates-lines-based-on-some-delimited-fileds-on-line


    1
    find . -type d| while read i; do echo $(ls -1 "$i"|wc -m) $(du -s "$i"); done|sort -s -n -k1,1 -k2,2 |awk -F'[ \t]+' '{ idx=$1$2; if (array[idx] == 1) {print} else if (array[idx]) {print array[idx]; print; array[idx]=1} else {array[idx]=$0}}'
    knoppix5 · 2014-02-25 22:50:09 27
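
    For a content-based double check of two suspected duplicates (a sketch assuming GNU md5sum and bash process substitution):

    diff <(cd path_to_dir1 && find . -type f -exec md5sum {} + | sort -k2) <(cd path_to_dir2 && find . -type f -exec md5sum {} + | sort -k2)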
  • Tested with GNU and BSD ls.


    1
    ls -la | grep ^l
    gatopan · 2014-08-11 03:06:48 8
  • With this version, you can list all symlinks in the current directory (no subdirectories) and have it show both the link and the target. A printf-based variant follows below.


    1
    find ~ -maxdepth 1 -type l -exec ls -l {} +
    skittleys · 2015-01-04 02:36:47 7
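
    A sketch that prints each link and its target explicitly, assuming GNU find's -printf (%p = path, %l = link target):

    find ~ -maxdepth 1 -type l -printf '%p -> %l\n'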
  • List all txt files ordered by access time (--time=atime sorts by atime rather than mtime), newest first.


    1
    ls -lt --time=atime *.txt
    miccaman · 2015-05-21 21:03:44 10
  • Adding a course-name prefix to lecture PDFs. A glob-based sketch follows below.


    1
    ls *.pdf | while read file; do newfile="CS749__${file}"; mv "${file}" "${newfile}"; done;
    programmer · 2016-04-19 11:04:47 16
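
    Parsing ls is fragile; a glob-based sketch of the same rename, assuming bash:

    for file in *.pdf; do mv "$file" "CS749__$file"; done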