Commands using ls (517)


  • 1
    find / -perm +6000 -type f -exec ls -ld {} \;
    aguslr · 2011-10-14 22:19:58 5
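
    This finds files with the setuid or setgid bit set (4000 + 2000 = 6000). Newer GNU find rejects the '+mode' syntax (BSD find still accepts it), so on a current GNU system an equivalent sketch would be:

    find / -perm /6000 -type f -exec ls -ld {} \;
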
  • This will list the files in a directory and then zip each one individually, keeping the original filename: video1.wmv -> video1.wmv.zip, video2.wmv -> video2.wmv.zip. This was for zipping up large amounts of video files for upload on a Windows machine. (A space-safe variant is sketched after this entry.)


    1
    ls -1 | awk ' { print "zip "$1".zip " $1 } ' | sh
    kaywhydub · 2011-12-14 20:30:56 6
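
    A variant of the command above that avoids parsing ls output and copes with spaces in filenames; the *.wmv pattern is only an assumption based on the example in the description:

    for f in *.wmv; do zip "$f.zip" "$f"; done
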

  • 1
    ls -d $PWD/*
    putnamhill · 2011-12-16 19:12:55 3

  • 1
    ls -d1 $PWD/{.*,*}
    bunam · 2011-12-17 12:25:15 2

  • 1
    ls -d1 $PWD/*
    www · 2011-12-31 14:46:41 3

  • 1
    find <directory> -type f -printf "%T@\t%p\n"|sort -n|cut -f2|xargs ls -lrt
    rik · 2012-03-02 12:51:06 3
  • Here's an annotated version of the command, using full names instead of aliases. It is exactly equivalent to the short-hand version.

        # Recursively list all the files in the current directory.
        Get-ChildItem -Recurse |
            # Filter out the sub-directories themselves.
            Where-Object { return -not $_.PsIsContainer; } |
            # Group the resulting files by their extensions.
            Group-Object Extension |
            # Pluck the Name and Count properties of each group and define
            # a custom expression that calculates the average of the sizes
            # of the files in that group.
            # The back-tick is a line-continuation character.
            Select-Object `
                Name,
                Count,
                @{
                    Name = 'Average';
                    Expression = {
                        # Average the Length (sizes) of the files in the current group.
                        return ($_.Group | Measure-Object -Average Length).Average;
                    }
                } |
            # Format the results in a tabular view, automatically adjusted to
            # widths of the values in the columns.
            Format-Table -AutoSize `
                @{
                    # Rename the Name property to something more sensible.
                    Name = 'Extension';
                    Expression = { return $_.Name; }
                },
                Count,
                @{
                    # Format the Average property to display KB instead of bytes
                    # and use a formatting string to show it rounded to two decimals.
                    Name = 'Average Size (KB)';
                    # The "1KB" is a built-in constant which is equal to 1024.
                    Expression = { return $_.Average / 1KB };
                    FormatString = '{0:N2}'
                }


    1
    ls -r | ?{-not $_.psiscontainer} | group extension | select name, count, @{n='average'; e={($_.group | measure -a length).average}} | ft -a @{n='Extension'; e={$_.name}}, count, @{n='Average Size (KB)'; e={$_.average/1kb}; f='{0:N2}'}
    brianpeiris · 2012-03-13 17:58:10 9
  • This will generate the same output without changing the current directory, and filepath will be relative to the current directory. Note: it will (still) fail if your iTunes library is in a non-standard location.


    1
    ls "~/Music/iTunes/iTunes Media/Mobile Applications" > filepath
    minnmass · 2012-05-04 09:51:59 50
  • Nothing too magical here, just uses pngcrush to losslessly compress all your pngs! (A quoting-safe variant is sketched after this entry.)


    1
    ls *.png | while read line; do pngcrush -brute $line compressed/$line; done
    waffleboi9 · 2012-07-17 20:20:49 5
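
    A quoting-safe sketch of the same idea; mkdir -p is added on the assumption that the compressed/ output directory may not exist yet:

    mkdir -p compressed && for f in *.png; do pngcrush -brute "$f" "compressed/$f"; done
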
  • Sometimes I would like to see hidden files (those prefixed with a period), but there are some files or folders I never want to see (and really wish I could just remove altogether). Listing their names in a .hidden file keeps them out of the ls output. (An example .hidden file is sketched after this entry.)


    1
    alias ls='opts=(); if [[ -f .hidden ]]; then while read l; do opts+=(--hide="$l"); done < .hidden; fi; ls --color=auto "${opts[@]}"'
    expelledboy · 2012-08-12 13:10:23 5
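
    The .hidden file is just a list of names to suppress, one per line; for example (the entries here are hypothetical):

    printf '%s\n' .cache node_modules > .hidden

    Note that --hide is a GNU ls option, so the alias will not work with BSD ls.
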

  • 1
    ln -s /base/* /target && ls -l /target
    mattcen · 2012-08-22 11:27:40 5
  • Show the UUID-based alternate device names of ZEVO-related partitions on Darwin/OS X. Adapted from the lines by dbrady at http://zevo.getgreenbytes.com/forum/viewtopic.php?p=700#p700 and following the disk device naming scheme at http://zevo.getgreenbytes.com/wiki/pmwiki.php?n=Site.DiskDeviceNames


    1
    ls /dev/disk* | xargs -n 1 -t sudo zdb -l | grep GPTE_
    grahamperrin · 2012-10-06 20:19:45 5
  • Substitute for #11720. Can probably be made even shorter and simpler. (One possible shorter form is sketched after this entry.)


    1
    ls -l /dev/disk/by-id/ | grep '/sda$' | grep -o 'ata[^ ]*'
    michelsberg · 2013-01-16 17:28:11 7
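
    One possible shorter form, assuming GNU find, which matches the symlinks by their target instead of grepping ls output:

    find /dev/disk/by-id -lname '*/sda' -name 'ata-*' -printf '%f\n'
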
  • I find it useful, when cleaning up and deleting unwanted files to make more space, to list in size order so I can delete the largest first. Note that using "q" shows files with non-printing characters in the name. In this sample output (above), I found two copies of the same iso file, both of which are immediate "delete candidates" for me. (A du-based variant that also descends into subdirectories is sketched after this entry.)


    1
    ls -qahlSr # list all files in size order - largest last
    mpb · 2013-03-13 09:52:07 29
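
    The command above only covers the current directory; a du-based sketch in the same spirit that also descends into subdirectories (sort -h needs GNU coreutils):

    du -ah . | sort -h | tail -20
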
  • zsh: list files greater than 100 MB, sorted by size, and take five of them with tail. '**/*' is recursive, and the glob qualifiers select '.' = regular file and 'L' = size, followed by 'm' = megabytes and '+100' = more than 100. (A version that does the ordering and selection entirely with glob qualifiers is sketched after this entry.)


    1
    ls -Sh **/*(.Lm+100) | tail -5
    khayyam · 2013-03-21 20:22:11 4
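
    The ordering and selection can also be done with glob qualifiers alone: 'OL' orders the matches by size, largest first, and '[1,5]' then keeps the first five, so no tail is needed. A sketch, assuming zsh:

    ls -lh **/*(.Lm+100OL[1,5])
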
  • Made usable on OS X with filenames containing spaces. Note: it will still break if filenames contain newlines... possible, but who does that?!


    1
    svn ls -R | egrep -v -e "\/$" | tr '\n' '\0' | xargs -0 svn blame | awk '{print $2}' | sort | uniq -c | sort -nr
    rymo · 2013-04-10 19:37:53 5
  • Like top, but for files


    1
    watch -d -n 2 'df; ls -FlAt;'
    G2G · 2013-09-17 05:44:47 6
  • Lists the 20 most recently modified PDF files in the current directory and all subdirectories, newest first. (A variant that keeps the sort global even with very many files is sketched after this entry.)


    1
    find . -name '*pdf*' -print0 | xargs -0 ls -lt | head -20
    fuats · 2013-10-03 21:58:51 9
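
    With very many matches, xargs may split them across several ls invocations, each sorted on its own; a GNU find sketch that sorts the whole list in one pass by modification time:

    find . -name '*pdf*' -printf '%T@ %p\n' | sort -rn | head -20 | cut -d' ' -f2-
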
  • Displays a list of all file extensions in the current directory and how many files there are of each extension, in ascending order of count (case-insensitive).


    1
    ls | tr '[:upper:]' '[:lower:]' | grep -oP '\.[^\.]+$' | sort | uniq -c | sort
    icefyre · 2014-01-30 11:37:27 10

  • 1
    npm ls -g|grep "^[├└]\(.\+\)\?[┬─] "
    lucasmezencio · 2014-02-03 21:50:39 7
  • Very quick! Based only on the content sizes and the character counts of the filenames. If both numbers are equal, then two (or more) directories are most likely identical; if in doubt, check with diff -rq path_to_dir1 path_to_dir2. The AWK function was taken from http://stackoverflow.com/questions/2912224/find-duplicates-lines-based-on-some-delimited-fileds-on-line


    1
    find . -type d| while read i; do echo $(ls -1 "$i"|wc -m) $(du -s "$i"); done|sort -s -n -k1,1 -k2,2 |awk -F'[ \t]+' '{ idx=$1$2; if (array[idx] == 1) {print} else if (array[idx]) {print array[idx]; print; array[idx]=1} else {array[idx]=$0}}'
    knoppix5 · 2014-02-25 22:50:09 27
  • Tested with GNU and BSD ls.


    1
    ls -la | grep ^l
    gatopan · 2014-08-11 03:06:48 8
  • With this version, you can list all symlinks in the current directory (no subdirectories), and have it list both the link and the target. (A space-safe variant using find -exec is sketched after this entry.)


    1
    ls -l `find ~ -maxdepth 1 -type l -print`
    skittleys · 2015-01-04 02:36:47 7
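
    A sketch that survives spaces in link names by letting find run ls directly instead of relying on command substitution:

    find ~ -maxdepth 1 -type l -exec ls -l {} +
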
  • List all .txt files ordered by access time (--time=atime), newest first.


    1
    ls -lt --time=atime *.txt
    miccaman · 2015-05-21 21:03:44 10
  • Add a course-name prefix to lecture PDFs. (A glob-based variant is sketched after this entry.)


    1
    ls *.pdf | while read file; do newfile="CS749__${file}"; mv "${file}" "${newfile}"; done;
    programmer · 2016-04-19 11:04:47 16
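
    A glob-based sketch of the same rename that avoids parsing ls; the -n flag (supported by GNU and BSD mv) refuses to overwrite an existing file:

    for file in *.pdf; do mv -n "$file" "CS749__${file}"; done
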
