echo *

A faster ls

Sometimes "ls" is just too slow, especially if you're having problems with terminal scroll speed, or if you're a speed freak. In these situations, do an echo * in the current directory to immediately see the directory listing. Do an echo * | tr ' ' '\n' if you want a column. Do an alias ls='echo *' if you want to achieve higher echelons of speed and wonder. Note that echo * is also useful on systems that are so low on memory that "ls" itself is failing - perhaps due to a memory leak that you're trying to debug.
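A quick sketch of the three forms, run in a throwaway directory (the filenames are illustrative). One caveat worth seeing up front: the tr variant splits on every space, so filenames containing spaces get mangled, while printf '%s\n' * prints each expanded word on its own line safely.

```shell
# Scratch directory with a few sample files (names are illustrative).
dir=$(mktemp -d)
cd "$dir"
touch alpha beta gamma

# One line, space-separated. echo is a shell builtin and * is expanded
# by the shell itself, so no external process is spawned.
echo *                 # -> alpha beta gamma

# One name per line -- but tr also splits filenames that contain spaces.
echo * | tr ' ' '\n'

# printf emits each expanded word separately, so spaces in names survive.
printf '%s\n' *

cd - >/dev/null
rm -rf "$dir"
```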

By: kFiddle
2009-04-17 21:40:58

These Might Interest You

  • A new way to truncate a text file in place with dd; faster than head, sed, or awk when the file is big.

    :|dd of=./ssss.txt seek=1 bs=$(($(stat -c%s ./ssss.txt)-$(tail -n 2 ./ssss.txt|wc -c)))
    ri0day · 2011-10-17 07:53:00 0
  • Adds high-performance, lightweight lz4 compression to speed the transfer of files over a trusted network link. Using (insecure) netcat results in a much faster transfer than using an ssh tunnel because of the lack of overhead. Also, LZ4 is as fast or faster than LZO, much faster than gzip or LZMA, and in a worst-case scenario, incompressible data grows by only 0.4% in size. Using LZMA or gzip compressors makes more sense in cases where the network link is the bottleneck, whereas LZ4 makes more sense if CPU time is more of a bottleneck.

    On target: nc -l 4000 | lz4c -d - | tar xvf -
    On source: tar -cf - . | lz4c | nc target_ip 4000
    baitisj · 2014-08-02 05:09:30 0
  • Useful for moving many files (thousands or millions) over ssh. Faster than scp because you save a lot of TCP connection establishments (SYN/ACK packets). On a fast LAN (I have just tested gigabit Ethernet) it is faster not to compress the data, so the command would be: tar -cf - /home/user/test | ssh user@sshServer 'cd /tmp; tar xf -'

    tar -cf - /home/user/test | gzip -c | ssh user@sshServer 'cd /tmp; tar xfz -'
    esplinter · 2009-08-24 18:35:38 6
  • Faster and more convenient than [Esc]

    Ctrl + [
    light13 · 2010-12-13 00:46:12 4
  • Use this if you can't type repeated killall commands fast enough to kill rapidly spawning processes. If a process keeps spawning copies of itself too rapidly, it can do so faster than a single killall can catch and kill them. Retyping the command at the prompt can be too slow as well, even with command-history retrieval. Chaining a few killalls on a single command line starts the next killall more quickly: the first killall gets most of the processes, except for some that were starting up in the meanwhile; the second gets most of the rest; and the third mops up.

    killall rapidly_spawning_process ; killall rapidly_spawning_process ; killall rapidly_spawning_process
    unixmonkey7434 · 2010-05-20 00:26:10 2
  • I know it's not much, but it's very useful in time-consuming scripts (cron, rc.d, etc.).

    echo *
    grep · 2009-02-16 21:20:13 2
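The dd truncation entry above deserves a note on how it works: with an empty stdin, dd writes nothing, but by default (no conv=notrunc) it still truncates its output file at the seek offset. Setting seek=1 with bs equal to the size you want to keep therefore chops the file at that byte. A minimal sketch, truncating the last two lines in place (stat -c%s assumes GNU coreutils; the file is a throwaway):

```shell
# Create a throwaway four-line file.
f=$(mktemp)
printf 'line1\nline2\nline3\nline4\n' > "$f"

# Bytes occupied by the last 2 lines:
tail_bytes=$(tail -n 2 "$f" | wc -c)
# Total size minus those bytes = the offset where the file should now end:
keep=$(( $(stat -c%s "$f") - tail_bytes ))

# Empty stdin means dd writes nothing, but it still truncates the output
# file at seek*bs bytes -- here, everything after the first two lines.
: | dd of="$f" seek=1 bs="$keep" 2>/dev/null

cat "$f"          # -> line1 and line2 remain
rm -f "$f"
```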

What Others Think

Yes, it'll print all filenames quite fast, all on one line, and the use of tr(1) won't work if there are (ugh) spaces in filenames. Also, mapping ls to "echo *" is not a good idea at all. Too many scripts depend on such a vital command as ls(1) and will break in some way or another. Things stored in /bin/ are essential on a *NIX system and should not be tampered with.
sunny32768 · 474 weeks and 3 days ago
This might get you out of a scrape in rare circumstances... but don't alias it! In addition to the previous poster's comments, this *will* fail on directories that contain many files, as the wildcard will expand beyond the size of the shell parser's command-line buffer.
animoid · 474 weeks and 3 days ago
Too much typing, and what are you gaining really? Is your hardware that slow that you notice the difference?
atoponce · 474 weeks and 3 days ago
Here's what you gain:

    ls                    real 0m0.007s  user 0m0.000s  sys 0m0.008s
    echo *                real 0m0.002s  user 0m0.000s  sys 0m0.004s
    echo * | tr ' ' '\n'  real 0m0.008s  user 0m0.004s  sys 0m0.004s

I don't notice it much...
brie · 431 weeks and 3 days ago
Timings using an Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz, Ubuntu 9.10 Karmic 64-bit, using bash in gnome-terminal:

    ls /usr/lib/       real 0m0.563s  user 0m0.030s  sys 0m0.010s
    echo /usr/lib/*    real 0m0.086s  user 0m0.020s  sys 0m0.000s

Using bash on the first console (the infamous Ctrl+F1): ls takes 0.25s, echo takes 0.024s. It seems that "ls" output time is about 6 times "echo" output time. To me this proves that echo is faster than ls in near-pure computational time (in bash on gnome-terminal), with even better values on the first console (echo is 10 times faster than ls). I totally disagree with aliasing ls, for the obvious piping issues it would cause. I find this solution very interesting when I simply have to parse folder contents as pure strings.
LucaCappelletti · 430 weeks and 6 days ago
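The comparison above is easy to reproduce. Redirecting to /dev/null isolates the computation from terminal rendering speed (which is exactly the overhead the original tip is trying to dodge); /usr/lib is just an example of a large directory:

```shell
# Time the external command: fork, exec, readdir, sort, column layout.
time ls /usr/lib/ > /dev/null

# Time the builtin: the shell expands the glob and echoes it -- no fork,
# no exec. Note the output format differs (full paths, one line).
time echo /usr/lib/* > /dev/null
```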

