Commands tagged lsof (38)

  • Show only the names of the apps that are using the Internet.


    32
    lsof -P -i -n | cut -f 1 -d " "| uniq | tail -n +2
    edo · 2009-09-19 21:23:54 2
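
  Since uniq only collapses adjacent duplicates, the same name can slip through when entries interleave; a minimal variant (assuming standard awk and sort) that skips the header and dedupes globally:

    lsof -P -i -n | awk 'NR>1 {print $1}' | sort -u
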
  • This command is more portable than its cousin netstat. It works well on all the BSDs, GNU/Linux, AIX and Mac OS X. You won't find lsof by default on Solaris or HP-UX, but packages exist around the web for installation if needed, and the command works as shown. This is the most portable command I can find that lists listening ports and their associated PIDs.


    29
    lsof -Pan -i tcp -i udp
    atoponce · 2010-06-07 15:22:44 0
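
  To narrow this to listening TCP sockets only, reasonably recent lsof versions also accept a protocol-state filter; a sketch, assuming such a version:

    lsof -Pan -iTCP -sTCP:LISTEN
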
  • List all files opened by a particular command, selected by its command name.


    25
    lsof -c dhcpd
    d4n3sh · 2009-04-17 07:18:38 0
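
  Note that -c matches the beginning of the command name, so a hypothetical dhcpd-helper would match too. Modern lsof accepts a regular expression between slashes when you need an exact match; a sketch, assuming a version with regex support:

    lsof -c /^dhcpd$/
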
  • It may be helpful when you need to unmount a directory and some process keeping the folder busy is preventing you from doing so. lsof may process the +D option slowly and may require a significant amount of memory, because it descends the full directory tree. On the other hand, it follows neither symlinks nor other file systems.


    13
    lsof +D <dirname>
    ztank1013 · 2011-09-18 00:01:25 0
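
  If +D is too slow on a large tree and you only need the unmount question answered, fuser (from psmisc on Linux, where available) can list everything holding the mount point:

    fuser -vm <dirname>
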
  • Just refining the previous proposal for this check, showing awk's ability to do more complex math (2^20 instead of /1024/1024). We don't need to declare a variable before running lsof, because $(command) expands to its output. Also, awk can filter by regexp instead of calling grep. I replaced the messy 0.0000xxxx output with a more readable form, dropping all fractional digits and files smaller than 1 MB.


    10
    lsof -p $(pidof firefox) | awk '/.mozilla/ { s = int($7/(2^20)); if(s>0) print (s)" MB -- "$9 | "sort -rn" }'
    tzk · 2010-01-13 22:45:53 1
  • Instead of force-unmounting, it's better to find the processes that are currently using the relevant folder. Taken from: http://www.linuxhowtos.org/Tips%20and%20Tricks/findprocesses.htm


    9
    lsof /folder
    dotanmazor · 2010-09-06 05:10:06 0
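
  As a follow-up, lsof's -t flag prints bare PIDs, which feed straight into kill; a sketch (xargs -r is a GNU extension, and the kill is obviously destructive):

    lsof -t /folder | xargs -r kill
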
  • While `lsof` will work, why not use the tool designed explicitly for this job? (If not run as root, you will only see the names of your own processes.)


    8
    netstat -plnt
    DopeGhoti · 2011-09-30 19:56:32 0
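
  On modern Linux systems that ship without net-tools, ss from iproute2 understands the same flag letters:

    ss -plnt
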
  • Imagine you've started a long-running process that involves piping data, but you forgot to add the progress-bar option to a command, e.g. xz -dc bigdata.xz | complicated-processing-program > summary. This command uses lsof to see how much data xz has read from the file. lsof -o0 -o -Fo FILENAME displays offsets (-o), in decimal (-o0), in parseable form (-Fo), and outputs something like p12607 f3 o0t45187072, i.e. process ID (p), file descriptor (f), offset (o). We stat the file to get its size (stat -c %s FILENAME), then plug the values into awk: split the line at the letter t (-Ft), define a variable for the file's size (-v s=$(stat ...)), and only work on the offset line (/^o/). Note this command was tested using the Linux version of lsof; because it uses lsof's batch option (-F), it may be portable. Thanks to @unhammer for the brilliant idea.


    7
    F=bigdata.xz; lsof -o0 -o -Fo $F | awk -Ft -v s=$(stat -c %s $F) '/^o/{printf("%d%%\n", 100*$2/s)}'
    flatcap · 2015-09-19 22:22:43 1
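
  To turn this into a crude progress monitor, rerun it in a loop; a minimal sketch, assuming the same GNU/Linux lsof and stat as above:

    F=bigdata.xz; while sleep 10; do lsof -o0 -o -Fo "$F" | awk -Ft -v s=$(stat -c %s "$F") '/^o/{printf("%d%%\n", 100*$2/s)}'; done
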
  • List all files opened by a particular process ID (PID).


    6
    lsof -p 15857
    d4n3sh · 2009-04-17 07:16:03 0
  • Check which files are opened by Firefox, then sort by largest size (in MB). You can see all opened files by replacing the grep pattern with "/". Useful if you'd like to debug and check which extensions or files are using too much memory in Firefox.


    6
    FFPID=$(pidof firefox-bin) && lsof -p $FFPID | awk '{ if($7>0) print ($7/1024/1024)" MB -- "$9; }' | grep ".mozilla" | sort -rn
    josue · 2009-08-16 08:58:22 3

  • 5
    lsof -i :22
    bucciarati · 2011-03-11 16:48:37 0

  • 4
    lsof -i | grep -i estab
    P17 · 2009-05-06 17:45:55 0
  • When trying to play a sound you may sometimes get an error saying that your sound card is already in use, but not by which process. This lists all processes playing sound, which is useful for killing processes you no longer need but that keep hold of your sound card.


    4
    lsof | grep pcm
    Miles · 2010-05-16 12:12:01 0

  • 4
    lsof -Pn | grep LISTEN
    pykler · 2011-09-29 18:21:51 0
  • Change 24073 to your PID.


    3
    lsof -nP +p 24073 | grep -i listen | awk '{print $1,$2,$7,$8,$9}'
    icreed · 2009-05-26 20:47:14 1
  • For when a program is hogging the sound output: this finds and kills it. Add -9 to the end for wedged processes. Insert 'grep ^program' after lsof to filter by program name.


    2
    lsof /dev/snd/pcm*p /dev/dsp | awk ' { print $2 }' | xargs kill
    alustenberg · 2010-07-23 20:24:16 0
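
  Where fuser is available (psmisc on Linux), it can do the find-and-kill in one step; a sketch (note that fuser -k sends SIGKILL by default, so use with care):

    fuser -k /dev/snd/pcm* /dev/dsp
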
  • The output of lsof is piped to txt2html, which converts it to HTML. (Requires the Perl module HTML::TextToHTML.)


    2
    lsof -nPi | txt2html > ~/lsof.html
    zlemini · 2011-07-28 14:01:21 4
  • Maybe this will help you monitor your load balancers or reverse proxies, if you happen to use them. It is useful for discovering timeouts, and it will let you know if one or more of your application servers is not connected.


    2
    watch -n 1 "/usr/sbin/lsof -p PID |awk '/TCP/{split(\$8,A,\":\"); split(A[2],B,\">\") ; split(B[1],C,\"-\"); print A[1],C[1],B[2], \$9}' | sort | uniq -c"
    ideivid · 2011-08-12 19:16:38 0
  • A fast and easy way to find all established TCP connections without using the netstat command.


    2
    lsof -i -n | grep ESTABLISHED
    techie · 2013-04-03 09:14:09 0
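
  The same view without lsof or netstat, using ss's state filter from iproute2:

    ss -tn state established
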
  • Say you started "xzcat bigdata.xz | complicated-processing-program > summary" an hour ago, and you of course forgot to enable progress output (you could have put "awk 'NR%1000==0{print NR>"/dev/stderr"}{print}'" in the pipeline, but it's too late for that now). But you really want some idea of how far along your program is. Run the above command to see what percentage of the file xzcat has read so far. Note that this is for the GNU/Linux version of lsof; the one found on e.g. Darwin has slightly different output, so the awk part may need some tweaks.


    2
    f=bigdata.xz; calc "round($(lsof -o0 -o "$f"|awk '{o=substr($7,3)}END{print o}')/$(stat -c %s "$f")*100)"
    unhammer · 2015-09-19 18:27:12 3
  • In addition to generating the list of current connections, it also opens it in your default browser on GNOME.


    1
    lsof -nPi | txt2html > ~/lsof.html && gnome-open ~/lsof.html
    hippie · 2011-07-28 21:59:07 0
  • Check open TCP and UDP ports.


    1
    netstat -plntu
    bolthorn0 · 2011-10-01 12:16:38 0
  • Show the ten commands with the most open files; +c 15 widens the COMMAND column to 15 characters so names aren't truncated before counting.


    1
    lsof +c 15 | awk '{print $1}' | sort | uniq -c | sort -rn | head
    SEJeff · 2012-05-25 16:31:46 0
  • You could also specify a port number: lsof -ni TCP:80


    1
    lsof -ni TCP
    tsener · 2013-03-20 22:51:16 0
  • While the posted solution works, I'm a bit uneasy about the "%d" part. This would be the hyper-correct approach: lsof|gawk '$4~/txt/{next};/REG.*\(deleted\)$/{sub(/.$/,"",$4);printf ">/proc/%s/fd/%s\n", $2,$4}' Oh, and you have to pipe the result to sh if you want it to actually trim the files. ;) Btw, this approach also removes false negatives (the OP's command skips any deleted files with "txt" in their name); see the full pipeline after this entry.


    1
    lsof|gawk '$4~/txt/{next};/REG.*\(deleted\)$/{printf ">/proc/%s/fd/%d\n", $2,$4}'
    wejn · 2014-03-11 10:40:32 5
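
  Putting that correction together with the suggested pipe to sh gives the complete one-liner, assembled from the comment above. It is destructive: the redirections truncate the deleted-but-still-open files to reclaim their space.

    lsof | gawk '$4~/txt/{next};/REG.*\(deleted\)$/{sub(/.$/,"",$4); printf ">/proc/%s/fd/%s\n", $2, $4}' | sh
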