Commands matching memory (148)

  • "That's it. Not much to see here. The first command writes any cache data that hasn't been written to the disk out to the disk. The second command tells the kernel to drop what's cached. Not much to it. This invalidates the write cache as well as the read cache, which is why we have the sync command first. Supposedly, it is possible to have some cached write data never make it to disk, so use it with caution, and NEVER do it on a production server. You could ... but why take the risk? As long as you are running a post 2.6.16 kernel,..." Source: http://ubuntuforums.org/showpost.php?p=3621283&postcount=1


    -1
    sudo sync && sudo echo 3 | sudo tee /proc/sys/vm/drop_caches
    StephenJudge · 2012-03-17 08:27:58 13
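
Why the `tee` in the middle? In `sudo echo 3 > /proc/sys/vm/drop_caches` the redirection is performed by the unprivileged calling shell, so the write fails; piping through `sudo tee` keeps the write under root. A minimal sketch of the same pattern, using a temp file in place of /proc/sys/vm/drop_caches:

```shell
# tee performs the write itself, so running tee under sudo is what lets the
# redirection-equivalent happen as root. Demonstrated on a harmless temp file:
target=$(mktemp)
echo 3 | tee "$target" > /dev/null   # with sudo: echo 3 | sudo tee "$target"
cat "$target"                        # prints: 3
rm -f "$target"
```

An equivalent form is `sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'`, which runs the whole redirection inside a root shell.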
  • This is just another example of what the nocache package is useful for, which I described in http://www.commandlinefu.com/commands/view/12357/ and which provides these commands: nocache <command to run with page cache disabled>; cachedel <single file to remove from page cache>; cachestats <single file> # to get the current cache state. Often we do not want to disable caching, because a command involves several file reads and operations would be slowed down a lot by massive disk seeks. But after our operations the file sits in the cache needlessly if we know we are very likely never touching it again. cachedel helps to reduce cache pollution, i.e. it keeps frequently required files relevant for desktop interaction (libs/configs/etc.) from being evicted from RAM. So we can run cachedel after each data-intensive job. Today I run commands like these: <compile job> && find . -type f -exec cachedel '{}' \; &> /dev/null # no need to keep all source code and tmp files in memory; sudo apt-get dist-upgrade && find /var/cache/apt/archives/ -type f -exec cachedel '{}' \; # Debian/*buntu system upgrade; dropbox status | grep -Fi idle && find ~/Dropbox -type f -exec cachedel '{}' \; &> /dev/null # if Dropbox is idle, remove sync'ed files from cache. See: https://github.com/Feh/nocache http://packages.debian.org/search?keywords=nocache http://packages.ubuntu.com/search?keywords=nocache http://askubuntu.com/questions/122857


    -1
    find /path/to/dir -type f -exec cachedel '{}' \;
    michelsberg · 2013-12-12 18:22:54 8
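
The `find -exec` shape above applies cachedel to every regular file under a directory. A sketch of the same shape with `echo` as a hypothetical stand-in for cachedel, so it runs even without the nocache package installed:

```shell
# echo stands in for cachedel here; with the nocache package installed,
# substitute cachedel to actually evict each file from the page cache.
dir=$(mktemp -d)
touch "$dir/f1" "$dir/f2"
find "$dir" -type f -exec echo would-cachedel '{}' \;   # one line per file
rm -rf "$dir"
```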
  • This command shows a high-level overview of system memory and usage, refreshed every 10 seconds. Change -n 10 to your desired refresh interval.


    -1
    watch -n 10 free -m
    Darkstar · 2014-01-04 10:10:15 12

  • -1
    CMD=chrome ; ps h -o pmem -C $CMD | awk '{sum+=$1} END {print sum}'
    pdxdoughnut · 2014-01-08 23:05:09 8
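
`ps h -o pmem -C $CMD` prints one %MEM value per matching process (h suppresses the header), and awk totals them. The summing stage alone, fed canned values:

```shell
# Sum a column of numbers the same way the command above does:
printf '1.5\n2.0\n0.5\n' | awk '{sum+=$1} END {print sum}'   # prints: 4
```

The result is the combined %MEM of all chrome processes, i.e. a percentage of physical RAM, not an absolute size.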

  • -1
    echo "Memory:" $(dmidecode --type memory | grep " MB" | awk '{sum += $2; a=sum/1024} END {print a}') "GB"
    bp · 2014-02-18 06:20:34 9
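
`dmidecode --type memory` (run as root) prints one `Size: <n> MB` line per populated DIMM; grep keeps those lines and awk sums field 2, dividing by 1024 for GB. The grep/awk stages on canned input (note that newer dmidecode versions may report large modules in GB, which this pipeline would miss):

```shell
printf 'Size: 4096 MB\nSize: 4096 MB\n' \
  | grep " MB" | awk '{sum += $2; a=sum/1024} END {print a}'   # prints: 8
```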
  • The left-most column is the PID, the middle column is the virtual memory being consumed (VSZ, in KiB), and the right-most is the process name.


    -1
    ps -e -o pid,vsz,comm= | sort -n -k 2
    jmorganwalker · 2014-05-14 00:36:50 12
  • Report memory and swap space utilization statistics, e.g. memory free/used and swap free/used.


    -2
    sar -r
    sharfah · 2009-05-19 11:47:38 10
  • Probably more trouble than it's worth, but it worked for an obscure need.


    -2
    memnum=$(awk '{ print $2 }' /proc/meminfo |head -n1); echo "$memnum / 1024 / 1024" | bc -l
    wekoch · 2011-11-08 16:28:25 3

  • -2
    free -m
    jmorganwalker · 2014-06-05 19:51:51 7

  • -2
    free && sync && echo 3 > /proc/sys/vm/drop_caches && free
    ironmarc · 2016-11-02 08:51:01 23

  • -3
    biosdecode
    theIndianMaiden · 2009-08-16 02:28:00 14
  • Taskkill: as the name suggests, this utility is used to see running processes and to kill one or more of them, either by PID (process ID) or by image name (the name under which the process is present in the system and being executed). Results can also be filtered on the basis of user name, PID, image name, CPU time, memory usage etc. at the time of terminating a process.

    Syntax: taskkill [/s <computer> [/u [<domain>\]<user> [/p [<password>]]]] {[/fi <filter>] [...] [/pid <processID> | /im <imagename>]} [/f] [/t]

    Parameters:
    /s <computer>  - name or IP of the remote computer; if not provided, the local computer is used. Do not use backslashes in the value.
    /u [<domain>\]<user>  - run the command under the permissions of the given account; if not provided, it runs under the permissions of the logged-on user. /u can be used only if /s is specified.
    /p <password>  - password of the account given with /u; prompted for if omitted.
    /fi <filter>  - apply a filter to select a set of tasks. The wildcard character (*) can be used to specify all tasks or image names. Filter names are described below.
    /pid <processID>  - PID of the process to be killed.
    /im <imagename>  - image name of the process to be terminated; the wildcard character (*) can be used to specify all image names.
    /t  - terminate the whole process tree, including all child processes started by it.
    /f  - force termination. It can be omitted for remote processes, which are terminated forcefully by default.

    Filters (the filter names match the column names in Task Manager):
    STATUS       eq, ne                  RUNNING | NOT RESPONDING | UNKNOWN
    IMAGENAME    eq, ne                  name of image
    PID          eq, ne, gt, lt, ge, le  process ID number
    SESSION      eq, ne, gt, lt, ge, le  session number
    CPUTIME      eq, ne, gt, lt, ge, le  CPU time as HH:MM:SS, where MM and SS are between 0 and 59 and HH is any unsigned number
    MEMUSAGE     eq, ne, gt, lt, ge, le  memory usage (in KB)
    USERNAME     eq, ne                  any valid user name (User or Domain\User)
    SERVICES     eq, ne                  service name
    WINDOWTITLE  eq, ne                  window title
    MODULES      eq, ne                  DLL name
    (eq, ne, gt, lt, ge and le mean equal to, not equal to, greater than, less than, greater than or equal to and less than or equal to, respectively.)

    Points to note: the WINDOWTITLE and STATUS filters are not supported for remote processes. The wildcard (*) is accepted for /im only when a filter is applied. /f need not be specified for remote process termination, as remote processes are terminated forcefully by default. Do not pass a computer name to the HOSTNAME filter, as it will result in a shutdown and all processes being stopped. The tasklist command can be used to find a process ID (PID).

    Examples:
    taskkill /pid 3276  - terminate the process with PID 3276
    taskkill /pid 2001 /pid 2224 /pid 4083  - terminate several processes by PID
    taskkill /im wmplayer.exe  - terminate a process by image name, e.g. Windows Media Player
    taskkill /f /im explorer.exe /t  - terminate a process and all its child processes (the process tree)
    taskkill /f /fi "PID ge 1500" /im *  - terminate all processes with PID >= 1500, regardless of image name
    taskkill /pid 2521 /t /fi "USERNAME eq admin"  - terminate the process tree with PID 2521 started by the account admin
    taskkill /s serverpc /u administrator /p qu@dc()r3 /fi "IMAGENAME eq note*" /im *  - terminate all processes beginning with "note" on the remote system serverpc under user name administrator with password qu@dc()r3
    taskkill /f /fi "WINDOWTITLE eq paint"  - terminate the process whose window title is "paint"

    Source: http://unlock-windows.blogspot.com/2008/12/taskkill-command-line-utility.html


    -3
    Taskkill /?
    StephenJudge · 2011-10-01 17:47:11 2
  • When you can, avoid pipes: a single awk invocation does the whole job here.


    -3
    awk '{ printf "%.2f", $2/1024/1024 ; exit}' /proc/meminfo
    benoit_c_lbn · 2011-11-12 15:38:56 4
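
The first line of /proc/meminfo is `MemTotal: <n> kB`, so dividing field 2 by 1024 twice yields GiB, and `exit` stops awk after that line. The same expression run on a canned line:

```shell
printf 'MemTotal:       16777216 kB\n' \
  | awk '{ printf "%.2f", $2/1024/1024 ; exit}'   # prints: 16.00
```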
  • It clears caches from memory. It works fine on CentOS and Fedora. It will show you how much memory you need, for real.


    -4
    sync; echo 3 > /proc/sys/vm/drop_caches
    renich · 2011-04-26 21:12:06 9

  • -4
    echo 3 > /proc/sys/vm/drop_caches
    andreisid · 2012-08-05 19:35:14 6
  • This one-liner is based on this article: https://www.computerhope.com/issues/ch001307.htm


    -4
    lspci|grep -i "VGA Compatible Controller"|cut -d' ' -f1|xargs lspci -v -s|grep ' prefetchable'
    knoppix5 · 2020-09-11 14:04:10 271
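
Reading the pipeline left to right: lspci lists all devices, grep keeps the VGA controller line, cut extracts the bus address (field 1), `lspci -v -s` prints the verbose entry for that address, and the final grep keeps the prefetchable memory windows, whose sizes approximate the card's video RAM. The cut stage on a sample lspci line (hypothetical device string):

```shell
echo '01:00.0 VGA compatible controller: Example GPU' \
  | cut -d' ' -f1   # prints: 01:00.0
```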
  • Sometimes "ls" is just too slow, especially if you're having problems with terminal scroll speed, or if you're a speed freak. In these situations, do an echo * in the current directory to immediately see the directory listing. Do an echo * | tr ' ' '\n' if you want a column. Do an alias ls='echo *' if you want to achieve higher echelons of speed and wonder. Note that echo * is also useful on systems that are so low in memory that "ls" itself is failing - perhaps due to a memory leak that you're trying to debug.


    -5
    echo *
    kFiddle · 2009-04-17 21:40:58 9
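
Since the shell expands the glob itself, `echo *` prints the directory listing without running any external program. A sketch in a scratch directory:

```shell
dir=$(mktemp -d)
touch "$dir/a" "$dir/b" "$dir/c"
( cd "$dir" && echo * )                  # prints: a b c
( cd "$dir" && echo * | tr ' ' '\n' )   # one name per line
rm -rf "$dir"
```

Keep in mind that `*` does not match dotfiles, and unlike ls there are no columns, sorting options or colors.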
  • This application monitors the apps you use most often and loads them into memory along with their libraries and other dependencies. So when you launch Firefox, Thunderbird or OpenOffice, the display is immediate, as on a Mac.


    -7
    sudo apt-get install preload
    danfernbanck · 2009-02-13 19:34:57 10
  • This command kills all processes with 'SomeCommand' in the process name. There are other, more elegant ways to extract the process names from ps, but they are hard to remember and not portable across platforms. Use this command with caution, as you could accidentally kill other matching processes! xargs is particularly handy in this case because it makes it easy to feed the process IDs to kill, and it also ensures that you don't try to feed too many PIDs to kill at once and overflow the command-line buffer. Note that if you are attempting to kill many thousands of runaway processes at once you should use 'kill -9'; otherwise the system will try to bring each process into memory before killing it, and you could run out of memory. Typically when you want to kill many processes at once it is because you are already in a low-memory situation, so if you don't 'kill -9' you will make things worse.


    -7
    ps axww | grep SomeCommand | awk '{ print $1 }' | xargs kill
    philiph · 2009-02-28 17:48:51 11
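
The awk stage simply extracts the PID column from the matching ps lines; on systems that have it, `pkill SomeCommand` (or `pkill -9 SomeCommand`) performs the whole grep-and-kill in one step. The extraction stage on a canned ps line:

```shell
printf '1234 pts/0 S 0:00 SomeCommand --flag\n' \
  | awk '{ print $1 }'   # prints: 1234
```

Note the classic pitfall that grep matches its own process in the ps output; writing the pattern as `grep [S]omeCommand` avoids that.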
  • A short alias that gives an instant report on disk usage, memory and swap on our Linux systems.


    -9
    alias dfr='df;free'
    ximo88 · 2009-04-28 11:30:31 17

  • -11
    sudo cat /dev/mem > /dev/dsp
    eastwind · 2009-04-22 07:26:10 4
  • This is a useful command for when your OS is reporting less free RAM than it actually has. If terminated processes did not free their memory correctly, the previously allocated RAM might make things a bit sluggish over time. This command creates a huge file made of zeroes and then removes it, thus freeing the amount of memory occupied by the file in the RAM. In this example, the sequence will free up to 1 GB (1M * 1K) of unused RAM. It will not free memory which is genuinely being used by active processes.


    -11
    dd if=/dev/zero of=junk bs=1M count=1K
    guedesav · 2009-11-01 23:45:51 6
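
A scaled-down version of the same command (4 MiB instead of 1 GiB), with a size check and cleanup. Note that writing and then deleting a file of zeroes mainly churns the page cache; it cannot reclaim memory leaked by defunct processes:

```shell
dd if=/dev/zero of=junk bs=1M count=4 2>/dev/null   # bs=1m on BSD dd
wc -c < junk    # 4194304 bytes (4 MiB)
rm -f junk
```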
  • Makes RHEL / CentOS support 4 GB or more of RAM (memory) by installing the PAE kernel.


    -13
    yum install kernel-PAE
    svnlabs · 2010-02-17 16:34:09 4