Use all the cores or CPUs when compiling

make -j 4
Force make to spawn as many compile jobs as specified (4 in the example), so that each can run on its own core or CPU and compilation proceeds in parallel. This can cut compile time roughly in half on a 2-core CPU, to roughly a quarter on a quad-core, and so on, provided the build parallelizes well.
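To avoid hard-coding the job count, it can be derived at run time. A portable sketch, assuming either GNU coreutils' nproc or the BSD/macOS sysctl is available:

```shell
# Use one job per logical CPU; nproc is GNU coreutils,
# `sysctl -n hw.ncpu` is the usual BSD/macOS fallback
make -j"$(nproc 2>/dev/null || sysctl -n hw.ncpu)"
```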

By: kovan
2009-08-05 22:50:57

These Might Interest You

  • Check whether hyperthreading is enabled by counting distinct physical cores. Unlike nproc (which counts logical CPUs), this awk approach works on any system that exposes /proc/cpuinfo.

    awk -F: '/^core id/ && !P[$2] { CORES++; P[$2]=1 }; /^physical id/ && !N[$2] { CPUs++; N[$2]=1 }; END { print CPUs*CORES }' /proc/cpuinfo
    emphazer · 2018-05-14 14:17:33 0
  • Save as a bash script named ondemand and run as root to set the ondemand CPU frequency governor on every core. Replace 'ondemand' in the argument with performance (or your preferred governor) to set that governor on all cores instead.

    for i in `awk '/^processor/ {print $3}' /proc/cpuinfo`; do cpufreq-set -c $i -g ondemand; done
    godmachine81 · 2011-12-31 01:44:18 0
  • Benchmark CPU cores by computing digits of pi with bc. For example, CPUBENCH 4 2500 runs 4 jobs of 2500 digits each; every core will run at 100% CPU and you can compare how fast they finish. With 50000 digits or more it can take hours or days.

    CPUBENCH() { local CPU="${1:-1}"; local SCALE="${2:-5000}"; { for LOOP in `seq 1 $CPU`; do { time echo "scale=${SCALE}; 4*a(1)" | bc -l -q | grep -v ^"[0-9]" & } ; done }; echo "Cores: $CPU"; echo "Digit: $SCALE" ;}
    emphazer · 2018-05-14 17:30:37 0
  • Using the output of 'ps' to determine CPU usage is misleading, as the CPU column in 'ps' shows CPU usage per process over the entire lifetime of the process. In order to get *current* CPU usage (without scraping a top screen) you need to pull some numbers from /proc/stat. Here, we take two readings, one second apart, determine how much IDLE time was spent across all CPUs, divide by the number of CPUs, and then subtract from 100 to get non-idle time.

    NUMCPUS=`grep ^proc /proc/cpuinfo | wc -l`; FIRST=`cat /proc/stat | awk '/^cpu / {print $5}'`; sleep 1; SECOND=`cat /proc/stat | awk '/^cpu / {print $5}'`; USED=`echo 2 k 100 $SECOND $FIRST - $NUMCPUS / - p | dc`; echo ${USED}% CPU Usage
    toxick · 2012-10-02 03:57:51 1
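The same /proc/stat technique can be written as a more readable sketch. This variant counts only user, nice, and system time as "busy" (ignoring iowait and irq), so the figure is an approximation:

```shell
# Current CPU usage from two /proc/stat samples one second apart.
# First line of /proc/stat: cpu user nice system idle iowait irq softirq ...
read -r _ user nice system idle _ < /proc/stat
busy1=$((user + nice + system)); idle1=$idle
sleep 1
read -r _ user nice system idle _ < /proc/stat
busy2=$((user + nice + system)); idle2=$idle
busy=$((busy2 - busy1)); idled=$((idle2 - idle1))
echo "$(( 100 * busy / (busy + idled) ))% CPU Usage"
```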

What Others Think

Is there an easy way to know how many CPUs you have? Then the command could be: make -j $(cat /proc/cpus)
matthewbauer · 459 weeks and 2 days ago
Your compilation only experiences an n-fold linear speedup (with n being the number of CPUs/cores) if your code has only parallel components and no serial components (dependencies in your code). Even a slight serial component (1-2%) greatly limits the speedup. This is the essence of Amdahl's Law.
DeusExMachina · 459 weeks and 1 day ago
@matthewbauer in Linux you could do something like make -j $(grep -c ^processor /proc/cpuinfo). It does no harm to use a higher number than the actual number of cores though. @DeusExMachina: true but usually the speed increase is linear or nearly linear, because AFAIK in Makefiles interdependencies only exist between targets, so all the source files of each target can be compiled in parallel.
kovan · 459 weeks and 1 day ago
From my make manpage, "If the -j option is given without an argument, make will not limit the number of jobs that can run simultaneously." That suggests this command shouldn't help at all. Am I wrong?
tremby · 456 weeks and 4 days ago
Oh, facepalm. I read it (more than once) as "If the -j option is not given". Never mind.
tremby · 456 weeks and 4 days ago
