When I'm testing scripts or programs, they sometimes end up using more memory than anticipated. When that happens, the computer nearly halts because of swap usage, and sometimes I have to press Magic SysRq+REISUB to reboot.
So, I was looking for a way to limit memory usage per script and found out that ulimit can limit memory. If you run it this way:
$ ulimit -v 1000000
$ scriptname
Then the new memory limit applies to that shell and anything it launches. I think changing the limit within a subshell is more flexible, since it won't interfere with your current shell's ulimit settings.
note: ulimit -v takes a value in kilobytes, so -v 1000000 corresponds to approximately 1 GB of virtual memory
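To limit just one run without touching the current shell at all, a minimal sketch (assuming bash; scriptname stands in for your own script) is to set the limit inside a subshell:

$ (ulimit -v 1000000; ./scriptname)    # limit lives only inside the parentheses

The limit dies with the subshell, so the parent shell keeps its original settings.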
The default stack size is 10 MB, which makes a multithreaded app fill memory quickly. On my PC I was able to create only about 300 threads with the default stack size. Lowering the stack size to what your threads actually use lets you create more: with 64 KB, for example, I was able to create more than 10,000 threads. Obviously, your threads shouldn't need more than 64 KB of stack!
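A minimal sketch of that approach (ulimit -s takes kilobytes; my_threaded_app is a hypothetical program name):

$ ulimit -s              # show the current stack size limit, typically 10240 (10 MB)
$ ulimit -s 64           # lower it to 64 KB for this shell and its children
$ ./my_threaded_app      # each new thread now gets a 64 KB default stack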
It is not uncommon to receive a "Too many open files" error; ulimit -n lets you change the open-file limit for a user. The command can be put into /etc/profile so that all users will get the change.
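For example (4096 is just an illustrative value; a non-root user can only raise the soft limit up to the hard limit):

$ ulimit -n              # show the current open-file limit, often 1024
$ ulimit -n 4096         # raise it for this shell; put this line in /etc/profile to apply it to all users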
Death to the user limits!