Show the maximum amount of memory that was needed by a process at any time. My use case: having a long-running computation job on $BIG_COMPUTER and judging whether it will also run on $SMALL_COMPUTER. From http://man7.org/linux/man-pages/man5/proc.5.html :
VmHWM: Peak resident set size ("high water mark")
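The value can be read straight from /proc while the process runs; a minimal sketch, with 12345 standing in for the PID of interest:
grep VmHWM /proc/12345/status # prints the peak resident set size in kB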
This is just another example of what the nocache package is useful for. I described the package in http://www.commandlinefu.com/commands/view/12357/ ; it provides the commands
nocache <command to run with page cache disabled>
cachedel <single file to remove from page cache>
cachestats <single file> # to get the current cache state
Often we do not want to disable caching entirely, because a command involves several reads of the same files and would be slowed down a lot by massive disk seeks. But after our operations the file sits in the cache needlessly if we know we are very likely never touching it again.
cachedel helps to reduce cache pollution: without it, frequently required files relevant for desktop interaction (libs/configs/etc.) would be evicted from RAM to make room for data we no longer need.
So we can run cachedel after each data-intensive job. Today I run commands like these:
<compile job> && find . -type f -exec cachedel '{}' \; &> /dev/null # no need to keep all source code and tmp files in memory
sudo apt-get dist-upgrade && find /var/cache/apt/archives/ -type f -exec cachedel '{}' \; # Debian/*buntu system upgrade
dropbox status | grep -Fi idle && find ~/Dropbox -type f -exec cachedel '{}' \; &> /dev/null # if Dropbox is idle, remove sync'ed files from cache
https://github.com/Feh/nocache
http://packages.debian.org/search?keywords=nocache
http://packages.ubuntu.com/search?keywords=nocache
http://askubuntu.com/questions/122857
Translate strings from non-German to German (and vice versa) using LEO. Put it in your ~/.bashrc.
Usage:
leo words
To use a language other than English, pass an option:
leo -xx words
Valid language options:
ch - Chinese
en - English
es - Spanish
fr - French
it - Italian
pl - Polish
pt - Portuguese
ru - Russian
The other language will always be German!
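The original function is not reproduced here, but a minimal sketch could look like this; the dict.leo.org URL scheme (ende, frde, ...) is an assumption, and lynx just renders the result page as plain text:
leo() {
  # default to English-German; "-xx" selects another language pair
  local lang=en
  case "$1" in -??) lang="${1#-}"; shift;; esac
  local query="$*"
  # replace spaces by "+" so the words survive as a single URL
  lynx -dump "https://dict.leo.org/${lang}de/?search=${query// /+}"
}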
We all know...
nice -n19
for low CPU priority.
ionice -c3
for low I/O priority.
nocache can be useful in related scenarios, when we operate on very large files just a single time, e.g. a backup job. It advises the kernel that no caching is required for the involved files, so the existing file cache is not evicted, which would otherwise degrade performance of other, more typical file I/O, e.g. on a desktop.
http://askubuntu.com/questions/122857
https://github.com/Feh/nocache
http://packages.debian.org/search?keywords=nocache
http://packages.ubuntu.com/search?keywords=nocache
To evict a single file from the cache after the fact, you can do
cachedel <OneSingleFile>
To check the cache status of a file, do
cachestats <OneSingleFile>
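All three niceties can be combined for a one-off job; a sketch, with placeholder paths:
nice -n19 ionice -c3 nocache tar -czf /backup/home.tar.gz /home # low CPU priority, low I/O priority, no cache pollution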
Put it in your ~/.bashrc.
Usage:
google word1 word2 word3...
google '"this search gets quoted"'
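The submitted function itself is not shown above; a minimal sketch that opens the query in the default browser (the /search?q= URL format is Google's standard one; the original version may instead have dumped the results as text):
google() {
  local query="$*"
  # replace spaces by "+" so the whole query survives as one URL
  xdg-open "https://www.google.com/search?q=${query// /+}" &> /dev/null
}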
For slow flash memory (e.g. a cheap thumb drive), ext4 is the fastest stable file system for all use cases, with no relevant exception:
http://www.linuxplanet.com/linuxplanet/tutorials/7208/1
Since we can usually dispense with the benefits of a journal on this type of storage, this is a way to achieve the least awful I/O speed.
Disabling the journal for an existing ext4 partition can be achieved using
tune2fs -O ^has_journal /dev/sdXN
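For a fresh file system, the journal can be omitted at creation time with the same option syntax:
mkfs.ext4 -O ^has_journal /dev/sdXN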
Note that it is often recommended to format removable flash media with ext2, due to the lack of a journal. ext4 has many advantages over ext2 even without the journal, much better speed being one of them, so the only remaining use case for ext2 is compatibility with very old software.
I like it sorted... The 2> /dev/null was also needless, since a pipe passes only stdout on to the next command anyway.
Just realized how needless the 'ls' was... This version is also multilingual, since there is no need to grep for a special keyword ("nothing"/"nichts"/"rien"/"nada"...). And it makes use of all the available horizontal space.
Just starting to fall in love with mogrify.
mogrify can be used like convert. The difference is that mogrify overwrites the input files: http://www.imagemagick.org/www/mogrify.html Of course, other source colors can be used as well.
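For example (hypothetical colors; -fill sets the replacement color, -opaque selects the source color to replace):
mogrify -fill white -opaque '#c0c0c0' image.png # overwrites image.png in place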
Usage:
up N
I did not like two things about the submitted commands and fixed both here:
1) If I do cd - afterwards, I want to go back to the directory I was in before.
2) If I call up without an argument, I expect to go up one level.
It is sad that I need eval (at least in bash), but I think it's safe here.
eval is required because in bash brace expansion happens before variable substitution, see http://rosettacode.org/wiki/Repeat_a_string#Using_printf
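Putting that together, a sketch of the resulting function (the exact submitted version may differ):
up() {
  # default to one level when called without an argument (point 2)
  local n="${1:-1}"
  # eval makes {1..$n} brace-expand after $n has been substituted;
  # a single cd keeps "cd -" working as expected (point 1)
  cd "$(eval "printf '../%.0s' {1..$n}")"
}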
Substitute for #11720. Can probably be even shorter and easier.
It is often recommended to enclose capital letters in a BibTeX file in braces, so the letters will not be transformed to lower case, when imported from LaTeX. This is an attempt to apply this rule to a BibTeX database file.
DO NOT USE sed '...' input.bib > input.bib as it will empty the file!
How it works:
/^\s*[^@%]/
Apply the search-and-replace rule to lines that start (^) with zero or more white spaces (\s*), followed by any character ([...]) that is *NOT* a "@" or a "%" (^@%).
s=<some stuff>=<other stuff>=g
Search (s) for some stuff and replace by other stuff. Do that globally (g) for all matches in each processed line.
\([A-Z][A-Z]*\)\([^}A-Z]\|},$\)
Matches at least one uppercase letter ([A-Z][A-Z]*) followed by a character that is EITHER not "}" and not a capital letter ([^}A-Z]) OR (|) it actually IS a "}", which is followed by "," at the end of the line ($).
Putting regular expressions in escaped parentheses (\( and \), respectively) allows us to refer back to the matched parts later.
{\1}\2
Replace the matched string by "{", followed by part 1 of the matched string (\1), followed by "}", followed by the second part of the matched string (\2).
I tried this only with GNU sed, version 4.2.1.
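Assembled from the pieces above, the whole command reads (note that the output goes to a NEW file, as warned above):
sed '/^\s*[^@%]/ s=\([A-Z][A-Z]*\)\([^}A-Z]\|},$\)={\1}\2=g' input.bib > output.bib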
top accepts a comma-separated list of PIDs.
pgrep foo
may return several pids for process foobar footy01 etc. like this:
11427
12576
12577
sed puts "-p " in front and we pass a list to top:
top -p 11427 -p 12576 -p 12577
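One way the pieces fit together (a sketch, not necessarily the exact submitted command):
top $(pgrep foo | sed 's/^/-p /')
Alternatively, pgrep can emit the comma-separated form directly: top -p $(pgrep -d ',' foo)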
[ 2000 -ge "$(free -m | awk '/buffers.cache:/ {print $4}')" ]
returns true if less than 2000 MB of RAM are available, so adjust this number to your needs.
[ $(echo "$(uptime | awk '{print $10}' | sed -e 's/,$//' -e 's/,/./') >= $(grep -c ^processor /proc/cpuinfo)" | bc) -eq 1 ]
returns true if the current machine load is at least equal to the number of CPUs.
If either of the tests returns true, we wait 10 seconds and check again. If both tests return false, i.e. 2000 MB are available and the machine load falls below the number of CPUs, we start our command and save its output in a text file.
The ( ( ... ) & ) construct lets the command keep running in the background even if we log out. See http://www.commandlinefu.com/commands/view/3115/ .
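Assembled into a runnable sketch ("mycommand" and "out.txt" are placeholders, not part of the original):
( ( while [ 2000 -ge "$(free -m | awk '/buffers.cache:/ {print $4}')" ] || \
      [ $(echo "$(uptime | awk '{print $10}' | sed -e 's/,$//' -e 's/,/./') >= $(grep -c ^processor /proc/cpuinfo)" | bc) -eq 1 ]; do
    sleep 10 # too little free RAM or too much load: check again later
  done
  mycommand > out.txt ) & )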
When you remotely log in like "ssh -X UserA@host" and become a different user with "su UserB", X forwarding will not work anymore, since /home/UserB/.Xauthority does not exist. This uses UserA's information stored in .Xauthority to enable X forwarding for UserB. See http://prefetch.net/blog/index.php/2008/04/05/respect-my-xauthority/ for details.
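A sketch of the idea, run after su'ing to UserB (the path to UserA's home directory is an assumption):
xauth add $(xauth -f /home/UserA/.Xauthority list | tail -1) # import the magic cookie for the forwarded display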
No loop, only one call to grep, and the result is scrollable ("less is more", more or less...).
Route the output to notify-send to show nice messages on the desktop, e.g. the title and artist of the current radio stream.
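For example (hypothetical; mpc is the MPD client and prints the current track as a single line):
mpc current | xargs -r -I{} notify-send "Now playing" "{}"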