Commands by michelsberg (19)

  • Show the maximum amount of memory that was needed by a process at any time. My use case: having a long-running computation job on $BIG_COMPUTER and judging whether it will also run on $SMALL_COMPUTER. From http://man7.org/linux/man-pages/man5/proc.5.html: VmHWM is the peak resident set size ("high water mark").


    2
    grep VmHWM /proc/$(pgrep -d '/status /proc/' FOO)/status
    michelsberg · 2014-11-05 15:06:29 8
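The pgrep -d trick above: -d sets the delimiter printed between multiple PIDs, and here the delimiter is the string '/status /proc/', so for several matching processes the expansion yields one /proc/<PID>/status path per process. A plainer equivalent as a loop (FOO is a placeholder process name):

    for pid in $(pgrep FOO); do grep VmHWM "/proc/$pid/status"; done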
  • This is just another example of what the nocache package is useful for, which I described in http://www.commandlinefu.com/commands/view/12357/ and which provides the commands:

    nocache <command>    # run a command with the page cache disabled
    cachedel <file>      # remove a single file from the page cache
    cachestats <file>    # get the current cache state of a single file

Often we do not want to disable caching, because several reads of the same files are involved in a command and operations would be slowed down a lot by massive disk seeks. But after our operations the file sits in the cache needlessly, if we know we're very likely never touching it again. cachedel helps to reduce cache pollution, i.e. it keeps frequently required files relevant for desktop interaction (libs/configs/etc.) from being evicted from RAM. So we can run cachedel after each data-intensive job. Today I run commands like these:

    <compile job> && find . -type f -exec cachedel '{}' \; &> /dev/null    # no need to keep all source code and tmp files in memory
    sudo apt-get dist-upgrade && find /var/cache/apt/archives/ -type f -exec cachedel '{}' \;    # Debian/*buntu system upgrade
    dropbox status | grep -Fi idle && find ~/Dropbox -type f -exec cachedel '{}' \; &> /dev/null    # if Dropbox is idle, remove synced files from cache

https://github.com/Feh/nocache
http://packages.debian.org/search?keywords=nocache
http://packages.ubuntu.com/search?keywords=nocache
http://askubuntu.com/questions/122857


    -1
    find /path/to/dir -type f -exec cachedel '{}' \;
    michelsberg · 2013-12-12 18:22:54 8
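To verify the effect on a single file (cachestats ships with the same nocache package; its exact output format may differ between versions, and the file name here is illustrative):

    cachestats ./big-output.dat    # see how much of the file is currently cached
    cachedel ./big-output.dat
    cachestats ./big-output.dat    # should now report (close to) nothing cached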
  • Translate strings from non-German to German (and vice versa) using LEO. Put it in your ~/.bashrc.

Usage: leo words

To use a language other than English, pass an option: leo -xx words

Valid language options:

    ch - Chinese
    en - English
    es - Spanish
    fr - French
    it - Italian
    pl - Polish
    pt - Portuguese
    ru - Russian

The other language will always be German!


    4
    leo (){ l="en"; if [ "${1:0:1}" = "-" ]; then l=${1:1:2};shift;fi;Q="$*";curl -s "https://dict.leo.org/${l}de/?search=${Q// /%20}" | html2text | sed -e '0,/H.ufigste .*/d;/Weitere Aktionen/,$d;/f.r Sie .*:/,$d' | grep -aEA900 '^\*{5} .*$'; }
    michelsberg · 2013-06-24 22:35:46 29
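Example calls, once the function is sourced (the words are illustrative):

    leo screwdriver      # English <-> German (the default)
    leo -fr tournevis    # French <-> German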
  • We all know... nice -n19 for low CPU priority, and ionice -c3 for low I/O priority.

nocache can be useful in related scenarios: when we operate on very large files just a single time, e.g. a backup job, it advises the kernel that no caching is required for the involved files. That way our current file cache is not erased, and performance on other, more typical file I/O, e.g. on a desktop, does not suffer.

http://askubuntu.com/questions/122857
https://github.com/Feh/nocache
http://packages.debian.org/search?keywords=nocache
http://packages.ubuntu.com/search?keywords=nocache

To undo caching of a single file in hindsight, you can do:

    cachedel <OneSingleFile>

To check the cache status of a file, do:

    cachestats <OneSingleFile>


    21
    nocache <I/O-heavy-command>
    michelsberg · 2013-05-21 15:15:05 16
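A typical one-shot job where this pays off (paths are illustrative):

    nocache tar czf /mnt/backup/home.tar.gz /home    # back up without evicting the desktop's file cache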
  • Put it in your ~/.bashrc.

Usage:

    google word1 word2 word3...
    google '"this search gets quoted"'


    13
    function google { Q="$@"; GOOG_URL='https://www.google.de/search?tbs=li:1&q='; AGENT="Mozilla/4.0"; stream=$(curl -A "$AGENT" -skLm 10 "${GOOG_URL}${Q//\ /+}" | grep -oP '\/url\?q=.+?&amp' | sed 's|/url?q=||; s|&amp||'); echo -e "${stream//\%/\x}"; }
    michelsberg · 2013-04-05 08:04:15 9
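The last step decodes the percent-encoded result URLs: replacing every % with \x turns e.g. %3F into the escape \x3F, which echo -e then renders as '?'. The trick in isolation:

    s='hello%20world%3F'
    echo -e "${s//\%/\x}"    # prints: hello world?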
  • For slow flash memory (cheap thumb drive), ext4 is the fastest stable file system for all use cases, with no relevant exception: http://www.linuxplanet.com/linuxplanet/tutorials/7208/1

Since we can usually dispense with the benefits of a journal for this type of storage, this is a way to achieve the least awful I/O speed. Disabling the journal for an existing ext4 partition can be achieved using:

    tune2fs -O ^has_journal /dev/sdXN

Note that it is often recommended to format removable flash media with ext2, due to the lack of a journal. But ext4 has many advantages over ext2 even without the journal, with much better speed as one of the consequences. So the only use case for ext2 would be compatibility with very old software.


    2
    mke2fs -t ext4 -O ^has_journal /dev/sdXN
    michelsberg · 2013-02-15 17:24:02 6
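To double-check that the journal really is off (device name is a placeholder):

    sudo dumpe2fs -h /dev/sdXN | grep -i features    # 'has_journal' should not be listed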
  • I like it sorted... The 2> /dev/null was also needless, since a pipe passes along stdout only anyway.


    2
    whatis $(compgen -c) | sort | less
    michelsberg · 2013-02-01 09:13:56 4
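compgen -c also lists builtins, aliases and functions, and can repeat names; a minor variant (not the original) collapses duplicates before the lookup:

    whatis $(compgen -c | sort -u) | sort | less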
  • Just realized how needless the 'ls' has been... This version is also multilingual, since there is no need to grep for a special keyword ("nothing"/"nichts"/"rien"/"nada"...). And it makes use of all the available horizontal space.


    3
    whatis /usr/bin/* 2> /dev/null | less
    michelsberg · 2013-01-31 22:25:30 8
  • Just starting to fall in love with mogrify.


    2
    mogrify -crop <width>x<height>+<X-offset>+<Y-offset> *.png
    michelsberg · 2013-01-24 11:42:48 6
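A concrete call (the numbers are illustrative; note that mogrify overwrites its input files, so work on copies):

    mogrify -crop 800x600+10+20 *.png    # 800x600 px, starting 10 px from the left, 20 px from the top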
  • mogrify can be used like convert. The difference is that mogrify overwrites files (http://www.imagemagick.org/www/mogrify.html). Of course, other source colors can be used as well.


    2
    mogrify -transparent white image*.png
    michelsberg · 2013-01-23 16:58:24 5
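If the background is only approximately white (JPEG artifacts etc.), a fuzz factor helps; the 10% here is a value to tune, not a recommendation:

    mogrify -fuzz 10% -transparent white image*.png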
  • Usage: up N

I did not like two things in the submitted commands and fixed them here:

1) If I do cd - afterwards, I want to go back to the directory I was in before.
2) If I call up without an argument, I expect to go up one level.

It is sad that I need eval (at least in bash), but I think it's safe here. eval is required because, in bash, brace expansion happens before variable substitution, see http://rosettacode.org/wiki/Repeat_a_string#Using_printf


    6
    function up { cd $(eval printf '../'%.0s {1..$1}) && pwd; }
    michelsberg · 2013-01-21 12:57:45 7
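Why eval is needed, demonstrated outside the function:

    N=3
    printf '../'%.0s {1..$N}          # prints ../ once: brace expansion runs before $N is substituted
    eval printf "'../'%.0s" "{1..$N}" # prints ../../../ as intended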
  • Substitute for #11720. Can probably be even shorter and easier.


    1
    ls -l /dev/disk/by-id/ | grep '/sda$' | grep -o 'ata[^ ]*'
    michelsberg · 2013-01-16 17:28:11 7
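A broader variant that maps every ATA disk to its kernel device (a sketch; by-id names differ for NVMe and USB devices):

    for d in /dev/disk/by-id/ata-*; do printf '%s -> %s\n' "${d##*/}" "$(readlink -f "$d")"; done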
  • It is often recommended to enclose capital letters in a BibTeX file in braces, so the letters will not be transformed to lower case when imported from LaTeX. This is an attempt to apply this rule to a BibTeX database file.

DO NOT USE sed '...' input.bib > input.bib as it will empty the file!

How it works:

    /^\s*[^@%]/
Apply the search-and-replace rule to lines that start (^) with zero or more white spaces (\s*), followed by any character ([...]) that is *NOT* a "@" or a "%" ([^@%]).

    s=<some stuff>=<other stuff>=g
Search (s) for some stuff and replace it by other stuff. Do that globally (g) for all matches in each processed line.

    \([A-Z][A-Z]*\)\([^}A-Z]\|},$\)
Matches at least one uppercase letter ([A-Z][A-Z]*), followed by a character that EITHER is not a "}" and not a capital letter ([^}A-Z]) OR (\|) actually IS a "}", which is followed by a "," at the end of the line ($). Putting regular expressions in escaped parentheses (\( and \), respectively) allows us to dereference the matched string later.

    {\1}\2
Replace the matched string by "{", followed by part 1 of the matched string (\1), followed by "}", followed by the second part of the matched string (\2).

I tried this with GNU sed only, version 4.2.1.


    1
    sed '/^\s*[^@%]/s=\([A-Z][A-Z]*\)\([^}A-Z]\|},$\)={\1}\2=g' literature.bib > output.bib
    michelsberg · 2013-01-15 22:24:17 10
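A tiny before/after demonstration (the input line is made up):

    echo 'title = {The DNA of CP violation},' | sed '/^\s*[^@%]/s=\([A-Z][A-Z]*\)\([^}A-Z]\|},$\)={\1}\2=g'
    # -> title = {{T}he {DNA} of {CP} violation},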
  • top accepts a comma-separated list of PIDs.


    15
    top -p $(pgrep -d , foo)
    michelsberg · 2012-06-27 20:59:09 6
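What the substitution produces for several matches (PIDs are illustrative):

    pgrep -d , foo    # -> 11427,12576,12577
    top -p 11427,12576,12577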
  • pgrep foo may return several PIDs for processes foobar, footy01 etc., like this:

    11427
    12576
    12577

sed puts "-p " in front of each and we pass the list to top:

    top -p 11427 -p 12576 -p 12577


    5
    top $(pgrep foo | sed 's|^|-p |g')
    michelsberg · 2012-06-14 15:13:00 3
  • [ 2000 -ge "$(free -m | awk '/buffers.cache:/ {print $4}')" ]
returns true if 2000 MB or less of RAM are available, so adjust this number to your needs.

[ $(echo "$(uptime | awk '{print $10}' | sed -e 's/,$//' -e 's/,/./') >= $(grep -c ^processor /proc/cpuinfo)" | bc) -eq 1 ]
returns true if the current machine load is at least equal to the number of CPUs.

If either of the tests returns true, we wait 10 seconds and check again. If both tests return false, i.e. more than 2 GB are available and the machine load falls below the number of CPUs, we start our command and save its output in a text file.

The ( ( ... ) & ) construct lets the command run in the background even if we log out. See http://www.commandlinefu.com/commands/view/3115/ .


    4
    ( ( while [ 2000 -ge "$(free -m | awk '/buffers.cache:/ {print $4}')" ] || [ $(echo "$(uptime | awk '{print $10}' | sed -e 's/,$//' -e 's/,/./') >= $(grep -c ^processor /proc/cpuinfo)" | bc) -eq 1 ]; do sleep 10; done; my-command > output.txt ) & )
    michelsberg · 2010-07-13 09:12:11 3
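Note that the awk pattern matches the '-/+ buffers/cache:' line of older procps versions. Newer free reports an 'available' column instead, so the memory test would become something like this (the column position may vary):

    [ 2000 -ge "$(free -m | awk '/^Mem:/ {print $7}')" ]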
  • When you remotely log in like "ssh -X userA@host" and become a different user with "su userB", X forwarding will not work anymore, since /home/userB/.Xauthority does not exist. This will use userA's information stored in .Xauthority for userB to enable X forwarding. See http://prefetch.net/blog/index.php/2008/04/05/respect-my-xauthority/ for details.


    1
    su username -c "xauth add ${HOSTNAME}/unix:${DISPLAY//[a-zA-Z:_-]/} $(xauth list | grep -o '[a-zA-Z0-9_-]*\ *[0-9a-zA-Z]*$'); bash"
    michelsberg · 2010-04-02 10:08:25 5
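For reference, the cookie entry that the grep extracts from xauth list looks roughly like this (the hex value is made up):

    myhost/unix:10  MIT-MAGIC-COOKIE-1  0123456789abcdef0123456789abcdef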
  • no loop, only one call of grep, scrollable ("less is more", more or less...)


    12
    ls /usr/bin | xargs whatis | grep -v nothing | less
    michelsberg · 2010-01-26 12:59:47 31
  • Route output to notify-send to show nice messages on the desktop, e.g. the title and artist of the current radio stream.


    2
    echo 'Desktop SPAM!!!' | while read SPAM_OUT; do notify-send "$SPAM_OUT"; done
    michelsberg · 2009-12-31 15:38:35 3
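Any line-producing command can feed the loop, e.g. (illustrative; assumes an mpd setup with the mpc client):

    mpc current | while read SPAM_OUT; do notify-send "$SPAM_OUT"; done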
