Commands by breign (2)

What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

Check These Out

display a smiling smiley if the command succeeded and a sad smiley if the command failed
You could save the code between if and fi to a shell script named smiley.sh that takes the command's exit status as its first argument, and then run smiley.sh after a command to see whether it succeeded. A bit needless, but who cares ;)
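The command itself isn't reproduced in this listing; a minimal sketch of what such a smiley.sh could look like (the exit-status handling here is an assumption, not necessarily the original):

#!/bin/sh
# smiley.sh -- print a smiley according to an exit status passed as $1
if [ "${1:-1}" -eq 0 ]; then
    echo ':)'    # success
else
    echo ':('    # failure
fi

Used as, e.g., $ somecommand; smiley.sh $? (where somecommand is whatever you just ran).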

Find class in jar
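No description accompanies this one and the command isn't shown; one common way to do it (the class name Foo.class and the use of find/unzip are placeholders and assumptions, not the original command):

# Search every jar under the current directory for a given class file
# ("Foo.class" is a placeholder) and print the jars that contain it.
$ find . -name '*.jar' | while read -r jar; do
      unzip -l "$jar" | grep -q 'Foo\.class' && echo "$jar"
  done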

Which processes are listening on a specific port (e.g. port 80)
Swap out "80" for your port of interest. You can use a port number or a named port, e.g. "http".
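The command itself isn't shown in this scrape; common equivalents (lsof and ss shown here as assumptions, not necessarily the original) are:

$ sudo lsof -i :80             # accepts named ports too, e.g. lsof -i :http
$ sudo ss -ltnp | grep ':80 '  # alternative: TCP listeners via ss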

PDF simplex to duplex merge
Joins two PDF documents coming from a simplex document-feed scanner into a single duplex-ordered file. Needs pdftk >1.44 with the shuffle operation.
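The command isn't reproduced here; a sketch of the usual pdftk shuffle invocation (odd.pdf/even.pdf are placeholder names, and it assumes the back sides came out of the scanner in reverse order):

# A = front sides in scan order, B = back sides scanned last-to-first;
# "Bend-1" reverses B and shuffle interleaves them as A1 B1 A2 B2 ...
$ pdftk A=odd.pdf B=even.pdf shuffle A Bend-1 output duplex.pdf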

Fastest segmented parallel sync of a remote directory over ssh
Mirror a remote directory using some tricks to maximize network speed.

lftp:: coolest file transfer tool ever
  -u: username and password (pwd is merely a placeholder if you have ~/.ssh/id_rsa)
  -e: execute internal lftp commands
  set sftp:connect-program: use some specific command instead of plain ssh

ssh::
  -a -x -T: disable useless things
  -c arcfour: use the most efficient cipher specification
  -o Compression=no: disable compression to save CPU

mirror: copy the remote dir subtree to a local dir
  -v: be verbose (cool progress bar and speed meter, one for each file in parallel)
  -c: continue interrupted file transfers if possible
  --loop: repeat mirror until no differences are found
  --use-pget-n=3: transfer each file with 3 independent parallel TCP connections
  -P 2: transfer 2 files in parallel (totalling 6 TCP connections)

sftp://remotehost:22: use the sftp protocol on port 22 (you can give any other port if appropriate)

You can play with the values for --use-pget-n and/or -P to achieve maximum speed on the particular network. If the files are compressible, removing "-o Compression=no" can be beneficial. Better to create an alias for the command.
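Putting the options described above together, the full command looks roughly like this (user, pwd, the directory paths and remotehost are placeholders):

$ lftp -u user,pwd \
    -e "set sftp:connect-program 'ssh -a -x -T -c arcfour -o Compression=no'; mirror -v -c --loop --use-pget-n=3 -P 2 /remote/dir/ /local/dir/; quit" \
    sftp://remotehost:22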

Save your open windows to a file so they can be opened after you restart
This will save your open windows to a file (~/.windows). To start those applications:

$ cat ~/.windows | while read line; do $line & done

Should work on any EWMH/NetWM compatible X Window Manager. If you use DWM or another Window Manager not using EWMH or NetWM, try this:

$ xwininfo -root -children | grep '^ ' | grep -v children | grep -v '' | sed -n 's/^ *\(0x[0-9a-f]*\) .*/\1/p' | uniq | while read line; do xprop -id $line _NET_WM_PID | sed -n 's/.* = \([0-9]*\)$/\1/p'; done | uniq -u | grep -v '^$' | while read line; do ps -o cmd= $line; done > ~/.windows
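The saving command itself doesn't appear legibly in this scrape; on an EWMH/NetWM window manager a rough equivalent using wmctrl (an assumption, not necessarily the original) would be:

# wmctrl -l -p lists managed windows with their owning PID in column 3;
# record each owning process's command line into ~/.windows.
$ wmctrl -l -p | awk '{print $3}' | sort -u | while read -r pid; do
      ps -o cmd= -p "$pid"
  done > ~/.windows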

Find ulimit values of currently running process
When dealing with system resource limits like the max number of processes and open files per user, it can be hard to tell exactly what's happening. The /etc/security/limits.conf file defines the ceiling for the values, but not what they currently are, while $ ulimit -a will show you the current values for your shell, and you can set them for new logins in /etc/profile and/or ~/.bashrc with a command like:

$ ulimit -S -n 100000 >/dev/null 2>&1

But with the variability in when those files get read (login vs. any shell startup, interactive vs. non-interactive), it can be difficult to know for sure which values apply to processes that are already running, like database or app servers. Just find the PID via "ps aux | grep programname", then look at that PID's "limits" file in /proc. Then you'll know for sure what actually applies to that process.
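For example (mysqld is just a placeholder process name):

$ ps aux | grep mysqld                 # note the PID, say 1234
$ cat /proc/1234/limits                # the soft/hard limits actually in force
$ cat /proc/$(pgrep -o mysqld)/limits  # or in one step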

Convert seconds to [DD:][HH:]MM:SS
Converts any number of seconds into days, hours, minutes and seconds.

sec2dhms() {
    declare -i SS="$1"
    D=$(( SS / 86400 ))
    H=$(( SS % 86400 / 3600 ))
    M=$(( SS % 3600 / 60 ))
    S=$(( SS % 60 ))
    [ "$D" -gt 0 ] && echo -n "${D}:"
    [ "$H" -gt 0 ] && printf "%02g:" "$H"
    printf "%02g:%02g\n" "$M" "$S"
}
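For example, once the function has been defined in the shell:

$ sec2dhms 3661        # 1 hour, 1 minute, 1 second
01:01:01
$ sec2dhms 100000      # 1 day, 3 hours, 46 minutes, 40 seconds
1:03:46:40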


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
