Here $HOME/shots must exist and have appropriate access rights, and sitecopy must be correctly set up to upload new screen shots to the remote site. Example .sitecopyrc (for illustration purposes only):

site shots
  server ftp.example.com
  username user
  password antabakadesuka
  local /home/penpen/shots
  remote public_html/shots
  permissions ignore

The command uses scrot to create a screen shot, moves it to the screen shot directory, uploads it using sitecopy, uses xsel to copy the URL to the paste buffer (so that you can paste it with a middle click) and finally uses feh to display a preview of the screen shot. Note that $BASE stands for the base URL for the screen shots on the remote server; replace it by the actual location. In the example, http://www.example.com/~user/shots would be fitting. Assign this command to a key combination or an icon in whatever panel you use.
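The command itself is not reproduced in this entry; a minimal sketch consistent with the description (the file-naming scheme is an assumption) might be:

FILE=$(date +%Y%m%d%H%M%S).png   # assumed naming scheme
BASE=http://www.example.com/~user/shots
scrot "$FILE" && mv "$FILE" "$HOME/shots/" && sitecopy --update shots && echo -n "$BASE/$FILE" | xsel && feh "$HOME/shots/$FILE"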
Without the -dump option the header is displayed in lynx itself. You can also use w3m; the command then is
w3m -dump_head http://www.example.com/
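For comparison, the lynx invocation being described is presumably:

lynx -head -dump http://www.example.com/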
Here ^V is CTRL-V, which makes the shell insert the next control character literally. Converts a DOS file to Unix format by removing the carriage-return characters (0x0D, decimal 13) that terminate DOS lines.
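With GNU sed, an equivalent that avoids typing literal control characters is (a sketch, relying on GNU sed's \r escape):

sed 's/\r$//' dosfile.txt > unixfile.txt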
This example is taken from Cygwin running on Win7Ent-64; device names will vary by platform. Both commands resulted in identical files per the output of md5sum, and ran in the same time down to the second (2m45s), less than 100ms apart. I timed the commands with 'time', which, added before 'dd' or 'readom', gives execution times after the command completes. See 'man time' for more info... it can be found on any Unix or Linux newer than 1973. Yeah, that means everywhere.

readom is supposed to guarantee good reads, and supports flags for bypassing bad blocks where dd will either fail or hang. readom's verbosity gave more interesting output than dd. On Cygwin, my attempt with 'readom' from the first answer actually ended up reading my hard drive; both attempts got to 5GB before I killed them, seeing as that is past any CD or standard DVD.

dd: 'bs=1M' says "read 1MB into RAM from the source, then write that 1MB to the output." I also tested 10MB, which shaved the time down to 2m42s. 'if=/dev/scd0' selects Cygwin's representation of the first CD-ROM drive. 'of=./filename.iso' simply means "create filename.iso in the current directory."

readom: '-v' says "be a little noisy (verbose)"; the man page implies more verbosity with more 'v's, e.g. -vvv. dev='D:' in Cygwin explicitly specifies the D-drive. I tried other entries, like '/dev/scd0' and '2,0', but both read from my hard drive instead of the CD-ROM; I imagine my LUN-foo (2,0) was off for my system, but on Cygwin 'D:' sort of "cut to the chase" and did the job. f='./filename.iso' specifies the output file. speed=2 simply sets the speed at which the CD is read; I also tried 4, which ran the exact same 2m45s. retries=8 means try reading a block up to 8 times before giving up. This is useful for damaged media (scratches, glue lines, etc.), allowing you to automatically "get everything that can be copied" so you at least have most of the data.
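The commands themselves are not shown in this entry; reconstructed from the flags described above (treat this as a best-effort reconstruction):

time dd if=/dev/scd0 of=./filename.iso bs=1M
time readom -v dev='D:' f='./filename.iso' speed=2 retries=8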
'watch' repeatedly runs a command (default every 2 seconds; -n 1 means every second), here ':' (a shorthand for 'true'), and displays its output (here nothing) along with the date and time of the last run. I thought it obvious, but it seemingly is not: to exit, use Ctrl-C.
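The command being described is presumably:

watch -n 1 :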
An improved version of http://www.commandlinefu.com/commands/view/1772/simple-countdown-from-a-given-date that uses Perl to pretty-print the output. Note that the GNU-style '--no-title' option has been replaced by its one-letter counterpart '-t'.
Depending on the installation, only some of these man pages are installed. 12 is left out on purpose because ISO/IEC 8859-12 does not exist. To also access the man pages that are not installed, use opera (or any other browser that supports all the character sets involved) to display the online versions hosted at kernel.org:
for i in $(seq 1 11) 13 14 15 16; do opera http://www.kernel.org/doc/man-pages/online/pages/man7/iso_8859-$i.7.html; done
Use this if you can't type repeated killall commands fast enough to kill rapidly spawning processes. If a process keeps spawning copies of itself too rapidly, it can do so faster than a single killall can catch them and kill them. Retyping the command at the prompt can be too slow as well, even with command history retrieval. Chaining a few killalls on a single command line starts the next killall more quickly: the first killall will get most of the processes, except for some that were starting up in the meanwhile; the second will get most of the rest; and the third mops up.
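A hedged sketch of the chaining, with 'offender' standing in for the actual process name:

killall offender; killall offender; killall offender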
sed '$ d' foo.txt.tmp ... deletes the last line of the file. Note that as written the result goes to stdout; with GNU sed, add -i to edit the file in place.
Install with `npm install unix-permissions` (https://github.com/ehmicky/unix-permissions). Unix file permissions can take many shapes: symbolic (`ug+rw`), octal (`660`) or a list of characters (`drw-rw----`). `unix-permissions` enables using any of these (instead of being limited to a single one) with any CLI command.
An example config file is given in the sample output, along with the command-line call that uses it. The rsync daemon here is set up on the destination, which requires the 'read only = false' flag. It also runs with the uid and gid of root; change as required.
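The sample output is not reproduced here; a minimal sketch of a destination-side rsyncd.conf consistent with the description (module name and path are assumptions) could be:

uid = root
gid = root
[backup]
    path = /srv/backup
    read only = false

and a matching client call might be: rsync -av ./source/ rsync://desthost/backup/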
The general form of a while loop:

while commandt
do
    command
    command
    ...
done

commandt is executed and its exit status tested; the body runs repeatedly as long as commandt exits with status 0. An example of a for loop typed at an interactive shell (the '>' is the shell's continuation prompt, not something you type):

$ for i in 1 2 3
> do
> echo $i
> done
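For symmetry, a concrete while-loop example (a sketch, not part of the original entry) that prints the same 1 2 3:

i=1
while [ $i -le 3 ]    # test command: succeeds while i <= 3
do
    echo $i
    i=$((i+1))
done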
Shows smbstatus output refreshed every 5 seconds (for monitoring SMB access).
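Presumably something like:

watch -n 5 smbstatus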
This command greps the entire project directory for references to each file under static/images/, which is useful for cleaning your project of old static files that are no longer in use. It also ignores .svn directories for accurate counts. Replace 'static/images/' with the directory containing the files you want to search for.
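The original command is not shown here; a hedged sketch of the idea with GNU grep (paths and loop structure are placeholders) could be:

for f in static/images/*; do
    # count files that mention this image by name, skipping .svn directories
    echo "$(grep -rFl --exclude-dir=.svn "$(basename "$f")" . | wc -l) $f"
done

Files listed with a count of 0 are candidates for removal.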
Will ROT13 whatever parameter follows 'rot13', whether it is a string or a file. Additionally, it will ROT5 each digit in a number.
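A hedged sketch of such a function, assuming a tr-based implementation (the '5-90-4' range supplies the digit ROT5):

rot13() {
    if [ -r "$1" ]; then
        tr 'A-Za-z0-9' 'N-ZA-Mn-za-m5-90-4' < "$1"   # argument is a readable file
    else
        echo "$*" | tr 'A-Za-z0-9' 'N-ZA-Mn-za-m5-90-4'   # argument is a string
    fi
}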
commandlinefu.com is the place to record those command-line gems that you return to again and again. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes respectively - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):