Commands by bhbmaster (7)

  • Run the command below from Cygwin on Windows. On the Windows side: install Cygwin, copy WMR (Windows Memory Reader 1.0), the memory diagnostic, into the cygwin\bin folder, and also install Cygwin's netcat and ssh (OpenSSH); I recommend installing apt-cyg and running … On the Linux side: have an SSH server listening. Simplest form, from Windows: wmr - | ssh [email protected] "cat - > /tmp/FileToSave.dd" To extract information from the memory dump afterwards: apt-get install foremost; foremost -t all -T -i /forensics/T430-8gb-RAM1.dd For more information: http://www.kossboss.com/memdump-foremost (One way to fill in $SIZEOFMEM is sketched after this entry.)
    wmr - | pv -s $SIZEOFMEM | ssh -p 40004 -c arcfour,blowfish-cbc -C [email protected] "cat - > /forensics/T430-8gb-RAM1.dd"
    bhbmaster · 2013-05-31 00:04:19
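    The $SIZEOFMEM variable only feeds pv's size estimate for the progress bar. One way to fill it in from the Cygwin shell, as a sketch only (it assumes the Windows wmic tool is on the PATH; the tr strips the NUL and CR bytes wmic emits when piped):
    SIZEOFMEM=$(wmic ComputerSystem get TotalPhysicalMemory /value | tr -d '\0\r' | grep -o '[0-9]\+')
    wmr - | pv -s $SIZEOFMEM | ssh -p 40004 -c arcfour,blowfish-cbc -C [email protected] "cat - > /forensics/T430-8gb-RAM1.dd"
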
  • Most encoding commands require the jpegs to be named in a particular format; this one doesn't, it simply follows alphabetical order. That is the same order you see with "ls -lisah" from top to bottom: the top file becomes the first frame, the bottom file the last. This goes perfectly with a webcam timelapse (a capture-loop sketch follows this entry); I have just the script for it: http://www.kossboss.com/linux---app-script---timelapse---capush
    mencoder mf://*.jpg -mf fps=50:type=jpg -ovc raw -oac copy -o out50fps.avi
    bhbmaster · 2013-05-30 07:49:36
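    A minimal capture loop that produces frames in the alphabetical order the encoder expects (a sketch only; it assumes fswebcam is installed, whereas the linked capush script is the more complete version):
    # one frame per minute; timestamped names sort alphabetically, so mencoder plays them in order
    while true; do fswebcam -r 1280x720 "frame-$(date +%Y%m%d-%H%M%S).jpg"; sleep 60; done
    mencoder mf://*.jpg -mf fps=50:type=jpg -ovc raw -oac copy -o out50fps.avi
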
  • NOTE: when opening the resulting file you might need to strip the very first line (with Notepad++, for example), as it is a stray header. This is useful when the local machine where you need to run the packet capture with tcpdump doesn't have enough room to save the file, whereas your remote host does. You are at PC1 doing a tcpdump of PC1's eth0 interface, and the output is saved on PC2 (forge.remotehost.com in this example) as the gzipped file /tmp/eth0.pcap.gz. A quick way to read the capture back is sketched after this entry. More info at: http://www.kossboss.com/linuxtcpdump1
    tcpdump -i eth0 -w - | ssh forge.remotehost.com -c arcfour,blowfish-cbc -C -p 50005 "cat - | gzip > /tmp/eth0.pcap.gz"
    bhbmaster · 2013-05-30 07:41:22
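    To read the capture back later on the remote host, something like this works (a small usage sketch):
    zcat /tmp/eth0.pcap.gz > /tmp/eth0.pcap
    tcpdump -nn -r /tmp/eth0.pcap | head     # -nn skips name resolution for a quick look
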
  • This is useful when the local machine where you need to run the packet capture with tcpdump doesn't have enough room to save the file, whereas your remote host does. You are at PC1 doing a tcpdump of PC1's eth0 interface, and the output is saved on PC2 (savelocation.com in this example) as /tmp/eth0.pcap, uncompressed this time. A variant that keeps the ssh session's own traffic out of the capture follows this entry. More info at: http://www.kossboss.com/linuxtcpdump1
    tcpdump -i eth0 -w - | ssh savelocation.com -c arcfour,blowfish-cbc -C -p 50005 "cat - > /tmp/eth0.pcap"
    bhbmaster · 2013-05-30 07:33:48
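    If eth0 is also the interface carrying the ssh session, the transfer traffic itself gets captured and snowballs. A hedged variant that filters it out (assuming the session really does run on port 50005 as above):
    tcpdump -i eth0 -w - not port 50005 | ssh savelocation.com -c arcfour,blowfish-cbc -C -p 50005 "cat - > /tmp/eth0.pcap"
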
  • NOTE: while this command runs, any questions it asks may be mixed in with the scrolling text of pv's progress bar. Just keep typing as if it were not there (close your eyes if it helps): there may be a yes/no host-key question, so type "yes" and ENTER, and it will also ask for the password, so type it and press ENTER. One way to get the prompts out of the way beforehand is sketched after this entry. I talk a lot more about this, and about many other variations of this command, on my site: http://www.kossboss.com/linuxtarpvncssh
    cd /srcfolder; tar -czf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -c arcfour,blowfish-cbc -p 50005 [email protected] "tar -xzvf - -C /dstfolder"
    bhbmaster · 2013-05-30 07:21:06
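    One way to avoid typing blind, as a sketch: open a throwaway connection first so the host-key question is answered (and, if key-based authentication is set up, no password prompt appears at all) before pv starts drawing its progress bar.
    # answer the host-key prompt once, before any progress output exists
    ssh -p 50005 [email protected] true
    cd /srcfolder; tar -czf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -c arcfour,blowfish-cbc -p 50005 [email protected] "tar -xzvf - -C /dstfolder"
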
  • Here filein is the source file, destination.com is the SSH server the file is copied to, -c arcfour,blowfish-cbc selects the fastest encryption ciphers, -C compresses the data on the wire and decompresses it as it comes off the line (which can speed up the transfer in some cases), and the trailing "cat - > /tmp/fileout" is how the file gets saved on the far end. A variant with a progress bar follows this entry. I talk more about it on my site, where there is more room: http://www.kossboss.com/linuxtarpvncssh and http://www.kossboss.com/linux---transfer-1-file-with-ssh
    cat filein | ssh destination.com -c arcfour,blowfish-cbc -C -p 50005 "cat - > /tmp/fileout"
    bhbmaster · 2013-05-30 07:18:46
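    A small variant: replace cat with pv on the sending side to get a progress bar and ETA (pv can work out the size of a regular file by itself):
    pv filein | ssh destination.com -c arcfour,blowfish-cbc -C -p 50005 "cat - > /tmp/fileout"
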
  • Run the above at the destination, i.e. the server. At the source (the client) run: tar -cf - /srcfolder | pv | nc www.home.com 50002 for a plain throughput display, or, if you want an ETA and percentage, tell pv the total size first: tar -cf - /srcfolder | pv -s `du -sb /srcfolder | awk '{print $1}'` | nc www.home.com 50002 (a compressed variant of the pair follows this entry). I have this in a lot better detail, with more room to talk about it, on my site: http://www.kossboss.com/linuxtarpvncssh
    while true; do nc -l -p 50002 | pv | tar -xf -; done
    bhbmaster · 2013-05-30 07:17:23
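    On a slow link the same pair can compress the stream in flight; a hedged variant of the two commands above, with pv on the client measuring the uncompressed stream so the ETA stays accurate:
    # destination / server: start this loop first, decompress before unpacking
    while true; do nc -l -p 50002 | gunzip | pv | tar -xf -; done
    # source / client: compress after pv
    tar -cf - /srcfolder | pv -s `du -sb /srcfolder | awk '{print $1}'` | gzip | nc www.home.com 50002
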

Check These Out

Grep a process list while avoiding the grep itself
A trick to avoid the usual form: grep process | grep -v grep
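One common form of the trick (sshd is just a stand-in for the process you are after); the character class stops grep's own command line from matching itself, so no second grep -v is needed:
  ps aux | grep '[s]shd'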

Restart openssh-server on your Synology NAS from the command line.
The correct way to restart openssh-server on your Synology NAS.

Suspend to RAM
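The command itself is not reproduced in this listing; two common ways to suspend to RAM on Linux:
  systemctl suspend
  echo mem | sudo tee /sys/power/state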

Print every Nth line
Sometimes commands give you too much feedback; perhaps 1/100th might be enough. If so, every() is for you: my_verbose_command | every 100 will print every 100th line of output, specifically lines 100, 200, 300, and so on. If you use a negative argument it prints the *first* line of each block instead: my_verbose_command | every -100 prints lines 1, 101, 201, 301, etc. The function wraps up this useful sed snippet: sed -n '0~100p'. The -n tells sed not to print anything by default; the address 0~100 means "starting at line 0, then every hundred lines", and p prints the matching line. There is also some bash magic to test whether the number is negative: ${N:0:1} is character 0, length 1, of variable N; if it *is* negative, ${N:1} (character 1 onwards) strips off the leading minus sign.
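Pieced together from that description, a sketch of what such an every() function could look like (bash and GNU sed assumed):
  every() {
      N=$1
      if [ "${N:0:1}" = "-" ]; then
          sed -n "1~${N:1}p"    # negative: first line of each block (1, N+1, 2N+1, ...)
      else
          sed -n "0~${N}p"      # positive: every Nth line (N, 2N, 3N, ...)
      fi
  }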

Which processes are listening on a specific port (e.g. port 80)
swap out "80" for your port of interest. Can use port number or named ports e.g. "http"

Remove duplicate rows of an unsorted file based on a single column
$F[0] filters on the first column, $F[1] on the second, and so on.
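A sketch matching the $F[0]/$F[1] notation above (perl's autosplit array), keeping the first occurrence of each value in column 1 while preserving the original order:
  perl -ane 'print unless $seen{$F[0]}++' file.txt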

Push your present working directory to a stack that you can pop later
If you are a Bash user and you are in a directory but need to go elsewhere for a while without losing where you were, use pushd instead of cd:
  cd /home/complicated/path/.I/dont/want/to/forget
  pushd /tmp
  cd thing/in/tmp
  popd    (returns you to /home/complicated/path/.I/dont/want/to/forget)

Grep for regular expression globally, list files and positions.
Grep for an expression globally, list files and positions. "Hirn" is a nice German crib meaning "brain". :-) Afterwards you can jump straight to the line you want, e.g. with "vi ./p_common/common_main.pbt +1550".
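The exact command is not shown here; grep -rn gives precisely that file-and-line listing (a sketch):
  grep -rn 'Hirn' .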

A command to copy MySQL tables from a remote host to the current host via ssh.
This version compresses the data for transport.
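A sketch of how such a copy might look (hostname, database and table names are placeholders, and credentials are assumed to live in ~/.my.cnf on both ends):
  ssh user@remotehost "mysqldump dbname tablename | gzip" | gunzip | mysql dbname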

Generate a random MAC address
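One common way to generate one (a sketch; it does not force the locally-administered or unicast bits):
  openssl rand -hex 6 | sed -e 's/\(..\)/\1:/g' -e 's/:$//'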

