
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using ssh - 295 results
ssh -o UserKnownHostsFile=/dev/null root@192.168.1.1
2010-04-08 14:55:58
User: oernii2
Functions: ssh
Tags: ssh
8

You may also create an alias, which I did ;-)

alias sshu="ssh -o UserKnownHostsFile=/dev/null "
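
If you also want to suppress the host-key confirmation prompt (at the cost of man-in-the-middle protection, so reserve this for throwaway test boxes), StrictHostKeyChecking=no pairs naturally with this:

alias sshu="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "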

ssh -t server 'cd /etc && $SHELL'
2010-04-02 19:34:09
User: dooblem
Functions: ssh
Tags: ssh
4

Useful for creating an alias that drops you right into the directory you want:

alias server-etc="ssh -t server 'cd /etc && $SHELL'"
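
To generalise this to arbitrary hosts and directories, a small shell function works too. A minimal sketch (the name sshcd is arbitrary); $SHELL is escaped so it expands on the remote side rather than locally:

sshcd() { ssh -t "$1" "cd ${2:-/etc} && \$SHELL"; }

sshcd server /var/log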

ssh -q user@server
2010-03-24 12:02:55
User: KoRoVaMiLK
Functions: ssh
0

This allows you to skip the banner (usually /etc/issue.net) on ssh connections.

Useful for keeping banners out of the mail generated by rsync cron jobs.
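
For example, in a cron job the quiet flag can be passed through rsync's -e option (the paths here are hypothetical):

rsync -az -e "ssh -q" user@server:/var/backups/ /local/backups/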

ssh user@host "cat /path/to/backup/backupfile.tar.bz2" |tar jpxf -
2010-03-24 01:35:28
User: mack
Functions: ssh tar
Tags: ssh tar
8

Here's how to restore the remote backup over ssh.

tar jcpf - [sourceDirs] |ssh user@host "cat > /path/to/backup/backupfile.tar.bz2"
2010-03-24 01:29:25
User: mack
Functions: ssh tar
Tags: ssh tar
13

Execute it from the source host, where the files you wish to back up reside. With the minus '-', tar delivers the compressed output to standard output, which is piped through the ssh session to the remote host. The backup host receives the stream on standard input and writes it to /path/to/backup/backupfile.tar.bz2.
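
Because the backup is just a byte stream, you can also sanity-check it in place without copying it back; for example (assuming the same paths as above), list the first few archive entries:

ssh user@host "cat /path/to/backup/backupfile.tar.bz2" | tar jtf - | head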

for I in $(mysql -e 'show databases' -u root --password=root -s --skip-column-names); do mysqldump -u root --password=root $I | gzip -c | ssh user@server.com "cat > /remote/$I.sql.gz"; done
2010-03-07 15:03:12
User: juliend2
Functions: gzip ssh
6

It grabs all the database names visible to the given MySQL user, dumps each one, and gzips it to a remote host via SSH.
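
You will probably want to skip the system schemas; a sketch that slots a grep filter into the same loop (credentials as above):

for I in $(mysql -e 'show databases' -u root --password=root -s --skip-column-names | grep -Ev '^(information_schema|performance_schema|mysql)$'); do mysqldump -u root --password=root $I | gzip -c | ssh user@server.com "cat > /remote/$I.sql.gz"; done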

ssh user@host 'tar cvzf - -C /path/to/src .' | tar xzf -
2010-03-02 14:15:17
Functions: ssh tar
0

Creates a tarball on stdout, which is piped over ssh to a tar reading from stdin.

for x in $(grep server /tmp/error.log | awk '{print $3}'); do t=$(date "+%d-%m-%H%M%S"); ssh -q -t admin@$x.domain.com 'pstree -auln' > ~/snapshots/$x-$t.out; done
2010-02-26 19:50:41
User: jrparris
Functions: awk ssh
0

Required:

1) Systems that send out alert emails when errors, database locks, etc. occur.

2) a system that:

a) has the ability to receive emails, and has procmail installed.

b) has ssh keys set up to machines that would send out alerts.

When procmail receives an alert email, you can issue a command like this one (the greps and awks may vary - you're isolating the remote hostname that had the issue); a sketch of the procmail recipe follows below.

This will pull process trees from the alerting machines, which is always useful in later analysis.
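
A sketch of the procmail side, assuming the loop above is saved in a wrapper script (the subject pattern and script path are hypothetical):

:0
* ^Subject:.*ALERT
| /usr/local/bin/pull-pstrees.sh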

arecord -f dat | ssh -C user@host aplay -f dat
#INSIDE-host# ssh -f -N -R 8888:localhost:22 user@somedomain.org
#OUTSIDE-host# ssh user@localhost -p 8888
2010-02-14 21:43:44
User: Abiden
Functions: ssh
3

Both hosts must be running sshd, and the outside host must also have a port forwarded to its port 22.
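
If the link is flaky, autossh (assuming it is installed) can keep the reverse tunnel alive; a sketch using ssh keepalives instead of autossh's monitor port:

autossh -M 0 -f -N -R 8888:localhost:22 -o ServerAliveInterval=30 -o ExitOnForwardFailure=yes user@somedomain.org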

ssh 10.0.0.4 "cat /tmp/backup.sql | gzip -c1" | gunzip -c > backup.sql
2010-02-14 19:09:07
User: kennethjor
Functions: gunzip ssh
3

I've kept the gzip compression at a low level, but depending on the CPU power available on the source machine you may want to increase it. However, SQL compresses really well, and I found even with -1 I was able to transfer 40 MiB/s over a 100 Mbps wire, which was good enough for me.
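
To watch the transfer rate while tuning the compression level, pv can be dropped into the middle of the pipe (assuming pv is installed):

ssh 10.0.0.4 "cat /tmp/backup.sql | gzip -c1" | pv | gunzip -c > backup.sql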

sshpass -p [password] rsync -av -e ssh [user]@[ip-address]:/dir/to/copy/ /destination/dir
2010-01-31 15:21:14
User: 0disse0
Functions: rsync ssh
0

Check the ssh_config file and set the option:

StrictHostKeyChecking no
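
If you would rather not change ssh_config globally, the same option can be passed for this invocation only:

sshpass -p [password] rsync -av -e "ssh -o StrictHostKeyChecking=no" [user]@[ip-address]:/dir/to/copy/ /destination/dir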

ssh-keygen -T moduli-2048 -f /tmp/moduli-2048.candidates
2010-01-29 19:35:21
User: eastwind
Functions: ssh ssh-keygen
1

This command tests the candidate moduli generated by ssh-keygen -G /tmp/moduli-2048.candidates -b 2048. The test can take a long time depending on your CPU power, roughly 5 to 30 minutes.
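
The screened moduli are written to the file given to -T (here moduli-2048); assuming the stock OpenSSH location, you would then install them with something like:

sudo cp moduli-2048 /etc/ssh/moduli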

ssh-keygen -G /tmp/moduli-2048.candidates -b 2048
2010-01-29 19:33:23
User: eastwind
Functions: ssh ssh-keygen
0

If you have lost the moduli file on the OpenSSH server side, generate a new candidate file with this command, then test which of the generated numbers are usable with ssh-keygen -T moduli-2048 -f /tmp/moduli-2048.candidates

seq 1 5 | parallel ssh {}.cluster.net uptime
2010-01-28 08:18:50
Functions: seq ssh
Tags: parallel
2

Parallel is from https://savannah.nongnu.org/projects/parallel/

Other examples would be:

(echo foss.org.my; echo www.debian.org; echo www.freenetproject.org) | parallel traceroute

seq -f %04g 0 9999 | parallel -X rm pict{}.jpg

file='path to file'; tar -cf - "$file" | pv -s $(du -sb "$file" | awk '{print $1}') | gzip -c | ssh -c blowfish user@host tar -zxf - -C /opt/games
2010-01-19 16:02:45
User: starchox
Functions: awk du file gzip ssh tar
3

You set the file/dir name in the transfer variable and the destination path at the end. The command uses pipe viewer (pv) to show progress, compresses the output for transfer, and switches the ssh cipher. It supports dir names with spaces.

Merges ideas and comments from http://www.commandlinefu.com/commands/view/4379/copy-working-directory-and-compress-it-on-the-fly-while-showing-progress and http://www.commandlinefu.com/commands/view/3177/move-a-lot-of-files-over-ssh

pv /dev/zero|ssh $host 'cat > /dev/null'
2010-01-06 20:40:51
User: opertinicy
Functions: ssh
Tags: ssh pv /dev/null
11

connects to host via ssh and displays the live transfer speed, directing all transferred data to /dev/null

needs pv installed

Debian: 'apt-get install pv'

Fedora: 'yum install pv' (may need the 'extras' repository enabled)
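
To measure throughput in the opposite direction (remote to local), flip the pipeline around:

ssh $host 'cat /dev/zero' | pv > /dev/null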

mkfifo /tmp/fifo; ssh-keygen; ssh-copy-id root@remotehost; sudo ssh root@remotehost "tshark -i eth1 -f 'not tcp port 22' -w -" > /tmp/fifo & sudo wireshark -k -i /tmp/fifo
sudo ssh -Y remoteuser@remotehost sudo wireshark
2010-01-05 14:35:20
User: Code_Bleu
Functions: ssh sudo
-8

This allows you to display the wireshark program running on the remote PC on your local PC.
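
For -Y to work, X11 forwarding must be enabled on the remote side; check that /etc/ssh/sshd_config there contains:

X11Forwarding yes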

yes | pv | ssh $host "cat > /dev/null"
2009-12-27 21:34:23
User: opertinicy
Functions: ssh yes
Tags: ssh yes pv
23

connects to host via ssh and displays the live transfer speed, directing all transferred data to /dev/null

needs pv installed

Debian: 'apt-get install pv'

Fedora: 'yum install pv' (may need the 'extras' repository enabled)

ssh HOST cat < LOCALFILE ">" REMOTEFILE
ssh root@server.com 'tshark -f "port !22" -w -' | wireshark -k -i -
2009-12-17 23:03:24
User: markdrago
Functions: ssh
26

This captures traffic on a remote machine with tshark, sends the raw pcap data over the ssh link, and displays it in wireshark. Hitting ctrl+C will stop the capture and unfortunately close your wireshark window. This can be worked around by passing -c # to tshark to only capture a certain # of packets, or by redirecting the data through a named pipe rather than piping directly from ssh to wireshark (see the sketch below). I recommend filtering as much as you can in the tshark command to conserve bandwidth. tshark can be replaced with tcpdump thusly:

ssh root@example.com tcpdump -w - 'port !22' | wireshark -k -i -
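
A sketch of the named-pipe workaround mentioned above: the capture can then be stopped separately (e.g. kill %1) without taking the wireshark window down with it. The fifo path is arbitrary:

mkfifo /tmp/pcap; ssh root@server.com 'tshark -f "port !22" -w -' > /tmp/pcap & wireshark -k -i /tmp/pcap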
cat ~/.ssh/id_rsa.pub | ssh <remote_host> "xargs --null echo >> ~/.ssh/authorized_keys"
2009-12-17 15:12:11
User: koushik
Functions: cat ssh
0

Well, it's just appending your public key to the remote host's authorized_keys, but doing it by hand can get messy with all the logging in and out.
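
Where the client ships it, ssh-copy-id handles the same job (and creates the remote ~/.ssh with sane permissions if needed):

ssh-copy-id <remote_host>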

ssh -4 -C -c blowfish-cbc
2009-12-15 00:30:53
User: vxbinaca
Functions: ssh
Tags: ssh
18

We force IPv4, compress the stream, and specify the cipher to be Blowfish. I suppose you could use aes256-ctr as well for the cipher spec. I'm of course leaving out things like master control sessions and such, as those may not be available in your shell, although they would speed things up as well.
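
A sketch of the master control sessions mentioned above, as a ~/.ssh/config stanza (the socket directory is an arbitrary choice and must exist beforehand):

Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h:%p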

[ $1 == "client" ] && hostname || cat $0 | ssh $1 /bin/sh -s client
2009-11-25 22:24:31
User: a8ksh4
Functions: cat hostname ssh
6

Now put more interesting stuff in the script in place of hostname: even entire functions, etc.

hosta> cat myScript.sh

#!/bin/sh

[ $1 == "client" ] && hostname || cat $0 | ssh $1 /bin/sh -s client

hosta> myScript.sh hostb

hostb

hosta>
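
A hedged variant of the same trick that reports a little more than the hostname (a sketch; like the original it assumes /bin/sh is bash-compatible):

#!/bin/sh

[ "$1" == "client" ] && { hostname; uptime; df -h; } || cat "$0" | ssh "$1" /bin/sh -s client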