commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
In this case it's better to use the dedicated tool.
The above command will send 4GB of data from one host to the next over the network, without consuming any unnecessary disk space on either the client or the host. This is a quick and dirty way to benchmark network speed without wasting any time or disk space.
Of course, change the byte size and count as necessary.
This command also doesn't rely on any third-party utilities, as dd, ssh, cat, /dev/zero and /dev/null are available on all major Unix-like operating systems.
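A rough sketch of that pipeline, with the hostname as a placeholder and the same 4GB figure as above:

time dd if=/dev/zero bs=1M count=4096 | ssh user@remotehost 'cat > /dev/null'

Wrapping it in time gives a crude throughput figure; adjust bs and count to change the total volume.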
Much simpler method. More portable version: ssh host -l user "`cat cmd.txt`"
I was tired of the endless quoting, unquoting, re-quoting, and escaping characters that left me with working, but barely comprehensible shell one-liners. It can be really frustrating, especially if the local and remote shells differ and have their own escaping and quoting rules. I decided to try a different approach and ended up with this.
Actually 'firefox' is a script that then launches the 'firefox-bin' executable. You need to specify the '-no-remote' option in order to launch the remote Firefox instead of your local one (this drove me crazy some time ago).
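For example, a rough way to start the remote browser over X forwarding (hostname is a placeholder):

ssh -X user@remotehost firefox -no-remote &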
Back up a big MySQL database to a remote machine over ssh. The "--skip-opt" option is needed when you can't allocate the full database in RAM.
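A sketch of what that can look like (database, user, host and target path are placeholders):

mysqldump --skip-opt -u dbuser -p bigdb | ssh user@backuphost 'cat > /backups/bigdb.sql'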
Some servers don't have ssh-copy-id; this works in those cases.
It will ask for the destination server; this can be an IP, a hostname, or user@hostname if the remote user differs from the current one.
ssh-keygen will let you know if a pubkey already exists on your system, and you can opt not to overwrite it.
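The usual hand-rolled equivalent looks roughly like this (key type and remote host are placeholders):

ssh-keygen -t rsa    # skip if you already have a key
cat ~/.ssh/id_rsa.pub | ssh user@host 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'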
First of all, you need to run this command:
X :12 vt12 >/dev/null 2>&1 &
This command will start an X session on the 12th virtual console, and it will show you a blank screen. Now press Ctrl + Alt + F7 and you will get your original screen back.
Now run the given command "xterm -display :12.0 -e ssh -X user@remotesystem &". After this, press Ctrl + Alt + F12. You will get a screen asking for the password of the remote Linux system, and after that you are done: you can open any window-based application of the remote system on your desktop.
Press Ctrl + Alt + F7 to get back to your original screen.
This is a 'nocd' alternative :)
Good if you have access to host1 and host2, but they have no access to your host (so ncat won't work) and no direct access to each other.
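One plausible way to use this, assuming the goal is to move a directory from host1 to host2 through your own machine (paths and hostnames are placeholders):

ssh user@host1 'tar cf - /srv/data' | ssh user@host2 'tar xf - -C /srv'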
You may also create an alias, which I did ;-)
alias sshu="ssh -o UserKnownHostsFile=/dev/null "
Useful to create an alias that drops you straight into the directory you want:
alias server-etc="ssh -t server 'cd /etc && $SHELL'"
This allows you to skip the banner (usually /etc/issue.net) on ssh connections.
Useful to keep the banner out of the mail generated by rsync cron jobs.
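Assuming the banner is being silenced on the client side (ssh -q / LogLevel=quiet), an rsync cron job could look like this (paths and host are placeholders):

rsync -az -e 'ssh -q' /var/backups/ user@backuphost:/srv/backups/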
Here is how to recover the remote backup over ssh.
Execute it from the source host, where the files you wish to back up reside. With the minus '-', tar delivers the compressed output to standard output and, through the ssh session, to the remote host. On the other end, the backup host receives the stream on standard input and writes it to /path/to/backup/backupfile.tar.bz2.
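A sketch of both directions, keeping the backup path from above and using placeholder source paths and hostname:

tar cjf - /path/to/files | ssh user@backuphost 'cat > /path/to/backup/backupfile.tar.bz2'
ssh user@backuphost 'cat /path/to/backup/backupfile.tar.bz2' | tar xjf -

The first line creates the backup from the source host; the second recovers it back over ssh.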
It grabs all the database names granted to $MYSQLUSER, dumps them, and gzips them to a remote host via SSH.
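A rough equivalent of that loop, with $MYSQLPASS, the remote host and the target directory as placeholders:

mysql -u "$MYSQLUSER" -p"$MYSQLPASS" -Nse 'SHOW DATABASES' | while read db; do
  mysqldump -u "$MYSQLUSER" -p"$MYSQLPASS" "$db" | gzip | ssh user@backuphost "cat > /backups/$db.sql.gz"
done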
Create a tarball on stdout, which is piped over ssh to a tar reading from stdin on the remote host.
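In its simplest form, something along these lines (source directory, host and destination are placeholders):

tar cf - /local/dir | ssh user@remotehost 'tar xf - -C /remote/dir'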
1) Systems that send out alert emails when errors, database locks, etc. occur.
2) a system that:
a) has the ability to receive emails, and has procmail installed.
b) has ssh keys set up to machines that would send out alerts.
When procmail receives an alert email, you can issue a command like this one (the greps and awks may vary - you're isolating the remote hostname that had the issue).
This will pull process trees from the alerting machines, which is always useful in later analysis.
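A heavily simplified sketch of such a setup; the subject pattern, helper path and the grep/awk that isolate the hostname are all assumptions to adapt to your own alert format. In ~/.procmailrc:

:0 b
* ^Subject:.*ALERT
| $HOME/bin/grab-pstree.sh

And $HOME/bin/grab-pstree.sh:

#!/bin/sh
# the alert body arrives on stdin; pull out the hostname, then grab a process tree over ssh
HOST=$(grep -m1 '^Host:' | awk '{print $2}')
ssh -o BatchMode=yes monitor@"$HOST" 'ps axjf' >> "$HOME/alerts/${HOST}.pstree"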
Both hosts must be running an ssh server, and the outside host must have a port forwarded to port 22.
I've kept the gzip compression at a low level, but depending on the CPU power available on the source machine you may want to increase it. However, SQL compresses really well, and I found that even with -1 I was able to transfer 40 MiB/s over a 100 Mbit/s wire, which was good enough for me.
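Presumably the pipeline looks something like this, with the compression level spelled out (database and host names are placeholders):

mysqldump bigdb | gzip -1 | ssh user@desthost 'gunzip > bigdb.sql'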
Check the ssh_config file and set the variable:
This command tests the moduli file generated by the command ssh-keygen -G /tmp/moduli-2048.candidates -b 2048. The test can take a long time depending on your CPU power, roughly 5 to 30 minutes.
If you have lost the moduli file on the OpenSSH server side, you need to generate a new one with this command, then test whether the generated numbers are usable with ssh-keygen -T moduli-2048 -f /tmp/moduli-2048.candidates.
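Putting the two steps together, using the file names mentioned above (installing the result into /etc/ssh/moduli is an assumption about your layout):

ssh-keygen -G /tmp/moduli-2048.candidates -b 2048          # generate candidate primes
ssh-keygen -T moduli-2048 -f /tmp/moduli-2048.candidates   # screen them (can take 5-30 minutes)
cp moduli-2048 /etc/ssh/moduli                             # install the screened file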