commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
When you have to manage a lot of servers, it's tedious to type ssh user@server for each connection. Now you can just type "s something" and you are connected.
You can also add a bash completion script to tab-complete the names of your servers. That will be my next tip ;)
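A minimal sketch of such a helper plus its completion (the domain, username and server names below are made up; the real command isn't shown here):

```shell
# Hypothetical "s" helper: connect to a server by short name.
# Assumes all hosts share a user and a domain suffix.
s() {
    ssh "admin@$1.example.com"
}

# Bash completion so "s <TAB>" offers your server names
# (the inventory list here is an assumption).
_s_complete() {
    COMPREPLY=( $(compgen -W "web1 web2 db1" -- "${COMP_WORDS[COMP_CWORD]}") )
}
complete -F _s_complete s
```

Drop something like this into your ~/.bashrc and adapt the names to your own hosts.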
You will need libnotify-bin for this to work:
sudo aptitude install libnotify-bin
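For example, a hedged sketch of the usual pattern (notify-send is the tool libnotify-bin provides; the long job here is a stand-in):

```shell
# Pop up a desktop notification when a long command finishes.
sleep 1    # stand-in for your real long-running command

# Skip quietly if notify-send is missing or no notification
# daemon is reachable (e.g. on a headless machine).
if command -v notify-send >/dev/null 2>&1; then
    notify-send "Job finished" "the long command is done" 2>/dev/null || true
fi
job_done=yes
```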
From time to time one forgets either their GPG key or other passphrases. This can be very problematic in most cases. But luckily there's this script. It's based on pwsafe, a Unix command-line program that manages encrypted password databases. For more info on pwsafe visit http://nsd.dyndns.org/pwsafe/.
What this script does is help you store all your passphrases for later and let you copy them to your clipboard so you can just paste them in, all with one password. Pretty neat, no?
You can find future releases of this and many more scripts at The Teachings of Master Denzuko - denzuko.wordpress.com.
With this command you can use shell variables inside sed scripts.
This is useful if the script MUST remain in an external file, otherwise you can simply use an inline -e argument to sed.
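One way to sketch the external-file case: generate the sed script through an unquoted here-doc, so the shell expands the variable into the file before sed ever runs (file names and the variable are illustrative):

```shell
NAME="world"    # the shell variable we want inside the sed script

# The unquoted EOF delimiter lets the shell expand $NAME while
# writing the external sed script.
cat > /tmp/demo.sed <<EOF
s/placeholder/$NAME/
EOF

printf 'hello placeholder\n' > /tmp/sed_demo_in.txt
sed -f /tmp/demo.sed /tmp/sed_demo_in.txt > /tmp/sed_demo_out.txt
cat /tmp/sed_demo_out.txt    # hello world
```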
bash.org is a collection of funny quotes from IRC.
WARNING: some of the quotes contain "adult" jokes... may be embarrassing if your boss sees them...
Thanks to Chen for the idea and initial version!
This script downloads a page with random quotes, filters the HTML to retrieve just the one-liner quotes, and outputs the first one.
Just barely under the required 255 chars :)
Improvement:
You can replace the head -1 at the end with:
awk 'length($0)>0 {print $0 "\n%"}' > bash_quotes.txt
(print is safer than the printf( $0 ... ) form, which would misread any % inside a quote as a format specifier)
which will separate the quotes with a "%" line and place them in the file.
and then:
strfile bash_quotes.txt
which will make the file ready for the fortune command
and then you can:
fortune bash_quotes.txt
which will give you a random quote from those in the downloaded file.
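The steps above can be sketched end to end (the quotes are faked here instead of downloaded; strfile ships with the fortune package):

```shell
# Separate each non-empty quote line with a "%" marker, as
# fortune's strfile format expects.
awk 'length($0)>0 {print $0 "\n%"}' > /tmp/bash_quotes.txt <<'EOF'
<first quote> lol
<second quote> heh
EOF

# strfile/fortune may not be installed; skip quietly if so.
if command -v strfile >/dev/null 2>&1; then
    strfile /tmp/bash_quotes.txt     # writes /tmp/bash_quotes.txt.dat
    fortune /tmp/bash_quotes.txt     # prints one random quote
fi
```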
I download the file periodically and then use fortune in .bashrc, so I see a funny quote every time I open a terminal.
This is a bit-for-bit copy, so if you have a 500GB hard disk it will take a long time even if you have Gigabit Ethernet.
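The original command isn't shown, but a common pattern for this kind of copy is dd piped through netcat; the network half is sketched in comments (host, port and device names are assumptions), with a local demo of the bit-for-bit part:

```shell
# Hypothetical network form (run as root; destroys the target disk!):
#   receiver:  nc -l -p 9000 | dd of=/dev/sdb bs=64k
#   sender:    dd if=/dev/sda bs=64k | nc receiver_host 9000

# Local demo: dd copies bytes exactly, which cmp can verify.
printf 'raw bytes' > /tmp/dd_src
dd if=/tmp/dd_src of=/tmp/dd_dst bs=4k 2>/dev/null
cmp -s /tmp/dd_src /tmp/dd_dst && copied=identical || copied=different
```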
This opens up nautilus in the current directory, which is useful for some quick file management that isn't efficiently done from a terminal.
Finds all corrupted JPEGs in the current directory, locates a file with the same name in a source directory hierarchy, and copies it over the corrupted file.
Convenient to run on a large batch of JPEG files copied from an unreliable medium.
Needs the jpeginfo tool, found in the jpeginfo package (on Debian at least).
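A hedged sketch of the repair loop (the real command isn't shown; SRC is a hypothetical backup tree, and jpeginfo -c reports ERROR/WARNING markers for broken files):

```shell
SRC=/mnt/backup/photos     # assumed location of the known-good copies

# Skip quietly if jpeginfo is not installed.
if command -v jpeginfo >/dev/null 2>&1; then
    for f in *.jpg; do
        [ -e "$f" ] || continue              # no .jpg files at all
        if jpeginfo -c "$f" | grep -qE 'ERROR|WARNING'; then
            # Find a same-named file in the backup tree and restore it.
            good=$(find "$SRC" -name "$(basename "$f")" | head -n 1)
            [ -n "$good" ] && cp -v "$good" "$f"
        fi
    done
fi
checked=yes
```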
I have a bash alias for this command line and find it useful for searching C code for error messages.
The -H tells grep to print the filename. You can omit the -i to match the case exactly, or keep the -i for case-insensitive matching.
The find command finds all .c and .h files.
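A worked example of the find + grep combination ("error" is just a sample search string, and the scratch tree is created only for illustration):

```shell
# Build a small scratch tree to search.
mkdir -p /tmp/grepc_demo
printf '/* TODO: error handling */\nint main(void){return 0;}\n' > /tmp/grepc_demo/a.c
printf '#define ERR "fatal Error"\n' > /tmp/grepc_demo/a.h
printf 'not searched\n' > /tmp/grepc_demo/notes.txt

# Find every .c and .h file and grep them in one pass:
# -H prints the filename, -i makes the match case-insensitive.
find /tmp/grepc_demo \( -name '*.c' -o -name '*.h' \) \
    -exec grep -iH 'error' {} + > /tmp/grepc_demo/hits.txt
cat /tmp/grepc_demo/hits.txt
```

Note that notes.txt is never searched, since find only hands .c and .h files to grep.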
Aureport is a tool for displaying the auditd system log. The -x option displays the executables launched on the system.
Aureport works with auditd, so auditd must be installed and running on the system.
Tested on CentOS / Debian.
Please take notice that if you are going to apply the shadow effect to a JPG file,
change -background none to -background white!
-background none produces a transparent background, but JPG doesn't support transparency, so when viewing you will get a black box!
So we put a white background underneath! Any other colour can be used as well!
$rotate: the rotation angle
$width, $height: the width and height to scale to
$brightness: the brightness adjustment
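A hedged sketch pulling the pieces together (the exact original command isn't shown; -modulate stands in for the brightness change, the shadow geometry is an example, and the sample input is generated on the fly):

```shell
rotate=10; width=100; height=100; brightness=5

# Skip quietly if ImageMagick's convert is not installed.
if command -v convert >/dev/null 2>&1; then
    convert -size 60x60 xc:red /tmp/shadow_in.jpg    # sample input

    # Rotate, resize, brighten, then add a shadow; -background white
    # instead of none, since JPG has no transparency.
    convert /tmp/shadow_in.jpg -rotate "$rotate" \
        -resize "${width}x${height}" -modulate "$((100 + brightness))" \
        \( +clone -background white -shadow 60x4+4+4 \) +swap \
        -background white -layers merge +repage /tmp/shadow_out.jpg
fi
```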
The colors are defined as variables.
e.g.
RED="\[\033[01;31m\]"
BLUE="\[\033[01;34m\]"
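For example, a prompt built from such variables (the layout and the RESET variable are illustrative):

```shell
# The \[ \] pairs tell bash these are non-printing escape codes,
# so line editing keeps working correctly.
RED="\[\033[01;31m\]"
BLUE="\[\033[01;34m\]"
RESET="\[\033[00m\]"

# user in red, host in blue, then the working directory.
PS1="${RED}\u${RESET}@${BLUE}\h${RESET}:\w"'\$ '
```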
Runs an instance of screen with the name "name_me" and the command echo "hi".
To reconnect to screen instance later use:
screen -r name_me
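A hedged reconstruction of the launch command (the exact original isn't shown; -S names the session and -dm starts it detached so it keeps running in the background):

```shell
# Skip quietly if screen is not installed.
if command -v screen >/dev/null 2>&1; then
    # Start a detached session named "name_me" running a command.
    screen -dmS name_me sh -c 'echo "hi"; sleep 60' 2>/dev/null || true
    screen -ls | grep name_me || true            # list it
    screen -S name_me -X quit 2>/dev/null || true   # clean up the demo
fi
started=yes
```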
While I love gpg and truecrypt, there are times when you just want to edit a file without worrying about keys or needing extra software on hand. In those cases you can use vim's encrypted file format.
For more info on vim's encrypted files visit: http://www.vim.org/htmldoc/editing.html#encryption
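A sketch of the non-interactive equivalent (interactively you'd just run vim -x file, or :X inside vim; the password below is hard-coded purely for illustration):

```shell
printf 'my secret note\n' > /tmp/vimcrypt_demo.txt

# Only attempt this if vim was built with the +cryptv feature.
if vim --version 2>/dev/null | grep -q '+cryptv'; then
    # Setting 'key' and writing encrypts the file on disk.
    vim -es -u NONE -c 'set key=s3cret' -c 'wq' /tmp/vimcrypt_demo.txt
fi

# Encrypted vim files start with the magic string "VimCrypt~".
head -c 9 /tmp/vimcrypt_demo.txt > /tmp/vimcrypt_magic
```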
This prints a summary of your referrers from your logs, as long as they occurred a certain number of times (in this case 500). The grep command excludes some terms; I add this in to remove results I'm not interested in.
I use this (well, I normally just drop the F=*.log bit and put that straight into the awk command) to count how many times I get referred from another site. I know it's rough; it's just to give me an idea of where any posts I make are ending up. The reason I do the Q="query" bit is that I often want to check another domain quickly, and it's quick to use CTRL+A to jump to the start and then CTRL+F to move forward the 3 steps to change the grep query. (I find this easier than moving backwards, because if you group a lot of domains with the pipe your command line can get quite messy, so it's normally easier to have it all at the front so you just have to edit it & hit enter.)
For people new to the shell, it does the following. The Q= and F= bits just make names we can refer to. awk -F\" '{print $4}' $F reads the file specified by $F and splits each line on double quotes. It prints out the fourth column for egrep to work on; the 4th column in the log is the referer domain. egrep then matches our query against this list from awk. Finally, wc -l gives us the total number of lines (i.e. matches).
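A self-contained illustration with a fake combined-format log (the real command reads your own *.log files and your own query):

```shell
# Two fake access-log lines; the referrer is the 4th
# double-quote-delimited field.
cat > /tmp/referer_demo.log <<'EOF'
1.2.3.4 - - [01/Jan/2024:00:00:00 +0000] "GET / HTTP/1.1" 200 123 "http://example.com/post" "Mozilla"
5.6.7.8 - - [01/Jan/2024:00:00:01 +0000] "GET /a HTTP/1.1" 200 99 "http://other.net/" "Mozilla"
EOF

Q="example.com"              # the referrer pattern to count
F=/tmp/referer_demo.log

# Split on double quotes, keep the referrer field, count matches.
hits=$(awk -F\" '{print $4}' $F | egrep "$Q" | wc -l)
echo "$hits"                 # 1
```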
There is no explicit file-search command in DOS, but you can create a batch file with this one and find all JPEGs on the C: drive.
Note: if you create a batch file "find.bat", the syntax changes to:
for %%f in (c) do dir %%f:\%1 /s /p
you can then use
find *.jpg
Reads ~/.Xdefaults into the X resource database, lexicographically sorted, merging the new values with, instead of replacing, the current contents of the specified properties.