Check These Out
Occasionally you need to flush the cache to force a zone update. Using this command is better than restarting the BIND9 process.
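Since the command itself isn't shown here, a sketch of the usual BIND9 cache flush via rndc (the zone name is just an example):

```shell
# Flush the entire resolver cache (assumes rndc is configured):
rndc flush

# Or flush a single name instead of the whole cache:
rndc flushname example.com
```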
This command will give you a list of available keyboard shortcuts according to stty.
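A minimal sketch of what that looks like, assuming the command in question is stty -a:

```shell
# Print current terminal settings; the "intr = ^C; ... susp = ^Z"
# section lists the control-key shortcuts the terminal driver honors.
stty -a
```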
This captures traffic on a remote machine with tshark, sends the raw pcap data over the SSH link, and displays it in Wireshark. Hitting Ctrl+C will stop the capture and, unfortunately, close your Wireshark window. This can be worked around by passing -c # to tshark to capture only a certain number of packets, or by redirecting the data through a named pipe rather than piping directly from ssh to Wireshark. I recommend filtering as much as you can in the tshark command to conserve bandwidth. tshark can be replaced with tcpdump like so:
$ ssh root@example.com tcpdump -w - 'port !22' | wireshark -k -i -
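The named-pipe workaround mentioned above can be sketched like this (hostname and filter are placeholders; the pipe keeps Wireshark alive when the capture side stops):

```shell
# Create a named pipe and feed the remote capture into it in the background:
mkfifo /tmp/capture.pipe
ssh root@example.com "tcpdump -w - 'not port 22'" > /tmp/capture.pipe &

# Wireshark reads from the pipe; stopping the ssh side no longer
# closes the Wireshark window:
wireshark -k -i /tmp/capture.pipe
```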
Can be used to create a path alias.
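For example, a path alias in bash might look like this (the alias name and directory are made up):

```shell
# Jump to a deeply nested directory with one word:
alias work='cd ~/projects/client/site/htdocs'

# Make it permanent by appending it to your ~/.bashrc:
echo "alias work='cd ~/projects/client/site/htdocs'" >> ~/.bashrc
```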
From: https://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html. #9
Thanks to flatcap for optimizing this command.
This command takes advantage of the ext4 filesystem's resistance to fragmentation.
By using this command, files that were previously fragmented are copied, deleted, and pasted back, essentially giving the filesystem another chance at saving each file contiguously. (Unlike FAT/NTFS, *nix filesystems always try to save a file without fragmenting it.)
My command only affects the home directory, and only those files you have read/write (R/W) permissions on.
There are two issues with this command:
1. It really won't help much: it works, but Linux doesn't suffer much (if any) fragmentation, and even fragmented files have fast I/O.
2. It doesn't discriminate between fragmented and non-fragmented files, so a large ~/ directory with no fragments will take almost as long as an equally sized fragmented ~/ directory.
The benefits I managed to work into the command:
1. It only defragments files under 16 MB, because a large file with fragments isn't as noticeable as a small fragmented file, and copying/deleting/pasting large files would take too long.
2. It gives a nice countdown in the terminal so you know how much progress is being made, and just like other defragmenters you can stop at any time (use Ctrl+C).
3. Fast! I can defrag my ~/ directory in 11 seconds thanks to the ramdrive powering the command's temporary storage.
Bottom line:
1. It's only an experiment; it's safe (I've used it several times for testing), but probably not very effective (unless you somehow have a fragmentation problem on Linux). It might be a placebo for recent Windows converts who are looking for a defrag utility on Linux and won't accept no for an answer.
2. It's my first commandlinefu command.
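The approach described above can be sketched roughly like this (a hypothetical reconstruction, not the original command; it assumes a tmpfs ramdisk at /dev/shm and skips the countdown):

```shell
#!/bin/bash
# For every writable regular file under 16 MB in ~, copy it to the
# ramdisk, delete the original, and copy it back, giving the
# filesystem a fresh chance to lay the file out contiguously.
find "$HOME" -type f -writable -size -16M -print0 |
while IFS= read -r -d '' f; do
    cp "$f" /dev/shm/defrag.tmp &&
    rm "$f" &&
    cp /dev/shm/defrag.tmp "$f"
done
rm -f /dev/shm/defrag.tmp
```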
This will merge all of the changes from {rev_num} to HEAD on the branch into the current working directory.
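With Subversion this typically looks like the following (a sketch; the branch URL is hypothetical and {rev_num} stands in for your starting revision):

```shell
# From a working copy, pull in everything committed on the branch
# between {rev_num} and HEAD:
svn merge -r {rev_num}:HEAD http://svn.example.com/repo/branches/mybranch .
```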
Then just run nc servername 2600 and ./script.sh.
Kill the client with Ctrl+C; you can reconnect several times.
Kill the server with exit.
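Since the server command isn't shown, a guess at what the listening side might look like (the -e option exists only in some netcat builds, and the loop is what allows reconnecting):

```shell
# Serve a shell on TCP port 2600; each client that disconnects
# (Ctrl+C) is replaced by a fresh listener:
while true; do nc -l -p 2600 -e /bin/bash; done
```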
Create commands to download all of your Picasaweb albums
Install Googlecl (http://code.google.com/p/googlecl/) and authenticate first.
Create a temporary file that acts as swap space. In this example it's a 1 GB file at the root of the filesystem. This additional capacity is added to the existing swap space.
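The usual recipe looks like this (a sketch with an assumed path of /swapfile; all of these commands need root):

```shell
# Create a 1 GB file, lock down its permissions, format it as swap,
# and enable it on top of the existing swap space:
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Confirm the new total capacity:
swapon --show
free -h
```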