Commands tagged process (44)

What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

Check These Out

start a tunnel from some machine's port 80 to your local port 2001
Now you can access the website by going to http://localhost:2001/
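The command itself isn't shown on this page; a typical way to do this with ssh local port forwarding would be (the hostname below is just a placeholder):
$ ssh -L 2001:localhost:80 somemachine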

network throughput test
On the machine acting as the server, run: $ iperf -s
On the machine acting as the client, run: $ iperf -c ip.add.re.ss
where ip.add.re.ss is the IP address or hostname of the server.

Use "most" as your man pager
You should have the "most" package installed. I like it because it is colorful and easier to read; alternatively you can use "less" instead of "most". You can also add this to your ~/.bashrc to make it permanent.
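The one-liner isn't reproduced here; a common way to do what's described is to point man at most via the MANPAGER environment variable (a hedged sketch, not necessarily the original command):
$ export MANPAGER="most"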

Update program providing java on Debian
Lets you set all the Java alternatives at once to a matching version. It also has options for changing just the JRE or the plugin.
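The command isn't shown above; the tool being described is Debian's update-java-alternatives. A hedged example of typical usage (the version name is illustrative only):
$ update-java-alternatives -l
$ sudo update-java-alternatives -s java-1.11.0-openjdk-amd64
The --jre and --plugin options restrict the switch to just the JRE or the browser plugin.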

Convert CSV to JSON
Replace 'csv_file.csv' with your filename.
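The one-liner itself isn't included above; a common approach (a sketch, not necessarily the original command) uses Python's csv and json modules:
$ python3 -c "import csv, json; print(json.dumps(list(csv.reader(open('csv_file.csv')))))"
This emits a JSON array of rows; the first row will be the CSV header.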

Which processes are listening on a specific port (e.g. port 80)
swap out "80" for your port of interest. Can use port number or named ports e.g. "http"

defragment files
Thanks to flatcap for optimizing this command. It takes advantage of the ext4 filesystem's resistance to fragmentation: files that were previously fragmented are copied / deleted / pasted, essentially giving the filesystem another chance at saving the file contiguously. (Unlike FAT/NTFS, *nix filesystems always try to save a file without fragmenting it.) The command only affects the home directory, and only those files you have read/write permission for.

There are two issues with this command:
1. It really won't help much. It works, but Linux doesn't suffer much (if any) fragmentation, and even fragmented files have fast I/O.
2. It doesn't discriminate between fragmented and non-fragmented files, so a large ~/ directory with no fragments will take almost as long as an equally sized fragmented one.

The benefits I managed to work into the command:
1. It only defragments files under 16 MB, because a large file with fragments isn't as noticeable as a small fragmented file, and copying / deleting / pasting large files would take too long.
2. It gives a nice countdown in the terminal so you know how much progress is being made, and, just like other defragmenters, you can stop at any time (Ctrl+C).
3. It's fast! I can defrag my ~/ directory in 11 seconds, thanks to the ramdrive powering the command's temporary storage.

Bottom line: it's only an experiment. It is safe (I've used it several times for testing), but probably not very effective unless you somehow have a fragmentation problem on Linux; it might be a placebo for recent Windows converts looking for a defrag utility on Linux who won't accept no for an answer. It's also my first commandlinefu command.
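The original one-liner isn't reproduced on this page. As a very rough, hedged sketch of the copy / delete / paste idea it describes (using /dev/shm as the ramdrive and the sub-16 MB, writable-files limits mentioned above; this is not the author's exact command, and it loses data if interrupted mid-file, so treat it as illustration only):
$ find ~ -type f -size -16M -readable -writable | while read -r f; do cp -p "$f" /dev/shm/defrag.tmp && rm "$f" && mv /dev/shm/defrag.tmp "$f"; done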

Check if the Debian package was used since its installation/upgrade.
This script compares the modification date of /var/lib/dpkg/info/${package}.list with all the files mentioned there. It could be wrong on noatime partitions. Here is the non-oneliner version:

#!/bin/sh
package=$1
list=/var/lib/dpkg/info/${package}.list
inst=$(stat "$list" -c %X)
cat $list | (
    while read file; do
        if [ -f "$file" ]; then
            acc=$(stat "$file" -c %X)
            if [ $inst -lt $acc ]; then
                echo used $file
                exit 0
            fi
        fi
    done
    exit 1
)

Tell what is encoded in a float, given its HEX bytes
It handles all possible combinations of the hex bytes, including NaNs, infinities, normalized and subnormal numbers. This crazy dc stuff took me a few days to write, optimize, polish and squeeze so that it works within the tight 255-character bound. You can modify it easily for other IEEE 754 formats, say half, double, double-extended or quadruple. (I hope someone will find this useful and submit more dc code to commandlinefu!)
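The dc one-liner itself isn't reproduced here. As an illustration of the same idea (decoding a single-precision float from its hex bytes), not the author's dc code, Python's struct module will do it, NaNs and infinities included:
$ python3 -c "import struct, sys; print(struct.unpack('>f', bytes.fromhex(sys.argv[1]))[0])" 7fc00000
nan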

grep binary (hexadecimal) patterns
-P activates the Perl regular expression mode.
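The command isn't shown above; a hedged example of the technique, checking a file for the ELF magic bytes using PCRE \xHH escapes (-a treats binary input as text, -q suppresses output and just sets the exit status):
$ grep -qaP '\x7f\x45\x4c\x46' /bin/ls && echo "looks like an ELF binary"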

