All commands (14,187)

  • You might want to check which file and directory names would be renamed or truncated if you create an ISO 9660 Level 2 image out of them. Use this command to check first; a sketch of actually building the image follows this entry.


    1
    find . -regextype posix-extended -not -regex '.*/[A-Za-z_]*([.][A-Za-z_]*)?'
    zhangweiwu · 2010-06-25 00:27:09 3
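
    A possible follow-up (an untested sketch, not part of the entry above): once the check comes back clean, the Level 2 image itself could be built roughly like this, assuming mkisofs is available (on some distros the binary is called genisoimage) and image.iso is a name of your choosing.
    mkisofs -iso-level 2 -o image.iso .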
  • 'ac' is included in the package 'acct', which is described as "The GNU Accounting utilities for process and login accounting". Other interesting invocations: print statistics for a specified user with 'ac -d username', or print statistics for all users with 'ac -p'. With my command, the output is also printed in a more readable, sexagesimal (hours, minutes, seconds) style.


    -1
    ac -d | awk '{h=int($NF); m=($NF-h)*60; s=int((m-int(m))*60); m=int(m); print $0" = "h"h "m"m "s"s "}'
    karpoke · 2010-06-24 08:08:41 7
  • Curl is not installed by default on many common distros anymore; wget always is :) The same trick against another service: wget -qO- ifconfig.me/ip


    12
    wget -qO- icanhazip.com
    SuperJediWombat · 2010-06-24 03:49:14 8
  • This ran on an Ubuntu box using espeak to speak text from the bash shell. On a Mac you should use 'say' (a sketch of that variant follows this entry). You can also change your alarm interval and your snooze interval, which are currently 8 hours and 1 minute. I would run this via cron, yet it's easier to disable if you run it as a command like this :P


    3
    sleep 8h && while [ 1 ] ; do date "+Good Morning. It is time to wake up. The time is %I %M %p" | espeak -v english -p 0 -s 150 -a 100 ; sleep 1m; done
    copremesis · 2010-06-23 17:34:54 3
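
    A possible Mac variant (an untested sketch of the 'say' substitution mentioned above, assuming the default voice is acceptable): say reads the text from standard input when called with no arguments.
    sleep 8h && while true ; do date "+Good Morning. It is time to wake up. The time is %I %M %p" | say ; sleep 1m; done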
  • Validate a file using xmllint. If there are parser errors, edit the file in vim at the line of the first error (usage example below).


    1
    vimlint(){ eval $(xmllint --noout "$1" 2>&1 | awk -F: '/parser error/{print "vim \""$1"\" +"$2; exit}'); }
    putnamhill · 2010-06-23 15:55:02 3
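
    Usage sketch (broken.xml is a hypothetical file name): if the file has parser errors, vim opens at the line of the first one; if it is well formed, nothing happens.
    vimlint broken.xml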
  • This function counts the opening and closing parentheses in a string. This is useful if you have e.g. long boolean expressions with many parentheses and you simply want to check that you didn't forget to close one (example below).


    1
    countbraces () { COUNT_OPENING=$(echo $1 | grep -o "(" | wc -l); COUNT_CLOSING=$(echo $1 | grep -o ")" | wc -l); echo Opening: $COUNT_OPENING; echo Closing: $COUNT_CLOSING; }
    hons · 2010-06-23 12:24:18 4
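
    A quick example with a deliberately unbalanced (made-up) expression; the differing counts reveal the missing closing parenthesis.
    countbraces "(a && (b || c)"
    Opening: 2
    Closing: 1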
  • Personally, I save this in a one-line script called ~/bin/sci:
    #!/bin/bash
    for pid in `screen -ls | grep -v $STY | grep tached | awk '{print $1;}' | perl -nle '$_ =~ /^(\d+)/; print $1;'`; do screen -x $pid; done
    I also use:
    alias scx='screen -x'
    alias scl='screen -ls | grep -v $STY'


    0
    for pid in `screen -ls | grep -v $STY | grep tached | awk '{print $1;}' | perl -nle '$_ =~ /^(\d+)/; print $1;'`; do screen -x $pid; done
    tmsh · 2010-06-22 23:06:31 29
  • This is helpful if you connect to several networks with different subnets, such as 192.x networks, 10.x networks, etc. It cuts the first three octets of the IP from the ifconfig output and runs an nmap ping scan on that subnet. Replace wlan0 with your interface. Assumes a class C network; if class B, use cut -d "." -f 1-2 and change the nmap command accordingly (a sketch follows this entry).


    -1
    dhclient wlan0 && sbnt=$(ifconfig wlan0 |grep "inet addr" |cut -d ":" -f 2 | cut -d "." -f 1-3) && nmap $sbnt.0/24 -sP
    wltj · 2010-06-22 21:00:29 6
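
    A possible class B variant of the above (untested sketch; it assumes the same older ifconfig "inet addr:" output format and scans a /16 instead of a /24):
    dhclient wlan0 && sbnt=$(ifconfig wlan0 |grep "inet addr" |cut -d ":" -f 2 | cut -d "." -f 1-2) && nmap $sbnt.0.0/16 -sP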
  • It really disables all ICMP responses, not only the ping one. If you want to enable them again you can use: sudo -s "echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all" (a sysctl equivalent is sketched below).


    6
    sudo -s "echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all"
    sliceoflinux · 2010-06-22 19:16:43 14
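
    An alternative way to toggle the same kernel setting (not from the original entry) is sysctl, which avoids redirecting inside a root shell; set the value back to 0 to re-enable echo replies.
    sudo sysctl -w net.ipv4.icmp_echo_ignore_all=1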
  • The improvement is that you can re-attach to the screen session at a later point (see the example below).


    5
    screen -d -m command &
    unixmonkey10455 · 2010-06-22 18:24:22 3
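
    A small usage sketch: list the detached sessions, then re-attach; if several sessions exist, pass the session name printed by screen -ls to -r.
    screen -ls
    screen -r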
  • Requires figlet. Other than that, this should be portable enough across all the Bourne-compatible shells (sh, bash, ksh, zsh, etc). Produces a massive number using figlet that counts down the number of seconds for any given minute interval. For example, here's a 4-minute timer: i=$((4*60)); while [ $i -gt 0 ]; do clear; echo $i | figlet; sleep 1; i=$(($i-1)); done; And a 1-minute timer: i=$((1*60)); while [ $i -gt 0 ]; do clear; echo $i | figlet; sleep 1; i=$(($i-1)); done;


    1
    i=$((15*60)); while [ $i -gt 0 ]; do clear; echo $i | figlet; sleep 1; i=$(($i-1)); done;
    atoponce · 2010-06-22 17:49:36 5
  • This is an easy way to quickly get the status of a device in multipath on SLES systems, as long as the server is configured according to Novell's standards, where multipathed disks are referred to by the /dev/disk/by-... tree. Make sure to replace name_of_vg with your Volume Group name.


    -1
    pvscan | awk '/name_of_vg/ {print $2}' | sed 's/[-|/|]/ /g' | cut -d " " -f7
    slashdot · 2010-06-22 16:34:42 3
  • Figlet is easy to find for download on the internet, and works for any text. Quite cool.


    4
    watch -tn1 'date +%r | figlet'
    SuperFly · 2010-06-22 10:59:16 6
  • On-the-fly conversion of Unix time to human-readable timestamps in Squid's access.log.


    1
    perl -p -e 's/^([0-9]*)/"[".localtime($1)."]"/e' < /var/log/squid/access.log
    KoRoVaMiLK · 2010-06-22 08:42:40 4
  • Usage: get-ipsw device-name generation-string firmware-version
    For example: get-ipsw iPod 2,1 4.0
    Generation strings: iPhone 3G: iPhone 1,2 / iPhone 3GS: iPhone 2,1 / iPod touch 2G: iPod 2,1 / iPod touch 3G: iPod 3,1
    This can be used with idevicerestore (I haven't tried it though): http://github.com/posixninja/idevicerestore
    Based on: http://www.tuaw.com/2010/06/21/ios-4-0-firmware-release-expected-momentarily-quick-terminal-ti/


    -1
    get-ipsw(){ curl -s -L http://phobos.apple.com/version | sed -rn "s|[\t ]*<string>(http://appldnld\.apple\.com\.edgesuite\.net/content\.info\.apple\.com/iPhone[0-9]?/[^/]*/$1$2_$3_[A-Z0-9a-z]*_Restore\.ipsw)</string>|\1|p" | uniq; }
    matthewbauer · 2010-06-22 02:34:15 5
  • Google just released a new command line tool offering all sorts of new services from the command line. One of them is uploading a YouTube video, but there are plenty more Google services to interact with. Download it here: http://code.google.com/p/googlecl/ Manual: http://code.google.com/p/googlecl/wiki/Manual This specific command is courtesy of Lifehacker: http://lifehacker.com/5568817/ though all can be found in the manual page linked above.


    38
    google docs edit --title "To-Do List" --editor vim
    spiffwalker · 2010-06-21 16:15:42 17
  • This command can be used to revert a particular changeset in the local copy. I find this useful because I frequently import files into the wrong directory. After the import it says "Committed revision 123" or similar. To revert this change in the working copy, do: svn merge -c -123 . (don't forget the .) and then commit.


    1
    svn merge -c -REV
    shadycraig · 2010-06-21 15:11:13 3
  • Simple countdown clock that should be quite portable across any Bourne-compatible shell. I used to teach for a living, and I would run this code when it was time for a break. Usually, I would set "MIN" to 15 for a 15-minute break. The computer would be connected to a projector, so this would be projected on screen, front and center, for all to see.


    12
    MIN=1 && for i in $(seq $(($MIN*60)) -1 1); do echo -n "$i, "; sleep 1; done; echo -e "\n\nBOOOM! Time to start."
    atoponce · 2010-06-20 15:19:12 198
  • This uses pv to monitor the progress of the MySQL import and displays it through Zenity. You could also do this: pv ~/database.sql | mysql -u root -pPASSWORD -D database_name and get a display in the CLI that looks like this: 2.19MB 0:00:06 [ 160kB/s] [> ] 5% ETA 0:01:40 My Nautilus script using this command is here: http://www.daniweb.com/forums/post1253285.html#post1253285


    5
    (pv -n ~/database.sql | mysql -u root -pPASSWORD -D database_name) 2>&1 | zenity --width 550 --progress --auto-close --auto-kill --title "Importing into MySQL" --text "Importing into the database"
    kbrill · 2010-06-19 22:40:10 7
  • I use zgrep because it also handles non-gzipped files. With ls -tr, we parse the logs in time order. Grepping the empty string just concatenates all the logs, but you can also grep an IP, a URL... (an example follows this entry).


    2
    zgrep -h "" `ls -tr access.log*`
    dooblem · 2010-06-19 09:44:05 4
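
    For instance, to pull every request from one client address across all the rotated logs, oldest first (192.0.2.10 is a placeholder IP):
    zgrep -h "192.0.2.10" `ls -tr access.log*`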
  • This command allows you to stream your log files, including gzipped files, into one stream which can be piped to awk or some other command for analysis. Note: if your version of 'find' supports it, use: find /var/log/apache2 -name 'access.log*gz' -exec zcat {} + -or -name 'access.log*' -exec cat {} +


    0
    find /var/log/apache2 -name 'access.log*gz' -exec zcat {} \; -or -name 'access.log*' -exec cat {} \;
    recursiverse · 2010-06-19 08:35:12 3
  • No need for grep; let awk do the match. This will not behave properly if the filenames contain whitespace, which is awk's default field separator (a whitespace-tolerant sketch follows this entry).


    1
    svn st | awk '{if ($1 ~ "?") print $2}' | xargs svn add
    sciurus · 2010-06-19 03:07:26 6
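
    A possible whitespace-tolerant alternative (untested sketch, not the original poster's): strip the leading status column and add each path in a read loop, so names containing spaces survive (names containing newlines still will not).
    svn st | grep '^?' | sed 's/^?[[:space:]]*//' | while IFS= read -r f; do svn add "$f"; done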
  • Using -f treats the process name as a pattern, so you don't have to include the full path in the command. Thus 'pkill -f firefox' works, even with Iceweasel (see the preview tip below).


    0
    pkill -f <process name>
    eikenberry · 2010-06-19 02:36:31 3
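
    Before killing anything, it can be worth previewing what the pattern would match; pgrep takes the same -f flag (how much of the command line it prints varies between procps versions):
    pgrep -f -l firefox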
  • Same as http://www.commandlinefu.com/commands/view/5876, but for bash. This will show a numerical value for each of the 256 colors in bash. Everything in the command is a bash builtin, so it should run on any platform where bash is installed. Prints one color per line. If someone is interested in formatting the output, a column-formatted alternative is sketched below.


    49
    for code in {0..255}; do echo -e "\e[38;05;${code}m $code: Test"; done
    scribe · 2010-06-19 02:14:42 14
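
    One possible formatted variant (my own sketch, not the original poster's follow-up): sixteen numbered colour cells per row, resetting the attributes at the end of each row, still using only bash builtins.
    for code in {0..255}; do printf '\e[38;05;%dm%4d ' "$code" "$code"; (( (code + 1) % 16 == 0 )) && printf '\e[0m\n'; done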
  • This command will automate the creation of ESSIDs and batch processing in pyrit. Give it a list of WPA/WPA2 access points you're targeting and it'll import those ESSIDs and pre-compute the potential password hashes for you, assuming you've already imported a list of passwords using: pyrit -i dictionary import_passwords. Once the command finishes, point pyrit at your packet capture containing a handshake with the attack_db module. Game over.


    0
    gopyrit () { if [ $# -lt 1 ]; then echo $0 '< list of ESSIDs >'; return -1; fi; for i in "$@"; do pyrit -e $i create_essid && pyrit batch; done; pyrit eval; }
    meathive · 2010-06-19 01:11:00 7
