Commands tagged shell (95)

  • Get information about the Apache build (its compile-time configuration) at your fingertips.


    -1
    httpd2 -V
    unixbhaskar · 2009-08-29 13:04:37 3
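    A minimal sketch, not part of the original entry: the binary name varies by distribution (httpd, httpd2, apache2, or the apachectl wrapper), so probe a few common names and use the first one found.

      # try the usual Apache binary names and print the build configuration
      for bin in httpd2 httpd apache2 apachectl; do
          command -v "$bin" >/dev/null 2>&1 && { "$bin" -V; break; }
      done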
  • chkrootkit is a tool to locally check for signs of a rootkit. Get it from http://www.chkrootkit.org


    -1
    chkrootkit -x | less
    unixbhaskar · 2009-08-30 12:47:08 3
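    A hedged follow-up sketch: run it as root in quiet mode and keep a dated copy of the output for later comparison (the log path is just an example).

      # -q suppresses everything except suspicious findings
      sudo chkrootkit -q | tee "$HOME/chkrootkit-$(date +%F).log"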
  • Searches the directories of $PATH in order and stops after finding the first entry; looks only for the given fileName. Works in Bourne, Korn, Bash and Z shells.


    -1
    for L in `echo :$PATH | tr : '\n'`; do F=${L:-"."}/fileName; if [ -f ${F} -o -h ${F} ]; then echo ${F}; break; fi; done
    arcege · 2009-09-11 16:14:36 3
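    The same idea wrapped in a reusable function, as a sketch (the name pathfind is illustrative, not part of the original command):

      pathfind () {
          # print the first regular file or symlink named $1 found on $PATH
          for L in `echo :$PATH | tr : '\n'`; do
              F=${L:-"."}/$1
              if [ -f "$F" -o -h "$F" ]; then echo "$F"; return 0; fi
          done
          return 1
      }
      pathfind ls    # e.g. prints /bin/ls on most systems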
  • Works in all shells. Does not require a test; it behaves like an assertion: if VAR is unset or empty, the shell prints the message and aborts.


    -1
    : ${VAR:?unset variable}
    arcege · 2009-09-14 19:41:01 4
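    A minimal sketch of the assertion in a script: the shell prints the message and aborts if the variable is unset or empty (BACKUP_DIR is a hypothetical name).

      #!/bin/sh
      # fail fast if the caller forgot to set BACKUP_DIR
      : ${BACKUP_DIR:?please set BACKUP_DIR before running this script}
      echo "backing up into $BACKUP_DIR"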
  • Often you run a command in the terminal and don't realize it's going to take forever. You can open a new terminal, but you lose the local history of the suspended one. You can stop the running command with Ctrl+C, but that may produce undesirable side effects. Ctrl+Z suspends the job, and (assuming you have no other jobs running in the background) %1 resumes it. Appending & tells it to run in the background. You now have a job running concurrently with your terminal. Note this will still print any output to the same terminal you're working on. Tested on zsh and bash.


    -1
    <ctrl+z> %1 &
    joem86 · 2010-10-25 17:43:38 5
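    A hedged extension of the same trick for bash/zsh: if you want the job to keep running even after you close the terminal, detach it from the shell.

      <ctrl+z>          # suspend the foreground command
      bg %1             # resume it in the background
      disown %1         # drop it from the job table so the shell won't send it SIGHUP on exit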

  • -1
    ps aux | awk {'sum+=$3;print sum'} | tail -n 1
    tailot · 2011-07-16 16:16:59 3
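    The one-liner above keeps a running total of column 3 (%CPU) of ps aux and shows only the last line. A hedged, slightly tidier variant does the same sum in awk's END block:

      # total %CPU reported by ps across all processes
      ps aux | awk '{ sum += $3 } END { printf "total %%CPU: %.1f\n", sum }'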

  • -1
    help shopt
    ankush108 · 2012-06-26 17:25:38 19
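    A hedged example of the kind of option shopt controls, for an interactive bash session:

      shopt -s globstar nocaseglob   # enable ** recursive globbing and case-insensitive globs
      ls **/*.log                    # matches .log files at any depth (bash 4+)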
  • Iterate over a range of numbers with a for loop in sh or bash.


    -1
    rangeBegin=10; rangeEnd=20; for numbers in $(eval echo "{$rangeBegin..$rangeEnd}"); do echo $numbers;done
    aysadk · 2019-07-26 09:19:44 222
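    A hedged bash-only alternative that avoids the eval by using an arithmetic for loop:

      rangeBegin=10; rangeEnd=20
      for (( n = rangeBegin; n <= rangeEnd; n++ )); do echo "$n"; done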
  • "seq" has an additional parameter to use as INCREMENT. # seq FIRST INCREMENT LAST https://linux.die.net/man/1/seq Show Sample Output


    -1
    for i in $(seq 1 5) ; do echo $i ; done
    guilsson · 2019-07-29 18:34:12 203
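    A small sketch of the INCREMENT form mentioned above: count from 0 to 100 in steps of 25.

      for i in $(seq 0 25 100); do echo "$i"; done   # prints 0 25 50 75 100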
  • rkhunter (Rootkit Hunter) is a Unix-based tool that scans for rootkits, backdoors and possible local exploits. It is a shell script that carries out various checks on the local system to try to detect known rootkits and malware. It also checks whether commands have been modified and whether the system startup files have been modified, and performs various checks on the network interfaces, including checks for listening applications.


    -2
    rkhunter --check
    unixbhaskar · 2009-08-30 12:53:33 7
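    A hedged sketch for unattended runs; check that your rkhunter version supports these options before relying on them.

      # --sk skips the keypress between test groups, --report-warnings-only trims the output
      sudo rkhunter --check --sk --report-warnings-only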
  • Read and execute commands from FILENAME in the current shell. The entries in $PATH are used to find the directory containing FILENAME. If any ARGUMENTS are supplied, they become the positional parameters when FILENAME is executed.


    -2
    . filename [arguments]
    saibbot · 2011-06-06 14:14:43 3
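    A minimal sketch showing why sourcing matters: the sourced file runs in the current shell, so its functions and variables persist (greet.sh is a hypothetical file).

      printf 'greet() { echo "hello $1"; }\n' > greet.sh
      . ./greet.sh
      greet world      # prints "hello world" in the current shell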
  • Found this one a little more useful for me. It removes the perl dependency (from command 2535). Source for the command: http://www.earthinfo.org/linux-disk-usage-sorted-by-size-and-human-readable/


    -3
    function duf { du -sk "$@" | sort -n | while read size fname; do for unit in k M G T P E Z Y; do if [ $size -lt 1024 ]; then echo -e "${size}${unit}\t${fname}"; break; fi; size=$((size/1024)); done; done; }
    marssi · 2009-07-02 19:56:36 6
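    Example use of the function above; the paths are illustrative.

      duf /var/log /usr/share /home/*   # smallest first, human-readable sizes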
  • After you install slocate, the first thing to do is initialise the database by issuing "slocate -u". From then on, just give a file or directory name as an argument and slocate will reveal its location in the system along with the full path. Moreover, it is a more secure way of looking into the file system, since it only shows entries the calling user is permitted to see.


    -3
    slocate filename/dirname
    unixbhaskar · 2009-08-29 03:28:08 3
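    A minimal sketch of the workflow described above: build the database once as root, then query it as an ordinary user (the file name is illustrative).

      sudo slocate -u          # initialise/refresh the database
      slocate sshd_config      # print matching paths you are allowed to see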
  • A bit of explanation is required for this command. It opens a vi editor on a temporary copy of the password file, so any change you make is not written to the passwd file until you save it. The reason for using this over other ways of editing the password file, especially in a networked environment, is that it locks the password file while you work on it, so nobody can tamper with it during that period. Once you are done (i.e. you save the temporary file), the lock is released. It is a much safer mechanism for handling sensitive data like the passwd file; never use other tools like cat, nano or any other means.


    -3
    vipw
    unixbhaskar · 2009-08-29 03:46:42 3
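    A hedged extra: vipw honours $EDITOR/$VISUAL, and on most Linux systems a -s flag edits the shadow file with the same locking (check your man page).

      sudo EDITOR=nano vipw    # edit /etc/passwd with your preferred editor
      sudo vipw -s             # edit /etc/shadow under the same lock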
  • If you followed my previous posting on "vipw", no explanation is required: the same mechanism is at work here. It opens a temporary copy of the group file in a vi editor for you to edit and, most importantly, attaches a lock to it. Once you are done, the lock is released and the changes are reflected in the original file, so you can securely edit the group file over the network without fear of it being tampered with.


    -3
    vigr
    unixbhaskar · 2009-08-29 03:56:07 3
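    As with vipw, a hedged note: on most Linux systems vigr also accepts -s to edit the group shadow file under the same lock.

      sudo vigr        # edit /etc/group
      sudo vigr -s     # edit /etc/gshadow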
  • If you followed my other postings on "vipw" and "vigr", no explanation is required: it does the same thing as those two commands. It opens the /etc/sudoers file, attaches a lock to it, and presents a temporary copy in a vi editor so you can edit the sudoers file securely; once you are done, the lock is released and the changes are reflected in the original file. visudo parses the sudoers file after the edit and will not save the changes if there is a syntax error. Upon finding an error, visudo prints a message stating the line number(s) where the error occurred and the user receives a "What now?" prompt. At this point the user may enter "e" to re-edit the sudoers file, "x" to exit without saving the changes, or "Q" to quit and save the changes. The "Q" option should be used with extreme care, because if visudo believes there to be a parse error, so will sudo, and no one will be able to sudo again until the error is fixed. If "e" is typed after a parse error has been detected, the cursor is placed on the line where the error occurred (if the editor supports this feature). PS: I have experienced myself, and a few people have shown me, that it behaves badly on some distributions, notably SLES, but the problem can be rectified with a little caution.


    -3
    visudo
    unixbhaskar · 2009-08-29 04:06:11 3
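    A hedged sketch of a safer workflow: check the syntax without editing, and keep local rules in a drop-in file so a typo cannot lock you out of the main sudoers (the drop-in file name is illustrative).

      sudo visudo -c                               # parse-check the current sudoers
      sudo visudo -f /etc/sudoers.d/local-admins   # edit a drop-in file with the same checks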
  • Once the ssh connection to the remote server is established, the given command runs on that server and its output is returned to your local terminal.


    -3
    ssh user@remotehost [anycommand]   (e.g. uptime, w)
    unixbhaskar · 2009-08-29 04:27:37 7
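    Concrete examples of the pattern above; quote the remote command if it contains pipes or redirections so they run on the remote side.

      ssh user@remotehost uptime
      ssh user@remotehost 'df -h | sort -k5 -r'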
  • This command reveals the logins made to the system, as well as when reboots occurred. It reads /var/log/wtmp, which records every successful login and every reboot. It has many switches that let you see when people logged in and how long they stayed.


    -3
    last
    unixbhaskar · 2009-08-29 12:08:30 4
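    A hedged sketch of common variations (flag names per the usual Linux last; the username is illustrative):

      last reboot            # only reboot records from /var/log/wtmp
      last -n 10 username    # the ten most recent logins of a given user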
  • Useful in while and if statements, e.g.: if not grep string filename; then echo string not found; exit 1; fi


    -4
    not () { "$@" && return 1 || return 0; }
    arcege · 2009-09-23 01:09:53 5
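    Example use of the helper above in a guard clause, as the description suggests:

      if not grep -q "needle" haystack.txt; then
          echo "needle not found" >&2
          exit 1
      fi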
  • If you want to copy all files listed (with full paths) in a text file (e.g. a cmus playlist.pl) to a certain directory, use this nice one-liner... Credit goes to RiffRaff: http://www.programmingforums.org/post242527-2.html


    -4
    (while read fn; do cp "$fn" "$DESTINATION"/.; done < filename.txt)
    jameskirk · 2013-05-05 16:29:51 11
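    A worked example of the one-liner, assuming a cmus playlist and a hypothetical destination directory:

      DESTINATION=~/music-export
      mkdir -p "$DESTINATION"
      (while read fn; do cp "$fn" "$DESTINATION"/.; done < playlist.pl)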