So, I'm using a CentOS VM in VirtualBox and created four new disks on the SCSI controller. The VM exposed them as the device nodes /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd. Using a for loop, all four disks are partitioned for LVM.
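A minimal sketch of such a loop, assuming parted is installed and each disk gets a single full-size partition with the LVM flag set:

for disk in /dev/sd{a,b,c,d}; do
    # fresh label, one partition spanning the disk, flagged for LVM
    parted -s "$disk" mklabel msdos mkpart primary 0% 100% set 1 lvm on
done

Each new partition (/dev/sda1 and so on) can then be initialized with pvcreate.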
In the field, I needed to script a process to scan the network for a specific vendor's devices. With the help of nmap I found all the devices from that particular vendor, and started a scripted netcat session to download configuration files from a tftp server. This is the nmap loop (part of the script). You can, however, add another pipe with grep to filter for the vendor/manufacturer's devices only. If you want to check the whole script, see http://pastebin.com/ju7h4Xf4
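A sketch of that kind of scan-and-filter step, assuming a /24 subnet and "Cisco" as the vendor string (both hypothetical placeholders); nmap only reports MAC vendors when run as root on the local LAN:

nmap -sn 192.168.1.0/24 \
  | awk '/^Nmap scan report/ {ip=$NF; gsub(/[()]/,"",ip)} /^MAC Address/ && /Cisco/ {print ip}'

Each printed IP can then be fed to the netcat/tftp part of the script.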
Lists every Active Directory user known to winbind, printing the primary group (marked [*]) followed by the user's other groups (marked [ ]). Reformatted for readability:

for ADUSER in $(wbinfo -u --domain="$(wbinfo --own-domain)" | sort); do
    WBSEP=$(wbinfo --separator)
    ADUNAME=$(wbinfo -i "$ADUSER" | cut -d ":" -f5)   # full name (GECOS field)
    UINFO=$(wbinfo -i "$ADUSER" | cut -d ":" -f4)     # primary group GID
    SIDG=$(wbinfo -G "$UINFO")                        # GID -> SID
    GROUPID=$(wbinfo -s "$SIDG" | sed 's/.\{1\}$//' | cut -d "$WBSEP" -f2)
    echo -e "$ADUSER ($ADUNAME)\n$(printf '%.s-' {1..32})\n\t[*] $GROUPID"
    for GID in $(wbinfo -r "$ADUSER"); do
        SID=$(wbinfo -G "$GID")
        GROUP=$(wbinfo -s "$SID" | cut -d " " -f1,2)
        echo -e "\t[ ] $(echo -e "${GROUP/%?/}" | cut -d "$WBSEP" -f2)"
    done | sed '1d'                                   # drop the first group line
    echo -e "$(printf '%.s=' {1..32})\n"
done
Prints a table of Active Directory users (account, Windows user ID, full name, primary group), aligned with column. Reformatted for readability:

for ADUSER in $(wbinfo -u --domain="$(wbinfo --own-domain)" | sort); do
    WBSEP=$(wbinfo --separator)
    ADUNAME=$(wbinfo -i "$ADUSER" | cut -d ":" -f5)   # full name (GECOS field)
    UINFO=$(wbinfo -i "$ADUSER" | cut -d ":" -f3)     # UID
    GINFO=$(wbinfo -i "$ADUSER" | cut -d ":" -f4)     # primary group GID
    SIDU=$(wbinfo -U "$UINFO")                        # UID -> SID
    SIDG=$(wbinfo -G "$GINFO")                        # GID -> SID
    USERID=$(wbinfo -s "$SIDU" | sed 's/.\{1\}$//' | cut -d "$WBSEP" -f2)
    GROUPID=$(wbinfo -s "$SIDG" | sed 's/.\{1\}$//' | cut -d "$WBSEP" -f2)
    echo -e "$ADUSER:$USERID:$ADUNAME:$GROUPID"
done | column -tx -s:
Creates an incremental snapshot of individual folders.
Problem: I wanted to back up user data individually, using an incremental method. In this example, all user data is located in "/mnt/storage/profiles", with about 25 folders inside, each named after a user (/mnt/storage/profiles/mike, /mnt/storage/profiles/lucy, ...). I need each individual folder backed up, not the whole "/mnt/storage/profiles". So, using find with a limited directory depth and two variables (tarfile=username & desdir=destination), tar creates a .tgz file for each folder, resulting in "mike_2013-12-05.tgz" and "lucy_2013-12-05.tgz".
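A minimal sketch of that approach, assuming GNU tar (whose --listed-incremental snapshot files make later runs incremental) and a hypothetical destination directory:

desdir=/mnt/backups   # hypothetical destination
find /mnt/storage/profiles -mindepth 1 -maxdepth 1 -type d | while read -r dir; do
    tarfile=$(basename "$dir")
    tar -czpf "$desdir/${tarfile}_$(date +%F).tgz" \
        --listed-incremental="$desdir/${tarfile}.snar" "$dir"
done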
Problem: I wanted to back up user data individually. In this example, all user data is located in "/mnt/storage/profiles", with about 25 folders inside, each named after a user (/mnt/storage/profiles/mike, /mnt/storage/profiles/lucy, ...). I need each individual folder backed up, not the whole "/mnt/storage/profiles". So, using find with a limited directory depth and two variables (tarfile=username & desdir=destination), tar creates a .tgz file for each folder, resulting in "mike_full.tgz" and "lucy_full.tgz".
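The full-backup variant is the same loop without the snapshot option (again a sketch, with a hypothetical destination):

desdir=/mnt/backups   # hypothetical destination
find /mnt/storage/profiles -mindepth 1 -maxdepth 1 -type d | while read -r dir; do
    tarfile=$(basename "$dir")
    tar -czpf "$desdir/${tarfile}_full.tgz" "$dir"
done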
Renames files, eliminating a suffix; in this case everything after "-" is cut off. Just change "-" to the character you need.
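One way such a rename loop could look (a sketch; the original command is not shown here):

# strip everything from the first "-" onward in each matching filename;
# -n avoids clobbering when two files shorten to the same name
for f in *-*; do mv -n -- "$f" "${f%%-*}"; done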
Clears the "arp" table, without entering manually addresses (tested in Ubuntu).
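A sketch of how that can be done (an assumption; the original command is not shown): walk the current ARP cache and delete each entry:

for ip in $(arp -n | awk 'NR>1 {print $1}'); do sudo arp -d "$ip"; done

On newer systems, "sudo ip -s -s neigh flush all" achieves the same in a single command.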
I've been using it in a script to build proxy servers from scratch.