Commands using tee (102)

  • Creates a dnsmasq-com-blackhole.conf file with one line that routes every domain in the com zone. You might use "address=/home.lab/" to point all possible subdomains of home.lab to your localhost or some other IP in a cloud.

    echo "address=/com/" | sudo tee /etc/dnsmasq.d/dnsmasq-com-blackhole.conf && sudo systemctl restart dnsmasq
    emphazer · 2018-05-14 16:28:18 0
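A minimal sketch of the home.lab variant mentioned above. The IP 192.168.1.10 is an assumed example address, and the file is written under /tmp so the sketch runs without root; the real target is a file under /etc/dnsmasq.d/ followed by a dnsmasq restart.

```shell
# Write a dnsmasq rule that answers every *.home.lab query with one lab IP.
# 192.168.1.10 is a made-up example address; /tmp stands in for /etc/dnsmasq.d/.
echo "address=/home.lab/192.168.1.10" | tee /tmp/dnsmasq-home-lab.conf
```

After moving the file into /etc/dnsmasq.d/ and restarting dnsmasq, a lookup such as `dig foo.home.lab` should resolve to the configured address.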
  • netstat doesn't always behave the same across platforms. Also, the original's use of three commands (netstat followed by grep followed by grep) is a waste of pipes.

    lsof -i :80 | tee /dev/stderr | wc -l
    AdvancedThreat · 2017-11-26 16:04:59 0
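The tee /dev/stderr trick in this entry generalizes to any pipeline: the listing stays visible on the terminal via stderr while wc counts the stdout copy. A self-contained sketch with synthetic text in place of lsof output:

```shell
# tee duplicates the stream: one copy goes to /dev/stderr (the terminal),
# the other continues down the pipe to wc -l, which prints the line count.
printf 'conn1\nconn2\nconn3\n' | tee /dev/stderr | wc -l
```

The three lines appear on stderr, followed by the count (3) on stdout.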

  • 1
    echo performance |sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    phrik · 2017-11-13 12:34:32 0
  • List all your public IPs in an EC2/AWS region and run an nmap service scan against them (skipping the ping check). Requires the aws CLI and jq for shell JSON processing.

    nmap -P0 -sV `aws --output json ec2 describe-addresses | jq -r '.Addresses[].PublicIp'` | tee /dev/shm/nmap-output.txt
    NightMonkey · 2017-08-18 17:55:13 0
  • Summarize established connections from netstat output. Using tee and /dev/stderr you can send the command's output to the terminal before wc consumes it, so the summary appears at the bottom of the output.

    netstat -n | grep ESTAB |grep :80 | tee /dev/stderr | wc -l
    rubenmoran · 2016-06-26 11:37:19 0
  • Save all output to a log.

    nohup bash 2>&1 | tee -i i-like-log-files.log &
    Tatsh · 2015-09-02 06:07:11 3
  • Use tee -a to append.

    command foo bar | sudo tee /etc/write-protected > /dev/null
    adeverteuil · 2015-02-08 03:58:35 0
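The difference between overwriting and appending is easy to check with a throwaway file (the /tmp path is just for illustration; the entry above targets a root-owned file, hence sudo):

```shell
# Plain tee truncates the target file; tee -a appends to it.
echo "first"  | tee    /tmp/tee-demo.txt > /dev/null   # file now has 1 line
echo "second" | tee -a /tmp/tee-demo.txt > /dev/null   # file now has 2 lines
wc -l < /tmp/tee-demo.txt
```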
  • If you have a client that connects to a server via a plain-text protocol such as HTTP or FTP, this command lets you monitor the messages the client sends to the server. The application-level text stream is dumped on the command line as well as saved in a file called proxy.txt. Change 8080 to the local port you want your client to connect to; also change the destination to the IP address of the destination server, and 80 to the destination server's port. Then simply point your client to localhost 8080 (or whatever you changed it to). The traffic will be redirected to the destination host on port 80 (or whatever you changed them to). Any requests from the client to the server are dumped on the console as well as into "proxy.txt". Unfortunately the responses from the server are not dumped.

    mkfifo fifo; while true ; do echo "Waiting for new event"; nc -l 8080 < fifo | tee -a proxy.txt /dev/stderr | nc 80 > fifo ; done
    ynedelchev · 2015-01-14 09:26:54 2
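The fifo round trip at the heart of this proxy can be sketched without any networking. A background writer plays the role of the client; tee -a logs the traffic while passing it along, exactly as it sits between the two nc processes above. Paths under /tmp are stand-ins.

```shell
# Create the fifo (removing any stale one first).
rm -f /tmp/demo.fifo /tmp/proxy-demo.txt
mkfifo /tmp/demo.fifo

# A background writer stands in for the client side of the proxy.
( echo "GET / HTTP/1.0" > /tmp/demo.fifo & )

# The reader logs the stream to a file while forwarding it, just as
# "tee -a proxy.txt /dev/stderr" does between the two nc processes.
cat /tmp/demo.fifo | tee -a /tmp/proxy-demo.txt
```

Opening a fifo blocks until both a reader and a writer are attached, which is why the proxy loop above can sit in `while true` waiting for each new connection.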

  • 0
    sudo tee /path/to/file < /dev/null
    Panovski · 2014-10-02 20:08:42 0
  • Copies to file.copy1 ... file.copyn

    tee < file.copy1 file.copy2 [file.copyn] > /dev/null
    dacoman · 2014-10-02 16:41:36 0
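Because tee reads its input once and writes every named file, this produces n identical copies in a single pass over the source. A quick check with illustrative /tmp paths:

```shell
# One read of src.txt, two simultaneous copies; > /dev/null discards
# the pass-through stream so nothing reaches the terminal.
echo "payload" > /tmp/src.txt
tee < /tmp/src.txt /tmp/src.copy1 /tmp/src.copy2 > /dev/null
cmp /tmp/src.txt /tmp/src.copy1 && cmp /tmp/src.txt /tmp/src.copy2 && echo identical
```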
  • This command lets you follow a trace on SDP (CS5.2) while the trace records are simultaneously stored in a file in "raw" format. Trace files in the native format are useful for filtering the records before the translation from '|' to '\n'. Example: grep -v OP_GET <raw-records>.trace | tr '|' '\n'

    tail -1f /var/opt/fds/logs/TraceEventLogFile.txt.0 | grep <msisdn> | tee <test-case-id>.trace | tr '|' '\n'
    neomefistox · 2014-08-21 19:29:07 0
  • To get the master coordinates: head -n 40 /home/db_bak.sql | awk '$0~/MASTER_LOG_FILE/'. On the slave server, run CHANGE MASTER with those coordinates, then START SLAVE.

    mysqldump -pyourpass --single-transaction --master-data=2 -q --flush-logs --databases db_for_doslave |tee /home/db_bak.sql |ssh "mysql"
    dragonwei · 2014-08-11 05:57:21 0
  • This will write a backup of files/folders to TAPE (LTO3-4 in my case). It could be changed to write to DVD/Blu-ray. Go to the directory where you want the output files: cd /bklogs. Set a name in bkname="Backup1" and the folders/files in tobk="/home /var/www". It will create a tar and write it to the tape drive on /dev/nst0. In the process it will: 1) generate a SHA-512 sum of the tar to $bkname.sha512, so you can validate that your data is intact; 2) generate a file list of the tar's contents, with file sizes, to $bkname.lst; 3) buffer the tar stream to prevent shoe-shining the tape (I use 4GB for LTO-3 (80MB/s) and 8GB for LTO-4 (120MB/s); 3TB USB3 disks sustain those speeds, else I use 3x2TB raidz); 4) show buffer in/out speed and used space in the buffer; 5) show a progress bar with a time estimate using pv.
    To eject the tape afterwards, append: ; sleep 75; mt-st -f /dev/nst0 rewoffl
    TODO: 1) When using old tapes, if the buffer fills and the drive slows down, the tape is worn and should be replaced rather than wiped and recycled for another backup. Logging where and when it slows down would give good information on the wear of the tape. I don't know how to get that information from the mbuffer output and trigger a "This tape slowed down X times at Y1gb, Y2gb, Y3gb down to Zmb/s for a total of 30sec. It would be wise to replace this tape next time you want to write to it." 2) Fix the file-size approximation. 3) Save all the output to $bkname.log, with progress updates as new lines (anyone have an idea?). 4) Support spanning across multiple tapes. 5) Replace the tar format with something else (dar?); looking at xar right now: XML metadata could hold per-file checksums, the compression algorithm (bzip2, xz, gzip), GnuPG encryption, thumbnails, video previews, image EXIF... But that's another project.
    TIP: 1) You can specify the width of pv's progress bar; if it is wider than the terminal, each refresh is written on a new line, so you can see whether the speed dropped during writing. 2) Remove the v from the tar argument cvf to avoid listing every file added to the archive. 3) You can get tarsum and add >(tarsum --checksum sha256 > $bkname_list.sha256) after the tee to generate checksums of individual files!

    bkname="test"; tobk="*" ; totalsize=$(du -csb $tobk | tail -1 | cut -f1) ; tar cvf - $tobk | tee >(sha512sum > $bkname.sha512) >(tar -tv > $bkname.lst) | mbuffer -m 4G -P 100% | pv -s $totalsize -w 100 | dd of=/dev/nst0 bs=256k
    johnr · 2014-07-22 15:47:50 1
  • You have an external USB drive or key. Apply this command (using the file path of anything on the device) and it will simulate unplugging that device. If you just want the port, type: echo $(sudo lshw -businfo | grep -B 1 -m 1 $(df "/path/to/file" | tail -1 | awk '{print $1}' | cut -c 6-8) | head -n 1 | awk '{print $1}' | cut -c 5- | tr ":" "-")

    echo $(sudo lshw -businfo | grep -B 1 -m 1 $(df "/path/to/file" | tail -1 | awk '{print $1}' | cut -c 6-8) | head -n 1 | awk '{print $1}' | cut -c 5- | tr ":" "-") | sudo tee /sys/bus/usb/drivers/usb/unbind
    tweet78 · 2014-04-06 12:06:29 9

  • 0
    command_line 2>&1 | tee -a output_file
    esplinter · 2014-04-05 13:02:30 0
  • Many circumstances call for creating a variable from a summary result while still printing the original pipe's output. Inserting "tee >(cat >&2)" lets the command output still be printed while the same output is processed into a variable.

    num_errs=`grep ERROR /var/log/syslog | tee >(cat >&2) | wc -l`
    accountholder · 2014-03-12 00:04:24 0
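The same pattern works with any pipeline. A self-contained sketch using synthetic log text instead of /var/log/syslog (the >(…) process substitution requires bash):

```shell
# grep's matches pass through tee, which copies them to stderr (so they
# remain visible on the terminal) while wc -l counts them into the variable.
log=$'ok line\nERROR disk full\nERROR net down'
num_errs=$(printf '%s\n' "$log" | grep ERROR | tee >(cat >&2) | wc -l)
echo "error count: $num_errs"
```

Here the two ERROR lines are echoed to stderr and $num_errs ends up holding 2.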
  • Securely stream a file from a remote server (and save it locally). Useful if you're impatient and want to watch a movie immediately while downloading it at the same time, without using extra bandwidth. This is an extension of snipertyler's idea. Note: this command uses an encrypted connection, unlike the original.

    ssh USER@HOST cat REMOTE_FILE.mp4 | tee LOCAL_FILE.mp4 | mplayer -
    flatcap · 2013-11-28 11:25:26 4
  • Requires a listening port on HOST, e.g. "cat movie.mp4 | nc -l PORT". Useful if you're impatient and want to watch a movie immediately while downloading it at the same time, without using extra bandwidth. You can't seek (it'll crash and kill the stream) but you can pause it.

    nc HOST PORT | tee movie.mp4 | mplayer -
    snipertyler · 2013-11-28 01:38:29 0

  • 0
    :w !sudo tee %
    sheeju · 2013-11-26 07:52:21 0
  • This overcomes slow I/O by reading once and forwarding the output to several processes (e.g. 3 in the given command). One could also invoke grep or other programs to work on the data as it is read.

    dd if=file | tee >(sha1sum) >(md5sum) >(sha256sum) >/dev/null
    dubbaluga · 2013-11-07 17:43:54 0
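The fan-out is easy to see with a small input: each >(…) process substitution receives its own copy of the stream, and the pass-through copy is discarded (requires bash). This sketch uses a short literal string instead of dd reading a file:

```shell
# tee writes one copy of stdin to each process substitution; >/dev/null
# discards the pass-through. Both hash lines land on stdout, though
# their order is not deterministic.
printf 'hello\n' | tee >(md5sum) >(sha1sum) > /dev/null
```

For a real file, `dd if=file` (or a simple redirect) replaces the printf, as in the entry above; the file is still read only once.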

  • 2
    ssh remoteUser@remoteHost "tail -f /var/log/scs/remoteLogName" | tee localLogName
    scanepa · 2013-09-29 20:32:51 0

  • 0
    :w !sudo tee %
    KodjoSuprem · 2013-09-25 08:56:21 0
  • Sets the @ A record for your domain hosted by Namecheap to your current internet-facing IP address, logs success or failure with syslog, and logs the returned data to /root/dnsupdate. Change the XXX's as appropriate. More info at:

    logger -tdnsupdate $(curl -s ''|tee -a /root/dnsupdate|perl -pe'/Count>(\d+)<\/Err/;$_=$1eq"0"?"Update Sucessful":"Update failed"'&&date>>/root/dnsupdate)
    MagisterQuis · 2013-08-11 16:27:39 0
  • Pipes the header row of ps to stderr, then greps the ps output for the command, removing the grep entry beforehand.

    psgrep() { ps aux | tee >(head -1>&2) | grep -v " grep $@" | grep "$@" -i --color=auto; }
    fnl · 2013-08-02 12:44:32 0
  • Replace the first part of the command with the appropriate timezone string, e.g. 'Europe/London', or 'Etc/UTC' for UTC. The appropriate string can be found in the tz database name list. This is useful when your server was installed by a data centre (managed hardware, VPS, etc.) and the timezone is usually not set to the one you prefer.

    echo 'Etc/UTC' | tee /etc/timezone; dpkg-reconfigure --frontend noninteractive tzdata
    donatello · 2013-04-22 06:14:55 0