Print a git log (in reverse order) giving a reference relative to HEAD.
HEAD (the current revision) can also be referred to as HEAD~0.
The previous revision is HEAD~1, then HEAD~2, etc.
.
Add line numbers to the git output, starting at zero:
... | nl -v0 | ...
.
Insert the string 'HEAD~' before the number using sed:
... | sed 's/^ \+/&HEAD~/'
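.
The original command isn't shown above; assuming a one-line-per-commit log (git prints newest first) and a final tac to flip it into reverse (oldest-first) order, the assembled pipeline might look like:
git log --oneline | nl -v0 | sed 's/^ \+/&HEAD~/' | tac
HEAD~0 then labels the newest commit, HEAD~1 its parent, and so on.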
.
Thanks to bartonski for the idea :-)
Print out your age in days, in binary.
Today's my binary birthday, I'm 2^14 days old :-)
.
This command does bash arithmetic $(( )) on two dates:
Today: $(date +%s)
Date of birth: $(date +%s -d YYYY-MM-DD)
The dates are expressed as the number of seconds since the Unix epoch (Jan 1970), so we divide the difference by 86400 (seconds per day).
.
Finally we pipe "obase=2; DAYS-OLD" into bc to convert to binary.
(obase == output base)
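.
Putting those pieces together, a sketch (replace YYYY-MM-DD with your own date of birth):
echo "obase=2; $(( ( $(date +%s) - $(date +%s -d YYYY-MM-DD) ) / 86400 ))" | bc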
Often you run a command, but afterwards you're not quite sure what it did.
By adding this prefix/suffix around [COMMAND], you can list any files that were modified.
.
Take a nanosecond timestamp: YYYY-MM-DD HH:MM:SS.NNNNNNNNN
date "+%F %T.%N"
.
Find any files that have been modified since that timestamp:
find . -newermt "$D"
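.
Put together, the prefix/suffix looks something like this (tar -xf archive.tar is just a hypothetical stand-in for [COMMAND]):
D=$(date "+%F %T.%N"); tar -xf archive.tar; find . -newermt "$D"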
.
This command currently only searches below the current directory.
If you want to look elsewhere, change the find parameter, e.g.
find /var/log . -newermt "$D"
A wrapper around ssh to automatically provide logging and session handling.
This function runs ssh, which runs screen, which runs script.
.
The logs and the screen session are stored on the server.
This means you can leave a session running and re-attach to it later, or from another machine.
.
.
Requirements:
* Log sessions on a remote server
* Transparent - nothing extra to type
* No installation - nothing to copy to the server beforehand
.
Features:
* Function wrapper delegating to ssh
- so nothing to remember
- uses .ssh/config as expected
- passes your command-line options to ssh
* Self-contained: no scripts to install on the server
* Uses screen(1), so is:
- detachable
- re-attachable
- shareable
* Records session using script(1)
* Configurable log file location, which may contain variables or whitespace
L="$HOME" # local variable
L="\$HOME" # server variable
L="some space"
.
Limitations:
* Log dir/file may not contain '~' (which would require eval on the server)
.
.
The sessions are named by the local user connecting to the server.
Therefore if you detach and re-run the same command you will reconnect to your original session.
If you want to connect/share another's session, simply run:
USER=bob ssh root@server
.
The command above is stripped down to an absolute minimum.
A fully expanded and annotated version is available as a Gist (git pastebin):
https://gist.github.com/flatcap/3c42326abeb1197ee714
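.
The minimal command itself isn't reproduced in this description, but judging from the timing variant below it is presumably the same line without the --timing option:
ssh(){ L="\$HOME/logs/$(date +%F_%H:%M)-$USER";/usr/bin/ssh -t "$@" "mkdir -p \"${L%/*}\";screen -xRRS $USER script -f \"$L\"";}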
.
If you want to add timing info to script, change the command to:
ssh(){ L="\$HOME/logs/$(date +%F_%H:%M)-$USER";/usr/bin/ssh -t "$@" "mkdir -p \"${L%/*}\";screen -xRRS $USER script --timing=\"$L-timing\" -f \"$L\"";}
Imagine you've started a long-running process that involves piping data,
but you forgot to add the progress-bar option to a command.
e.g.
xz -dc bigdata.xz | complicated-processing-program > summary
.
This command uses lsof to see how much data xz has read from the file.
lsof -o0 -o -Fo FILENAME
Display offsets (-o), in decimal (-o0), in parseable form (-Fo)
This will output something like:
.
p12607
f3
o0t45187072
.
Process id (p), File Descriptor (f), Offset (o)
.
We stat the file to get its size
stat -c %s FILENAME
.
Then we plug the values into awk.
Split the line at the letter t: -Ft
Define a variable for the file's size: -v s=$(stat ...)
Only work on the offset line: /^o/
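.
Assembled into one line (the file name is a placeholder, and the final printf formatting is an assumption):
F=bigdata.xz; lsof -o0 -o -Fo "$F" | awk -Ft -v s=$(stat -c %s "$F") '/^o/ { printf "%d%%\n", 100*$2/s }'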
.
Note this command was tested using the Linux version of lsof.
Because it uses lsof's batch option (-F) it may be portable.
.
Thanks to @unhammer for the brilliant idea.
Take the header line from a comma-delimited CSV file and enumerate the fields.
.
First sed replaces all commas with newlines
s/,/\n/g
Then sed quits (q) after the first line.
Finally, nl numbers all the lines
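.
Put together (file.csv is a placeholder):
sed 's/,/\n/g;q' file.csv | nl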
List all open files of all processes.
.
find /proc/*/fd
Look through the /proc file descriptors
.
-xtype f
list only symlinks that point to regular files
.
-printf "%l\n"
print the symlink target
.
grep -P '^/(?!dev|proc|sys)'
ignore files from /dev, /proc or /sys
.
sort | uniq -c | sort -n
count the results
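.
The whole pipeline, assembled from the pieces above (run it as root to see other users' processes):
find /proc/*/fd -xtype f -printf "%l\n" | grep -P '^/(?!dev|proc|sys)' | sort | uniq -c | sort -n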
.
Many processes will create and immediately delete temporary files.
These can be filtered out by adding:
... | grep -v " (deleted)$" | ...
Randomly decide whether to run a command, or fail.
It's useful for testing purposes.
.
Usage: ran PERCENTAGE COMMAND [ARGS]
Note: In this version the percentage is required.
.
This is like @sesom42 and @snipertyler's commands but in a USABLE form.
.
e.g. In your complicated shell script, put "ran 99" before a crucial component.
Now, it will fail 1% of the time allowing you to test the failure code-path.
ran 99 my_complex_program arg1 arg2
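.
The function itself isn't reproduced above; a minimal sketch that behaves as described might be:
ran(){ [ $(( RANDOM % 100 )) -lt "$1" ] && "${@:2}"; }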
This loop will finish if a file hasn't changed in the last 10 seconds.
.
It checks the file's modification timestamp against the clock.
If 10 seconds have elapsed without any change to the file, then the loop ends.
.
This script will give a false positive if there's a 10 second delay between updates, e.g. due to network congestion.
.
How does it work?
'date +%s' gives the current time in seconds
'stat -c %Y' gives the file's last modification time in seconds
'$(( ))' is bash's way of doing maths
'[ X -lt 10 ]' tests the result is less than 10
otherwise sleep for 1 second and repeat
.
Note: Clever as this script is, inotify is smarter.
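.
A sketch of the loop described above (FILENAME is a placeholder):
while [ $(( $(date +%s) - $(stat -c %Y FILENAME) )) -lt 10 ]; do sleep 1; done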
Welcome to Jon H. (@fart), the new maintainer of CommandLineFu.
.
In the absence of a forum, I encourage people to welcome him, here, in the comments.
.
Also... What would you like to improve/change about the site?
Thanks to knoppix5 for the idea :-)
Print selected lines from a file or the output of a command.
Usage:
every NTH MAX [FILE]
Print every NTH line (from the first MAX lines) of FILE.
If FILE is omitted, stdin is used.
The command simply passes the input to a sed script:
sed -n -e "${2}q" -e "0~${1}p" ${3:-/dev/stdin}
print no output
sed -n
quit after this many lines (controlled by the second parameter)
-e "${2}q"
print every NTH line (controlled by the first parameter)
-e "0~${1}p"
take input from $3 (if it exists) otherwise use /dev/stdin
${3:-/dev/stdin}
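.
Wrapped as a function, a sketch might look like:
every(){ sed -n -e "${2}q" -e "0~${1}p" "${3:-/dev/stdin}"; }
e.g. every 5 100 file.txt prints every 5th of the first 100 lines of file.txt.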
Sometimes commands give you too much feedback.
Perhaps 1/100th might be enough. If so, every() is for you.
my_verbose_command | every 100
will print every 100th line of output.
Specifically, it will print lines 100, 200, 300, etc
If you use a negative argument, it will print the *first* line of each block:
my_verbose_command | every -100
It will print lines 1, 101, 201, 301, etc
The function wraps up this useful sed snippet:
... | sed -n '0~100p'
don't print anything by default
sed -n
starting at line 0, print every hundredth line (~100)
'0~100p'
There's also some bash magic to test if the number is negative:
we want character 0, length 1, of variable N.
${N:0:1}
If it *is* negative, strip off the first character: ${N:1} is character 1 onwards (the second actual character).
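.
The function itself isn't shown above; a sketch matching the described behaviour:
every(){ N=$1; if [ "${N:0:1}" = "-" ]; then sed -n "1~${N:1}p"; else sed -n "0~${N}p"; fi; }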
Function that searches for a process by its name:
* Shows the Header for reference
* Hides the process 'grep' from the list
* Case sensitive
The typical problem with using "ps | grep" is that the grep process shows up in the output.
The usual solution is to search for "[p]attern" instead of "pattern".
This function turns the parameter into just such a [p]attern.
${1:0:1} is the first character of $1
.
${1:1} is characters 2-end of $1
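.
The function itself isn't shown above; a sketch along these lines would do it (the name psg and the ps options are assumptions):
psg(){ ps aux | grep -e '^USER' -e "[${1:0:1}]${1:1}"; }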
Draw a telephone keyboard, using just a shell built-in command.
It's common to want to split up large files and the usual method is to use split(1).
If you have a 10GiB file, you'll need 10GiB of free space.
Then the OS has to read 10GiB and write 10GiB (usually on the same filesystem).
This takes AGES.
.
The command uses a set of loop block devices to create fake chunks, but without making any changes to the file.
This means the file splitting is nearly instantaneous.
The example creates a 1GiB file, then splits it into 16 x 64MiB chunks (/dev/loop0 .. loop15).
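.
The command itself isn't reproduced here, but the technique can be sketched with losetup's --offset and --sizelimit options (the file name and chunk size are assumptions; needs root):
F=bigfile.bin; CHUNK=$((64*1024*1024)); SIZE=$(stat -c %s "$F")
for ((i=0; i*CHUNK < SIZE; i++)); do
  sudo losetup --find --show --offset $((i*CHUNK)) --sizelimit $CHUNK "$F"
done
# detach them again later with: sudo losetup -d /dev/loopN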
.
Note: This isn't a drop-in replacement for using split. The results are block devices.
tar and zip won't do what you expect when given block devices.
.
These commands will work:
hexdump /dev/loop4
.
gzip -9 < /dev/loop6 > part6.gz
.
cat /dev/loop10 > /media/usb/part10.bin
Show the current load of the CPU as a percentage.
Read the load from /proc/loadavg and convert it using sed:
Strip everything after the first whitespace:
sed -e 's/ .*//'
Delete the decimal point:
sed -e 's/\.//'
Remove leading zeroes:
sed -e 's/^0*//'
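.
All together (a load of 0.52 prints as 52, a load of 1.05 as 105):
sed -e 's/ .*//' -e 's/\.//' -e 's/^0*//' /proc/loadavg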
Convert some SVG files into PNG using ImageMagick's convert command.
Run the conversions in parallel to save time.
This is safer than robinro's forkbomb approach :-)
.
xargs runs four processes at a time: -P4
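.
A sketch of the parallel conversion (the *.svg glob and the output naming are assumptions):
printf '%s\0' *.svg | xargs -0 -P4 -n1 sh -c 'convert "$1" "${1%.svg}.png"' _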
Convert a camelCase string into snake_case. To complement senorpedro's command.
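The command itself isn't shown here; a sketch using GNU sed's \l (lowercase next character) extension:
echo 'camelCaseString' | sed -r 's/([A-Z])/_\l\1/g'
prints camel_case_string.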
Convert some SVG files into PNG using ImageMagick's convert command.
Filter out lines of input that contain 72, or fewer, characters.
.
"sed -n": don't print lines by default
"/^.\{73,\}/": match lines containing 73 (or more) characters
"p": print them
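Put together:
... | sed -n '/^.\{73,\}/p'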
Filter out lines of input that contain 72, or fewer, characters.
This uses bash only.
${#i} is the number of characters in variable i.
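The bash-only command isn't shown here; a sketch using a read loop (only ${#i} comes from the description above, the loop itself is an assumption):
... | while IFS= read -r i; do [ ${#i} -gt 72 ] && printf '%s\n' "$i"; done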
You're running a program that reads LOTS of files and takes a long time.
But it doesn't tell you about its progress.
First, run a command in the background, e.g.
find /usr/share/doc -type f -exec cat {} + > output_file.txt &
Then run the watch command.
"watch -d" highlights the changes as they happen
In bash: $! is the process id (pid) of the last command run in the background.
You can change this to $(pidof my_command) to watch something in particular.
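The exact watch target isn't shown above; one possibility is to list the files the background process currently has open:
watch -d "ls -l /proc/$!/fd"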
Use find's internal stat to get the file sizes, then let the shell add up the numbers.
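A sketch of the idea (GNU find; the '+' separators and a trailing 0 give the shell a valid sum to evaluate):
echo $(( $(find . -type f -printf '%s+') 0 ))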
Securely stream a file from a remote server (and save it locally).
Useful if you're impatient and want to watch a movie immediately and download it at the same time, without using extra bandwidth.
.
This is an extension of snipertyler's idea.
Note: This command uses an encrypted connection, unlike the original.
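.
The command itself isn't reproduced here. A sketch of the idea (host, paths and the choice of mpv are assumptions): cat the file over ssh, save a copy with tee, and pipe it straight into a player.
ssh user@server 'cat /path/to/movie.mp4' | tee movie.mp4 | mpv -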
Quietly get a webpage from wikipedia: curl -s
.
By default, don't output anything: sed -n
Search for interesting lines: /<tr valign="top">/
With the matching lines: {}
Search and replace any HTML tags: s/<[^>]*>//g
Finally print the result: p
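.
Assembled (the URL is a placeholder for whichever page you're scraping):
curl -s 'https://en.wikipedia.org/wiki/SOME_PAGE' | sed -n '/<tr valign="top">/{s/<[^>]*>//g;p}'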