All commands (14,187)

What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

Share Your Commands


Check These Out

Convert CSV to JSON
Replace 'csv_file.csv' with your filename.
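The command itself is not shown in this listing; a minimal sketch using Python's standard csv and json modules (the 'csv_file.csv' name is the placeholder referred to above):

$ python3 -c 'import csv, json; print(json.dumps(list(csv.DictReader(open("csv_file.csv")))))'

This treats the first row as the header and emits one JSON object per data row.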

Count occurrences per minute in a log file
The cut should match the relevant timestamp part of the logfile; uniq -c then counts the number of occurrences within each time interval.
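As a sketch, assuming a log where the timestamp occupies the first 12 characters down to minute precision (e.g. syslog's 'Jan  1 12:34'):

$ cut -c1-12 /var/log/syslog | uniq -c

Adjust the character range (and the file name) so the cut ends exactly at the minute field of your log format; appending sort -nr shows the busiest minutes first.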

Filter IPs out of files
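No description was given for this one; assuming 'filter out' means 'extract', a sketch that pulls IPv4-looking strings from a file (the pattern will also match impossible addresses like 999.999.999.999):

$ grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' somefile.txt

'somefile.txt' is a placeholder for your own file.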

Convert seconds to [DD:][HH:]MM:SS
Converts any number of seconds into days, hours, minutes and seconds.

sec2dhms() {
  declare -i SS="$1"
  D=$(( SS / 86400 ))
  H=$(( SS % 86400 / 3600 ))
  M=$(( SS % 3600 / 60 ))
  S=$(( SS % 60 ))
  [ "$D" -gt 0 ] && echo -n "${D}:"
  [ "$H" -gt 0 ] && printf "%02g:" "$H"
  printf "%02g:%02g\n" "$M" "$S"
}
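For example (in bash):

$ sec2dhms 100000
1:03:46:40

i.e. one day, three hours, 46 minutes and 40 seconds. The hours field, like the days field, is only printed when it is non-zero, matching the [DD:][HH:] in the title.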

list the top 15 folders by decreasing size in MB
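Only the title survived here; a sketch of one way to get this with GNU du (run from the parent directory of interest):

$ du -sm */ | sort -nr | head -n 15

du -sm prints one summary line per directory in megabytes, sort -nr orders them largest first, and head keeps the top 15.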

Lock your KDE4 remotely (via regular KDE lock)
Forgot to lock your computer? Want to lock it over SSH or from your mobile phone, or use it for a scheduled lock? Tip: make an alias for this (e.g. "lock"). I found some how-tos for the ugly X11 lock, but this uses the regular KDE locking utility. Note that KDE 3 uses a utility with a different name (presumably taking the same --forcelock argument). Tested on Kubuntu 8.10. Stay tuned for remote unlock.
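The command itself is not shown in this listing. One way to trigger a KDE lock remotely is via the org.freedesktop.ScreenSaver D-Bus interface, roughly like this (user@host is a placeholder, and whether qdbus can find the session bus with only DISPLAY set depends on your D-Bus/KDE version, so treat this as an assumption rather than the original command):

$ ssh user@host 'DISPLAY=:0 qdbus org.freedesktop.ScreenSaver /ScreenSaver Lock'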

list files recursively by size
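Again only the title is shown; a sketch using GNU find's -printf to list every file with its size in bytes, smallest first:

$ find . -type f -printf '%s %p\n' | sort -n

Pipe through tail to see only the largest files.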

Count the total amount of hours of your music collection
First, the find command finds all files in your current directory (.). This is piped to xargs so the next shell pipeline can run in parallel; the xargs -P argument specifies how many processes to run in parallel, and you can set this higher than your core count because reading the duration is mainly IO bound. The -print0 and -0 arguments of find and xargs, respectively, are used to handle files with spaces or other special characters.

A subshell is executed by xargs to run a shell pipeline for each file that find produces. This pipeline extracts the duration and converts it to a format easily parsed by awk: ffmpeg reads the file and prints a lot of information about it, grep extracts the duration line, cut and sed cut out the time information, and tr converts the last '.' to a ':' to make it easier to split.

awk is a specialized programming language for use in shell scripts. Here it splits the time into 4 variables and adds them up.
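The command itself was not captured in this listing; a sketch that follows the description (ffmpeg's output format differs between versions, so the exact grep/cut/sed fields are assumptions):

$ find . -type f -print0 | xargs -0 -P 4 -I {} sh -c 'ffmpeg -i "{}" 2>&1 | grep Duration | cut -d, -f1 | sed "s/.*Duration: //" | tr "." ":"' | awk -F: '{ total += $1*3600 + $2*60 + $3 + $4/100 } END { printf "%.1f hours\n", total/3600 }'

The 2>&1 is needed because ffmpeg prints its file information on stderr, and -P 4 runs four ffmpeg processes in parallel.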

easily find megabyte eating files or directories
This is easy to type if you are looking for a few (hundred) "missing" megabytes (and don't mind the occasional K slipping in). A variation without false positives that also finds gigabytes (but, depending on your keyboard layout, more painful to type):

$ du -hs * | grep -P '^(\d|,)+(M|G)' | sort -n

(Note: you might want to replace the ',' according to your locale!) Don't forget that you can modify the globbing as needed, e.g. '.[^\.]* *' to include hidden files and directories (with bash). At its core this is similar to: http://www.commandlinefu.com/commands/view/706/show-sorted-list-of-files-with-sizes-more-than-1mb-in-the-current-dir
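The "easy to type" base command the description refers to is not shown above; judging by the text it is something along the lines of

$ du -hs * | grep M | sort -n

which matches any line containing an 'M' anywhere, so kilobyte-sized entries whose names happen to contain an M will slip in as the false positives mentioned above.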


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).