Use this command if you want to rename all subtitles so they have the same names as the mp4 files. NOTE: the order of "ls -1 *.mp4" must match the order of "ls -1 *.srt"; before running this command, check that the *.srt files will really line up with the movies with: paste -d:
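A minimal sketch of that check, presumably along these lines (assuming bash and GNU coreutils):
paste -d: <(ls -1 *.mp4) <(ls -1 *.srt)
# each output line shows one movie and the subtitle that would receive its name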
In the above example all files have 4 lines. In "file1" the consecutive lines are "num, 1, 2, 3", in "file2" "name, Jack, Jim, Frank", and in "file3" "scores, 1300, 1100, 980". This one-liner can save a considerable amount of time when you're trying to process large amounts of data. The "-d" option lets you set the series of characters used as separators between the data coming from the given files.
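For instance, with the three files described above and ";" as the separator (a minimal sketch):
paste -d';' file1 file2 file3
# num;name;scores
# 1;Jack;1300
# 2;Jim;1100
# 3;Frank;980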
Tired of copy-pasting to get opcodes out of objdump, huh? Get more @ http://gunslingerc0de.wordpress.com
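The original one-liner isn't reproduced here; a rough sketch of the idea (assuming GNU binutils and coreutils, with ./a.out as a placeholder binary):
objdump -d ./a.out | grep '^ ' | cut -f2 | paste -s -d ' ' - | sed 's/ //g; s/../\\x&/g'
# keep only the instruction lines, take the opcode column, join everything,
# and print the bytes as a single \xNN\xNN... string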
This one-liner will output the installed packages sorted by size in kilobytes.
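The actual one-liner isn't shown; on a Debian/Ubuntu system one way to get the same listing (an assumption, not necessarily the author's command) is:
dpkg-query -W -f='${Installed-Size}\t${Package}\n' | sort -n
# Installed-Size is reported in KiB, so the output is sorted smallest to largest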
Merge files, joining them line by line horizontally. Very useful when you have a lot of files where each line represents some info about an event, and you want to join them into a single file where each line has all the info about the same event. See the example below for a better understanding.
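A minimal sketch (the file names are placeholders):
paste timestamps.txt users.txt actions.txt > events.txt
# line N of events.txt now holds line N of every input file, separated by tabs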
Paste one file at a time instead of in parallel (this is what paste's -s, serial, option does).
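For example (a sketch):
paste -s file1 file2
# output line 1: all lines of file1 joined by tabs; output line 2: all lines of file2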
paste(){ curl -s -S --data-urlencode "txt=$($@)" "
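The URL is cut off above; a hedged sketch of the same idea, with a purely hypothetical paste-service endpoint:
paste() { curl -s -S --data-urlencode "txt=$("$@")" "https://paste.example.com/api/create"; }
# usage: paste ls -la
# runs the given command, URL-encodes its output as the "txt" field and POSTs it to the service
# note: the function shadows the real paste(1) binary for the current shell session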
Replace all instances of "A" with "B" in file "source", saving the result as file "destination". NOTE: if A/B is multi-byte, separate the bytes with spaces, like so: "s/20\ 0A/00/g".
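The original command isn't shown; one way to do this kind of byte-level replacement (a sketch, assuming xxd is available) is to round-trip through a hex dump:
xxd -p -c1 source | paste -sd' ' - | sed 's/20 0a/00/g' | xxd -r -p > destination
# dump one byte per line, join the bytes space-separated, edit the hex (xxd emits lowercase),
# then rebuild the binary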
Seems a useless command ...
Alternative 1 (grep support):
pacman -Ss python | paste - - | grep --color=always -e '/python' | less -R
Alternative 2 (eye-candy, no grep):
pacman --color=always -Ss "python" | paste - - | less -R
In ~/.bashrc:
pkg-grep() { pacman -Ss "$1" | paste - - | grep --color=always -e "${2:-$1}" | less -R ; }
pkg-search() { pacman --color=always -Ss "$1" | paste - - | less -R; }
(paste - - merges each package's name line and description line into a single line, so grep matches show both.)
** Replace the ... in the URLs with: www.census.gov/genealogy/www/data/1990surnames (it couldn't fit in 256 characters). Created on Ubuntu 9.10, but there's nothing out of the ordinary here; it should work anywhere with a little tweaking. 5163 is the number of unique first names you get when you combine the male and female first-name files from http://www.census.gov/genealogy/www/data/1990surnames/names_files.html
Calculate pi from the infinite series 4/1 - 4/3 + 4/5 - 4/7 + ... This expansion was formulated by Gottfried Leibniz: http://en.wikipedia.org/wiki/Leibniz_formula_for_pi I helped rubenmoran create the sum of a sequence of numbers and he replied with a command for the sequence 1 + 2 - 3 + 4 ... This set me thinking. Transcendental numbers! seq provides the odd numbers 1, 3, 5, ...; sed turns them into 4/1, 4/3, 4/5; paste inserts the - and + signs; bc -l does the calculation. Note: 100 million iterations takes quite a while, and with 1 billion I run out of memory.
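A hedged reconstruction of the pipeline with 100 terms (not necessarily the author's exact one-liner):
seq 1 2 199 | sed 's|^|4/|' | paste -sd '-+' - | bc -l
# builds 4/1-4/3+4/5-...-4/199 and evaluates it; prints roughly 3.13 (the series converges slowly)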
Computes the total size of the files in a directory. This value differs from "du -b" because it doesn't include the sizes of the directories themselves.
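The original one-liner isn't shown; one way to do it (a sketch, assuming GNU find and bc):
find . -maxdepth 1 -type f -printf '%s\n' | paste -sd+ - | bc
# print each file's size in bytes, join the numbers with '+', and let bc add them up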
This won't work if there are spaces.
A simpler way to join lines with the paste command than with sed.
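For example (a sketch), joining input lines with commas:
seq 5 | paste -sd, -
# 1,2,3,4,5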
Rename all files in the current directory using names taken from the text file 'zzz'.
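The original one-liner isn't shown; a rough sketch of the idea (it breaks if the existing filenames contain spaces):
paste <(ls -1) zzz | while read old new; do mv -- "$old" "$new"; done
# pairs each current filename with the corresponding line of zzz and renames it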