This removes the video stream (and the file size that goes with it) and directly copies the audio.
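A command along these lines (the file names and the MP4/AAC source are assumptions, not part of the original entry) does exactly that:
ffmpeg -i input.mp4 -vn -acodec copy output.m4a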
Uses parallel processing. A reiteration of my earlier command: https://www.commandlinefu.com/commands/view/15246/convert-entire-music-library
Usage: lc Old_Directory New_Directory Old_Format New_Format
Example: lc ~/Music ~/Music_ogg mp3 ogg
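The function body isn't reproduced in this entry; a rough sketch of what such a wrapper could look like (this is not the original lc from the linked entry, and the flat output layout and fixed job count of 4 are assumptions):
lc() {
  local src=$1 dst=$2 from=$3 to=$4
  mkdir -p "$dst"
  # convert up to 4 files at a time; output names are flattened into $dst
  find "$src" -type f -name "*.$from" -print0 |
    xargs -0 -P 4 -I{} sh -c 'f=$1; ffmpeg -i "$f" "$2/$(basename "${f%.*}").$3"' _ {} "$dst" "$to"
}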
This converts all m4a files in a dir to flv. You can just swap the m4a bit to anything else ffmpeg supports though, and it'll work.
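A minimal sketch of that loop (output options are left at ffmpeg's defaults):
for f in *.m4a; do ffmpeg -i "$f" "${f%.m4a}.flv"; done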
-vn removes the video content; the copy option tells ffmpeg to copy the audio stream as-is into the output instead of re-encoding it.
Takes an MPEG video and converts it to a YouTube-compatible FLV file. The -r 25 option sets the frame rate for PAL; for NTSC use 29.97.
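The entry's command isn't shown here; something along these lines (file names assumed, and the -ar 22050 audio resampling is a common extra for FLV rather than part of the description) matches it:
ffmpeg -i input.mpg -r 25 -ar 22050 output.flv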
I have a large video file (500+ MB), so I can't upload it to Flickr. To reduce the size I split it into two files. The command shows the splitting for the first file, from 0 to 4 minutes: -ss is the start time and -t is the duration (how long you want the output file to be). Credit goes to philc: http://ubuntuforums.org/showthread.php?t=480343 NOTE: when I made the second half of the video, I got a *lot* of lines like this: frame= 0 fps= 0 q=0.0 size= 0kB time=10000000000.00 bitrate= 0.0kbit. Just be patient, it is working =)
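As a concrete sketch of the first half (file names assumed; the stream-copy options are an assumption to avoid re-encoding):
ffmpeg -i input.mpg -ss 00:00:00 -t 00:04:00 -vcodec copy -acodec copy part1.mpg
For the second half, move -ss to 00:04:00.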
With the -vn switch we make our intentions clear and ask FFmpeg not to bother itself with the video. Next we specify copy as the audio codec, which tells FFmpeg to pass the audio stream through unchanged rather than decoding and re-encoding it. To keep things simple, we'll just keep the sampling and bitrate values the same.
Faster thumbnail creation than '-itsoffset'
ffmpeg -itsoffset -4 -i test.avi -vcodec mjpeg -vframes 1 -an -f rawvideo -s 320x240 test.jpg
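The faster variant the title refers to presumably seeks before opening the input, so ffmpeg jumps to the offset instead of decoding everything up to it; a sketch of that form:
ffmpeg -ss 4 -i test.avi -vcodec mjpeg -vframes 1 -an -s 320x240 test.jpg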
Play with the framerate option '-r' to scale back bandwidth usage. The '-s' option is the captured screen area, not the rescaled size. If you want to rescale, add a second '-s' option after '-i :0'. Rescaling smaller will also decrease bandwidth.
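A sketch of such a screen capture under X11 (display, capture area, frame rate and output file are assumptions):
ffmpeg -f x11grab -r 15 -s 1024x768 -i :0 -vcodec libx264 output.mkv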
Alternative, imho better, using the concat protocol
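For example (file names assumed; the concat protocol only works on formats that can simply be byte-concatenated, such as MPEG-TS):
ffmpeg -i "concat:part1.ts|part2.ts" -vcodec copy -acodec copy output.ts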
If you're using the experimental Vorbis encoder (e.g. a Homebrew build of ffmpeg without libvorbis)
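In that case the built-in encoder has to be enabled explicitly; roughly (file names assumed):
ffmpeg -i input.wav -acodec vorbis -strict experimental output.ogg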
Change video orientation in metadata only
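One way to do that with a stream copy, so nothing is re-encoded (the 90-degree value and file names are assumptions, and how the rotate tag is honored depends on your ffmpeg version):
ffmpeg -i input.mp4 -c copy -metadata:s:v:0 rotate=90 output.mp4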
Convert those .mov files that your digital camera makes to .avi
Adjust the bitrate (-b) to get the appropriate file size. A larger bitrate produces a larger (higher-quality) .avi file, and a smaller bitrate produces a smaller (lower-quality) .avi file.
Requires ffmpeg (see man page for details)
(tested with canon camera MOV files)
Other examples:
ffmpeg -i input.mov -sameq -vcodec msmpeg4v2 -acodec pcm_u8 output.avi
ffmpeg -i input.mov -b 1024k -vcodec msmpeg4v2 -acodec pcm_u8 output.avi
This command takes a set of images (from a render, for example), and converts them into a format conforming to the Blu-ray spec, or at least the version on the Wikipedia page.
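Only the image-sequence input side is sketched below; the frame-name pattern is an assumption and the x264 settings actually required for Blu-ray compliance are omitted:
ffmpeg -f image2 -framerate 24 -i frame%04d.png -vcodec libx264 output.264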
sox (SOund eXchange) can capture the system audio, be it a browser playing YouTube or a hardware mic, and pipe it to ffmpeg, which encodes it to FLV and sends it over RTMP. Tested using a Red5 RTMP server.
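A rough sketch of that pipeline (capture device, audio codec and RTMP URL are all assumptions):
sox -d -t wav - | ffmpeg -i - -acodec libmp3lame -ar 44100 -f flv rtmp://localhost/live/stream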
Note: %~nI expands %I to a file name only (cf. http://technet.microsoft.com/en-us/library/bb490909.aspx)
First the find command finds all files in your current directory (.). This is piped to xargs so the next shell pipeline can run in parallel. The xargs -P argument specifies how many processes you want to run in parallel; you can set this higher than your core count as the duration reading is mainly IO bound. The -print0 and -0 arguments of find and xargs respectively are used to safely handle files with spaces or other special characters. A subshell is executed by xargs so that each file found by find gets its own shell pipeline. This pipeline extracts the duration and converts it to a format easily parsed by awk: ffmpeg reads the file and prints a lot of information about it, grep extracts the duration line, cut and sed cut out the time information, and tr converts the last . to a : to make it easier to split by awk. awk is a specialized programming language for use in shell scripts; here we use it to split the time into four variables and add them up.
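A reconstruction of the pipeline as described (the job count of 4 and the exact cut field depend on your ffmpeg's output, so treat it as a sketch):
find . -type f -print0 | xargs -0 -P 4 -I{} sh -c 'ffmpeg -i "$1" 2>&1 | grep Duration | cut -d " " -f 4 | sed s/,// | tr . :' _ {} | awk -F: '{ sec += $1*3600 + $2*60 + $3 + $4/100 } END { print sec " seconds" }'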
Requires bpm-tools: https://www.pogo.org.uk/~mark/bpm-tools/
The 30 means start extracting frames from 30 seconds into the video. The 3 means extract the next 3 seconds from that point. The fps can be adjusted based on your preferences. The 320 is the width of the gif; the height will be calculated automatically. input.mp4 is the video file, which can be any video file ffmpeg supports. The output.gif is the gif created.
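Put together, the command this describes looks roughly like this (the fps value of 10 is just an example):
ffmpeg -ss 30 -t 3 -i input.mp4 -vf "fps=10,scale=320:-1" output.gif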
Requirements: ffmpeg2theora (http://v2v.cc/~j/ffmpeg2theora/)