Search Results

Search found 612 results on 25 pages for 'tao ffmpeg'.

Page 12/25 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • How can I avoid a few seconds of blank video when using -vcodec copy?

    - by arlomedia
    I'm processing user-uploaded videos on a CentOS web server with ffmpeg. I need to convert each video to a standard size and format, then extract a 30-second sample clip from each video. I want to use the "-vcodec copy" flag in the extraction command to avoid encoding a second time. This command works for my initial conversion: ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4 And this sometimes works for the extraction: ffmpeg -i formatted.mp4 -vcodec copy -acodec copy -ss 0 -t 30 formatted_sample.mp4 However, when I run the extraction command on some videos, the extracted sample clip starts with several seconds of blank video. The audio starts right away but the video doesn't start for 3-6 seconds. To demonstrate the problem, I've uploaded two video clips and run the above commands on them. I created the first clip in Final Cut Express and encoded it with Handbrake before uploading to the web server: 1a) uploaded clip 1b) converted with first command 1c) extracted with second command, missing first six seconds By comparison, this second clip comes from Apple's website and does not show the problem: 2a) uploaded clip 2b) converted with first command 2c) extracted with second command, no problem Can anyone see what's different about the two source clips? And if so, is there anything I can do in my conversion command so that when the extraction command runs, the clip is set up to avoid the missing video? By the way, I initially had the problem with ffmpeg 0.6.1 installed from yum, but I upgraded to the latest git version and the problem remains.
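
    With -vcodec copy the clip can only begin on a keyframe, so when the requested start point falls between keyframes the audio starts immediately while the video stays blank until the next keyframe arrives. A hedged sketch (not from the question): force short GOPs during the initial conversion so a later stream-copy cut always has a keyframe near its start, and seek before the input.

      # A sketch, assuming a libx264 build that honours -g/-keyint_min:
      # keep GOPs short in the first pass so any later cut lands near a keyframe.
      ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -g 15 -keyint_min 15 \
             -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4

      # Putting -ss before -i makes ffmpeg seek by keyframe before demuxing.
      ffmpeg -ss 0 -t 30 -i formatted.mp4 -vcodec copy -acodec copy formatted_sample.mp4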

    Read the article

  • DTS to AC3 conversion for LG TV using mediatomb DLNA server

    - by prion crawler
    I want to convert an MKV video file containing DTS audio to a stream with AC3 audio. I want to pass this resulting stream to mediatomb's transcoding feature. Mediatomb will transfer the stream via DLNA to an LG TV, which does not support DTS audio. I have tried the VLC command below but the TV does not recognize the stream, and playing the destination stream on a PC does not produce sound. vlc -vvv -I dummy INPUT.file --sout \ '#transcode{acodec=ac3,ab=256k,channels=2,threads=4} \ :std{mux=ts,access=file,dst=DEST.file}' The following ffmpeg command gives a stream that plays on the TV with sound, but the ffmpeg process gets killed (with signal 15) within 10-15 seconds, and then the TV restarts the playback from the beginning. This goes on in a loop. ffmpeg -i INPUT.file -acodec ac3 -ab 384k -vcodec copy \ -vbsf h264_mp4toannexb -f mpegts -y DEST.file I want a working DLNA server that transcodes DTS to AC3; any help is appreciated.
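
    The working ffmpeg command can be wrapped in a small script and registered as a MediaTomb external transcoding profile. A sketch only, assuming MediaTomb substitutes its %in and %out placeholders as the script's first and second arguments:

      #!/bin/sh
      # Hypothetical wrapper for MediaTomb's external transcoder; $1 is the
      # source file (%in) and $2 the FIFO MediaTomb reads from (%out).
      exec ffmpeg -i "$1" -acodec ac3 -ab 384k -vcodec copy \
           -vbsf h264_mp4toannexb -f mpegts -y "$2"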

    Read the article

  • How to install two packages that write the same file

    - by Joel E Salas
    Sirs, a conundrum. I have two packages that each create /usr/bin/ffprobe. One of them is ffmpeg from the Deb Multimedia repository, while the other is ffmbc 0.7-rc5 built from source. The hand-rolled one is business-critical, and we used to just install it from source wherever it was necessary. I can only assume it would clobber the ffmpeg file, and there were never any ill effects. In theory, it should be acceptable for our ffmbc package to overwrite the file from the ffmpeg package. Is there any easy way to reconcile this?
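
    Debian's usual answer to two packages owning the same path is a diversion: dpkg renames the other package's copy out of the way. A sketch, assuming the from-source build is (or will be) wrapped in a package named ffmbc:

      # Divert ffmpeg's ffprobe to /usr/bin/ffprobe.ffmpeg; ffmbc's copy then
      # owns /usr/bin/ffprobe without the two packages fighting over the file.
      dpkg-divert --package ffmbc --divert /usr/bin/ffprobe.ffmpeg --rename /usr/bin/ffprobe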

    Read the article

  • Out of sync audio video using mencoder

    - by 1ch1g0
    Hi, I converted an MKV (Matroska) file to AVI using ffmpeg: ffmpeg -i input.mkv -f mp4 -vcodec mpeg4 -sameq -r 29.97 -b 512kb -acodec ac3 -ab 128kb -vol 512 output.avi The output file plays fine using mplayer. After that, I used mencoder to insert subtitles: mencoder output.avi -o new.avi -oac pcm -ovc lavc -subfont-text-scale 3 -sub subtitle.srt However, when I play back the video "new.avi", the video and audio are out of sync. What options can I pass to mencoder to sync the A/V? I have also tried ffmpeg's -newsubtitle option but can't get it to work. Any examples of usage of -newsubtitle would be greatly appreciated. Thanks
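
    mencoder can shift the audio relative to the video with -delay (in seconds, negative values allowed), which at least lets the offset be corrected by hand once it has been measured. A sketch based on the command above; the 0.5 is a placeholder to be tuned by ear:

      mencoder output.avi -o new.avi -oac pcm -ovc lavc -delay 0.5 \
               -subfont-text-scale 3 -sub subtitle.srt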

    Read the article

  • How to convert a mpeg4-aac video to mpeg-ts using MEncoder?

    - by Mahendra Liya
    Hello, I know this can be done using FFMpeg and I have done it. The only problem was that FFMpeg actually tries to "encode" the file while converting to mpeg-ts, which I don't want; i.e., I just want to change the container format to mpeg-ts without encoding the media. Is this type of conversion possible with FFMpeg? (I know about the "copy" option, but it works with H264-aac and not with mpeg4-aac.) Is it possible to change mpeg4-aac to the mpeg-ts container format with MEncoder? I wish to know the personal opinion/advice of those who have already worked on this. Thanks in advance.
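
    A remux (no re-encoding) is worth trying with both tools; whether the target device then accepts MPEG-4 Part 2 video inside a transport stream is a separate question. A sketch with placeholder file names:

      # ffmpeg: copy both streams into an MPEG-TS container.
      ffmpeg -i input.mp4 -vcodec copy -acodec copy -f mpegts output.ts

      # MEncoder: the same idea through the lavf muxer (assuming a build with lavf support).
      mencoder input.mp4 -ovc copy -oac copy -of lavf -lavfopts format=mpegts -o output.ts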

    Read the article

  • FFSERVER - streaming an ASF video as Webm output

    - by Emmanuel Brunet
    I'm trying to stream an IP webcam ASF live stream to an ffserver to output a webm video format. The server starts successfully but the ffmpeg command used to feed the ffserver fails and generates a core dump. Environment Debian 7.5 ffmpeg 2.2 Input stream $ ffprobe http://account:password@webcam/videostream.asf Input #0, asf, from 'http://admin:alpha1237@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, 1 channels, s16p, 32 kb/s ffserver configuration my ffserver configuration is: Port 8091 RTSPPort 554 BindAddress 192.168.1.62 MaxHTTPConnections 1000 MaxClients 100 MaxBandwidth 1000 CustomLog - <Feed webcam.ffm> File /tmp/webcam.ffm FileMaxSize 500M ACL allow localhost ACL allow 192.168.0.0 192.168.255.255 </Feed> <Stream webcam.webm> # Output stream URL definition Feed webcam.ffm # Feed from which to receive video Format webm # Audio settings AudioCodec vorbis AudioBitRate 64 # Audio bitrate # Video settings VideoCodec libvpx VideoSize 640x480 # Video resolution VideoFrameRate 25 # Video FPS AVOptionVideo flags +global_header # Parameters passed to encoder # (same as ffmpeg command-line parameters) AVOptionVideo cpu-used 0 AVOptionVideo qmin 10 AVOptionVideo qmax 42 AVOptionVideo quality good AVOptionAudio flags +global_header PreRoll 15 StartSendOnKey # VideoBitRate 32 # Video bitrate </Stream> <Stream status.html> Format status # Only allow local people to get the status ACL allow localhost ACL allow 192.168.0.0 192.168.255.255 </Stream> ffmpeg feed I run the following command, which fails: $ ffmpeg -i http://account:password@webcam/videostream.asf http://192.168.1.62:8091/webcam.ffm http://192.168.1.62:8091/webcam.ffm Input #0, asf, from 'http://account:password@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s [swscaler @ 0x36a80c0] deprecated pixel format used, make sure you did set range correctly Segmentation fault I tried $ ffmpeg -i http://account:password@webcam/videostream.asf -pix_fmt yuv420p http://192.168.1.62:8091/webcam.ffm But it raises the same error. Thanks for your help. Edit: For easier testing (I thought), I tried to publish the whole ASF stream as is, meaning connecting the ASF webcam output stream to an ffserver stream that outputs ASF format too (essentially mirroring the input encoding), so I changed the ffserver configuration to ... <Stream webcam.asf> Feed webcam.ffm Format asf VideoFrameRate 25 VideoSize 640X480 VideoBitRate 256 VideoBufferSize 1000 VideoGopSize 30 AudioBitRate 32 StartSendOnKey </Stream> ...
And the output is now : Input #0, asf, from 'http://admin:alpha1237@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 1k tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s [swscaler @ 0x3d620c0] deprecated pixel format used, make sure you did set range correctly Output #0, ffm, to 'http://192.168.1.62:8091/webcam.ffm': Metadata: creation_time : now encoder : Lavf55.40.100 Stream #0:0: Audio: wmav2, 22050 Hz, mono, fltp, 32 kb/s Metadata: encoder : Lavc55.64.100 wmav2 Stream #0:1: Video: msmpeg4v3 (msmpeg4), yuv420p, 640x480, q=2-31, 256 kb/s, 1k fps, 1000k tbn, 1k tbc Metadata: Stream mapping: Stream #0:1 -> #0:0 (adpcm_ima_wav -> wmav2) Stream #0:0 -> #0:1 (mjpeg -> msmpeg4) Press [q] to stop, [?] for help Segmentation fault I can't even forward the stream.
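
    The crash happens while ffmpeg converts the camera's yuvj422p MJPEG for the feed, so it can help to first confirm that the same transcode works outside ffserver. A diagnostic sketch (not from the question), writing a short local WebM file with the same codecs the <Stream> section asks for; if this also segfaults, the problem is in the ffmpeg 2.2 build rather than in the ffm/ffserver leg:

      ffmpeg -i http://account:password@webcam/videostream.asf \
             -c:v libvpx -pix_fmt yuv420p -b:v 1M -c:a libvorbis -b:a 64k \
             -t 10 test.webm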

    Read the article

  • How to use infinite live streams with the JAVE library? (Java, ffmpeg)

    - by Ole Jak
    So I want to use JAVE to save an mp3 radio stream to my file system. I have this code for saving from a file, but what should I do to save a stream (e.g. stop it on a timer)? File source = new File("source.wav"); File target = new File("target.mp3"); AudioAttributes audio = new AudioAttributes(); audio.setCodec("libmp3lame"); audio.setBitRate(new Integer(128000)); audio.setChannels(new Integer(2)); audio.setSamplingRate(new Integer(44100)); EncodingAttributes attrs = new EncodingAttributes(); attrs.setFormat("mp3"); attrs.setAudioAttributes(audio); Encoder encoder = new Encoder(); encoder.encode(source, target, attrs);
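
    JAVE shells out to a bundled ffmpeg binary, so one low-tech option is to bypass the Encoder API for the live stream and run the equivalent command with a duration cap. A sketch with a placeholder stream URL; -t limits the recording, here to 60 seconds:

      ffmpeg -i http://example.com/radio.mp3 -t 60 -acodec libmp3lame -ab 128k target.mp3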

    Read the article

  • Automating video generation by adding an intro and a trailing video to the main video

    - by DevDewboy
    I have a video project I am trying to compile. Here is the overview: I have many videos which are 5-minute training sessions - the Main video. The Intro Video will be a standard 5-second video that will have the Video title and Author. This will be concatenated to the main video. The Trailing Video will pretty much be a stock video that will be concatenated to the main video and have all the legalese etc. The Intro Vid will smoothly fade into the main vid, and when you get to the end of the main video it will fade into the Trailing video nicely. The product is a new video with an Intro, Main & Trailer video all in one! The concept is really that simple. In fact I found an example of a person who has solved this and is doing exactly what I want. This solution is a Bash script that takes a config file that has the title, author, etc. and generates the Intro, the Ending and creates the resulting video with them concatenated. I am using Ubuntu 12.04 Server. I have been trying to take this as a sample and just run it, with no luck, because of incompatibility errors. I even attempted to convert it using .MP4 containers or .MKV. I am running into error after error or incompatibility issues. I went as far as changing out the ffmpeg binary using the 25 Oct 2013 version from http://ffmpeg.gusari.org/static/64bit/ which I like as I don't have to worry about rebuilding the binary. Almost successful, but again I have some error which I cannot solve. I know part of the problem is the fact that video production, codecs, formats is a completely new field for me so I am attempting to work through this new territory. Perhaps an expert here has something similar that I can use as a guideline that uses MP4 or h.264 format. Or take the solution above from the URL and make it work with a more up-to-date version of ffmpeg. I will include the script and its parameter file and the output (abbreviated because of length limitations) below. Basically as the script stands right now, when run I get the error [matroska,webm @ 0x27bbee0] Read error. This error is returned from the 'reasembleVideo' routine by the first ffmpeg command. The following is the Parameter File: #!/bin/bash INPUTFILE="ssh_main.mp4" LOGO="logo.png" LOGOLENGTH="1" SPEAKER="Jason" TITLE="Basic SSH Video" DATE="October 28, 2013" SCENESTART="00:00:01" SCENEDURATION="00:00:09" OUTPUTFILE="ssh_basic_1" } The following is the script I am running. The ${OUTPUTFILE} being used is a small 2-minute video I created in screen-o-matic in MP4 format. Script on PasteBin (too long for Super User post)
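
    For the "join three clips" part on its own, ffmpeg's concat demuxer (present in the Oct 2013 static builds) can splice same-codec MP4 parts without re-encoding; the cross-fades would still need a separate filter step. A sketch with placeholder file names; all three parts must share codec, resolution and time base:

      cat > parts.txt <<EOF
      file 'intro.mp4'
      file 'main.mp4'
      file 'trailer.mp4'
      EOF
      # -f concat reads the list; -c copy avoids re-encoding the parts.
      ffmpeg -f concat -i parts.txt -c copy joined.mp4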

    Read the article

  • Watermarking FLV videos on Ubuntu

    - by sharjeel
    I want to add watermarking support for my FLV videos. Previously I did that using FFMPEG's vhook option, but due to some problems I had to upgrade to the latest svn revision, and this version of FFMPEG doesn't have vhook support anymore. I have tried mencoder with bmovl, but mencoder seems to be pretty difficult to work with. Is there any other feasible option for watermarking FLV videos?
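
    vhook's replacement is libavfilter; in svn-era builds the usual watermark recipe was the movie source feeding the overlay filter (newer builds can do the same with a second -i logo.png input and -filter_complex overlay). A sketch with placeholder file names, placing the logo 10 px from the top-left corner:

      ffmpeg -i input.flv -vf "movie=logo.png [wm]; [in][wm] overlay=10:10 [out]" \
             -acodec copy output.flv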

    Read the article

  • A seekable one-frame FLV video (with audio)?

    - by George Stephanos
    Is it possible to generate an FLV out of an MP3 and a JPG, without uselessly looping the image, and still be able to seek the audio? This command generates a non-seekable video: ffmpeg -y -i audio.mp3 -i image.jpg -r 1 -acodec copy video.flv and this one generates a seekable one, but uselessly loops the image, occupying both space and time: ffmpeg -y -loop_input -i audio.mp3 -i image.jpg -r 1 -acodec copy video.flv -shortest
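
    FLV seeking in Flash players usually depends on an onMetaData tag carrying a duration/keyframe index rather than on how many video frames exist, so injecting metadata into the single-frame file may be enough. A sketch that assumes flvtool2 is available, applied to the first, non-looped variant:

      ffmpeg -y -i audio.mp3 -i image.jpg -r 1 -acodec copy video.flv
      flvtool2 -U video.flv   # recompute and inject duration/keyframe metadata in place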

    Read the article

  • How to make a queue in PHP with MySQL

    - by robert
    Hi, in my script I run an exec() call to make a movie file with ffmpeg. The problem is that ffmpeg can only run one instance at a time on the server. If two people are online and the first one is already running ffmpeg, I want the second to wait until the first process ends. How do I code this? Thank you.
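
    Short of a real MySQL-backed job queue, the runs can be serialized with an exclusive file lock so the second exec() simply blocks until the first encode finishes. A sketch of a wrapper the PHP exec() could call instead of ffmpeg directly; $INPUT and $OUTPUT are placeholders:

      #!/bin/sh
      # flock holds an exclusive lock on /tmp/ffmpeg.lock for the whole command,
      # so concurrent callers queue up behind it.
      flock /tmp/ffmpeg.lock ffmpeg -i "$INPUT" "$OUTPUT"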

    Read the article

  • Mix Audio tracks with offset in SOX

    - by Laramie
    From ASP.Net, I am using FFMPEG to convert flv files on a Flash Media Server to wavs that I need to mix into a single MP3 file. I originally attempted this entirely with FFMPEG but eventually gave up on the mixing step because I don't believe it is possible to combine audio-only tracks into a single result file. I would love to be wrong. I am now using FFMPEG to access the FLV files and extract the audio track to wav so that SOX can mix them. The problem is that I must offset one of the audio tracks by a few seconds so that they are synchronized. Each file is one half of a conversation between a student and a teacher. For example, teacher.wav might need to begin 3.3 seconds after student.wav. I can only figure out how to mix the files with SOX where both tracks begin at the same time. My best attempt at this point is: ffmpeg -y -i rtmp://server/appName/instance/student.flv -ac 1 student.wav ffmpeg -y -i rtmp://server/appName/instance/teacher.flv -ac 1 teacher.wav sox -m student.wav teacher.wav combined.mp3 splice 3.3 These tools (FFmpeg/SoX) were chosen based on my best research, but are not required. Any working solution would allow an ASP.Net service to input the two FMS flvs and create a combined MP3 using open-source or free tools.
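
    splice is a crossfade-style effect, so the offset is better expressed by padding the later track with leading silence before mixing. A sketch built on the commands above; pad's argument is the seconds of silence added at the start of the file:

      sox teacher.wav teacher_shifted.wav pad 3.3    # prepend 3.3 s of silence
      sox -m student.wav teacher_shifted.wav combined.mp3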

    Read the article

  • Convert video files for Maemo (Nokia N900) using ffmpeg/mencoder

    - by Vikrant Chaudhary
    I'm a newbie in video encoding so I'm looking for some expert advice. I'm looking to transcode media files with ffmpeg or mencoder (or something else) on Ubuntu for my Nokia N900 running Maemo. I'd prefer mencoder, because of ffmpeg's crazy dependencies. Video output should be AVC/H.264 (probably hardware accelerated on the device). Audio output in AAC (I would have preferred Vorbis, but it isn't supported natively and requires .mkv, which is also not completely supported). Output video should retain the original aspect ratio. The screen resolution is 800x480 (5:3). (An explanation of why this value is chosen would be really appreciated.) Thanks.
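
    A hedged ffmpeg starting point (assuming a build with libx264 and libfaac): baseline-profile H.264 is the safest bet for the N900's hardware decoder, and scaling the width to the 800-pixel screen while letting the height follow keeps the aspect ratio without the device having to downscale at playback:

      ffmpeg -i input.avi -c:v libx264 -profile:v baseline -b:v 1000k \
             -vf "scale=800:-2" -c:a libfaac -b:a 128k -ac 2 output.mp4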

    Read the article

  • Encoding with FFmpeg using a FIFO

    - by Ashot Martirosyan
    Hello everyone. I'm trying to convert a FLAC audio file to an AAC file using the command line. So I wrote this: ffmpeg -i input.flac temp.wav faac -q 120 -o output.m4a temp.wav It's working fine. Now I want to do the same using a FIFO, so I'm writing this: mkfifo temp.wav ffmpeg -i input.flac temp.wav & faac -q 120 -o output.m4a temp.wav And it's freezing. Could you tell me what I'm doing wrong? Thanks a lot, and sorry for my English.
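
    One likely cause of the hang: after mkfifo, temp.wav already exists, so the backgrounded ffmpeg stops at the "File 'temp.wav' already exists. Overwrite?" prompt with nobody to answer it; -y skips the prompt. A sketch, plus a FIFO-free variant that pipes through stdout (assuming the faac build accepts "-" as stdin):

      mkfifo temp.wav
      ffmpeg -y -i input.flac -f wav temp.wav &
      faac -q 120 -o output.m4a temp.wav

      # or, without a FIFO:
      ffmpeg -i input.flac -f wav - | faac -q 120 -o output.m4a -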

    Read the article

  • ffserver-2.2 - streaming an ASF video as Webm output with ffserver on Debian 7.5

    - by Emmanuel Brunet
    I'm trying to stream an IP webcam ASF live stream to an ffserver to output a webm video format. The server starts successfully but the ffmpeg command used to feed the ffserver fails and generates a core dump. Input stream $ ffprobe http://account:password@webcam/videostream.asf Input #0, asf, from 'http://account:password@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, 1 channels, s16p, 32 kb/s ffserver configuration my ffserver configuration is: Port 8091 RTSPPort 554 BindAddress 192.168.1.62 MaxHTTPConnections 1000 MaxClients 100 MaxBandwidth 1000 CustomLog - <Feed webcam.ffm> File /tmp/webcam.ffm FileMaxSize 500M ACL allow localhost ACL allow 192.168.0.0 192.168.255.255 </Feed> <Stream webcam.webm> # Output stream URL definition Feed webcam.ffm # Feed from which to receive video Format webm # Audio settings AudioCodec vorbis AudioBitRate 64 # Audio bitrate # Video settings VideoCodec libvpx VideoSize 640x480 # Video resolution VideoFrameRate 25 # Video FPS AVOptionVideo flags +global_header # Parameters passed to encoder # (same as ffmpeg command-line parameters) AVOptionVideo cpu-used 0 AVOptionVideo qmin 10 AVOptionVideo qmax 42 AVOptionVideo quality good AVOptionAudio flags +global_header PreRoll 15 StartSendOnKey # VideoBitRate 32 # Video bitrate </Stream> <Stream status.html> Format status # Only allow local people to get the status ACL allow localhost ACL allow 192.168.0.0 192.168.255.255 </Stream> ffmpeg feed I run the following command, which fails: $ ffmpeg -i http://account:password@webcam/videostream.asf http://ffserver_ip:port/webcam.ffm http://192.168.1.62:8091/webcam.ffm Input #0, asf, from 'http://account:password@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s [swscaler @ 0x36a80c0] deprecated pixel format used, make sure you did set range correctly Segmentation fault I tried $ ffmpeg -i http://account:password@webcam/videostream.asf -pix_fmt yuv420p http://ffserver_ip:port/webcam.ffm But it raises the same error. Thanks for your help. Edit: For easier testing (I thought), I tried to publish the whole ASF stream as is, meaning connecting the ASF webcam output stream to an ffserver stream that outputs ASF format too (essentially mirroring the input encoding), so I changed the ffserver configuration to ... <Stream webcam.asf> Feed webcam.ffm Format asf VideoFrameRate 25 VideoSize 640X480 VideoBitRate 256 VideoBufferSize 1000 VideoGopSize 30 AudioBitRate 32 StartSendOnKey </Stream> ...
And the output is now : Input #0, asf, from 'http://admin:alpha1237@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 1k tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s [swscaler @ 0x3d620c0] deprecated pixel format used, make sure you did set range correctly Output #0, ffm, to 'http://192.168.1.62:8091/webcam.ffm': Metadata: creation_time : now encoder : Lavf55.40.100 Stream #0:0: Audio: wmav2, 22050 Hz, mono, fltp, 32 kb/s Metadata: encoder : Lavc55.64.100 wmav2 Stream #0:1: Video: msmpeg4v3 (msmpeg4), yuv420p, 640x480, q=2-31, 256 kb/s, 1k fps, 1000k tbn, 1k tbc Metadata: Stream mapping: Stream #0:1 -> #0:0 (adpcm_ima_wav -> wmav2) Stream #0:0 -> #0:1 (mjpeg -> msmpeg4) Press [q] to stop, [?] for help Segmentation fault I can't even forward the stream. Thanks for your help again.
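
    Since the failing process segfaults rather than printing an error, a debug log (or a core backtrace) usually says more than the normal console output; -report makes ffmpeg write a ffmpeg-*.log file in the current directory. A diagnostic sketch using the same feed command as above:

      ffmpeg -report -loglevel debug \
             -i http://account:password@webcam/videostream.asf \
             http://ffserver_ip:port/webcam.ffm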

    Read the article

  • How to stream H264 Video from camera over FTP?

    - by Jay
    I bought a h264 security camera system last year and set it up to ftp video to my computer. I was able to get the video to play (even though it played a little fast) on Ubuntu 11.04 using mplayer. A few months ago, I did a fresh install of 12.04 and I cannot seem to get the video to play with mplayer, smplayer or VLC. I have the restricted formats video packages installed and when playing with any of the players, all I get is a gray video. When calling mplayer from the command line to play the video with no options, I get a lot of these errors: [h264 @ 0x7f278c61f280]concealing 1320 DC, 1320 AC, 1320 MV errors No pts value from demuxer to use for frame! pts after filters MISSING I'm not a video expert and have been coming up with a lot of dead ends when Googling for this. Could someone offer some advice about how to play these videos? Here is the output of mediainfo for a sample file. mediainfo -f sec-cam01-m-20120921-212454.h264 General Count : 278 Count of stream of this kind : 1 Kind of stream : General Kind of stream : General Stream identifier : 0 Count of video streams : 1 Video_Format_List : AVC Video_Format_WithHint_List : AVC Codecs Video : AVC Complete name : sec-cam01-m-20120921-212454.h264 File name : sec-cam01-m-20120921-212454 File extension : h264 Format : AVC Format : AVC Format/Info : Advanced Video Codec Format/Url : http://developers.videolan.org/x264.html Format/Extensions usually used : avc h264 Commercial name : AVC Internet media type : video/H264 Codec : AVC Codec : AVC Codec/Info : Advanced Video Codec Codec/Url : http://developers.videolan.org/x264.html Codec/Extensions usually used : avc h264 File size : 1097315 File size : 1.05 MiB File size : 1 MiB File size : 1.0 MiB File size : 1.05 MiB File size : 1.046 MiB File last modification date : UTC 2012-09-22 01:27:12 File last modification date (local) : 2012-09-21 21:27:12 Video Count : 205 Count of stream of this kind : 1 Kind of stream : Video Kind of stream : Video Stream identifier : 0 Format : AVC Format/Info : Advanced Video Codec Format/Url : http://developers.videolan.org/x264.html Commercial name : AVC Format profile : [email protected] Format settings : 1 Ref Frames Format settings, CABAC : No Format settings, CABAC : No Format settings, ReFrames : 1 Format settings, ReFrames : 1 frame Format settings, GOP : M=1, N=3 Internet media type : video/H264 Codec : AVC Codec : AVC Codec/Family : AVC Codec/Info : Advanced Video Codec Codec/Url : http://developers.videolan.org/x264.html Codec profile : [email protected] Codec settings : 1 Ref Frames Codec settings, CABAC : No Codec_Settings_RefFrames : 1 Width : 704 Width : 704 pixels Height : 480 Height : 480 pixels Pixel aspect ratio : 1.000 Display aspect ratio : 1.467 Display aspect ratio : 3:2 Standard : NTSC Resolution : 8 Resolution : 8 bits Colorimetry : 4:2:0 Color space : YUV Chroma subsampling : 4:2:0 Bit depth : 8 Bit depth : 8 bits Scan type : Progressive Scan type : Progressive Interlacement : PPF Interlacement : Progressive Edit: Here is a sample video using the same encoding: https://www.dropbox.com/s/l5acwzy8rtqn9xe/sec-cam08-m-20121118-105815.h264 (not the same video as mediainfo output)
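
    These cameras typically FTP a raw H.264 elementary stream with no container and no timing information, which is also why playback ran fast on 11.04. Wrapping the stream in a container with an explicit input frame rate often makes players behave; a sketch using Ubuntu 12.04's avconv and assuming the camera records at 30 fps (adjust -r to the real rate):

      avconv -r 30 -i sec-cam01-m-20120921-212454.h264 -c:v copy wrapped.mp4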

    Read the article

  • How to convert AVI (Xvid) to MKV or MP4 (H.264)

    - by bcsteeve
    Very noob when it comes to video. I'm trying to make sense of what I'm finding via Google... but it's mostly Greek to me. I have a bunch of Avi files that won't play in my WD TV Play box. Mediainfo tells me they are xvid. Specs for the box show that should be fine... but digging through forums says it's hit-and-miss. So I'd like to try converting them to h264 encoded MKV or mp4 files. I gather avconv is the tool, but reading the manual just has me really really confused. I tried the very basic example of: avconv -i file.avi -c copy file.mp4 It took less than 4 seconds. And it worked... sort of. It "played" in that something came up on the screen... but there was horrible artifacting and scenes would just sort of melt into each other. I want to preserve quality if possible. I'm not concerned about file size. I'm not terribly concerned with the time it takes either, provided I can do them in a batch. Can someone familiar with the process please give me a command with the options? Thank you for your help. I'm posting the mediainfo in case it helps: General Complete name : \\SERVER\Video\Public\test.avi Format : AVI Format/Info : Audio Video Interleave File size : 189 MiB Duration : 11mn 18s Overall bit rate : 2 335 Kbps Writing application : Lavf52.32.0 Video ID : 0 Format : MPEG-4 Visual Format profile : Advanced Simple@L5 Format settings, BVOP : 2 Format settings, QPel : No Format settings, GMC : No warppoints Format settings, Matrix : Default (H.263) Muxing mode : Packed bitstream Codec ID : XVID Codec ID/Hint : XviD Duration : 11mn 18s Bit rate : 2 129 Kbps Width : 720 pixels Height : 480 pixels Display aspect ratio : 16:9 Frame rate : 29.970 fps Standard : NTSC Color space : YUV Chroma subsampling : 4:2:0 Bit depth : 8 bits Scan type : Progressive Compression mode : Lossy Bits/(Pixel*Frame) : 0.206 Stream size : 172 MiB (91%) Writing library : XviD 1.2.1 (UTC 2008-12-04) Audio ID : 1 Format : MPEG Audio Format version : Version 1 Format profile : Layer 3 Mode : Joint stereo Mode extension : MS Stereo Codec ID : 55 Codec ID/Hint : MP3 Duration : 11mn 18s Bit rate mode : Constant Bit rate : 192 Kbps Channel(s) : 2 channels Sampling rate : 48.0 KHz Compression mode : Lossy Stream size : 15.5 MiB (8%) Alignment : Aligned on interleaves Interleave, duration : 24 ms (0.72 video frame)
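
    The container copy keeps the Xvid "Packed bitstream" shown by mediainfo, which is exactly the kind of thing that tends to confuse hardware players; a real re-encode to H.264 avoids it. A hedged sketch: CRF 20 is a quality-oriented default (lower = better and larger), and the MP3 track is kept as-is:

      avconv -i file.avi -c:v libx264 -preset medium -crf 20 -c:a copy file.mkv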

    Read the article

  • Record and display Kinect audio and video via avconv

    - by ThomasH
    I'm trying to record audio and video from a Kinect using avconv but I seem to have trouble specifying the right options. avconv -f video4linux2 -video_size 640x480 -c:v h264 -c:a ac3 -i /dev/video1 test.mp4 results in: avconv: /build/buildd/libav-extra-0.8.3ubuntu0.12.04.1/libav/libavutil/mathematics.c:79: av_rescale_rnd: Assertion `c > 0' failed. Aborted (core dumped) Cheese is perfectly happy recording the audio and video from the Kinect, so it's not a setup issue. For added fun, I also need to display the video at the same time.
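
    Two things stand out in the failing command: -c:v/-c:a placed before -i act as input (decoder) options for the raw V4L2 device, and the Kinect microphone array is a separate ALSA capture device rather than part of /dev/video1, so audio needs its own input. A sketch; the ALSA device name "hw:CARD=Kinect" is an assumption to be replaced with the Kinect's actual capture card:

      avconv -f video4linux2 -video_size 640x480 -i /dev/video1 \
             -f alsa -i hw:CARD=Kinect \
             -c:v libx264 -preset medium -c:a ac3 test.mp4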

    Read the article

  • Why is my ogg bigger than my m4a?

    - by acidzombie24
    I am using ffmpeg.exe -i 0123456789 -ab 192k out.m4a ffmpeg.exe -i 0123456789 -f wav - | oggenc2.exe - -r -q 6 -o out.ogg (0123456789 has no extension). My m4a output is 14,608 kB while my ogg output is 19,809 kB. Why? AFAIK -q 6 is roughly 192 kbps, so it should be about even. I could see one file being 1-3 MB bigger than the other, but 5 MB is pretty large. The m4a is almost 75% of the ogg! That's a lot! Why is this?
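
    A quick way to see where the size went is to compare the actual average bitrates ffmpeg reports for the two finished files; a diagnostic sketch for the Windows command prompt:

      ffmpeg.exe -i out.m4a 2>&1 | findstr bitrate
      ffmpeg.exe -i out.ogg 2>&1 | findstr bitrate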

    Read the article

  • The fastest way to encode image+audio for Youtube from command line?

    - by Pavel Vlasov
    I have an mp3 and an image, and I want to make a simple clip to upload to YouTube. Is there a fast solution? If video formats are so badly designed, then maybe it is possible to use a prerendered video-only clip? This works well, except it takes as long as the audio lasts: ffmpeg -loop_input -r ntsc -i "%IMAGE%" -i "%AUDIO%" -r 1 -acodec copy -shortest -re -force_fps "%VIDEO%" This takes a second but results in a black-screen video that plays fine in a desktop video player but is not accepted by YouTube: ffmpeg -i "%IMAGE%" -i "%AUDIO%" -acodec copy "%VIDEO%" Windows 7. Preserving audio quality is preferred over video quality.
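
    Most of the time in the first command goes into -re (read the input at its native rate) together with the NTSC input rate; dropping -re and keeping both input and output frame rates very low means only a handful of frames get encoded. A hedged sketch for a newer ffmpeg, where -loop 1 replaced -loop_input; whether YouTube accepts such a low frame rate is something to verify:

      ffmpeg -loop 1 -r 2 -i "%IMAGE%" -i "%AUDIO%" -r 2 -acodec copy -shortest "%VIDEO%"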

    Read the article

  • Why is my ogg bigger than my m4a?

    - by acidzombie24
    I am using the following commands to encode an audio file to m4a & ogg formats: ffmpeg.exe -i 0123456789 -ab 192k out.m4a ffmpeg.exe -i 0123456789 -f wav - | oggenc2.exe - -r -q 6 -o out.ogg (0123456789 has no extension.) My m4a output is 14,608kB while my ogg output is 19,809kB. Why? AFAIK -q 6 is roughly 192kbps. So it should be about even. I could see one file being 1-3MB bigger than the other, but 5MB is pretty large. The m4a is almost 75% of the ogg! Why is this?
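
    oggenc's -q 6 is a VBR quality target (nominally around 192 kbps, but it floats with the material), so the fairer size comparison is a managed nominal bitrate. A sketch of the same pipeline with -b instead of -q:

      ffmpeg.exe -i 0123456789 -f wav - | oggenc2.exe - -r -b 192 -o out.ogg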

    Read the article

  • Map based on a specific type of stream

    - by Steven Penny
    Consider the following file Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'infile.mp4': Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1038 Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 386 kb/s Stream #0:2(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 124 kb/s I would like to run a command that uses the video and only the stereo audio. With this particular file I could just run ffmpeg -i infile.mp4 -c copy -map v -map :2 outfile.mp4 However this will not work if the audio streams are switched. How can I tell FFmpeg to use the 2 channel audio only?
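
    Stream specifiers can't select by channel count directly, but ffprobe can report which audio stream is stereo and the result can be fed to -map. A sketch assuming exactly one stereo audio stream:

      # Print index,channels per audio stream and keep the index of the 2-channel one.
      IDX=$(ffprobe -v error -select_streams a -show_entries stream=index,channels \
            -of csv=p=0 infile.mp4 | awk -F, '$2==2 {print $1; exit}')
      ffmpeg -i infile.mp4 -c copy -map 0:v -map 0:"$IDX" outfile.mp4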

    Read the article

  • How can I fix problems with interlaced video jerking/flickering when played back on DVD players? (Mixin

    - by Simon P Stevens
    I'm trying to make a DVD and the final DVD jerks when played on standalone DVD players. It seems to play fine on PCs. I think the problem may be to do with interlacing settings when rendering the final output, but I'll outline the whole editing process I have followed in case I've made a mistake somewhere else. Most of the footage comes from a sony handy cam (one of those mini DVD ones) so isn't great quality. It was set to "high quality" (haha) and 16:9 aspect ratio when it was recorded. I copy the files directly from the mini DVDs onto the hard drive and import them into Cinelerra. In Cinelerra I set the format to 25fps, 720x576, RGBA-8bit, 16:9, interlaced bottom fields first. When I've finished the editing, I add a Fields to frames effect (set to bottom first) to each video track. I render to audio and video separately: Audio: AC3, 128kbps Video: YUV4MPEG steam, video pipe settings: ffmpeg -f yuv4mpegpipe -i - -y -target dvd -flags +ilme+ildct mpeg2video % Cinelerra often crashes during the rendering, so I set it to generate a new video file at each label, and combine them using cat when I've got a sucesful render of each one. Once I've combined them, I use mencoder to re-index them: mencoder -forceidx -oac copy -ovc copy merged.m2v -o mergedReIndexed.m2v I combine the audio and video files using ffmpeg: ffmpeg -i AudioFile.ac3 -i VideoFile.m2v -target dvd -flags +ilme+ildct FinalMovie.mpg Then I build the menus with spumux and I create the DVD file system with dvdauthor, and finally I write it do a dvd-r like this: nice -n -20 growisofs -dvd-compat -speed=2 -Z /dev/dvd -dvd-video -V VIDEO ./ && eject /dev/dvd Originally, when I did it the DVD flickered badly, so as suggested in a guide I added the fields to frames effect in cinelerra. Now it doesn't "flicker", but has become "jerky" when there is lots of motion, particularly when the camera is moving, so the whole background moves. This is what I've tried so far: Removed "mpeg2video" from cinelerra video render pipe. Removed +ilme from render pipe. Removed +ildct from render pipe. Removed +ilme from render audio/video rejoin command. Removed +ildct from render audio/video rejoin command. Added -alt to render pipe. Added -alt to render audio/video rejoin command. Tried with and without the frames to fields effect in Cinelerra. and various combinations of the above. I've also tried this: change the Cinelerra fps to 50, use fields to frames (instead of frames to fields), render to an intermediate QTforlinux jpeg video stream, re-importing that back into Cinelerra, adding a frames to fields effect and then rendering that output as normal (@25fps), and I still have the same problem. Has anyone experienced this "jerking" playback before? Can anyone give any suggestions on how to fix it? (Like I say, it plays back fine on a PC, but not on any of the standalone players I've tried)
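
    One thing worth trying, sketched from the commands above: skip the Fields-to-frames effect so the material stays interlaced bottom-field-first end to end, and tell the MPEG-2 encoder the field order explicitly with -top 0 (0 = bottom field first) alongside +ilme+ildct in the Cinelerra render pipe:

      ffmpeg -f yuv4mpegpipe -i - -y -target dvd -flags +ilme+ildct -top 0 mpeg2video %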

    Read the article

  • Video encoding is very slow on Amazon EC2 instance

    - by Timka
    We are using an Amazon EC2 m1.xlarge instance for video re-encoding and it looks like the actual encoding process takes a very long time. For an average 250 MB video file it takes about an hour to encode. Instance: m1.xlarge (Xeon E5645, 15 GB RAM) Windows Server 2008 R2 64-bit AviSynth version 2.5 (32bit) + ffms2 plugin (FFmpegSource 1.21) FFmpeg SVN-r13712 libavutil 3213056 libavcodec 3356930 libavformat 3411456 libavdevice 3407872 Number of parallel jobs is 3 Average CPU utilization ~96% Update#1 Source video: mp4/h.264 Parameters for ffmpeg: --enable-memalign-hack --enable-avisynth --enable-libxvid --enable-libx264 + --enable-libgsm --enable-libfaac --enable-libfaad --enable-liba52 + --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-pthreads + --enable-swscale --enable-gpl Video files encoded to mp4/h.264 with the following extra command line options: -threads 0 -coder 0 -bf 0 -refs 1 -level 30 -maxrate 10000000 -bufsize 10000000

    Read the article

  • Looking for a way to execute a task on all files in a directory (recursively) on Windows

    - by stzzz1
    I have a huge number of mp4 video files that need a volume boost. I need a way to execute an ffmpeg audio filter on all files in a specified base directory (and in subdirectories as well). My problem is that I'm working on a Windows computer and I have no knowledge of its shell syntax. I would like to do the equivalent of what this bash script does: TARGET_FILES=$(find /path/to/dir -type f -name *.mp4) for f in $TARGET_FILES do ffmpeg -i $f -af 'volume=4.0' output.$f done I spent quite some time this afternoon looking for a solution but the recursive nature of what I need (that is so simple with find!) isn't too clear. Any help would be greatly appreciated!
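
    A Windows cmd sketch of the same idea; cmd wants double quotes rather than the single quotes bash allows, and snapshotting the file list first avoids re-processing the freshly written outputs:

      :: run from inside a .bat file; at an interactive prompt use %f instead of %%f
      dir /S /B "C:\path\to\dir\*.mp4" > files.txt
      for /F "usebackq delims=" %%f in ("files.txt") do (
        ffmpeg -i "%%f" -af "volume=4.0" "%%~dpnf.louder.mp4"
      )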

    Read the article
