Ok, Eureka was a bit early. But… this… time.
The mpegtsdemux and mp4mux elements of the GStreamer framework for the DM368 seem to be a bit picky. Too picky for me. So here is what I do:
I demux the files with tsdemux on my video recorder and save the output of tsdemux for later use:
pid=577 (0x0241), ch=132, id=1, type=0x02 (m2v), stream=0xe0, fps=25.00, len=2517240ms, fn=62929, head=+490ms
pid=578 (0x0242), ch=132, id=2, type=0x03 (m2a), stream=0xc0, fps=8.33, len=2517000ms, fn=20975, tail=-730ms
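The head=/tail= fields in that output carry the stream offsets, which is why the log is worth keeping. A one-liner along these lines (just a sketch – the log filename 00001.tsdemux.log is my invention) pulls them back out later:

grep -oE '(head|tail)=[+-][0-9]+ms' 00001.tsdemux.log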
The audio stream is delayed by 490 milliseconds – I will need that offset for muxing. Then I recode the video stream on the LeopardBoard:
time gst-launch -v filesrc location=00001.track_577.m2v ! \
  TIViddec2 framerate=25 codecName=mpeg2dec engineName=codecServer ! \
  TIVidenc1 framerate=25/1 codecName=h264enc engineName=codecServer contiguousInputFrame=true ! \
  capsfilter caps="video/x-h264,framerate=25/1,pixel-aspect-ratio=(fraction)16/11" ! \
  mp4mux ! filesink location=00001.track_577.264
The capsfilter "pixel-aspect-ratio=(fraction)16/11" does the 16:9 formatting of the 704x576px PAL stream. mp4mux is needed because if I "filesink" the output of TIVidenc1 directly, an incorrect file is built, and ffmpeg/vlc will play the file at 30fps.
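For the record, the pixel aspect ratio falls out of the display aspect divided by the storage aspect:

PAR = (16/9) ÷ (704/576) = (16 × 576) / (9 × 704) = 9216 / 6336 = 16/11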
Since the board doesn’t have an mp2 audio codec, I decode the audio stream using mad:
gst-launch filesrc location=00001.track_578.m2a ! mad ! audioconvert ! \
  TIAudenc1 codecName=aaclcenc engineName=codecServer ! \
  filesink location=00001.track_578.aac
This way I can still use the hardware-accelerated AAC encoding.
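A quick sanity check of both encodes before copying them over is to let ffmpeg probe them – no output file needed, it just prints the stream info (plus a complaint about the missing output):

ffmpeg -i 00001.track_577.264
ffmpeg -i 00001.track_578.aac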
Now I have the audio file 00001.track_578.aac and the video file 00001.track_577.264. These can be muxed (fast, since it is just a stream copy) with ffmpeg on my PVR:
ffmpeg -i 00001.track_578.aac -itsoffset 0.490 -i 00001.track_577.264 -acodec copy -vcodec copy 00001.mp4
You might notice the 490ms offset – it keeps video and audio in sync (HandBrake seems to do this automatically when transcoding transport streams; ffmpeg et al. don't seem to). The hardware video encoder runs at roughly 50fps for an h264-high-profile encoding. Not bad!
The CPU-based audio decoding is unfortunately a bit slower, so I expect the actual transcoding to run at slightly less than real time.
Next steps: deinterlacing (if possible – right now, the interlaced material is decoded without deinterlacing but encoded as if it were progressive, which is sometimes annoying) and automation.
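As a first stab at the automation part, something like the following wrapper could chain the steps (an untested sketch: the track PIDs 577/578 and the 0.490 offset are hard-coded here but would really have to be parsed out of the saved tsdemux output, and the final mux step runs on the PVR in my setup, not on the board):

#!/bin/sh
# Untested sketch: chains the recode steps from above for one recording.
N=00001
# video: MPEG-2 -> H.264 via the hardware codecs
gst-launch filesrc location=$N.track_577.m2v ! \
  TIViddec2 framerate=25 codecName=mpeg2dec engineName=codecServer ! \
  TIVidenc1 framerate=25/1 codecName=h264enc engineName=codecServer contiguousInputFrame=true ! \
  capsfilter caps="video/x-h264,framerate=25/1,pixel-aspect-ratio=(fraction)16/11" ! \
  mp4mux ! filesink location=$N.track_577.264
# audio: MP2 -> AAC (mad on the CPU, AAC encode in hardware)
gst-launch filesrc location=$N.track_578.m2a ! mad ! audioconvert ! \
  TIAudenc1 codecName=aaclcenc engineName=codecServer ! \
  filesink location=$N.track_578.aac
# mux (this step belongs on the PVR)
ffmpeg -i $N.track_578.aac -itsoffset 0.490 -i $N.track_577.264 \
  -acodec copy -vcodec copy $N.mp4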
In the process I built a few images of the RidgeRun SDK and the TI-DVSDK (the latter is what I am running on the board now), downloaded the SVN version of gstreamer, patched the TI-DVSDK to integrate the new versions of the hardware-accelerated codecs, and patched the gstreamer config to allow de- and encoding at the same time.
The standard configuration is:
- capture from camera/component input, encode, stream or save
- read from file/stream and display on component or LCD output.
And that's why "decodeencode" (= transcode) is not supported out of the box.
It would be nice to just start ffmpeg on the board and have it do the rest. But ffmpeg-dm365 doesn't work for me yet – it segfaults, and only a few codecs are supported so far.