Visual and audible feedback for selecting the exact position where to cut the video.
After #14 (closed) it is now possible to cut a video at the exact position we need, but where is that exact position? This issue is also related to #26, because the problem there is more or less the same: having more information about the cut zone.
We are humans, and our senses are our references for doing a task. We can do a better job when we have more accurate information about the task, just as a robot needs more sensors to perform a task better.
The best senses for cutting a video are sight, to see exactly where a visual scene starts or ends, and hearing, to tell exactly where a conversation or ambient noise begins or ends. But when we use Video Trimmer, we are currently almost blind and deaf. This is where we ask ourselves whether we can improve the information Video Trimmer gives us about the sound and the visual scene in the neighborhood of the position where we want to cut the video.
Regarding the sound information, we know from professional audio software that the best way to get fast feedback about the sound in a zone of a video (the neighborhood of the position where we want to cut) is the waveform of the audio. Of course, we don't need the waveform of the whole video, only the waveform in a neighborhood of the current video position, where we plan to cut. Audacity is probably the best-known example of this, but subtitleeditor also shows a waveform to give feedback about where a subtitle line should be added.
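As an illustration of the idea, a waveform strip over a neighborhood boils down to taking the decoded PCM samples in a small time window around the current position and reducing them to one (min, max) peak pair per display column. The sketch below is hypothetical: the function name, parameters, and the synthetic sample list are made up for this example; in practice Video Trimmer would get the samples from its decoding pipeline, not from a Python list.

```python
# Sketch: reduce PCM samples in a window around the current position
# to per-column (min, max) peaks for drawing a waveform strip.
# All names here are hypothetical; a real implementation would read the
# samples from the video's decoder rather than from an in-memory list.

def waveform_peaks(samples, sample_rate, position_s, window_s, columns):
    """Return one (min, max) pair per display column for the audio in
    [position_s - window_s/2, position_s + window_s/2]."""
    center = int(position_s * sample_rate)
    half = int(window_s * sample_rate / 2)
    start = max(0, center - half)
    end = min(len(samples), center + half)
    window = samples[start:end]
    if not window:
        return [(0.0, 0.0)] * columns
    bucket = max(1, len(window) // columns)
    peaks = []
    for col in range(columns):
        chunk = window[col * bucket:(col + 1) * bucket] or [0.0]
        peaks.append((min(chunk), max(chunk)))
    return peaks

# Tiny synthetic example: one second of a 0..1 ramp at 8 kHz,
# waveform of the 0.2 s neighborhood around the 0.5 s mark.
samples = [i / 8000 for i in range(8000)]
peaks = waveform_peaks(samples, 8000, position_s=0.5, window_s=0.2, columns=4)
```

The point of the (min, max) reduction is that each screen column can be drawn as a single vertical line from min to max, which is how Audacity-style waveform views stay fast regardless of zoom level.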
Regarding the visual scene information, we know that the best option is a visual timeline where we can see exactly what is happening, as in the most used video cutters. This concept is called a "thumbnails timeline". To be useful for cutting a video precisely, the thumbnails timeline needs to show the video frame by frame in the neighborhood of the position where we want to cut. That way, if the video position is at frame x and the GUI can show 10 thumbnails, the thumbnails we will see are the frames [x-4, x-3, x-2, x-1, x, x+1, x+2, x+3, x+4, x+5]. But if we move the video position, we need to regenerate the thumbnails to display the new neighborhood. The same applies while watching the video, since the position changes as the video progresses.
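The frame window described above can be sketched as a small helper: given the current frame and the number of thumbnails the GUI can show, it returns the frame indices to render, putting the extra frame after the current position when the count is even (matching the [x-4 .. x+5] example for 10 thumbnails). The function name and the clamping behavior at the video's edges are assumptions for this sketch, not anything Video Trimmer already has.

```python
# Hypothetical sketch of the thumbnail frame window: which frames to
# render around the current frame x. With an even count, one extra
# frame lands after x, matching the [x-4 .. x+5] example for count=10.

def thumbnail_window(x, count, total_frames):
    before = (count - 1) // 2              # frames shown before x
    first = x - before
    # Clamp so the window stays inside the video near its start/end.
    first = max(0, min(first, total_frames - count))
    return list(range(first, first + count))

# x = 100 with 10 thumbnails -> frames 96..105, i.e. [x-4 .. x+5].
print(thumbnail_window(100, 10, 1000))  # [96, 97, ..., 105]
```

Whenever the position changes (a seek, or playback advancing), recomputing this window and regenerating only the thumbnails that entered it keeps the cost proportional to the movement, not to the video length.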
As an example, the software vidcutter has a thumbnails timeline, but it is not fully functional for this purpose because it is not made to show the neighborhood of the position where we want to cut the video.
TMPGEnc Video Mastering Works uses both concepts.
Adobe Premiere uses both concepts.
Last, but probably the most important example, Pitivi also uses both concepts.