Another option would be to...
Posted Dec 14, 2007 18:47 UTC (Fri) by jd
Parent article: The Grumpy Editor's video journey, part 1
...regard the analog medium as simply a collection of stills. Scanning in a time series of still images is much, much easier on Linux. You even get the added benefit that if you rescan the same still under red, green and blue light, there are packages for stitching such images together into a single high-dynamic-range image, which may be closer to what was originally on the analog medium.
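As a toy sketch of why multiple scans help (this is a hypothetical helper, not any particular package): averaging several samples of the same still reduces per-scan noise, which is the simplest form of what real exposure-fusion tools do far more thoroughly.

```python
# Combine several scans of the same still by averaging per-pixel.
# Scans are represented as lists of pixel rows (grayscale values);
# a real tool would work on full-colour image files instead.

def combine_scans(scans):
    """Average a list of equally-sized grayscale scans pixel-wise."""
    n = len(scans)
    height = len(scans[0])
    width = len(scans[0][0])
    return [
        [sum(scan[y][x] for scan in scans) / n for x in range(width)]
        for y in range(height)
    ]

# Three slightly noisy 2x2 scans of the same frame:
scans = [
    [[100, 200], [50, 255]],
    [[104, 196], [54, 255]],
    [[ 96, 204], [46, 255]],
]
combined = combine_scans(scans)  # noise averages out to [[100, 200], [50, 255]]
```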
Next up, if there is a sound track, you record that. It's kept separate from the images; at this point you only care that both are time-sequenced.
You then thread the time-sequenced stills and the sound into a single movie, where the inputs are scaled to an equal length in time, hopefully but not necessarily linearly. That's the hard part. When you record the sound, the slice of sound corresponding to a given frame must be played at the same time that frame is shown. The painfully bad sync on some YouTube videos, where the people involved aren't attempting anything nearly as sophisticated as what I'm outlining, shows that this is not an easy thing to do.
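The linear case of that sync arithmetic is straightforward to sketch. The frame rate and audio sample rate below are illustrative assumptions (16 fps is in the range of old 8 mm film; 44100 Hz is a common audio rate), not values from the article:

```python
# Map a frame number to the slice of audio samples that should play
# while that frame is on screen, assuming a constant (linear) mapping.

def audio_slice_for_frame(frame, fps=16, sample_rate=44100):
    """Return (start, end) audio sample indices for a given frame number."""
    start = int(frame * sample_rate / fps)
    end = int((frame + 1) * sample_rate / fps)
    return start, end

# Frame 0 covers the first sample_rate/fps samples; after one second
# of film (16 frames here), exactly one second of audio has played.
print(audio_slice_for_frame(0))   # (0, 2756)
print(audio_slice_for_frame(16))  # starts at sample 44100
```

Non-linear corrections (e.g. for a projector or scanner that drifts in speed) would replace the constant `fps` with a measured time base, which is where the real difficulty lies.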
You now (hopefully) have something that accurately reproduces the movies, is likely superior to anything Linux's video input can produce (because you took multiple samples of each image, which can't be done in software alone in most cases), requires much simpler drivers (the graphics aren't sampled in hard real-time and aren't obtained from high-speed video capture hardware), and will probably cause far less damage to the hair follicles.
If the series doesn't already cover such approaches, it might be worth adding something on non-traditional video capture methods - in this case, exploiting the fact that video is really no different from animation by flicking through the pages of a book.