I have noticed that present-day, retro-style depictions of how life worked in the 1970s often assume details which may not be 100% accurate historically. One such detail is the assumption that, if television stations in the 1970s were disseminating an analog signal, that signal must have been recorded on videotape.
Videotape existed at a much earlier point in time, but was hamstrung in its conception by an inability to handle color signal-formats. This was due to the playback device being unable to ensure a stable frequency for the color subcarrier. It was only a much later development that made color videotape formats possible: the ability to use VCOs, PLLs, and other elements of a feedback loop to heterodyne the frequency of the color information on the tape, and then to produce an output with strict control over its frequencies, based on the accuracy of a single quartz crystal in the playback device. We needed numerous integrated circuits to accomplish that, and the earliest videotape machines only had tubes.
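The heterodyne arithmetic behind these later "color-under" formats can be sketched in a few lines. The 629 kHz figure below is the approximate color-under carrier used by VHS, given purely as an illustration; the general point is only that mixing two frequencies yields their sum and difference, and that a crystal-locked oscillator in the feedback loop can place the difference product back on the exact subcarrier frequency.

```python
# Sketch of the "color-under" heterodyne arithmetic used by later,
# consumer-era color videotape formats.  The 629 kHz value is the
# approximate VHS color-under carrier, used here only for illustration.

F_SC = 3_579_545          # NTSC color subcarrier, Hz (3.579545 MHz)
F_UNDER = 629_371         # approx. color-under carrier on tape, Hz

def heterodyne(f_signal, f_lo):
    """Mixing two sine waves produces sum and difference frequencies."""
    return f_lo + f_signal, abs(f_lo - f_signal)

# On playback, a crystal-referenced local oscillator is servoed (via a
# PLL) so that the *difference* product lands back on the exact
# subcarrier frequency, despite small tape-speed errors.
f_lo = F_SC + F_UNDER
f_sum, f_diff = heterodyne(F_UNDER, f_lo)
print(f_diff)   # difference product lands on 3,579,545 Hz, the subcarrier
```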
Early radio-transmitters also needed one quartz crystal for every frequency they were licensed to transmit on. It required later technology to transmit on numerous accurate frequencies while possessing only one quartz crystal. And quartz crystals tended to be expensive, before they started to be mass-produced to resonate at one standard frequency.
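The later technology in question was PLL frequency synthesis, whose arithmetic is simple to sketch. The crystal frequency, divider values, and channel spacing below are illustrative assumptions, not taken from any particular transmitter.

```python
# Sketch of PLL frequency synthesis: one crystal, many channel
# frequencies.  All numbers here are hypothetical, for illustration.

F_CRYSTAL = 10_000_000    # 10 MHz reference crystal

def synthesize(n, m):
    """A PLL with a divide-by-m reference divider and a divide-by-n
    feedback divider locks its VCO to f_crystal * n / m."""
    return F_CRYSTAL * n // m

# Dividing the reference down to 25 kHz sets the channel spacing;
# the feedback divider n then selects the channel.
print(synthesize(1, 400))        # 25,000 Hz step size
print(synthesize(5_834, 400))    # 145,850,000 Hz, one hypothetical channel
```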
What TV stations in the 1970s had was a device into which 16mm emulsion film was fed – 16mm also being a standard photographic film-format at the time. It captured video from this photographic movie-film and translated it into an analog video signal – in color – that signal to be transmitted as it was being output from the machine. So content was actually distributed to the TV stations on film.
And the notion did not exist yet, that capturing the film content would require scanning it with a laser. Instead, this machine used the same type of video-capture tubes that were used in video-cameras for live broadcasting, which were also quite large and bulky. And yes, this required one video-capture tube for each primary color – in practice, though not in theory.
For TV, the image on one frame of the film was brought into focus – using a lens – on 3 capture-tubes, the light-input to which was split by reflectors.
This also affects how we watch the old movies today.
It can happen that my Roku is set to a 1080p picture-format, but that some content from the 1970s – on a specialty channel I subscribe to – is only available in 4:3 aspect ratio, from film. There, the picture-quality is better than what the NTSC analog signal would have allowed, but still not as good as 1080p would allow.
The picture-quality is then capped, by how grainy the old film was.
In those cases the bit-rate is high enough, for me to recognize the grain of the film.
It was also a fact that, in the USA and Canada, they did not change the frame-rate the way they did in Europe.
On both sides of the Atlantic, a video signal used interlaced scan, which meant that each frame consisted of an even-numbered and an odd-numbered set of scan-lines.
With the North-American standards, fields – aka half-frames – were transmitted 60 times per second, alternately containing the even-numbered and the odd-numbered scan-lines, to result in 30 full frames per second.
Because motion-picture film was usually shot at 24FPS, it was feasible here to devise a standard by which one frame of film would be scanned by three fields, while the next frame was scanned by only two.
Thus, Frame 1 would consist of even-odd-even scan-lines, Frame 2 would consist of odd-even scan-lines, Frame 3 would consist of odd-even-odd scan-lines, Frame 4 would consist of even-odd scan-lines, and after every 4 frames, the cycle would repeat itself.
This actually allowed the film to run at 24FPS, as it was filmed, while also allowing the vertical sync frequency to be 60Hz.
3 + 2 = 5 fields per 2-frame pair
60 / 5 = 12 2-frame pairs per second (video)
24 / 2 = 12 2-frame pairs per second (film)
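The cadence described above – what is now called 3:2 pulldown – can be simulated directly. The little function below reproduces the even-odd-even / odd-even parity pattern from the text, and confirms that 24 film frames map onto exactly 60 fields.

```python
# Simulation of the 3:2 pulldown cadence: 24 film frames per second
# become 60 interlaced fields per second.  Field parity alternates
# even/odd; film frames alternately span 3 and 2 fields.

def pulldown_32(film_frames):
    """Map a list of film frames to (frame, parity) fields, 3-2-3-2..."""
    fields = []
    parity = "even"
    for i, frame in enumerate(film_frames):
        span = 3 if i % 2 == 0 else 2
        for _ in range(span):
            fields.append((frame, parity))
            parity = "odd" if parity == "even" else "even"
    return fields

fields = pulldown_32(["A", "B", "C", "D"])
# 4 film frames -> 3+2+3+2 = 10 fields; frame A spans even-odd-even,
# frame B spans odd-even, matching the cycle described in the text.
print(len(fields))                          # 10
print(len(pulldown_32(list(range(24)))))    # 60 fields per 24 frames
```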
In Europe, the vertical sync frequency was tuned to their power-line frequency, as it was here; but there, the power-line frequency was 50Hz. And so, because there was no easy way to convert, the Europeans would speed up the film to 25FPS, and thus broadcast it at 50Hz straight.
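This European speed-up is easy to put in numbers, as a back-of-envelope calculation: running 24FPS film at 25FPS shortens the runtime by about 4%, and raises the audio pitch by roughly 0.7 of a semitone as a side effect.

```python
# The ~4% "PAL speed-up", in numbers: 24 fps film played at 25 fps.

import math

speedup = 25 / 24                        # about 4% faster
runtime_100min = 100 / speedup           # a 100-minute film plays shorter
semitones = 12 * math.log2(speedup)      # corresponding pitch rise

print(round(speedup, 4))                 # 1.0417
print(round(runtime_100min, 1))          # 96.0 minutes
print(round(semitones, 2))               # 0.71 of a semitone
```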
(Edit : ) By the late 1970s, TV stations with ‘real budgets’ started to possess large, expensive, reel-to-reel, color video-recording machines. Also, it was not really beyond the ability of major organizations to fill a large amount of space, just with circuits, as evidenced by the existence of mainframe computers.
OTOH, monochrome, reel-to-reel video-tape machines existed that used tubes, but which were surprisingly compact.
One fact about the early monochrome video-tape machines that impressed me – even though the one I got to play with dated from as late as the mid-1970s – was their relatively high resolution, in comparison to color TV signals. I had grown accustomed in my teens to the pictures of poorly-filtered color TVs, so that an older, monochrome video-tape machine and its camera actually had superior ‘quality’, in certain ways.
Also, I know that the color TV cameras that existed in studios in the 1970s tended to be ‘big boxes on stands’, mainly because they had to contain 3 capture-tubes instead of just 1. But one detail I do not know the full answer to is whether the color form of the signal was being broadcast live, while perhaps the videotaped format was in monochrome. Or were there additional film-cameras running on-set? This I do not know.