As a longtime viewer of closed captioning, I appreciate the tremendous work done by human captioners to provide an accurate translation of the spoken word into the written word for television programming. I am thankful for the work of the trained professionals who make sure that the captions are as verbatim and accurate to the spoken word as possible, and who pay close attention to punctuation and speaker changes, both of which are crucial to the comprehension of the captions.

However, lately I have noticed a trend of news stations and other platforms ditching human captioning in favor of automatic speech recognition, or ASR. ASR captions are virtually always distinguishable from human-produced captions because of their drastic reduction in quality. The captions produced by ASR are too often riddled with inaccurate words, omissions, missing punctuation, and missing speaker-change identification. The developers of these ASR systems tend to tout how many words they caption, but fail to mention the number of word, punctuation, and formatting errors in the output, and oftentimes these confusing, error-laden captions move at a speed too fast for the average reader's comprehension. Human-produced captions, on the other hand, always indicate speaker changes and include correct punctuation, and with proper training, captioners can achieve high speeds while maintaining a fair balance of accuracy and readability.

I believe that the FCC should not allow stations to use ASR for captioning unless they have demonstrated that these systems can indeed produce captions that are legible, comprehensive, and indistinguishable from the work done by human captioners. I would also like the FCC to develop metrics to better monitor compliance among TV networks so that the burden of reporting noncompliance isn't placed solely on the complainants.