Summary
As part of the IKILeUS project at the University of Stuttgart, research was conducted to explore how Large Language Models (LLMs) can enhance the accuracy and contextual relevance of captions generated by automatic speech recognition (ASR). While ASR tools provide a foundation for accessibility, they often produce grammatical errors, misinterpret homophones, and struggle with domain-specific terminology. To address these challenges, experiments were conducted with LLMs such as GPT-3.5 and Llama2-13B to refine the generated captions and correct captioning errors. The corrected captions were evaluated using standard NLP metrics, namely Word Error Rate (WER), BLEU, and ROUGE scores, and showed notable improvements in accuracy. The findings suggest that LLMs can effectively enhance the readability, coherence, and precision of automatically generated captions, offering a promising direction for improving video accessibility for the Deaf and Hard of Hearing (DHH) community.
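As a minimal illustration of the evaluation setup mentioned above, the sketch below scores an ASR caption before and after a hypothetical LLM correction using WER, BLEU, and ROUGE-L. It assumes the Python packages jiwer, sacrebleu, and rouge_score; the example sentences and variable names are illustrative assumptions and are not taken from the project data.

```python
# Minimal sketch: scoring a caption before and after LLM correction.
# Assumes the jiwer, sacrebleu, and rouge_score packages; the example
# sentences are hypothetical and not drawn from the project data.
import jiwer
import sacrebleu
from rouge_score import rouge_scorer

reference = "the lecture covers stochastic gradient descent in detail"
asr_raw = "the lecture covers stochastic gradient dissent in detail"    # homophone error
llm_fixed = "the lecture covers stochastic gradient descent in detail"  # after LLM correction

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

for label, hypothesis in [("ASR output", asr_raw), ("LLM-corrected", llm_fixed)]:
    wer = jiwer.wer(reference, hypothesis)                          # word error rate (lower is better)
    bleu = sacrebleu.sentence_bleu(hypothesis, [reference]).score   # BLEU (higher is better)
    rouge_l = scorer.score(reference, hypothesis)["rougeL"].fmeasure
    print(f"{label:>14}: WER={wer:.2f}  BLEU={bleu:.1f}  ROUGE-L={rouge_l:.2f}")
```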
User