Adapting Large Language Models to Narrative Content
Siciliani L.; Basile P.; Semeraro G.
2024-01-01
Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities across various domains. However, adapting them to narrative content remains challenging. This paper explores the opportunities in adapting open-source LLMs to narrative contexts, where coherence, plot development, and character consistency are paramount. We investigate existing techniques for adapting and then fine-tuning LLMs on narrative data and propose a solution tailored to the specific demands of narrative generation. Furthermore, we analyze the performance of the proposed approach on the standard WritingPrompts dataset, exploring several corpora for the adaptation step. Moreover, we propose a qualitative evaluation involving human feedback. Results show that the adaptation helps the model improve generation quality and the accuracy of prompt ranking.
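
As a rough illustration of the two-stage pipeline the abstract describes (domain adaptation on a narrative corpus followed by fine-tuning on WritingPrompts), the sketch below uses Hugging Face Transformers. The base model name, file paths, data fields (prompt, story), and hyperparameters are illustrative assumptions, not the configuration used in the paper.

# Minimal two-stage sketch: (1) continued pretraining on a narrative corpus,
# (2) fine-tuning on WritingPrompts prompt->story pairs.
# Model name, file paths, and hyperparameters are placeholders, not the
# paper's actual setup.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "mistralai/Mistral-7B-v0.1"   # any open-source causal LM
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

# Stage 1 data: raw narrative text used for the adaptation step (placeholder path).
narrative = load_dataset("text", data_files={"train": "narrative_corpus.txt"})
narrative = narrative.map(tokenize, batched=True, remove_columns=["text"])

# Stage 2 data: WritingPrompts pairs formatted as "prompt\n\nstory" (assumed fields).
wp = load_dataset("json", data_files={"train": "writingprompts_train.jsonl"})
wp = wp.map(lambda ex: {"text": ex["prompt"] + "\n\n" + ex["story"]})
wp = wp.map(tokenize, batched=True, remove_columns=wp["train"].column_names)

# Causal-LM collator (no masked-LM objective); the same model is trained
# sequentially on both stages to mimic adapt-then-fine-tune.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
for stage, data in [("adapt", narrative["train"]), ("finetune", wp["train"])]:
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=f"out-{stage}", num_train_epochs=1,
                               per_device_train_batch_size=1,
                               gradient_accumulation_steps=16),
        train_dataset=data,
        data_collator=collator,
    )
    trainer.train()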


