Knowledge-augmented Methods for Natural Language Processing [electronic resource] / by Meng Jiang, Bill Yuchen Lin, Shuohang Wang, Yichong Xu, Wenhao Yu, Chenguang Zhu.

By: Jiang, Meng [author]
Contributor(s): Lin, Bill Yuchen [author] | Wang, Shuohang [author] | Xu, Yichong [author] | Yu, Wenhao [author] | Zhu, Chenguang [author] | SpringerLink (Online service)
Material type: Text
Series: SpringerBriefs in Computer Science
Publisher: Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Edition: 1st ed. 2024
Description: IX, 95 p. 18 illus., 15 illus. in color. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9789819707478
Subject(s): Natural language processing (Computer science) | Computational linguistics | Data mining | Natural Language Processing (NLP) | Computational Linguistics | Data Mining and Knowledge Discovery
Additional physical formats: Printed edition: No title; Printed edition: No title; Printed edition: No title
DDC classification: 006.35
LoC classification: QA76.9.N38
Online resources: E-book
Contents:
Chapter 1. Introduction to Knowledge-augmented NLP -- Chapter 2. Knowledge Sources -- Chapter 3. Knowledge-augmented Methods for Natural Language Understanding -- Chapter 4. Knowledge-augmented Methods for Natural Language Generation -- Chapter 5. Augmenting NLP Models with Commonsense Knowledge -- Chapter 6. Summary and Future Directions.
In: Springer Nature eBook
Summary: Over the last few years, natural language processing has seen remarkable progress due to the emergence of larger-scale models, better training techniques, and greater availability of data. Examples of these advancements include GPT-4, ChatGPT, and other pre-trained language models. These models are capable of characterizing linguistic patterns and generating context-aware representations, resulting in high-quality output. However, these models rely solely on input-output pairs during training and, therefore, struggle to incorporate external world knowledge, such as named entities, their relations, common sense, and domain-specific content. Incorporating knowledge into the training and inference of language models is critical to their ability to represent language accurately. Additionally, knowledge is essential in achieving higher levels of intelligence that cannot be attained through statistical learning of input text patterns alone. In this book, we will review recent developments in the field of natural language processing, specifically focusing on the role of knowledge in language representation. We will examine how pre-trained language models like GPT-4 and ChatGPT are limited in their ability to capture external world knowledge and explore various approaches to incorporate knowledge into language models. Additionally, we will discuss the significance of knowledge in enabling higher levels of intelligence that go beyond statistical learning on input text patterns. Overall, this survey aims to provide insights into the importance of knowledge in natural language processing and highlight recent advances in this field.
Holdings:
Item type: Libro Electrónico
Current library: Biblioteca Electrónica
Collection: Colección de Libros Electrónicos
Copy number: 1
Status: Not for loan


UABC ; Perpetuity
