Dynamic Network Representation Based on Latent Factorization of Tensors [electronic resource] / by Hao Wu, Xuke Wu, Xin Luo.

By: Wu, Hao [author]
Contributor(s): Wu, Xuke [author] | Luo, Xin [author] | SpringerLink (Online service)
Material type: Text
Series: SpringerBriefs in Computer Science
Publisher: Singapore : Springer Nature Singapore : Imprint: Springer, 2023
Edition: 1st ed. 2023
Description: VIII, 80 p. 20 illus., 16 illus. in color. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9789811989346
Subject(s): Artificial intelligence -- Data processing | Quantitative research | Data Science | Data Analysis and Big Data
Additional physical formats: Printed edition (no title); Printed edition (no title)
DDC classification: 005.7
LoC classification: Q336
Online resources: E-book
Contents:
Chapter 1 Introduction -- Chapter 2 Multiple Biases-Incorporated Latent Factorization of Tensors -- Chapter 3 PID-Incorporated Latent Factorization of Tensors -- Chapter 4 Diverse Biases Nonnegative Latent Factorization of Tensors -- Chapter 5 ADMM-Based Nonnegative Latent Factorization of Tensors -- Chapter 6 Perspectives and Conclusion.
In: Springer Nature eBook. Summary: Dynamic networks are frequently encountered in real industrial applications such as the Internet of Things. A dynamic network is composed of numerous nodes and large-scale, dynamic, real-time interactions among them, where each node denotes a specific entity, each directed link denotes a real-time interaction, and the strength of an interaction is quantified as the weight of a link. As the number of involved nodes grows drastically, it becomes impossible to observe all of their interactions at each time slot, making the resulting dynamic network High-Dimensional and Incomplete (HDI). Despite its HDI nature, an HDI dynamic network with directed and weighted links contains rich knowledge about the involved nodes' behavior patterns. It is therefore essential to study how to build efficient and effective representation learning models for acquiring this knowledge. In this book, we first model a dynamic network as an HDI tensor and present the basic latent factorization of tensors (LFT) model. We then propose four representative LFT-based network representation methods. The first integrates short-time, long-time, and preprocessing biases to precisely represent the volatility of network data. The second utilizes a proportional-integral-derivative (PID) controller to construct an adjusted instance error and achieve a higher convergence rate. The third accounts for the non-negativity of fluctuating network data by constraining latent features to be non-negative and incorporating extended linear biases. The fourth adopts an alternating direction method of multipliers (ADMM) framework to build a learning model that represents dynamic networks with high precision and efficiency.
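As a rough illustration of the basic LFT idea summarized above, the following Python sketch factorizes a sparse third-order (source node x target node x time slot) tensor with stochastic gradient descent on observed entries only. All names, the rank, and the hyperparameters are illustrative assumptions; the sketch does not reproduce the bias, PID, nonnegativity, or ADMM extensions proposed in the book.

# Minimal illustrative sketch (not the book's implementation): latent
# factorization of a sparse third-order tensor, trained only on the
# observed entries of an HDI dynamic network.
import numpy as np

def lft_sgd(observed, shape, rank=8, lr=0.01, reg=0.05, epochs=100, seed=0):
    """observed: list of (i, j, k, value) for known link weights;
    shape: (num_source_nodes, num_target_nodes, num_time_slots)."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.random((shape[0], rank))  # source-node latent factors
    V = 0.1 * rng.random((shape[1], rank))  # target-node latent factors
    W = 0.1 * rng.random((shape[2], rank))  # time-slot latent factors
    for _ in range(epochs):
        for i, j, k, y in observed:
            pred = np.dot(U[i] * V[j], W[k])  # sum_r U[i,r] * V[j,r] * W[k,r]
            err = y - pred                    # instance error on one observed entry
            # L2-regularized SGD updates for the three factor matrices
            U[i] += lr * (err * V[j] * W[k] - reg * U[i])
            V[j] += lr * (err * U[i] * W[k] - reg * V[j])
            W[k] += lr * (err * U[i] * V[j] - reg * W[k])
    return U, V, W

# Tiny usage example with a few observed link weights.
obs = [(0, 1, 0, 0.9), (1, 2, 0, 0.4), (0, 2, 1, 0.7)]
U, V, W = lft_sgd(obs, shape=(3, 3, 2))
print(np.dot(U[0] * V[1], W[0]))  # estimated weight of link 0 -> 1 at time slot 0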
Holdings
Item type: E-book
Current library: Electronic Library
Collection: E-book Collection
Copy number: 1
Status: Not for loan

Multi-user access


UABC ; Perpetuity
