Neural Networks with Model Compression [electronic resource] / by Baochang Zhang, Tiancheng Wang, Sheng Xu, David Doermann.

By: Zhang, Baochang [author.]
Contributor(s): Wang, Tiancheng [author.] | Xu, Sheng [author.] | Doermann, David [author.] | SpringerLink (Online service)
Material type: Text
Series: Computational Intelligence Methods and Applications
Publisher: Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Edition: 1st ed. 2024
Description: IX, 260 p. 101 illus., 67 illus. in color. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9789819950683
Subject(s): Machine learning | Artificial intelligence | Image processing -- Digital techniques | Computer vision | Machine Learning | Artificial Intelligence | Computer Imaging, Vision, Pattern Recognition and Graphics | Computer Vision
Additional physical formats: Printed edition: No title; Printed edition: No title; Printed edition: No title
DDC classification: 006.31
LoC classification: Q325.5-.7
Online resources: Electronic book
Contents:
Chapter 1. Introduction -- Chapter 2. Binary Neural Networks -- Chapter 3. Binary Neural Architecture Search -- Chapter 4. Quantization of Neural Networks -- Chapter 5. Network Pruning -- Chapter 6. Applications.
In: Springer Nature eBook
Summary: Deep learning has achieved impressive results in image classification, computer vision and natural language processing. To achieve better performance, deeper and wider networks have been designed, which increases the demand for computational resources. The number of floating-point operations (FLOPs) has grown dramatically with larger networks, and this has become an obstacle for deploying convolutional neural networks (CNNs) on mobile and embedded devices. In this context, our book focuses on CNN compression and acceleration, which are important for the research community. We describe numerous methods, including parameter quantization, network pruning, low-rank decomposition and knowledge distillation. More recently, to reduce the burden of handcrafted architecture design, neural architecture search (NAS) has been used to automatically build neural networks by searching over a vast architecture space. Our book also introduces NAS, given its state-of-the-art performance in various applications, such as image classification and object detection. We also describe extensive applications of compressed deep models to image classification, speech recognition, object detection and tracking. These topics can help researchers better understand the usefulness and potential of network compression in practical applications. Interested readers should have basic knowledge of machine learning and deep learning to better follow the methods described in this book.
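Two of the compression methods named in the summary, network pruning and parameter quantization, can be illustrated in a few lines. The sketch below is a minimal, generic illustration and not taken from the book: `magnitude_prune` zeroes the smallest-magnitude fraction of weights (unstructured magnitude pruning), and `quantize_uniform` maps floating-point weights to low-bit signed integers with a symmetric uniform scale.

```python
def magnitude_prune(weights, sparsity=0.5):
    """Unstructured magnitude pruning: zero the smallest-|w| fraction of weights."""
    # Rank weight indices by absolute magnitude, smallest first.
    ranked = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    k = int(sparsity * len(weights))  # number of weights to remove
    pruned = list(weights)
    for i in ranked[:k]:
        pruned[i] = 0.0
    return pruned

def quantize_uniform(weights, bits=8):
    """Symmetric uniform quantization to signed (bits)-bit integers."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]     # integer codes
    deq = [v * scale for v in q]                # dequantized approximation
    return q, deq, scale

w = [0.9, -0.05, 0.02, -1.2]
print(magnitude_prune(w, sparsity=0.5))  # [0.9, 0.0, 0.0, -1.2]
q, deq, scale = quantize_uniform(w, bits=8)
```

Both operate on a flat weight list for clarity; real implementations work layer-wise on tensors and typically fine-tune the network after pruning or quantization to recover accuracy.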
Holdings
Item type: Electronic book
Current library: Electronic Library
Collection: Electronic Books Collection
Copy number: 1
Status: Not for loan



UABC ; Perpetuity

Powered by Koha