Deep Learning for Video Understanding [electronic resource] / by Zuxuan Wu, Yu-Gang Jiang.

By: Wu, Zuxuan [author.]
Contributor(s): Jiang, Yu-Gang [author.] | SpringerLink (Online service)
Material type: Text
Series: Wireless Networks
Publisher: Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Edition: 1st ed. 2024
Description: IX, 188 p. 99 illus. in color. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783031576799
Subject(s): Telecommunication | Signal processing | Computer vision | Multimedia systems | Communications Engineering, Networks | Signal, Speech and Image Processing | Computer Vision | Multimedia Information Systems
Additional physical formats: Printed edition (no title given)
DDC classification: 621.382
LoC classification: TK5101-5105.9
Online resources: E-book
Contents:
Introduction -- Overview of Video Understanding -- Deep Learning Basics for Video Understanding -- Deep Learning for Action Recognition -- Deep Learning for Action Localization -- Deep Learning for Video Captioning -- Unsupervised Feature Learning for Video Understanding -- Efficient Video Understanding -- Future Research Directions -- Conclusion.
In: Springer Nature eBook. Summary: This book presents deep learning techniques for video understanding. For deep learning basics, the authors cover machine learning pipelines and notation, along with 2D and 3D Convolutional Neural Networks for spatial and temporal feature learning. For action recognition, the authors introduce classical frameworks for image classification, and then elaborate on both image-based and clip-based 2D/3D CNN networks for action recognition. For action detection, the authors cover sliding windows, proposal-based detection methods, single-stage and two-stage approaches, and spatial and temporal action localization, followed by an introduction to datasets. For video captioning, the authors present language-based models and show how to perform sequence-to-sequence learning for video captioning. For unsupervised feature learning, the authors discuss the necessity of shifting from supervised to unsupervised learning and then explain how to design better surrogate training tasks to learn video representations. Finally, the book introduces recent self-supervised training pipelines such as contrastive learning and masked image/video modeling with transformers. The book outlines promising directions, with the aim of promoting future research in the field of video understanding with deep learning. Presents an overview of deep learning techniques for video understanding; covers important topics such as action recognition, action localization, and video captioning; introduces cutting-edge, state-of-the-art video understanding techniques.
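As an illustration of the sliding-window approach to temporal action localization mentioned in the summary, the sketch below scores every fixed-length temporal window of a clip by its mean per-frame confidence and returns the best-scoring segment. This is a hypothetical toy example in plain Python, not code from the book; the function name and the frame scores are invented for illustration.

```python
def sliding_window_localization(frame_scores, window_size, stride=1):
    """Toy temporal action localization: slide a fixed-size window
    over per-frame action confidences, score each window by its mean,
    and return (start, end, score) of the best segment."""
    best = None
    for start in range(0, len(frame_scores) - window_size + 1, stride):
        window = frame_scores[start:start + window_size]
        score = sum(window) / window_size
        if best is None or score > best[2]:
            best = (start, start + window_size, score)
    return best

# Hypothetical per-frame confidences for a 10-frame clip; the action
# is concentrated around frames 2-4.
scores = [0.1, 0.2, 0.9, 0.95, 0.85, 0.3, 0.1, 0.05, 0.1, 0.2]
print(sliding_window_localization(scores, window_size=3))  # best segment is frames 2-5
```

Real systems replace the mean of hand-set confidences with learned clip classifiers or proposal networks, but the exhaustive window scan is the same basic idea the summary refers to.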
Holdings
Item type: Electronic book
Current library: Electronic Library
Collection: Electronic Books Collection
Copy number: 1
Status: Not for loan



UABC ; Perpetuity
