000 03982nam a22005895i 4500
001 978-3-031-57679-9
003 DE-He213
005 20250516160114.0
007 cr nn 008mamaa
008 240801s2024 sz | s |||| 0|eng d
020 _a9783031576799
_9978-3-031-57679-9
050 4 _aTK5101-5105.9
072 7 _aTJK
_2bicssc
072 7 _aTEC041000
_2bisacsh
072 7 _aTJK
_2thema
082 0 4 _a621.382
_223
100 1 _aWu, Zuxuan.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
245 1 0 _aDeep Learning for Video Understanding
_h[electronic resource] /
_cby Zuxuan Wu, Yu-Gang Jiang.
250 _a1st ed. 2024.
264 1 _aCham :
_bSpringer Nature Switzerland :
_bImprint: Springer,
_c2024.
300 _aIX, 188 p. 99 illus. in color.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aWireless Networks,
_x2366-1445
505 0 _aIntroduction -- Overview of Video Understanding -- Deep Learning Basics for Video Understanding -- Deep Learning for Action Recognition -- Deep Learning for Action Localization -- Deep Learning for Video Captioning -- Unsupervised Feature Learning for Video Understanding -- Efficient Video Understanding -- Future Research Directions -- Conclusion.
520 _aThis book presents deep learning techniques for video understanding. For deep learning basics, the authors cover machine learning pipelines and notation, and 2D and 3D convolutional neural networks for spatial and temporal feature learning. For action recognition, the authors introduce classical frameworks for image classification and then elaborate on both image-based and clip-based 2D/3D CNN networks for action recognition. For action detection, the authors cover sliding windows, proposal-based detection methods, single-stage and two-stage approaches, and spatial and temporal action localization, followed by an introduction to datasets. For video captioning, the authors present language-based models and show how to perform sequence-to-sequence learning for video captioning. For unsupervised feature learning, the authors discuss the need to shift from supervised to unsupervised learning and then introduce how to design better surrogate training tasks for learning video representations. Finally, the book introduces recent self-supervised pipelines such as contrastive learning and masked image/video modeling with transformers. The book closes with promising directions, aiming to promote future research in video understanding with deep learning. Presents an overview of deep learning techniques for video understanding; Covers important topics like action recognition, action localization, video captioning, and more; Introduces cutting-edge and state-of-the-art video understanding techniques.
541 _fUABC ;
_cPerpetuity
650 0 _aTelecommunication.
650 0 _aSignal processing.
650 0 _aComputer vision.
650 0 _aMultimedia systems.
650 1 4 _aCommunications Engineering, Networks.
650 2 4 _aSignal, Speech and Image Processing.
650 2 4 _aComputer Vision.
650 2 4 _aMultimedia Information Systems.
700 1 _aJiang, Yu-Gang.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
710 2 _aSpringerLink (Online service)
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783031576782
776 0 8 _iPrinted edition:
_z9783031576805
776 0 8 _iPrinted edition:
_z9783031576812
830 0 _aWireless Networks,
_x2366-1445
856 4 0 _zElectronic book
_uhttp://libcon.rec.uabc.mx:2048/login?url=https://doi.org/10.1007/978-3-031-57679-9
912 _aZDB-2-ENG
912 _aZDB-2-SXE
942 _cLIBRO_ELEC
999 _c275877
_d275876