SSA-GAN: End-to-End Time-Lapse Video Generation with Spatial Self-Attention
- Published: 23 February 2020
- Type: conference paper (book chapter)
- Publisher: Springer Science and Business Media LLC
Abstract
No abstract available.
This publication has 21 references indexed in Scilit:
- Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. Published by Institute of Electrical and Electronics Engineers (IEEE), 2017
- Image-to-Image Translation with Conditional Adversarial Networks. Published by Institute of Electrical and Electronics Engineers (IEEE), 2017
- 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. Published by Springer Science and Business Media LLC, 2016
- Identity Mappings in Deep Residual Networks. Published by Springer Science and Business Media LLC, 2016
- Learning Temporal Transformations from Time-Lapse Videos. Published by Springer Science and Business Media LLC, 2016
- Context Encoders: Feature Learning by Inpainting. Published by Institute of Electrical and Electronics Engineers (IEEE), 2016
- Temporal Action Localization in Untrimmed Videos via Multi-stage CNNs. Published by Institute of Electrical and Electronics Engineers (IEEE), 2016
- Hierarchical Recurrent Neural Encoder for Video Representation with Application to Captioning. Published by Institute of Electrical and Electronics Engineers (IEEE), 2016
- Towards Understanding Action Recognition. Published by Institute of Electrical and Electronics Engineers (IEEE), 2013
- Action Recognition with Improved Trajectories. Published by Institute of Electrical and Electronics Engineers (IEEE), 2013