This article explores issues associated with immersive storytelling in order to examine how the field of World Building can constitute a theoretical framework for practice in the context of VR-based and Full Dome artistic projects. With respect to immersion, the intent is to interpret the concept of storytelling in relation to the recent formulation of extended reality (XR). The very concept of World Building is transauthor and transmedia by nature. Its transauthor dimension resides in the idea of subcreation, i.e., designing environments and interaction rules that provide a storytelling basis for generating multiple stories. Once the universe has been conceived, stories written by different authors take shape through transmedia processes across multiple distribution media (film, video games, web, etc.). The question then arises: how can the World Building approach shape the construction of immersive experiences? The article sets out to answer this question and, in doing so, to contribute to research on environmental storytelling.
This study investigates the user experience to clarify what it is like to experience stories in VR (virtual reality) and how immersion influences story experiences in immersive storytelling. It develops and tests a VR experience model that integrates presence, flow, empathy, and embodiment. The results imply that users' personal traits correlate with immersion in VR: user experience in VR depends on individual traits, which in turn influence how strongly users immerse themselves in a VR story. The way users view and accept VR stories derives from the way they envisage and intend to experience them. Rather than simply being influenced by technological features, users have intentional and purposeful control over VR stories. The findings suggest that the cognitive processes by which users experience quality, presence, and flow determine how they will empathize with and embody VR stories.
•User experience of virtual reality storytelling.•How immersion influences story experiences in immersive storytelling.•Users' personal traits correlate with immersion in VR.•The way users view and accept VR stories derives from the way they envisage and intend to experience them.
First-person captioning is significant because it provides veracious descriptions of egocentric scenes from a unique perspective. There is also a need to caption such scenes, a.k.a. life-logging, for patients, travellers, and emergency responders in an egocentric narrative. Ego-captioning is non-trivial because (1) ego-images can be noisy due to motion and camera angles; (2) describing a scene in a first-person narrative involves drastically different semantics; (3) empirical inferences must be made beyond visual appearance because the cameraperson is often outside the field of view. We note that humans make good sense of casual footage thanks to contextual awareness in judging when and where an event unfolds and with whom the cameraperson is interacting. This inspires the infusion of such “contexts” for situation-aware captioning. To close the gap of lacking ego-captioning datasets, we create EgoCap, which contains 2.1K ego-images, over 10K ego-captions, and 6.3K contextual labels. We propose EgoFormer, a dual-encoder transformer-based network that fuses contextual and visual features. The context encoder is pre-trained on ImageNet before fine-tuning on context classification tasks. Similar to visual attention, we exploit stacked multi-head attention layers in the captioning decoder to reinforce attention to the context features. EgoFormer achieves state-of-the-art performance on EgoCap with a CIDEr score of 125.52. The EgoCap dataset and EgoFormer are publicly available at https://github.com/zdai257/EgoCap-EgoFormer.
•First quantitative study of image captioning from an egocentric perspective.•Introduction of a transformer-based context-fusion architecture.•EgoCap, a sizable first-person image-caption dataset, is released.
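The EgoFormer abstract describes a captioning decoder whose attention layers attend to context features alongside visual features. The abstract gives no implementation details, so the following is a minimal single-head NumPy sketch of that fusion idea only: visual patch features and hypothetical when/where/who context embeddings (the shapes and the three-slot context are illustrative assumptions, not the paper's actual configuration) are concatenated into one key/value set that decoder queries attend over.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product attention (single head, for clarity)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (n_q, n_kv) similarity scores
    weights = softmax(scores, axis=-1)       # each query's weights sum to 1
    return weights @ values, weights

rng = np.random.default_rng(0)
d = 64
visual = rng.normal(size=(49, d))    # e.g. a 7x7 grid of visual patch features
context = rng.normal(size=(3, d))    # hypothetical when/where/who context embeddings
queries = rng.normal(size=(5, d))    # decoder states for 5 caption tokens

# Context fusion: the decoder attends over visual and context features jointly,
# so caption tokens can draw on scene context the image alone does not show.
fused_kv = np.concatenate([visual, context], axis=0)   # (49 + 3, d)
out, weights = cross_attention(queries, fused_kv, fused_kv)
print(out.shape)       # one fused feature vector per caption token
```

In the actual EgoFormer this is realized with stacked multi-head attention layers and learned projections; the sketch keeps a single head and raw features to make the visual-plus-context key/value concatenation explicit.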