Deadlines coming soon: Nominate your colleagues for the ACM SIGGRAPH awards for researchers, practitioners, artists, and educators.

Nominations/submissions for each award category must be finalized by the deadline date (coming soon) to be considered for 2020.

The nine SIGGRAPH awards are: the Steven Anson Coons Award, the Computer Graphics Achievement Award, Significant New Researcher Award, Outstanding Doctoral Dissertation Award, Outstanding Service Award, Distinguished Artist Award, Practitioner Award, ACM SIGGRAPH Academy, and the Distinguished Educator Award.

Please visit the awards page to learn about each award and its nomination process.

VES Announces Special 2019 Honorees

The VES announces the 2019 inductees into the VES Hall of Fame, the newest Lifetime and Honorary members and this year’s recipient of the VES Founders Award. The names of this year’s VES Fellows will be announced later. The honorees and Hall of Fame inductees will be recognized at a special reception in October.  

“Our VES honorees represent a group of exceptional artists, innovators and professionals who have had a profound impact on the field of visual effects,” said Mike Chambers, VES Board Chair. “We are proud to recognize those who helped shape our shared legacy and continue to inspire future generations of VFX practitioners.”

VES Hall of Fame

This distinction is bestowed upon a select group of professionals and pioneers who have played a significant role in advancing the field of visual effects by invention, science, contribution or advocacy of the art, science, technology and/or communication of visual effects.

Walt Disney (1901-1966)

Walter Elias Disney was an American entrepreneur, animator, voice actor, and television and film producer. A pioneer of the American animation industry, he introduced a score of innovations in the field of animation. As a film producer, Disney holds the record for most Academy Awards earned by an individual, having won 26 Oscars. His creative vision gave rise to the groundbreaking theme park Disneyland and to Disney Brothers Cartoon Studios, the origin of Walt Disney Animation Studios.

Stanley Kubrick (1928-1999)

Stanley Kubrick was an American film director, screenwriter, and producer. He is frequently cited as one of the greatest and most influential filmmakers in cinematic history. His filmography includes SPARTACUS, LOLITA, DR. STRANGELOVE, 2001: A SPACE ODYSSEY (Academy Award winner for Best Special Visual Effects), A CLOCKWORK ORANGE and THE SHINING.  All of Stanley Kubrick’s films from PATHS OF GLORY until the end of his career, except for THE SHINING, were nominated for Academy Awards or Golden Globe Awards.

Stan Lee (1922-2018)

Stan Lee was an American comic book writer, editor, publisher and producer.  He became Marvel Comics’ primary creative leader for two decades, leading its expansion from a small division of a publishing house to a multimedia corporation that dominated the comics industry. In collaboration with others at Marvel, he co-created numerous popular fictional characters, including superheroes SPIDER-MAN, THE X-MEN, IRON MAN, THOR, THE HULK, THE FANTASTIC FOUR, BLACK PANTHER, DAREDEVIL, DOCTOR STRANGE, SCARLET WITCH and ANT-MAN. Lee was an inductee into the comic book industry’s Will Eisner Award Hall of Fame, and a recipient of NEA’s National Medal of Arts as well as the VES Lifetime Achievement Award.

Founders Award Recipient

Awarded to any individual member of the VES who has significantly contributed to the success of the VES.

Susan Thurmond O’Neal

Susan Thurmond O’Neal joined the VES in the late 1990s and has served as a member of its global Board of Directors and the Executive Committee. For many years, she served as the Chair for the global Education Committee and currently chairs the Membership Committee – and she has been instrumental in the work to grow the Society by leading the bi-annual membership review and approval process. 

O’Neal is currently a recruiter for BLT Recruiting, Inc., and has worked as an Operations Manager at The Mill, Operations Director at Escape Studios in Los Angeles, and an Account Manager at Side Effects Software, Inc. She started her career in visual effects at Digital Domain in 1993, where she worked in finance and operations before turning to production. O’Neal’s credits include T2: 3D – BATTLE ACROSS TIME, TITANIC, THE ITALIAN JOB, WEST WING, DEUCE BIGALOW 2: EUROPEAN GIGOLO, as well as a host of music videos, commercials and other works.

Lifetime Member

Awarded for meritorious service to the Society, the industry and for furthering the interests and values of visual effects artists around the world.

Michael Fink, VES

Michael Fink began working in film on THE CHINA SYNDROME.  He was hooked, and worked on films such as STAR TREK: THE MOTION PICTURE and BLADE RUNNER before becoming a Visual Effects Supervisor on WAR GAMES (BAFTA Nomination).  He has since worked on THE ADVENTURES OF BUCKAROO BANZAI ACROSS THE 8TH DIMENSION, BATMAN RETURNS (Academy Award and BAFTA nominations), BRAVEHEART, MARS ATTACKS!, X-MEN, X-MEN 2, THE GOLDEN COMPASS (VES Award nomination, BAFTA and Academy Award winner), AVATAR, TRON: LEGACY, TREE OF LIFE and LIFE OF PI.

Fink directed the first Coca-Cola Polar Bear spot in 1993, which was one of the earliest widely seen examples of 3D fur on a CG creature. Fink is a founding member of the Visual Effects Society and a former VES Board Member.  He is a member of the Visual Effects Branch of the Academy of Motion Picture Arts and Sciences, and has served on the Executive Committee.  He is currently a Professor at the School of Cinematic Arts at the University of Southern California and Chair of the Division of Film and Television Production.  He holds the Georges Méliès Endowed Chair in Visual Effects at the USC School of Cinematic Arts. 

Honorary Member

Awarded for exemplary contributions to the entertainment industry at large, and for furthering the interests and values of visual effects practitioners around the world.

Mike Brodersen

Mike Brodersen is one of the owners of FotoKem, currently serving as its Chief Strategy Officer. Thanks to Brodersen, FotoKem has been a valuable longtime partner with the VES, including serving as the Los Angeles site for the VES Awards nomination events for many years. 

In his 25-year tenure at the company, Brodersen has helped incubate services including film scanning and recording, digital intermediate, digital restoration, software development and file-based dailies. Established in 1963 and one of the last remaining full service film laboratories in the world, FotoKem provides color grading, digital imaging, VFX, graphics, audio, and a variety of other post-production services from dailies through finishing.  In recent years, the company has been able to provide post-production, imaging and VFX support services on titles like ONCE UPON A TIME IN HOLLYWOOD, AQUAMAN, STAR WARS: THE LAST JEDI, DUNKIRK, VICE, GREEN BOOK, KONG: SKULL ISLAND, BETTER CALL SAUL and HOMELAND. 

Meet SUE: a Super Uber Elemental

INTERVIEW
with Sony Pictures Imageworks’ Theo Bialek
by Jessica Fernandes, Spark CG Society

© 2019 · Spark CG Society
August 22, 2019

In Spider-Man: Far From Home, our friendly neighbourhood Spider-Man is unwillingly thrust into the spotlight, battling a power-hungry, tech savvy, vengeful adversary and his legion of drones. With a shot count of 320, and crew size of 200, Sony Pictures Imageworks was responsible for the bulk of the impressive third act of the film. Theo Bialek, VFX Supervisor at Sony Imageworks, sat down with us to chat through their work on the project.

Far From Home

Having VFX supervised both Spider-Man: Homecoming, and Far From Home, what struck you as the biggest differences between them?

The scope. Homecoming was a much simpler FX film for us in the sense that it’s really just one character (the Vulture) against Spider-Man. In Far From Home, it’s Spider-Man against Mysterio, Beck, thousands of drones, and the final Elemental creature (which is a complex set of different FX phenomena — water, fire, wind, clouds, lightning, lava). It was a much more varied and complex set of characters and circumstances, seen from a lot of different angles. And unlike on Homecoming, our Far From Home scenes occurred in daylight. There’s less that you can hide when it’s bright, especially when you’re trying to recreate or extend a live action set.

I had been cautioned that with Marvel films especially, as the story is continually refined, changes to the sequences would be inevitable right up to the end of post. Having recently finished Spider-Man: Homecoming, another joint Marvel third-act project, I was confident I knew the drill. We took precautions, overbuilding key assets and staffing up for the complexity increases we felt were likely to come our way. However, it didn’t take long before our team realized this time was different, and the precautions we undertook were only the start of what would be needed. Keeping our production team flexible was the best way to absorb all of the new ideas and constructs along the way.

Can you run us through some of the elements you worked on?

The Super Uber Elemental:
In the final act, Spider-Man battles and flies into the final Elemental creature (we named her SUE, the Super Uber Elemental). As originally conceived, Spider-Man was to swing along the peripheral of the storm, disabling the SUE illusion after confronting Beck and damaging his suit. Later in the production it was decided a more dynamic demise for SUE was needed. When instead an alternate idea of Spider-Man flying into the interior of SUE to disable hundreds of drones constructing her illusion with an electrified web network was pitched we thought maybe it was a joke. It didn’t take long for the reality to sink in when we realized the scope of the challenge. Rapidly prototyping various styles and methods for the interior shell of SUE we endeavored to figure out what the inside of an illusion could look like. Eventually settling on the construct of malleable voxels of light powered by drone holographic lasers to make up SUE’s interior skin shell.

On the exterior, SUE had a collection of different effects that made up her form, but one of the most expensive was her water tentacles: huge streams of churning water, with an internal core of fire, that rise into the sky as lightning crackles along the perimeter. Generating these effects came at a large computational cost, both in the time to simulate and in the space required to store the data needed for rendering. As the number of SUE shots grew beyond 50, it became apparent that custom tentacles for every shot weren’t going to be feasible. To keep our CPU and disk-space expenditures to a minimum, we decided to cache out several long simulations and share them across shots rather than generating unique effects per shot. A single 400-frame sim of our water tentacle, with its associated spray, internal fire core, and lightning, approached 1,000 core-hours to simulate and 5 TB of data on disk. That is a large amount of data to keep for the duration of the show, but it still saved considerable time versus creating bespoke FX per shot.

To facilitate the fire effects on SUE and elsewhere in the sequence, we created Inferno, a proprietary tool developed in Houdini that allows us to run distributed fire and smoke FX simulations across multiple machines on a queue (versus having to run sims on a single machine). It also allows the artist to run at a lower-res setting, get feedback on the results, and then, if approved, iterate and refine to a higher res and render. Typically, when you have low-res simulations for these types of phenomena, there is a lot of wasted work in adding the higher resolution back. But in our tool, the rough version was a good indication of what the high res would be. That was a new stage we developed specifically on this show, to help us verify our sims. It’s now being used on multiple projects.

Drone Battle & Set Extensions:
The third act required vast distances to be covered within a shot, given the expected amount of drone and Spider-Man combat throughout the environment. We therefore needed a large enough CG environment to accommodate all the action. Deciding how much of the set to build virtually at high detail was especially challenging because the scope and complexity of the shots remained fluid and ever expanding throughout the production. Whenever possible we would elect to use plates; however, we often found the constraints of the shot design and camera moves required a full CG environment. By the end of the production we had created a high-detail CG version of the bridge and surrounding area that could be used for either set extension or full CG.

The majority of the plate work was filmed on the backlot of Leavesden Studios outside London. Partial sets were constructed to recreate portions of the bridge roadway and the elevated walkway above, where Beck coordinates his attacking drones.

Shooting on the actual bridge wasn’t an option logistically, as the cost and constraints of shutting down a functioning landmark weren’t practical for the time a full shoot requires. We were allowed limited access, though this was only useful for acquisition photography and reference footage. On the morning of our access, the city shut down the bridge in two-minute intervals, roughly every 15 minutes. This allowed us just enough time to clear traffic off the bridge and run out to various locations to shoot unobstructed panos and tiles along the roadway.

Plates were also shot on location during normal hours from the towers and sidewalks of the bridge, but this meant hundreds of pedestrians and vehicles in frame, mostly rendering the footage valuable as reference only. As a general rule, whenever possible we would try to shoot our plates and reference at three times of day — morning, afternoon, and late afternoon — so we could capture different sun directions from each location. Additional footage was shot from a barge on the river, atop a double-decker bus driving over the bridge, from a helicopter above, and along the shores, both on foot and from atop various neighboring buildings including City Hall and The Shard. By the end of the acquisition phase there wasn’t a spot our team hadn’t trekked across multiple times.

I heard that the different Elementals were based on previous Spider-Man villains (Sandman, Hydro Man, Molten Man, Cyclone). What did you use as reference for yours, as it’s an amalgamation of a number of different elements?

They supplied us with an initial piece of artwork that laid the groundwork for her design, establishing the number of limbs and the relative shapes of her appendages. What it didn’t do was give a detailed look at how all of the parts connected or how the differing phenomena interrelated. As SUE was meant to be derived from the earlier Elementals in the film, and those were also being developed in parallel, there was often a wait-and-see component to our character: where are the other vendors with their designs? Let’s try to borrow ideas from those.

Originally SUE wasn’t intended to be featured so heavily in the film, only expected to be seen from a distance in a handful of shots. As the scope grew and the design got increasingly complex, the character quickly became our largest challenge on the film. Originally imagined as discrete segments, the body was to be constructed solely of volcanic earth, with tentacle arms of water and a smoky, cloud-like head sat atop. Following this design, we approached her creation in the typical serial fashion: concept, model, rig, lookdev, FX, composite. When the direction for her design evolved, it was all hands on deck and we rebooted the process back to the concept stage. The updated brief was to create a mixture of all the elements, relating more closely to the other Elementals now further along in development. This required the previously volcanic form to also include regions of clear earth as rock and mud, water, and smoke.

In terms of additional reference, we took our cues from films like Clash of the Titans, looking at other large types of creatures. We tried to learn from that, as well as graft from the other Elementals being created.

Who came up with the name SUE?

It was an Imageworks suggestion. Some of the early concept work was titled “Uber Elemental”, while another piece had the name “Super Elemental”. Because we didn’t yet have a unifying name, in jest I suggested we call it the “Super Uber Elemental”, combining both names. Realizing SUE would be the acronym, it just seemed like a natural fit. They liked it, so it stuck.

Was any mocap used for SUE?

Although some of the other Elementals used mocap, because SUE had multiple arms — even using one of her arms more as a leg — it didn’t really make sense for us to go that route, so as a general rule we hand-animated her performance.

In terms of process, our animators often act out and record their performances using a modified Xbox Kinect system. In the case of SUE, our animators would pretend to be the massive creature, moving slowly as they tried to mimic the motion of a several-hundred-foot monster. This motion is recorded both as video and as three-dimensional scans. The 3D scan is then loaded into Maya, where it is used as a dynamic reference to help match timing and poses. This worked quite well for SUE as a fast method to block out first-pass animation.

What can you tell us about the final drone fight in the walkway?

The extended opening shot of the drone fight in the walkway was definitely one of our most difficult shots. Originally envisioned as a reality-only shot, Spider-Man battles the last remaining drones in the confined walkway without the aid of webs. As a single camera take well over 20 seconds in length, the shot was incredibly complex given the duration and the dynamic FX and destruction required. Spider-Man’s animation was a combination of several mini snippets of mocap takes and hand animation, all spliced together and enhanced with a unifying layer done by hand. Deep into production the requirements changed, and the shot was redesigned to start in the illusion realm, which gradually fades from a void back to reality as the drones are destroyed. This updated twist amplified the already complex shot, as the FX demands grew to include the devolving illusion effects and the accompanying signature Mysterio green vapor that inhabits the void world. With all of the flashing explosions and blue illusion effects, it was a particularly challenging shot to wrangle the visual elements in a manner that didn’t devolve into chaos.

What was the biggest challenge on this show?

The unexciting answer to that is the timeframe and pace we kept. The nature of the project required us to remain extremely flexible, revisiting and updating shots frequently. With such a large sequence and so many interconnected characters, it was a logistical puzzle that required large resources to keep under control. On the purely technical side, SUE was the biggest challenge for us. It could take six or more FX artists just to generate the elements for one of her shots.

What are you most proud of on this show?

The animation. It’s always a challenge on superhero films to keep the motion grounded in reality while still serving the ideal that it has to be exciting and out of the ordinary. As Spider-Man was much more active in this film than in Homecoming, we needed to take more risks in how far we could push the performance without breaking the sense of realism. With well over 100 shots needing a full CG Spider-Man, the level and scope of animation was immense.

There is also a large number of shots in the film that required a full CG suit replacement. After the point in the sequence where Spider-Man catches on fire, all later shots in our sequence needed the suit replaced with a burnt CG version. As this idea was added late in the production, that ended up being in excess of 70 shots, all needing rotomation, animation, lighting and comp to swap in the CG suit.

Well done, I didn’t notice it.

You wouldn’t be looking for it, so that’s the advantage that we had!

Anything you wish you could have done, if given more time?

There’s a segment in the middle of the battle on the bridge, which we internally called the car gauntlet, that was abandoned. Spider-Man leaps over and through cars along the roadway as explosions go off all around. It was a very dynamic and exciting action bit that ended up getting cut due to time constraints. Had we more time, I would have loved to finish that bit up.

Thank you so much to Theo for taking the time to chat with us. I was fully engrossed from beginning to end with Far From Home. Well done Sony on wrapping the film up so spectacularly!

Spider-Man: Far From Home – © 2019 CTMG, Inc. All Rights Reserved.

CVMP 2019 Final full papers deadline extension

The 16th ACM SIGGRAPH European Conference on Visual Media Production (CVMP 2019)
17-18 December 2019
BFI Southbank, London, UK
https://www.cvmp-conference.org

FULL PAPERS DEADLINE EXTENDED TO 23RD AUGUST (FINAL)

Call for Submissions

For over a decade, CVMP has built a reputation as the prime venue for researchers to meet practitioners in the creative industries: film, broadcast and games. Sponsored by ACM SIGGRAPH, the conference brings together expertise in video processing, computer vision, computer graphics, animation and physical simulation. It provides a forum for presentation of the latest research and application advances, combined with keynote and invited talks on state-of-the-art industry practice. CVMP regularly attracts around 140 attendees, split roughly 50:50 between academia and the creative industries.

We encourage participation from a diverse range of backgrounds including scientists, engineers, artists and producers to contribute inspiring papers, presentations, posters and technical abstracts. We invite submissions to the conference on any topic that demonstrates an impact on visual media production, animation, and interactive content creation and experiences. This year there is an additional focus on gaming across all platforms including design, production and engineering. We hope contributions will convey innovative ideas, technical details and insight or experience into theory and/or practice.

Full Papers: We invite submissions of regular, technical papers presenting novel research or applications related to any aspect of media production, including computer vision, graphics and machine learning research with application in this area. We particularly encourage submission of early-stage doctoral work. Submitted papers can be any length up to 10 pages and will be subject to double-blind peer review. Accepted papers will be presented in either oral or poster form, and will appear in the ACM Digital Library. Browse past CVMP papers in the ACM Digital Library here: http://bit.ly/ACM_CVMP

Short Papers and Technical Abstracts: Submissions are invited in the form of a one-page extended abstract, describing innovative industry practice or academic research. Submissions can also describe work in progress and do not prevent submission of the work elsewhere. Accepted submissions will be presented in poster form at the conference, and will not appear in the ACM Digital Library.

Demos: The demos programme promotes applied research and applications to facilitate collaborations between industrial and academic members of the media production community. Demos do not have to be related to full or short papers accepted at CVMP but should have the form of live demonstrations of media production methods and/or their applications. A one-page paper should be submitted describing the content of the demonstration and an overview of the approach.

Industry Talks: This year we are pleased to introduce a new track for industry talks. These should be submitted in the form of a one-page extended abstract describing the proposed talk. Talks will take a similar form to submissions for talks at SIGGRAPH and FMX and can be talks that have already been presented at other venues.

Submission Dates:

Full Papers deadline: FINAL EXTENSION TO 23 AUGUST 2019
Industry Talks deadline: 27 September 2019
Short Papers deadline: 27 September 2019
Demos deadline: 27 September 2019

Papers, Demos and Talks are invited in all areas of visual media production related to film, games, and broadcast, including but not limited to:

– 3D video capture and 3D-TV
– Augmented/virtual Reality
– Character animation
– Computational photography
– Computer vision
– Computer graphics
– High-dynamic range (HDR) imaging
– Image and video synthesis
– Image enhancement and restoration
– Image/model/asset editing
– Interactive media and games
– Level of detail (LOD)
– Machine learning
– Motion estimation
– Multiple camera systems
– Omni-directional video
– Post production using stereo, 3D and motion
– Pre-visualization
– Real-time imaging systems
– Real-time rendering
– Relighting images and video
– Scene modelling
– Segmentation and matting
– Video and camera tracking
– Video-based animation
– Video-based human motion capture
– Visual asset management
– Visual effects (VFX)
– Virtual production

Submission instructions: https://www.cvmp-conference.org/2019/submission-instructions/

2nd CFP IEEE VR 2020 Journal Papers – Abstracts due Sept. 3!

Call for Journal Papers

IEEE VR 2020: the 27th IEEE Conference on Virtual Reality and 3D User Interfaces

March 22-26, 2020, Atlanta, USA

http://ieeevr.org/2020/

Important Dates

Journal Track

  • September 3, 2019: Abstracts due (REQUIRED)
  • September 10, 2019: Submissions due
  • November 9, 2019: Notification of first review cycle results
  • January 6, 2020: Revised paper submissions due to second review cycle
  • January 22, 2020: Final notifications
  • January 31, 2020: Camera-ready material due from authors of accepted papers

Conference Track

  • November 16: Conference paper abstracts due (REQUIRED)
  • November 23: Conference paper submissions due
  • January 26: Conference paper notifications of results returned to authors
  • February 9: Camera-ready material due from authors of accepted conference papers

Overview

IEEE VR 2020 seeks original, high-quality papers in all areas related to virtual reality (VR), including augmented reality (AR), mixed reality (MR), and 3D user interfaces (3DUIs).

Inquiries contact: program2020 [at] ieeevr.org

Submission Guidelines

Paper abstracts and complete papers must be submitted electronically through the online submission system: https://new.precisionconference.com/~vr

Each research paper should provide a validated contribution covering one or more of the following categories: methodological, technology, applications, and systems.

  • Methodological papers should describe advances in theories and methods of AR/VR/MR and 3DUI, such as ethical issues, theories on presence, or human factors.
  • Technology papers should describe advancements in algorithms or devices critical to AR/VR/MR and 3DUI development such as input, display, user interaction, or tracking.
  • Application papers provide an important insight to the community by explaining how the authors built upon existing ideas and applied them to solve an interesting problem in a novel way. Each paper should include an evaluation of the success of the use of AR/VR/MR and/or 3DUI in the given application domain.
  • System papers should indicate how the developers integrated techniques and technologies to produce an effective system, and convey any lessons learned in the process.

Each paper should include an evaluation of its contributions, such as user studies, benchmarking and/or comparison with existing systems/techniques/methods.

Further Details Regarding Submissions. We welcome paper submissions not exceeding 9 pages, excluding references. References may not exceed two additional pages. Continuing our cooperation with the IEEE Transactions on Visualization and Computer Graphics (TVCG), all accepted paper submissions will automatically be published in a special issue of IEEE TVCG. To meet TVCG standards, papers recommended for inclusion to TVCG will undergo a two-stage review process (see SUBMISSION DEADLINES below). Authors of papers that are determined to be acceptable to the journal subject to minor revisions during the first review cycle will be invited to submit a revised version for a second review cycle. Only papers that are accepted in this second cycle will appear in the journal issue. Papers that fail to pass the second round of reviews may proceed through further revisions to appear in a future regular issue of TVCG.

Ethics and Responsibility. All submissions describing research experiments with human participants should follow the appropriate ethical guidelines and authors are encouraged to secure and report their pre-approval by the relevant ethics commission. An approval by any institutional review board should be indicated via the submission system. While this is not a mandatory requirement at this time, with the broad dissemination of VR, AR, and MR technology, our community should be aware of this responsibility.

Conference Presentation. All accepted papers must be orally presented at the conference. There is also the possibility for authors of relevant previously published TVCG papers (accepted within the last year) to present their work at IEEE VR 2020. Interested authors should contact the program chairs for more details.

Abstract Submission. Note that a paper abstract must be uploaded a week prior to the actual paper submission deadline. This facilitates the process of assigning reviewers, as the review process operates on a very tight schedule.

Topics

IEEE VR 2020 seeks contributions in VR/AR/MR and 3DUI including, but not limited to, the following topics:

  • 3D and volumetric display and projection technology
  • 3D authoring
  • 3D user interaction
  • 3DUI metaphors
  • Audio interfaces, sound rendering, spatialized audio, auditory perception and psychoacoustics
  • Collaborative interactions
  • Computer graphics techniques
  • Crowd simulation
  • Embodied agents, virtual humans and (self-)avatars
  • Ethical issues
  • Haptic and tactile interfaces, wearable haptics, passive haptics, pseudo haptics, other touch-based UI
  • Human factors and ergonomics
  • Immersive / 360° video
  • Immersive analytics and visualization
  • Input devices
  • Locomotion and navigation
  • Mediated and diminished reality
  • Mobile, desktop or hybrid 3DUIs
  • Modeling and simulation
  • Multi-user and distributed systems
  • Multimodal capturing and reconstruction
  • Multimodal input and output
  • Multimodal/cross-modal Interaction and perception
  • Multisensory rendering, registration, and synchronization
  • Non-fatiguing 3DUIs
  • Non-visual interfaces (such as olfactory)
  • Perception and cognition
  • Presence, body ownership, and agency
  • Scene description and management issues
  • Software architectures, toolkits, and engineering
  • Storytelling
  • Teleoperation and telepresence
  • Therapy and rehabilitation
  • Touch, tangible and gesture interfaces
  • Tracking and sensing
  • Usage research, evaluation methods and empirical studies

Additional Submission Guidelines

All paper submissions must be in English.

Paper submissions must not have been previously published. A manuscript is considered to have been previously published if it has appeared in a peer-reviewed journal, magazine, book, or meeting proceedings that is reliably and permanently available afterward in print or electronic form to non-attendees, regardless of the language of that publication. A paper identical or substantially similar in content (in its entirety or in part) to one submitted to VR should not be simultaneously under consideration for another conference or journal during any part of the VR review process, from the submission deadline until notifications of decisions are emailed to authors.

IEEE VR uses a DOUBLE-BLIND review process. This means that both the authors and the reviewers should remain anonymous to each other. Submissions (including citations and optional videos) should not contain information that identifies the authors, their institutions, funding sources, or their places of work. Relevant previous work by the authors should be cited in the third person to preserve anonymity. Authors should work diligently to ensure that their submissions do not expose their identities, whether through carelessness or intent. Authors who have questions about the double-blind submission policy should contact the program chairs.

Failure to make reasonable attempts to adhere to the double-blind policy will result in desk rejection.

In order to fully explain the relationship between the submitted paper and relevant previous work by the authors, authors may additionally upload previous papers as well as a non-anonymous letter of explanation; these materials will only be seen by the primary reviewer.

Authors are encouraged to submit videos to aid the program committee in reviewing their submissions. Videos must be submitted according to the instructions at the submission website. Videos submitted with papers will automatically be considered for possible inclusion in the video proceedings (video submissions may also be made independently, as described in the separate Call for Videos). When submitted as supporting material, videos must be free of any identifying information prior to reviewing, as per the double-blind submission policy. If accepted for the video proceedings, a revised version of the materials will be requested.

Submission Deadlines

Each deadline is 23:59:59 AoE (Anywhere on Earth, UTC-12:00) on the stated day, no matter where the submitter is located. A convenient way to see when AoE falls in your local time is to set your location as the first location and Baker Island (which observes AoE) as the second at https://www.timeanddate.com/worldclock/meeting.html
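The AoE arithmetic can also be checked directly with a few lines of standard-library Python (a sketch only; the September 10, 2019 date is the submission deadline from this call):

```python
from datetime import datetime, timezone, timedelta

# AoE (Anywhere on Earth) is UTC-12:00: a deadline has not passed
# as long as it is still that date somewhere on Earth.
AOE = timezone(timedelta(hours=-12))

# Example: the September 10, 2019 submission deadline, 23:59:59 AoE.
deadline_aoe = datetime(2019, 9, 10, 23, 59, 59, tzinfo=AOE)

# Convert to UTC to compare against your local clock.
deadline_utc = deadline_aoe.astimezone(timezone.utc)
print(deadline_utc)  # 2019-09-11 11:59:59+00:00
```

In other words, a 23:59:59 AoE deadline expires at 11:59:59 UTC on the following day.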

** The submission deadlines will be strictly enforced. Requests for extensions will not be honored **

  • September 3, 2019 : Abstracts due (REQUIRED)
  • September 10, 2019 : Submissions due
  • November 9, 2019 : Notification of first review cycle results
  • January 6, 2020 : Revised paper submission due for second review cycle
  • January 22, 2020 : Final notification for second review cycle
  • January 31, 2020 : Camera-ready material due from authors

Paper abstracts and complete papers must be submitted electronically through the online submission system: https://new.precisionconference.com/~vr

All VR Journal Paper submissions should be formatted using the IEEE Computer Society TVCG journal format described at http://junctionpublishing.org/vgtc/Tasks/camera_tvcg.html. Including a teaser image on page 1 is encouraged but not required.

Contacts

Journal Papers Chairs:

  • Joseph Gabbard, Virginia Tech, USA
  • Joaquim Jorge, INESC-ID / Técnico Lisboa, Portugal
  • Torsten Wolfgang Kuhlen, RWTH Aachen University, Germany
  • Maud Marchal, Univ. Rennes, INSA/IRISA, France
  • Anthony Steed, University College London, UK

journalpapers2020 [at] ieeevr.org
program2020 [at] ieeevr.org

Conference Papers Chairs:

  • Ferran Argelaguet, INRIA, France
  • Gerd Bruder, University of Central Florida, USA
  • Regis Kopper, Duke University, USA
  • Marc Erich Latoschik, University of Würzburg, Germany
  • Tabitha Peck, Davidson College, USA
  • Christian Sandor, City University of Hong Kong, Hong Kong SAR
  • Xubo Yang, Shanghai Jiao Tong University, China

conferencepapers2020 [at] ieeevr.org
program2020 [at] ieeevr.org