virtual worlds Archives - Learnlets
Clark Quinn's learnings about learning
https://blog.learnlets.com/category/virtual-worlds/

Meta-reflections (20 Dec 2022)
https://blog.learnlets.com/2022/12/meta-reflections/

I was recently pinged about a new virtual world, a ‘metaverse’-inspired new place for L&D. It looked like a lot of previous efforts! I admit I was underwhelmed, and I think sharing why might be worthwhile. So here are some meta-reflections.

I’ve written before on virtual worlds. In short, I think that when an experience needs to be both social and 3D, they make sense. At other times, they carry a lot of overhead for benefits that can be had in other ways. Further, to me, the metaverse really is just another virtual world. Your mileage may vary, of course.

This new virtual world had, like many others, the means to navigate in 3D and to place information around the space. The demo was a virtual museum. Which, I presume, is a nice alternative to traveling to a particular location. On the other hand, if it’s all digital, is this the best way to do it? Why navigate around in 3D? Why not treat it as an infographic, work in 2D, and lead people through the story? What did 3D add? Not much, that I could see.

My take has been, and continues to be, as they say, “horses for courses”. That is, use the right tool for the job. I once complained about watching a PowerPoint presentation in Second Life (rightly so). Sure, I get that we tend to use new technologies in old ways until we get on top of the new capabilities. However, I also argue that we can short-circuit this process if we look at core affordances.

The follow-up message was that this was the future of L&D: we’d get away from slide decks and Zoom calls, and do it all in this virtual world. I deeply desire this not to be true! My take is that slide decks, Zoom, virtual worlds, and more all have a place. It’s a further instance of ‘get the design right first, then figure out how to implement it’. I want an ecosystem of resources.

Sure, I get that such a metaverse could be an integrating environment. However, do you really want to do all your work in a virtual world? Some things you can’t do there, I reckon: machining materials, for instance. Moreover, we get benefits from being out in the world. There are other issues as well. You might be better able to deal with diversity, etc., in a virtual world, but it’ll disadvantage some folks. Better, maybe, to address the structural problems rather than try to cover them over?

As always, my takeaway is: use technology to implement better approaches; don’t bend your approaches to fit your tech. Those are, at least, my meta-reflections. What are yours?

Helen Papagiannis #DevLearn Keynote Mindmap (24 Oct 2019)
https://blog.learnlets.com/2019/10/helen-papagiannis-devlearn-keynote-mindmap/

Helen Papagiannis kicked off the second day of the DevLearn conference. She explored the possibilities of AR with exceptional examples. She went through a variety of concepts, helping us comprehend new opportunities. Exposing the invisible and annotating the world were familiar, but collaborative editing of spatial representations resurrected one of the most interesting (and untapped) potentials of virtual worlds.

Stephanie Llamas #Realities360 Keynote Mindmap (25 Jun 2019)
https://blog.learnlets.com/2019/06/stephanie-llamas-realities360-keynote-mindmap/

Stephanie Llamas kicked off the Realities 360 conference by providing an overview of the VR & AR industry. As a market researcher, she made the case for both VR and AR/MR. With trend data and analysis, she made a case for growth and real uses. She also suggested that you need to use it correctly. (Hence my talk later that day.)

(Image: keynote mindmap.)

The ARG experience (5 Jun 2019)
https://blog.learnlets.com/2019/06/the-arg-experience/

In preparing a couple of presentations for the Realities 360 conference coming up late this month, I got thinking about ARGs again. ARGs (alternate reality games) were going to be the next big thing, but some colleagues suggested that the costs were problematic. I still think that ARGs could be powerful learning experiences. And, of course, I understand that the overhead would make them useful only in particular situations: those where the needs are pressing and the real-world experience is important. And I reckon those are few. In some sense, disaster drills are an example! Still, I thought it was worth looking at the ARG experience. And, of course, I made a diagram. It’s nothing particularly astute, but on the principle of ‘show your work’…

In particular, I was thinking about augmented virtuality (AV). In the continuum from reality to virtual reality (VR), AV sits between augmented reality (AR) and VR. That is, the goal is virtual (e.g. a made-up one, not one that’s manifest in the real world, at least directly), and yet it permeates the real world. And that, to me, really defines an ARG! Of course, it doesn’t have to be tuned to the experience of a game; it can just be a scenario. But you know I’m not going to stop there! :)

So what’s going on here? I’m suggesting that there’s a story constituting the experience designed for the player. I talk about LARGs: ARGs for learning. The ARG experience here is implemented by an engine which embodies the game (just as with traditional games). Instead of the experience being mediated by a computer interface, however, activities are inserted into the player’s life.

So, there’s an underlying model driving the action (just as in traditional computer games). There are variables maintaining state, and rules operating on them. So the consequences of a choice depend on what’s happened before (actions have consequences), and you can be trending up or down depending on how you play. The rules determine what happens next. A colleague built a whole engine for this!
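To make that concrete, here’s a minimal sketch of the kind of engine I mean: state variables, plus rules that map player actions onto consequences. (This is purely illustrative Python of my own, not my colleague’s engine; all the names and numbers are made up.)

```python
# Minimal rule-driven engine sketch: variables hold state, rules map
# (state, action) to consequences. All names and values are illustrative.

class GameState:
    def __init__(self):
        self.trust = 0      # e.g. how much a virtual customer trusts you
        self.history = []   # actions have consequences, so keep them

def apply_action(state, action):
    """Apply simple rules to update state and pick what happens next."""
    state.history.append(action)
    if action == "listen_to_customer":
        state.trust += 1
    elif action == "push_product":
        # the same action plays differently depending on prior state
        state.trust += 1 if state.trust >= 2 else -1
    # the next event follows from current state, not a fixed script
    if state.trust >= 3:
        return "customer_agrees_to_meeting"
    return "customer_stays_noncommittal"

state = GameState()
print(apply_action(state, "listen_to_customer"))  # still noncommittal
print(apply_action(state, "push_product"))        # pushed too early: trust dips
```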

The information the player gets, and the decisions they take, are mediated by real-world interfaces, distributed rather than concentrated in one interface. Videos on a phone, or a screen passed along the way (e.g. an animated billboard or a TV screen in an office), bring information. Social media carries messages.

And the player is similarly sending messages as responses. Even real-world objects can be instrumented, so a door might lock or unlock as the result of player actions. The player may be choosing between competing taxis. And it can be played out over days. In the example we did, the in-game characters would take overnight to respond to your messages.
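In code terms, that distribution is just routing: the engine’s outputs get sent over whatever channel suits the message. (Again a sketch of my own; the channel functions are stand-ins for real SMS/email/social services, and the delay is a stand-in for a real scheduler handling those overnight responses.)

```python
import time

def send_sms(player, text):      # stand-in for a real SMS gateway
    print(f"[SMS to {player}] {text}")

def send_email(player, text):    # stand-in for a real mail service
    print(f"[Email to {player}] {text}")

CHANNELS = {
    "urgent": send_sms,          # time-critical nudges hit the phone
    "in_character": send_email,  # story messages arrive like real mail
}

def deliver(player, event):
    """Route one engine event to its channel, optionally delayed."""
    if event.get("delay_s"):
        time.sleep(event["delay_s"])  # a real game would schedule, not sleep
    CHANNELS[event["kind"]](player, event["text"])

deliver("pat", {"kind": "urgent", "text": "The client moved the meeting up!"})
deliver("pat", {"kind": "in_character",
                "text": "Thanks for your note; let's talk tomorrow."})
```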

Now this could all be done by a puppetmaster (or several), but the goal here would be to set it up so it can run without a suite of people involved. The goal is to design a game like we do traditionally, but manifest it across the player’s life. I do recommend seeing the movie The Game as a dramatic example.

The real question is what sorts of things match these types of goals. The example we built was for sales training: handling virtual customers. As mentioned above, disaster preparedness could make sense. Or other real-world awareness tasks (spies?). Again, there may not be many situations, but that mix of delivering a simulated experience in your life, instead of in a virtual one, could be interesting. Certainly intriguing.

At any rate, I just needed to capture the ARG experience for myself. And to share at the conference. If you’re there, do say hello!


New reality (22 May 2019)
https://blog.learnlets.com/2019/05/new-reality/

I’ve been looking into ‘realities’ (AR/VR/MR) for the upcoming Realities 360 conference (yes, I’ll be speaking). And I found an interesting model that’s new to me, and that of course prompts some thoughts. For one, there’s a new reality I hadn’t heard of! So, of course, I thought I’d share.

(Diagram: from reality, through augmented reality and augmented virtuality, to virtual reality.)

The issue is how AR (augmented reality) and VR (virtual reality) relate, and what MR (mixed reality) is. The model I found (by Milgram; my diagram slightly relabels it) puts MR in the middle, between reality and virtual reality. And I like how it makes a continuum here.

So this is the first I’d heard of ‘augmented virtuality’ (AV). AR is the real world with some virtual scaffolding. AV is more of the virtual world with a little real-world scaffolding. A virtual cooking school in a real kitchen is an example. The virtual world guides the experience, instead of the real world.

The core idea, to me, is about story. If we’re doing this with a goal, what is the experience driver? What is pushing the goal? We could have a real task that we’re layering AR on top of to support success (more performance support than learning). In VR, we totally have to have a goal in the simulated world. AV strikes me as having a story created in the virtual world that uses virtual images and real locations. Kind of like The Void experience.

This reminded me of the Alternate Reality Games (ARGs) that were talked about quite a bit back in the day. They can be driven by media, so they’re not necessarily limited to locations. A colleague had built an engine that would allow experiences driven by communications technologies: text messages, email, phone calls; these days we could add tweets and posts on social media and apps. These, on principle, are great platforms for learning experiences, as they’re driven by the tools you’d actually use to perform. (When I asked my colleagues why they think ARGs ‘disappeared’, the reason was largely cost; that’s avoidable, I believe.)

I like this continuum, as it puts ARGs and VR and AR in a conceptually clear framework. And, as I argue extensively, good models give us principled bases for decisions and design. Here we’ve got a way to think about the relationship between story and technology that will let us figure out the best approach for our goals. This new reality (and the others) will be part of my presentation next month. We’ll see how it manifests by then ;).

Realities: Why AR over VR (29 Aug 2018)
https://blog.learnlets.com/2018/08/why-ar-over-vr/

In the past, I’ve alluded to why I like Augmented Reality (AR) over Virtual Reality (VR). And in a conversation this past week, I talked about realities a bit more, and I thought I’d share. Don’t get me wrong, I like VR a lot, but I think AR has the bigger potential impact. You may or may not agree, but here’s my thinking.

In VR, you create a completely artificial context (maybe mimicking a real one), and you can explore or act on these worlds. The immersiveness has demonstrably improved outcomes over non-immersive experiences. Put to use for learning, where the affordances are leveraged appropriately, VR can support deep practice. That is, you can maximize transfer to the real world, particularly where 3D is natural. For situations where the costs of failure are high (e.g. lives), this is the best practice before mentored live performance. And we can work at scales that are hard to do on flat screens: navigating molecules or microchips at one end, or large physical plants or astronomical scales at the other. And, of course, VR worlds can be completely fantastic as well.

AR, on the other hand, layers additional information on top of our existing reality. Whether with special glasses, or just through our mobile devices, we can elaborate on our visual and auditory world. The context exists, so it’s a matter of extrapolating from it, rather than creating it whole. On the other hand, recognizing and aligning with the existing context is hard. Yet being able to make the invisible visible where you already are (and presumably are for a reason, which makes it intrinsically motivating) strikes me as a big win.

First, I think that the learning outcomes from VR are great, and I don’t mean to diminish them. However, I wonder how general they are, versus being specific to inherently spatial, and potentially social, learning. Instead, I think there’s a longer-term value proposition for AR. There’s less physical overhead in having your world annotated than in having to enter another one. While I’m not sure which will end up having greater technical overhead, the ability to add information to a setting to make it a learning one strikes me as the more generalizable capability. And I could be wrong.

Another aspect is of interest to me, too. My colleague was talking about mixed reality, and I honestly wondered what that was. His definition sounded like alternate reality, as in alternate reality games. And that, to me, is also a potentially powerful learning opportunity. You can create a separate set of experiences, fake but appearing real, bound by story and consequences of action, that can facilitate learning. We did it once with a sales training game that intruded into your world via email and voicemail. Or other situations where events and consequences intrude into your world and require decisions and actions. They don’t have real consequences, but they do impact the outcomes. And these could be learning experiences too.

At core, to me, it’s about providing either deep practice or information at the ‘teachable moment’. Both are doable and valuable. Maybe it’s my own curiosity that wants information on tap, and that’s increasingly possible. Of course, I love a good experience, too. Maybe what’s really driving me is that if we facilitate meta-learning, so people are good self-learners, an annotated world will spark more ubiquitous learning. Regardless, both realities are good, and are either at the cusp or already doable. So here’s to real learning!

A broader view of Augmented Reality (20 Feb 2018)
https://blog.learnlets.com/2018/02/broader-view-augmented-reality/

I was answering some questions about a previous post of mine on AR, and realized I had made some unnecessary limitations in my own thinking. And I may not be the only one! So I thought I’d share my thoughts on a broader view of augmented reality.

Now, most people tend to think of Augmented Reality as visual augmentation of a scene. This is typically done with a camera that registers what you’re seeing, and software that adds information to the visual field. The approach is usually either a projector on glasses (e.g. Google Glass) or an overlay on your screen (Apple’s ARKit). But what occurred to me is that there are more ways to augment the world with useful information.

One of the limitations of visual systems is their ‘directionality’: you have to be pointed in a particular direction to notice something. Movement in the periphery of your vision may draw your attention elsewhere, but otherwise you’re pretty limited. And yes, we have blind spots.

Audio, on the other hand, is direction-independent. It may be affected by distance or interference (as vision is, too), but it doesn’t depend on where you’re looking. For example, we can listen to the radio or podcasts while we drive, and GPS notifications don’t require that we look at the map (“turn left in 200 feet”).

And this information can also augment our world. It could be a narration of interesting points as you traverse some space, or it could be performance information. Or even notifications! I regularly set alarms before events to do things like get me to the call or room on time, remember to load presentations onto flash drives, and more. This extra information in the world is very helpful. It’s what Don Norman called a ‘forcing function’: making it hard for you to avoid processing it. (His example was putting something you needed to take to work in front of the door, so you couldn’t leave without at least moving it.)

Movement information can also be useful. The vibration of a phone on silent, or the different taps an Apple Watch can give to have you turn left or right, are both examples. (For that matter, I always wonder if airlines make it warmer during the flight to help you sleep, particularly at night, and colder before you need to wake up to land.)

There are lots of ways we can instrument the world to provide useful information (train arrival notifications, maps, street signs). Digital support that is contextually cued is even more powerful. But don’t limit yourself (as I was somewhat inclined to do) to just visual cues. Think of the rich suite of human perception, and leverage it accordingly.


2018 Trajectories (3 Jan 2018)
https://blog.learnlets.com/2018/01/2018-trajectories/

Given my reflections on the past year, it’s worth thinking about the implications. What trajectories can we expect if the trends are extended? These are not predictions (as has been said, “never predict anything, particularly the future”). Instead, these are musings, and perhaps wishes for what could (even should) occur.

I mentioned an interest in AR and VR. I think these are definitely on the upswing. VR may be on a rebound from some early hype (certainly ‘virtual worlds’), but AR is still in the offing. And the tools are becoming more usable and affordable, which typically presages uptake.

I think the excitement about AI will continue, but I reckon we’re already seeing a bit of a backlash. I think that’s fair enough. And I’m seeing more talk about Intelligence Augmentation, which is a perspective we continue to need, informed, of course, by a true understanding of how we think, work, and learn. We need to design technology to work with us. Effectively.

Fortunately, I think there are signs we might see more rationality in L&D overall. Certainly we’re seeing lots of people talking about the need for improvement. I see more interest in evaluation, which is also a good step. In fact, I believe it’s a good first step!

I hope it goes further, of course. The cognitive perspective suggests everything from training & performance support, through facilitating communication and collaboration, to culture. There are many facets that can be fine-tuned to optimize outcomes.

Similarly, I hope to see a continuing improvement in learning engineering. That’s part of the reason for the Manifesto and the Quinnov 8. How it emerges, however, is less important than that it does. Our learners, and our organizations, deserve nothing less.

Thus, the integration of cognitive science into the design of performance and innovation solutions will continue to be my theme.  When you’re ready to take steps in this direction, I’m happy to help. Let me know; that’s what I do!

Why AR (13 Sep 2017)
https://blog.learnlets.com/2017/09/why-ar/

Perhaps inspired by Apple’s focus on Augmented Reality (AR), I thought I’d take a stab at conveying the types of things that could be done to support both learning and performance. I took a sample of my photos and marked them up. I’m sure there’s lots more that could be done (there were some great games), but I’m focusing on simple information that I would like to see. It’s mocked up (so the arrows are hand-drawn), so understand I’m talking concept here, not execution!

(Annotated photo: magnolia.)

Here, I’m starting small. This is a photo I took of a flower on a walk, with the type of information I might want while viewing the flower through the screen (or glasses). The system could tell me it’s technically a tree, not a bush (thanks to my flora-wise better half). It could also illustrate how large it is. Finally, the view could indicate that what I’m viewing is a magnolia (which I wouldn’t have known), and show me, off to the right, the flower-bud stage.

The point is that we can get information around the particular thing we’re viewing. I might not actually care about the flower bud, so that might be filtered out, and it might instead talk about any medicinal uses. Also, it could be dynamic, animating the process of going from bud to flower and falling off. It could also talk about the types of animals (bees, hummingbirds, ?) that interact with it, and how. It would depend on what I want to learn. And, perhaps, offer some additional incidental information on the periphery of my interests, for serendipity.
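If you wanted to play with that filtering idea in code, it’s little more than matching topic-tagged annotations against a learner’s interests. (A toy sketch of my own; real AR would get the annotations from object recognition plus a content service, and the tags and texts here are made up.)

```python
import random

# Hypothetical annotations for the magnolia example, tagged by topic.
ANNOTATIONS = [
    {"topic": "botany", "text": "Technically a tree, not a bush."},
    {"topic": "botany", "text": "Bud stage shown at right."},
    {"topic": "medicine", "text": "Bark used in traditional remedies."},
    {"topic": "ecology", "text": "Visited by beetles as well as bees."},
]

def annotate(interests):
    """Keep annotations matching the learner's interests, plus one
    off-interest item for serendipity."""
    matched = [a for a in ANNOTATIONS if a["topic"] in interests]
    rest = [a for a in ANNOTATIONS if a["topic"] not in interests]
    if rest:
        matched.append(random.choice(rest))  # peripheral serendipity
    return [a["text"] for a in matched]

print(annotate({"botany"}))
```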

(Photo: neighborhood view.)

Going wider, here I’m looking out at a landscape, and the overlay is providing directions. Downtown is straight ahead, my house is over that ridge, and infamous Mt. Diablo is off to the left of the picture. It could do more: pointing out that the green ridges are grapes, or providing the name of the neighborhood in the foreground (I call it Stepford Downs, after the movie ;).

Dynamically, of course, if I moved the camera to the left, Mt. Diablo would get identified when it sprang into view. As we moved around, the overlay could point to the neighboring towns in view, and in the direction of further towns blocked by mountain ranges. It could also identify the river flowing past to the north. And we could instead focus on other information: infrastructure (pipes and electricity), government boundaries; whatever’s relevant could be filtered in or out.

(Photo: road view.)

And in this final example, taken from the car on a trip, AR might indicate some natural features. Here I’ve pointed to the clouds (and indicated the likelihood of rain). Similarly, I’ve identified the rock and the mechanism that shaped it. (These are all made up, so they could be wrong; Mt Faux definitely is!) We might even be able to touch a label and have it expand.

Similarly, as we moved, information would change as we viewed different areas. We might even animate what the area looked like hundreds of thousands of years ago and how it’s changed.  Or we could illustrate coming changes. It could instead show boundaries of counties or parks, types of animals, or other relevant information.

The point here is that annotating the world, a capability AR has, can be an amazing learning tool. If I can specify my interests, the system can capitalize on them to develop my understanding. And this is as an adult; think about doing this for kids, layering on information in their Zone of Proximal Development and their interests! I know VR’s cool, and has real learning potential, but there you have to create the context. Here we’re taking advantage of it. That may be harder, but it’s going to have some real upsides when it can be done ubiquitously.

Realities 360 Reflections (1 Aug 2017)
https://blog.learnlets.com/2017/08/reality-reflection/

So, one of the two things I did last week was attend the eLearning Guild’s Realities 360 conference. Ostensibly about Augmented Reality (AR) and Virtual Reality (VR), it ended up being much more about VR. Which isn’t a bad thing; it’s probably as much a comment on the state of the industry as anything. However, there were some interesting learnings for me, and I thought I’d share them.

First, I had a very strong visceral exposure to VR. While I’ve played with Cardboard on the iPhone (you can find a collection of resources for Cardboard here), it’s not quite the same as a full VR experience. The conference provided a chance to try out apps for the HTC Vive, Sony PlayStation VR, and the Oculus. On the Vive, I tried a game where you shot arrows at attackers. It was quite fun, but mostly developed some motor skills. On the Oculus, I flew an X-Wing fighter through an asteroid field, escorting a ship and shooting enemy TIE fighters. Again, fun, but mostly about training my motor skills in this environment.

It was one on the Vive, I think, that gave me a real experience. In it, you’re floating around the International Space Station. It was very cool to see the station and experience the immersion of 3D, but it was also very uncomfortable. Because I was trying to fly around (instead of using handholds), my viewpoint would pass through the bulkhead doors, and the positioning gave the visual cues that my chest was going through the metal edge. This was extremely disturbing to me! As I couldn’t control it well, I was doing this continually, and I didn’t like it. Partly it was the control, but it was also the total immersion. And that was impressive!

There are empirical results that demonstrate better learning outcomes for VR, and certainly I can see that, particularly for tasks that are inherently 3D. There’s also another key result, as was highlighted in the first keynote: VR is an ‘empathy machine’. There have been uses for things like understanding the world according to a schizophrenic, and a credit card call center helping employees understand the lives of card users.

In principle, such environments should support near transfer when designed to closely mimic the actual performance environment. (Think: flight or medical simulators.) And the tools are getting better. There’s an app that allows you to take photos of a place to put into Cardboard, and game engines (Unity, Unreal, or both) will now let you import AutoCAD models. There was also a special camera that could sense the distances in a space and automatically generate a model of it. The point being that it’s getting easier and easier to generate VR environments.

That, I think, is what’s holding AR back. You can fairly easily use it for marker- or location-based information, but actually annotating the world visually is still challenging. I still think AR is of more interest (maybe just to me), because I see it eventually creating the possibility of seeing the causes and factors behind the world, allowing us to understand it better. I could argue that VR is just extending sims from the flat screen to surround, but then I think about the space station, and… I’m still pondering. Is it revolutionary or just evolutionary?

One session talked about trying to help folks figure out when VR and AR make sense, and this intrigued me. It reminded me that I had once tried to characterize the affordances of virtual worlds, and I reckon it’s time to take a stab at doing the same for VR and AR. I believed then that I was able to predict where virtual worlds would continue to find value, and I think results have borne that out. So the intent is to get on top of when VR and AR make sense. Stay tuned!
