Aug 15, 2013

Drawing Parallels

Augmented [Hyper]Reality

 

There is a problem I see currently with Virtual Worlds and Augmented Reality: there are essentially two industries acting as though they are separate. Even within the AR field, everyone is betting on mobile, and that in itself isn't quite right. Mobile is a stop-gap on the way to the real end goal: a truly immersive and spatial Augmented [Hyper]Reality platform that is wireless, markerless, and high-fidelity.

 

There's more to AR than overlaying some information, and right now AR comes across as very nascent at best. That's why I'm working on Parallel Worlds with Kevin Simkins to change that. Despite the restrictions on what I'm actually allowed to talk about versus what is truly going on, I've been informed that I need to write an overview for public consumption and talk a bit about it in my social media spheres, all without violating those restrictions. If ever there was a trickier post on this blog, I wouldn't be able to point you to it.

 

 

[Image: World Builder - Cube]

 

 

This is a touchy subject as of late, because of various “obligations” that I am bound to. You may have noticed a few prior posts being taken down before this one; the reasoning is that when some high-level people call you on the phone and tell you to remove something… you don’t ask questions. That being said, I did get around to asking at least some questions about what I am and am not allowed to talk about, in order to better satisfy both sides of this (them and the public).

 

Looking at the general checklist, it is safe to say I can talk about the higher-level ideas, but I’ll likely be unleashing the Hellraiser Puzzle Box if I explain the inner workings. So this is what I’m going to do with this post: go over some high-level concepts and a general overview of what I’m involved with, while keeping in mind the checklist of restricted topics that was delivered to me. Essentially, I’m allowed to say what I’m up to, but not how I’m actually doing it in any detail. Assume going forward that I’ll be leaving out a lot of very crucial details; it should still give you an idea of what I’m shooting for with my involvement in the Parallel Worlds project, so expect a lot of mentions of restricted details in this post.

 

I’ll try to give you an idea of what I’ve been tied up with for the past 6-12 months, and hopefully in doing so my phone won’t ring again with instructions to make edits or take the post down. It’s a balance between keeping you in the loop and making sure I don’t violate my gag orders (which is becoming increasingly hard).

 


 

 

[Hyper]Reality

 

The first thing on the agenda is to better define [Hyper]Reality. Most people who see the term immediately think it is synonymous with Augmented Reality, and I can understand the confusion. It starts with Augmented Reality, but takes it a lot further than markers or overlaying information in a dumb-terminal manner. The best way I can explain it is to say it is the equivalent of Second Life in real life. I wouldn’t call it a separate entity (the way VR and AR are currently treated) but rather a ubiquitous space that is interchangeable and contiguous.

 

The closest example I can offer for this is the World Builder video (which I highly suggest watching below before you continue on).

 

 

 

 

In order to achieve this sort of immersive, geo-location-aware, markerless augmented reality system, you need to overcome a hell of a lot of technical hurdles that really haven’t been addressed in the industry. As a heads-up, I refer to Virtual Reality and AR as a single industry because they are really two sides of the same coin.

 

The first issue is networking. On the communications side, you need something that can handle distances far greater than your typical Wi-Fi router while still remaining low-power. We could (in theory) utilize existing cellular networks like 4G LTE, but the data caps and unfavorable data packages from cell providers make this a hit-or-miss scenario.

 

Ergo, what is on the table is a combined utilization of various communications systems and a few other things I’m not allowed to disclose. It has about a 5-mile radius at the output power of a cell phone, and boosted it could reach about 10 miles without a hassle. Of course, there are methods to extend that further without resorting to the brute force of boosting power, but this is one of those talking points that is on the blacklist. Suffice it to say, a lightweight method for near-unlimited range in wireless augmented [hyper]reality exists.

 

There are additional methodologies that apply on the software side, such as area-of-interest networking algorithms with (undisclosed) optimizations and additions, and a number of other things that require a level of detail exceeding what I am able to talk about at this time.
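
For the uninitiated, here is what area-of-interest filtering means in its plain-vanilla, textbook form (with none of the undisclosed optimizations): the server relays an entity’s updates only to clients whose positions fall within an interest radius, so per-client bandwidth tracks local density rather than total world size. A minimal sketch in Python:

```python
import math

# Toy area-of-interest (AOI) filter: the textbook version, not the
# Parallel Worlds implementation (those optimizations are undisclosed).
# A client only receives updates for entities inside its AOI radius,
# so bandwidth scales with local density, not total world size.

AOI_RADIUS = 50.0  # meters; an illustrative value

def within_aoi(observer_pos, entity_pos, radius=AOI_RADIUS):
    """True if the entity falls inside the observer's area of interest."""
    dx = entity_pos[0] - observer_pos[0]
    dy = entity_pos[1] - observer_pos[1]
    return math.hypot(dx, dy) <= radius

def relay_updates(clients, entities):
    """Yield (client_id, entity_id) pairs worth sending this tick."""
    for cid, cpos in clients.items():
        for eid, epos in entities.items():
            if within_aoi(cpos, epos):
                yield cid, eid

# Two clients and three entities scattered around the world:
clients = {"alice": (0.0, 0.0), "bob": (500.0, 500.0)}
entities = {"pet": (10.0, 5.0), "sign": (480.0, 510.0), "far": (9999.0, 0.0)}
print(list(relay_updates(clients, entities)))
# [('alice', 'pet'), ('bob', 'sign')] -- 'far' is relayed to no one.
```

A production system would back this with spatial hashing or a quadtree rather than the brute-force double loop above, but the bandwidth argument is the same.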

 

 

[Image: Area of Interest]

 

The second issue is storage: typical storage is completely unsuited to Virtual Reality and Augmented Reality. Even if you apply a cloud storage system, it is still woefully underpowered and ill-prepared. The cost of storing all of those assets becomes a scaling problem over time, and gets ridiculously expensive as the assets accumulate. Typically it all sits in a data center someplace, paid for by the company that runs the virtual environment service.

 

It’s not really hard to imagine the recurring and escalating costs of a data center over time… so this is a specific task I’ve tackled in the Parallel Worlds project (and solved). I’ve applied a type of polymorphic data structure that yields roughly 100 sextillion yottabytes of storage address space. To put that into perspective: it’s not infinite, but the odds of running out anytime soon are slim to none. You could easily fit many Internets into that space, and they would rattle around like an ant in a football stadium. That’s what I call real data management and storage capability.

 

To put this into mathematical perspective, the polymorphic data system can hold:

 

1.048576 × 10^29 exabytes

 

 

And even exhausting that is only a minor issue: you create a secondary node layer to give the system a fresh address space to work with. Rinse and repeat for every address space you exhaust (assuming you ever exhaust the first).
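
For those who want to check the arithmetic behind that figure, my assumption (the actual derivation is not something I can publish) is that the telltale factor of 1.048576 comes from converting yottabytes to exabytes in binary prefixes, since 2^20 = 1,048,576:

```python
# Checking the arithmetic behind the quoted capacity figure.
# Assumption on my part: the factor 1.048576 is a binary-prefix
# conversion, since 2**20 == 1_048_576.

SEXTILLION = 10**21
yottabytes = 100 * SEXTILLION      # 100 sextillion YB = 1e23 YB

# One yottabyte is 2**20 exabytes when both are read as binary
# prefixes (2**80 bytes / 2**60 bytes).
exabytes = yottabytes * 2**20

print(f"{exabytes:.6e} exabytes")  # 1.048576e+29 exabytes
```

With decimal prefixes throughout, the same 10^23 yottabytes would come out to a flat 10^29 exabytes; either way, exhausting it is not a near-term worry.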

 

This alone has garnered quite a lot of interest from folks ranging from NASA to the USAF, the Navy, and a number of military contractors (and the Department of Defense). It would seem the polymorphic data system on its own is quite valuable, and yet it is just a single component of the overall platform.

 

The third issue is more complex: it involves less a tangible solution and more a change in thinking. The storage system already demands that on its own (everything is everything else), but this is a total redefinition of what virtual worlds and augmented reality are used for and how they are interacted with on every single level, from the overt to the sociological.

 

I have a problem with the typical thinking of mobile as the conduit for AR, because it severely cripples the potential by removing the most powerful means of interaction in a virtual environment – your hands.

 

The first step, then, is to break the assumption of seeing the augmented world through the tiny little window that is your cell phone. Free up your hands, and then use them as the natural means of interaction in the virtual world you are immersed in. Of course, now I conjure images of Google Glass… but Glass is meant more as a transformative technology, a catalyst for an industry that needed a push to get on board. It’s an excellent start to get the ball rolling, but it’s not something you would use for [Hyper]Reality, because it’s woefully underpowered and ill-equipped to handle it. That being said, the premise of a lightweight headset is a correct assumption for [Hyper]Reality; the components for that headset just don’t currently exist in available development-kit form. Luckily, we already know this at Parallel Worlds and expect to prototype a custom headset in the future to handle all of this.

 

 

[Image: Meta One | Space Glasses]

 

 

This of course brings us to the paradigm of markerless and spatially aware augmented reality: digital items in the real world that are contextually aware of their real-world surroundings and can interact with them. A general example: imagine that your Meeroo knows the couch is in your living room, and the digital pet decides to sleep on your real-world couch. Once you get the gist of this, you also understand the really interesting problems that arise when you realize virtual items need occlusion and interaction in the real world.
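
Occlusion is far easier to state than to solve, but the per-pixel test at the heart of it is simple enough to sketch. Assuming you have a sensed depth map of the real room from somewhere (what Parallel Worlds uses for this falls under the restricted details), compositing boils down to:

```python
# Minimal per-pixel occlusion test of a virtual object against a sensed
# depth map of the real room. Purely illustrative; the actual Parallel
# Worlds pipeline is not something I can describe.

def composite_pixel(virtual_rgba, virtual_depth, real_depth):
    """Return the virtual pixel only if no real surface sits in front.

    virtual_depth / real_depth: distance from the eye, in meters.
    Returns None when real-world geometry occludes the virtual fragment.
    """
    if real_depth < virtual_depth:
        return None            # e.g., the couch arm hides the pet's tail
    return virtual_rgba        # the pet draws in front of the real scene

# The pet fragment is 2.4 m away but the couch surface is 1.9 m away,
# so the couch occludes it and nothing is drawn at this pixel.
print(composite_pixel((130, 90, 60, 255), virtual_depth=2.4, real_depth=1.9))
```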

 

There is (also) the need to figure out how one would effectively “grid” the entire real world in order to anchor our virtual things and give them a frame of reference without “markers” or visual cues to guide them. A lot of very bright people have been working on such a thing for quite a while, but the results have thus far been lackluster. That being said, a method to do this (ironically) already exists in another industry altogether, and I’ve taken the time to outline the basis of how it would work.
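
I can’t name the industry or the method, but to make the notion of “gridding” the world concrete, here is the generic flavor of the problem in a naive lat/lon tiling of my own invention (illustrative only, and emphatically not the actual technique):

```python
# Naive world-gridding sketch: map (lat, lon, altitude) to a stable cell
# ID that a virtual object can anchor to. This tiling is my own
# illustration; the method alluded to above remains undisclosed.

CELL_DEG = 0.0001   # ~11 m of latitude per cell; illustrative resolution
CELL_ALT = 2.0      # meters per vertical layer

def cell_id(lat, lon, alt=0.0):
    """Quantize a world position into a (row, col, layer) grid cell."""
    row = int((lat + 90.0) / CELL_DEG)
    col = int((lon + 180.0) / CELL_DEG)
    layer = int(alt / CELL_ALT)
    return (row, col, layer)

# Anchor a virtual object, then look it up later from a nearby reading.
anchors = {}
anchors[cell_id(41.87815, -87.62984, 3.0)] = "virtual_signpost"

# A later position fix a few meters off lands in the same cell, so the
# object is recovered without any visual marker.
print(anchors.get(cell_id(41.87818, -87.62983, 3.5)))  # virtual_signpost
```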

 

Continuing on, one of the other issues that had to be overcome was the complete lack of hardware to facilitate such a [Hyper]Reality system. Had Parallel Worlds been designed around a typical Vuzix AR headset, the hardware wouldn’t know what the hell to do with the entirely new types of information we’re giving it to work with, to say nothing of its inability to handle the wireless communications, high fidelity, geo-positional information and more.

 

The closest headset that could be cited for [Hyper]Reality at this point is the Meta One | Space Glasses. Even in Generation 1 form they fall short for this… however, I believe a Gen 2 or Gen 3 will be far more capable. I’m still happy to see the initial steps taken in this direction.

 


 

 

Elephants in the Room

 

I’m reminded of a conversation Russell Brand had around the MTV Video Music Awards (2008) that introduced him to the United States audience. There was this whole thing about Britney Spears and the pictures surfacing of her flashing her token slot, not to mention her total (and inexplicable) meltdown. Those were topics off the table for working with her, and Russell was totally frustrated about it, asking whether they thought there would be an elephant in the room. And to the eternal credit of the MTV promotional director, the reply was: “What if there really was an elephant in the room?”

 

 

 

 

Of course, with what I’ve been up to lately there is also that metaphorical elephant in the room and, to an extent, a literal one if you know what I’m talking about. This is, of course, that little bit about the fidelity of the [hyper]reality system itself: what on Earth is going to handle high-definition graphics? I’ve talked before about what would handle this, and only recently has it come to be treated as plausible (if still met with skepticism) rather than an outright statement of heresy. This is (obviously) a nod to Euclideon Unlimited Detail and the ability to use it as a game engine.

 

 

[Image: Elephants]

 

 

When I began on this topic in Quantum Rush and Quantum Rush: Deux, I was merely sorting out how such a system could ultimately work and why it displays the particular properties of operation that it does. Over time, and after a lot of research, I pieced together that it was not only possible but highly probable that Euclideon was doing exactly what they said they were doing in making a game engine – with the obvious stop-gap of releasing a product for geospatial visualization in the meantime. That product is GEOVERSE, but it doesn’t stop there. I’ve seen glimpses of the game engine implementation, and it looks like they are right on track.

 

That being said, I must also cover the reasoning why something like highfidelity.io rubs me the wrong way.

 

highfidelity.io

 

What this boils down to is the willful misunderstanding (or misrepresentation) of technology, packaged into a new product via grandiose claims. If you understand voxel technology, and specifically sparse voxel octrees, you would immediately understand that while the claims of photorealism out to an endless horizon are possible in principle, doing such a thing with this particular implementation of voxels would require a supercomputer. This is why voxels are commonplace in the medical imaging industry but not in game engines, let alone a dynamic and endless sandbox virtual environment. This is also the exact same reason people literally jumped my !@#$ when I wrote about how Euclideon Unlimited Detail worked.

 

What amazes me is that the same people who fought me tooth and nail about how Euclideon was impossible seem to suspend disbelief with highfidelity.io and praise it.

 

In the medical industry, they really only need to display a single 3D item in photorealistic detail, knowing full well that the more unique items you add to the view the more processing power and RAM you need to pull it off. In this manner, you can easily display a photorealistic model of the human heart but the computational power required for something like highfidelity.io simply becomes astronomical by comparison.
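
To make that scaling argument concrete, here is a bare-bones sparse voxel octree (the textbook structure, not anyone’s production code). Memory tracks the number of unique occupied voxels, which is exactly why one photorealistic heart is tractable while an endless sandbox of unique content is not:

```python
# Bare-bones sparse voxel octree illustrating the memory-scaling
# argument: node count (i.e., memory) tracks *unique* occupied space.

class SVONode:
    __slots__ = ("children", "color")
    def __init__(self):
        self.children = [None] * 8   # one slot per octant
        self.color = None            # payload stored at the leaves

def insert(root, x, y, z, color, depth):
    """Insert a voxel at integer coordinates inside a 2**depth cube."""
    node = root
    for level in range(depth - 1, -1, -1):
        octant = (((x >> level) & 1) << 2) | (((y >> level) & 1) << 1) | ((z >> level) & 1)
        if node.children[octant] is None:
            node.children[octant] = SVONode()
        node = node.children[octant]
    node.color = color

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children if c)

# Fully populate a single 16^3 region of unique voxels:
root = SVONode()
for x in range(16):
    for y in range(16):
        for z in range(16):
            insert(root, x, y, z, (x, y, z), depth=4)
print(count_nodes(root))  # 4681 nodes -- and every new region of unique
                          # detail adds more, while repeated or empty
                          # space is nearly free (the medical-imaging case).
```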

 

And therein is the (other) elephant in the room.

 

Watching Philip Rosedale sell the system to the public aggravates me, because he’s building on top of a crippling flaw in the system and betting that the entirety of the user base will donate their spare computers and bandwidth to make up for it. This is both disingenuous and careless – especially for Philip, who now holds a position of “authority” in the virtual worlds industry and thus commands a sort of blind faith from his audience in his assertions.

 

No evidence could articulate this better than the sheer amount of money highfidelity.io has raised on faulty logic. For what it’s worth, they may as well have sketched something on a napkin and written a vision statement.

 

This scenario is highly damaging to the entire industry, and it dismays me that few others seem to be pointing it out. Even worse are the people who actively defend this disingenuous behavior, when many of those very same people know better. It’s a disgrace.

 

In the end, the loser is you and the entire virtual worlds industry, thanks to this bit of snake oil. If he violates the industry’s trust with more hype for a system unlikely to live up to its claims, the mainstream-media blowback from Second Life will look like small potatoes by comparison. Quite frankly, the community has yet to live down the stigma and tarnished image from the collapse of the Second Life hype bubble… I find it unthinkable that people would willfully facilitate yet another hype bubble at their own expense.

 

The type of point-cloud rendering system that would make the claims of highfidelity.io possible is also one that doesn’t require a supercomputer to use; a typical laptop will do. If highfidelity.io had such a rendering system, they wouldn’t need your spare computers to make up for that crippling flaw, nor would Philip be trying to paint the donation model as an innovation so you’ll buy into it. The bottom line is, he absolutely needs you to buy the line of BS, or highfidelity.io simply doesn’t work at all.
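
As I argued in the Quantum Rush posts, my inference is that such a system treats rendering as a search problem: resolve each screen pixel with one indexed, front-to-back lookup, so per-frame cost tracks the pixel count rather than the point count. A toy sketch of that idea (speculation on my part, not a confirmed description of anyone’s engine):

```python
# Toy "rendering as search" sketch: resolve each screen pixel with one
# indexed lookup instead of walking the whole point cloud. This reflects
# my inference about Unlimited-Detail-style renderers, nothing official.

import random
from collections import defaultdict

def build_index(points):
    """Bucket points by the pixel they project to, nearest point first.
    Built once as a preprocess; per-frame cost lives in render()."""
    index = defaultdict(list)
    for px, py, depth, color in points:
        index[(px, py)].append((depth, color))
    for bucket in index.values():
        bucket.sort()                    # smallest depth (nearest) first
    return index

def render(index, width, height, background=(0, 0, 0)):
    """One search per pixel: per-frame cost scales with the number of
    pixels on screen, not the number of points in the world."""
    return [[index[(x, y)][0][1] if (x, y) in index else background
             for x in range(width)]
            for y in range(height)]

# 200,000 points, but rendering a 4x4 "screen" performs only 16 lookups.
points = [(random.randrange(4), random.randrange(4),
           random.random(), (255, 255, 255)) for _ in range(200_000)]
print(render(build_index(points), 4, 4)[0][0])   # (255, 255, 255)
```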

 

At the end of the day, his disinformation causes massive amounts of damage to the people who are trying to legitimately solve these problems and innovate. When he screws up, it’s people like myself who have to explain why we’re not doing the same thing. In essence, Philip is managing to quickly make himself into the black sheep of the virtual worlds industry to people in the know and who aren’t so gullible, and just like that one family member that shows up drunk to the family gatherings and pukes all over the coat room, you get really sick and tired of having to clean up those messes and assure everyone that you’re not like them.

 


 

 

Oculus Rift

 

I have no love for the Oculus Rift. This needs to be said in no uncertain terms. No, it’s not revolutionary, nor is it going to be the future of virtual reality. The majority of the advancements that Oculus touts are nothing more than Moore’s Law at work: the inevitable advances in the technology industry that would have happened anyway.

 

The bottom line is, they made an HMD that is smaller, lighter, and higher-definition using off-the-shelf parts. It’s good that they built it, and by no means do I think they shouldn’t have. But it hasn’t actually solved the original problems that kept HMDs from winning widespread adoption and re-igniting VR as a mass medium.

 

As a matter of fact, it carries the exact same issues that stunted widespread adoption during the last generation of VR hype:

 

 

[Image: Oculus Rift with Omni treadmill (NASA Mars demo)]

 

 

In order to actually utilize the Oculus Rift effectively, you need to overcome the fact that you are now blind, stationary, and without a means of interaction. Ergo, the picture above is what your living room would need to look like in order to use an Oculus Rift effectively for Virtual Reality:

 

  • Add in the Omni treadmill so you can walk naturally (mostly).

 

  • Add a headset microphone to hear and talk.

 

  • Add a Kinect Camera to keep track of your body and natural gestures.

 

 

When people think of Virtual Reality in the 1990s, they often cite that the headsets were bulky and low-resolution. The mental image of the “stone age” of Virtual Reality is something like this:

 

 

[Image: 1990s virtual reality helmet]

 

 

And we lament how horrible things were back then and praise Oculus Rift and Omni Treadmill for revolutionizing everything, making it lighter and better, and ushering in a new renaissance for Virtual Reality as the future of interaction!

 

But wait a minute… that’s not what Virtual Reality in the late ’90s actually looked like. As a matter of fact, it looked more like this:

 

[Image: 1990s VR arcade pods]

Which looks pretty much like our first image of the Oculus Rift + Omni setup.

 

But if you didn’t own one of these in the 1990s, then why would the mass market own an Omni + Oculus Rift?

 

Part of being an innovator involves being able to (re)define the future. What I’m seeing with the hype surrounding the Oculus Rift and Omni (Linden Lab jumping on the bandwagon included) is a dredging up of the past, a new coat of paint slapped on it, and a message to the public:

 

“This time it’s gonna work and everyone will love it!”

 

It’s literally the same old song and dance as in the 1990s, which is pretty much like redesigning Laser Tag 15 years later and touting it as the next killer game of the future.

 

If you want to get an idea of just how much this is repeating the past with a new coat of paint, we need only look at the fact that John Carmack is now the CTO of Oculus.

 

Yes, the John Carmack… the one who brought you Doom, Quake, and the rest in the ’90s. As a matter of fact, I actually played Wolfenstein 3D (and Doom) on those VR pods in the 1990s at the arcade; Blockbuster Golf n Games, to be precise. That’s how much of a repeat this supposed Virtual Reality revolution is turning out to be…

 

John “I invented Doom” Carmack has a direct vested interest in driving Oculus.

 

 

 

Hail! Hail! The gang’s all here!

singing the same old song and dance

 

[Image: John Carmack]

 

 


 

 

The (Real) Future of Virtual Reality

 

 

Is simply to evolve it into Augmented [Hyper]Reality.

 

The only thing you need to play the game is a pair of glasses, because the entire real world is now your playground.

 

 

 

 


 

 

The only thing left is figuring out all those pesky details…

 

And now you know what I’ve been up to.

 

Oh… that and preparing to attend the Chicago Innovation Awards in October. I was nominated (with Kevin Simkins) in the “Up and Coming” category. It’s kind of a big deal…