The "instfield" line is halving the exponent. In Smalltalk-80 you could write it
guess := self timesTwoPower: self exponent // -2.
Also, "user notify:" would become "self error:". Modern Smalltalk actually has complete exception handling, so you could also define an exception class (for example ArgumentOutOfRange) and trap it.
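For readers without a Smalltalk image handy, the exponent-halving trick above (seeding Newton's method for a square root with roughly 2^(exponent/2)) can be sketched in JavaScript. The function name, the helper structure, and the iteration count are my own choices, not anything from Smalltalk-80:

```javascript
// Sketch of the Smalltalk idea: seed Newton's method for sqrt(x)
// by halving the binary exponent of x, then iterate.
// (All names here are illustrative, not from the original code.)
function sqrtNewton(x) {
  // The "self error:" / exception-class case from the comment above.
  if (x < 0) throw new RangeError("sqrtNewton: negative argument");
  if (x === 0) return 0;
  // Initial guess: x * 2^(floor(exponent / -2)), i.e. roughly 2^(exponent/2),
  // mirroring Smalltalk's floored division in `self exponent // -2`.
  const exponent = Math.floor(Math.log2(x));
  let guess = x * Math.pow(2, Math.floor(-exponent / 2));
  // Newton iteration: guess' = (guess + x/guess) / 2.
  for (let i = 0; i < 30; i++) {
    guess = (guess + x / guess) / 2;
  }
  return guess;
}
```

For example, `sqrtNewton(16)` seeds the iteration with 16 * 2^-2 = 4, which is already exact; the RangeError plays the role of the trappable ArgumentOutOfRange-style exception.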
It's amazing to see how little the concepts have changed in the desktop world since the Xerox Alto. The only different concept I've seen in recent years was 10/GUI (http://10gui.com/). With VR, MR, and AR coming in, we need such innovation again.
Yeah, worth watching the video. The music accompaniment was terrible and louder than it should have been, but the concept is nice.
At the end of the video, they show it where the trackpad would be, which makes me think there might be a size issue. They also show it with a regular keyboard, and I'm not sure everyone would require one. I do know that people like tactile keyboards, but some folks have managed to function without one. So a model where the touchpad is also the keyboard might be cool.
I touch-type, but maybe the trackpad could also be some sort of secondary display? I could see that coming in handy, perhaps even to display the above-mentioned keyboard for those who can't touch-type.
Off-topic: HN is one of the few sites where I regularly click on links in the comments. On most sites, that's not a very productive activity. On HN, it is often interesting and educational.
Smalltalk was the first dynamic UI environment but within a few years you could do this on lisp machines as well (both the Xerox D machines in either Smalltalk or Interlisp modes and MIT CADR lispms). Which is to say the Smalltalk environment was influential both at the time and later.
If you read the Xerox PARC papers, there was a lot of shared work between the Interlisp-D, Smalltalk and Mesa/Cedar teams.
Actually, some of the REPL and debugging features in Mesa/Cedar were added because they wanted to appeal to Interlisp-D and Smalltalk users while offering a strongly typed development environment.
Indeed I worked at PARC (at ISL, using Interlisp) and used all three platforms.
This article was about the '73 Alto implementation though, which preceded the D machines (and preceded my time at PARC by a decade as well). At that time Interlisp was PDP-10 only and had no GUI.
Is there someone somewhere discovering the ubiquitous tools of the next decades? Does computer science still have the potential to bring the same kind of world-changing tools?
Here's something I'd like to ask Alan Kay: at that time, you probably had the feeling that you were working on groundbreaking technologies, but what were the magazines saying? I'm pretty sure some people thought computers were going to play a role, but was this obvious to everyone? Did people hesitate between funding CS and, say, flying cars?
I am not he, but I may be able to offer some history/perspective.
There was a real fear of computers back then, much like the fear of AI today. They worried that the 'big brain' would put people out of work and take over the world. Multiple movies were made where the computer was the antagonist, including the famous 2001: A Space Odyssey.
The first computer I ever touched was called a calculator and I've heard a couple of reasons for this. The official reason was that people expected computers to be big and the HP 9100A wasn't very big by standards of the day. The second, and unofficial reason, was that it came out in the late 1960s, which was at a time when people had a real fear of computers.
If you're curious, I believe the model I used was the 9100B. It supported magnetic strip cards and punch cards to store your algorithm. You'd load it into the memory by swiping the card or inserting the punch card. It would output to a TV and there was a plotter option that functioned pretty much like a modern plotter. There was also a dot-matrix printer included that was like what was on the older POS systems for receipt printing.
Anyhow, they called it a calculator instead of a computer. People had notions of what a computer was supposed to be and this was not that.
At the time, I didn't really appreciate it for what it was. I think my first exposure was in 1971, when it was used almost exclusively in the physics labs. It wouldn't be used in the astronomy labs until later, after the observatory was completed and the telescope put in.
This was all at Kents Hill, no apostrophe, if you'd like to learn more about it. It's a boarding prep school located in Central Maine. As I was leaving, we got a mainframe connection to Dartmouth, which was a 'real' computer, though I never played with it.
So, largely people didn't seem to envision much for the future of computing, at least not at the personal level. There were exceptions, some visionaries who saw the potential, but the idea of a personal computer wouldn't really exist until later in the decade. Frankly, nobody saw much use for one. Computers were pretty useless without a great deal of domain knowledge. To use one, you were almost required to know how to program one. Specialty software was the norm and interaction was limited without knowing how to program.
For the most part, games were also limited to hardware specific devices and were console format or, eventually, arcade games that were not general purpose computers. People really didn't have much use for one in their homes and, as mentioned above, there was a real fear of computers.
YouTube has a copy of the HP 9100 promotional video. I can dig it out, if you want. It was pretty remarkable but I wasn't nearly as impressed by it as I should have been. So, I don't really think that the future of computing was obvious to everyone. In fact, I'd say the opposite was true and that it was only obvious to a very limited set of visionaries.
Edited to clean up some verbiage and add a wee bit more information about the 9100.
Edit again. If you have 21 spare minutes and would like to learn about the HP 9100, you can click this link: https://www.youtube.com/watch?v=Ki1Inux1_wU
(I figure some folks might actually be curious about the tech of the day. That was pretty state-of-the-art back then and far more compute power than most people would be exposed to for the next decade and a half, if not longer.)
Thank you for the historical info, and the video; as a big computing history buff, I really appreciate both the first-hand account and the media of the time!
I'm always happy to help. I get a great deal from HN, lots of things to learn and smart people to answer questions. It makes me happy when I'm able to give a return contribution.
If, for some reason, you have further questions, uninvolved@outlook.com is the email address that I hand out in public. I tend to write long posts and replies, so you have been officially warned.
"Smalltalk is a highly-influential programming language and environment that introduced the term "object-oriented programming" and was the ancestor of modern object-oriented languages."
Funny way to spell Simula.
It's fascinating how GUIs haven't changed much since. I run a UI design agency (we work mostly with startups) but would love to collaborate with someone who's working on a (niche?) OS and see if we can redesign the UI. Open source is fine. If you're working on something, hit me up. Details in my bio :)
> It's fascinating how GUIs haven't changed much since.
I think you mean WIMP GUIs haven't changed much (and indeed, the WIMP interfaces of the classic Mac, Windows, current Mac, Apollo, Sun all felt very much like the Smalltalk GUI to me).
But I think the tile+touch interface pioneered on the iPhone is indeed a new GUI paradigm which shrugged off the baby-boomer "desktop" metaphor.
These shifts require a change in hardware, as any interface that gets traction with technology T forms a local maximum, which makes it inherently conservative. That suggests that the phone interface is unlikely to change.
What are the elements of the next generation? Looking at the horizon, some subset of voice and AR+gesture, and likely shared experience (note: I don't believe VR offers any affordances for mass UI). So far I have seen nothing viable (including some efforts I've participated in) with these technologies though.
Meanwhile, I find those multi-column views infuriatingly wasteful. I KNOW I'm in folder A/B/C because I literally just clicked them. I prefer being able to use a smaller explorer window that shows more info. One key to this is a mouse with a fourth and fifth button, which Windows intuitively maps to "forward" and "back", and which the Mac unhelpfully maps to "nothing" and "Exposé" and doesn't seem to have useful built-in alternatives for.
Having back and forward buttons on my thumb, combined with Windows' "Folders come first" list sorting, makes folder spelunking much simpler for me.
You can redesign the UI using Canvas; no need to do it at the OS level. It is hard to get past the standard WIMP GUI, though: natural user interfaces (NUIs) were tried a decade ago, but they were found not to actually be more usable.
I've been following OS UI experiments for some years, and the problem with all of them was that they took an approach that sounded cool but wasn't really practical, from 3D desktops to strange card-based interfaces that wanted to replicate the messy physical desk. I think the key is to redesign the GUI by removing stuff instead of adding fancy new ways of shuffling files.
I imagine the big issue is that the underlying model isn't actually being changed to support the new UI (even by a wrapper layer), so you're just doing shallow edits to the "skin" of the OS. The problem is that the underlying model and the WIMP model have been co-evolving, and likely aren't very open to other fashions of interaction.
Presumably, the way to beat it is to figure out how to change both, to instantiate an entirely different interaction model and thus naturally a new UI.
Otherwise you can really only commit to incremental improvements, but nothing particularly interesting.
I.e., Smalltalk requires an entirely different underlying model to support its in-place-editable UI; it cannot naturally be built (though maybe hacked together) if you try to avoid that shift.
The big incoming UI challenge is XR. But as with ST/Alto, it's an integrated bootstrap problem of hardware, system design, foundational software, user software, and UI.
Absent research labs, we wait on market availability for hardware. For the modern analogues of mice and bitmapped screens. Eye tracking may be cheap next year, but now it's $2k. High-resolution HMDs may be $10k next year, but now they're simply unavailable. Hand tracking is flaky. Haptics are "it buzzes".
For software, we're so used to the 2D windows interface that it's easy to forget how much foundational work was needed, e.g., around typography, and Smalltalk, and live coding. Hand-gesture recognition, spoken-dialog management, and 3D constraint-based layout are just a few modern analogues. And so on.
So it looks like we're going to be doing broad-based innovation and infrastructure construction, gated on hardware availability. And plagued by patents.
It's not clear to me what can usefully be done to explore UI in the meantime. You can do wireframing in Tilt Brush, but not really: it doesn't give you the feel of, say, hand-attached controls, or of automated collaboration in general. The "oh, it feels good to present this graph as a sphere, and hold it in your lap and peel off layers". So...
Perhaps do UI design by paper improv? A crew scribbling on and holding up paper, hovering around a "user", who is managing a todo list? :) That could be fun.
VR has a huge input problem: you can't touch any of the 3D stuff; you're basically left waving your hands in the air. Bret Victor's lab is going in the other direction, toward tangible objects with digital projections.
You can also use tangible objects with XR. It might be useful for sorting things, like system subparts, and thinking with your hands. You can fit camera tracking markers on 1 cm wooden cubes.
> you can't touch any of the 3D stuff
The current controller "tock" impulse haptics can be surprisingly effective. Your brain fills in a lot. But no, no force feedback or individual finger haptics is mass market yet.
I think voice input will be even more important. Once we get conversational UIs going, XR will work much better with some pointing plus voice input (as in Iron Man).
> I'm not as pessimistic as Bret Victor [...] ABriefRant
I'd actually go one step further than that "rant".
In the real world, small physical motions usually have little control significance; mouse movement and keyboards are among the many exceptions. With XR's hand tracking, that can change. For example, gaming is focused on "feeling like it's real", so game hands overlap real hands; but for getting work done, it's nice to have stretchy arms, so you can reach across the room to grab things. For optimal long-term ergonomics, you want a mix: sometimes you want the exercise of waving your arms, and sometimes you want to achieve the same end by merely twitching a hand resting on a knee or keyboard.
So not only did the video reduce all of human motion vocabulary, including gaze and voice, down to a single finger over a sad UI, but even that single finger was used in an impoverished manner. ;)
> XR will work much better with some pointing and voice input
Eye tracking as well. "Look; point; click" has a seemingly redundant step.
When someone is watching an educational video, say IBM's atom animation[1], you want both to be able to field common questions like "What are the ripples?" and to notice that they are looking at the ripples and volunteer related content.
One can do prototyping now with WebVR, Google speech recognition, and in-browser or Google voice synthesis.
[1] https://www.youtube.com/watch?v=oSCX78-8-q0
> Eye tracking as well. "Look; point; click" has a seemingly redundant step.
> When someone is watching an education video, say IBM's atom animation, you want to both be able to field common questions like "What are the ripples?", but also notice that they are looking at the ripples, and volunteer related content.
This is why I think the gaze tracking in the Windows MR platform (which quite literally lights up in Fluent Design applications in the just-released FCU) is more important than people yet realize. It's been a part of HoloLens demos from the beginning, but it's interesting how many people haven't noticed yet.
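One way to play with this gaze-plus-question idea before the hardware is convenient is to stub the multimodal fusion as plain logic. Everything below (the function name, the tiny knowledge table, the deictic-word heuristic) is an invented sketch, not any real WebVR or speech API:

```javascript
// Sketch: fuse a gaze target with a spoken question to pick a response.
// Gaze disambiguates vague referents like "that" or "those".
function answerQuestion(gazeTarget, utterance, knowledge) {
  // Resolve deictic words ("that", "this", "these", "those") to the gaze target.
  const resolved = utterance.replace(/\b(that|this|these|those)\b/gi, gazeTarget);
  // Naive lookup: answer whichever known topic the resolved question mentions.
  for (const [topic, answer] of Object.entries(knowledge)) {
    if (resolved.toLowerCase().includes(topic)) return answer;
  }
  return "I don't know, but you seem interested in the " + gazeTarget + ".";
}

const knowledge = {
  ripples: "Those are surface-state electron waves scattered by the atoms.",
};

// "What are those?" while the viewer's gaze rests on the ripples:
const reply = answerQuestion("ripples", "What are those?", knowledge);
```

The point of the sketch is only the fusion step: the same utterance yields a different answer depending on where the user is looking, which is exactly what the "notice they are looking at the ripples" behavior requires.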
There's really no reason we can't treat it like navigating a video game: movement and interaction through the normal kb+m/controller model; the helmet is just the monitor, with a bigger field of view (due to head turning).
I'm not sure why there's a belief that you need to pretend you physically exist in the VR world; you just need to be able to interact with it and navigate it for it to be sufficiently valuable. E.g., imagine using it as an infinite-real-estate development environment, with possibly a visual layout of your data structures and the ability to change location (e.g., a mountain-top view). It's sufficiently valuable as a product with just this basic feature set.
We don't need to wait for full-body VR, or AR, though no one seems to believe it.
Creating HMDs with an angular resolution sufficient to substitute for a screen is still a work in progress. Otherwise, I imagine what you describe might have happened.
The available market was gaming. With its emphasis on "presence", high frame rates, high polygon counts, dislike of visual artifacts and blur, and tolerance of low angular resolution.
And even now, as parts become available to create a better screen substitute, the market isn't seen as worth pursuing yet. But 2018 should see a higher-resolution generation of gaming HMDs. And there's Varjo to watch/hope for, though it will be $10k-ish.
I wasn't really referring to how you physically interact with the system. More along the lines of: if you implement a 3D environment of your files, but you're still interacting with a directory/file model, then you've just constructed a less efficient interface. But if you can now move on to some kind of 3 dimensional, or n-dimensional model, maybe there's something to gain with the 3D interface.
There's little point representing 2D structures in a 3D environment. You have to change what you're representing as well to see any fundamental benefits.
Though just relaxing the severe real-estate constraints of screens may be a big deal. It's much easier for tooling to provide speculative content when it's "off to the side, just in case you care" rather than "in your face, covering what you were working on". So various existing "cute... but it's just not worth the cost of dealing with" ideas might become viable. Code analytics and documentation could be ever-present, accessible at a glance, just off to the side.
That's one fun XR puzzle game: given a domain where you deeply understand the constraints that determine some system's shape, ponder what will change as XR alters the UI constraints, and which approaches already tried might deserve another look.
The eleVR people have some interesting demos in this regard, where you program from inside the VR system. That might be the best way to try things out and conduct the "foundational work"
Hmm, I was ambiguous there. What I had in mind for "foundational" was, for example, constraint engines for GUI layout. Now we have CSS. But there was something like a decade, where we knew constraints were a good way to do UI layout, but there weren't available constraint engines. Each major language had odd bits of abandonware zombies, academic or hobby projects, with code which might or might not be online, or be buildable, or be licensed for use. So we had to wait for market incentives to change, and some non-trivial work to be done, before we had the foundation element of GUI constraint-based layout.
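As a taste of what a layout constraint engine enforces, here is a toy one-dimensional solver in JavaScript that satisfies `sum(width_i) == total` with per-child minimums and stretch weights, vaguely flexbox-like. It is my own illustration and nothing like a full engine such as Cassowary:

```javascript
// Toy constraint layout: children with minimum sizes and stretch weights
// fill a container exactly (sum of widths equals the total), 1-D flexbox style.
function layoutRow(total, children) {
  const fixed = children.reduce((s, c) => s + c.min, 0);   // space claimed by minimums
  const weight = children.reduce((s, c) => s + c.stretch, 0);
  const slack = Math.max(0, total - fixed);                // leftover to distribute
  return children.map(c => ({
    ...c,
    width: c.min + (weight > 0 ? (slack * c.stretch) / weight : 0),
  }));
}

const row = layoutRow(100, [
  { name: "sidebar", min: 20, stretch: 0 },  // fixed-width pane
  { name: "editor",  min: 30, stretch: 2 },  // takes 2/3 of the slack
  { name: "preview", min: 10, stretch: 1 },  // takes 1/3 of the slack
]);
```

A real engine generalizes this to arbitrary simultaneous equalities and inequalities with priorities, which is exactly the part that sat in abandonware limbo for a decade.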
An Alto analogy might be bitmapped fonts and display.
A current analogy might be 3D dynamic structured graph layout. There's been a lot of research over the years on how to dynamically modify graphs without disorienting viewers, and on how to lay out graphs within graphs, and with various spatial constraints. But last time I checked, the GitHub cupboard was bare. So say you want to write a really whizzy todo app with complex dependencies. Well, first you read two decades of graph-layout papers; then you write the graph-layout library no one else has gotten to yet. Foundational work on our collective todo list.
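For a feel of the kind of library being described, here is one naive 2D force-directed relaxation step in JavaScript (edges act as springs, all node pairs repel); a real dynamic 3D engine additionally has to keep successive layouts stable so viewers aren't disoriented. Names and constants are arbitrary:

```javascript
// One naive 2-D force-directed layout step: edges pull, nodes repel.
// (3-D dynamic layout adds a z coordinate plus mental-map stability constraints.)
function layoutStep(nodes, edges, { spring = 0.05, repel = 500, rest = 50 } = {}) {
  const force = nodes.map(() => ({ x: 0, y: 0 }));
  // Pairwise repulsion, falling off as 1/d^2.
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const dx = nodes[i].x - nodes[j].x, dy = nodes[i].y - nodes[j].y;
      const d2 = dx * dx + dy * dy || 1;
      const d = Math.sqrt(d2), f = repel / d2;
      force[i].x += (dx / d) * f; force[i].y += (dy / d) * f;
      force[j].x -= (dx / d) * f; force[j].y -= (dy / d) * f;
    }
  }
  // Spring force toward the rest length along each edge.
  for (const [a, b] of edges) {
    const dx = nodes[b].x - nodes[a].x, dy = nodes[b].y - nodes[a].y;
    const d = Math.hypot(dx, dy) || 1;
    const f = spring * (d - rest);  // positive pulls together, negative pushes apart
    force[a].x += (dx / d) * f; force[a].y += (dy / d) * f;
    force[b].x -= (dx / d) * f; force[b].y -= (dy / d) * f;
  }
  return nodes.map((n, i) => ({ x: n.x + force[i].x, y: n.y + force[i].y }));
}
```

Iterating this pulls connected nodes toward an equilibrium spacing near the rest length; the hard, mostly unwritten part is doing it incrementally on a changing graph without shuffling everything the viewer has already memorized.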
I agree to a certain extent, and I think we can see that in the evolution of video-game interfaces. If you look at games from System Shock 2 to BioShock, you see that interfaces have evolved toward simplifying the information presented to the player and eliminating irrelevant information.
This is what killed Microsoft Surface (the table): the NUIs could never demonstrate their worth beyond whimsy. Metro actually seemed to be a reset, with all the NUI stuff thrown out.
It's Smalltalk-76 that uses some strange characters.
The version that made it out of the lab into the wide-world was Smalltalk-80.
Edited: with help from Paolo Bonzini, the Smalltalk-80 syntax for that square-root code would be something more qwerty-friendly, like:
"I Can Read C++ and Java But I Can't Read Smalltalk" (PDF): http://carfield.com.hk/document/languages/readingSmalltalk.p...
Finally, let’s insist that the separators be part of the method name; i.e., let’s require that the name of the method be “rotate by: around:” and let’s get rid of the spaces to get “rotateby:around:” as the name and finally, let’s capitalize internal words just for readability to get “rotateBy:around:”. Then our example could be written
rotateBy: a around: v  "This is Smalltalk"
This is also what Objective-C / Apple does.
Inspired by Smalltalk-80.
See "A short history of Objective-C"
https://medium.com/chmcore/a-short-history-of-objective-c-af...
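To make the selector mechanics concrete, here is a toy JavaScript sketch of how the keyword parts of `rotateBy: a around: v` collapse into the single selector `rotateBy:around:`, naming one method that takes all the arguments. The `send` helper and the point object are invented for illustration:

```javascript
// Toy model of Smalltalk/Objective-C keyword messages:
// keyword parts concatenate into one selector string,
// which names a single method receiving all the arguments.
function send(receiver, keywordParts, args) {
  const selector = keywordParts.join(""); // "rotateBy:" + "around:" -> "rotateBy:around:"
  return receiver[selector](...args);
}

const point = {
  x: 1, y: 0,
  // One method whose name is the full keyword selector.
  "rotateBy:around:": function (angle, origin) {
    const dx = this.x - origin.x, dy = this.y - origin.y;
    return {
      x: origin.x + dx * Math.cos(angle) - dy * Math.sin(angle),
      y: origin.y + dx * Math.sin(angle) + dy * Math.cos(angle),
    };
  },
};

// Rotate (1, 0) by 90 degrees around the origin.
const rotated = send(point, ["rotateBy:", "around:"], [Math.PI / 2, { x: 0, y: 0 }]);
```

Objective-C does exactly this interleaving at the language level: `[point rotateBy:a around:v]` invokes the method whose selector is `rotateBy:around:`.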
The above links run emulators in Javascript and have been used for live demos as well (see https://youtu.be/AnrlSqtpOkw?t=2m29s for a fun one)
Related: for a JavaScript-based live system, check out https://www.lively-kernel.org/ (also created by Dan Ingalls).
The Lively project evolved over time:
- https://www.lively-kernel.org is from the Sun Labs / HPI days (check out the ancient http://sunlabs-kernel.lively-web.org, fully SVG based rendering :D)
- Lively Web: https://lively-web.org A live, programmable wiki (2012-2015)
- Since 2016 we have been working on lively.next: https://lively-next.org. lively.next will focus more on the "personal environment" aspect.
https://news.ycombinator.com/threads?id=alankay1
Such as:
- the direct commercialization of that work (by Xerox PARC spin-off ParcPlace Systems) as ObjectWorks and then VisualWorks (now Cincom Smalltalk)
- IBM Smalltalk on mainframe and mini computers http://www-01.ibm.com/support/docview.wss?uid=swg27000344&ai...
- HP Distributed Smalltalk http://www.hpl.hp.com/hpjournal/95apr/apr95a11.pdf
- Gemstone https://gemtalksystems.com/
Biotech gene editing? Quantum computers? Artificial intelligence? Cars?
Miller columns are just intuitive for me when browsing a directory structure. I still don't know why Windows hasn't implemented it natively.
More like, numerous studies have shown that it was quite good and efficient.
A conversational interface is a huge step beyond voice input; it is much more than that. We aren't nearly there yet.
http://elevr.com/portfolio/future-programming-interfaces/
Here’s a thing: https://youtu.be/ZFdFyQBlXXU
You can eval JS there.