I like the Visual POV idea. It makes sense. Your little scope creep is significant, though; this isn’t a case of us just reusing some of POV here to save time. I don’t know if you noticed, but you added a whopper of an enhancement kind of off-handedly. Your “writing notes” implies manipulating the simulation in real time with some kind of interface. So this isn’t an Alpha lite with manipulation added on. This is a new project, and if we look at it that way, we are really talking about using an old VR game’s mechanics and pretending it’s using our simulation tech. That would get you to our true objective, communication, quite quickly.
The idea of taking how we used to render images and making it work in real time with a simulation on the scale of Alpha is a bigger lift, but not crazy. And since you said it out loud, it probably should happen, and it could be done in parallel by other team members. But since we have two weeks before the real work begins, I’m thinking we combine your VR throwback with some of POV’s interaction with the visual cortex; that might be useful if we want to scale the use of POV in the future. We could build a version of the mesh net that works from on top of the head. I assume we will need to up the juice to make up for what the hair diffuses, and we could make it part of the goggles’ strap; the back strap sits right on top of the occipital lobe anyway. Hopefully it will work the way it does for me, and the parts of the brain that ascribe meaning to the images will kick in automatically. This is fun, and it will be useful. Good idea, Rob.