Reply #3 to other comments posted about my proposal; February 1998

----------------------------------------------

Thanks for the continued comments. I've tried to respond, but as always--if I didn't understand your point, please do straighten me out.

I'll describe below how I think the model could handle sensory perceptions, experiences, emotions, and the problem of navigating the overwhelming mass of data that could accumulate. The examples can be extended to things we model in the physical world.

To begin: I think that this model for knowledge could, in fact, readily encompass what we perceive through our senses, as well as the fact that we perceive it. For example, if I vaguely smell ammonia and it makes me remember fondly the time my Dad tricked me into taking a big smell from the bottle (by doing it himself!)--and we try to record this in the model--we could record the fact that there was a likelihood of a low partial pressure of an unknown ammonium compound in the air, that my nose perceived it, that that perception was logged by the brain, and that I then recalled an event and considered certain aspects of that event. The thoughts can be "data" attributes of the "myself-as-a-person" object, recorded as events relating to other modeled objects, as logged in time. Whether or not the model gets to that level of granularity, it allows for that level of detail and more. (All nouns in the above paragraph represent concepts that become objects of modeled classes in the system; their adjectives, adverbs, etc. indicate data or action attributes of those objects. In OO terms, these are "properties" and "methods.")
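
To make that concrete, here is a minimal sketch in Python (chosen just for brevity; the proposal doesn't prescribe a language). Every class and field name here is a hypothetical illustration of the idea, not part of the proposal itself:

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Perception:
        """One sensory event, logged against a person in time."""
        timestamp: datetime
        sense: str        # e.g. "smell"
        stimulus: str     # e.g. "unknown ammonium compound"
        certainty: float  # 0.0-1.0: likelihood the stimulus was really there

    @dataclass
    class Recollection:
        """A thought triggered by a perception, pointing at a past event."""
        timestamp: datetime
        trigger: Perception
        recalled_event_id: str  # reference to another modeled object

    @dataclass
    class Person:
        """The "myself-as-a-person" object; thoughts are its data attributes."""
        name: str
        log: list = field(default_factory=list)  # perceptions, recollections, ...

    me = Person("me")
    smell = Perception(datetime(1998, 2, 1, 9, 30), "smell",
                       "unknown ammonium compound", certainty=0.6)
    me.log.append(smell)
    me.log.append(Recollection(datetime(1998, 2, 1, 9, 30, 5), smell,
                               recalled_event_id="dad-ammonia-trick"))

The point is only that the perception, the logging of it, and the recollection all become ordinary stored objects with timestamps, which the model can hold at whatever granularity the data supports.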

Experiences can be modeled by recording the interactions between us and our world. "She said, you said, she felt, you felt, you threw a ball, he saw it, ..." and so on. And, since people are objects of a class in the model, the model can record not only their actions and interactions in time, but can also approximate their emotions, to the limit of our understanding of them.
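
In the same hypothetical vocabulary as the sketch above, an experience is just a time-stamped event linking modeled objects:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Interaction:
        timestamp: datetime
        actor: str        # id of the person-object acting
        action: str       # "said", "felt", "threw", "saw", ...
        detail: str
        target: str = ""  # id of the person or thing acted upon, if any

    history = [
        Interaction(datetime(1998, 2, 1, 14, 0), "she", "said", "hello", "you"),
        Interaction(datetime(1998, 2, 1, 14, 1), "you", "threw", "a ball"),
        Interaction(datetime(1998, 2, 1, 14, 1, 2), "he", "saw", "the ball"),
    ]

    # "Everything she did that afternoon" is then a simple filter:
    hers = [e for e in history if e.actor == "she"]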

For example, each "person" in the model could have numeric indications of various emotion-related or other factors, as recorded at a given point in time, to the extent that available data records them. It is possible to measure my heart rate, skin factors, and other physical things, and if you ask me, I can tell you my own level of pleasure or pain in various areas, based on my personal perceptions of safety, anticipated outcomes, and my perception of the world's state relative to my expectations or hopes. I can probably indicate a level for each of these on a scale of 1-1000 or something. The things we call emotions may be usefully modeled as a collection of such measures, each stored as a number on a sliding scale (as in fuzzy logic) that changes over time. We obviously won't be asking everybody to give us all these measures at every moment of their existence, but where the data is available, the model can definitely provide a place to put it--and then we can query it. If all else fails, it can at least record our expressions of emotion, as actions of stored objects, in time.
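
A hypothetical sketch of those sliding-scale measures, again with invented names--the real set of factors and scales would come from whatever data is actually collected:

    from datetime import datetime

    class EmotionalState:
        """A snapshot of one person's measured factors at one moment.
        Each value is a degree on a sliding scale (fuzzy-logic style),
        not a true/false fact; here the scale is 0-1000 as suggested."""
        def __init__(self, timestamp, **measures):
            self.timestamp = timestamp
            self.measures = measures  # e.g. pleasure=700, safety=900

    snapshots = [
        EmotionalState(datetime(1998, 2, 1, 9, 0),
                       pleasure=700, safety=900, heart_rate=72),
        EmotionalState(datetime(1998, 2, 1, 9, 30),
                       pleasure=850, safety=900, heart_rate=80),
    ]

    # And then we can query it: "when was pleasure above 800?"
    highs = [s.timestamp for s in snapshots
             if s.measures.get("pleasure", 0) > 800]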

Of course it is not perfect, or we would have created a human being. But surely there is value in modeling what we can, especially when combined with all the other things we model about our world. It opens the door to asking some interesting questions of the system, both about past populations and future ones, or what-if scenarios that encompass humans, the purely physical world, or some combination of them--and to interacting with that model through whatever computer/human interface we choose to bolt on top.

This is not to say that we can (now) understand and model everything that is, including what constitutes a human being. But as far as I can see, the model allows us to store everything that we understand, and to grow the model automatically as our understanding grows, without requiring constant fundamental structural change to the model (though such change is always allowed when needed). It lets us model anything we can express--as well as many things that we don't understand but to which we can assign some form of measure. (And I believe I can think of a measure for a lot of things; possibly imperfect, but always better than what we do now.)

With so much data, of course, viewing or processing it could become an attempt to work with an overwhelming, chaotic flood. The interface could be (and may have to be) customized for different usages. There would be a way to see only the chosen, relevant layers of the information, like the layers of an architectural drawing--just as our mind does now when we look at something and perceive and process the highlights, though there's more there. I have suggested a 3-d representation for now (or a 2-d one during early work on it), but any other conceivable representation that a computer can provide, now or in the future, could be added and plugged in. Since this is only the interface to the data, it can take any form that we choose, filtering through the data as needed.
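
As a trivial hypothetical illustration of that layering idea (the layer names are invented):

    # Show only the events tagged with a requested layer, the way an
    # architect turns drawing layers on and off.
    def visible(events, layers):
        return [e for e in events if e.get("layer") in layers]

    events = [
        {"layer": "physical",  "what": "ball thrown"},
        {"layer": "emotional", "what": "pleasure rose to 850"},
        {"layer": "verbal",    "what": "she said hello"},
    ]

    # Looking at only the physical and verbal layers:
    for e in visible(events, {"physical", "verbal"}):
        print(e["what"])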

Consider how much more empowering it would be to have such a comprehensive model (even if approximate) of all things known, versus simply storing & retrieving text and pictures through a web browser--or anything else that we have now! This can transform our work, regardless of the corporate culture or other considerations.