Vision Oriented Programming or OO Without Messages

OO paradigms built on top of the messaging concept don't correctly model the world we live in. We don't send messages addressed to specific receivers.

Communication Without Messages

Consider a situation where you are driving your car (A) behind another car (B).

  • The driver of B sees a red light and brakes.
  • You see B's brake lights come on, infer that B is slowing down, and apply your own brakes.

No messages are sent in this scene. The driver of B doesn't know anything about A, nor does B hold a reference to A, so there is no way for B to send a message addressed to A. The brake light is a message without a receiver: a car turns its brake light on regardless of whether there is a car behind it.

We don't send messages even when we are engaged in conversation. We just affect the environment around us by speaking and gesturing; we interpret the images in our vision and the sound waves in the air to derive information, then act on it. This is why a third person can jump into a conversation between two people.

I will call this Vision Oriented Communication (VOC) in this article.

Categories of Objects

There are (at least) two kinds of entities that conventional OO handles uniformly.

  • Dumb Objects
    • obey physics
    • have a visual (i.e. can affect the environment around them)
  • Intelligent Objects (extend Dumb Objects)
    • (e.g. humans, animals, robots, a car with a driver)
    • have vision
    • have computational capability

Turning it into Code

Let's turn VOC into VOP (Vision Oriented Programming). Note that many implementations are possible; I'm just showing one of them.

The representation of a Dumb Object can simply be its visual. In the world we live in, we interact with objects via their visuals. There is no scientific evidence that we interact with the entities themselves (because science is based on observation).

//type Visual = Map Keyword -> Any
//type Address = (Number * Number * Number)

//Apple :: Number * Number * Address -> Visual
function Apple (size, mass, address) {
  //mutable map
  return { color:         "red"
         , size:          size
         , mass:          mass
         , recognized_as: "apple"
         , address:       address };
}
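
For illustration, a visual built this way might be used like so; the particular size, mass, and coordinates are made up, and I'm assuming an Address is written as a three-element array.

//a hypothetical apple of size 5 and mass 0.2, sitting at the origin
var apple = Apple(5, 0.2, [0, 0, 0]);

apple.color;         //=> "red"
apple.recognized_as; //=> "apple"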

Intelligent Objects have:

  • public Visual
  • private Vision -- collection of visuals
  • private AI -- that interprets the vision and modifies the visual

//Car :: [Visual] * Address * Number -> Visual
function Car (vision, address, speed) {
  //public visual
  var visual = { address:       address
               , brake_light:   false
               , recognized_as: "car" };

  //private vision -- collection of visuals this car can currently see
  var _vision = vision;

  //private AI -- interprets the vision and modifies the visual
  var _ai = function () {
    setInterval(function () {
      //if any car in sight has its brake light on, brake;
      //otherwise move forward along the first coordinate of its address
      //(assuming an Address is a three-element array)
      var braking = _vision.some(function (v) {
        return v.recognized_as === "car" && v.brake_light;
      });
      if (braking) {
        visual.brake_light = true;
      } else {
        visual.address[0] += 1;
      }
    }, 3600000 / speed);
  };

  _ai();

  return visual;
}
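
To sanity-check the sketch, here is a hypothetical two-car scene along the lines of the earlier example; the addresses and speed are made up for illustration.

//B is already braking somewhere ahead of A
var b_visual = { address: [10, 0, 0], brake_light: true, recognized_as: "car" };

//A starts behind B with B's visual in its vision
var a_visual = Car([b_visual], [0, 0, 0], 60);

//after one tick of A's AI, a_visual.brake_light becomes true,
//because A sees a car with its brake light on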

We are still missing a component that controls each object's vision: something that looks at every object's address and puts mutually close objects into each other's vision. Ideally this should be handled by a language runtime or a framework.
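
One possible sketch of such a component is below. The names World, distance, and range are mine (not part of any existing framework), and the 100 ms tick is arbitrary. It assumes each participant is registered as a {visual, vision} pair: since Car keeps a reference to the vision array it was given, refilling that array in place is enough to change what the car sees, and Dumb Objects can be registered with an empty vision that nothing ever reads.

//distance :: Address * Address -> Number
function distance (a, b) {
  return Math.sqrt(Math.pow(a[0] - b[0], 2)
                 + Math.pow(a[1] - b[1], 2)
                 + Math.pow(a[2] - b[2], 2));
}

//World :: [{visual: Visual, vision: [Visual]}] * Number -> undefined
function World (objects, range) {
  setInterval(function () {
    objects.forEach(function (o) {
      //refill this object's vision with every other visual
      //whose address is within `range` of its own
      o.vision.length = 0;
      objects.forEach(function (other) {
        if (other !== o
            && distance(o.visual.address, other.visual.address) < range) {
          o.vision.push(other.visual);
        }
      });
    });
  }, 100);
}

//example wiring -- the world shares each car's vision array with that car
var v = [];
var c = Car(v, [0, 0, 0], 60);
World([ {visual: c,                        vision: v }
      , {visual: Apple(5, 0.2, [3, 0, 0]), vision: []} ], 50);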

Here's a demo: http://jsfiddle.net/CAjem/2/

What's Next

  • Create a framework dedicated for VOP
  • Derive FVOP (Functional VOP)

I'd love to hear your opinions!

Thanks! :)

[EDIT 6/12/2014 04:02]

I hacked up a framework for VOC! Check it out at: https://github.com/ympbyc/VOC

Demo: http://ympbyc.github.io/VOC/examples/traffic.html