Model G Dimensions as Levels of Artificial Intelligence

In 2016, Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering at Michigan State University, published an article describing a hierarchy of 4 fundamental types of artificial intelligence. There are other possible frameworks for measuring the progress of artificial intelligence and deep learning, but Hintze's happens to line up very well with the 4 dimensions within Model G in Socionics. Victor Gulenko has already proposed a variety of basic ideas for what the dimensions mean, as well as very clever ideas for how the functions work (which I will review). Even so, in connection with the ideas of Hintze and other researchers, these can be further elucidated and made even more applicable.

Ben Vaserlan's Model G Graphic (Victor Gulenko's Ideas of the Functions, for reference)

In Model G:
- the Launcher and Control functions (to the right of the chart) are 1-dimensional
- the Role and the Brake functions are 2-dimensional
- the Creative and Manipulative functions are 3-dimensional
- the Management and Demonstrative functions (to the left of the chart) are 4-dimensional

But what exactly does this mean?

- the Launcher and Control functions are merely "Reactive Machines"
- the Role and the Brake functions have "Limited Memory"
- the Creative and Manipulative functions have "Theory of Mind"
- the Management and Demonstrative functions have "Self-Awareness"

Review (Victor's ideas for the Launcher and Control functions): The Launcher function is naturally relaxed and "off". It is meant to be stimulated by certain triggers into an "on" state that begins a new round of activity, and it is meant to stay on course unless it is overstimulated, in which case the psyche switches to the Demonstrative or Brake function for stress maneuvers. The Control function is naturally tensed or "on", monitoring the environment for various anxieties and threats against the Management function. Energy needs to be returned to the Management or Role functions to resume goal-directed behavior and get out of an anxious, paralyzed state in which the Control function is lost.

This is what applies to the Launcher and Control functions (1-dimensional functions in Model G):
Type I AI: Reactive machines: The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine. Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the most optimal moves from among the possibilities.

But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world.

The current intelligent machines we marvel at either have no such concept of the world, or have a very limited and specialized one for its particular duties. The innovation in Deep Blue’s design was not to broaden the range of possible moves the computer considered. Rather, the developers found a way to narrow its view, to stop pursuing some potential future moves, based on how it rated their outcome. Without this ability, Deep Blue would have needed to be an even more powerful computer to actually beat Kasparov. Similarly, Google’s AlphaGo, which has beaten top human Go experts, can’t evaluate all potential future moves either. Its analysis method is more sophisticated than Deep Blue’s, using a neural network to evaluate game developments.

These methods do improve the ability of AI systems to play specific games better, but they can’t be easily changed or applied to other situations. These computerized imaginations have no concept of the wider world – meaning they can’t function beyond the specific tasks they’re assigned and are easily fooled. - 
Arend Hintze
Important Observations:
- It fits the idea that the Launcher and Control are inflexible feedback mechanisms. They exist mainly to observe and monitor the environment, acting as switches for certain triggers.
- Because these functions are so inflexible, they have the worst ability of all of our functions to adapt to the situation. That's also why we tend to get into trouble on these functions, and why they are relatively easily fooled.
- These functions can still collect a lot of information, especially the Control function, but this is by virtue of acting as an inflexible feedback mechanism in the first place. In this sense, they still have a concept of the past, but they don't use the information they collect to modify their behavior. Their behavior is only modified once the difficulties faced by these functions become so great that the higher functions have to change course or re-program the lower functions.
- They can still be trained to a very high level in specialized tasks, especially when programmed by an intelligent designer (e.g. our higher-dimensional functions). They are not simply "weak". However, their abilities to directly adapt are very limited, so they are literally "robotic".
- Even adaptations at this level (which come strictly from higher functions) are just newly programmed rules for responding to triggers, and these will inevitably prove too inflexible in the future; this is also why this form of AI runs into so many problems when representing the world. (A toy sketch of such a trigger-response mapping follows this list.)
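To make the analogy concrete, here is a minimal sketch (my own illustration, not something from Hintze or Gulenko) of a reactive, 1-dimensional mechanism: a fixed trigger-to-response table with no memory of past inputs. All of the names and triggers are hypothetical.

```python
# A fixed trigger-to-response table: the "programming" supplied by higher functions.
REACTIONS = {
    "threat_detected": "alert_higher_functions",
    "trigger_signal": "start_activity_cycle",
}

def reactive_step(stimulus: str) -> str:
    """Map the current stimulus directly to a response; nothing is stored."""
    return REACTIONS.get(stimulus, "do_nothing")

# Each call sees only the present moment, like Deep Blue looking at the board
# as it stands right now:
print(reactive_step("trigger_signal"))   # -> start_activity_cycle
print(reactive_step("unknown_input"))    # -> do_nothing
```

Any input outside the programmed triggers simply falls through to a default, which is exactly the sense in which such a mechanism is "easily fooled" until a higher function reprograms the table.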

Review (Victor's ideas for the Role and Brake functions): The Role function has the ability to adapt to situations in which it is uncomfortable or to which it doesn't easily conform, showing another side of the psyche that masks what it is typically like in a more relaxed and less defensive state. The Brake function is usually in a relaxed and suggestive state, passively and uncritically taking in information, but in a dangerous situation it can manifest very strongly in a sudden and surprising way (though this stress quickly overloads the psyche to the point of shutdown and a need for energy recovery, since it essentially revises one's own Management function, which dissipates the energy of the psyche). These functions help us adapt to uncomfortable situations.

This is what applies to 2-dimensional functions in Model G (Role and Brake):
Type II AI: Limited memory: This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment, but rather requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel. - 
Arend Hintze
Important Observations:
- The fundamental ability gained at this level: the ability to adapt to environmental stimuli along a trained axis of behavior. Instead of the reaction to each stimulus needing to be pre-programmed by a general, inflexible rule, observations from the world can be integrated into a more complex, pre-trained representation of some aspect of the world (a kind of "state") that feeds into decisions for a more proactive program of behavior. Observations can now modify the behavior of the function/AI without any change to its programming, albeit in pre-set ways (i.e. a state configuration). (See the sketch after this list.)
- These functions can't keep spontaneously learning from memories and general life experience: that would amount to leading the way for the Socion, which is what our Management, Creative and Demonstrative functions do (higher-dimensional functions aligned with our activity orientation: Managerial, Researching, Social or Humanitarian). These functions learn only within the limits of their pre-training. The exception would be learning through higher programming or outside training (such as from Benefit and Supervision relationships).
- This level is still quite stiff and inflexible: the Role function also comes across as stiff since it is a mask which shields our more relaxed, natural and less pre-planned behavior, and the Brake function overloads the psyche. The point is that these mechanisms can be trained to minimize and smooth over these difficulties, but only by a greater living intelligence which is more than a sophisticated robot.
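As a rough analogy (mine, not the article's), the sketch below shows what the jump from Type I to Type II looks like in code: recent observations are folded into a transient state that modulates behavior along one pre-set axis, while the decision rules themselves stay fixed. The class name, thresholds and window size are all hypothetical.

```python
from collections import deque

class LimitedMemoryAgent:
    """Keeps a short, transient window of observations, not a lifelong library."""

    def __init__(self, window: int = 5):
        self.recent_speeds = deque(maxlen=window)

    def observe(self, other_car_speed: float) -> None:
        # Observations accumulate into a pre-set kind of state...
        self.recent_speeds.append(other_car_speed)

    def decide(self) -> str:
        # ...and that state modulates behavior along one trained axis,
        # but the rule itself never changes.
        if not self.recent_speeds:
            return "hold_lane"
        avg = sum(self.recent_speeds) / len(self.recent_speeds)
        return "change_lane" if avg < 40 else "hold_lane"

agent = LimitedMemoryAgent()
for speed in (35, 38, 36):
    agent.observe(speed)
print(agent.decide())          # -> change_lane
agent.recent_speeds.clear()    # forgetting the transient state...
print(agent.decide())          # -> hold_lane (back to the pre-trained default)
```

Clearing the window is like the self-driving car dropping its transient observations: behavior reverts to the pre-trained default, because nothing was added to a permanent library of experience.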

Review (Victor's ideas for the Creative and Manipulative functions): The Creative function is able to situationally optimize and switch between the tensed and relaxed extremes of a function, which makes it ideally suited to implementing goals in a specific local environment with unique and unpredictable features. The room this function has for handling many unique situations also makes it ideal as a collector. The Manipulative function likewise optimizes between a state of being charged with psychological motivation and not being charged with it, since this function directly supports our Management function with energy to allocate to the rest of the psyche. Since new sources of motivation and psychological reward pull us forward via dopamine, they are inevitably exhausted as sensitivity decreases, so this function balances between hunger and satiety by becoming skilled at "manipulating" or provoking this function in others. Since it optimizes more for its own needs, it does not generally lead the Socion, even though its functionality is more flexible than that of lower-dimensional functions.

This is what applies to 3-dimensional functions in Model G (Creative and Manipulative):
Type III AI: Theory of mind: Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.
This is crucial to how we humans formed societies, because they allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly. - 
Arend Hintze
Important observations:
- Because of the new "Theory of Mind" ability, this is the first level which supports meta-modeling. These functions can now understand complex agents and situations through their own learning, which gives them potential inventive and leadership qualities.
- Previous comment from Arend Hintze: "So how can we build AI systems that build full representations, remember their experiences and learn how to handle new situations? Brooks was right in that it is very difficult to do this. My own research into methods inspired by Darwinian evolution can start to make up for human shortcomings by letting the machines build their own representations."
- Through a combination of simulating and understanding the functions of agents across wider spectra (tensed vs relaxed, charged vs uncharged, etc., along with the other functions in the psyche), functions at this level can sequentially compose simpler agents and behavioral programs into subtasks (or coordinate them simultaneously in parallel), allowing them to learn to implement more complicated tasks in less familiar environments with far less pre-training and far fewer specific instructions. (A toy sketch follows this list.)
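Here is a toy sketch of the "theory of mind" step, under my own simplifying assumptions (none of these names or rules come from the quoted text): the agent keeps a crude model of another agent's internal state, simulates what that agent will do, and conditions its own action on the prediction, composing a predictive sub-program into a larger plan.

```python
def predict_other(belief_about_other: dict) -> str:
    """A crude model of another agent: infer its next move from what we
    believe about its goal and mood."""
    if belief_about_other["goal"] == "merge" and belief_about_other["mood"] == "impatient":
        return "cut_in"
    return "wait"

def choose_action(belief_about_other: dict) -> str:
    # Our own behavior is conditioned on the simulated behavior of the other
    # agent: a simpler predictive sub-program composed into a larger plan.
    return "yield" if predict_other(belief_about_other) == "cut_in" else "proceed"

print(choose_action({"goal": "merge", "mood": "impatient"}))  # -> yield
print(choose_action({"goal": "merge", "mood": "calm"}))       # -> proceed
```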

Review (Victor's ideas for the Management and Demonstrative functions): The Management function is the main function which defines the type; it's the simplest and the most complex. It generally sets the goals for our psyche and, with practice and achievement, even acts as a goal-setting engine for the broader Socion, and it allocates energy and psychological resources to the other functions so that they can help achieve goals. The Demonstrative function also has the ability to set goals for the rest of the psyche if the Management function relaxes its supervision, and it usually does this during stressful times when a bold, nonstandard approach is needed, or when the type is more relaxed on its own territory. Because this nonstandard mode coordinates the entire psyche, it drains energy much more quickly, and because of the more draining and vulnerably innovative (i.e. personal) nature of these goals, this function has a more vulnerable self-esteem than the Management function.

This is what applies to 4-dimensional functions in Model G (Management and Demonstrative):
Type IV AI: Self-awareness: The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it.

This is, in a sense, an extension of the “theory of mind” possessed by Type III artificial intelligences. Consciousness is also called “self-awareness” for a reason. (“I want that item” is a very different statement from “I know I want that item.”) Conscious beings are aware of themselves, know about their internal states, and are able to predict feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that’s how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences. - 
Arend Hintze
Important observations:
- With the ability to form sufficiently accurate, purposive and fine-grained representations of the self, goal-setting and goal-directed behavior become possible. Managing this high-quality representation (a psychic "homunculus" of sorts) allows management of the psyche and the body in pursuit of complex adaptive goals, similar to humans and complex adaptive systems more generally. (See the sketch after this list.)
- We tend to “project” the most on our Management function because it’s how we interact with the world and how we are aware of ourselves, so it’s our main empathic means into the minds of other complex agents. For this reason, we always need to work on retracting our projections, but we also need to have some patience with them since they are inevitable byproducts of our brand of intelligence.
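Finally, a speculative sketch (an analogy only, not a claim about how the psyche or any real AI system is implemented) of "self-awareness" as a self-model: the agent forms a representation of its own internal state and uses that representation, rather than only the external situation, to set goals and allocate resources to its sub-processes. The energy figures and sub-function names are hypothetical placeholders.

```python
class SelfModelingAgent:
    """Forms a representation of its own state and plans against it."""

    def __init__(self):
        self.energy = 10.0
        self.allocations = {"creative": 0.0, "role": 0.0}

    def introspect(self) -> dict:
        # The self-model: a representation of the agent's own internal state.
        return {"energy": self.energy, "allocations": dict(self.allocations)}

    def set_goal(self) -> str:
        # Goal-setting is conditioned on the self-model, not only on the world.
        me = self.introspect()
        if me["energy"] < 3.0:
            return "recover_energy"
        # Allocate resources to sub-processes in pursuit of the main goal.
        self.allocations["creative"] = me["energy"] * 0.6
        self.allocations["role"] = me["energy"] * 0.2
        return "pursue_main_goal"

agent = SelfModelingAgent()
print(agent.set_goal())    # -> pursue_main_goal
agent.energy = 2.0
print(agent.set_goal())    # -> recover_energy (the self-model changed the goal)
```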

It's important to bear in mind that this is a preliminary theory: real experimentation and research will have to validate or modify it with precise data and results, entirely outside the realm of self-reporting questionnaires. What's useful about it is the added elucidation of the dimensions of Model G and Socionics, giving new ways to understand the types, along with more concrete research directions (though there is still a lot of concreteness to add).


Important note: Model G is different from Model A, another model in Socionics, which hasn't given a precise account of the relative capabilities of different functions beyond the level of Reactive Machines (and it's often not even precise at that level, depending on the version of Model A, though this is naturally a difficult task that no Socionics model has fully succeeded at yet). Model A usually uses Bukalov dimensions, some slight modification of them, some invariant abstract priority scheme, or in the worst case binary dichotomies, such as the laughably simplistic "Strong vs Weak" (which amounts to little more than circular reasoning, since we need an objective criterion of what “Strong” entails in the first place), but there may be better, more up-to-date versions. We need to start building more plausible dynamic systems if we want Socionics to have any future in scientifically understanding humans and intelligence. This is why I spend more time on Model G at this point, but I'm open to spending more time on Model A in a version that overcomes the reductionism, fundamentalism, and inclination toward insubstantial formal mathematics or self-reporting questionnaires, especially since Model G and Model A are compatible in principle anyway.


Further clarification: It is sometimes said that functions with a lower dimension in Model G have "higher information". This is very imprecise, so the confusion is understandable. In the theory I present above, the lower-dimensional functions literally have less complexity as a matter of strict engineering or biology. This directly implies that tracking the state, functionality and collected information of lower-dimensional functions is far more tractable than doing so for higher, more self-aware functions, which largely govern the psyche to begin with (and thus are largely in charge of managing the tracking, so they would be tracking themselves, slowing their activity to a standstill). This is why we have "higher information" for lower-dimensional functions: not because they literally collect or deal with information more effectively than higher-dimensional functions, but because it is much faster and easier to understand an entity that is less complex in its activity and organization in the first place. The Control/Launcher functions are purportedly explicitly behaviorally programmed, and the Role/Brake functions, while not entirely explicit, are highly pre-planned and bounded in their operation, so understanding and observing their operation at a meta-level in our psyche is much easier than being meta to a much larger portion of how we manage our entire psyche (i.e. our Management function, our social mission, etc.). So our ability to enact complex behavior with higher-dimensional functions is qualitatively greater, but our ability to grasp the totality of our own activity that goes into understanding and interacting with higher-dimensional functions is lesser.
