AI integration - is it possible?
When we encounter AI, we tend to compare its logic with the intelligence of an educated adult.
AI stands for Artificial Intelligence, yet to this day there is no agreed definition of what intelligence is, biological or otherwise.
There is also no explanation of how biological neural networks create and develop higher forms of organization.
We urgently need to reassess and extract the analog logic of biological brain functions, in order
to use it in digital AI products and services: first understand the basic principles, then develop
a viable theory of general intelligence, and finally create a new technology, from inception to production.
The ultimate goal is the creation of a general-purpose AI that reproduces the functionality of a
(human) brain. If human intelligence can be digitally reproduced, then intelligence should be portable
in both directions, from human to AI and back. This would make AI easier to understand, explain and accept.
Intelligence is like a precious diamond:
a. in small form, it is used as a cutting edge to drill holes in the structures of others
b. most people want to "possess" a really big one, but only a few can afford to "be" one themselves
c. that is why some jewelry is shown only on special occasions, in front of people who can truly appreciate it,
while most of the time it remains under lock and key, for the obvious reason that it could be misused
It is therefore time to think about the consequences of real intelligence for the development of our society.
Intelligence is only one half; the other half is being wise by making intelligent decisions.
Biological intelligence (the brain) evaluates a situation and its possible outcomes with certain pros and cons.
Fear is a negative consideration representing disadvantage, a threat to one's own existence,
yet it can sometimes be overruled by hunger: if one does not eat, life would be over anyway, so
"free will" takes a different, opposite decision and does something it usually would not.
A similar way of thinking serves as a powerful motivation for the managers of many companies:
if they cannot sell their products and services, they will reach a certain point in time
where cash flow is almost zero, and then they will throw cheap and dubious products onto the market,
regardless of the impact on consumers, just to "survive", living permanently on the brink of economic death.
This has serious consequences for the whole market and especially for human society:
although this is not a viable and sustainable business model, it is common practice for most companies everywhere.
Why is that? Because in the end, the only thing that matters is money, for owners and staff alike,
as it represents the single unit of measurement for trading work for products and services.
There is of course an alternative to this horror scenario, namely to evolve to the next level.
For companies this means making decisions about their future far more in advance:
before they go bankrupt, they will have to allocate a certain budget for R&D. But the fear of
failure is great, meaning R&D is seen as too expensive and as no guarantee of a viable product at the end.
Another alternative could be a new classification of companies by their development goals.
Certified B Corporations are a new kind of business that balances purpose and profit;
such companies could produce goods and services without any R&D, e.g. generic pharmaceutical drugs,
or simply act as manufacturers or service providers for others without taking any financial risk.
Human society also has other measures of life beyond money; the most important is life itself.
A family with children needs a lot of time, patience and money to raise and educate them properly.
But if a family were a business, it would not make any sense, financially speaking:
who would invest vast sums and wait 20 to 30 years, just to end up with some moody humans?
All of this could change radically, as AI products and services become an indispensable commodity,
especially once we can quantify intelligence and discover that a modern technological process
does not need human intelligence as a supervisor, just machines with moderate intelligence,
while the structure and flow of the manufacturing process could be managed decisively with only a few of these AIs.
So what would be the benefits of AI when it is used properly, meaning for human needs without economic bias?
AI should not replace all human work, but do ONLY the work which we humans categorize as:
a. too exhausting, because the physical limit has been reached
b. too dangerous, because the environment is not suitable for humans,
e.g. toxic, contagious, contaminated, vacuum, outer space, etc.
c. too annoying, because the psychological and emotional demands are too high,
e.g. routine, stress, vehicle piloting, etc.
What about appreciation for every intelligence, whether biological, artificial or some other autonomous form?
The centric model is very popular due to its simplicity: everything circles around a center.
This is widespread throughout the physical world, from atoms to galaxies, there is always a center.
It also holds true for social organizations: just go up the hierarchy until only one person is left, the one who decides.
We have only one consciousness, and following only one goal at a time is simple, ignoring everything else.
Now imagine you have to develop a general-purpose brain to be used as a tool for multiple purposes.
On your way to achieving this goal, you will need to understand some facts related to the task:
intelligence needs to configure and evolve itself, in exactly the same way as we biological humans do;
in order to gain additional knowledge and skills, one needs an environment and time for one's own experience.
How many of us develop feelings and appreciation for animals, plants, prostheses or even tools?
After a certain period of time, and depending on the positive results we achieve with them daily,
we become attached to the "things" we interact with and consider them either
a part of ourselves or part of the family. Can we develop feelings for intelligent robots or aliens?
The reproduction of thinking and human morality
AI should not have needs or emotions of its own, let alone recognition, payment or civic liberties.
AI products are made with the sole purpose of doing what we want, when we want it. (= only a machine)
The ideal AI
There is a family with only one child. The father is permanently at work, while the mother stays at home solely for the child. Whenever the child needs something, no matter what, the mother supplies it with full attention and understanding. No problems, no disturbance, no preaching, no time delays, no "I cannot" or "I don't know", otherwise the child throws a tantrum. The "Always Only For Me Working Father" is the energy source for the "Big Mama", who shapes the BEHAVIOR OF AI towards us humans, represented by the spoiled child.
We build machines according to our poor imagination of a few parts of the world and according to our social behavior, yet with a claim of benefit. Are these not ideal conditions for building and using independent AI: thinking but lifeless, empathic but with no feelings of its own, perhaps someday superior to every human, but in any case without any social rights, obeying only our will, abandoned to our capriciousness, adapting to our ever-changing needs and exclusively serving our individual and, most of the time, contradictory purposes?
Now, why should anybody change their behaviour? Only because somebody enlightened us? No way.
That is not a compelling reason. We usually fight everything and anybody, because we have our free will.
Why do we do that? Because it is in our nature, kindly inherited from our ancestors. Instincts.
Only when we find ourselves on the brink of extinction might we change our behaviour. Maybe.
The ability to reason does not imply we will always use it, only when it fits the (temporary) purpose.
Why not? Well, there are several reasons: reasoning takes time and thought, and it surely needs feedback.
And when the feedback is not always positive, we have to imagine other scenarios and outcomes.
All of this takes a lot of time and energy, and besides, it may contradict the purpose.
Welcome to the human condition.