A long-forgotten debate over the question "Will robots overtake humans?" has
recently been revived by Elon Musk's comments on the threat of artificial
intelligence [1]. What is obviously missing from the debate, however, is a clear
description of a realistic scenario in which robots could actually challenge
humans. If no such scenario could ever be realistic, then humans need never worry
about being overtaken by robots. But if such a scenario could foreseeably be
realized in the real world, then humans need to start thinking about how to
prevent the peril from happening, rather than about how to win debates over
imaginary dangers.
The reason people on both sides of the debate cannot see or present a clear,
realistic scenario in which robots could indeed challenge humans is fundamentally
a philosophical one. So far, people have focused on the possibility of creating a
robot that could be considered human in the sense that it could genuinely think
as a human does, rather than being merely a tool operated through programmed
instructions. Yet nobody has offered a plausible argument that producing such a
robot is possible. The most vivid prototype of a robotic threat remains the
famous Frankenstein monster, which no reasonable person would ever take seriously.
Unfortunately, this way of thinking is philosophically deficient, because it
misses a fundamental point about human nature: human beings are social creatures.
An important reason we have survived as what we are, and can do what we do, is
that we live and act as a community. Similarly, when we estimate the potential of
robots, we should not focus solely on their individual intelligence (which, so
far, is of course instilled by humans), but should also take into consideration
their sociability (which, of course, would initially be created by humans).
This leads to a further philosophical question: what would fundamentally
determine the sociability of robots? There might be a wide range of answers to
this question, but in terms of being able to challenge humans, I would argue that
the fundamental criteria for robot sociability could be defined as follows (a toy
sketch after the list illustrates them):
1) Robots could communicate with each other;
2) Robots could help each other recover from damage or shutdown through the
necessary operations, including changing batteries or replenishing other forms of
energy supply;
3) Robots could carry out the manufacture of other robots, from exploring for,
collecting, transporting, and processing raw materials to assembling the finished
robots.
Once robots possess the above functionalities and start to "live" as a multitude,
we could reasonably view them as sociable beings. Sociable robots could form
communities of robots, and once they function as defined above and form a
community, they could become autonomously independent of their human masters.
Once that happens, they could conceivably challenge humans, or begin their
campaign to take over from humans.
We might then ask: is the sociability defined above realistic for robots?
Since not all of the functionalities mentioned above exist (at least publicly) in
the world today, to avoid unnecessary argument it would be wise to base our
judgment on whether any known scientific principle would be violated by a
practical attempt to realize any one of them. After weighing each of these
functionalities against what can be learned from publicly available sources about
the progress of robotics and of industrial and domestic automation, I can see no
reason to consider any of them unproducible under any known scientific principle.
In fact, most of these functionalities, or something close to them, have already
been realized separately in mission-specific automation. At this moment, the only
missing piece seems to be that no one has integrated them into a single whole
robot. But since no known scientific principle would prevent any of these
functionalities from being realized, we should expect that, with enough money
invested and enough time spent (perhaps 5 to 10 years, as Musk suggested [2]),
the creation of sociable robots as defined above could become real, unless humans
make a special effort to prevent it from happening.
Although sociability might be necessary for robots to challenge humans, it is
still not sufficient for them to pose a threat. For robots to become a real
threat to humans, they also need some ability to fight. Unfortunately for humans,
the fighting ability of robots may be more real than their sociability. Based on
what we have already witnessed, we might very reasonably expect an army of robots
to be capable of the following:
1) They would be highly coordinated. Even if scattered around the world,
thousands of robots could be coordinated through telecommunication;
2) They would be good at remotely controlling their own weaponry, or even the
weaponry of their enemies once they broke into the enemy's defense systems;
3) They could "see" and "hear" what happens hundreds or even thousands of miles
away, whether it happens in open or concealed space, and whether the sound
propagates through air or through wire;
4) Even as individuals, they might be able to move on land, on or under water,
and in the air, in all weather conditions, and as slowly or as quickly as needed;
5) They could react promptly to stimuli, act and attack with high precision, and
see through walls or into the ground;
6) Moreover, they lack fundamental human weaknesses such as material and sexual
desire, jealousy, the need for rest, and the fear of death. They are immune to
chemical and biological poisons, and they might even be bulletproof.
In conclusion, based on what humans have already enabled robots to do, and on
what humans would foreseeably enable robots to do in the absence of any
regulation, we can very reasonably predict that the threat of robots taking over
is not merely a fairy tale, so long as humans create robots that are socially
independent of humans.
A big question, then, is why any reasonable human would create a socially
independent community of robots instead of keeping robots as tools or slaves of
humans.
We need to look at this question on two different levels. First, whether someone
able to mobilize and organize the resources to create a community of sociable
robots would actually intend to do so is a social question, and social questions
are not subject to the hard restrictions imposed by natural laws. In other words,
as long as something is possible under natural laws, we cannot exclude it merely
by wishful thinking about the intentions of all humans.
Second, competition within human society provides ample motive for those who can
enhance their own competitive power to push their creativity and productivity to
the limit. History has shown, however, that humans tend to ignore many potential
risks when pursuing extremes for their own benefit. In particular, once a group
of humans is in a position to take or avoid a potentially dangerous risk, the
decision normally rests with only a very few people, or even a single person.
Consequently, since no natural law prevents a community of sociable robots from
being created, then without social efforts at regulation we might reach a point
where we must count on the psychological stability of a very few people, or even
one person, to determine whether humans will be threatened by robots.
The last remaining question might be why humans would make robots that hate
humans, even if we did create communities of sociable robots. The answer could be
as simple as the one above: for the sake of competition.