The following is a brief summary of the article “finally, robotic beings rule the world,” along with my thoughts on artificial intelligence. This blog post is my own processing of the ideas, and is entirely opinion-based.
We created robots. There is no question about that. However, humans seem to entertain self-destructive qualities far too often, and our creation of robots might reinforce that quality of ours.
Artificial intelligence has the capacity to comprehend within the processing systems humans created. The question we have come to is: can robots reach a level of intelligence on their own that causes us to lose control of them entirely? And can the consciousness we gave them become adaptable enough to surpass our own?
Our creation of their consciousness ties us to them permanently. We will forever leave a blueprint of ourselves on them, and we will always be their beginning, their starting point.
Researchers are considering the possibility of robots adapting the intelligence we have given them to the point where we can no longer understand or control their actions. This topic is certainly debatable, but some are entertaining the idea that, because of the current advanced state of artificial intelligence, there may one day be just robots and no people. Because the robots overtook us, of course. Are we creating our inevitable doom?
If we can create intelligence in robots that becomes human-like, we can ask the reverse question: can humans become programmable? The post, “finally, robotic beings rule the world,” discusses this in detail, saying that although humans created robotic intelligence, it is worth considering whether humans could be programmed to become robots. The following is a quote from the page that describes how this could happen.
“Put simply, the difference between feed-forward and feedback is the location of agency. If humans become predictable and programmable, does that mean that we lose agency? That we cease to be human?”
I would argue that neither robots nor humans could exist without the concept of “agency.” There has to be a starting point for something to be created, to begin. In the creation of robots, humans have full agency. We know that much from the technology that we currently have. The only way it would be possible, in my opinion, for humans to be programmed, is if robots somehow developed enough agency to create humans (which would mean robots would have agency over humans). I do not think that life can create an intelligence greater than itself. That does not mean, however, that humans cannot create something that could destroy them and ignore the consequences. In that case, the knowledge that they were creating something destructive was still accessible to them, even if they chose not to look into its destructive nature.
In plain terms, I believe humans have the capacity to create something that destroys them and to ignore the consequences until it is too late, but I do not think it is possible for artificial intelligence to surpass our intelligence and somehow program us to become like them. If that were to happen, the concept of agency would cancel itself out. If the creators’ (humans’) agency produced something that could think, that creation could not replace their agency, because robots were not created with agency. I explain why I think this would cancel itself out in the next two paragraphs.
As I mentioned above, I think agency in robots would only (hypothetically) be possible if robots could learn how to create humans. I do not think humans would roboticize themselves, nor do I think it is possible to have a human consciousness and be roboticized at the same time. If this did happen, and robots had agency, then humans would also have agency at the same time, because they would be equal to robots. If humans were roboticized, the agencies would have to become equal, and this would cancel itself out, making it impossible for anything at all to be created. We would lose the concept of creation without the concept of agency existing, and that concept originated in humans.
My thoughts on this are that, although it is probable that robots can resemble humans, the reason for that is that humans create them with an understanding of themselves already established. For humans to become robots, robots would have to first understand how to create themselves and their own intelligence, and then understand humans and human consciousness. I think this scenario essentially cancels itself out. If both robots and humans could equally program each other, then there wouldn’t be a beginning. There wouldn’t be a starting point. Equal agencies would be coexisting, and that wouldn’t result in any production or creation at all, because there would no longer be a concept of creation. Humans were the starting point of robots, and humans are already humans, so robots being the starting point of human robots seems unlikely.