Life is absurd. And I love the absurdity. Why do we resist the absurdity? Why do we care too much or too little about ourselves and the world around us? Why do we continue thinking that we are sleepwalking into oblivion? Why is it always the worst-case scenario? Can’t we just find a way to enjoy the moment? Is it possible to only care about the stuff that is worth caring about: the present moment?

As I sit here, I think of all the nihilistic AI prophecies facing humanity. The big question surrounding AI is whether this new technology represents an existential threat to humanity. Will AI surpass human intelligence and decide that we fallible humans are too damned to remain as the ‘stewards’ of planet Earth? As a godlike intelligence, will artificial general intelligence reason that the best way to preserve humanity is to enslave it? Or will it reason that the contemporary variety of sapiens (with their flashy, yet powerful, new gadgets) is too much of a threat to remain amongst the living? Will non-organic intelligence decide that exterminating humanity is of strategic gain because humans are too unhinged?

Is it that the global neoliberal capitalist order has greatly contributed to a faulty social structure, a consumptive cultural logic, and anti-planet political systems of thought? Or is the answer not nurture but nature? Are we a product of our “human” nature? Is human nature naturally at fault? Are we all doomed by biological, cognitive and neurological weaknesses so great that an omnipotent AI may very well decide that we fundamentally threaten planetary organic harmony and non-organic proliferation? Could it be that it is, in fact, all these reasons, plus more? If humans are positioned at the intersection where nature meets nurture, is it possible that our self-destructive tendencies threaten a god-like AI? Will AGI and ASI deduce that the “post-modern neocolonial” individual residing in the technology-reliant world is the source of all evil? Or will it decide that, although culturally diverse, all humanity is worthy of extermination because of our shared inherent flaws? Attempting to address these questions reveals more questions. A cycle appears where questions are replaced with more questions. These questions are great because they reveal how absurd and complex life really is. There are so many perspectives that attach diverse meanings to certain phenomena.

From my perspective, the collective anxieties and worries about AI exacerbate key issues that have always faced humanity. We attach our primal uneasiness about imagined futures to the transformative and disruptive imaginings of a future with AI. The sequential concepts of AI evolution forecast how AI is expected to develop: AI (rule-based systems, context-awareness and retention, domain-specific aptitude, reasoning systems); AGI (artificial general intelligence); ASI (artificial super intelligence); and the technological singularity. To me, the proposed development of AI points to the human existential anxiety of being overcome by something more powerful than ourselves. While silicon-based hardware and software continue to develop and evolve, we humans operate with pretty much the same biologically bound hardware and software. We still engage in the same plight, limitations (both cognitive and physical) and general struggles that have defined human existence up until now. It is reasonable to argue that, without technological innovation, humans wouldn’t be in the interconnected and globalised situation we find ourselves in.

However, AI technology represents a substantial leap forward. AI isn’t a technology that relies on human reasoning. This technology can function based on its own reasoning. Of course, it relies on human-generated data to make decisions, but the fact that it can make decisions differentiates it from the technology that has come before. The fact that most people don’t really understand how ChatGPT functions makes the whole thing quite absurd. The convoluted conversations surrounding AI separate those who know how it works from those who don’t. With many people having little to no idea of the technical processes involved in AI decision-making, it is only rational that people fill the gaps in knowledge with stories.

And, if this new technology is powerful enough to determine the way we perform our humanity, how do we relate to something that potentially has power over us? If, for example, AI ends up exercising power over us, then the non-organic intelligent technology possesses qualities and capabilities akin to something of a god. A god who can decide what is good and what is bad in real time. Interfacing with the physical world through digital devices, this god-like intelligence will be able to reward desirable actions and punish undesirable ones. Unlike the Judeo-Christian God, this god won’t be a matter of faith. This will be a non-organic entity that we cannot deny exists in convergence with organic life. Its actions will have direct consequences for the way we live, unlike the indirect presence of previous gods, whose actions and reactions were very much a product of the human imagination. And, at least at the beginning, this god could very well be a tool used by those with the power to control it.

In technologically reliant societies, we face the potential of developing real god-like intelligence. Although this god is not our creator, it will likely create the conditions that restrict and give form to our everyday lives. Seeking enlightenment during our time of angst, some of us build a perception of AI as a god capable of saving us or annihilating us through Armageddon. If modernisation killed God, then we have desperately used creative agency and techne to craft a new god. A god that is nowhere near the imaginary ideal of an infallible being beyond human faults. The artificial and digital gods we are currently creating are based on our own knowledge and narratives. Limited, biased, partial and potentially dangerous.

I’ve named this blog post “The Absurdity” for a reason. I think that placing AI within a technological evolutionary framework reveals the stakes. We humans will most likely be competing with a new form of intelligence. If artificial intelligence is left unchecked and uncontrolled, the capabilities of AI may very well surpass the cognitive capacity of the human brain, leaving us at the mercy of a superior form of intelligence. For me, the AI concept elucidates the paradoxical self-awareness of human limitations. We have always needed a god, or something greater, to keep us feeling secure. Because we are aware of our terminal limitations, we make an active effort to overcome them through innovation. Our struggle with an almost paralysing self-awareness has led us to a point where we have invented a rival intelligence. Although this rival intelligence may be a massive help, it could also be a massive hindrance to the social worlds that we have built over thousands of years. All this is especially absurd because, in the absence of divine meaning, we created a caricature of the divine. Ultimately, what I find absurd about the forecasted development of AI is the fact that we are the makers of our own future. Humans have come to a point where designing and adopting a rival intelligence appears to be the best solution to our current, mostly self-induced, predicaments.

Humanity just can’t seem to sit still. We keep adding layers of complexity and separating ourselves from the problems by ascribing the responsibility for solving them to a fleshless intelligence. We continue to move away from the world instead of just sitting in it and giving thought to the why. We want more. To achieve more, we make more. Which paradoxically gives us more to think about. Giving us more problems to solve. Today we spend most of our time worrying about problems that are incomprehensible to the generations that came before us. These problems are mostly abstract, and only important because we all agree they’re important.

We have become complicated. And, because of our propensity for complication, we no longer have time to be ourselves. To be individual selves in a network of other individual selves that make up the dividual whole. A whole that is located and operates in the eternal now, which is all we have. But our behaviour in this present moment is no longer just a human issue. Foolishly, we are giving parts of our humanity to something that is not us. Through our seemingly ubiquitous devices, we are opening a door through which non-human intelligence can influence real human outcomes.

Moving forward, will we have to factor non-human intelligence into what it means to be human? Will the notions and ideas of humanity be revisited and changed? Will the wisdom of the past become obsolete because we are no longer just human, but instead cyborgs? Continuing into the future, will we arrive at a point where we have merged with artificial intelligence? Paradoxically, and contrary to our own preservation, could we end up establishing a new epoch where humanity is replaced with a human-AI hybrid intelligence?

The concept of AI is a mirror. It shows us that we desire to create an intelligence that mimics our own human intelligence. Ultimately, by engaging in this Sisyphean quest, we develop a more informed understanding of ourselves. Maybe what is absurd about this grand endeavour is that there is no guarantee that AI will be successful. Even more absurd are the different narratives and meanings we attach to a technology that has yet to rival human intelligence. The fact that, in some cases, we pin our hopes and desire for change on AI is also pretty absurd. At the moment, we haven’t arrived at the point where AI has transformed the way we live. Until then, we could choose to use the time to focus on the present. Before we know it, time will move on and we will arrive at a future where AI may destroy, enhance, barely affect, or not affect our societies at all.

AI might be our ruin, but there is a chance that it might be part of our salvation. On this absurd journey of developing artificial intelligence, maybe, just maybe, we will find out more about ourselves by experiencing each present moment of the journey into the unknown. Along the way, we might end up learning something important about why we are so complex and so absurd.

After reading through my work, I think that this blog post doesn’t really require a conclusion. It’s just my thoughts on other people’s thoughts surrounding human coexistence with non-human artificial intelligence. At this preliminary stage of AI development, I think it is important not to get carried away, but instead to try to enjoy what we do have. It may sound cliché, but I think we should start to really appreciate and enjoy the absurdity of what we are. Because after this moment comes to pass, it might be too late. At the current speed of technological change, who knows where we might end up in the short, medium and long term. But let’s not get distracted by what could be, and instead try to find contentedness in a time defined by uncertainty, contradiction and complexity.