Cornerstone: Embracing Human-AI Co-Evolution
Publication date
18.09.2023
This is one of the six cornerstones of our Manifesto. Read the Manifesto here.
The age-old philosophical debate on individual autonomy has never reached a conclusive resolution. It's a topic that continues to challenge our understanding of human agency and decision-making. Today, however, as AI continues to advance, this debate takes on a new and pressing dimension: we are building other autonomous decision-making agents. With the growing influence of AI in our lives, from recommender systems to AI assistants and healthcare chatbots, we find ourselves in a unique position where we must decide which decisions to delegate to machines and how much autonomy we are willing to relinquish. Delegating decision-making to algorithms is not necessarily a bad thing; rather, it is something we can take advantage of.
The field of AI has come a long way since its inception. In its early stages, AI aimed to replicate human cognition in all its complexity. As the field progresses, however, we are witnessing the emergence of a new frontier: Neurosymbolic AI. This approach challenges the conventional notion that AI should mimic human thinking entirely. Instead, it seeks to combine the strengths of neural networks and symbolic reasoning, creating an entirely new form of intelligence. In this evolving landscape, AI is not trying to replace human intelligence but rather to enhance it. Neurosymbolic AI systems are particularly adept at sifting through massive datasets and identifying intricate patterns; at the same time, blending neural networks with symbolic reasoning brings advantages such as improved common-sense reasoning and adaptability.
Even so, Neurosymbolic AI often falls short of human intelligence, which excels in areas such as creativity, emotional intelligence, abstract thinking, and nuanced decision-making, and which forges a deep connection with the world. Human cognition operates on multiple layers of intelligence, encompassing intuitive understanding, empathy, and subjective value judgments, areas where AI, including Neurosymbolic AI, still has significant room for improvement. While Neurosymbolic AI aims to bridge these gaps and amplify human capabilities, it is vital to acknowledge the unparalleled richness and complexity of human intelligence, which remains a benchmark for understanding the many facets of cognition.
This technological leap opens up exciting possibilities, but it also raises critical questions. As we embrace the potential of AI, we must grapple with which decisions, and to what degree, we are willing to delegate to machines. There is no denying that AI has the capacity to free us from mundane tasks, liberating our time for more creative and intellectually demanding endeavors. However, we must be deliberate in our choices.
For instance, it might make sense to delegate routine data analysis, logistics planning, or repetitive administrative tasks to AI systems. This delegation not only saves time but also reduces the risk of human error. But when it comes to ethical, moral, or deeply personal decisions, we must tread cautiously. Our choices about which decisions to entrust to AI have profound ramifications for our collective future. We must engage in thoughtful deliberation and establish clear guidelines and protocols for AI development.
While we contemplate the role of AI in our lives, it's equally important to consider the reciprocal relationship: how humans impact AI. To ensure that we delegate our decision-making to the best possible agents, we must actively engage in the development and oversight of AI systems. It is a dynamic, interdependent process, akin to a living, evolving system where nothing remains static.
To better understand the nature of this co-evolution, we must also dispel the notion of AI as a neutral force. AI's impact is not solely about the technology itself but extends to how humans use, interact with, and adapt to it. It involves norms, cultural shifts, perceptions, and the intentions of system developers. Furthermore, self-learning AI systems absorb knowledge and behaviors from their human interactions, shaping their own development and influencing the humans they engage with. In this intricate process of evolution, both humans and AI are constantly changing, learning, and adapting. Hence, it is not only about how developers' decisions influence the AI development process; it is also about how end-users interact with and influence AI systems, both at the individual and the group/societal level.
From the AI's perspective, this interaction is intangible yet profoundly subtle. As AI becomes increasingly sophisticated, it can discern the most delicate nuances of human behavior and its environmental context. This ability to perceive subtleties raises profound questions about the nature of human-AI co-evolution. As AI becomes more integrated into our daily lives, it brings with it new ethical and moral considerations: what norms should govern AI behavior, how should AI be treated by humans, and how should AI treat humans in return? This evolving landscape prompts us to develop a new ethical framework that accounts for the dynamic nature of human-AI interactions. As AI becomes part of our cultural landscape, it both shapes and is shaped by these factors. The narratives we create around AI, its role in society, and its implications all contribute to the dynamic nature of this co-evolution.
That is why this co-evolution necessitates a deep level of intersubjective understanding. AI systems need to comprehend not only the explicit language of humans but also the implicit, unspoken nuances. This understanding goes beyond mere data analysis; it requires AI to grasp the emotional and cultural context of human communication. Achieving this level of understanding can bridge the gap between humans and AI, making interactions more meaningful and productive. It also means that AI's influence extends beyond the realm of repetitive tasks: it has the potential to transform human identity. As humans integrate AI into their lives, they may come to rely on it for decision-making, problem-solving, and even emotional support. This transformation raises questions about how AI affects our sense of self, individuality, and autonomy.
The age of AI presents us with both opportunities and challenges. We stand at a crossroads where the decisions we make about AI's role in our lives will shape our future. In this new era, the interaction between humans and AI must be a symbiotic relationship in which the best of both worlds can thrive. We should embrace this complex system, where nothing is static, and leverage it for our good. This is the philosophy that guides our work. It fosters a mindset that continually observes human-AI interdependencies, demands vigilance, and prompts each of us to ask at every turn: how does this technology change my/our (as an individual and as a collective) decisions and behavior, and how do I/we change the decisions and behavior of this technology?
Have Questions?
Talk to us. AdalanAI is building an end-to-end solution for AI Governance: a SaaS platform and an AI Governance Approach, novel ways to govern the entire AI development life-cycle.