By Anand Rajan – Mission Leader & Co-Founder – Apurva.AI
This decade has already witnessed two transformative forces: COVID and ChatGPT—both evolving and profoundly impacting our social fabric. While one was a virus that shook the foundations of global health, the other went viral, swiftly transforming human-computer interaction and redefining the boundaries of AI-driven capabilities.
In our quest for solutions that outpace challenges, could exponential models like GPT hold the key to fulfilling our vast knowledge needs? Does the widespread adoption of such AI suggest its potential prowess in addressing these pressing concerns?
However, there are also palpable concerns about the negative repercussions of these technologies. Are these concerns warranted? Given that models like GPT might exhibit biases, hallucinate fanciful outputs, or even deliver outright incorrect responses, the stakes are clearly high if these tools are misused. Yet isn’t an AI just a piece of software, and should we not place trust in our collective human acumen to steer these technologies toward societal betterment? Could the overarching benefits outweigh these inherent challenges?
Yes. All of this is true. But I am also reminded of what Edward O. Wilson, one of the greatest natural scientists of our time, said: “We have Palaeolithic emotions, medieval institutions, and godlike technology,” underscoring an important incongruity between Human Intelligence (HI), System Intelligence (SI), and Artificial Intelligence (AI).
Tristan Harris, Co-Founder of the Center for Humane Technology, eloquently reflects on our initial interactions with AI through social platforms—originally designed to empower voices, foster connections, and create communities. However, these platforms unfolded in unforeseen ways. Instead of nurturing these original intentions, social platforms raced to capture attention, breed addiction, fuel polarization, and perpetuate the troubling phenomenon of doom scrolling. As more platforms emerged, these detrimental aspects grew more pronounced.
The first wave of AI was curatorial, focused on retaining user attention by organizing and recommending existing content for sustained engagement. In contrast, the current wave of AI is generative, capable of producing new content across a variety of formats and demonstrating significantly more advanced abilities.
This evolution from curatorial to generative AI signifies a profound shift in the technology’s capabilities, presenting both immense opportunities and formidable challenges. As we delve deeper into harnessing AI’s potential, it becomes increasingly vital to steer this evolution responsibly.
The existing challenges with generative AI systems, such as hallucination, bias, explainability gaps, and the pervasive threat of deep fakes, stand as formidable obstacles. However, I remain optimistic that significant strides will be made quickly in addressing and mitigating these challenges.
A proactive and deliberate approach, anchored in three crucial pillars (robust ethical frameworks, continual advances in research, and responsible deployment practices), will pave the way for transformative change. Future evolution may hinge not solely on model sophistication, but also on the nature and quality of the data underpinning these models, as well as on reinforcement learning from human feedback (RLHF). Herein lies a potential shift:
Firstly, meticulously curated data will take precedence. I envision a future where the focus on data extends beyond its quality: data would carry its context (national, regional, local), be attested to amplify trust, draw on culturally diverse datasets, incorporate community voices, and potentially come under regulatory oversight, all of it traceable and accountable through the Large Language Models (LLMs) it powers. This evolution could become the cornerstone for constructing safe and reliable systems that work for society.
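To make this concrete, here is a minimal sketch of what a traceable, attested training record might look like; every field name and structure below is an illustrative assumption rather than an existing standard:

```python
from dataclasses import dataclass, field
from hashlib import sha256

@dataclass
class ProvenanceRecord:
    """Hypothetical metadata attached to one curated training document."""
    text: str
    source: str                # where the document originated
    context: str               # e.g. "national", "regional", or "local"
    language: str              # supports culturally diverse datasets
    attested_by: list[str] = field(default_factory=list)  # community or expert attesters
    regulator_reviewed: bool = False

    def content_hash(self) -> str:
        # A stable fingerprint keeps the record traceable through training pipelines.
        return sha256(self.text.encode("utf-8")).hexdigest()

# Example: a locally sourced, community-attested document.
doc = ProvenanceRecord(
    text="Guidance on monsoon-resilient farming practices.",
    source="district-agriculture-board",
    context="local",
    language="hi",
    attested_by=["community-council"],
)
print(doc.content_hash()[:12], doc.context, doc.attested_by)
```

Even a lightweight record like this, carried alongside each document, would let a model’s outputs be traced back to the communities and contexts that shaped its training data.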
Secondly, standards and specifications could emerge around the process of RLHF, in which human feedback is used to fine-tune Large Language Models. Such specifications could lay out approaches to handle safety concerns and biases while infusing values essential for societal good.
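As a toy illustration of the step such specifications would govern, the sketch below trains a reward model on a batch of human preference comparisons, the mechanism at the heart of RLHF (the embeddings are random stand-ins for encoded model responses, and the architecture is arbitrary):

```python
import torch
import torch.nn as nn

# Toy reward model: maps a response embedding to a scalar "goodness" score.
reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def preference_loss(chosen_emb: torch.Tensor, rejected_emb: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry loss: push the human-preferred response to score higher."""
    margin = reward_model(chosen_emb) - reward_model(rejected_emb)
    return -torch.log(torch.sigmoid(margin)).mean()

# One training step on a fake batch of human preference pairs.
chosen, rejected = torch.randn(4, 16), torch.randn(4, 16)
optimizer.zero_grad()
loss = preference_loss(chosen, rejected)
loss.backward()
optimizer.step()
print(f"preference loss: {loss.item():.4f}")
```

A standard for RLHF could specify, for instance, who the human raters are, how disagreements among them are resolved, and how safety-critical comparisons are weighted in losses like the one above.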
Finally, before their widespread adoption, Large Language Models (LLMs) could undergo trials involving a diverse panel of experts, drawn from various societal segments, who test these models for efficacy and suitability.
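Such trials could be supported by very simple tooling. The sketch below imagines a harness that aggregates panelists’ rubric scores into a release decision; the rubric dimensions, threshold, and scores are all hypothetical:

```python
from statistics import mean

# Hypothetical rubric dimensions a review panel might score from 0 to 5.
RUBRIC = ("safety", "factuality", "cultural_suitability")
APPROVAL_THRESHOLD = 4.0  # illustrative sign-off bar

def panel_verdict(scores: list[dict[str, float]]) -> bool:
    """Approve only if every rubric dimension clears the bar on average."""
    return all(
        mean(s[dim] for s in scores) >= APPROVAL_THRESHOLD for dim in RUBRIC
    )

# Three panelists from different societal segments score one candidate model.
panel_scores = [
    {"safety": 4.5, "factuality": 4.0, "cultural_suitability": 4.2},
    {"safety": 4.0, "factuality": 3.8, "cultural_suitability": 4.5},
    {"safety": 4.8, "factuality": 4.1, "cultural_suitability": 4.0},
]
print("approved for release:", panel_verdict(panel_scores))  # False: factuality falls short
```

The point is not the arithmetic but the gate: no single impressive dimension can compensate for a dimension the panel finds wanting.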
In navigating the complexities of AI’s evolution and societal integration, the indispensable role of collective wisdom shines through. The need for greater societal alignment, synchronizing human purpose (HI) with sustainable system design (SI) and digital transformation (AI), will take center stage.
Understanding the motivations behind our innovations, evaluating the impact of our actions, and envisioning the intended societal outcomes are all integral parts of this alignment. It’s not just about what technology can achieve but how it harmonizes with our shared aspirations. This entails comprehending the ‘why,’ ‘what,’ and ‘how’ of our collective actions across government, civil society, and the private sector. It involves fostering inclusive dialogues that engage diverse voices while acknowledging the responsibility that comes with creating and deploying AI that serves the greater good of society.
Trust, inclusion, and emergence stand as cornerstones in such system design and serve as guiding principles toward a future marked by happiness, prosperity, and advancement.