Express Computer

Beyond ChatGPT: Exploring the real future of GenAI


By Anusha Rammohan, Co-Chair, Data and AI Working Group, IET Future Tech Group

Many consider the release of ChatGPT a watershed moment for AI. ChatGPT and LLMs like GPT demonstrated indisputably the capability of advanced AI models to display almost human-like intelligence in processing information, along with remarkable conversational prowess. Predictably, the generative AI space has since seen such frenzied activity that even practitioners sometimes struggle to keep pace with the latest advancements. Two years on from ChatGPT's release, the bustling generative AI scene has seen numerous advances, not just in core technology development but in potential application spaces as well.

Looking back at the evolution of generative AI in 2023, foundational models have seen significant progression not just in text processing and conversational AI, but in image and video generation as well. Conversational AI models have become faster and more accurate, with particular focus on reducing bias and “hallucinations”. Moreover, encouraged by the success of LLMs for English, many independent initiatives to extend language support beyond English have gained traction. In parallel with these exciting advances, generative AI models for images and video have made significant inroads as well. The hyper-realistic outputs produced by some of the image generation models are nothing short of impressive. While AI-driven video generation still has a long way to go, the underlying technology has improved in giant leaps over the last year or so. The next frontier for foundational generative AI, however, appears to be multi-modal generative AI: models that can process and understand different types of information, including text, images and video. With many in the field throwing their hats into the multi-modal ring, models such as Google’s Gemini and OpenAI’s GPT-4V have emerged, demonstrating both the capability and the usefulness of multi-modal generative AI.

The biggest beneficiaries of the generative AI boom have been applications and use cases involving text-based processing. Whether it is text content generation, information extraction, summarization, transcription, translation or advanced query and search, LLMs have become the go-to choice for fully or partially automated solutions. Conversational AI models have found widespread acceptance for everyday activities such as looking up information, creating prose or poetry, writing or debugging code, editing documents and so on. With ChatGPT boasting about 180 million users and Google’s Gemini attracting about 300 million monthly visits, one could argue that the concept of a conversational AI companion is no longer far-fetched for the average user.

This somewhat tumultuous generative AI scene raises the question: where is generative AI headed? While opinions vary, almost everyone agrees that generative models will continue to become faster, more capable and more sophisticated. Although it may be a far cry from AGI (artificial general intelligence), generative AI at its current pace of research and innovation will continue to push the boundaries of what AI is believed to be capable of. While research in foundational models and related applications is expected to attract the most investment of time, money and resources in the near future, there are some interesting adjacencies that will see traction as well.

As the initial hype around generative AI ebbs, organizations are starting to contend with the realities of building useful and economically viable generative AI applications. While large generative models have found numerous applications, a rising clamour for more reasonably sized but purpose-built generative AI models is driving research into miniaturization. Models such as Meta’s LLaMA, Falcon LLM and Microsoft’s Orca demonstrate that smaller models can indeed achieve reasonable accuracy for specific applications. In fact, there is an increased research focus on “small language models” for cost-effective use in real-time applications that demand speed and reduced latency. A related but less talked-about area of increasing attention is the semiconductor and hardware space, focused specifically on generative AI applications. In response to demands for faster, more capable and cost-optimized generative model deployment, hardware majors and startups alike are realigning their efforts towards hardware optimizations for generative AI workloads.

While the progression of generative AI as a transformational technology is inevitable, it would be unwise to ignore the potential social upheaval it could leave in its wake. The foremost concern is around widespread job losses and their impact on the social and economic fabric of the world, a concern that is already playing out in some sectors. But the true nature of this upheaval is essentially a redefinition of what a “skilled” human employee is. In simple terms, if AI can do a job faster or cheaper, it is no longer a “skill” for which human employees will be sought. While that does somewhat constrict where and how human beings can contribute and earn a living, it also opens up the many other roles and skills that have arisen around generative AI. To create useful AI, humans will still need to architect, train, validate and monitor these new generative models. More importantly, generative AI is contributing to the emergence of entirely new economic opportunities by creating new business models, products and services while expanding global reach and accessibility.

However, the future of generative AI cannot be determined solely by market forces. There is an urgent and globally recognised need for adequate safeguards to ensure the responsible development and use of generative AI. Firstly, a framework is needed to address privacy and security concerns around the data used for generative AI training and inferencing. Secondly, policies need to be in place to ensure transparency from generative AI practitioners about the data used, as well as the accuracy, reliability and possible bias of their models. As the outputs of generative models become harder and harder to distinguish from reality, it is imperative that users be explicitly informed when they are presented with AI-generated outputs. Governments, academia and industry bodies will need to work in concert to strike a balance between technological progress and regulatory frameworks, ensuring that the world reaps the benefits of generative AI without suffering its pitfalls.
