My goal is to emphasise the human aspect of AI to create more personalised, user-centric technology: Mukesh Jain, EVP & CTO, Capgemini

In a recent conversation with Express Computer, Mukesh Jain, Executive Vice President and Chief Technology Officer, Capgemini, shares his insights on the current landscape of digital transformation, AI readiness, and automation. Jain discusses the evolving awareness of sustainability within technology, the critical need for security in AI integration, and the relationship between AI technologies and employment in the tech sector. He emphasises the importance of bridging the skills gap in AI education and outlines the transformative potential of GenAI for Capgemini’s clients.

With companies around the world focusing on digital transformation, becoming AI-ready, and incorporating automation into their workflows, do you think sustainability is sometimes being overlooked in this pursuit?

Right now, I’d say awareness of sustainability is only just emerging. We’re in the very early stages of understanding what it really means. Today, we all have laptops and mobile phones with far more computing power than before. Because of this, even when we’re writing small programs or using AI, especially GenAI, we’re not always conscious of resource usage. It’s readily available, so we tend to use it without much thought.

That awareness, being more conscious rather than necessarily frugal, is missing. In the past, when computing resources were limited, we were much more careful because we had to be. Now, with terabytes of memory at our disposal, we use it freely. However, if we were more mindful, especially at a foundational level, we could optimise resource use far better.
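To make that mindfulness concrete, here is a minimal Python sketch (an illustration, not anything Jain describes) that uses the standard tracemalloc and time modules to compare the memory footprint of an eager list against a lazy generator for the same computation:

```python
import time
import tracemalloc

def sum_squares_list(n):
    # Materialises every intermediate value in memory at once.
    return sum([i * i for i in range(n)])

def sum_squares_generator(n):
    # A generator computes values on demand, keeping peak memory flat.
    return sum(i * i for i in range(n))

def profile(fn, n):
    tracemalloc.start()
    start = time.perf_counter()
    fn(n)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"{fn.__name__}: {elapsed:.2f}s, peak {peak / 1_048_576:.1f} MiB")

if __name__ == "__main__":
    for fn in (sum_squares_list, sum_squares_generator):
        profile(fn, 5_000_000)
```

Both functions produce the same answer; the generator version simply avoids holding millions of intermediate values in memory at once, which is exactly the kind of foundational choice the mindfulness argument points at.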

Do you think security is sometimes overlooked when companies aim to become AI-ready, and should it be a key parameter in incorporating AI into workflows?

There are two parts to AI. When it comes to GenAI, which is relatively new and a small subset of AI, people use tools like ChatGPT without realising the risks. They may unknowingly run into copyright issues, or fail to understand that the data they feed into these tools can be used to train models, potentially leaking sensitive information.
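As an illustration of guarding against that kind of leakage, the hedged sketch below pre-screens text before it would be sent to any external GenAI tool. The patterns and the redact helper are illustrative assumptions, not a real Capgemini or ChatGPT safeguard; a production deployment would use a vetted DLP library:

```python
import re

# Illustrative patterns only; real policies belong to the security team.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern before the
    prompt leaves the organisation's boundary."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise: contact jane.doe@example.com, card 4111 1111 1111 1111"
    print(redact(raw))  # now safer to forward to an external model
```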

Additionally, about three years ago, I coined the phrase “AI in cyber, cyber in AI.” This highlights the need to secure AI models. If someone gains access to a model, they could retrieve past data, including personal information. On the flip side, AI can be used in cybersecurity to predict and mitigate cyberattacks. However, most AI courses don’t focus on securing models, and cybersecurity teams, while skilled, often lack AI-specific training. This gap is becoming more evident, leading to news about models being hacked or data being leaked. This happens mainly because people aren’t conscious of the security risks when creating AI models.

How do you see AI playing a role in mitigating cybersecurity threats?

AI plays a crucial role in identifying and mitigating cybersecurity threats by recognising patterns in data. Every action, or lack of it, generates data that can be monitored. For example, during my time at Microsoft on the Bing team, we used AI to identify potential Denial of Service (DoS) attacks by analysing search patterns. If an unusual number of requests were made from the same IP address in a short period, it indicated a potential threat. This kind of pattern recognition helps detect malicious activities.
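The pattern Jain describes can be sketched as a sliding-window counter per IP address; the window length and threshold below are illustrative assumptions, not the values used at Bing:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # illustrative sliding window
MAX_REQUESTS = 100    # illustrative per-IP threshold

# Per-IP queue of recent request timestamps.
recent = defaultdict(deque)

def is_suspicious(ip, now=None):
    """Record one request from `ip` and report whether its rate over
    the last WINDOW_SECONDS looks like a potential DoS source."""
    now = time.monotonic() if now is None else now
    q = recent[ip]
    q.append(now)
    # Drop timestamps that have aged out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS

# Example: 150 requests from one address within a second trips the flag.
for i in range(150):
    flagged = is_suspicious("203.0.113.7", now=i * 0.005)
print("suspicious:", flagged)  # True
```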

AI can learn from past attacks, but the challenge lies in predicting and preventing attacks that haven’t occurred yet. Hackers evolve, using tactics like randomising ping requests to evade detection. Therefore, AI needs to stay ahead by identifying anomalies in usage patterns—like the differences between weekday and weekend traffic. If there’s a sudden spike in activity outside of expected patterns, it could signal a potential threat.
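One minimal way to encode that weekday-versus-weekend idea is a per-hour-of-week baseline that flags counts several standard deviations above the historical mean; all numbers here are illustrative:

```python
import statistics
from collections import defaultdict

# history[hour_of_week] -> request counts previously seen at that hour;
# hour_of_week = weekday * 24 + hour, so weekends get their own baselines.
history = defaultdict(list)

def record(weekday, hour, count):
    history[weekday * 24 + hour].append(count)

def is_anomalous(weekday, hour, count, z_threshold=3.0):
    """Flag `count` if it sits more than z_threshold standard deviations
    above the historical mean for this hour of the week."""
    samples = history[weekday * 24 + hour]
    if len(samples) < 5:   # not enough history to judge yet
        return False
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples) or 1.0
    return (count - mean) / stdev > z_threshold

# Example: quiet Sunday mornings average ~40 requests; 500 is a spike.
for week in range(8):
    record(weekday=6, hour=3, count=40 + week)
print(is_anomalous(weekday=6, hour=3, count=500))  # True
```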

Ultimately, AI in cybersecurity involves thinking like both a hacker and a defender. It’s about creating systems that can detect threats autonomously. This is something I discuss in workshops with product managers and business professionals, emphasising the need for a blend of expertise in AI, cybersecurity, and hacker strategies to build effective solutions.

There are concerns about AI and automation leading to job losses. How do you view the relationship between AI technologies and employment in the tech sector? 

About 35 years ago, when computers started coming to India, there was a fear that many jobs would be lost. But what actually happened? It became a net job creator. I’ve been following AI for 32 years, and similar concerns arose—people thought AI would reduce the need for workers. But I see it differently. With the same number of people, we can double our work output and grow the business. Some reports claim a 40-70% productivity gain from AI, but I believe 20-30% is more realistic. If we save 20-30% of our time, we can focus on being more innovative.

Regarding job loss: yes, if someone insists on sticking to manual tasks without adapting, they may struggle. Look at banking; the workforce has shrunk, but those who upskilled found bigger roles. Now, if people don’t get trained in AI, it will be challenging. In my role as the AI head of a charity organisation, we focus on training communities in AI to help them stay relevant.

If you’re trained in AI, you’ll not only keep your job, but you’ll likely get a better one. Of course, if someone refuses to adapt, their skills will become outdated. Even if programming becomes more automated, I may need fewer programmers, but those who aren’t needed can go on to innovate in other areas. So while there may be job displacement, I view it as an opportunity for growth and innovation.

You mentioned a skills gap that requires individuals to upgrade their abilities. What strategies do you believe can effectively reduce this skills gap?

The new National Education Policy emphasises that individuals should start learning AI fundamentals early. However, many people mistakenly believe that merely using AI tools, like ChatGPT, makes them proficient. Knowing how to operate a tool is different from truly grasping AI. Being AI-literate means understanding the principles behind AI and applying them creatively.

Through workshops at Capgemini and various colleges, I’ve promoted “data-driven innovation” to encourage people to explore AI’s possibilities. For example, jewellery manufacturers and construction managers might not initially see AI’s relevance, but once they understand it, they start seeing applications in their own work. This curiosity is crucial.

When top management demands AI integration, there’s a push to train employees, which is effective. Product managers, MBAs, and engineers alike need data literacy—knowing how to gather, manage, and utilise data beyond just Excel. Every business activity generates data that needs capturing and processing for insights and decision-making.

A tiered skills model is essential: Level 1 for basic AI awareness, progressing to Level 5, where advanced skills are critical. Without such skills, job security could be at risk.

How do you see AI transforming enterprise architecture to streamline operations for both IT and business teams?

AI in enterprise architecture is an evolving field. I’ve been in architecture for nearly 30 years, and even with the best designs, critical systems can still fail under pressure, like during high-volume sales events or on busy railway sites. Consider Google and Microsoft: while they occasionally experience outages, large-scale failures are rare. For example, after Microsoft’s significant six-hour outage in 2006, we built a failsafe, self-monitoring system with Satya Nadella that drastically reduced failures.

A failsafe design is critical, and while experienced architects can create robust systems, AI is essential for real-time monitoring, identifying issues, and handling sudden traffic surges. Some architects resist learning AI, but those who have embraced it are building stronger, future-proof systems. Today, I won’t approve an architecture that doesn’t incorporate data insights and AI planning. In short, AI is the key to building future-ready enterprise architecture.
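As a rough sketch of what such self-monitoring might look like in practice (the health endpoint and recovery action below are hypothetical, not the system built at Microsoft), a watchdog loop probes a service and triggers a failover action once consecutive checks fail:

```python
import logging
import time
import urllib.request

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

HEALTH_URL = "http://localhost:8080/health"  # hypothetical endpoint
CHECK_INTERVAL = 5    # seconds between probes
FAILURE_LIMIT = 3     # consecutive failures before acting

def healthy():
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def failover():
    # Placeholder: restart the service, shift traffic, or page an operator.
    logging.warning("Failure threshold reached; triggering failover action")

def monitor():
    failures = 0
    while True:
        if healthy():
            failures = 0
        else:
            failures += 1
            logging.info("Health check failed (%d/%d)", failures, FAILURE_LIMIT)
            if failures >= FAILURE_LIMIT:
                failover()
                failures = 0
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    monitor()
```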

What common challenges do you see your clients facing when they’re onboarding and trying to integrate digital transformation or automation into their workflows?

In my experience with several thousand clients, we typically see three main types when it comes to digital transformation and automation.

First, we have clients who are mature in their approach; they know exactly what they want, document it clearly, hand it over to us, and we execute. This is straightforward because we can efficiently build a robust solution based on our research and development expertise.

The second type includes clients who identify specific problem areas and need guidance to enhance cost-efficiency, quality, and speed. These clients often request solutions to reduce human workload and improve overall workflow efficiency. We collaborate, consult, and iterate to develop solutions that address these needs, focusing on innovation and continuous improvement.

The third type of client, which I find most exciting, comes to us with a blank slate, seeking our insights. They say, “Here’s our operation; tell us what can be optimised.” This approach allows us to apply Capgemini’s ASE methodology, innovation exchange, and AI capabilities. We explore potential improvements and present them, often to the client’s amazement, as they realise what’s possible.

Clients generally know their known problems, and solutions are often readily available. However, the most valuable work lies in exploring the unknown areas—problems they haven’t yet identified, where Capgemini’s thought leadership and innovation bring significant value. This is a common challenge across the industry, not just for us.

Given rising cloud costs, many enterprises are reconsidering their cloud strategies. With the need for on-demand data access, what do you see as the best path forward: staying cloud-only, moving to on-premises, or adopting a hybrid model?

Cloud costs are generally higher than on-premises because there’s an added service charge. Initially, the idea was that if you’re using cloud resources only partially, it’s more cost-effective. But if you need dedicated servers and constant access, then costs increase. High-end models, such as those requiring GPUs for AI, add to this expense. 

For most enterprises, a hybrid model is ideal. At Capgemini, we’ve implemented this successfully for clients by maintaining high-end GPUs on-premises, which is often cheaper than on the cloud. We also add a governance layer for scheduling tasks; for example, if a GPU model doesn’t need immediate execution, it can be scheduled for off-peak hours. This approach, reminiscent of older resource management models, is still effective.
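A minimal sketch of such a governance layer, assuming a simple urgent/deferred split and an illustrative 10 pm to 6 am off-peak window, might queue GPU jobs like this:

```python
import datetime
import heapq

OFF_PEAK_START = 22   # illustrative: 10 pm
OFF_PEAK_END = 6      # to 6 am

def next_off_peak(now):
    """Earliest time at or after `now` inside the off-peak window."""
    if now.hour >= OFF_PEAK_START or now.hour < OFF_PEAK_END:
        return now
    return now.replace(hour=OFF_PEAK_START, minute=0, second=0, microsecond=0)

class GpuJobQueue:
    """Governance layer: urgent jobs run now, others wait for off-peak."""
    def __init__(self):
        self._heap = []   # (run_at, sequence, job_name)
        self._seq = 0

    def submit(self, job_name, urgent, now=None):
        now = now or datetime.datetime.now()
        run_at = now if urgent else next_off_peak(now)
        heapq.heappush(self._heap, (run_at, self._seq, job_name))
        self._seq += 1
        return run_at

queue = GpuJobQueue()
noon = datetime.datetime(2024, 1, 15, 12, 0)
print(queue.submit("fraud-model-inference", urgent=True, now=noon))  # runs at noon
print(queue.submit("nightly-retraining", urgent=False, now=noon))    # deferred to 22:00
```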

A hybrid model can also improve data efficiency. Instead of transferring raw data to the cloud, only insights can be sent, reducing energy consumption and supporting sustainability. It’s also essential to minimise repeated computations to optimise performance.
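To illustrate the insights-not-raw-data point, a small hypothetical edge node could reduce thousands of readings to one compact summary before anything leaves the premises:

```python
import json
import statistics

def summarise(readings):
    """Reduce raw sensor readings to a compact insight payload.
    Shipping this summary instead of every reading cuts transfer volume."""
    payload = {
        "count": len(readings),
        "mean": round(statistics.fmean(readings), 2),
        "max": max(readings),
        "min": min(readings),
        "over_threshold": sum(1 for r in readings if r > 75.0),  # illustrative limit
    }
    return json.dumps(payload)

# 10,000 raw readings collapse to one small JSON document for the cloud.
raw = [70.0 + (i % 20) * 0.5 for i in range(10_000)]
print(summarise(raw))
```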
