Gokul NA and Nikhil Ramaswamy met when they joined National Instruments (NI), a US-headquartered leader in Test, Measurement & Automation, as Applications Engineers. At NI, their paths diverged: Nikhil became a Key Accounts Manager responsible for $1.5M in annual sales from key accounts such as GE, Honeywell and Bosch, while Gokul became a Machine Vision and Embedded Specialist for a large geography spanning Eastern Europe, India and South East Asia. It was at NI that they were able to study the problems faced in manufacturing automation and the gaps in machine vision technology. Even for a company like NI, a leader in the machine vision space alongside the likes of Cognex and Keyence, machine vision was notoriously difficult to solve. Of every 10 customer opportunities NI received, it could technically solve only three, and those three were simple identification problems alone. They found that whenever these machine vision principles built for identification were applied to more dynamic applications like robotics, they consistently failed. That is when they identified concrete gaps in vision technology for robotic guidance and decided to step out in 2015 to solve the problem. The founders subsequently consulted for select manufacturers such as Sansera Engineering, Timken Bearing and GE X-Rays, delivering over 30 customised vision-guided robot solutions that incumbents had previously been unable to solve. After this string of successes in custom-designed dynamic vision solutions for robotics, the founders raised a seed round from deep-tech micro VCs in August 2019 to build a universal hardware and software platform for vision-guided robotic manipulation.
What are your key products and solutions in the robotics space?
CynLr is focussed on solving the vision-guided robotic manipulation problem: enabling robotic arms to learn how to pick, orient and place objects even when they are presented in a cluttered or unpredictable way. Today, robot arms can only execute pre-programmed trajectories and cannot pick an object unless it has already been located and aligned very precisely beforehand. The ability to pick from a random pile and know how to manipulate accurately has been an elusive, globally unsolved problem in robotics for over five decades. Through over a decade of fundamental research in dynamic vision, CynLr has built a human-inspired dynamic vision hardware and software platform that can solve this elusive problem.
What are your target client segments, and how do their businesses benefit from your deep-tech solutions?
Our go-to-market targets are discrete manufacturing industries like automotive, electronics, white goods, aerospace and jewellery, in addition to select use cases in e-commerce warehousing and logistics. Customers who deploy robots enabled with CynLr's visual intelligence can automate previously non-automatable tasks effortlessly. For instance, nowhere in the world today can a robot pick a bolt from a bin while assembling a car or a two-wheeler and place it such that the bolt is aligned to the screw hole and the first two threads are tightened without slippage. Torquing the bolt can be automated, but this coordinated, oriented placement of the bolt is very difficult to automate, because it requires not only visual guidance but also tactile feedback and knowledge of how to handle the bolt. Look around you: almost every product you use contains bolts, screws or other fasteners, and today each of those fasteners must be placed physically by a human. It is estimated that close to 70 per cent of the effort in producing a product goes into fastening, and that remains manual across all product lines. We can automate this and many other previously non-automatable tasks using our visual intelligence technology.
For tasks that are automatable today using custom infrastructure, the CynLr platform allows a much simpler deployment, with no need for hardware customisations from task to task or object to object. Customers can expect to deploy robots 70 per cent faster, and the resulting robot cell is adaptable rather than rigidly tied to a specific product.
Your focus on innovation.
The problem we are solving is a globally unsolved technical challenge, one that involves fundamental research in neuroscience and the science of how humans and animals see and learn. This problem has always seemed a decade away from being solved.
We have been researching this problem fundamentally for the last 10 years and have built several world-first innovations to solve it. We have built a hardware platform comprising over 400 carefully designed parts. We have built the world's first convergence- and auto-focus-based event imaging camera that looks beyond colour and depth. In the process, we have also built a depth perception camera with at least 10x the speed and 2x the resolution of the industry's best. We are also building, from the ground up, a strong intelligence framework based on deep reinforcement learning to learn an object visually from a manipulation point of view. To the best of our knowledge, all of these are unique innovations even from a global perspective.
What are your plans for the future, including geographical expansion and new tech products?
We have now built and validated the hardware and core software layers needed to solve this global challenge. We are looking to raise further capital this year to commercialise the technology and go to market. We already have pilot interest from Indian automotive OEMs and component manufacturers.
While India is a fantastic test market in which to deploy initial systems and validate our platform, the demand for such advanced robotics capabilities is far stronger, and the price points far more attractive, in advanced manufacturing economies like the US, Japan, South Korea and Western Europe. After product validation in India, we intend to set up shop in the US and Japan before globalising our sales to more countries.
Please share your views regarding the growth of deep tech industrial robotics in India.
It's a great time to be a deep-tech startup in India. A few years after we began in 2015, deep tech became a theme of mild interest to VCs, in a world otherwise dominated by e-commerce, internet-based consumer apps and B2B SaaS. Now we see several seed funds focussed exclusively on deep tech, like Speciale Invest, with some of whom we are fortunate to be associated, and larger Tier 1 and Tier 2 VCs have begun investing early-stage and growth capital in startups with a deep-tech, IP-based thesis. These are great trends. While the bulk of the heavy lifting in gaining traction for deep-tech startups in India has been led by the sectoral themes of space tech, EVs and life sciences, robotics and intelligent machines have more recently begun figuring in several funds' areas of interest. We are hopeful that this trend will continue and that more startups like us will germinate from India to solve for the world.