Podcast Episode 18

From Detection to Perfection: AI’s Role in Detecting and Fixing Bad Code 

Amartya Jha, Co-Founder & CEO of CodeAnt AI, joins host Ryan Davies on the AiFounders Podcast Show to discuss the key role of AI in detecting and fixing bad code. Jha shares insights into his journey from experiencing rejections to ultimately being accepted into Y Combinator, highlighting the importance of persistence in entrepreneurship. He delves into the challenges faced in software development due to neglecting bad code and explains how CodeAnt AI leverages AI-driven detection and fixing engines to identify and rectify code issues automatically without disrupting logic.

Start your free month of CodeAnt AI Pro here: https://www.codeant.ai/#pricing (use code RAYANDAVIES)


Introduction and Background

Ryan:  Welcome everyone to the AiFounders Podcast Show. Our podcast is dedicated to celebrating the remarkable accomplishments of AI innovators, entrepreneurs, and visionary founders and the captivating stories behind the movements they have built. I’m your host, Ryan Davies, and I have the honor of hosting today’s episode, From Detection to Perfection: AI’s Role in Detecting and Fixing Bad Code, with our special guest,  Amartya Jha. Thank you so much for joining us here today. I cannot wait to dive into this one.

Amartya: Same here, Ryan.

Ryan: This is going to be fantastic. For our audience, here’s just a little bit of background. Amartya is the co-founder and CEO of CodeAnt AI, which is backed by Y Combinator, and an entrepreneur who failed three times with startups in the last three years before getting accepted into the program. We’re going to have a fun story about that right away. But really, we’re going to focus on what bad code is, how it is holding startups and businesses back, and CodeAnt AI, a dev tool that enforces clean code effortlessly. It is fault-tolerant, scalable, optimized for CPU, memory, and latency, and it fixes all kinds of code issues with every change, without breaking logic, through detections and auto-fixes. We’re going to dive more into it, and I’m going to let the expert take us away on that one. But to get started, give us a little bit about your background. What led you to found CodeAnt AI, particularly in the context of addressing issues related to bad code?

Amartya: Sure, and thanks a lot, Ryan, for such a lovely introduction. I was back in India, leading a tech team. For the last four years, I have been in the tech industry, leading teams of different sizes. I always felt that, more often than not, we were just neglecting bad code. Any engineer who joins a team doesn’t own the things that were done before. They just try to build on top of it, and if the existing code is bad, the engineer doesn’t care. They only care that their changes are getting merged, and they’ll move ahead. We always found that engineers have to ship features faster, so they don’t care about fixing these things, and eventually it leads to such a big issue that we have to redo the entire code. I was at Zeta and ShareChat, and I had experiences where we had to redo the entire infrastructure code, code that had been written over 1.5 years. It is very hard to do that. Why don’t companies have a mechanism where, whenever code is changed, they can find what is bad in it and fix it? When I use this term “bad,” it’s very nondeterministic, so let’s make it deterministic: code that is duplicated 100 times, code that has security vulnerabilities, code that is dead, that you’re not using anywhere, it’s just lying there, and code that consumes lots of resources like CPU, memory, and latency. These are examples of bad code. When I was an engineer and a tech lead, I always wondered: do we have any tool to accurately find it and also fix it? Because we, as engineers, really don’t give a shit about it. Don’t just tell me 500 places where my code is bad; I want you to go ahead and fix it for me. But be sure that when you’re fixing it, you’re not breaking anything, because if you break something, I’ll throw you out. That’s exactly the problem we also faced. We thought, OK, let’s build something from the ground up. So it was July of 2023. I left my job and joined Entrepreneur First.
Entrepreneur First, for those who don’t know, is an awesome platform where you can go and get a co-founder who’s as vested as you are. So I met my co-founder there, Chinmay Bharti. Chinmay is a guy who has led software teams building HFT products and scaling them. He knows exactly how to structure software, and I had the experience of scaling it. We clicked on the same vision: we wanted bad code to be automatically fixed by AI. We started building on it. In October, we formally started CodeAnt AI, and by November, we got into YC.
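The deterministic categories of bad code Amartya lists, duplicated code, security vulnerabilities, dead code, and resource hogs, can each be detected mechanically. As a minimal illustration of one of them (a hypothetical sketch, not CodeAnt AI’s actual engine), here is a Python snippet that uses the standard `ast` module to flag dead top-level functions, i.e. functions that are defined but never referenced anywhere:

```python
import ast

def find_dead_functions(source: str) -> list[str]:
    """Return names of functions that are defined but never referenced."""
    tree = ast.parse(source)
    defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
    # Any bare name usage (calls, references passed around) counts as "used".
    used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    return sorted(defined - used)

sample = """
def used():
    return 1

def unused():
    return 2

print(used())
"""
print(find_dead_functions(sample))  # ['unused']
```

A production detector would also have to track imports, methods, and dynamic references across files; this only shows the AST-walking idea behind such a rule.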

Journey to Y Combinator

Ryan: It’s amazing. That is a good run. Before we started, you told me a little bit of a story about YC, and about bucking the trend a little and blazing your own trail. Because our audience is so heavily founder-based, with a lot of founders and people looking to get into this space, tell them that story a little bit more. I think it’s a fascinating, very cool story about your persistence and, again, just how great the idea is and what you’re trying to build here.

Amartya: YC is one of the biggest platforms where the youngest founders can directly go and literally change the dynamics of their start-up. I’ll be very honest here: when we were back in India, we were having a very hard time raising a pre-seed round. We were doing anything to get that, and you see, the YC process is just 10 minutes of interviewing. If you do those 10 minutes well, you are into the YC batch, and that’s a huge advantage. How it worked for us was we got a first call, the YC call, after three years of applying; it was the first time I got the interview. So I was very excited. I met all the Indian founders who had been in previous YC batches and asked them how to actually go about those 10 minutes. Everybody gave me some advice: say this, be this, approach it like this, and we got a ton of advice. We got confused, and when we went into the interview, we just didn’t portray who we were, what we were building, and what problem we were solving.

We were just in the mindset that we had to be liked by these people, and that didn’t work for us, so we got a rejection mail from Y Combinator. The good thing about that rejection mail is that it talks about the things you didn’t do, and for us, it was mainly the product. They were questioning how many times a user was using the product. Can this product actually scale? Do engineers actually fix their bad code every day or not? So it was not on the business end; it was more on the product end, and this actually worked for us because we had all the metrics showing that it worked. We had 500 engineers using the product and fixing code every day. So we just took the metrics from our AWS and copy-pasted them. My YC group partner was kind enough, and he was like, OK, there are two things that could have happened. He could have seen the hustle, that these guys went ahead, gathered all the data, shot an entire video of the product, and sent it to him. So either it was the hustle, or it was the numbers. He was kind enough to say, OK, let’s do one more call for 10 minutes. So another call happened, and this was with Michael Seibel. We went to the call, Michael was there, and he just started interviewing us again, and within the next seven minutes, he knew everything about what we were building. That’s an insane thing about YC. You won’t believe it: YC interviews tons of companies, and they actually have insight into every company and what it’s building. That’s the awesome thing. So in those seven minutes, we convinced Michael that we were building an awesome thing and that we had to get into YC, and Michael was convinced. In the last three minutes, we were like, OK, we did it. So that was the journey.
I would suggest that if you’re right, and if, when questions are thrown at you, you can actually show data proving that you have done something, then they’ll obviously agree to it. YC is very kind in this.

Detecting and Fixing Bad Code with AI

Ryan: That’s amazing. I love that story. I wanted to make sure we included it because, again, it’s so fitting for our audience to hear and to take away something a little above and beyond what we’re here to talk about today. Let’s shift gears into that part. Let’s talk about CodeAnt AI and its ability to leverage AI to identify and detect bad code. Give us a little bit of a rundown on that. Are there specific patterns or indicators the system looks for, and how accurate is that process?

Amartya: Correct. When we talk about detecting bad code, there are multiple areas where you can go and find it: duplicate code, anti-patterns, non-documented functions, etc. What we have done is use AI detection and fixing engines, and then a RAG model where we have written our own rule-based engines, and we verify whether the fix produced by the AI is correct or not. Understand our business: we are not in the business of code generation. When you’re in the business of code generation, you can be wrong, let’s say, 50% of the time, and the end user won’t care because he just wants some boilerplate code to be there. We’re in the business of code cleaning. If we are wrong, we are actually breaking the production code for that particular company. So, how do we go about it? First, we use AI and rule-based engines to detect bad code. There are a lot of open-source tools you can use if you want to build your own: you can use SonarQube, which is an open-source detection tool, or you can use linters, but they are very limited. They won’t detect deeper things like where memory hogging is happening, where latency is actually coming from, where CPU throttling is happening, and the different parameters of that. Second, let’s say you have built a comprehensive detection engine. The next thing is fixing, and here’s how we go about it. We have used AI for fixing, plus we have handwritten more than 1,000 rule-based engines. These are basically AST parsers. Every language has its own abstract syntax tree. We play around with that, and we parse it to give you the exact place where you need to replace your bad code with good code. All my engineers have worked at high-frequency trading firms, and they have scaled code bases and optimized them down to even 20 milliseconds of latency. These guys are pros at this.
So we wrote all the AST rules, and we are writing more every day. We are training on that and building it into a RAG module so that every time a fix is generated by the AI, we can check it with a rule-based engine, and only then do we suggest it to the end user, so that he’s 100% sure that whatever comes out is correct.
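The gate Amartya describes, where a rule-based engine must approve every AI-generated fix before it reaches the user, can be illustrated with a deliberately crude hypothetical check (not CodeAnt AI’s real verifier): reject any proposed rewrite that fails to parse or that drops a function signature present in the original.

```python
import ast

def fix_is_safe(original: str, proposed: str) -> bool:
    """Accept a proposed rewrite only if it still parses and keeps every
    function signature from the original (a crude 'don't break logic' gate)."""
    try:
        before, after = ast.parse(original), ast.parse(proposed)
    except SyntaxError:
        return False  # an unparseable fix can never ship
    def signatures(tree):
        return {(f.name, len(f.args.args))
                for f in ast.walk(tree) if isinstance(f, ast.FunctionDef)}
    return signatures(before) <= signatures(after)

orig_src = "def add(a, b):\n    return a + b\n"
good_fix = 'def add(a, b):\n    """Add two numbers."""\n    return a + b\n'
bad_fix = "def add(a):\n    return a\n"
print(fix_is_safe(orig_src, good_fix))  # True  (docstring added, API intact)
print(fix_is_safe(orig_src, bad_fix))   # False (a parameter was dropped)
```

A real verifier would go much further, checking behavior, types, and tests, but the pattern is the same: the AI proposes, a deterministic rule engine disposes.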

AI-driven Code Improvement Process

Ryan: It’s phenomenal to be able to go to that level and really detect and fix things. One of the fascinating aspects, as you mentioned, is not just recognizing bad code with high accuracy but the ability to correct it. Explain a little more about that, the role AI plays in the correction process, and the impact it has. You mentioned at the top that you’re not breaking logic, so you don’t have to worry about stuff going in and not getting fixed properly. For teams implementing something like this, what are some of the efficiencies and results they’ll achieve?

Amartya: Let me give a customer case study here without naming the customer. Basically, we got a big customer in HR tech. They build software for HR platforms, and we connected to their company and deployed CodeAnt there. You won’t believe it: in the first three days after deployment, they fixed more than 30,000 bad code instances, 30,000, and the entire team documented more than eight years of code base in those three days. That’s the impact. The confidence they got from using the tool came from starting with small PRs, small PRs on particular files. How it works is we integrate directly into your GitHub or Bitbucket, we get a list of all the repositories that are there, and you can bulk-fix up to 200 files in a single go, which adds a lot of value for these kinds of companies. So they went ahead and created PRs for one or two files. The tech leadership and engineering team sat together, reviewed each and every PR, said, OK, this is good, it’s not breaking anything, and ran the entire regression test suite.

And it passed. And they were like, OK, good enough. Can we increase from one file to 10 files? 50 files? Ultimately, they went ahead and fixed 500 files in bulk in a single go. That’s what we are giving them. Now, to be honest, we are not perfect right now. We are evolving. What we do is, if we have a solid detection for something but we are not sure whether the fix we would generate is accurate, we don’t display the fix to you. You will never get it; you’ll just get the detection. Because the last thing I want is my enterprise clients coming back to me and telling me that the code fix we made today actually broke their entire test suite. That is something we don’t want, and that is why we are spending a huge amount of time not just on the AI stuff but on building an actual SAST layer, a ground-up AST parser that can help us.

Challenges and Solutions in Software Development

Ryan: I think that’s an incredible real-world example of how organizations can experience significant improvement after addressing and correcting bad code, and what that looks like from the other side. What are some of the impacts and challenges they face when dealing with it, from a development standpoint and an operational standpoint? When you’re able to go in and do this, what does the organization see on the other end? What benefits are they able to see and embrace?

Amartya: I’ll just talk about a developer journey. How it works is you join a team or take on a new project, and you are always told to contribute to the existing pieces of code that are already written. The first step for a developer is to go in, try to understand the entire code base with good context, and then start contributing to it. More often than not, every engineer is different; the way they have written the code base is different. Someone will write one line of documentation for a function, and someone will write ten lines. There’s no standardization. That’s one problem. We help by documenting the entire code base, with its processing logic, so developers who are contributing can understand what the code base is. That’s the first thing. The second thing is, as a developer:

Whenever I try to commit to any existing code base, I refactor that code base a bit. I want to make sure that the existing code is good enough, because if I’m putting some crazy-ass algorithm on top of it, I want it to perform well, and if the underlying code is not good enough, it will not give me the same performance. So I’ll invest some time in cleaning that code up. For companies, and this comes up when we talk to the biggest enterprises, the pain point is that they actually know what they want their developers to follow. They have a very good set of policies they want to enforce on developers, but the problem is these policies live in Google Docs. I met a client, India’s largest fintech, with a valuation of $7 billion; we are doing a paid pilot there. The problem is that this client has all its good practices spread across 33 hours of videos and Google Docs. No engineer has time to actually go and watch them, because they have to ship features faster. So they want everything to be enforced while developers are writing the code. We have an editor extension that prompts you whenever you’re writing code and do something wrong: let’s say you didn’t add a docstring, you didn’t add type hints, or your code is not scalable or optimized, it will prompt you. You don’t have to ask it questions. That’s the second thing we have done, developer engagement. Third, once a developer goes from understanding the code base to refactoring it, it is time for the developer to start contributing on top of it, collaborating, and pushing PRs. Also, when he’s building a product, he is told to build a minimum viable product. Even in the biggest MNCs, he is told to ship the smallest version of a feature that actually works. He’s not told to create an optimized feature; he’s just told to create the smallest feature.
In that journey, developers either don’t understand the thing they have to optimize, or they don’t have time to optimize it, so they push substandard features. That is where we also come in. Whenever you push any PR, we scan that PR and tell you: these are the pain points you haven’t addressed, and by doing this, you can fix them. We want to take the heavy lifting out of it. So this is how we understood the developer journey and wanted to be there in all three areas.
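The editor-extension behavior described above, flagging policy violations like a missing docstring or missing type annotations as code is written, can be approximated with a few lines of standard-library Python. This is a hypothetical sketch of the general technique, not CodeAnt AI’s extension:

```python
import ast

def policy_violations(source: str) -> list[str]:
    """Flag functions that break two common team policies:
    missing docstring and missing return-type annotation."""
    issues = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            if ast.get_docstring(node) is None:
                issues.append(f"{node.name}: missing docstring")
            if node.returns is None:
                issues.append(f"{node.name}: missing return type annotation")
    return issues

code = "def ship(feature):\n    return feature\n"
for issue in policy_violations(code):
    print(issue)
# ship: missing docstring
# ship: missing return type annotation
```

An editor integration would run checks like this on every keystroke or save and surface the results inline, so the policy is enforced at write time instead of living in a Google Doc.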

Ryan: I think that’s a really great understanding, and again, it helps our listeners see what some of these benefits are and what it will mean to them to tackle and address these challenges before things get out of hand, like you said, when all of a sudden you have 30,000-plus instances of bad code and have to figure out what that means. When it comes to future trends, both in AI and in the software development industry, looking ahead, how do you envision the role of AI in the context of development, specifically in terms of code quality, improvements, and error prevention?

Amartya: Going forward, getting from an idea to actually running production code will be commoditized. Anyone, whether a developer, product designer, or manager, will be able to take their ideas and create production-level code that actually scales. This is the journey I believe will happen in the next 3 to 5 years. You won’t need very specialized SDEs to do that; your systems will be able to help you here. Now, coming to the other side, how do we see it? We believe that going forward, code completion tools like OpenAI’s models, GitHub Copilot, Replit, etc., will help us in this journey. Second, we need tools, and a lot of tools will come out: tools for detecting bad code and fixing it, tools for doing automated QA without breaking the existing logic, and tools for understanding the dynamics whenever your code changes. Right now, all your testing is static. Tools that make testing dynamic, make your code-base scanning dynamic, and give you problems then and there will emerge and take center stage.

Why? If code generation from idea to production is commoditized, a lot of gatekeepers will come into the picture who will take on the role of telling you: is this actually good code? You can put it in production, but will it cost you $10,000 to run, or will it cost you $100,000, and how can we reduce the cost by making the code better? That is where AI will play a significant role, and we will need not just AI but pure engineering as well to make the AI better.

Advice for AI Business Founders

Ryan: I think that’s a great view of what’s to come down the path, and of how you’ve positioned yourself, and CodeAnt AI, for great long-term success. As we’re winding down the episode, for our AI business founders who are tuning in, what advice would you offer based on your experience navigating the challenges and opportunities within this industry, especially in the context of AI-driven solutions for code improvement?

Amartya: I would tell folks to try to understand how you fix any piece of bad code today. If we know the steps, like, I take five steps to fix this SQL injection issue, what are those five steps? I’ll simply write them down, and these are the exact five steps that AI will also take. Another thing is that there can be an intelligence step, like: I want to see what my current traffic pattern is, and depending on that, I’ll do something like this. This is the stuff AI might not know today, but it will know in the future. It’s your job, if you’re a software engineer, to figure these trends out, and I would love it if you could build something internally in your companies. Go ahead and try to figure out the patterns of how you can make current AI way more efficient for your company. Can I use this AI and its power to understand my current traffic patterns, current error patterns, current user patterns, and so on, and correlate that data with how the code is written? If I know from my traffic data that this piece of code is called 100,000 times and that piece of code is called once, can I use AI to automatically fix all those things? As an engineer, you can actually do that, and this is exactly what we are doing. But I would love it if you also picked this up in your companies, because we have a very solid motto. We are not building CodeAnt AI just because we wanted to build something; we built it because a ton of bad code is written, and developers like you and me really want to ship new, exciting things. We don’t want to fix the same things again and again. These are mundane tasks for us. Let’s automate them.
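Amartya’s advice to write down the exact steps of a fix can be made concrete with the injection example he mentions. The canonical SQL injection fix, shown here as an illustrative Python/SQLite sketch rather than anything CodeAnt-specific, is replacing string interpolation with bound parameters:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Bad: string interpolation lets attacker input alter the query itself.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Fixed: placeholder parameters are bound by the driver, never parsed as SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The classic payload returns every row from the unsafe version...
print(find_user_unsafe(conn, "x' OR '1'='1"))  # [(1,)]
# ...but matches nothing when bound as a literal parameter.
print(find_user_safe(conn, "x' OR '1'='1"))    # []
```

Because the before/after shapes are this mechanical, the rewrite is exactly the kind of deterministic step sequence that a rule-based or AI engine can learn to apply.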

Ryan: That’s perfect. I think that’s a great place for us to wrap up, and I love that last piece of advice. But Amartya, before we go, I want to put you in contact with our audience and give them the opportunity to get in contact with you as well, because I’m sure there are people listening who are thinking, yeah, this is something I need to be a part of; like you said, maybe something I need to bring in-house, or bring you in to help us do. I know CodeAnt AI is the website, but for getting in contact, what’s the best way for us to do that and to learn more?

Ryan: Perfect. I think that’s a great invitation for the audience. I always love doing that, because they listen to a key thought leader in this area and then go, OK, now what? How do I get more? How can I apply this to myself? So definitely take advantage of Amartya’s offer: get on LinkedIn, contact him, send emails, or do whatever you need to do. I think this is something that, as you said, over the next 3 to 5 years, with the acceleration of everything in AI, is going to become so mainstream, and you want to be on it today, making sure you have those pieces and those processes, like you mentioned, documented and in place. With that, I want to say, Amartya, thank you so much for joining us here today and sharing all of your insights, and a little bit about your journey. I love the opening story; I think it’s such a wonderful lesson for our listeners. So much great material to unpack. Thanks for joining us today.

Amartya: Thank you so much.

Ryan: Wonderful. And for our listeners, thank you for joining us on this enlightening journey through innovation. We hope you’ve been inspired by the incredible stories shared today. Remember, the future is driven by pioneers like our guest, Amartya Jha, and the limitless possibilities of AI. Stay curious, stay innovative, and keep exploring the boundless horizons of technology. But before we sign off, we have a small request for our dedicated listeners, as always: if you’ve enjoyed our podcast, take a moment to share it, leave a review, and subscribe on your favorite platform. Your feedback and support help us bring more amazing guests like Amartya Jha to share their knowledge and help all of us grow and learn more about AI. Thank you again, everybody. Thank you, Amartya, for joining us here today. This is Ryan Davies signing off. Take care.

About Our Host and Guest

Director of Marketing – Ekwa.Tech & Ekwa Marketing
Co-Founder & CEO of CodeAnt AI

“Coding is not just about fixing the same things again and again. Let’s automate it.”

– Amartya Jha –