Crowdbotics founder and CEO Anand Kulkarni began his company to accelerate software development using artificial intelligence, in a movement the company calls Code Ops.
The idea behind Code Ops is that users can speed up the process of creating software by using reusable parts, snapping together pre-validated components – a task at which machine intelligence excels.
The Crowdbotics machine intelligence has similarities with the generative artificial intelligence of large language models like ChatGPT, but instead of the model predicting, token by token, what comes next in a response, Crowdbotics predicts the next correct component in a software program, choosing from existing elements.
Army Technology interviewed Kulkarni to learn how this approach differs from alternative methods, and how the armed services are using it to meet defence requirements for information technology with built-in security features in a timely manner.
Kulkarni reveals: “Most software systems that we want to build in the real world look like software systems that we have built before . . . Once you build it and validate it for a given organisation, you shouldn’t really need to do it again.”
“But despite that, most of the time, when organisations build software – DoD is no exception – they end up reinventing the wheel every time they have a new base, a new contractor, a new team… So they end up slowing down the process of building software and doing it in a way that’s very inefficient.”
Crowdbotics lets companies and the government speed up that process by using pre-certified components: modules of code that have been validated and whitelisted in a security context, or for a particular purpose. It uses machine intelligence to predict how to fit those together for a given use case.
“Our AI tool in our software product lets you automate the process of writing your requirements, and then stapling together the parts that you need to build that software product,” he explains.
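To make the component-prediction analogy concrete, the toy sketch below shows what choosing the “next correct component” from a catalogue of pre-validated parts could look like. The catalogue, component names and keyword-overlap scoring are purely illustrative assumptions made for this article; they do not represent Crowdbotics’ actual model, catalogue or API.

```python
# Purely illustrative: a toy "next component" selector, loosely analogous to a
# language model picking the next token, but over pre-validated software parts.
# The catalogue contents and the scoring heuristic are hypothetical.

CATALOG = {
    "oauth2_login": {"auth", "login", "sso"},
    "postgres_store": {"database", "records", "persistence"},
    "report_dashboard": {"charts", "reporting", "metrics"},
    "file_upload": {"documents", "upload", "attachments"},
}

def next_component(requirement_keywords, already_chosen):
    """Pick the unused pre-validated component that best matches the requirements."""
    candidates = {name: tags for name, tags in CATALOG.items() if name not in already_chosen}
    if not candidates:
        return None
    # Score each candidate by how many requirement keywords its tags cover.
    scored = {name: len(tags & requirement_keywords) for name, tags in candidates.items()}
    best = max(scored, key=scored.get)
    return best if scored[best] > 0 else None

requirements = {"login", "sso", "reporting", "metrics"}
chosen = []
while (component := next_component(requirements, chosen)) is not None:
    chosen.append(component)  # "snap together" the next validated part
print(chosen)  # ['oauth2_login', 'report_dashboard']
```

The point of the toy example is the selection loop: rather than generating fresh code line by line, the system repeatedly picks the best-matching, already-certified part until the stated requirements are covered, leaving developers to fill in what remains.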
Andrew Salerno-Garthwaite (ASG): How easy is it to use from a ‘no-code’ perspective?
Anand Kulkarni (AK): It’s actually quite easy to plan software from a no-code writing perspective. Anyone, non-technical or otherwise, can show up and use natural language, or pass in a PDF, to describe what they are trying to create, and the system, which has been trained on an organisation’s requirements and on a large body of historical requirements from across the world, will write the requirements for you using AI. You can try it yourself; it’s actually quite easy to use.
Now, of course, the next step is building the actual software that comes out. That’s the part where things get interesting. Crowdbotics will not write the entire application for you like a no-code tool might. Typically we’re being used to build serious applications that need heavy lifting. So we will try to stand up, staple together, the warm start for the product to get you 30-to-70 percent of the way there, and then a developer team, either ours or one inside the customer’s organisation, will go on and write the rest. That extension model is typical of how you can accelerate a software development process using AI this way.
ASG: Does that bring major advances in lead time and cost?
AK: Yes. So typically, we end up seeing delivery using the Code Ops model take about 55% of the time of a conventional software development lifecycle, cutting it almost in half. And typically our customers are seeing this come out to about a third of the cost. Certainly in our own deployments inside the defence industry, we see software lifecycles that usually take three years being executed and built in a matter of months. So think six-to-nine-month cycles for software creation. And of course, that’s for production, end-to-end rollout; the actual initial versions of this can come out very, very quickly. So that changes the context of what you can do when you’re thinking about timelines to get production code out to the field, or into the hands of teams, in a matter of weeks or months, and not years.
ASG: How does this fit in with defence software procurement contracts?
AK: There are two different ways that we end up changing the procurement process within defence.
The first one, of course, is the idea that you have pre-approved, pre-certified components of software. This is a notion that a lot of innovators inside the DoD have been advocating for some time, because they see this kind of duplication of effort between bases.
A good example is Iron Bank, which is the DoD’s effort to stand up a secure container registry . . . and they have said, here is a whitelisted set of elements that we can use to rapidly streamline the process. Crowdbotics takes this notion further. We index a very large variety of reusable parts and systems, typically software, though we are thinking about adding non-software components in the future as well, for the purpose of accelerating how teams can actually find and put together validated components.
The other side this helps on is requirements writing, which is often a giant pain for procurement teams. So if you are a contracting office somewhere in the Air Force, let’s say, you are stuck working with one of the large primes in an antagonistic process of figuring out the exact, letter-by-letter requirements you are going to adhere to for the next software system you are bringing to market. And that is not informed by discussions on the outside, or by historical data about how software systems like this have been built, or by information about how such systems are built in the commercial world in an agile fashion.
[Crowdbotics technology] empowers the procurement team to do something very different. They can use AI to have a more fruitful process of constructing that software requirements set, as opposed to being beholden to the prime to build it. So that is a particular way for them to change how they write the requirements for a given procurement. And our view is that this empowers them to save not just time but also money, and to have a better and more nuanced discussion about the requirements themselves.

ASG: If the DoD were realistically trying to never reinvent the wheel again, would that mean they would always use Crowdbotics?
AK: We think this is the right way to build software, right? Code Ops as an approach is the appropriate way to plan and launch applications. But we think that the data needs to be available for all bases, all teams, all groups in the DoD to pick up and adapt. So we welcome what we call private modules: private teams putting their own data in here and figuring out how to surface it to DoD users as software elements or components that have been identified, discovered and validated.
Now, I think we’re really far away from talking about whether this is a monopoly solution in that sense. We really are trying to answer a different question, which is: how do we make sure people know that Code Ops is a better way to build and plan software, specifically within DoD contexts?
And that’s the one that I think is the more interesting question, because right now many smart minds in the DoD are asking how to build software in a way that’s more nimble, keeps us competitive and gives warfighters the edge. And often they are building software the way that they build aeroplanes, which is not the way that the commercial world operates.
ASG: How much business are you doing building software with the defence department?
AK: We’ve actually been working with the DoD for four years now, and our largest commercial partner today is the DoD. Right now we are actively deployed and working within several groups, including AETC, ATC and AFRL.
We initially came into the Air Force through one of their software wings, the group called Castle Rock. But these days we are most widely used by operating bases, folks that are actually trying to build software for a specific, narrow purpose for the warfighter and solving real problems within the DoD. That is a good and active endorsement, we think, of the success of this model. You can actually use this not just in the abstract sense, but in a practical way to build useful software that actually advances the mission. That’s when you can really see signs of success.
ASG: Can you give an example of this in software?
AK: Probably the best-known application that we’ve seen built inside the DoD is a product called data-driven readiness. This was conceptualised by an instructor pilot who was trying to figure out how to accelerate training cycles using information coming from the airframe. Modern airframes, as you know – the strike, the F-35 – are in many ways highly sensor-infused platforms. They generate lots and lots of information. But unfortunately, that information is not being systematically used as feedback in training cycles or in tactical analysis today. That was the problem identified: this information is great, but it’s sitting on tapes in some archive instead of being used to accelerate the process of learning.
After a mission, certainly in a training context, you go through with your instructor, look at the tape, see what happened, look at particular moments. But it’s being done using a very archaic methodology: looking at a particular scenario by hand and talking about what happened. And yes, that is sort of the conventional way that we teach each other things. But in an era of modern technology, there’s no reason you shouldn’t be able to do something much smarter.
So the Air Force set out to use artificial intelligence to ask, ‘Can we look at this data systematically?’
The data-driven readiness product allows an instructor pilot, during a debrief, to look automatically at what happened. The platform uses AI to identify every relevant activity that happened during an entire sortie: to say, ‘Here’s the specific manoeuvre or the specific procedure that occurred, based on our AI analysis of what happened in the mission,’ and then, ‘Here’s how it did,’ meaning, ‘Was this good or bad?’
So for example, let’s say you’re carrying out a procedure of some kind: you’re landing and you’re coming in at an inappropriate angle of attack, right? Okay, this is a dangerous situation; it puts stress on the plane, and in the worst case it could cause an actual accident. That’s the sort of thing that you can detect automatically. And you can say: this is a landing that is happening; here is your angle of attack; here’s the prescribed angle of attack; and you were coming in too hot for your angle of attack. That’s a very simple situation.
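To illustrate the shape of the check Kulkarni describes, here is a minimal hypothetical sketch of a rule-style debrief comparison. The data fields, the prescribed angle-of-attack band and the sample values are invented for illustration; they are not the actual parameters or logic of the data-driven readiness product.

```python
# Hypothetical sketch of a rule-style debrief check: compare the observed
# angle of attack on approach against a prescribed band and flag deviations.
# Field names, the band and the sample values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ApproachSample:
    time_s: float                # seconds from start of the approach
    angle_of_attack_deg: float   # observed angle of attack

PRESCRIBED_AOA_DEG = (6.0, 8.0)  # hypothetical on-speed band for the approach

def flag_approach(samples):
    """Return debrief notes wherever the angle of attack leaves the prescribed band."""
    low, high = PRESCRIBED_AOA_DEG
    findings = []
    for s in samples:
        if s.angle_of_attack_deg < low:
            findings.append(
                f"t={s.time_s:.1f}s: AoA {s.angle_of_attack_deg:.1f} deg below the band (coming in hot)"
            )
        elif s.angle_of_attack_deg > high:
            findings.append(
                f"t={s.time_s:.1f}s: AoA {s.angle_of_attack_deg:.1f} deg above the band (approach too slow)"
            )
    return findings

approach = [ApproachSample(0.0, 7.1), ApproachSample(1.0, 5.4), ApproachSample(2.0, 7.8)]
for note in flag_approach(approach):
    print(note)  # flags the t=1.0s sample, where the AoA drops below the band
```

The real product works from recorded flight data and AI-identified manoeuvres across a whole sortie; this sketch only illustrates the final step of comparing an observed value against a prescribed band and turning the deviation into a debrief point.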
If you look at multi-ship engagements, you can start asking very sophisticated questions. For example, if we’ve just completed the setup for a dogfight, we are about to engage, and we are about to enter a situation where the red team is putting the blue team in an engagement zone for an attack, can we identify that and say, ‘Hey, this was the right time to do X, and you did Y instead; you missed an opportunity to engage’? That is a highly relevant point to be able to identify. And of course, we can identify and provide that feedback point.
So these are important ways that the Air Force is using AI to answer questions today, using products built on Crowdbotics. And that is a very strong endorsement of this kind of approach. These are big Air Force problems: how do you speed up pilot production? How do you cut down the number of sorties using AI? How do you improve the tactical edge for the warfighter by doing large-scale analysis?
And I think those are good endorsements. But there are also little things that we see solved using this approach as well. We saw a great product built by a veteran that was simply about solving the problem of security travel reporting. If you need to do something as basic as reporting your travel for your base’s security requirements, it’s a pain, right? And he said, ‘Well, look, this is a real problem for the defence industry, it’s a real problem for actual operating bases. It’s a pain.’
But that kind of thing is not possible in a universe where you’re dealing with large-scale primes building software for you, because it’s such a small niche problem. He was able to build that using Crowdbotics, which is another example of a little problem that can be solved by this approach.