The Vertical Space

#90 Chris Gentile, Merlin: Inside the race for trusted tactical autonomy

Luka T Episode 90

In this episode we sit down with Chris Gentile, General Manager for Tactical Autonomy at Merlin, for an in-depth look at the evolving role of autonomy in defense. Chris outlines the value proposition of autonomy as a tool, not a standalone product, and discusses the principles of modular, hybrid, and hierarchical architecture design in building robust autonomous systems. We explore how recent milestones in trusted autonomy are reshaping military strategy and capability, and how the technology must evolve alongside certification requirements and operational realities.

Chris also shares his personal journey into the field and offers a nuanced perspective on the current use cases driving demand for autonomy. The conversation also touches on the cultural and organizational barriers to adoption, two schools of thought on autonomy, air superiority in the age of autonomy, best practices for developing autonomous systems, vision-language models, data challenges, and startup opportunities.

Chris:

Perception, decision, execution. It's the way fighter pilots teach maneuvering to younger folks. Did you see the right thing? Based on what you saw, did you make the right decision? And then based on your decision, did you physically execute it correctly? Trying to wrap all that up and just hit the "I believe" button on artificial intelligence, I think, is the wrong path. Building systems that are modular, hybrid, and hierarchical from the start, and built in such a way that you can address the ultimate questions that a certification authority is gonna ask you, I think that's the absolute best practice.

Jim:

Hey, welcome back to The Vertical Space and our conversation with Chris Gentile, General Manager, Tactical Autonomy at Merlin. Chris is an impressive person, as you'll see, and frankly, you feel a little bit better about our American military and industry when you hear a serious, bright person who served in our military and is now doing important work in the industry. With some of the pompous, superficial, and unfocused work in our world and in parts of our industry today, it's frankly good to hear non-hyped, serious competency from a person who is focusing on meaningful problems that have to be addressed for our commercial and military requirements. We start our conversation around autonomy, and I really like Chris' comments around focusing where value can be added and how we have to start with baby steps towards continuous improvement. He discusses how and why he started with autonomy (a cool story) and how his autonomy focus started with defense applications. His answer on why he focused on autonomy is, in and of itself, a terrific discussion. We then discuss the major milestones towards trusted autonomy. And listen to his explanation of how autonomy is a tool to open up a whole new trade space that gets back to quantity mattering. And as he says, in his mind this is the clearest path to continuing to deter global conflict and allowing the world to continue on this path to greater prosperity. Good discussion on air dominance, what we've experienced in recent decades, how it's dramatically different today, and how it's going to be in the future. He then discusses the future of autonomy in the military and the significant changes the military will face in the near and long term. He answers Luka's questions on what's needed to develop an autonomous system, from collecting data, training and developing algorithms, to testing and operationalizing them, and the processes and challenges along the way. Towards the end, Chris discusses the recently awarded Air Force Next Generation Air Dominance program, which you'll hear was awarded just before our recording. Cool to hear a fighter pilot's perspective on this new weapon system. By way of background, as mentioned, Chris is the General Manager, Tactical Autonomy at Merlin, and previously was an F-15 and F-22 fighter pilot with the United States Air Force. He's a graduate of the United States Air Force Academy with a bachelor's degree in astronautical engineering. A substantive, serious, meaningful discussion. Thanks very much, Chris. And to our guests, we hope you enjoy our discussion with Chris Gentile as you focus on important problems to solve while operating profitably in The Vertical Space.

Luka:

Chris, welcome to The Vertical Space.

Chris:

Thanks for having me. Good to be here.

Luka:

So we start by asking: is there anything that few in the industry agree with you on?

Chris:

You know, first off, that's a bold statement. I don't know what everybody else thinks privately, but the biggest thing that I tend to foot-stomp, having been in the space for a few years now, is that I'm not sure I agree that autonomy itself is a product. I think autonomy is a tool. It's part of the evolution of aerospace technology that goes back to the dawn of these systems. And the real product that comes out of that is: what new mission can I do? What cost efficiency can I gain? In the defense space, what increase in survivability or effect can I take? But autonomy itself is not a product. I don't know of any customer out there that's going, I'd like to buy one autonomy, please. The trick is in tailoring all the advances in artificial intelligence, autonomy, automation, et cetera, and turning those into meaningful, you know, user stories, whether that's for a defense community, a civil community, an emerging market, et cetera. But I see a lot of folks who say they're autonomy companies, and it's not immediately clear what their actual product, what their actual value, is. And I think that's something we're gonna see a lot more clarity on as we move past this first wave of excitement and into delivering real capabilities.

Luka:

Okay. Not to stretch the analogy, but if autonomy is a tool, there are tool companies, so why wouldn't there be an autonomy company that's building autonomy as a tool?

Chris:

I think that's reasonable, and I think that's the direction that you're starting to see a lot of the more mature players take: realizing, you know, which aspects of the stack you're gonna specialize in, just as there are different variations of tools. If you look at what it takes to create modern semiconductors, there's a global supply chain just to get to the machines that enable that, whether it's sitting in Taiwan or being built here under the CHIPS Act. And I think you're seeing, at least in my corner of the world, the software and autonomy space focused on aviation, different groups start to specialize in different areas, all the way from modernized flight controls, taking on the mantle of stuff that has been under avionics and autopilot companies for decades, all the way up to the very high end of human interaction, command and control, et cetera.

Luka:

What are the implications for companies who are thinking about autonomy as a product as opposed to a tool?

Chris:

I think that presupposes a lot of opinions about how any individual customer will choose to procure this. Much bigger names, whether it's your Palantirs, Andurils, et cetera, have been talking for a while about shifting the DOD specifically towards a more modern software acquisition approach: license-based sales, more commercial applications, things like that. And I think that's a good conversation trending in the right direction. But the fact remains that the vast majority of our defense budget is still spent on platform-centric capabilities. And so if you wanna sell these sorts of software tools, you need to have some sort of close coupling with the airplane or the box that it's gonna go on. And so I think that's the first hurdle a company needs to decide: what degree of exclusivity are you willing to accept? What degree of specific tailoring of your tools are you willing to accept? In what ways are you going to trade away a potentially larger market in the long run for immediate deployment, you know, buying down some of the risk towards immediate platforms right now? So I think that's the first big question. The second is that companies should just be realistic about which parts of the stack their talents are in. Modern aviation, civil, military, emerging mobility markets, things like that: these are some of the most complicated machines humans have ever built. The idea that an early-stage startup is gonna come out and be able to revolutionize every aspect of it is somewhat unrealistic. And so it's about identifying where the specific value can be added, and which network of partners, both traditionals and other emerging peers, need to come together to solve these problems. There are a lot of conversations happening about this. I'm not sure that anybody has all the answers. That's for certain.

Jim:

Chris, we had a guest on a couple of episodes ago who said one of the mistakes some of the US-based companies make is that they're focused more on capability rather than the customer experience. And you're saying something similar with your response of: stop focusing on autonomy. Think about the value autonomy's gonna bring to the space. How is the customer experience going to change through the introduction of autonomy? And how is it different than how most people are thinking today?

Chris:

You know, I've had a couple different versions of this conversation, and I almost always reference a really credible treatment of this topic. There's a book called Tiger Check. It's by a guy named Steven Fino; his call sign is Munch. He's a former F-15 pilot, got his doctorate at Johns Hopkins, and works at Raytheon now. It describes the introduction of automation into fighter aviation, going back to World War II up through, like, the F-15. One of my big takeaways from that book is that the fighter community, and the guys who fly fighter aircraft, certainly have a lot of opinions, but there's almost this incredible horizon singularity: every piece of technology and tactic that occurred before now is an immutable law of the universe and the God-given foundation that everything we do is built on, and everything in the future is kind of BS. I'm just gonna white-knuckle it, you know, be a knight of the sky, and go from there. And so where that leads, for a lot of these technologies we're talking about, there are two key things that frame my thinking on it. The first is that we're talking about evolutionary, not necessarily revolutionary, tools, but that evolutionary improvement in tools can lead to revolutionary changes in outcomes. So that's the first: there have to be baby steps. We can't ask an entire community, whether they're charged with moving millions of people a day safely around the world in civil applications, or charged with projecting power and defending the nation in military applications, to just hit pause on all that and come out a year later with a completely different construct, right? We have to take steps along the way. And the second major aspect is that to actually deploy and use these tools, to generate those user experiences and change capabilities, there's a technical problem, which a lot of people and a lot of money are focusing on, and there's equally a cultural problem. If you wanna look at aviation autonomy, one of the key fielded systems right now is called Automatic Ground Collision Avoidance. It's deployed on the F-16 and F-22, and I think it's coming out on the F-35 and a few other platforms. And you know, if you've seen Top Gun: Maverick, when they pass out due to the G-force and the airplane almost hits the ground? Great scene. This system basically stops that. In the F-16, if it detects that the aircraft is potentially gonna impact the ground and the recovery requires more than about a 5G pull, which is pretty aggressive, then unless the pilot actively disengages it, the system will take over, recover the aircraft, avoid the ground, return it to safe flight, and then return control to the pilot. That was roughly mature in the early nineties. There was a joint Air Force and NASA program that demonstrated it on F-16s. We didn't field it until 2014. In the time between the final demonstration of that NASA program and the fielding of it in the operational fleet, we lost 47 aircraft and 51 people. Not because the technology wasn't ready, but because the culture wasn't ready to accept this loss of control for operators. They prioritized weapon upgrades over the safety upgrade, et cetera.

And so, as you go through that, I think that focus on the user experience, on the specific benefit, being able to couch it in terms that meet the people who are entrusted with buying, maintaining, or operating these systems where they are, is the real differentiator. And companies that can solve that cultural fit along with their technical product are the ones that are gonna be successful.
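
To make that Auto-GCAS behavior concrete, here is a minimal sketch of the takeover decision Chris describes: predict whether the current descent can be arrested before ground impact, and take over only when recovery would demand more than roughly a 5G pull and the pilot has not disengaged. Every interface and number here is an illustrative assumption, not the fielded system.

```python
# Illustrative Auto-GCAS-style takeover check (toy physics, hypothetical API).
from dataclasses import dataclass

G = 9.81  # m/s^2

@dataclass
class AircraftState:
    altitude_agl_m: float    # height above ground level, meters
    descent_rate_mps: float  # positive means descending

def required_recovery_g(state: AircraftState) -> float:
    """Rough load factor needed to level off within the altitude remaining.

    Treats the pull-up as constant vertical deceleration:
    n = 1 + v_descent^2 / (2 * g * h).
    """
    if state.altitude_agl_m <= 0:
        return float("inf")
    return 1.0 + state.descent_rate_mps**2 / (2 * G * state.altitude_agl_m)

def should_take_over(state: AircraftState, pilot_disengaged: bool,
                     g_threshold: float = 5.0) -> bool:
    """Engage only for a descending aircraft whose recovery is aggressive,
    and only if the pilot has not actively disengaged the system."""
    if pilot_disengaged or state.descent_rate_mps <= 0:
        return False
    return required_recovery_g(state) >= g_threshold

# A steep, low descent trips the takeover (~5.9 g required here).
state = AircraftState(altitude_agl_m=150.0, descent_rate_mps=120.0)
print(should_take_over(state, pilot_disengaged=False))  # True
```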

Luka:

And also I think there's a difference in who the customer is, back to Jim's question of what the perception of value is. It really is different between a defense application and a commercial application. In the commercial world, there's a cynical view that says: well, really, what do you get out of replacing the pilot? You invest a lot of money to get something that is, ultimately, as good as a pilot. And so it's a longer conversation that we can get into in the course of this discussion. But maybe before that: how did you get started working on autonomous systems?

Chris:

Literally, I stumbled into it a little bit. So I had a background in aviation engineering. I flew fighters in the Air Force, did some test work, got exposed to a couple other pieces of advanced technology, and then I was looking for something to do when I retired. I bumped into a company that was getting ready to attempt to compete in the DARPA AlphaDogfight program. I'm sure we'll talk about that over this podcast, but that was fundamentally trying to take a very narrow application of autonomy, using a combat use case, one that gets people's attention, and just see where the state of the art, around 2019, 2020, was. I signed a 20-hour consulting contract with a five-person company to basically try to teach a couple of programmers a little bit about dogfighting. I got two or three hours into that contract, right after I retired, and I think I said something like, you know, I know a little bit of Python, let me show you what I'm talking about. And from there it was off to the races. That was an exceptional time in the industry, specifically from a defense application, because three major forces came together. The first was that the nation got serious about the need for autonomy, and I'm sure we could talk about this more, but we're in a world, in the defense world, where quantity matters. You see this in Ukraine; I mean, there are thousands of various-sized drones flying over that battlefield every minute of the day. The second was that certain tools finally matured enough to allow this. As we started to say, that was a few years into the modern AI revolution; Nvidia was starting to crank out the tools and the infrastructure we used to build these systems. So we had a need come out, and we had tools that were showing promise in meeting the need. And then the last thing, again specifically on defense, was a realization that maybe the kind of people who know how to bend metal and build airplanes are not necessarily the same people who know how to write this level of software. And so the government made steps to open up industry to more diverse participants. All three of those things came together, and I think really led to where we are now, and certainly my place in it.

Luka:

Interesting. So looking back over those several years, what do you see as some of the big milestones and inflection points in driving towards trusted autonomy?

Chris:

Yeah, a couple of major milestones. The first big one is that, even though it's still limited and there's more room to go, the Air Force has gone out and explicitly deployed some of these technologies on the VISTA F-16. You know, the Secretary of the Air Force himself went and flew on an airplane that, for at least part of that mission, was controlled by a couple of different AI agents. So that was the first one: just showing that it could actually, in a small way, with still a lot of work left to be done, start to make that jump from research, simulation, video games, if you will, out to the real world. You know, the first time, even in training with all the rules around it, that a human who was fighting their best fight got gunned by an autonomous agent was a pretty big deal. The second thing that you just can't ignore is what we've seen in Ukraine: this combination of computer science, artificial intelligence, expedient manufacturing, things like 3D printing, and the scale that we're able to bring means that $200 drones can take out multimillion-dollar legacy weapon systems. And while there are challenges in extending that to other theaters, you know, across the Pacific, et cetera, I think it's certainly gotten everybody's attention. And I'm just following a lot of the messages here; Alex Karp's book just came out a few months ago and hits these points better than I ever could. So I think those are the two big things: we showed that these could move beyond the lab, at least in limited fashion, and in Europe we see very much the need for them.

Luka:

And I'm really excited to ask you this follow-up question, because you've seen the state of the art on both sides, being deeply entrenched in the Air Force and working on some of these systems, but also in the industry. So what really, honestly, is the gap between what has been demonstrated at small scale, in the lab, or in limited technical demonstrations, versus what is required and what needs to be deployed in real-world operations? So, I guess, two questions: what is that gap, and what really is the state of the art?

Chris:

Oh man, there's a bunch of things to get out there. The first gap, you could sum it all up and say, is certification, right? There are a lot of reasons that autonomy is hard. There's trust, explainability, performance, transition to aircraft, all those things. But at the end of the day, achieving a civil certification or military airworthiness on these capabilities is the bar that needs to be crossed, and that has not fully and comprehensively been crossed yet. And so you've got companies out there that are addressing different parts of that, but that's gonna be the single biggest one. Now, I realize you said earlier that customers have different opinions, different requirements, different levels of acceptance of this, and I think you very much see that. In Ukraine, their standard is pretty much: will it go that way before it fails? That's good enough for them to put into practice. For the US Defense Department, there's a balance: threat to life, risk, ability to project power. If you're talking about a civil application, despite obviously a very tragic year so far, Western large airline transport is the safest and most effective complex system humans have ever built, and to meaningfully improve on that is gonna take a very high bar of proving your technology and proving your certifications before you're allowed to potentially impact it. But it all comes back to the same thing: can you demonstrate, using formal systems engineering and certification processes, that you've met that bar for that domain?

Luka:

So how is that achieved? How can an entity, an organization, whether this is a startup or an incumbent, go in front of the FAA and say that their autonomy stack is safe enough? What are the best approaches? What are the best practices?

Chris:

You know, I think there are a couple different people doing a couple different approaches. What I believe is that the core of this has to be designed in from the start. If you look at the current progress on some of the large language models, your GPT and things like that, I'm gonna simplify a little bit, but you're seeing kind of the same process repeated over and over again with each new model that gets released. They do a bunch of training; it's a very automatic process, right? Once the algorithmic challenges are solved, it's fundamentally: do you have enough data and do you have enough electricity? Something comes out, and then people start to find all the challenges with it. There are hallucinations; it's going off the rails one way or another. And then there are a lot of follow-on fixes that go into it. Do I add filters for topics that I don't want it to be able to say? Do I force it to fact-check certain things? Do I literally make GPT call itself before it outputs its answer, and use other agentic features to search the web and make sure that those legal citations actually exist before some lawyer shows up with a brief full of fake cases? I think there were some folks who thought that that sort of model would work in the autonomy space as well: you have some shiny new tool, a couple years ago it was reinforcement learning, maybe now it's VLMs or something, that's gonna get me 90, maybe 99% of the way. I'm gonna be able to demonstrate very impressive demos very quickly that are gonna work great in simulation, work great in constrained environments, and then I'll just knock down some of those outstanding areas with a little bit of post-processing. I think that method is fundamentally flawed, for a bunch of reasons. All of these features, all these requirements, have to be addressed upfront in the design. Are you architecting the system in such a way that you can explicitly get the trust and explainability from each step along the way? If you wanted to use the self-driving car industry as an example: in general, most of these systems work by using cameras, lidars, and other sensors to perceive the world, and they build a map of the world. Even if it's not exposed to the ultimate consumer, an engineer is able to look at that map, that picture of the world, and immediately diagnose: it didn't see that bicycle, or it assessed something incorrectly. And from there, what trajectory do you want to build for the platform? Does that trajectory make sense? Can I check it there? Then: am I able to effectively guide this vehicle or this airplane along that trajectory? Things like that. Perception, decision, execution, right? It's the way fighter pilots teach maneuvering to younger folks. Did you see the right thing? Based on what you saw, did you make the right decision? And then based on your decision, did you physically execute it correctly? Trying to wrap all that up and just hit the "I believe" button on artificial intelligence, I think, is the wrong path.

Building systems that are modular, hybrid, and hierarchical from the start, and built in such a way that you can address the ultimate questions that a certification authority is gonna ask you: I think that's the absolute best practice as you go forward.
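
As a concrete illustration of that perceive-decide-execute separation, here is a minimal sketch where each stage emits an artifact that an engineer, or eventually a certification authority, can inspect and gate independently. The types, the trivial planner, and the checks are all illustrative assumptions, not any company's actual stack.

```python
# Perceive -> decide -> execute, with an inspectable artifact at each seam.
from dataclasses import dataclass

@dataclass
class WorldModel:
    obstacles: list[tuple[float, float, float]]   # (x, y, altitude) tracks
    own_position: tuple[float, float, float]

@dataclass
class Trajectory:
    waypoints: list[tuple[float, float, float]]

def perceive(detections, own_position) -> WorldModel:
    # Did you see the right thing? The WorldModel can be logged and
    # replayed to diagnose a missed bicycle (or a missed bandit).
    return WorldModel(obstacles=list(detections), own_position=own_position)

def decide(world: WorldModel) -> Trajectory:
    # Did you make the right decision? Toy planner: fly straight ahead,
    # 100 m above the highest known obstacle.
    x, y, alt = world.own_position
    ceiling = max((o[2] for o in world.obstacles), default=0.0) + 100.0
    return Trajectory(waypoints=[(x, y, alt), (x + 1000.0, y, max(alt, ceiling))])

def check_trajectory(traj: Trajectory, min_alt: float = 500.0) -> bool:
    # Deterministic gate between decision and execution.
    return all(wp[2] >= min_alt for wp in traj.waypoints)

def execute(traj: Trajectory) -> None:
    # Did you physically execute it correctly? Tracking error against
    # traj is a third, separately testable question.
    print("flying", traj.waypoints)

world = perceive([(500.0, 0.0, 900.0)], own_position=(0.0, 0.0, 1000.0))
plan = decide(world)
if check_trajectory(plan):
    execute(plan)
```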

Peter:

I think I'm grappling with a couple of questions in this topic area that I'm trying to sort out. One of them is: what use cases demand this highest level of autonomous behavior, this very dynamic, reactive type of behavior? And then, in what domain areas, or flying on what type of platform, are the iterations gonna happen the fastest in development, such that on that platform the technology leads the rest of the industry? Secondary to those questions is this FAA question of, if you do bring it into a certification regime, how would you do it and for what purpose, the why. But the first two questions are the ones that are hanging in the back of my head as we survey this here.

Chris:

Yeah, absolutely, man. Those are great questions. And I think it's especially interesting 'cause I guess I've been thinking about it a little bit differently, which is that, in the long run, the highest level of autonomous behavior has applications across all these things, right? In general, whatever the field is, civil, military, airplanes, cars, whatever, we're trying to achieve a superhuman level of performance and open up some new suite of capabilities and missions. If that's urban mobility, we want to give people hours of their lives back at higher safety, lower cost, and improved efficiency compared to what they get now. If it's the defense application, we wanna be able to project power, hold adversaries at risk, and defend forces at a higher level than we do now. But I go back to what I said earlier about that evolutionary step. The immediate applications are, if I'm looking at a small drone platform like Ukraine: am I able to maintain awareness, or hold targets at risk, at a greater ratio than the number of humans I have? So instead of one human looking through one camera, or one human flying one FPV drone and holding one target at risk, can I start to apply autonomy to increase that ratio? I think it's the same thing on some larger platforms, like the United States' Collaborative Combat Aircraft effort, CCA. Hey, there is value in just providing an additional sensor or an additional weapon that doesn't have to be physically attached to an F-35. If the first round of these aircraft did nothing more than fly in formation and augment the sensor and weapons capabilities of a formation of F-35s, that's not an especially complex AI task; there are all these other challenges in autonomy of just making that airplane work reliably and effectively at a manageable workload. Now, in all these cases there's a long run, right? In Ukraine, and you're starting to see some of this development occur: are dozens to hundreds to eventually thousands of these platforms able to distribute sensing and perception amongst themselves, make targeting decisions, and prosecute complex engagements? On the CCA side, are these gonna generate what we've seen in science fiction books like Ghost Fleet, where they're able to execute superhuman performance and continue to reduce that workload? As far as which platform is gonna lead, I think they're all going to push different aspects of that envelope in different directions. I think smaller, like Group 2 and Group 3, platforms are gonna push the boundaries in collaborative perception, sensor fusion, and heterogeneous teaming: taking non-similar platforms and putting 'em together so that you can get more than the sum of the parts. I think that's where you're gonna see those capabilities really excel. On larger efforts like Collaborative Combat Aircraft, I think that's where you're gonna see new frontiers of decision making, really able to infer aspects of the battle space and take advantage of them at a slightly more individual level, maybe using things that weren't obvious to humans, and start generating superhuman performance there. And on the civil and other side, I think you'll start to push the bounds of network effects and efficiency, right?

How much can I optimize the fuel burn of a large aircraft, or the fleet deployment of eVTOL urban air mobility systems, in order to effectively serve their population while minimizing noise pollution, carbon footprint, et cetera? And so I'm not sure there's a one-size-fits-all answer. I think you're gonna see different parts of the industry solve different parts of the problem. And ideally, if we go back to that modular, hybrid, and hierarchical approach, then we'll be able to transfer some of these behaviors and technologies across these domains and realize additional benefits there.

Peter:

Do you think it's an important distinction, as you describe these different uses of autonomy, to distinguish between autonomy that is powered by deterministic code versus autonomy that is powered by some flavor of AI that is so dynamic that it is not deterministic? So at the first extreme it could be: yes, we can put large numbers of drones in the air with small numbers of humans operating them, create so-called swarms, and direct them at lots of targets. But we can do that ostensibly just by flying lots and lots of waypoints. Those aircraft themselves are not operating at the type of autonomy level that you would see in something like a dogfight. At the other end of the extreme, we see drones that are being intercepted by other drones, and it would be wonderful if they could make evasive maneuvers and have much more dynamic behavior. That, to me, seems like a different software architecture in order to enable that level. And as you talk about this, it feels like you're bridging across a lot of different behavior types. How do you think about that distinction? Is it important? Is it something that we need to surface here?

Chris:

I think it's absolutely important. My personal view of the industry is that, at least in defense aircraft autonomy, you've really got those two schools of thought, right? You've got companies or researchers who are growing out of the surveillance missions, think Predators and Global Hawks and things like that, who very much think of autonomy as the ability to automatically build a plan and then adjust that plan, that flight path, when something in the environment changes. But there's kind of always a plan. If any of our listeners are civil pilots, especially commercial pilots, you know there's always a full flight plan in your FMS, in your autopilot system, and the computer is just jumping in and changing aspects of it. And then you've got the kind of companies who cut their teeth on DARPA AlphaDogfight and Air Combat Evolution, which were heavily using reinforcement learning and things like that, and they think about the problem space very differently. I can say that for the team I was working with, there was no path that existed more than about a second in the future, right? It was constantly reacting to the information it was given and going forward from there. That said, I do wanna foot-stomp something, and I think the DARPA team who led this program would say the same. We all think that dogfighting is the ultimate application of this; it's the most dynamic, it needs the most AI. But one-versus-one dogfighting against a similar opponent, especially when you have good data on what they're doing, is absolutely a solvable math problem that doesn't necessarily need AI. We chose to use AI because we were interested in exploring some of the boundaries of it, especially in a case where we could go back and verify, either ahead of time or after the fact, what the optimum solution was and how close we were able to get to it with the hardware and algorithms we had available. So I think the winners in this space, and I don't just mean a company that's gonna succeed, right, the US government is sponsoring the creation of reference architectures here, are the ones ensuring that you can bring together these different types of technology. When the problem is solvable deterministically, or has a high safety impact, you're able to use those tools. When the problem is not tractable, or the performance of those traditional techniques is not enough, you have an architecture that allows you to introduce some of these advanced capabilities and take advantage of the hundreds of billions of dollars of research that's going into AI right now, but do it in a way that's bounded, because of the domain, because of the risks of failure. It's not like laughing at what GPT hallucinated; it's an airplane crashing or the wrong target getting hit. So can we build the right safety bounds, tools like runtime assurance, which is a defined standard, those words mean things to systems engineers, so that as we introduce that complex functionality on one end, we're building the right system architecture under it that can say: how will I know if it's wrong? What do I do if it's wrong? And can I achieve at least a fail-safe backup using techniques that I can traditionally prove and certify?
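
The runtime-assurance pattern Chris mentions is worth sketching, since those words do mean something specific to systems engineers: a deterministic monitor brackets a complex (possibly learned) controller and reverts to a simple, certifiable backup the moment a proposed command leaves a provable envelope. The envelope limits and command format below are illustrative assumptions.

```python
# Simplex-style runtime assurance: monitor a complex policy, fall back
# to a provable recovery controller when its output leaves the envelope.

def learned_policy(state: dict) -> dict:
    """Stand-in for the complex controller (e.g. an RL agent's output)."""
    return {"bank_deg": 75.0, "g_cmd": 8.5}

def backup_controller(state: dict) -> dict:
    """Simple, certifiable fallback: wings level, gentle climb."""
    return {"bank_deg": 0.0, "g_cmd": 1.2}

def within_envelope(state: dict, cmd: dict,
                    max_bank: float = 60.0, max_g: float = 7.0,
                    min_alt_m: float = 500.0) -> bool:
    """How will I know if it's wrong? Deterministic, checkable bounds."""
    return (abs(cmd["bank_deg"]) <= max_bank
            and cmd["g_cmd"] <= max_g
            and state["altitude_m"] >= min_alt_m)

def assured_step(state: dict) -> tuple[dict, str]:
    """What do I do if it's wrong? Revert to the fail-safe backup."""
    cmd = learned_policy(state)
    if within_envelope(state, cmd):
        return cmd, "learned"
    return backup_controller(state), "backup"

# 8.5 g exceeds the 7 g bound, so the monitor selects the backup.
print(assured_step({"altitude_m": 2000.0}))
```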

Jim:

Chris, when you came outta the Air Force, you said that's when you got excited about autonomy, and you said, this is gonna be my area of focus. What was the problem that you thought could be fixed by autonomy that gave you the most energy as you came outta the Air Force? What made you say, I've got to fix this, this is the great opportunity?

Chris:

Oh, I'm so glad you asked. This is a topic I'm truly passionate about. When I was in the Air Force, I was lucky enough to fly some really exquisite airplanes. I got to fly the F-22, things like that, which represents the pinnacle of the United States' defense technology posture from about 1970 to today, really, which is that we are gonna solve problems by pouring money on them and building relatively limited numbers of really exquisite platforms. And, you know, I believe strongly in the post-World War II Western world order. I wanna see that exist. I think it's led to the greatest increase in the quality of the human condition, globally, of anything in all of history. So with that as the backdrop: we have adversaries who think differently. In the last 30 years, really ever since the 1990, 1991 Gulf War One, up through the war on terror, we've kind of given a masterclass to our adversaries on how America prosecutes conflict, what technologies we use, what assumptions we made. And they had that 30 years to sit back and say, okay, how would I counter that? That's really what it comes down to: to hold a target at risk, a weapon system has to do three things. It has to survive to get to the target; it has to be able to find the target, sensing, precision, whatever you want to call it; and then it has to have an effect on it, whether that's a large enough explosion or a non-kinetic effect, whatever. America has historically done that by building stealthier or faster systems, both of which have gotten very, very expensive at each next increment of stealth or speed. And then, because I have this really expensive platform, I put really expensive sensors on it, because I can't afford that many of 'em, and so each one of 'em has to be able to find and understand the battle space on its own. And now I've spent a fortune on an airframe and a sensor, so it better carry enough weapons or enough explosive to do the job when it gets there. And that then makes the platform bigger, which makes it harder to make it stealthy and fast, which makes the platform more expensive, so I'm willing to spend more on it. And this has spiraled up to the point that we're about to field hypersonic missiles that cost $50 million a shot, and, as of eight hours ago, we're about to go buy a multi-hundred-million-dollar fighter. That became unsustainable. And now look at it from the adversary's point of view. China has a lot of smart engineers; they can look at some of these platforms, make assumptions about how stealthy each is and where the vulnerabilities are. We're a more open nation, so they can look at our budget and see estimates of how many we're gonna build. We're relatively public about what platforms carry these things, so they can make assumptions about how many of 'em they're gonna have to deal with at once. And again, they've had 30 years, kind of uncontested, to build solutions against that. Autonomy, to me, is a tool to open up a whole new trade space that gets back to quantity mattering. And so in each of those cases, if I have the ability to effectively coordinate a bunch of things, I can make improvements in survivability, precision, and effect without necessarily needing a human in each one of them.
And I think that this is, in my mind, the clearest path to continuing to deter global conflict and allowing the world to continue on this path to greater prosperity. So yeah, that's a topic I'm passionate about: how can I do more, control more things, with the same number of people?

Luka:

Chris, just to go a layer deeper on this: what do you think was the turning point in this thinking about mass? Because I don't think anybody ever said mass doesn't matter. So how were the fleets sized for these sophisticated platforms? Was the surprise in underestimating the lethality and density of the integrated air defense systems of expected opponents? Was it the realization of the vast amount of territory that needs to be covered, and that therefore, with these exquisite systems, you just cannot have that luxury of saturating the battlefield at that combination of size and lethality? Or was it something else?

Chris:

I think it's a little bit of all of the above, and I would challenge the premise that we ever didn't think mass mattered. I mean, obviously it does. But it's also just true that with the transition to precision weapons and then more survivable delivery platforms, we very aggressively scaled down. We showed that it takes a lot fewer things to achieve a given effect on the adversary. And I think we just got a little too enamored with that concept. We went from WWII, Korea, Vietnam, the concept of sorties per target, to, very clearly from 1991 on, the number of targets per sortie. And it's very seductive: when it works, it works very well. But like every other tit-for-tat across thousands of years of military history, it was gonna swing back the other way, and I think we're getting there. You know, this is not new. Folks have been thinking about this for a long time. Again, there have been books written about it. There were studies, war games, that showed all this. Part of it was, like I said earlier, that the technology had to reach a certain point. If you'd come around 15, 20 years ago and said, man, I see a world where we're gonna have literally $10 cameras with grenades strapped to them flying around Ukraine, finding Russian airplanes and vehicles on their own; well, there were people writing about that in science fiction, but the technology just didn't exist. The state of the art for computer vision just wasn't there. So some of those things had to change. And then, you can tell where I'm going with this, the big thing has been Ukraine. I don't know this, and I don't want to put words in their mouth, but I think if you had told a modern industrialized nation's air defense commander who had a bunch of missile systems under their charge, or a tank battalion commander, that they were gonna be excluded from the battlefield completely by a bunch of toys, you know, $300 plastic drones, they wouldn't have believed you. And now they can't ignore it.

Luka:

In the interest of being the devil's advocate here, and to take this conversation to maybe an absurd extreme: let's say we had a UFO type of aircraft or system that could teleport itself anywhere, at any point in time, and project power in a way that is unimaginable. If you could bet the entire budget on that exquisite capability, why would you not want to do that?

Chris:

Yeah, I'm not gonna do a good job of engaging with this devil's advocate. You know, I hope that's what NGAD is. I'm jealous of the guys who are gonna get to fly it, and if it works, I'll be forever grateful. But I just said there's always a cycle of tit-for-tat, whether it's an offensive technology or a defensive technology; it is fragile. When I started flying F-22s in 2005, you can go back and look at the articles that were released around that time. You know, the first Red Flag: a hundred-something to zero. It wasn't my sortie, but we were joking about it in the bar at Nellis one day that there was a mission where eventually Raptor One just waits for a gap in the radio and shouts: every F-16 in the Nellis airspace is dead, just trust me. That was literally the biggest challenge those first couple of years: finding enough time on the radio to call all your simulated kills. That's changed, right? We've reached parity, both in our own training and with the adversaries, and it probably happened a little bit faster than we would've hoped. So yeah, if there is an invisible UFO that can just do whatever, then we should certainly keep researching those technologies and certainly keep going there. But for the money, where I'm sitting right now, this suite of tools, extending the capability of any one human by not forcing all these things to still be attached to their platform, seems like a really promising and cost-effective way to continue to address this. And the other area, sorry, not to drag out my answer, but when you're talking about the defense application specifically: the perception and deterrent effect is almost as important. So think of things not just in terms of how stealthy they are, how far they can fly, how many weapons they can carry, but what impact they have on an adversary's decision calculus. If I go build the UFO and they say, okay, this is invisible, it can go into hyperspace and it can hold any target at risk, that's certainly gonna get their attention. The B-2 did that; the F-22 and F-35 did that; and then they got to work countering it. Distributing that same strategic effect over thousands or hundreds of thousands of individual things just presents more uncertainty to the adversary. It's more geographic areas they have to cover, more types of platforms they may have to cover, things like that. And so in general, if I had a choice between two solutions for the same amount of money that I think could solve the immediate problem, but one of them did so in a way that was more uncertain for the adversary, I think I would pick that one.

Peter:

I think that latter approach also feels like a lower-risk bet: to go with that mass and the capability that's embedded in it, versus making a bet on the one all-powerful system that you can build very, very few of. It's not only uncertainty; it's a lower-risk path. And if you extrapolate, and you become a master of building economical, lethal mass, then the paradigm shifts even further than what we're already looking at in Ukraine. I mean, you could build autonomous systems that you could infect another country with during peacetime, and you almost have a gun to their head. Almost like drone locusts that would be all over. There's a lot more extrapolation to this concept than what we have today.

Jim:

Define air superiority over the last 10 years and over the next 10 years. What's the role of autonomy? And if you were king for a day, what would you do to change how we perceive air superiority for the next 10 years, particularly as it relates to autonomy?

Chris:

That's a good question. Again, some background: I've talked about this 20-, 30-year period that was kind of ushered in, or at least the world realized it started, in 1991. I think we will look back on these 20, 25 years in history as an absolutely unprecedented, and potentially unrepeated, phase of human military history, going all the way back to sticks and rocks, right? After the fall of the Soviet Union, with all of our second-offset technologies, stealth, precision weapons, et cetera, there were a couple of decades where there was literally no point on earth that America could not conventionally affect if we wanted to. I don't know that that type of unilateral dominance will ever be achieved again, because of the democratization of the kind of tools we're talking about today. Unfortunately, an entire generation of senior military leaders and decision makers have come to believe that that level of dominance is our birthright, and what you're seeing now is the world, or at least America, waking up to the fact that it is not guaranteed anymore. I grew up with, I mean, a standard F-22 mission objective that we'd draw on the board and train to: zero blue losses, a hundred percent sanitization of the enemy, a hundred percent weapons on target, on time. Just absolutely zero quarter given on anything. And that's just not realistic anymore. And this is not unique to me. In fact, The Merge podcast, or article, just two weeks ago was talking about how the Air Force is explicitly discussing phased, or pulsed, air superiority: the ability to gain enough control over the battle space to achieve some other effect, you know, prevent an amphibious landing, deploy some forces, and then come back, because we just no longer have the dominance, the logistics train, the number of platforms, et cetera, to maintain that level of dominance across the board. And that's what you're seeing now. There are a lot of compounding factors to this, but I think if you'd told US Air Force mission planners 10, 15 years ago that a roughly peer conflict would occur between Ukraine and Russia, and that three years in there still wouldn't be clear air superiority established, that just would've been an absolutely foreign concept to them. But it's obviously the reality.

Jim:

And the air superiority that is still in question was contested by these plastic things, as you say, as opposed to what they would've imagined 15 years ago. So, the next 10 years, Chris: again, you're king for a day. What would you do differently from the path we're on right now, and how should autonomy play a different role than what you think the current planning is?

Chris:

You know, I think, and I hope, that we'll develop and deploy these as complementary technologies. The challenge in going all in on low-cost autonomous mass is that, even though we may not understand them fully right now, it's going to have weaknesses and challenges as well. There's still gonna be room for some of the traditional systems that we've built. Obviously, again, with the news today that the NGAD program is moving forward, I think they're very complementary. I think there's both an offensive and a defensive side to that. Offensively, it goes without saying that if you're gonna build lower-cost systems and use more of them, you're gonna take trades in things like range, endurance, payload capacity, et cetera. So you're shifting some of that risk to logistics concerns. Barely a year into Ukraine, you started seeing excellent articles, both from folks who were fighting there and from the rest of us watching the lessons learned, saying: okay, it's time to shoot the archer, not the arrow, because there are too many arrows. So I think that's an area you're gonna see more focus on. Well, you are already seeing focus, through efforts like Replicator, on the supply chain and manufacturing logistics concerns, the command and control channels, data links, assured connectivity, cyber attack surfaces, resilience to adversarial AI, things like that. I think those are gonna be the areas we have to defend against, and we'll need to think through those things both offensively and defensively. Just like everything else: how can we protect and provide these capabilities for our forces, and deny or degrade them on the adversary side? You know, we have the benefit of hindsight looking at this. I believe it was in 2012, there was a Taliban attack in Afghanistan that got onto base, threw grenades, and destroyed a significant chunk of the US Marine Corps's Harrier fleet in one day. They weren't shot down. All their training, all their technology was of no use. This was individual people with hand grenades. And that, squared, is the world we're getting ready to face going forward.

Luka:

Small drones as offensive counter-air assets.

Chris:

A lot more Flankers have been killed by literally plywood-and-foam drones than have been killed by any surface-to-air or air-to-air missile.

Luka:

That's crazy. That's crazy. Okay, so: building autonomy. Let's rewind back to a team that wants to develop an autonomous system. Break down that process into whatever logical chunks you want, whether that's collect data, train, develop algorithms, test them, operationalize them, or, you know, refactor this however you want, but talk about the process and where the challenges lie for each of those.

Chris:

Gotcha. You know, again, this is how I, I choose to do it. It's how, you know, at Merlin that we approach this problem. And again, it's, it's to go back, to that, that earlier comment that autonomy's hard for a bunch of reasons. Trust, explainability, shared command and control, modularity and, and certification. And so for us, the most important thing to get right, there's a variety of tools you can use for each level of the stack, but if you don't have the architecture in place, and it's an architecture that's capable of, you know, if it's a civil application, can I deliver the, the artifacts that the FAA is gonna require to allow me to pass the certification program. If it's a military platform, am I able to interface and interact with all these other exquisite sensors and, and things like that that, you know, I'm probably not gonna necessarily own, you know, so in that case, can I, can I plug into the government reference architecture, things like that. So getting that architecture correct, understanding those external touch points. That's the first thing, right? Like a definition of autonomy or an autonomous system is something that, you know, makes a decision, takes an action based on a perception of the world. So again, perceive, decide, act, ooda loop, you know, sense, decide, act all these acronyms, all mean the, the same sort of thing. And then within that, you know, one of the things I haven't talked about a lot is why I think, modularity is so important here and. One, one of the things, all these military systems we've been talking about, it's, I am sure you've had folks on your, podcast earlier who talked about just lamented how long it takes to build a new airplane. Right? You know, especially a, a military platform, software is obviously moving much quicker. And so one of the advantages of modularity is that we can again, decouple those timelines, start to realize software benefits on a software time scale that's, that's decoupled from the underlying hardware. So that drives the modularity. Within that, the trick to make a modular system work is you decide where to put those boundaries and what level of abstraction you're willing to accept. And every one of those decisions is a systems engineering trade. You know, the more I'm able to tightly couple a system together, vertical integration, if you're thinking like manufacturing processes, you know, bespoke tight integration, as it gets onto an individual platform. There's at least the potential of higher performance, if you do that. But what you're trading away is, is some of that ability to rapidly evolve your system, rapidly iterate, expands the type of platforms that you can work with. So getting folks, the right interdisciplinary team together, you absolutely have to have folks who understand the operational problem, the, the context and the, those cultural impacts who are, who are able to effectively communicate that user story, user experience requirements, bidirectionally to the technical team. I spent a lot of my time, you know, talking about and trying to teach aspects of, of combat aviation to engineers and, at the same time, trying to teach a lot of these, engineering caveats to, to my peers who are still in the operational community. I think though, like increased understanding at that level is, is only gonna make everything better. So you've got that team and then. You know, these are, these are interdisciplinary problems. 
In order to effectively execute here, I need folks who, who truly understand, guidance, navigation, control systems, you know, fly by wire, core robotics. I need people who understand, you know, various techniques, old school, deterministic methods of, of optimization, path planning, et cetera. I need folks who are experts in, the various, you know, modern AI learning based, approaches, whether that's, machine learning for perception, things like computer vision, and stuff like that. Whether that's tools like reinforcement learning for behavior definition, some of the modern research into the, transformer based architectures, VLMs et cetera, that can kind of bridge some of these gaps, be able to understand where the, overall community, is going, what tools are gonna be available, where the market's going, and be able to bring that together. And so once you've got all those ingredients, you know. Honestly, when we start a new effort and a new use case, my personal technique is just to have a couple of days of raw brainstorming, allow the team to, to think through like what the ultimate architecture could be. If I had to solve every use case that, that, you know, I either know about or could imagine, and go from there. But then absolutely time box that, right, like a team that only ever cares about building the perfect architecture and solving every potential use case, will never move past that analysis, step. I want'em to be aware of those trade-offs, understands what the potential regrets could be in the future. And then we snap a chalk line, we, we come back the next morning and we start solving this particular problem, but with those things in mind. The two major areas that I focus on are, you know, there's a certain minimum level of performance that user requirements that's non-negotiable. And then at every step along the way, I use the absolute simplest tool for the job. So if it's flying airplane, you know, inner loop stability and control. We know how to do that. We've got a hundred years of flight control system development behind us. I don't need to waste electricity and time like trying to train a neural network and accept all the regrets that could come with that for problems I already know how to solve. You know, pick the right tool for the job and then go from there. And in general, given the domain, simpler is gonna be the tiebreaker. So, you know, you develop the architecture, you decide what your abstractions are gonna be. There's a variety of techniques in there. But, you know, it can be as simple as like, I have some decision or planning layer and then some control layer that actually flies the airplane or steers the sensor or something. What's the right, level of abstraction, between those, if I'm capable of just passing a flight plan, waypoints, you know, and maybe buying an off the shelf autopilot to execute that, there could be some real benefits there, right? Like I can, I can either buy a certified autopilot or I can certify it. I can very deterministically check that flight plan against, hey, am I flying over a country's border, I'm not allowed to fly over, am I flying into the ground, all those things. But at the same time, you can imagine, and, you know, with your background, how would you describe an air, air engagement, defensive maneuver, et cetera, in terms of way points, you know, that's obviously missing out a whole lot of context of high performance maneuvering and things like that. So, you know, deciding what that abstraction is gonna be. 
Am I able to pass a flight plan, or do I need to, no kidding, be talking in terms of g and roll rate, stick and throttle? Find the right level of abstraction, build the right guards around it, and then go on. The last thing I'm a real big fan of, because the architecture is so important, is achieving some sort of 80% solution even if the performance isn't where it needs to be yet: in engineering terms, getting water through the pipes and starting to validate the assumptions you made about architecture and abstraction. You wanna find all the problems in your thinking as early as possible in the process. And this isn't cosmic, right? Fail fast, fail often is a mantra in this space, so it's not like it's new, but there's always an art to getting it done right.
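To make that abstraction trade concrete, here is a minimal sketch in Python. Every name in it (Waypoint, RectGeofence, MIN_ALT_M, validate_plan) is a hypothetical illustration, not Merlin's actual interface: the planning layer only ever hands a flight plan across the boundary, and a deterministic guard checks it against a border and a ground-collision floor before anything like an off-the-shelf autopilot would fly it.

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    lat: float    # degrees
    lon: float    # degrees
    alt_m: float  # altitude in meters

@dataclass
class RectGeofence:
    """Toy stand-in for a border/airspace constraint: a lat/lon box."""
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)

MIN_ALT_M = 150.0  # illustrative hard floor against flying into the ground

def validate_plan(plan: list[Waypoint], fence: RectGeofence) -> bool:
    """Deterministic checks applied to any plan, however it was generated."""
    return all(
        wp.alt_m >= MIN_ALT_M and fence.contains(wp.lat, wp.lon)
        for wp in plan
    )

if __name__ == "__main__":
    fence = RectGeofence(34.0, 35.0, -87.0, -86.0)
    plan = [Waypoint(34.5, -86.5, 3000.0), Waypoint(34.6, -86.4, 2500.0)]
    # Only a validated plan crosses the boundary to the (possibly certified,
    # off-the-shelf) autopilot that would actually fly it.
    print("plan accepted:", validate_plan(plan, fence))
```

The sketch shows both sides of the trade described above: everything beyond validate_plan can be bought or certified, but nothing in this waypoint vocabulary can express a high-g defensive maneuver; a stick-and-throttle abstraction would trade that checkability away for expressiveness.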

Luka:

You mentioned government reference architectures before. The autonomy architectures you were just talking about, are those redesigned with every effort, or are those architectures well defined, well understood, and well accepted? Let's just focus on the DoD community.

Chris:

The short answer to all of your questions is no, right? We are still at the leading edge of this industry and these tools. There is not a fielded CCA; real degrees of autonomy in actual fielded operational systems are not really out there. I think you have strong but healthy disagreement about almost every aspect of what you just talked about. On the commercial and market side, the market model, the way companies are going to realize revenue developing into a system that is heavily biased towards a government-owned platform, is still uncertain. How do you appropriately incentivize enough smart engineers and enough private capital to get into a space that is not necessarily gonna reach the kind of scale you see in civilian electronics or software apps? I think that's an open question. What degree of portability and modularity is acceptable? A lot of folks use the phrase app store: I want the iOS or the Android, and then somebody can sell me a dogfighting app or a surveillance app. The big question is who gets to be Apple in that case, 'cause they're the ones really making money out of this, not necessarily any individual app provider. And is that model even sufficient? Or does there need to be some prime, somebody at the end of the day who can wrap their arms around this and say, I'm certain that this particular collection of software tools, platform, et cetera, is able to do the mission you need it to do? Is the government capable of being that lead integrator and mission validator, or are there technical skills, longevity, et cetera, that are gonna drive that into the private market? And how would you stop that participant from owning and adjusting the reference architecture in a way that would perhaps inhibit competition in the future? I think these are all open questions. I'll end on an optimistic note, though: knowing a lot of the participants in this space, both on the government and industry side, I think there's broad agreement that solving these problems is important, and that we should continue to enable the deployment of these technologies in a way that's responsive to government and defense market demand without closing the door on the next bright young person with a good idea. Everybody stands where they sit. Companies are trying to secure their competitive position; the government's trying to get the best deal for their money. But at the end of the day, what you've got are a bunch of great Americans trying to solve common problems, and despite the friction and uncertainty, I am optimistic to see the broader team come together.

Luka:

Despite the benefits and the motivation behind modular open system architectures, the separation of flight autonomy and mission autonomy, avoiding vendor lock, moving at the speed of software without impacting airworthiness, all of these things aside, what do you see as an unintended pathology of this concept? Maybe we can address this in the context of CCA or any other, more generalized concept.

Chris:

I actually heard a government representative at one of these sessions say, hey, if you want to go fast, go alone, but if you want to go far, go together. I think that sums it up really well; they're trying to bridge that gap. The first pathology to overcome is that modularity and open design do not make the first article cheaper, faster, or better. What you're doing is attempting to preserve a competitive landscape in the future so that you don't get stuck with a non-optimal solution when that winning contractor's A team goes off to win the next big program and you're left with whatever they feel like giving you. I think there's a pretty good realization that that's the problem we're trying to solve. The pathology we're trying to avoid is paralysis by analysis and giving everybody a veto. One of the challenges with any sort of consortium or committee-based process is that everybody can say no, and nobody's really sure who can say yes and how you actually move forward and establish things. I think it takes strong leadership from government, and it takes strong agreement from industry to follow that leadership. And, knock on wood, so far so good.

Luka:

There's a lot of buzz in the robotics domain about advances in vision language models and vision language action models. To what extent do you think that applies to what we're talking about?

Chris:

You know, honestly, I'm still trying to decide that myself. I'm enough of an enthusiast that I'm just incredibly excited about what I've seen within just the last few months. Despite being pretty well entrenched in this industry, the pace, and I know these are all selected demos, right, nobody puts out a video unless they're trying to make a point, nobody puts out a video of all the things that didn't work, but I think the pace of acceleration in the last year or so has been absolutely eye watering. It absolutely exceeds what I would've predicted a few years back, and to be honest, I'm excited to see where these tools go. That said, today at least, I still feel really strongly that the concepts of the modular, hybrid, and hierarchical architecture solve all those problems I've been talking about. I think what we're starting to realize, though, is that you may be able to use very general-purpose tools at each level of your hybrid and hierarchical system. There was an incredible paper out of the Waymo guys last fall. Their robotaxis are driving a hybrid and hierarchical system that works similar to what I described earlier: perception and modeling, trajectory generation, trajectory following, with various safety systems layered on it. They ran an experiment, I believe using a Gemini network, I'm not sure exactly which tool, where they wanted to see if one of these large transformer-based models could really tackle the whole problem. It was not perfect, but it was positive research. What I thought was interesting was that even though they were using the same tool, they were using it in a modular fashion. They asked the network, hey, make sense of these camera views and build me a world model. Then they asked the network, where do you think these vulnerable road users will be over this time horizon? And then they asked the network, what trajectory do you think I should follow? So even though they were using the same tool at each step, they were still invoking it in a modular fashion. I just think that's an incredibly exciting field going forward.
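As a rough illustration of that pattern, here is a Python sketch; query_model is a hypothetical stub standing in for whatever large multimodal model you would actually call, and nothing here reflects Waymo's or anyone else's real interfaces. The point is that one general-purpose model is invoked once per layer of the stack, rather than once for the whole problem.

```python
def query_model(prompt: str, context: dict) -> dict:
    # Hypothetical stub: in practice this would call a large multimodal
    # model endpoint; here it just echoes so the sketch runs end to end.
    return {"prompt": prompt, "inputs": list(context)}

def autonomy_step(camera_frames: list) -> dict:
    # Stage 1: perception -- build a world model from raw sensor data.
    world = query_model("Describe the scene as a structured world model.",
                        {"frames": camera_frames})
    # Stage 2: prediction -- where will other agents be over the horizon?
    futures = query_model("Predict vulnerable road users over the next 5 s.",
                          {"world": world})
    # Stage 3: planning -- propose a trajectory given world + predictions.
    trajectory = query_model("Propose a trajectory satisfying constraints.",
                             {"world": world, "predictions": futures})
    # Deterministic safety layers would still wrap this output, exactly as
    # in a classical hybrid/hierarchical stack.
    return trajectory

if __name__ == "__main__":
    print(autonomy_step(["frame_000.png"]))
```

Because each stage has its own inputs and outputs, each can still be validated, swapped out, or guarded independently, which is what preserves the hybrid and hierarchical properties even with a single underlying model.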

Luka:

What about the data challenge in building autonomous systems that fly? Can you shine some light on to what extent you can rely on synthetic data? What is the state of the art in terms of the quality of data acquired in simulations, versus having to go out and collect real-world data that's highly annotated or otherwise much higher quality than what you can get out of some of these simulation tools?

Chris:

Yeah, I think this is similar to your questions about the role of the architecture and its maturity and acceptance. For this particular application, take CCA: we're still building the first one, right? We're still building V1. So the pipeline to collect data from fielded platforms, get it back, label it, annotate it, clean it, augment it with simulation tools, and put it back into training: to the best of my knowledge, nobody has that yet. That is the next step out there, so it's important to point that out. The Air Force has invested heavily for a couple of years now in building aspects of that pipeline, really centered around Eglin Air Force Base. ACC's Experimental Operational Unit, the EOU, is gonna be the first user of CCAs, and that's their charter: get these things out there, start training with them, collect this data, and feed it back. But for some of your audience who aren't deep into AI, it is difficult to imagine the scale of data required before you can start to use AI to solve these problems. At EpiSci we used less reinforcement learning than the other teams on the AlphaDogfight and ACE programs, but I can confidently say that our agents flew more simulated dogfighting in a weekend than all humans have ever flown real dogfights since the invention of the airplane. And we were doing less than everybody else. That is the scale of data required. So you need tools for how you collect something in real life and then come back and augment it with sufficient quality of simulated data. It's not that your collected real-life data is fed straight into your training pipeline; it's that you use that data to improve the quality of your simulations and your simulated data set, then you train on that, and then you have a validation step on the back end. And while this is not my industry, I think you see that in practice with the autonomous vehicle companies, whether it's Waymo or Tesla or whoever. Every Tesla out there is sending clips back when it sees something it doesn't understand, but that's not trained on directly. It goes to their infrastructure, they build millions of permutations of it in sim, and then they put that into the training system.
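Here is that loop as a schematic Python sketch; the names (SimStub, calibrate, permute) are hypothetical placeholders, not anyone's actual pipeline. The key property is step 3: training consumes the synthetic set, never the raw field data directly.

```python
class SimStub:
    """Hypothetical simulator: calibrated by real clips, emits permutations."""
    def calibrate(self, clips: list) -> None:
        pass  # fold real-world edge cases into the sim's fidelity model

    def permute(self, clip: str, n: int) -> list:
        return [f"{clip}_variant_{i}" for i in range(n)]

class ModelStub:
    """Hypothetical model with a train/evaluate interface."""
    def train(self, batches: list) -> None:
        self.n_seen = sum(len(b) for b in batches)

    def evaluate(self, real_clips: list) -> dict:
        return {"synthetic_examples_seen": self.n_seen,
                "validated_on": len(real_clips)}

def data_loop(real_clips, sim, model, held_out_real):
    sim.calibrate(real_clips)                                 # 1. real data improves the sim
    synthetic = [sim.permute(c, n=1000) for c in real_clips]  # 2. mass-produce permutations
    model.train(synthetic)                                    # 3. train on synthetic, not raw field data
    return model.evaluate(held_out_real)                      # 4. validate on held-out real data

if __name__ == "__main__":
    print(data_loop(["near_miss_clip_01"], SimStub(), ModelStub(),
                    ["real_eval_clip_01"]))
```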

Luka:

Yeah, that's interesting. I was recently speaking with a robotics researcher, and we started talking about synthetic data and some of the more well-known simulations out there. This person was quite bearish on the quality of that data. One of the things they mentioned was that in classical robotics, you could split the simulation problem into a perception piece and a motion control piece, and you didn't really require high-quality simulations for evaluating and testing some of these models. But once you start breaking down the barrier between those two silos and talking about things such as a vision language action model, then you need to simulate everything from the pixels to the control system in a single simulated environment, and that is way too immature at this point to represent the real world. Have you come across this problem, or is that maybe specific to generalizable robotics?

Chris:

You know, a lot of the early architectural features I talked about, the way we chose to address the problem all the way back at AlphaDogfight, were specifically a response to and an acknowledgement of this. And that was before the current deployment of VLMs and the incredible video Boston Dynamics released, what, two days ago, showing Atlas doing backflips and stuff. So again, I wanna acknowledge that it's amazing. When you throw, I don't even know, ten to the what exponent number of computational cycles and hundreds of billions of dollars of investment into a space, obviously breakthroughs start occurring; that's just a given. But that was exactly it. That's why I so strongly advocate breaking up these systems and using traditional tools where they're sufficient. Because the idea, especially in a defense application, where the scale is never gonna be the same as some of these other markets, of driving your simulation to the point that you can have confidence in the intended answer and that it will survive the simulation-to-real transition: I don't have that confidence. I don't think it exists yet. And then for defense specifically, you have two other key challenges, and I'll just talk perception for a moment, although your point about the intent problem stands. The first is that if I want to train a system to use computer vision to sort packages in an Amazon warehouse, I have access to that. I may be able to control the lighting, and it's relatively straightforward to gather, or start to simulate, the data I need in order to train it. It is significantly more difficult to do that on, say, pictures of an SA-22 missile system in a deployed environment. Strangely enough, those types of systems don't necessarily appreciate you coming up and taking a bunch of pictures from every angle. So for one, you're gonna be dealing with much less data; it just doesn't exist in the world. And then you're gonna be dealing with an adversary that is going to attempt, on as fast a cycle as they can, to change that problem and make it harder. Everything from basic camouflage and concealment all the way up to advanced adversarial AI attacks is hitting at that core assumption that you're gonna be able to build the perfect sim and train the perfect behaviors in it. So you absolutely have to be cognizant of that problem and use all these other techniques and design architectures that are resilient to it.

Luka:

If we end up going down the path where we need real-world data, does that mean that incumbents necessarily have an advantage over any startup trying to innovate in this space? In automotive, what worked was either the Tesla model, where you fielded cars, collected data, and used it to train your AI models, or the Mobileye model. But in an aerial application, accumulating that real-world data is very expensive. So how does a startup do that, as opposed to an incumbent OEM or operator?

Chris:

I think there's a couple of answers to that that go beyond just the data question, but on the data side, my general opinion of what we've seen in Ukraine and similar areas is that it's not like there's any one vision model ruling the world. Instead, the differentiator is the pace at which you can make these updates. Is your training system able to take a single snapshot from a system that came back, then amplify and augment that data, and improve your models the next day? So it's one of those areas where I think there's a long tail; there's plenty of room at the bottom to go out and solve individual problems. The second thing is that in the defense ecosystem you have a couple orders of magnitude of scale that are still solved within the startup realm. Pick a couple of well-known names: even Shield AI and Anduril, which a lot of people mention in the same breath, are roughly an order of magnitude apart in terms of resources, scope deployed, reach, et cetera. Obviously Palantir is up at another level, and then you've got the primes off to the side. Not everybody's gonna be good at everything. And I think what we're seeing now, and Palantir and Anduril are excellent examples of this, is that the type of problems your traditional aerospace primes were built and organized to solve are not necessarily the problems that apply to these autonomous systems. You wanna be able to attract different people, use different tools, and potentially have different go-to-market models. So I'm generally optimistic, again, that the Western world and the way we build and form companies will continue to adapt to all those different areas.

Jim:

You were talking about the recent fighter contract that was awarded just a couple hours ago. Is there anything you could say to educate our audience on what you think they should know about it? What would be of interest?

Chris:

I don't think there's much I can add compared to what's out there. I obviously think autonomy is gonna open up new trade space; it's gonna continue to add uncertainties for adversaries and augment our capabilities. But I'm not so naive as to think that autonomous affordable mass isn't gonna have its own weaknesses. So this is a bet, similar to what the Air Force just said at AFA, right? They war-gamed this out a lot of ways, and while there are a lot of answers to these problems, they liked the outcomes better when NGAD, the Next Generation Air Dominance platform, was part of the system. I think we're gonna have all the same challenges that the F-22 and F-35 programs did. In order for this to move forward, they're gonna try to build a lot of cool new technologies that all come together in a very complicated system. Controlling cost, and balancing that against when we can actually get these things out in the field, is gonna be important. But from everything I've read and heard about it, it sounds like it's gonna be a really exquisite set of capabilities, and I'm a little bit jealous of my peers who are gonna get to fly it someday. Actually, a lot jealous.

Jim:

Nice. Listen, people can't see you, but we can, and behind you is a racing horse. Why is that behind you in your office? Do you mind?

Chris:

So, like I said, I just moved a couple of months ago and haven't unpacked my office. Historically, and probably the last time I talked to Peter and Luka, this was all my Air Force memorabilia and stuff, but it just so happened that one of the first things I unpacked when I got here was that photo. My wife used to competitively train and compete in equestrian events, so in what's an otherwise bare room, I've got a picture of her running an upper-level cross country event. It speaks to me, and it's one of the things my wife and I talk about a lot: whether it's riding upper-level horses or flying airplanes, there's just something amazing about being able to exceed the limits of what a human can do. For Amy, it's partnering with another living creature to go jump over these eight-foot obstacles and stuff like that. For me, it was flying airplanes in a previous life. But it's something we both talk about sometimes.

Luka:

Good. All right. Well, Chris, this episode has been in the making for a long time, so I'm glad we were finally able to coordinate it. Thank you very much for your time and for sharing your thoughts. It was a treat.

Chris:

Yeah. I really enjoyed it. I look forward to talking again.