
From 6 Weeks to 600 Seconds (or Less)

The revolutionary potential of utilizing AI for electronic hardware design

In the process of researching and writing my book, I've had some great conversations with experts with unique, first-hand perspectives on AI. I'll be sharing some of these interviews here because I think some of you will enjoy reading them in full, and it'll give a taste of what's to come in the book.

A previous conversation I posted was with Dr. Joshua Reicher and Dr. Michael Muelly, Co-Founders of IMVARIA, about the use of AI in clinical practice, which you can check out here. The interview below is with Tomide Adesanmi, Co-Founder and CEO of CircuitMind, about the use of AI in electronic hardware design.

Follow updates on the book here, including the presale that will be coming up next month!



Most people are familiar with AI as an impressive productivity tool in software engineering. However, one of the places with the most potential for massive impact is actually the other engineering disciplines. One such area is designing the brains of our modern computing revolution: circuit boards.

CircuitMind generates electronic designs for circuit boards – those (typically green) boards that have components and traces all over them inside computers, phones, cars… and almost literally everything these days.

As Tomide Adesanmi (Co-Founder and CEO of CircuitMind) describes, the normal process is quite laborious. An engineer creates a block diagram of the design, a high-level functional "sketch" of the overall system. Then the engineer needs to go select components, reading complex technical documents called data sheets, which, from personal experience, can run to tens or (fortunately rarely) hundreds of pages of dense technical information and charts about a single part.

This process is painful and laborious: selecting the right components, connecting them together, analyzing the design, and making sure it is all correct and actually meets your requirements. But that's not the end. After all of that, you need to fit these components into an extremely tight space, the physical board (which, especially in consumer electronics like Apple products, is allotted space that only gets smaller and smaller).

Typically, Tomide relates, this process takes between one and six weeks per iteration, and there can be 3 to 10 iterations to get to the final design, if things go well.

CircuitMind uses AI to bring that process down from one to six weeks to 60 to 600 seconds. It's obvious how impactful this would be across all sorts of industries, and how completely it could revolutionize the process of electronic design.

(Full Disclosure: Creative Ventures is an investor in the company)

Topics Discussed

  • What is CircuitMind?

  • Why won’t neural networks or LLMs work for what you’re doing?

  • Will this replace electrical engineers?

  • What advancements do you see as AI models improve?

  • Open weight models vs. specific closed models?

  • Using deep learning or symbolic models

  • Who are CircuitMind’s customers?

  • How much could you accelerate productivity?

  • What’s next for CircuitMind?

  • Have you seen industry pushback on AI generation?


What is CircuitMind?

James:

All right, great. We are live. So, Tomide, why don't you introduce yourself and CircuitMind?

Tomide:

Thanks very much. Nice to be here, James. So, I'm Tomide. I'm the co-founder and CEO of CircuitMind. Before CircuitMind, I was an electronic systems engineer. So, I used to build helmet-mounted display systems and heads-up display systems for jet fighter pilots at a company called BAE Systems. Started CircuitMind, and what we do at CircuitMind is we build AI and automation systems for electronics engineers, specifically for the design process.

James:

Okay, great. Why don't you describe specifically within CircuitMind how AI plays a role in what you do?

Tomide:

Yeah, it's a great question. So, the goal of our platform at CircuitMind is to generate designs. As an electronic engineer, typically today you'll start by creating a block diagram that represents the requirements of your design, whether that's a processor connected to some sensors, connected to some drivers and actuators, and you set this all out. And then you go away and start selecting components by reading these very complex technical documents called data sheets, figuring out which components you want to pick, connecting them together in a design, analyzing the design, making sure it's correct and meets the requirements you need, and then positioning these components in a tight space, which is the physical board. And you're still doing this on some design software.

Typically, this process takes between a week and six weeks per iteration, and between three and 10 iterations to get to the final design. So, what CircuitMind does is, instead of taking that one to six weeks, we're bringing it down to something like 60 to 600 seconds. So, we want to do one iteration of that design very, very quickly.

And so we use AI. There are two parts of the platform. One of them is that we've been able to build a system where we can describe these components I mentioned before, the processor and so on, not as a data sheet, but as a digital model.

The problem is these technical documents (data sheets) are hundreds of pages long. And so where we use AI specifically is to extract this deep technical information and create at least some parts of our digital model. We call this system Elixir. It can read data sheets and technical information and create a model.
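
To make that idea concrete, here is a minimal sketch of what "data sheet text in, structured component model out" could look like. The schema, the field names, and the call_llm helper are my own illustrative assumptions, not how CircuitMind's Elixir system actually works.

```python
# Illustrative sketch only -- not CircuitMind's Elixir system.
import json
from dataclasses import dataclass

@dataclass
class ExtractedPartFacts:
    part_number: str
    supply_voltage_min: float   # volts
    supply_voltage_max: float   # volts
    interfaces: list[str]       # e.g. ["I2C", "SPI"]

def call_llm(prompt: str) -> str:
    """Hypothetical helper standing in for whichever LLM provider you use."""
    raise NotImplementedError

def extract_part_facts(datasheet_text: str) -> ExtractedPartFacts:
    """Ask the LLM for a small, structured slice of the data sheet as JSON."""
    prompt = (
        "From the data sheet excerpt below, return only JSON with keys "
        "part_number, supply_voltage_min, supply_voltage_max, interfaces.\n\n"
        + datasheet_text
    )
    facts = json.loads(call_llm(prompt))
    return ExtractedPartFacts(
        part_number=facts["part_number"],
        supply_voltage_min=float(facts["supply_voltage_min"]),
        supply_voltage_max=float(facts["supply_voltage_max"]),
        interfaces=list(facts["interfaces"]),
    )
```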

The other part of the system uses more mathematical algorithms to solve for a circuit. AI the way you know it, like LLMs and so on, is not quite the right fit for reading these models and then generating a circuit, not quite yet. So we use deterministic algorithms.

So you can still call it AI in some cases if you want, depending on how you define AI. But these are deterministic algorithms that solve for a circuit, not intuit an answer.
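
To illustrate the "solve, don't intuit" point, here is a deliberately tiny constraint-search sketch: pick the cheapest processor/sensor pair that shares a bus. A real circuit solver is vastly more sophisticated; the catalog, part names, and costs below are made up for illustration.

```python
from itertools import product

# Tiny illustrative catalog; a real parts library has millions of entries.
CATALOG = {
    "processor": [
        {"name": "MCU-A", "interfaces": {"I2C", "SPI"}, "cost": 3.10},
        {"name": "MCU-B", "interfaces": {"I2C"}, "cost": 2.40},
    ],
    "sensor": [
        {"name": "IMU-X", "interfaces": {"SPI"}, "cost": 1.80},
        {"name": "IMU-Y", "interfaces": {"I2C"}, "cost": 1.20},
    ],
}

def cheapest_valid_pair() -> dict | None:
    """Exhaustively search every processor/sensor combination and keep the
    cheapest one whose parts share at least one bus. Solving, not guessing."""
    best = None
    for proc, sens in product(CATALOG["processor"], CATALOG["sensor"]):
        if not (proc["interfaces"] & sens["interfaces"]):
            continue  # constraint violated: no shared interface
        cost = proc["cost"] + sens["cost"]
        if best is None or cost < best["cost"]:
            best = {"processor": proc["name"], "sensor": sens["name"], "cost": cost}
    return best

print(cheapest_valid_pair())  # -> MCU-B + IMU-Y, total cost ~3.60
```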


Why won’t neural networks or LLMs work for what you’re doing?

James:

No, I think that's great. And I think especially for this case, it's helpful to describe why neural networks or deep learning AI does not work for it. I think the audience here would appreciate that. Why don’t you just throw these things into neural networks or LLMs blindly?

Tomide:

Yeah, I mean, the first one is that LLMs don't really have enough good data, or there's not enough data out there, on what a good input for a circuit is (you know, this block diagram with the detailed information, the design intent of the engineer, as we call it) and what a good corresponding circuit looks like, because there are essentially trillions and trillions of potential ways you can go from a set of requirements to a circuit.

All that typically exists today is data on the final circuit, which is just drafted documentation of what you have designed in your head. And even when that exists, it's still locked up inside massive organizations, and no single massive organization has enough circuit design data for you to just go and train an LLM.

The other thing is, roughly, at least what we believe at CircuitMind (we've done a lot of tests), that LLMs are really good for problems that mimic human intuition, rather than problems that mimic human reasoning. And circuit design takes a lot of steps.

Every single thing you're doing, you're using a lot of data, taking a lot of steps, doing calculations, looking at graphs and pinpointing things. It's not about one of those things.

It's about all those things together leading to one small decision of connecting one line to another line. So you see this sort of chain-of-thought reasoning that goes through five steps and everybody's happy, but for a circuit design problem you have to go through a lot more than that, even though a person can do it very quickly.

So yeah, that's the other bit – intuition versus real kind of reasoning calculation and going through these sort of rigorous steps to get to an answer. 

In fact, we've tried LLMs more on things like: just suggest a component for one little block of mine. And that's actually better, because it's intuition. The model has seen a lot of things, so maybe it can give you three options, and the human decides. You know, sometimes it's wrong, but that's more where I think LLMs could help in the actual circuit design process.
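
As a sketch of that "suggestion, not decision" use of an LLM, something like the following could work. The prompt and the hypothetical call_llm helper are my assumptions, and the output is only a shortlist for a human engineer to vet.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical helper standing in for whichever LLM provider you use."""
    raise NotImplementedError

def suggest_components(block_description: str, n: int = 3) -> list[str]:
    """Ask an LLM for candidate parts for one block; a human vets the output."""
    prompt = (
        f"Suggest {n} candidate parts (manufacturer part numbers) for this "
        f"circuit block, one per line, with no commentary:\n{block_description}"
    )
    reply = call_llm(prompt)
    # Treat the answer as a shortlist to review, never a decision to trust blindly.
    return [line.strip() for line in reply.splitlines() if line.strip()][:n]

# Example usage (the block description is made up for illustration):
# suggest_components("3-axis IMU, SPI interface, 1.8-3.3 V supply, low power")
```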

James: 

Yeah, totally. I won't ask you to rag on OpenAI. I’ll do that myself here. Apple's recent paper and some of these others have demonstrated exactly what you're talking about in the sense that it's pretty clear that LLMs do not reason no matter what, you know, the chain of reasoning or other things that OpenAI says that its latest models do. It just doesn't really resemble it even in fairly simple cases. 

So yeah, I've described it myself as machine intuition, based on the way that there's an information bottleneck in neural networks. So I totally agree; that makes a lot of sense, and I think it's a great nuance to have.


Will this replace electrical engineers?

James: 

I guess let's talk a little bit more about then, what do you think the impact for industry is? I mean, I think this will get into your customers. But I think one thing that people are interested in is this is going to replace electrical engineers, like what place does your product serve in the chain?

Tomide:

I think, you know, just high level, these things are tools that augment an engineer. So we're kind of thinking about the augmented engineer.

There's always a spectrum, right? On one end of the spectrum, this thing is a SaaS software tool; SaaS software also augments people, but it's vanilla, it doesn't automate anything, it's not really smart or intelligent. On the other end, let's say there's something that can do all of your job as an engineer. So [CircuitMind] is somewhere kind of in the middle, which is like a massive, massive step change.

But it's not something that is going to look like a human engineer. It will, though, transform what the job means, you know, what an electronic engineer is doing. So you want the more creative things to be done by the engineer: the new things, the new designs, the verification and checking of a design. So creativity, that's one thing that will not be automated.

The ownership of the design, which involves doing all the verification and analysis, making sure it's right, stays with the engineer too. And you want the exploration choices to be made by the designer. So this platform will help you generate 10 different design options, optimize for cost, size, power, change the processor, change that. You're running through the design space now with this very powerful tool, but the final decision is made by you. And there's a lot of nuance behind the choice that you're going to make in the end. It can't be rules-based, essentially.

So you still have to make that choice, so you'll still need engineers there. And then the final thing that I think will transform the job of an engineer is that it's going to make engineers less of a specialist in, okay, I only do this part of a design, and more of a systems engineer: I'm going to sit on top of these tools, generate a schematic, generate multiple schematics, generate some layouts, generate some firmware, and fill in the gaps in all of them. Rather than: I'm a firmware engineer, I'm a layout engineer, I'm a front-end design engineer, which is how some companies are doing it today. But the reality is that trend has been happening anyway, so this is just something that's going to accelerate it.

So if I were to go back and sum this up: I think in the future there'll be these really specialist engineers who are very good at some of these really creative things, like very high-requirements analog and RF designs where you're coming up with new topologies and so on, real specialists in that. And then there'll be people who are the opposite of that, more like super systems engineers who can use these tools as an aid. The first group can do things these tools cannot do at all, and the second group are more like decision makers and systems people, pushing these algorithms to generate stuff, filling in the gaps, and choosing the final design.

James:

Yeah, that's beautiful. That actually hits on one of the core themes of the book, which is that AI at this point is just statistical distributions. It is not creative almost by definition, just in terms of what it can or can't do. So yeah, human creativity is hard to define and still something really, really hard for us to replicate right now. But I'll ask pretty directly, because, hey, you are making these engineers more productive: is this going to replace electrical engineers? Are you going to be taking jobs away? Let's just ask that question directly.

Tomide:

Great question. It's not going to be replacing them, because there's already a massive shortage of engineers in the industry. There's a stat I read somewhere, I can't remember where now, that something like 50 percent of the people doing this job are going to retire in the next 25 years.

And the pipeline of engineers that's supposed to be filling that gap is nowhere near what it needs to be, because if you're an electronic engineer from Stanford or wherever going into the workforce, you can be hired by Google, or some software company, or McKinsey, that's going to pay you triple the salary on entry. So there's no point in you going. So the industry is trying to train apprentices. But it's the same thing: the apprentices spend seven years in the company from the age of 15, and when they're 22, they can probably get a much better job somewhere in software or something like that. And so they're hemorrhaging people. So they need something that fills in the gap. And this is why I really don't believe that there's going to be any replacement there.

That's number one. Number two is that designs are becoming more complex, so people are taking longer to do these designs, and this is just going to be something that helps them bridge that gap as well. And then there's a massive increase in demand for electronics. Demand is far outpacing supply, and it's increasing over time because of IoT, robotics, machine intelligence, NVIDIA building lots of GPUs and the new generations, and so on.

So it's just exploding, right? The demand for electronics. So this is nowhere near replacing engineers, because there's a massive gap between demand and supply right now.

James:

Yeah, I'd actually be curious if there's any industry that looks that way, with just too many people, because I don't think I've encountered a single case at this point, whether it's manual labor or higher-end white collar jobs or engineering jobs, where the problem is too many people. It's like that pretty much everywhere. So that totally echoes what I've heard in a lot of different areas as well.


What advancements do you see as AI models improve?

James:

I guess we've already talked a lot about what AI can't do well, but maybe one thing that would be helpful to know is: if the models get better, and you keep seeing changes within the industry as they improve, where do you see them being able to plug in? And what do you see as your long-term defensibility within this particular area?

Tomide:

Yeah. So the place where I see AI being incredibly helpful is, like today, when we're creating these models that represent a component. If you think about it, there are millions of electronic components out there in the market, and they come from thousands of different manufacturers. These people write technical descriptions of these components, again a hundred-plus pages long, in these data sheets, in their own language.

And even within one company, one person might be a creative marketer, because these data sheets double as technical documents on how to use and choose the component, as well as marketing documents to kind of sell to engineers and say, oh yeah, I can do this great thing for you. So, you know, some people might write with a bit of flair and some people might just be very factual. But typically, from data sheet to data sheet, they represent things in different ways, and they represent concepts on how these chips work in different ways.

These chips and all the components work in different ways. What we found at CircuitMind is that LLMs are pretty good at this: if you train them, and do some RAG (Retrieval-Augmented Generation) on top of them, they can pretty much read the parts of these data sheets that make up our model, the model we've created for these components, so that to describe this model is to describe this component. Now, this is not at a hundred percent; it's probably 60 to 70% that we can extract now.

This is one of the main bottlenecks for CircuitMind, and we're improving this capability every day. If you can get to 100% of every component being readable by LLMs, which I think is going to be possible, I think it transforms the unit economics of CircuitMind as it stands today. We just kind of don't have to think about that anymore.

So our defensibility is not really in that, because I think that's going to be done by LLMs easily. I think the defensibility for CircuitMind is: what is the model? How do you describe a component, a complex chip from NVIDIA as well as a simple chip from someone else, maybe Microchip, and have a unified language for all of that? It took a lot of time with a bunch of mathematicians, electronic engineers, and software people sitting in a room to come up with that. It's defensible. It's not, you know, infinitely defensible, but it's somewhat defensible because it's always changing as new concepts are added.

And then the second one is these mathematical algorithms that we've developed over time, which we're still adding to and which are also evolving, that can actually solve for a circuit. Those are where our defensibility is, right? Not in the creation of the model. We'd be so excited the day the LLMs get to, you know, 99%, 100%. It changes the unit economics for CircuitMind completely, and the value we can deliver.
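
To give a flavor of what a unified component "language" might look like, here is a toy schema. The fields and the example part data are my own illustrative assumptions, not CircuitMind's proprietary model.

```python
from dataclasses import dataclass, field
from enum import Enum

class PinRole(Enum):
    POWER = "power"
    GROUND = "ground"
    IO = "io"

@dataclass
class Pin:
    name: str
    role: PinRole
    voltage_min: float | None = None  # volts, where applicable
    voltage_max: float | None = None

@dataclass
class Component:
    part_number: str
    manufacturer: str
    pins: list[Pin] = field(default_factory=list)
    interfaces: set[str] = field(default_factory=set)  # e.g. {"I2C", "SPI", "PCIe"}

# The point of a unified schema: the same structure describes a simple
# microcontroller and a complex SoC, so algorithms can reason over both.
mcu = Component(
    part_number="ATSAMD21G18",
    manufacturer="Microchip",
    pins=[Pin("VDD", PinRole.POWER, 1.62, 3.63), Pin("GND", PinRole.GROUND)],
    interfaces={"I2C", "SPI", "USB"},
)
```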

James:

And I think I heard you say earlier that you also had different proprietary data and other things that you're able to gather over time with doing this too, right?

Tomide:

Absolutely. So that's another thing. Think about this: with CircuitMind, you now have the whole thread of a circuit, from design intent through design generation to the final circuit. You have this because you're generating everything from scratch; you're not doing much of this stuff in your head anymore. It goes through this process of: you had this design intent, it generated this circuit, and someone validates it afterwards and says this circuit is actually the one I want to take forward from a list of other circuits.

So what you can now do is use this validated data, both the information about design intent and the validated circuits, to start figuring out how to, let's say, intuit circuits: train on this data and do more things that maybe your rules cannot do, or even replace parts of your rules-based system. This is something that we are thinking a lot about within CircuitMind. We will also have access to data on which circuits people actually take forward into their real circuit designs.
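
As a rough illustration of the kind of dataset this creates, here is a hypothetical record pairing design intent with the generated candidates and the engineer's final choice. The structure is my assumption, not CircuitMind's actual data format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DesignIterationRecord:
    """One design iteration: the intent that went in, the circuits that came
    out, and which one the engineer validated and took forward."""
    design_intent: dict              # block diagram plus requirements, as data
    generated_circuits: list[dict]   # candidate netlists / BOMs from the solver
    chosen_index: int | None         # which candidate the engineer kept
    engineer_notes: str = ""
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def log_iteration(record: DesignIterationRecord, store: list) -> None:
    """Append a validated (intent -> circuits -> choice) example to a dataset
    that could later be used to train or evaluate learned models."""
    store.append(record)

dataset: list[DesignIterationRecord] = []
log_iteration(
    DesignIterationRecord(
        design_intent={"blocks": ["MCU", "IMU"], "links": [("MCU", "IMU")]},
        generated_circuits=[{"bom": ["MCU-A", "IMU-X"]}, {"bom": ["MCU-B", "IMU-Y"]}],
        chosen_index=1,
        engineer_notes="Cheaper BOM; I2C is fine for this sample rate.",
    ),
    dataset,
)
```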

Open weight models vs. specific closed models?

James:

Yeah, perfect. And it sounds like as the models improve and everything, it only helps you just because it'll be able to help you parse everything better. I guess one question more out of curiosity than anything else, are you all using open weight models more or have you been using some of the specific closed models? I'm curious how that landscape has been for you.

Tomide:

Yeah, from my understanding, this is something I'd confirm with my co-founder, but it's the closed models that we've been using mostly. We've tested a lot of the models and we just kind of go with the ones that perform better on our data sets. And it's the closed models that pretty much do the best.

Using deep learning or symbolic models

James:

Yeah, I guess it makes sense. You just sort of go off of what works well and it's not like you're locked into them anyway. So yeah, it makes a lot of sense in terms of that. I guess one question too is within the current perspective of all these things happening, are you thinking about anything within say deep learning or even symbolic models, which is sort of a blast from the past beyond LLMs at this point?

Tomide:

Yeah. So when I describe Elixir, again, this is for reading data sheets and extracting information. Actually, I oversimplified it in the sense that we have different models that read different things in a data sheet. It's not just one giant model.

And LLMs are good for certain things, but they're not good for others. We have some specific models that are deep learning models built to read certain things. And they're not all built by us; we're relying as much as possible on what's out there, the state of the art. Sometimes we build our own stuff, but essentially it's a set of different things: sometimes machine learning algorithms, deep learning algorithms, LLMs, or even some simple rules-based things after some OCR, to extract this information and then normalize the data afterwards. So there's a whole bunch of stuff happening there and a bunch of different models that we use.
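
A minimal sketch of that kind of multi-extractor pipeline might look like the following. The section names, extractors, and placeholder values are mine, purely for illustration of routing different data sheet sections to different extractors and then merging the results.

```python
def extract_pinout(section_text: str) -> dict:
    # Stand-in for a table-structure model or rules-based parser.
    pins = [line.split()[0] for line in section_text.splitlines() if line.strip()]
    return {"pins": pins}

def extract_electrical_specs(section_text: str) -> dict:
    # Stand-in for an LLM or regex extractor over min/typ/max tables.
    return {"supply_voltage_max_v": 3.6}  # placeholder value

EXTRACTORS = {
    "pinout": extract_pinout,
    "electrical_characteristics": extract_electrical_specs,
}

def build_component_model(sections: dict[str, str]) -> dict:
    """Route each data sheet section to its extractor, then merge the results."""
    model: dict = {}
    for name, text in sections.items():
        extractor = EXTRACTORS.get(name)
        if extractor is not None:
            model.update(extractor(text))
    return model

model = build_component_model({
    "pinout": "VDD power\nGND ground\nSDA io\nSCL io",
    "electrical_characteristics": "Supply voltage: 1.62 V to 3.6 V",
})
# model -> {'pins': ['VDD', 'GND', 'SDA', 'SCL'], 'supply_voltage_max_v': 3.6}
```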

James:

And did you have to build your own RAG pipeline or was that something that you were able to get off the shelf?

Tomide:

We did build our own RAG pipelines. I don't know whether that's in production, because we experimented with a lot of things, so I don't know exactly what made it to production. My co-founder would probably be able to tell you that. But yeah, it's probably a mixture of some of the pipelines we built and some off the shelf.

James:

Yeah, totally. And I think this echoes a lot of other startups I've talked to, where progress right now is moving pretty fast and it's kind of throwing together what works. So yeah, I mean, [Creative Ventures are] big believers obviously in terms of what you do, and part of it is, I think, thematically, one thing you always want with whatever AI startups we invest in and think will succeed is that as the models get better, the startups get better.

They don't get crushed just because the models end up getting better, which in our opinion is a path to irrelevance in maybe the not so long term. So yeah, we like that characteristic in terms of CircuitMind as well.


Who are CircuitMind’s customers?

James:

So I guess in terms of maybe one thing to also get a sense of is what sort of customer set is your biggest right now and where do you see that going ultimately?

Tomide:

Customer segments?

James:

Yeah.

Tomide:

We first started testing this technology with SME (Small and Medium-Sized Enterprises) companies. That's where you typically start. Who can move quickly? Who can test this out? Who wants quick value out of this? But we found that there's a lot of enterprise value for larger companies.

So these are companies with, let's call it worldwide, more than 50 electronic engineers. They're doing 50-plus circuits a year, typically high-complexity circuitry. They're typically also working on final products where the electronics are the long-lead-time component. So think about what I call complex electronics in a box. If you're doing a vehicle, a car, there's a lot of other things: the harnessing, the engine, etc. But if you're doing a data center server, it's just a thing with electronics in it. It's a box with the electronics in it.

So "complex electronics in a box" enterprise companies are where we're putting a lot of focus, but we have automotive, aerospace and defense customers. We have medical, EMS, design services, industrial. So it's not really an industry-segmented problem. It's more a question of: how many circuits are you doing, are you doing enough for this to be valuable to you, and are you blocked by the electronics in terms of lead time?

How much could you accelerate productivity?

James:

Yeah, totally. Maybe this is completely off the wall, but one thing that I've been pretty public about is my skepticism towards AI accelerators or ASICs, just because the architecture, whether it's LLMs or deep learning or whatever, does change. I mean, now we're looking at certain gradient-free networks, KAG networks, some of these other things that existing architectures and ASICs probably wouldn't cover.

Maybe this is complete speculation, but since this is an electronics-in-a-box kind of situation, do you think you'd be able to bring productivity up so high in designing these things that it actually becomes cheap enough to defy my prediction here?

Tomide:

Well, that's a good point. I don't think it would be enough productivity to defy your prediction, because with chips, even if you're right, the problem is not even the design of the chip, it's the verification of the chip. That can take you years.

This is the long-cycle thing for an AI accelerator chip company: the pre- and post-silicon validation step. So we will bring some productivity when you're actually prototyping, once the chip is there and you're putting it on the board, but it's going to be nowhere near the time they take to do this sort of chip validation.

So I would love to say yes to that question, but my suspicion is that if you're correct, then there's a fundamental flaw in the business model.

James:

Yeah, totally. I mean, just out of curiosity, do you have any predictions in that realm or thoughts on that realm?

Tomide:

I think it takes some faith to believe that GPUs are not going to continue to be the dominant way, and that ASICs developed in a certain way are going to be much better than GPUs at this. And I don't know about that. I'm not an expert, but that's, I think, what you have to believe.

James:

Yeah, totally. I mean, I agree with that. I think it even takes a leap of faith to believe that AMD would be able to jump in and become a more significant part of the training pipeline over NVIDIA, given their very poor track record of developing the software on their side. But anyway, we'll have to see how everything plays out. But yeah, I think you're right. It's a lot of leaps of faith, and I'm in agreement on that.

What’s next for CircuitMind?

James:

Well, as we're starting to wrap up, one of the things to talk about is: what comes next for CircuitMind? What are you excited about? What are you looking forward to?

Tomide:

I'm excited about a lot of the developments that we're making. We're automating more and more complex parts of the design process, the parts that take more time and where people can get more optimization. So power, analog stuff, that's where we're going, like high-speed FPGAs and switches and optical. You just take on more of the meat and build more defensibility when you're able to do more complex things. And that really, really excites me.

And then the other part that excites me is just being able to come up with a testimonial that says, well, this sensor device that's now in production was 70% generated by a machine intelligence, let's call it that, a machine intelligence platform. That also excites me as we grow in the market. And then other business models open up: component suppliers want to sell their chips and want to know which customers are choosing someone else's parts and for what. You know, having all these conversations, in a place where you are one of the only solutions, is pretty exciting.


Have you seen industry pushback on AI generation?

James:

Well, I guess actually on that: in certain industries, even if you have shortages of people, you do get pushback on some of the AI-generated stuff. Have you seen any sort of pushback within your industry, or any objections from engineers about what you're doing?

Tomide:

Absolutely, there's pushback all of the time. I see it all the time, but it really depends on your technology, your platform, your product strategy.

There are some engineers who are certainly fearful, but not most; that might be an initial reaction, but most engineers are just engineers. They're pragmatic, right? You can't come to them and say, yeah, well, there's this LLM and it said this line should be connected to this line.

Okay, what's the proof? Do I need to go and reread the data sheet for that and do it myself? Right? It's just not pragmatic if you cannot trust it and cannot explain what it does. So when CircuitMind comes in and says, look, there is some AI, but the stuff the AI does is checked before you ever even search; it's just about creating the models. And then there's this deterministic thing that is kind of like a simulation of sorts, but it's not really a simulation, it's a solver, and you can explain exactly how it works: it gets you the circuit and it's following these rules. Plus here's another engine that just does checks.

It's all about doing verification of that first design. One thing solves, one thing verifies, then it's a different conversation completely. It's what I see designers have been doing since the eighties with synthesis and place and route engines. There's nothing kind of really bizarre about that concept. So as long as you can build something that's reliable and trustworthy, explainable, and you know, verifiable for an engineer, you're on the path to convince them. 
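
As a toy example of what a separate, auditable rules-based check can look like, here is a simple supply-voltage compatibility rule. Real electrical rule checks are far richer; everything below is an illustrative assumption rather than CircuitMind's actual verification engine.

```python
def check_supply_compatibility(net_voltage: float, pins: list[dict]) -> list[str]:
    """Flag any pin on a power net whose rated range excludes the net voltage."""
    errors = []
    for pin in pins:
        if not (pin["v_min"] <= net_voltage <= pin["v_max"]):
            errors.append(
                f"{pin['part']}.{pin['name']}: {net_voltage} V is outside "
                f"[{pin['v_min']}, {pin['v_max']}] V"
            )
    return errors

report = check_supply_compatibility(
    5.0,
    [
        {"part": "U1", "name": "VDD", "v_min": 1.62, "v_max": 3.63},
        {"part": "U2", "name": "VIN", "v_min": 2.7, "v_max": 5.5},
    ],
)
print(report)  # flags U1.VDD, which is not rated for 5 V
```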

And then there are a lot of engineers that have started using ChatGPT in their personal lives. There are now early adopters within every organization. You just need to find those guys. They need to raise their hands and volunteer. They might not be your best engineer. They might not be your most seasoned, or even they might not be your most junior. It's the person that raises their hand to say, I volunteer to bring this into my workflow and test this out for you guys. That's the guy or the lady that you want to get involved to start with.

James:

And it sounds like the thing that you help bring is transparency and an understanding of how your process works, which is how you build trust. It isn't just, oh, we are a hundred percent accurate, or 99.99% accurate, or whatever; it's: here is how we do it, and you can understand it inside and out.

Tomide:

100%. In fact, our demo doesn't just show the platform. There's a demo of the component models, and of the pipeline that creates all these things together.

That's what creates confidence. And when you generate a design, we show the checks, the separate engine that runs them, and the results from those checks. You have a full report that the engineer can go and look at, and a full analysis of the circuit as well, on its reliability and so on. And these are rules-based. So you kind of have to have all of those things together.

James:

Yeah, totally. And one of the reasons why I bring that up is that this has come up in industry after industry. It's not really about guaranteeing someone perfect accuracy, which I'm personally skeptical you will ever be able to do, for multiple reasons, not least that it just doesn't fit how deep learning of any sort works.

But ultimately, you can't trust it unless you have some level of explainability and transparency. So yeah, that totally resonates and fits with almost every other area.

Tomide:

And just to add to that, the first thing you have to say to people is: it's not like people are a hundred percent accurate, right? So the first bit is that I talked about these three to ten iterations. No design is done right the first time; you're going to make a mistake and you're going to go do a re-spin. So what you need to do is say, okay, this thing at least has a chance of being better or more accurate than the person, or, in addition to a person, can be more accurate. That's what you have to prove, not 100% accuracy.

James:

Yeah, totally. I mean, someone else in another one of these interviews made that same point: if you're talking about clinicians and you're criticizing deep learning for being a black box and not perfectly accurate, well, guess what, a clinician is a black box too, and you still have to figure out processes to deal with that.

So yeah, it totally makes sense and comports well with what a lot of other industries are saying as they adopt this.

All right. Is there anything else that I should have covered but didn't, before we wrap this up?

Tomide:

No, not really. Those were great questions. I enjoyed that.

James:

All right, perfect. Thanks so much, Tomide.

Tomide:

Thanks a lot, James. Thanks for having me.

Thanks for reading!

I hope you enjoyed this interview.

Once again, this is a preview of the type of content in my book.

If you’d like to see more on this topic and others on AI, follow updates on the book release (and upcoming presale) here!


What You Need To Know About AI book by James Wang
