Podcast Episode 16

What Fascinates Venkat Krishnamurthy of Salesforce?

Talking supercomputers, AI and big data with the Forrest Gump of Tech

https://feeds.soundcloud.com/stream/1785575841-bobby-mukherjee-563753774-loka-podcast-with-venkat-krishnamurthy-senior-director-product-management-at-salesforce.mp3

Venkat Krishnamurthy describes himself as “the Forrest Gump of Technology,” and he’s only partly kidding. His experience in finance, supercomputing and big data brought him from Goldman Sachs to the top of Salesforce, where he’s currently senior director of product management. Venkat talks to Bobby about his diverse professional background, how the evolution of supercomputing led to our current AI-dominated moment and the ways generative AI will continue enhancing data analysis in the future.

Transcript
  1. 0:00 The Forrest Gump of Technology: From Goldman Sachs to Oracle to Cray
  2. 15:30 Supercomputers vs AI—and the Classic Hacker Flick “Sneakers” 
  3. 21:00 The Origins of Open Source and its Influence on Data & GenAI
  4. 33:15 GenAI and Cloud-based Data Solutions

Bobby Mukherjee: Welcome to Loka's podcast, "What fascinates you?"--conversations with entrepreneurs, engineers and visionaries who are bringing innovations to life. I'm your host, Bobby Mukherjee. Today I’m talking with a very talented friend of mine, Venkat Krishnamurthy. I met Venkat a few years ago when he was at a company called OmniSci, which came to Loka as a customer. We learned that we’re both Carnegie Mellon grads and that we share a deep passion for product management. These days OmniSci is known as HEAVY.AI and Venkat is a senior director of product management at Salesforce. I’m glad we’ve stayed in touch, because we had a really stimulating conversation. Well, welcome to the show, and let's start by having you introduce yourself.

Venkat Krishnamurthy: Hi, I'm Venkat Krishnamurthy and I'm a senior director of product management at Salesforce.

Bobby Mukherjee: Excellent. Well, welcome, Venkat. It's great that you're here. I know we've been planning to do this for some time, so I'm glad that we could finally do this here in Salesforce Tower in San Francisco. It's lovely, lovely to be here. Oftentimes I like to rewind the journey and go back to the origin chapter, and I'd like to do that with you. I've known you for a while, but I actually don't think I've ever asked you: how and why did you pick mechanical engineering as your major in college?

Venkat Krishnamurthy: This is easy. You know, it came down to the way the college system works in India. You write an exam and depending on your rank, you kind of get allocated a major and you don't have a choice. So that's basically how it works. I don't think I was particularly inclined one way or the other toward any of them. I mean, everybody wanted to do something with, you know, engineering.

Venkat Krishnamurthy: You know, computers back in the day, right? But that was kind of why. And yeah, that's a fairly simple answer.

Bobby Mukherjee: Yeah, that makes a lot of sense. So then, from that starting point, how did you make the transition to software development, computer science and, you know, data and analytics? How did that happen?

Venkat Krishnamurthy: This was also, I think, one of those fate moments. It's like, we don't have too many choices; you kind of have to go on, right? What happened was, at the time that I graduated, the IT revolution in India had just picked up. The primary driver was IT services, and there was a company called Infosys, which at the time that I left college was in the hundreds of employees. Now it's about three orders of magnitude bigger.

Bobby Mukherjee: Wait, wait, in the hundreds? 

Venkat Krishnamurthy: In the hundreds, actually, yeah. Maybe over a thousand people by the time I joined, but everybody could fit in one building. They had just had a new headquarters built on the outskirts of Bangalore and they were basically recruiting people from the IITs, right? And at the time, you know, a bunch of family things were happening, and I didn't want to go to grad school in the US, so I just ended up starting there like a whole bunch of my classmates. It turned out that I liked software development and coding, and it was fun. It almost felt like an extended form of the college atmosphere.

Bobby Mukherjee: So from there, at a certain point you ended up in New York working for Goldman Sachs, which is quite a contrast, I would say. And I remember you telling me that your experience there at Goldman was both an early introduction to entrepreneurship and to machine learning. I mean, there's a lot to unpack there. Could you talk through that a bit?

Venkat Krishnamurthy: Goldman happened a couple of years after grad school; my grad school was deferred because I spent a couple of years at Infosys. And as it happened, my primary customer at Infosys was, at that time, a not particularly prominent company called Apple. And I decided, against all reason, to go to grad school for a few years. After that, a year or so later, I ended up at Goldman. And the reason I mention this is that you bring up entrepreneurship and machine learning both. One thing about Goldman is that while you have your day job, and mine was in risk analytics supporting market risk management and operational risk and all of those typical capital-markets back-office types of things, I also saw some chances to improve the way those things were done using some of the stuff I learned in grad school at Carnegie Mellon. Goldman sort of encourages that if you can find the time, which basically meant: finish your day job first, then fine, find time to do this. Which I did. The machine learning angle came in because the problem was interesting, right? It was in the operational risk realm, where you had a bunch of unstructured data in the form of incident reports. Every time there was, say, a system screw-up or any sort of operational incident, you would have these exhaustive reports written up. And those reports factored into what's called an operational risk capital charge, which was on the order of billions of dollars. The entire remit was to try and reduce that as much as possible, because that's money you're setting aside that you can't use for actual sales and trading or investing, right? So there's an opportunity cost to it. The idea was that they had an army of people who would look at those reports and try to categorize them into different buckets of potential exposure, with a capital charge attached to each. I'm getting into a lot of detail here, but the basic idea was that you were looking at somebody's text and transforming it into a monetary value associated with the capital charge. And what I saw was an opportunity to use natural language processing, of 2006, 2007 vintage, to try and do this classification automatically. We built a prototype and showed it off to a bunch of folks in and around the organization, and people liked it a lot. I remember showing it to the head of compliance at Goldman at the time, and he said, "I didn't know computers could do this." That was interesting. And it was entrepreneurial because, you know, you have an idea, you put it together, you show it to people, you get interest. I didn't end up developing it into a business because I had another thing dragging me down, my immigrant status at the time. And lo and behold, it also happened that there was this minor event called the financial crisis. I got a little disillusioned with the whole industry, and I was always looking to make my way back to the West Coast. So the summary here is that I found a way to apply natural language processing and the tech of that vintage to an actual problem in operational risk, and it was pretty interesting.
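
Purely to illustrate the kind of pipeline Venkat is describing, here's a minimal sketch in modern Python. Scikit-learn stands in for the 2006-era tooling (the original work was built on Lucene, as he mentions later in the conversation), and the report texts and risk buckets are hypothetical placeholders.

```python
# A minimal sketch of an incident-report classifier of the kind described above.
# Modern scikit-learn stands in for the 2006-era tooling; the report texts and
# risk-bucket labels below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: free-text incident write-ups and the operational
# risk bucket a human reviewer assigned to each one.
reports = [
    "Settlement system outage delayed trade confirmations by four hours.",
    "Manual entry error caused a mispriced equity derivative position.",
    "Unauthorized access attempt detected on the risk reporting portal.",
]
labels = ["systems_failure", "execution_error", "security_incident"]

# TF-IDF turns each report into a weighted bag-of-words vector; a linear
# classifier then maps that vector to a risk bucket.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reports, labels)

# Classify a new, unseen incident report.
print(model.predict(["Batch job failure left positions unreconciled overnight."]))
```

In the setup Venkat describes, the predicted bucket is what human reviewers had been assigning by hand before the capital charge was computed.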

Bobby Mukherjee: If you were to extrapolate from that moment in time to now, if you were trying to solve that today, what are the technical plate tectonics that have shifted and would have made it much easier?

Venkat Krishnamurthy: Oh, that's a great question, Bobby. It's like the plate tectonics from the time there was Pangaea, this one landmass, to what the continents look like now. All of the evolution that's happened from then to now, you could think of it that way. I mean, right now, unless you've been hiding under an Earth-sized rock, the entire AI revolution is based in some form on text processing, right? Except that the entire stack that powers it looks nothing like what you had 15 years ago.

Bobby Mukherjee: To mark the moment in time: as we sit in front of each other today, I believe Nvidia has just passed Google in terms of market cap, which is just something else. It shows you the wave that we're presently in. So at some point you ended up at Oracle, where I had spent some time as well, early in my career. I'm just super curious: how would you compare the culture at Oracle to that of Goldman Sachs?

Venkat Krishnamurthy: You know, this is something that I've thought about ever since I left Goldman, and I went to Oracle right after that. As it happened, the person who hired me at Oracle was an investment banker by training, and he now runs, interestingly enough, a fairly successful tech startup in the data catalog space called Alation. You should probably have him on your podcast. The interesting thing here was, I think Goldman, if I reflect on it, was a risk-taking culture. And what I mean by that is that the basic driver of what made money for Goldman was this idea that you could take risks if you knew what you were doing and were conscious about how you took them. You were allowed to make mistakes a couple of times; if you made the same mistake more than a couple of times, then it didn't end well. But it was risk-taking in the sense that new ideas and new approaches were always encouraged. And I think that was built on this foundation of communication where there was no hierarchy. The idea wasn't that you had your own fiefdom, with some big head honcho at the top whom you could only approach through intermediaries or something like that. It was more like you could go talk to literally anyone, as long as you had a case to make and the data, or your conviction, to back it up, and that was encouraged. In some ways, you know, I remember there was a bunch of McKinsey folks analyzing how Goldman survived the financial crisis, and there's no secret sauce other than the fact that everybody was on the same page with respect to what was happening, who was taking what risks and how to unwind those risks. So risk-taking was the culture, which is kind of why, if I look at my own tiny entrepreneurial thing that I did there, it was allowed.

Bobby Mukherjee: Which is just really amazing to hear, having never worked at Goldman. My perception of Goldman from the outside looking in is that it's this firm that's over a hundred years old, and, you know, I'm a West Coast guy, so I assumed a New York-type culture would be much more stiff and rigid. So it's striking to hear that they were actually much more entrepreneurial and much flatter than that.

Venkat Krishnamurthy: It was really direct. The communication, I mean, first of all, it's New York, so you can imagine people don't have time for niceties.

Bobby Mukherjee: So very, very straight to the point. 

Venkat Krishnamurthy: Exactly. And that worked. And I think the other thing that struck me about Goldman was, you know, people talk about culture in the context of companies, and there it was almost a tangible thing. If you think about the fact that this company has been around for 150-plus years, everybody sort of internalized this culture. You could call it drinking the Kool-Aid, but it was more like the intangibles of communication and things like, for example, 360-degree reviews, so everybody got to weigh in on your performance and what you did, all those types of things. There was a very tangible culture to it, which I think just survives generation after generation, and that's kind of the amazing part. The other thing that was striking was how advanced the technology was. It's probably no surprise, but the firms in high finance, capital markets and the buy side, they are basically technology companies. There is no tangible product; you're not building stuff that goes on a shelf. It is information, and they have the capital to live at the technological edge of how this all works. Goldman was like that. Some of the smartest technologists I've ever worked with were there, to this day.

Bobby Mukherjee: Yeah. It's a remarkable place. So if you were to shine a light then on the culture at Oracle, what would you say there? 

Venkat Krishnamurthy: Yeah. I always like to say that the most interesting things I learned about technology, I learned at Goldman, and most of my education in terms of business and how it works came at Oracle. Kind of the mirror image of what you would expect. At Oracle, the commercial focus was just astonishing. The go-to-market organization, from sales to everything supporting the sales organization to marketing, that was a machine in so many ways. And I think in some ways it made up for the fact that Oracle stopped being a true technology innovator a few years ago. Right now it's the idea of being able to sell literally anything, whether it's acquired or whether….

Bobby Mukherjee: It's funny, I never even thought about that, but that's so true. Goldman you would think of as a financial services firm and Oracle you would think of as a technology firm, but if you think about Oracle's success in the previous decades, that was really all about financial engineering excellence through all the mergers and acquisitions.

Venkat Krishnamurthy: It's like, you know, the joke about Goldman used to be that it's a hedge fund with an investment banking business attached. The corollary about Oracle was that it's basically a private equity firm with a technology business attached.

Bobby Mukherjee: I’ve never actually heard that, but it makes a lot of sense. So another one of your stops was Cray. Maybe start off by talking a little bit about what Cray is, and then some of the experiences that I think might be really interesting for people to hear about.

Venkat Krishnamurthy: Yeah, this is the part of my career where I actually tell people I'm like the Forrest Gump of technology, kind of seeing these things play out in different scenarios across different industries. I think Cray was an interesting stop for me. At the time I was at Oracle, I was getting a little bit disillusioned with what I was doing there as a product person. A fun fact is that my manager at Oracle and I were working on at least putting together the idea for what became a company called Alation. He of course took it and actually built the company out, and I couldn't join him at the time, but I did feel that I wanted to work at a smaller, early-stage company. At the time, about 10 years ago, there was this thing called big data, and I ran into someone at a technology conference who was incubating a company whose parent company was Cray. It was doing work in large-scale graph analytics, with customers in the life sciences and in government. And I just happened to have done work in that area at Goldman, coincidentally. So the dots connected and I ended up there. For the first couple of years it was this incubated startup, and then we tried to spin off and that didn't work out, so it ended up getting spun back into Cray. And I spent about a couple of years in the CTO office at Cray.

Bobby Mukherjee: And just to give listeners the basics: what is Cray? I think of it as a supercomputer company, but you could probably describe it better.

Venkat Krishnamurthy: At the heart of it, it is exactly that: a supercomputer company. As for the idea of what a supercomputer is, I think Cray's legendary image when it comes to supercomputing is the original Cray-1, that semicircular machine where everything, literally the processors, the entire infrastructure, was built from the ground up. And it was designed to solve large-scale scientific problems. And that was the…

Bobby Mukherjee: So I know of Cray because, and this is an obscure film reference, it's featured in a scene in a movie called Sneakers. If you've ever seen Sneakers with Robert Redford, they play a bunch of hackers, basically. And I hope I'm not spoiling Sneakers for anyone who hasn't seen it, but there's a very important scene where Ben Kingsley and Robert Redford meet to have a conversation, and they do it inside that semicircular Cray machine. So the design of the Cray supercomputer plays a role in pop culture.

Venkat Krishnamurthy: What's interesting is the number of people who were still at Cray when I joined who had been through it all. Think of technology companies in the Valley, for example: there isn't a culture of continuity; people just move in and out, right? But I actually met some of the engineers who worked with Seymour Cray, the founder, and I heard all these terrific stories about what he was capable of, the kind of designs that he could come up with essentially with pen and paper.

Bobby Mukherjee: But the crazy part is that the original supercomputer, the original Cray-1, is probably not as powerful as the laptops sitting on our desks here.

Venkat Krishnamurthy: Probably less powerful than a modern Raspberry Pi in several ways. But then if you take the current generation of Cray supercomputers, you're talking about machines that can handle an exaflop, which is, getting my numbers right here, 10 to the 18th floating point operations per second.

Bobby Mukherjee: So just to give someone a visual picture, like how many MacBook Pros is that? 

Venkat Krishnamurthy: So the modern MacBook Pros are no slouches; these are pretty powerful machines too. In pure computational horsepower you're talking at least on the order of thousands, if not tens of thousands, of these machines. And I think the amazing thing is that if you go to many of the national labs, in fact I would say all of them, they have Cray machines or have had Cray machines, right? If you've heard of NCSA Blue Waters, this is one of the famous ones at the University of Illinois; Marc Andreessen worked at NCSA. Blue Waters was a Cray machine, and the newer ones are at Los Alamos, at Lawrence Berkeley, in places like that. The biggest thing I learned about supercomputing is that its primary use, what people do with supercomputers, is science at scale, right? The idea is that you have the ability to model scientific phenomena with almost real-world fidelity. If you think about it, you can model the inside of a star with amazing fidelity. And this is another interesting fact: 65 to 70 percent of the world's weather forecasts are run on Cray supercomputers. Literally every weather center, at least back when I was there, used to go through a cycle of buying Cray machines. So these large-scale, physics-based phenomena that you simulate were the primary workload. Then everything changed, because the same exact architecture and engineering that goes into these scientific computing machines, which are the primary purpose of supercomputers, turns out to be exactly what AI uses, the same engineering principles, right? So you hear the term AI supercomputers now, and it's pretty much the same thing.
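
For context, the back-of-envelope arithmetic behind that comparison looks roughly like the sketch below. The per-laptop figures are rough assumptions that vary a lot by precision and workload, not benchmarks.

```python
# Back-of-envelope arithmetic behind the exaflop-to-laptop comparison.
# The laptop figures are rough assumptions, not benchmarks.
exaflop = 1e18                        # 10^18 floating point operations per second
laptop_low, laptop_high = 1e13, 1e14  # assume roughly 10-100 TFLOPS per laptop

print(f"{exaflop / laptop_high:,.0f} to {exaflop / laptop_low:,.0f} laptops")
# -> 10,000 to 100,000 laptops, i.e. at least tens of thousands
```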

Bobby Mukherjee: Yeah. So in many ways it was kind of the precursor to the circuitry and semiconductor excellence that NVIDIA provides as a basis for changing AI, before that was available.

Venkat Krishnamurthy: The big transition happened about 10 years ago, around the time CUDA first came out as a programming framework; a couple of years before that, people had figured out how to use graphics cards to do large-scale parallel processing that you couldn't otherwise do, right? And one of the Cray machines at Oak Ridge National Lab, about 10 years ago, was the first one that incorporated GPUs as part of the compute fabric. They had something like 10,000 GPUs, and that was like, whoa. These days that's not much, but you can see the connection between the compute used for AI and traditional supercomputing.

Bobby Mukherjee: Should have bought some NVIDIA stock then!

Venkat Krishnamurthy: It would have been good.

Bobby Mukherjee: So I wanted to move the discussion to open source, data and gen AI, and do it in phases. I know this is an area of interest that we share, but maybe start off by giving the audience a primer on open source.

Venkat Krishnamurthy: Sure. You know, this goes back to well before my time, I would think. A lot of it has to do with the Free Software Foundation, Richard Stallman, and, in parallel, things that came out of Berkeley, like the BSD Unix distribution. One of the articles you pointed me at, Bobby, said something to the effect that all software used to be open source, because at a particular point in time most software was research and open by default. That changed when commercial incentives came into the picture, but the idea was that code was something you shared and people improved, whatever purpose it was used for. You had this culture of sharing and collaborative innovation that was always there. The big prominent drivers early on were the Free Software Foundation and the licensing ideas that Richard Stallman came up with, but then a whole bunch of these things popped up at about the same time. For example, in the early nineties, I think it was, Python emerged as a programming language. And if you think about the history of what it's meant to AI, I've actually interacted with folks in that ecosystem who lived open source in a very real way. They do it just to advance what's possible. So that's kind of the start. The original idea was just sharing code and being able to collaboratively innovate on solving specific problems in any area, right?

Bobby Mukherjee: I think it's very interesting. We could probably chart a graph where in the ‘50s, ‘60s and maybe even ‘70s, money in the technology industry was being made with hardware. So doing software for the greater good, collaboratively, with no economic value exchanged, was fine, because we can pay the rent with the hardware sales and the hardware sales guys can take care of that. But then you could probably draw a line to see what happened when Gates struck the OS deal with IBM, and what happened to the free and open source movement. It goes into an ice age, I would say, and it wouldn't be until Linux that it had its next step-function growth.

Venkat Krishnamurthy: It's a great point, because once the commercial incentives changed on software, and I think that's the key observation, nobody had time to do it for free.

Bobby Mukherjee: Yeah, they could pay rent doing this. And we all have to pay rent at the end of the day. So that's open source. That's a good primer. Getting to our area of interest, how do you think open source has impacted data and data infrastructure? 

Venkat Krishnamurthy: Well, this is almost a separate area by itself, compared to the broader open source world. There are so many different dimensions to it, but I like to think of a couple of concurrent trajectories. One is in data infrastructure, and by that I mean, if you leave aside things like operating systems, where Linux and open source were already established, data infrastructure for a long time was the preserve of big companies like Oracle and the like. But even then, and I don't know the exact timing of this, MySQL and Postgres, the latter of which came out of Mike Stonebraker's research at Berkeley, were open source, and they were relatively low key at the time. They served very specific needs, as an alternative to something like Oracle that you could use for not-too-serious work. I think that changed in maybe the mid-2000s, when the momentum behind this picked up, especially with the rise of the big cloud providers; nobody wants to fork over gazillions of dollars of licensing to a commercial database company. And that set a whole chain of things in motion. So there are two aspects of this. One is core data infrastructure for recording what your business is doing, your MySQL and Postgres and the like. But the bigger momentum shift, at least in my mind, was on the analytics side of things. The famous, or rather infamous, Google paper about MapReduce came out in 2004 or so, and then everybody was like, well, it seems like we can do large-scale data processing without paying Teradata, or Oracle, or others, an arm and a leg.

Bobby Mukherjee: Sharding is your friend.

Venkat Krishnamurthy: And then Doug Cutting went off and wrote Hadoop. And I have an interesting story: in the work that I did at Goldman, we actually used Lucene, the original Lucene engine, as the basis for building a clustering framework on top of it, because it did the text processing that we needed. So the idea here is that the analytics and data processing side of things, as opposed to the OLTP or data-recording side, really blew up when Hadoop came along. And it was adopted by, other than Google, who kind of built the original thing, Facebook, who used it for pretty much their whole infrastructure and came up with Hive and things like that. In the early 2010s that was the rise of big data as a discipline, and the entire movement was built on an open source foundation. Hadoop raised the prominence of open source, but there was an even older community, the Python world, think of NumPy and the Python tools you use for data processing, that existed for even longer and was open from the very beginning. I actually had a chance to work quite a bit with Travis Oliphant, the creator of NumPy. What happened is that Python's ease of use and familiarity moved concurrently with what was happening in Hadoop, and it turned out that Hadoop, or even Spark to a certain extent, was actually pretty difficult to use. And then with the rise of AI, when people realized they could use NVIDIA's GPUs for this stuff, they needed a user-friendly programming environment, and that turned out to be Python. Python just won that pretty easily. And if you think about it, all of this is based on open source as a foundation. I keep thinking about the fact that when a framework like PyTorch or TensorFlow first came out, they were open source from the beginning, because the incentive for Google and Meta was not to monetize the frameworks; it was to have a community of people working on the infrastructure while they could work on attaching it to user experiences within their own properties.
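
For readers who haven't run into MapReduce, the programming model Venkat and Bobby are referring to boils down to two functions. Here's a tiny in-memory sketch of the canonical word-count example; real Hadoop distributes these phases across a cluster, and the sample documents are just placeholders.

```python
# A tiny in-memory sketch of the MapReduce programming model (word count).
# Real Hadoop distributes the map and reduce phases across a cluster; this
# just shows the shape of the two functions.
from collections import defaultdict

documents = ["the quick brown fox", "the lazy dog", "the quick dog"]

# Map phase: each document is turned into (key, value) pairs independently,
# which is what makes the work embarrassingly parallel across machines.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group intermediate pairs by key.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: combine the values for each key.
word_counts = {word: sum(counts) for word, counts in grouped.items()}
print(word_counts)  # {'the': 3, 'quick': 2, ...}
```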

Bobby Mukherjee: Right. So again, there was a clear monetization angle for them. They weren't doing it out of the kindness of their hearts; they knew how they were going to get paid. And so it was in their best interest to, I guess, honor the open source edict.

Venkat Krishnamurthy: And what I think is amazing about this, leaving aside the whole thread of open source running into every aspect of the data infrastructure world, is that in every part of the stack you can name cutting-edge open source tools that really drive innovation. And I think it's become an ethos now to think about how you solve these problems in an open way, which I think is more powerful than any particular framework. If you go to the average commercial enterprise company these days, they'll have a lot of projects or efforts that are based on open source technologies, because of the inherent advantages it has, right? No vendor lock-in, the ability to see what's happening out in the open and be able to contribute to it. Not everybody does it to an equal degree, but it's become an ethos, to the extent that you're not tied to a commercial entity or a vendor. And I think that's a good thing.

Bobby Mukherjee: So what impact do you think open source data will have on the training of the various LLMs that are currently battling it out?

Venkat Krishnamurthy: That's a great question, because I think it comes down to two different aspects of the whole GenAI movement, right? The first is the models themselves, and the other is the data. And it's worth separating the two, at least in my mind, because what everybody's excited about is the models, because they can be used to do things. But all of these models, at least the open ones, are ultimately trained on a set of open data, data that's out there with liberal open source licensing that can be used for pre-training. So whether it's Llama or Mixtral or any of the open variants, they all use a common set of openly available data sets. Now, there are some issues with respect to attribution and copyright that haven't been fully resolved, but you can trace the inputs into open source models back to these open data sets. What I find interesting is that not every company discloses its training data, including Meta, as it happens. So the weights are open, but the data sets aren't, and that's probably because there are legal implications involved in those training data sets. That's one side of it. On the models themselves, I think the way it's lining up is that not everybody has the resources or wherewithal for training the models, because it does take a huge amount of capital investment. You know, if you have a spare supercomputer lying around, you should probably do it. The bigger deal I see is that the architectural innovations will continue to happen on the basis of these open models: how do you make the next epsilon improvement that creates better reasoning or whatever, right? Then there's the data part, where it's less clear that it's open source. There are, for example, a couple of foundation model builders that are doing this with truly open data. The advantage of that is that when you think about things like ethics and fair use and bias, you do want to be able to look at the data, and that is important. On the other hand, the incentives now are like, hey, there is an open model to which I have access, and maybe I don't need to pay too much attention to where the data came from. But I think that will change. A lot of the development is going to be done in the open; it'll be interesting to see how the data side of this plays out and whether that also ends up becoming truly open.

Bobby Mukherjee: Agreed. So how do you think generative AI will impact the demand for cloud based data solutions in the near future? 

Venkat Krishnamurthy: So I think it changes the mix of what people expect from a data foundation, right? And here I go back to my experience at Snowflake. Snowflake effectively reinvented what used to be called data warehousing, and it's a huge thing because they transformed the user experience of what a data warehouse used to be. You would have to bring these giant boxes from Teradata into the data center and hire armies of consultants to build out data warehousing solutions, and they made it as simple as swiping a credit card and getting started. But that was serving structured data, analytics and BI, which is what data warehousing was about from the very beginning. GenAI's primary impact, I think, is going to be in the modalities of data that people use within analytics and insight of any type. Anything that you do with GenAI is going to primarily involve unstructured data, for the first time, and that is where I think the primary impact is going to be. There are a lot of companies trying to bring the two together. The world of unstructured data requires a whole different set of capabilities, everything from the infrastructure level to the software level: how do you process that data, and how do you ultimately turn it into structured data so you can ask meaningful, structured questions of it? Still, nobody wants to do this all themselves, so it's going to be capabilities that are delivered as a service. It's going to change the kind of infrastructure and certainly the economics of this, because, let's say you're using GenAI in the context of an application: do you want to pay 10x more because somebody's using GPUs in the backend? I don't know. This is the stuff that has to play out, which will be interesting to watch.
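
One concrete way to read "turning unstructured data into structured data" is the pattern sketched below: ask a model to emit fixed fields as JSON, then treat the result like any other row. The `call_llm` function, the prompt wording and the field names are hypothetical placeholders rather than any particular vendor's API.

```python
# A sketch of the "unstructured to structured" step described above: ask a
# model to pull fixed fields out of free text as JSON, then treat the result
# like any other row. `call_llm` is a placeholder for whatever model endpoint
# you actually use; no particular vendor API is assumed.
import json

PROMPT_TEMPLATE = """Extract the following fields from the support ticket below
and respond with JSON only: product, severity (low/medium/high), summary.

Ticket:
{ticket}
"""

def call_llm(prompt: str) -> str:
    # Placeholder: swap in your model client here (hosted API or local model).
    raise NotImplementedError

def ticket_to_row(ticket_text: str) -> dict:
    raw = call_llm(PROMPT_TEMPLATE.format(ticket=ticket_text))
    row = json.loads(raw)  # now a structured record, ready for an ordinary table
    return {k: row.get(k) for k in ("product", "severity", "summary")}
```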

Bobby Mukherjee: Yeah, I very much agree. Do you think generative AI can contribute to making data analytics more accessible to non-technical users?

Venkat Krishnamurthy: Oh, absolutely. This is, again, just looking back at Snowflake and peer companies in that group: it's always been this holy grail of saying, okay, I just want to talk to my data and get answers. Think about the evolution of BI. You used to have static reports that literally came out, and you had to look at them, and that was all the information or insight you got about your business. Then you had tools like Tableau, which transformed that by saying, hey, you can actually explore it, but in a limited and somewhat circumscribed setting; you were still living within a bounded world of the questions you could ask of your data. And it wasn't really, the cliche term is obviously conversational, you weren't really interacting with your data. You were consuming it and coming up with ideas based on what you saw. Now the promise is that it can actually be interactive in a very real sense, where you can have a multi-turn exploration of it, like a conversation. So the user experience is fundamentally going to be transformed. I don't know what it means if your primary modality of consuming data was a really cool dashboard, and a dashboard is still pretty powerful, because the bandwidth of visual data consumption and insight is still much higher; you don't want your GenAI reading things back to you, right? So it's going to be a combination. I think where it will work is that you will have the ability to take what dashboards did really well, which is give you that high-bandwidth visual insight, combined with the ability to ask ad hoc, deeper questions and have the system react, which is not possible in the current set of BI tools. Companies like ThoughtSpot have been trying to do this, but you had to build a whole different kind of AI before GenAI and LLMs came along to do that. Now it seems actually possible. So that's, I think, the big, interesting and cool part of this.

Bobby Mukherjee: Yeah, as much as the LLMs are powerful, I think it's driven by the UX in front of the LLMs.

Venkat Krishnamurthy: In the context of data and analytics, LLMs really are about the UX, at least for now. This is something I keep thinking about: the current generation of LLMs is trained, going back to the earlier point, on a fairly general set of data sets that are available for public use. But then there's a whole world of dark data in terms of what's actually inside companies. And it's dark data not just in the sense that it's not available publicly, but also in the sense that every industry you go to has its own set of domain concepts and terminology, right? Chances are, if you really want to have a conversation with your data, and you're in finance or in healthcare, you're going to be using domain terminology to ask those questions. I don't think the current generation of LLMs understands domains in any depth. There's a first-generation analog to this in the data warehousing world: we used to create what are called conceptual models, say for finance or for healthcare, and these were sort of world models of the key data objects in finance, represented as a relational model. The AI equivalent of that is something called ontologies, which is a fancy term, right? The idea is that somehow these things have to come together. Let's say you have an LLM facing a trader at a buy-side hedge fund, and they want to ask about their exposure to equity derivatives in a particular market. That's the question they're going to ask, and they're not going to wait to translate those concepts into something the LLM can understand. It already has to have the corresponding fine-tuning to understand domain models. That's a huge opportunity area, I think, for enterprise use in the analytics setting.
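
A rough sketch of how the ontology idea could meet an LLM today, short of full fine-tuning, is to ground the prompt with a small domain glossary and the table schema so a question asked in trading terminology can be translated into SQL. The glossary, schema and `call_llm` placeholder below are all hypothetical, not drawn from any real system discussed here.

```python
# A sketch of grounding an analytics LLM with domain knowledge: a small
# glossary plus the table schema are injected into the prompt so a question
# asked in trading terminology can be turned into SQL.
# The glossary, schema and `call_llm` placeholder are all hypothetical.
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # swap in your model endpoint of choice

DOMAIN_GLOSSARY = {
    "exposure": "sum of position market values, column mkt_value",
    "equity derivatives": "rows where asset_class = 'EQ_DERIV'",
    "market": "the venue_region column",
}

SCHEMA = "positions(position_id, asset_class, venue_region, mkt_value, trade_date)"

def question_to_sql(question: str) -> str:
    glossary = "\n".join(f"- {term}: {meaning}" for term, meaning in DOMAIN_GLOSSARY.items())
    prompt = (
        "You translate trading-desk questions into SQL.\n"
        f"Schema: {SCHEMA}\n"
        f"Domain terms:\n{glossary}\n"
        f"Question: {question}\nSQL:"
    )
    return call_llm(prompt)

# e.g. question_to_sql("What's our exposure to equity derivatives in APAC?")
```

Whether this lives in the prompt, in retrieval, or in fine-tuning is exactly the open question the conversation points at.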

Bobby Mukherjee: I'm certainly interested in what can be done there, and I think there are a couple of different domains where that kind of advancement could happen. Frankly, I think that's where, in the next year or two, we should see some advancements.

Venkat Krishnamurthy: Yeah. What I find interesting is how it's going to change the economics and the value chain, in the sense that the infrastructure side of LLMs is something we roughly see settling down: who builds them and how you make them available. For example, take something like Amazon's Bedrock. You take either proprietary models or open models like Llama or whatever, and Amazon has built the serving infrastructure where you can use them. But that stops short of being useful in particular domains. And then there are the domain companies or the application companies, including Salesforce, for example, where the knowledge is encapsulated in what they know about user workflows in healthcare or finance. Somebody has to represent that, and somebody has to bridge the gap from these still general-purpose LLMs that Amazon is helping you use to that domain knowledge. And that, to me, like you were saying, is the big opportunity area, because that's a long tail of opportunity. So yeah, that's going to be fun for those of us who work in enterprise.

Bobby Mukherjee: I'm looking forward to seeing exactly how that plays out. Well, Venkat, I really appreciate you doing this. This has been fantastic. Thank you so much. 

Venkat Krishnamurthy: Thanks, Bobby.

Bobby Mukherjee: That was Venkat Krishnamurthy, senior director of product management at Salesforce and a brilliant thinker who’s working in the space where big data and artificial intelligence overlap. I really enjoyed our conversation and I hope you did too. Feel free to rate us on whatever podcast platform you listen on—we really appreciate the feedback. Until next time, I’m Bobby Mukherjee and as always, I wanna know: What fascinates you?

Venkat Krishnamurthy
Senior Director, Product Management, Salesforce

Loka's syndication policy

Free and Easy

Put simply, we encourage free syndication. If you’re interested in sharing, posting or Tweeting our full articles, or even just a snippet, just reach out to medium@loka.com. We also ask that you attribute Loka, Inc. as the original source. And if you post on the web, please link back to the original content on Loka.com. Pretty straightforward stuff. And a good deal, right? Free content for a link back.

If you want to collaborate on something or have another idea for content, just email me. We’d love to join forces!