An introduction to generative AI with Swami Sivasubramanian
In the last few months, we’ve seen an explosion of interest in generative AI and the underlying technologies that make it possible. It has pervaded the collective consciousness for many, spurring discussions from board rooms to parent-teacher meetings. Consumers are using it, and businesses are trying to figure out how to harness its potential. But it didn’t come out of nowhere — machine learning research goes back decades. In fact, machine learning is something that we’ve done well at Amazon for a very long time. It’s used for personalization on the Amazon retail site, it’s used to control robotics in our fulfillment centers, it’s used by Alexa to improve intent recognition and speech synthesis. Machine learning is in Amazon’s DNA.
To get to where we are, it’s taken a few key advances. First was the cloud. This is the keystone that provided the massive amounts of compute and data that are necessary for deep learning. Next were neural nets that could understand and learn from patterns. This unlocked complex algorithms, like the ones used for image recognition. Finally, there was the introduction of transformers. Unlike RNNs, which process inputs sequentially, transformers can process multiple sequences in parallel, which drastically speeds up training times and allows for the creation of larger, more accurate models that can understand human knowledge and do things like write poems, or even debug code.
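To make that difference concrete, here is a minimal sketch in PyTorch, with toy dimensions chosen purely for illustration rather than anything Amazon-specific. The RNN has to walk the sequence one step at a time, while self-attention relates every position to every other position in a single matrix operation, which is what makes it so much easier to parallelize across hardware.

```python
# A minimal sketch of why transformers train faster than RNNs.
import torch
import torch.nn as nn

batch, seq_len, dim = 2, 8, 16
x = torch.randn(batch, seq_len, dim)

# RNN: each timestep depends on the previous hidden state, so the sequence
# dimension must be processed step by step.
rnn = nn.RNN(input_size=dim, hidden_size=dim, batch_first=True)
rnn_out, _ = rnn(x)

# Self-attention: every position attends to every other position in one
# matrix multiplication, so the whole sequence is processed in parallel.
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
attn_out, _ = attn(x, x, x)

print(rnn_out.shape, attn_out.shape)  # both (2, 8, 16), but attention parallelizes
```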
I recently sat down with an old friend of mine, Swami Sivasubramanian, who leads database, analytics and machine learning services at AWS. He played a major role in building the original Dynamo and later bringing that NoSQL technology to the world through Amazon DynamoDB. During our conversation I learned a lot about the broad landscape of generative AI, what we’re doing at Amazon to make large language and foundation models more accessible, and last, but not least, how custom silicon can help to bring down costs, speed up training, and increase energy efficiency.
We are still in the early days, but as Swami says, large language and foundation models are going to become a core part of every application in the coming years. I’m excited to see how builders use this technology to innovate and solve hard problems.
To think, it was more than 17 years ago, on his first day, that I gave Swami two simple tasks: 1/ help build a database that meets the scale and needs of Amazon; 2/ re-examine the data strategy for the company. He says it was an ambitious first meeting. But I think he’s done a wonderful job.
If you’d like to learn more about what Swami’s teams have built, you can read more here. The entire transcript of our conversation is available below. Now, as always, go build!
Transcription
This transcript has been lightly edited for flow and readability.
***
Werner Vogels: Swami, we go back a long time. Do you remember your first day at Amazon?
Swami Sivasubramanian: I still remember… it wasn’t very common for PhD students to join Amazon at that time, because we were known as a retailer or an ecommerce site.
WV: We were building things and that’s quite a departure for an academic. Definitely for a PhD student. To go from thinking, to actually, how do I build?
So you brought DynamoDB to the world, and quite a few other databases since then. But now, under your purview there’s also AI and machine learning. So tell me, what does your world of AI look like?
SS: After building a bunch of these databases and analytic services, I got fascinated by AI because literally, AI and machine learning puts data to work.
If you look at machine learning technology itself, broadly, it’s not necessarily new. In fact, some of the first papers on deep learning were written like 30 years ago. But even in those papers, they explicitly called out – for it to get large scale adoption, it required a massive amount of compute and a massive amount of data to actually succeed. And that’s what cloud got us to – to actually unlock the power of deep learning technologies. Which led me to – this is like 6 or 7 years ago – to start the machine learning organization, because we wanted to take machine learning, especially deep learning style technologies, from the hands of scientists to everyday developers.
WV: If you think about the early days of Amazon (the retailer), with similarities and recommendations and things like that, were they the same algorithms that we’re seeing used today? That’s a long time ago – almost 20 years.
SS: Machine learning has really gone through huge growth in the complexity of the algorithms and the applicability of use cases. Early on the algorithms were a lot simpler, like linear algorithms or gradient boosting.
Over the last decade, it was all about deep learning, which was essentially a step up in the ability of neural nets to actually understand and learn from patterns, which is effectively where all the image-based or image-processing algorithms come from. And then also, personalization with different kinds of neural nets and so forth. And that’s what led to the invention of Alexa, which has remarkable accuracy compared to others. Neural nets and deep learning have really been a step up. And the next big step up is what is happening today in machine learning.
WV: So a lot of the talk these days is around generative AI, large language models, foundation models. Tell me, why is that different from, let’s say, the more task-based, like vision algorithms and things like that?
SS: If you take a step back and look at all these foundation models, large language models… these are big models, which are trained with hundreds of millions of parameters, if not billions. A parameter, just to give context, is like an internal variable that the ML algorithm must learn from its data set. Now to give a sense… what is this big thing that has suddenly happened?
A few things. One, transformers have been a big change. A transformer is a kind of neural net technology that is remarkably more scalable than previous versions like RNNs or various others. So what does this mean? Why did this suddenly lead to all this transformation? Because it is actually scalable and you can train them a lot faster, and now you can throw a lot of hardware and a lot of data [at them]. Now that means, I can actually crawl the entire world wide web and actually feed it into these kinds of algorithms and start building models that can actually understand human knowledge.
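To put a rough number on what a “parameter” means here, this is a minimal sketch in PyTorch with toy layer sizes chosen only for illustration: every learnable weight in the network is a parameter, and even one small encoder layer carries tens of thousands of them, while foundation models stack thousands of much wider layers to reach billions.

```python
# A parameter is any weight the model learns from data. Counting them for a
# single small transformer encoder layer (toy sizes, chosen for illustration):
import torch.nn as nn

layer = nn.TransformerEncoderLayer(
    d_model=64, nhead=4, dim_feedforward=128, batch_first=True
)
num_params = sum(p.numel() for p in layer.parameters())
print(f"one small encoder layer: {num_params:,} learnable parameters")
# Foundation models stack thousands of much wider layers, which is how the
# total climbs into the hundreds of billions.
```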
WV: So the task-based models that we had before – and that we were already really good at – could you build them based on these foundation models? Task specific models, do we still need them?
SS: The way to think about it is that the need for task-specific models is not going away. But what is changing is how we go about building them. You still need a model to translate from one language to another or to generate code and so forth. But how easily you can now build them is essentially a big change, because foundation models are trained on an entire corpus of knowledge… that’s a huge amount of data. Now, it is simply a matter of actually building on top of this and fine tuning with specific examples.
Think about if you’re running a recruiting firm, as an example, and you want to ingest all your resumes and store them in a standard format that you can search and index. Instead of building a custom NLP model to do all that, you can now take a foundation model and give it a few examples: here is an input resume, and here is the output in your standard format. You can even fine tune these models by just giving a few specific examples. And then you essentially are good to go.
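As a concrete illustration of the few-shot idea Swami describes, here is a minimal sketch in Python. The example resumes, the JSON schema, and the call_foundation_model stub are hypothetical stand-ins invented for this sketch, not a real API; the point is that a handful of worked examples, rather than a custom NLP model, define the task.

```python
# A minimal sketch of few-shot prompting for resume standardization.
import json

EXAMPLES = [
    {
        "resume": "Jane Doe. 5 yrs Java at Initech. BSc CS, 2015.",
        "standardized": {"name": "Jane Doe", "skills": ["Java"], "experience_years": 5},
    },
    {
        "resume": "John Roe - data analyst, SQL/Python, Acme Corp 2019-2023.",
        "standardized": {"name": "John Roe", "skills": ["SQL", "Python"], "experience_years": 4},
    },
]

def build_prompt(new_resume: str) -> str:
    """Assemble a few-shot prompt: each example pairs a raw resume with the JSON we expect."""
    parts = ["Convert each resume into the standard JSON format shown."]
    for ex in EXAMPLES:
        parts.append(f"Resume: {ex['resume']}\nJSON: {json.dumps(ex['standardized'])}")
    parts.append(f"Resume: {new_resume}\nJSON:")
    return "\n\n".join(parts)

def call_foundation_model(prompt: str) -> str:
    # Hypothetical stand-in for whichever hosted model endpoint you use.
    raise NotImplementedError("replace with your model endpoint of choice")

print(build_prompt("Ana Lima, ML engineer, PyTorch, 3 years at ExampleSoft."))
```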
WV: So in the past, most of the work probably went into labeling the data. And that was also the hardest part, because that drives the accuracy.
SS: Exactly.
WV: So in this particular case, with these foundation models, labeling is no longer needed?
SS: Essentially. I mean, yes and no. As always with these things there is a nuance. But a majority of what makes these large scale models remarkable is that they actually can be trained on a lot of unlabeled data. You actually go through what I call a pre-training phase, which is essentially – you collect data sets from, let’s say, the world wide web, like common crawl data or code data and various other data sets, Wikipedia, whatnot. And then actually, you don’t even label them, you kind of feed them as they are. But you have to, of course, go through a sanitization step in terms of making sure you cleanse the data of PII and other problematic content like hate speech and whatnot. Then you actually start training on a large number of hardware clusters, because training these models can take tens of millions of dollars. Finally, you get a model, and then you go through the next step, which is called inference.
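To sketch what that pre-training flow looks like end to end, here is a toy example in Python. The two-document corpus, the regex PII filter, and the next-word pairs are simplifications chosen for illustration, not a production pipeline; the key point is that the text itself supplies the training signal, so no human labeling is required.

```python
# A minimal sketch of the pre-training flow: gather raw text, sanitize it,
# then train on it without labels (the next token acts as the label).
import re

raw_documents = [
    "Transformers process sequences in parallel.",
    "Contact me at jane@example.com for details.",  # contains PII-like content
]

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def sanitize(doc: str) -> str:
    """Drop obvious PII before the text ever reaches training."""
    return EMAIL.sub("[REDACTED]", doc)

corpus = [sanitize(d) for d in raw_documents]

# Self-supervised objective: predict the next word from the previous ones,
# so the data labels itself.
for doc in corpus:
    words = doc.split()
    training_pairs = [(words[:i], words[i]) for i in range(1, len(words))]
    print(training_pairs[:2])
```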
WV: Let’s take object detection in video. That would be a smaller model than what we see now with the foundation models. What’s the cost of running a model like that? Because now, these models with hundreds of billions of parameters are very large.
SS: Yeah, that’s a great question, because there is so much talk already happening around training these models, but very little talk on the cost of running these models to make predictions, which is inference. It’s a signal that very few people are actually deploying them at runtime for actual production. But once they actually deploy in production, they will realize, “oh no”, these models are very, very expensive to run. And that is where a few important techniques actually really come into play. Once you build these large models, to run them in production you need to do a few things to make them affordable and economical to run at scale. I’ll hit some of them. One is what we call quantization. The other one is what I call distillation, which is that you have these large teacher models, and even though they are trained on hundreds of billions of parameters, they are distilled to a smaller, fine-grained model. I’m speaking in very abstract terms, but that is the essence of these techniques.
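For readers who want to see the shape of those two techniques, here is a minimal sketch in PyTorch with toy models invented for illustration: quantization converts weights to lower precision (int8 here) so inference is cheaper, and distillation trains a small student network to match a large teacher’s softened output distribution.

```python
# A minimal sketch of quantization and distillation with toy models.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))

# 1. Quantization: convert the teacher's Linear layers to int8 for cheaper inference.
quantized_teacher = torch.quantization.quantize_dynamic(
    teacher, {nn.Linear}, dtype=torch.qint8
)

# 2. Distillation: the student learns to mimic the teacher's softened predictions.
x = torch.randn(8, 32)
temperature = 2.0
with torch.no_grad():
    teacher_logits = teacher(x)
student_logits = student(x)
loss = F.kl_div(
    F.log_softmax(student_logits / temperature, dim=-1),
    F.softmax(teacher_logits / temperature, dim=-1),
    reduction="batchmean",
) * temperature ** 2
loss.backward()
print(loss.item())
```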
WV: So we do build… we do have custom hardware to help out with this. Normally this is all GPU-based, which are expensive, energy hungry beasts. Tell us what we can do with custom silicon that sort of makes it so much cheaper, both in terms of cost as well as, let’s say, your carbon footprint.
SS: When it comes to custom silicon, as mentioned, the cost is becoming a big issue in these foundation models, because they are very very expensive to train and very expensive, also, to run at scale. You can actually build a playground and test your chat bot at low scale and it may not be that big a deal. But once you start deploying at scale as part of your core business operation, these things add up.
In AWS, we did invest in our custom silicon, with Trainium for training and Inferentia for inference. And all these things are ways for us to actually understand the essence of which operators are making, or are involved in making, these prediction decisions, and optimizing them at the core silicon level and software stack level.
WV: If cost is also a reflection of energy used, because in essence that’s what you’re paying for, you can also see that, from a sustainability point of view, these chips are much more efficient than running on general purpose GPUs.
WV: So there’s a lot of public interest in this recently. And it feels like hype. Is this something where we can see that this is a real foundation for future application development?
SS: First of all, we are living in very exciting times with machine learning. I have probably said this every year now, but this year it is even more special, because these large language models and foundation models truly can enable so many use cases where people don’t have to staff separate teams to go build task specific models. The speed of ML model development will really actually increase. But you won’t get to that end state that you want in the coming years unless we actually make these models more accessible to everybody. This is what we did with SageMaker early on with machine learning, and that’s what we need to do with Bedrock and all its applications as well.
We do think that, while the hype cycle will subside, like with any technology, these are going to become a core part of every application in the coming years. And they will be done in a grounded way, and in a responsible fashion too, because there is a lot more stuff that people need to think through in a generative AI context. What kind of data did it learn from? What response does it generate? How truthful is it? This is the stuff we are excited to actually help our customers [with].
WV: So when you say that this is the most exciting time in machine learning – what are you going to say next year?