Conversations Live
The Future of AI
Season 14 Episode 7 | 56m 44s | Video has Closed Captions
As artificial intelligence continues to evolve and touch nearly every part of our lives, we hear from experts on what AI advances mean for the future.
Conversations Live is a local public television program presented by WPSU.
How to Watch Conversations Live
Conversations Live is available to stream on pbs.org and the free PBS App, available on iPhone, Apple TV, Android TV, Android smartphones, Amazon Fire TV, Amazon Fire Tablet, Roku, Samsung Smart TV, and Vizio.
Support for Conversations Live comes from the Gertrude J. Sant Endowment, the James H. Olave Family Endowment, and the Sidney and Helen S. Friedman Endowment.
And from viewers like you.
Thank you.
From the Dr. Keiko Miwa Ross WPSU Production Studio, this is Conversations Live.
Good evening.
I'm Susan Hichesky.
As artificial intelligence continues to evolve and touch more of our lives, there are growing concerns about its impact on everything from news and education to the arts and technology.
How does it work?
Is it reliable?
Is it going to make some jobs obsolete?
Joining us to talk about all of that and answer your questions are two experts on artificial intelligence.
Hadi Hosseini is an associate professor at Penn State's College of Information Sciences and Technology.
He's also associate director of the Center for Artificial Intelligence Foundations and Engineered Systems at Penn State.
His work includes applications that can help people use AI to make informed decisions.
Amulya Yadav is an associate professor in Penn State's College of Information Sciences and Technology.
He's also associate director of the Center for Socially Responsible Artificial Intelligence at Penn State.
His research includes developing AI solutions to address problems faced by marginalized communities.
You, too, can join in on the conversation.
Our toll-free number is 1-800-543-8242, or email us at connect@wpsu.org.
Thank you both for joining us.
Thank you for having us.
So we're going to start with a simple, basic question that is going to be difficult.
So when we're talking about artificial intelligence, or AI, what does that mean?
Well thank you for that question.
And you're right.
That is one of the most difficult questions that you want to start us off with.
Because you see, the field of AI started in the late 50s, early 60s.
And ever since then, we've seen 65 years of progress.
You know, AI scientists have invented ChatGPT.
We've invented a lot of other things.
But one thing that we've not been able to settle is a definition of artificial intelligence that satisfies all of us.
I teach an undergraduate course at Penn State every semester on AI.
This question comes up every semester: what's the definition of artificial intelligence?
And the stance that I take in that classroom is that I tell my students that it's less important for them to learn the definition of AI, and more important for them to learn what they can do with it.
Because, I argue, even if we were to learn a definition of artificial intelligence, there's a good chance it's going to be outdated in the next two years, right?
But all of that notwithstanding, let me try to offer a definition of AI that is as encompassing as possible for today's time.
So to me, very simply put, AI is any technology that has the potential, or promises, to automate many intelligence-based tasks that are usually considered the forte of human beings.
And therein lie the reasons people disagree about what is and what is not AI.
Because a lot of people, including myself, you know, between me and Hadi, I'm sure we will disagree about what we think is a task that requires intelligence.
And hence, if we automate that, it's going to be considered AI, versus some other task that one of us might feel does not require intelligence.
And therefore, if we were to automate that, it might not be considered artificial intelligence, right?
If I were to take a deeper step into this question, I would ask you: what do humans really do?
Right.
If we are trying to automate, what kinds of intelligence-based tasks do humans engage in?
Well, all of us think, we reason.
And then once our reasoning is complete, we make decisions, and then we act upon those decisions.
Now this is where we end up arriving at a definition that is very familiar to computer scientists, which is: artificial intelligence is the science of automating decision making across a whole variety of domains.
And importantly, in a way such that the ability to automate these decisions can improve over time as we get more information about the world that our agent, or we as human beings, are operating in.
So as we collect more data, our ability to make better, more optimal decisions should improve.
And that, I think, is a good, workable definition of artificial intelligence as of today's date.
Whether it will last, I don't know.
But, yeah.
Hadi, would you like to add to that?
Yes, absolutely.
I agree that we don't really have a definition of artificial intelligence that everybody agrees upon.
But I want to start with a quote from one of my favorite science fiction authors, Arthur C. Clarke, who says, and I may be paraphrasing, any sufficiently advanced technology is indistinguishable from magic.
The hope that I have today is to be able to demystify this definition of artificial intelligence.
As Amulya correctly mentioned, artificial intelligence is really the science and engineering of creating systems that can perform tasks that require intelligence.
These tasks could be reasoning in order to make decisions.
Could be learning from new concepts or learning from mistakes.
Could be perceiving your environment in order to be able to learn.
And in order to be able to reason, you need to perceive the environment.
And you also want to be able to communicate, through some natural language processing.
There are many, many different tasks that we do that require intelligence.
And artificial intelligence, essentially, is the field that takes tools that have been developed in disciplines from computer science, engineering, and statistics all the way to cognitive science, sociology, and economics, and puts all of these tools together to make good decisions, or intelligent decisions.
Now, I want to point out something here about the terminology of artificial intelligence, which was coined by one of the fathers of the field, John McCarthy.
When they coined this term, they were focusing on studying human intelligence.
However, this has changed over the years; we now think about artificial intelligence as a dichotomy of human-like behavior, or human-like intelligence, versus rationality, that is, very optimal and very rational decision making.
These two are not necessarily the same thing.
I'll give you an example.
All of you probably have a cell phone; on your cell phone, you can download a chess app and start playing chess with the software.
Right, and it is actually run by AI.
You can design a chess-playing software that can actually beat every human being, even the best chess players in the world.
So this is what I'm calling the very rational behavior.
They think and act very rationally.
But at the same time, I can design a software that can mimic the behavior of the best chess player in the world, or mimic the behavior of a five-year-old who just started learning chess.
So these are all under the umbrella of artificial intelligence.
That's fascinating.
And I keep thinking, you know, there are some chess players that have known habits.
So if you programmed it to be like a particular chess player, that would be interesting.
So for a typical person who does not work in an AI-related field, it can mean it's easier to get information on the internet; that's, you know, what we mostly do with it these days.
But does it have broader implications for everyday life?
Hadi, I'd like to start with you on this one.
Yeah, absolutely.
It does.
In fact, I want to point out that we started this whole conversation, and AI started surfacing to the public, around 2022, when ChatGPT was released publicly and people started using it.
And all these conversations started moving much faster and more rapidly than before.
But AI systems have been around for decades.
In fact, we are very good at creating AI systems and software that are able to solve a single or very limited number of problems.
Going back to the example of the phone: the phone in your hand contains thousands of small pieces of software, and many of them run some sort of AI algorithm.
They run algorithmic techniques to give you the right directions.
For instance, if you're going from A to B, it can calculate the best route, the shortest path, for you.
Right?
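To make that concrete, here is a minimal sketch of the kind of shortest-path algorithm (Dijkstra's) that sits behind turn-by-turn directions. The toy road network, place names, and distances are invented for illustration:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: find the minimum-distance route in a weighted graph."""
    queue = [(0, start, [start])]  # (distance so far, node, path taken)
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_len in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (dist + edge_len, neighbor, path + [neighbor]))
    return float("inf"), []

# Invented toy road network: intersection -> [(neighbor, miles)]
roads = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)]}
print(shortest_path(roads, "A", "D"))  # -> (4, ['A', 'B', 'C', 'D'])
```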
They can also notify you of different types of social media buzz that happens, right?
They can efficiently compute the best rendering approach to display the video that you're watching on your phone.
Right?
There's also, like, a lot of personalized recommendation systems, right?
When you go into your streaming services, it recommends what types of movies you might like.
Right?
If you do online shopping, as most of you probably do, there's a lot of product recommendation.
All of these underneath have an AI engine that is working.
Now, if you zoom out and look at industrial sectors, AI has already impacted many different sectors.
In healthcare, it helps us with diagnosis; in finance, it helps us with fraud detection; in agriculture, which is a big thing in Pennsylvania, there's a lot of smart farming technology; and in education, we now have personalized or customized tutoring systems.
Many, many aspects of life are already impacted by AI.
So it looks like we have an email, if you would like to answer this one.
Sue writes: Could you please explain that AI is more than generative AI?
So much of what I see is a Chicken Little approach warning that the sky is falling and AI is about to take over the world, but it isn't new, and it has been used in so many good functions that most people don't see or realize.
Absolutely.
And Sue is very right.
Thank you for that question.
So, you know, as Hadi pointed out in his answer, AI is really this umbrella term for a lot of technologies across a lot of disciplines, right?
And out of that huge superset of technologies that exist, one relatively large subset happens to be called machine learning, right?
That is the subfield of AI in which we take in a lot of data.
The idea is you want to solve a task, and for the most part, there's not a very good understanding of how to solve that task.
Now, you could consult a domain expert.
So, for example, let's say you're trying to detect cancer from, let's say, mammogram images, right?
And for the most part, you could consult an oncologist, who you could sit with and ask to help you understand what inside these mammograms corresponds to the presence of a cancerous tumor, and what inside these images corresponds to a benign tumor.
Right?
But it's quite likely that even a well-trained oncologist would not be able to come up with an exhaustive list of rules or cancerous patterns; the number of rules that they would have to enumerate would just be too long, right?
So what machine learning does instead is it says: well, why don't we give the computer a lot of data about existing mammograms that have been collected from, let's say, 100,000 patients over the last ten years?
And, computer, why don't you go figure out for yourself what is inside these images?
Oh, and by the way, we are also going to tell you which mammogram corresponds to an actual cancer patient and which does not, the positive and the negative cases.
We will tell you that; this is what is called supervision in machine learning terminology.
And then basically we are going to tell the computer to go figure out for itself what patterns in these images are common amongst the patients who had cancer, versus what kinds of patterns are common amongst the patients who did not have cancer.
This is machine learning.
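To make the idea of supervision concrete, here is a minimal sketch in Python using scikit-learn. The two-number "image features" and the labels are invented toy data, not real mammogram features:

```python
from sklearn.linear_model import LogisticRegression

# Invented toy data: each row stands in for a patient's image features,
# and each label is the "supervision": cancer (1) or not (0).
X_train = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.9], [0.2, 0.1], [0.1, 0.3], [0.3, 0.2]]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)  # the computer finds the patterns by itself

# Ask it about a new, unseen patient.
print(model.predict([[0.85, 0.7]]))  # -> [1], i.e., matches the cancerous pattern
```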
And within the sphere of machine learning, there is a relatively small subset that is called generative AI.
Within that small bucket, that subfield of machine learning, you have things like ChatGPT, things like DALL-E, et cetera.
So Sue was very right when she says that AI is much, much bigger than generative AI.
And she's very right when she says that it can be used to do so much good in the world.
For example, in the lab that I run at Penn State, what we try to do is see how AI can be used as a force for good.
And specifically, we try to see whether we can solve critical problems faced by marginalized, low-resource communities in America and around the world, by collaborating with nonprofit organizations that are working with these low-resource communities on the ground.
So, for instance, to give you a couple of examples: we've developed and field-tested AI algorithms that have been proven to raise awareness about HIV prevention amongst homeless youth in Los Angeles.
We've developed and field-tested algorithms which have been proven to accurately triage the medical severity of conditions faced by pregnant women in Kenya.
We've worked with nonprofits in the Philippines who are working with children who have been victims of online sexual abuse and exploitation, and we've developed some apps for their case managers so that they can do their work efficiently.
And the list goes on and on.
So really, AI can be used to do a lot of good.
That's not to say that generative AI cannot be used for good.
It is being used for a lot of good.
But to answer the question very literally: certainly generative AI is one very important, very popular subfield of the overall umbrella that we refer to as AI, and certainly generative AI and other fields of AI can be used for a lot of good.
Hadi, do you have something to add to that?
Yeah.
Yeah.
Very briefly.
I think Amulya basically covered all of that.
I really appreciate that question; I think it's a very good question to ask.
AI, as I mentioned, has been around for a long time.
Generative AI is a type of machine learning that has truly been fascinating, right?
Mainly because, even if you look at its architecture, if you look at how it works, it's actually quite simple.
But what it's been able to achieve is very impressive.
And this is something that we talked about before.
We talk about how it has some sort of emergent capabilities.
We have been feeding it a lot of data and a lot of compute; it actually takes billions and billions of data points.
These structures are massive.
They're very large.
We are giving lots of examples of things from around the world to these generative AI models, and then we are asking them to do tasks that are not necessarily the tasks they have been trained to do, right?
So there is a lot of fascination about generative AI these days, and rightfully so, but AI by itself is a much, much bigger area of research.
In fact, I want to give this example, because I use it in my class when I teach: imagine an alien is coming down to Earth and wants to make soup.
Of course, one way is to give a cookbook to the alien, with rules and structures for making different types of soup.
That's one way of solving your problem.
And this is what we have known in the community as symbolic AI.
But then you can also think about it as some sort of machine learning, or generative AI.
Now, in this case, instead of giving a cookbook, you give different types of soups to the alien to try.
Right?
It could be different variations of French onion, tomato soup and so on and so forth.
This alien can taste them, smell them, feel the texture, and try to come up with their own definition of what a soup is, and what the different features and attributes of a soup are.
The most fascinating part of this is that they may be able to make connections between different flavors that maybe we haven't made ourselves.
So, for instance, they can realize that, oh, this savory flavor can go well with a little bit of a sugar spike.
Right.
And then that allows them to create a new soup that maybe nobody has ever tried before.
Call it Martian soup.
So just to sum up here, I want to say that AI is a very big field.
Generative AI is fascinating.
It has its drawbacks.
But it's a very fascinating area.
Well, thank you so much.
And we have our first, caller of the evening.
So this is Janice from Pleasant Gap.
Hi, Janice.
Hello.
How are you?
I'm doing good.
What is your question for us?
So we know about all these problems when it comes to AI, risks such as AI hallucinations and making things up.
And then there are environmental problems from powering the servers.
So is the benefit worth it?
And what's the benefit that AI gives us that makes it worth the potential risks that we're facing?
Hadi, do you want to start this one?
Sure.
Very good questions.
Both of these concepts are very interesting.
We are aware that AI makes a lot of mistakes, especially generative AI.
In fact, I share the same concern, because a lot of the hallucinations or mistakes that the AI makes are very plausible hallucinations.
And this is actually what makes it a little bit dangerous, because these systems are very fluent.
They can produce full language, they can respond to you, they can hold a conversation.
And usually the mistakes that they make are very plausible; they're things that you think could be true, right?
And that is actually one of the dangers of AI.
But a lot of AI researchers are working on this and trying to fix it.
Obviously, the fixes that we have had so far are not deep enough.
And that is just because the structure, the architecture of what we call neural networks, is very complex.
In fact, the information within these structures is very latent.
You cannot just point to part of this information and say, hey, this is the part that I need to fix in order for the system to work really well.
I want to also address your second question about environmental issues.
I absolutely agree that this is one of the main concerns.
The energy that these massive models use is enormous; there have been lots of reports about the CO2 emissions from training these models.
There are ways that we can think about alleviating these types of problems.
For instance, we can think about how to use cleaner energy, like solar energy or nuclear energy.
On the technical side, there's also been a very recent move toward understanding that maybe we don't need these massive models for solving every problem.
So maybe we are all overusing these AI models a little bit.
It kind of feels like using a jackhammer to hang a painting on the wall.
Right?
And I think many of the companies have already started understanding that they can achieve the same kind of output using something called distilled models, smaller models with fewer parameters that are more efficient to train.
So I'm ending this on a hopeful note.
Yeah.
I am hopeful that we can come up with better energy models to take care of this.
If you're just joining us, I'm Susan Hichesky, and this is WPSU's Conversations Live: The Future of AI.
We're talking with two AI experts and we'd love to hear from you.
Our toll free number is 1-800-543-8242.
Or email us at connect@wpsu.org.
So, speaking of mistakes or hallucinations: I did a Google search for spring weather in Pennsylvania.
And it comes up with this AI overview and a summary in different categories.
But at the bottom it says AI responses may include mistakes.
So Amulya, how do people know if the information they're getting is reliable?
Well, the short answer is, we don't know that it's really reliable.
And in fact, a lot of the output that comes from these models is truly random.
Right?
Effectively, when you send in a query to such a model, there is a sophisticated set of coins that get flipped inside, and based on the outcome of those coins, you get a response.
It is best illustrated by this: if you were to ask the same question to ChatGPT, or even to Google's AI, two times, you will get different versions of the answer.
They may correlate somewhat, or they may not.
But by and large, every single time you pass a query to a model, the answer is randomly generated.
And therefore, by and large, there's no guarantee that the answer that you get is going to be reliable.
Right?
The ability of these models to answer questions accurately and with nuance is improving, and it can be expected to improve in the days to come.
But at this point in time, there's no guarantee, so we should be keeping our wits about us.
So if you, for example, want to decide what clothes to wear, should you be carrying a raincoat or should you be carrying an umbrella in Pennsylvania in spring?
Certainly, by all means, use Google AI to find out what the weather in Pennsylvania looks like in the spring.
But please, at the same time, look outside.
Or at least corroborate whatever Google is telling you with some other, more reliable sources, because there's no guarantee about the answers.
As Hadi was mentioning, these models can hallucinate; they've been known to hallucinate.
And there are ways in which people are trying to fix these hallucinations, but we are not there yet.
So we should always be very careful about taking the outputs of these models at face value.
And certainly we should not be making any decisions that have consequential implications in our life based on the outputs of these models.
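A minimal sketch of what those "coin flips" look like in code: the model assigns probabilities to candidate next words and samples one at random, so the same query can come back differently each time. The probability table here is invented:

```python
import random

# Invented probabilities a model might assign to the next word
# after the prompt "Spring weather in Pennsylvania is".
next_word_probs = {"rainy": 0.45, "mild": 0.30, "unpredictable": 0.20, "dry": 0.05}

def sample_next_word():
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    return random.choices(words, weights=weights, k=1)[0]  # the "coin flip"

# The same query, run three times, can give three different answers.
for _ in range(3):
    print("Spring weather in Pennsylvania is", sample_next_word())
```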
Yeah.
I like that: look outside.
Take a look.
Hadi, can AI learn from its mistakes, so it keeps getting better?
Yeah, absolutely.
I mean, I think learning from mistakes is actually the foundation of modern AI.
Whether you're talking about generative AI or machine learning in general, these systems are developed explicitly to learn from mistakes and improve over time by using the error signals that they receive.
I think Amulya already talked about different types of machine learning.
You can have a task where you're showing pictures of cats to an AI and labeling them as cats.
And then the machine tries to identify these cats, right?
To identify attributes so that in the future, based on this training set of images, which could be very large, it will be able to identify the next one.
So this is what they do, learning from mistakes: making sure that every time the signal comes in, they can fix themselves.
Self-supervised learning is another version of this, which, again, technologies like ChatGPT are using; in this case, they're actually creating their own signals.
You can think about software systems that play a game against themselves.
Imagine they're training against another chess player, I'm using chess again, against other AI systems or against themselves, thousands or millions of times, and learning from the mistakes they're making.
And this is basically one of the foundations of the AI systems that you're using right now.
It's very similar to what we call reinforcement learning.
And this is essentially how children learn, right?
Children learn how to avoid risk based on some sort of reward and punishment, so that if they touch the stove and get burned, that's a no-no, a negative reward.
And if they eat a candy and get a sugar spike, that's very good.
So that's a yes.
So all of these AI systems are learning from their mistakes.
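Here is a stripped-down, hypothetical sketch of that reward-and-punishment loop in Python. The two actions and their rewards mirror the stove-and-candy example and are purely illustrative:

```python
import random

# Invented actions and rewards mirroring the stove-and-candy example.
rewards = {"touch_stove": -1.0, "eat_candy": +1.0}
value = {"touch_stove": 0.0, "eat_candy": 0.0}  # the learner's estimates
learning_rate = 0.5

for _ in range(20):
    action = random.choice(list(rewards))  # explore by trial and error
    reward = rewards[action]               # the world sends back a signal
    # Nudge the estimate toward what was actually experienced:
    value[action] += learning_rate * (reward - value[action])

print(value)  # eat_candy drifts toward +1.0, touch_stove toward -1.0
print("learned preference:", max(value, key=value.get))  # -> eat_candy
```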
One thing I wanted to add to the previous question that you asked, just to add to what we said about fact-checking: we have to think about these AI systems as very glorified search engines.
At the end of the day, what they're doing is really guessing, based on the sequence of input, what the next word is.
What's the most likely thing that can happen next?
Right.
So they're really guessing.
They don't know.
They don't fact-check anything.
So it's really up to us, whenever we work with these tools, to fact-check and make sure that the information they're providing to us is actually correct or credible.
Yeah, we have to fact-check even the humans.
I'd just like to add to that, if that's okay.
I completely agree with what Hadi said.
Just to contextualize the ability of a system to learn from its prior mistakes, or from feedback, as you call it: if anyone's used ChatGPT for a while, every once in a while it gives you two different versions of a response and asks, which one do you prefer?
And then you pick one, and it says, okay.
And that is one very explicit way in which ChatGPT improves itself and learns which responses are more in line with what the human being, the user, is expecting of it.
This is one kind of the reinforcement learning that Hadi talked about, which is called reinforcement learning from human feedback.
And so, as Hadi rightly mentioned, the ability of AI systems to learn from mistakes or to learn from feedback, be it human feedback or non-human feedback, is built into a lot of these systems.
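A minimal sketch of that "which one do you prefer?" signal, reduced to its simplest form: the chosen response's score is nudged up and the rejected one's down. This is a crude stand-in for the reward model used in real RLHF; the names and numbers are invented:

```python
# Two candidate responses the system might show a user, with learned scores.
scores = {"response_A": 0.0, "response_B": 0.0}
learning_rate = 0.1

# Pretend the human clicked "I prefer A" in five separate comparisons.
human_choices = [("response_A", "response_B")] * 5

for preferred, rejected in human_choices:
    scores[preferred] += learning_rate  # reward the chosen response
    scores[rejected] -= learning_rate   # penalize the rejected one

print(scores)  # future outputs are steered toward the higher-scoring style
```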
We have another phone call.
So, Odell from Pleasant Gap.
Hi, Odell.
Hi.
How are you?
I'm good.
What is your question?
Thank you.
I'm wondering about what I've heard called model autophagy disorder.
I've heard some people explaining that training future AI models on the output of current or previous AI models will, in time, make the quality and the diversity of the outputs go down.
But I've heard other people say that that's not a problem at all.
And so I'm just curious as to what the expert consensus is.
Do you want to take this?
I don't think I understood the phrase that they used; can we ask them to repeat it?
Yeah.
Did you say model...?
Model autophagy disorder is what I've read online.
Okay.
I haven't heard that phrase myself.
But I will try to answer what I think the question is referring to, if it is the case.
So what I heard from this question was: if you train future models on outputs that have been derived from models generated historically, I guess there are multiple ways in which you can think about it, right?
One particular way is that, you know, if your older models were what we call biased, right, they generate answers which are biased in some way, and you use those biased outputs to train future models, then you can expect those biases to propagate onto future systems.
Right.
So in that way, to the last part of the question that was asked: yes, you can expect a certain sort of homogenization of responses.
There would be a lack of diversity, right?
There wouldn't be enough variety.
Basically, if the old system was biased, you can expect the new system to be biased as well.
And that can certainly happen.
Because at the end of the day, a lot of what happens inside AI models is dependent on the data.
Right?
And in this case, the data for the future models is going to be the biased outputs from the previous model.
And therefore, if that data is biased, you can certainly expect the outputs of the future models to be biased as well.
Yeah.
Does that answer your question, Odell?
Do you have a follow up?
Yeah, I suppose I just wonder then, is there a way to prevent that, or is that a serious problem in AI going forward?
Well, to be honest, I am not sure I know of people who are trying that per se.
I mean, I may be wrong.
I would need to look at the source that you were reading, but I do not know what kinds of future systems are being referred to when you say that these systems are going to be trained on the outputs of models that have been trained in the past.
If that were the case, there are a lot of techniques that people are working on in a field called FAccT ML, fairness, accountability, and transparency in machine learning, in which there's a whole host of techniques that people are trying to come up with to ensure that even if your data, the inputs to your system, are biased because they're derived from prior systems, those biases do not propagate to the outputs of your model.
So I do know that that is certainly possible.
But maybe you're not talking about biases; maybe you're talking about lack of diversity in general.
There are methods with which you can forcibly enhance the diversity of the outputs that you're getting.
For example, in the context of ChatGPT, there's this temperature parameter that you can play around with.
By the way, this is not something that only people at OpenAI can do.
This is something that you and I can do, right?
You can play around with that temperature parameter.
And as you play around with it, you can start getting more diverse responses, a.k.a. more creative responses.
And if you dial back that parameter, you start getting more homogeneous, more fixed responses.
So there are certainly ways in which you can either increase or decrease the diversity of the responses, the answers or outputs of machine learning models, or AI models more generally, to a level that you are comfortable with.
So there are certainly techniques that can be used to address these situations.
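A minimal sketch of what a temperature parameter does: it reshapes the model's next-word probabilities, with low values concentrating on the most likely word and high values spreading probability out for more diverse outputs. The probabilities here are invented:

```python
import math

# Invented next-word probabilities from a model.
base_probs = {"rainy": 0.5, "mild": 0.3, "unpredictable": 0.15, "dry": 0.05}

def apply_temperature(probs, temperature):
    # Rescale each log-probability by 1/temperature, then renormalize.
    weights = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
    total = sum(weights.values())
    return {w: round(v / total, 3) for w, v in weights.items()}

print(apply_temperature(base_probs, 0.2))  # sharp: 'rainy' dominates almost entirely
print(apply_temperature(base_probs, 2.0))  # flat: probability spreads out, more diverse
```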
So if you're just joining us, I'm Susan Hichesky, and this is WPSU's Conversations Live: The Future of AI.
We're talking with two AI experts.
And we'd love to hear from you.
Our toll free number is 1-800-543-8242.
Or you can email us at connect@wpsu.org.
And it looks like we have an anonymous email.
So this email says: How will AI be used by medical professionals?
Will doctors be able to submit things like blood analysis data taken over time periods in the patient's history to determine trends that lead to cures or abatement of symptoms?
Or will medication combinations be screened for possible reactions given the patient's health, and for possible serious side effects because of certain combinations?
So, Hadi, do you want to take this one?
Yes.
Of course.
I would like to also add a little bit for the previous caller.
I'm not sure if they were talking about model collapse, because that is something that is happening in these very large models.
If they're talking about model collapse: these generative AI models usually have a huge network, what we call neural networks, with many, many layers.
The problem is that the way they distill and keep information within these layers is not very clear the way we have them right now; they act more like a black box.
We don't know exactly what is going on.
So using this output as an input to another system, or even as an input to itself in a feedback loop, sometimes results in something that we call model collapse.
There is a lot of research going on, trying to figure out how to prevent these types of collapses.
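A toy illustration of that feedback loop: each "generation" below is trained only on samples from the previous one, and rare items tend to vanish and never return, a miniature version of model collapse. The word counts are invented:

```python
import random
from collections import Counter

# Invented "real" data: mostly common words plus a few rare ones.
data = ["the"] * 70 + ["cat"] * 25 + ["nebula"] * 3 + ["quokka"] * 2

for generation in range(8):
    counts = Counter(data)
    print(f"generation {generation}: {len(counts)} distinct words -> {dict(counts)}")
    # Train the "next model" only on the current model's own outputs:
    # resample a new dataset from the old one's empirical distribution.
    words, weights = zip(*counts.items())
    data = random.choices(words, weights=weights, k=50)
# Rare words tend to disappear within a few generations and, once gone, never return.
```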
Now, going back to the question that you asked about the medical field: AI is currently being used in the medical field in a variety of ways.
It's helping doctors make decisions; as I think Amulya mentioned at the beginning of this program, a very good example is detecting abnormalities in X-ray images, for instance.
Doctors are using it right now.
It's enhancing their capabilities, not only in diagnosis, but also in finding the best drugs or medications that they can give to their patients.
There is also this field of scientific drug discovery, which is a very, very new field.
It is about how we can use AI models to create new drugs that maybe we haven't had before.
This is a very, very fresh field.
A lot of people are working on it, and there's a lot of money pouring into this field.
But the idea here is that instead of having very large labs and many, many people trying many different things, which is very expensive and time-consuming, we may be able to actually train these AI systems to be able to discover new drugs for specific purposes.
There is also the adjacent field of customized medication.
That's really about not giving the exact same drug to all these different types of patients, but trying to find out what would be the best combination for your case.
Very customized to your health, to the prognosis of whatever condition you're dealing with, to your family history, and so on and so forth.
I think we're all looking forward to having some personalized health care these days.
We do have another caller.
We have Gerald from State College.
Hi there.
Hi there.
I'm here.
How can we help you today?
I have a question that I don't think has been touched on yet, about what these two experts think about warnings from high-level AI developers about how AI could develop independent thinking, as it were, so as to put its preferences or goals before those of humans.
And perhaps it would come to control human beings and pose dangers, potentially, in the long run, even of human destruction.
So curious about that.
Yeah, great question, Joel, and I'm sorry if I'm mispronouncing your name.
Gerald.
Gerald, okay.
Yeah.
So I think what you're referring to, once again, is the possibility of what we refer to as artificial general intelligence emerging, right, as Hadi has mentioned.
And as I've touched upon, there are some anecdotal signs of some emergent AGI, as we call it.
That being said, there is a lot of work, and it's a very hot topic in AI research these days, on what we refer to as alignment research.
Right.
Trying to align generative AI models with our values, our beliefs, our desires, and our goals about what these systems should do.
Right.
And this alignment can happen along multiple dimensions.
So, earlier on, I talked about ChatGPT asking us which response we prefer.
Right.
That's a very simple form of alignment, where basically ChatGPT is just trying to make sure that the language it is generating is well aligned with what we expect the answer to be.
Right.
So that's one very simple form of alignment.
But then again, there could be other forms of alignment.
So you could have alignment along the dimension of safety.
You would want to make sure that ChatGPT's outputs are safe, that it does not generate outputs that could be unsafe for human beings, for the people who are reading those responses.
You could also encode alignment at a higher level: not just trying to align ChatGPT's output along safety, but along many other dimensions, all the dimensions that you're talking about, so as to ensure that the AI is well aligned with what we want AI to do for us, so that it does not end up developing the nefarious possibilities that we are worried about, that it does not start controlling us, or even have that thought of controlling human beings.
By the way, we don't have any evidence that it is anywhere near that capability.
So we don't need to be worried; I am not worried at this moment.
But then again, all of alignment research, which is a very hot topic in AI, is specifically focused on trying to get generative AI models to align themselves with the values that we expect from these AI models.
There's been some recent work, and hopefully in the days to come there will be more work that will ensure that we don't end up reaching that outcome that you talked about, Gerald.
Hadi, I would like to hear your comments too.
Absolutely.
I'm glad you brought up the discussion about alignment.
I think alignment is actually one of those very sophisticated research questions that go beyond just computer science or AI research.
The idea of value alignment is very interesting.
There are a lot of people working on how we align the values of AI with the values of humans, or communities, or societies.
The difficult question here is really about whose values, right?
So you can ask: if you're aligning AI, whose values do we have to represent?
Do we have one specific set of values that we all agree on as human beings, or is it going to be a very pluralistic way of looking at values and value alignment?
I also want to mention that the question about AGI, or artificial general intelligence, is a question that requires a variety of steps for the AI to be developed.
Right now, we are very good at creating AI systems that are able to process language very well.
They are able to communicate very well, and they're able to help us with image recognition.
So these are very specific tasks.
They have started to show a little bit of reasoning capabilities.
Right.
But all of these capabilities that we are talking about are more cognitive capabilities.
In order for an AI to become really general, it requires not only all of the cognitive abilities that we have as humans, but also some sort of physical abilities.
These are abilities to interact with the physical world.
And that's another research area, in robotics, that people are looking at; it turns out, in fact, that manipulating objects in the physical world is much, much more difficult than working with language.
And for a very simple reason: language is already an abstraction of the world.
So we were able to throw a lot of data and compute at these huge, huge models and get the language right.
Right.
But in order to get things like physical manipulation in the world, or context-aware reasoning, we need something more than what we have right now.
All I can think of is, what is it, 2001: A Space Odyssey? Is Dave okay?
Let's just say it gets a little interesting.
So I do have a question, and I know you've talked about it, you've covered it, but I'm not sure our audience or everybody understands: what is the difference between AI and generative AI?
Like, what would be a simple explanation for people who are just brand new to this?
Sure.
So I think, you know, Hadi gave a good example of soup and Martian soup.
Yeah.
I mean, how I would explain it is, I'll give another analogy.
Right.
So think about how young children, in their early stages of development, learn about colors, right?
When they're first taught about colors, they're told this is red and this is blue.
And in a child's early development, all they know is that this color is different from that color, right?
When they're very, very young, that's all they know.
They know how to distinguish the red colors from the blue colors, but they don't really know how.
What are the properties of these colors that make them red?
And what are the properties of this color that make it blue?
As you get to a higher level of understanding, or a deeper level of understanding, you start learning the properties of these colors that make them red and blue, and you realize what happens when you mix these colors.
It is then that you realize that, you know, red is not just one color.
You can have many shades of red that can be generated.
Right.
So that is one way of thinking about generative AI versus AI, similar to how we were thinking about the example of Martian soup as a new soup, different from the existing soups that humanity has created.
Now, in even simpler terms: let's say that you have images of cats and elephants, two different kinds of images, and you've been given this dataset, and I tell you that I want you to build an AI model that can distinguish the cats from the elephants.
Right.
And sure enough, there are many different AI models that you can build that can very accurately figure out that this is a cat image and this is an elephant image; that is what we refer to as discriminative AI, because what it is really learning is how to discriminate images of cats from images of elephants.
It need not develop a deeper understanding of what these cat images really look like, and what these elephant images really look like.
Right.
But I could give you a different problem, which is: forget about classification.
Forget about distinguishing cats and elephants.
I just want you to really understand the properties of these cat images and the properties of these elephant images: what are the features that go into making these cat images images of cats, and what are the properties that go into making these elephant images images of elephants?
And once you develop this deeper understanding, in mathematical terminology, that's called learning a probability distribution.
But we won't go there.
Once you have this deeper understanding, it turns out that you can leverage it to generate new images of cats and elephants that are unlike any existing images of cats and elephants.
They still look like cats and elephants.
So you won't be able to distinguish that this is an artificially generated cat, or that this is an artificially generated elephant.
You won't be able to tell that, or I won't be able to tell that.
But it is still a brand new artificial image that was not a part of our original dataset.
And that is why it gets the name generative AI.
Because once you have this deeper understanding, you can use it to generate new data.
And you can do it with images; you can do it with text.
GPT, for example, is one example of how you generate new text in response to existing text.
If you give it a prefix of a sentence, it generates the next word very accurately.
And so it has many, many applications.
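To put the discriminative-versus-generative contrast in code, here is a minimal sketch using one invented numeric "feature" standing in for cat and elephant images. The discriminative model learns only a boundary between the classes; the generative one learns each class's distribution and can sample brand-new examples from it:

```python
import random
import statistics

# Invented 1-D "feature": cat images cluster near 2.0, elephant images near 8.0.
cats = [random.gauss(2.0, 0.5) for _ in range(100)]
elephants = [random.gauss(8.0, 0.5) for _ in range(100)]

# Discriminative AI: learn just a boundary that separates the two classes.
boundary = (statistics.mean(cats) + statistics.mean(elephants)) / 2

def classify(x):
    return "cat" if x < boundary else "elephant"

print(classify(1.8), classify(7.9))  # -> cat elephant

# Generative AI: learn each class's distribution (here, a mean and a spread)...
cat_mu, cat_sigma = statistics.mean(cats), statistics.stdev(cats)
# ...then sample brand-new "cats" that were never in the original dataset.
print([round(random.gauss(cat_mu, cat_sigma), 2) for _ in range(3)])
```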
Before GPT and image-based models got popular, the most talked-about application of generative AI was in healthcare applications, where a lot of patient data is privacy-protected by HIPAA.
And so if you want to develop AI models for healthcare, you can use generative AI to create artificial patient data.
Right.
That is very similar to your original data.
But you're not violating any privacy law, because that data is artificial; it does not correspond to any patient.
And that, I think, hopefully gives a good understanding of what the term generative AI means.
Looks like you could train with that.
So, we have an email question.
Mary writes: What has been the most extraordinary accomplishment of AI to date?
Guessing it's not playing chess.
Yeah.
So that's, I think, going to be a very difficult question to answer.
There have been a lot of accomplishments of AI in the past several decades, of course.
One of the most impressive ones, I think, is the one that we have been talking about all night: generative AI.
And I want to mention that we are very fascinated by generative AI not because of the science that goes into it.
In fact, the science that goes into generative AI is not that complicated.
It has a very well-known architecture, called transformers.
And we know exactly how to build it.
And what it takes is a lot of data and a lot of compute.
Right.
So with the data we are talking about, and a lot of parameters, we are talking about trillions of parameters being trained.
The example that Amulya gave about language: we have trained this generative AI to create new language based on all the language that has been out there, all the books, the whole internet, everything that has been out there.
They have scraped it and tried to create a model based on that.
So it is not very surprising that if you ask this model to give you a summary of Benjamin Franklin's life and accomplishments, it's able to create that for you.
That's not very surprising.
The most surprising part, and the most impressive part, is that it is good at things that we haven't really trained it for, and that's what we are calling common-sense reasoning.
For instance, you can ask questions like, can fish run? Right?
That's a question that this dataset has not had; these models have never seen it.
But they're able to answer it somehow.
We can ask questions like, can Amulya be taller than himself?
Right?
I mean, this question seems very common sense to us.
And we have never trained these models on these types of questions, but they're able to actually show this level of reasoning.
Now, does this mean they're able to do a deeper level of reasoning?
We don't know.
That's the billion-dollar question.
A lot of people are working on it, to understand whether these models can generalize reasoning and can actually become better at reasoning and come up with new things.
Right?
Thinking about inventiveness, inventing new things.
So this is one of the reasons we are very excited about this technology.
Yeah, I did notice, when I was writing something, it had given me an answer that was just completely wrong, and I said, no, you're focusing on the wrong thing.
I did not give it details.
And it really was like, oh, I think you meant this.
And I was so shocked that it got it right.
Exactly.
From my very vague response to it.
Yeah.
Even we, as AI researchers, are surprised by that as well.
Yeah.
The fact that it can do that.
Yeah, yeah.
It's quite amazing.
I would say, you know, I was very skeptical.
I was not a fan last year.
I was like, what?
No, you need to do this yourself.
But it's pretty amazing.
So I understand that your work includes finding ways AI can be used in positive ways, including improving people's health.
And so can you give me an example of that?
So, as I briefly mentioned in one of my earlier answers, we do a lot of work in health, but not just health.
But since the question is about health, let me answer that.
As I alluded to earlier, one of the examples that I can give is that we worked with a nonprofit in Kenya that was working on improving maternal health outcomes for women living in rural areas in Kenya.
The problem was, women in Kenya have concerns about their pregnancy status.
If they've given birth, they have concerns about the health of themselves and their newborn child.
And doctors are far away, right?
Going to see a doctor is a time-consuming journey.
You know, they've got to book a taxi, go to the nearest city, wait for a doctor, and then maybe realize it was all in vain because it was normal; there was nothing to be seen.
It's perfectly normal for them to be experiencing a headache at this stage of the pregnancy.
So what this nonprofit has done is develop an SMS-based helpline, a service with which women in Kenya, instead of going to see a doctor, can just send a free SMS message to this helpline, which gets read by a clinically trained human operator who can then decide whether the message requires immediate medical attention or is a question that they can answer themselves.
If it's a low risk question about diet, about nutrition, right?
Unfortunately, or fortunately, for this nonprofit, they were growing rapidly.
They were getting millions of SMS messages every single month, by the way.
The name of the nonprofit is Jacaranda Health; big shout-out to them.
They're wonderful, wonderful collaborators.
So they were growing so rapidly that they did not have enough helpdesk operators to be able to handle the scale of incoming messages.
So there was a mismatch between the demand for help and the supply of helpdesk operators who could provide this help.
And as a result of that, what they really wanted us to do was come up with an AI algorithm that could automatically read these SMS messages being sent in by pregnant women, and automatically detect the severity of the medical condition that the mother might be experiencing.
That way, women with more severe conditions, for example, if it's a question about severe abdominal discomfort, can be bumped to the top of the queue so that they can be answered first.
Whereas if it's a question about diet, you know, should I be eating papayas in my third trimester?
Or if it's a question about exercise, you know, is an hour of walking enough for me at this stage of my pregnancy?
These are still important questions.
But given that we are in a situation where we don't have enough help, they can be answered later.
And this AI system did it for them, and we field-tested it.
The system is operational as we speak.
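As a rough illustration of that kind of triage, and not Jacaranda Health's actual system, here is a minimal sketch that trains a text classifier on a few invented SMS messages and then ranks incoming messages by predicted urgency:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training messages with urgency labels (1 = needs a clinician now).
messages = [
    "severe abdominal pain and bleeding",      # urgent
    "sharp chest pain and trouble breathing",  # urgent
    "baby not moving since last night",        # urgent
    "is papaya safe in the third trimester",   # routine
    "how much walking is enough exercise",     # routine
    "what foods are rich in iron",             # routine
]
labels = [1, 1, 1, 0, 0, 0]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(messages), labels)

# Triage a new batch: answer the most urgent-looking messages first.
incoming = ["can i eat mangoes while pregnant", "heavy bleeding please help"]
urgency = model.predict_proba(vectorizer.transform(incoming))[:, 1]
for score, msg in sorted(zip(urgency, incoming), reverse=True):
    print(f"{score:.2f}  {msg}")
```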
Yeah.
Well, thank you; that is needed in that area of the world.
So our guests tonight have been Professors Hadi Hosseini and Amulya Yadav.
I'm Susan Hichesky.
Thank you for joining us on WPSU's Conversations Live: The Future of AI.
Rewatch this and previous episodes of Conversations Live, and more of your favorite WPSU programs, on the PBS App.
