Will AI Enhance or Hack Humanity?

There has been heated debate about AI and its future. Will this technology be good or bad for humanity? Scientists and the public alike are beginning to examine its ethics and its potential risks and threats.

When it comes to AI, we cannot overlook Fei-Fei Li, one of the pioneers of the field, former chief scientist of AI at Google Cloud and now a professor at Stanford.

Here are some of her views from a recent debate hosted by the Stanford Center for Ethics and Society, the Stanford Institute for Human-Centered Artificial Intelligence, and the Stanford Humanities Center.

The conversation first lays out where we are, then turns to some of the choices we have to make now, and closes with some advice for all the wonderful people in the hall.

Where are we headed?

Fei-Fei Li: I'm very thankful that people have opened up this really important question for us. When you said "the AI crisis," I was sitting there thinking: this is a field I have loved, felt passionate about, and researched for 20 years, and it began as just the scientific curiosity of a young scientist entering a PhD in AI.

What happened, that 20 years later it has become a crisis? It actually speaks to the evolution of AI. What got me where I am today, and got my colleagues at Stanford where we are today with Human-Centered AI, is that this is a transformative technology.

It's a nascent technology, still a budding science compared to physics, chemistry, and biology. But with the power of data and computing, and the diverse impact AI is making, it is, as you said, touching human lives and business in broad and deep ways. In responding to the kinds of questions and crises facing humanity, one of the proposed solutions Stanford is making an effort toward is this: can we reframe the education, the research, and the dialog around AI, and technology in general, in a human-centered way?

We're not necessarily going to find a solution today, but can we involve the humanists, the philosophers, the historians, the political scientists, the economists, the ethicists, the legal scholars, the neuroscientists, the psychologists, and many other disciplines in the study and development of AI in the next chapter, the next phase?

"Maybe I can try and formulate an equation to explain what's happening. And the equation is: B times C times D equals HH, which means biological knowledge multiplied by computing power, multiplied by data equals the ability to hack humans."

Is that specific concern what people who are thinking about AI should be focused on?

FL: Absolutely. Any technology humanity has created, starting with fire, is a double-edged sword. It can bring improvements to life, to work, and to society, but it can also bring perils, and AI has its perils.

You know, I wake up every day worried about the diversity and inclusion issues in AI. We worry about fairness, or the lack of fairness, privacy, the labor market. So absolutely we need to be concerned, and because of that, we need to expand the research, the development of policies, and the dialog around AI beyond just the code and the products, into these human and societal issues. So I absolutely agree with you that this is the moment to open the dialog and the research into those issues.

FL: Okay, can I be specific? First of all, the birth of AI came from AI scientists talking to biologists, specifically neuroscientists, right?

The birth of AI was very much inspired by what the brain does. Fast-forward 60 years, and today's AI is making great improvements in healthcare. A lot of data from our physiology and pathology is being collected, and machine learning is being used to help us.

What if brains can be hacked?

With all the issues of privacy, if you have a big battle between privacy and health, health is likely to win hands down. And the big danger is what happens when you can hack the brain and that can serve not just your healthcare provider, that can serve so many things for a crazy dictator.

FL: Humans are humans because we're—there's some part of us that is beyond the mammalian courtship, right? Is that part hackable?

FL: I do not have answers to the two dystopias. But what I want to keep saying is that this is precisely the moment we need to seek solutions. This is precisely why we believe the new chapter of AI needs to be written through cross-pollinating efforts from humanists and social scientists to business leaders, civil society, and governments, coming to the same table for that multilateral, cooperative conversation. I think you really bring out the urgency, the importance, and the scale of this potential crisis. But in the face of that, we need to act.

"The easiest people to manipulate are the people who believe in free will, because they think they cannot be manipulated." 

Although we don't know or understand what consciousness and the mind are, the bar for hacking humans is much lower.

FL: So I want to make two comments, and this is where my engineering side comes in. Personally speaking, we're making two very important assumptions in this part of the conversation.

One is that AI is so omnipotent that it has achieved a state beyond predicting anything physical; it's getting to the consciousness level, even to the ultimate level of capability, love. And I do want to make sure that we recognize that we're very, very, very far from that. This technology is still very nascent. Part of my concern about today's AI is the super-hyping of its capability. I'm not saying that's not a valid question, but I think part of this conversation is built on the assumption that this technology has become that powerful, and I don't even know how many decades we are from that.

The second, related assumption is that our conversation describes a world in which only that powerful AI exists, or in which only the small group of people who produced the powerful AI, and who intend to hack humans, exists. But in fact, our human society is so complex, and there are so many of us. Throughout its history, humanity has faced many technologies that, if left in the hands of a bad player alone, without any regulation, multinational collaboration, rules, laws, or moral codes, could have, maybe not hacked humans, but destroyed or hurt humans in massive ways. It has happened, but by and large, our society, in a historical view, is moving to a more civilized and controlled state.

So I think it's important to look at that greater society and bring other players and people into this dialog, so we don't talk as if there's only this omnipotent AI deciding it's going to hack everything. And that brings me to your topic: in addition to hacking humans at the level you're talking about, there are some very immediate concerns already: diversity, privacy, labor, legal changes, international geopolitics. And I think it's critical to tackle those now.

What can AI do today, and what are its benefits?

FL: So in human-centered AI, which is the overall theme, we believe that the next chapter of AI should be human-centered, and we believe in three major principles.

One principle is to invest in the next generation of AI technology that reflects more of the kind of human intelligence we would like. I was just thinking about your comment about our dependence on data and how policy and the governance of data should emerge in order to regulate and govern AI's impact. Well, we should be developing technology that can explain AI; we call it explainable AI, or AI interpretability studies. We should be focusing on technology that has a more nuanced understanding of human intelligence. We should be investing in the development of less data-dependent AI technology that takes into consideration intuition, knowledge, creativity, and other forms of human intelligence. That kind of human-intelligence-inspired AI is one of our principles.
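As a purely illustrative sketch of what "interpretability studies" can look like in practice (not something from the talk itself), one common technique is permutation importance: shuffle one feature at a time and see how much the model's accuracy drops. The dataset and model below are synthetic stand-ins:

```python
# Illustrative sketch of one simple interpretability technique:
# permutation importance with scikit-learn. Data and model are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for any tabular prediction task.
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```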

The second principle is to, again, welcome the multidisciplinary study of AI: cross-pollinating with economics, ethics, law, philosophy, history, cognitive science, and so on. There is so much more we need to understand in terms of the social, human, anthropological, and ethical impact, and we cannot possibly do this alone as technologists. Some of us shouldn't even be doing this; it's the ethicists and philosophers who should participate and work with us on these issues. So that's the second principle. And within this, we work with policymakers and convene dialogs among multilateral stakeholders.

Then the third, and last but not least: Nick, you said at the very beginning of this conversation that we need to promote the human-enhancing, collaborative, and augmentative aspects of this technology. You have a point; even there, it can become manipulative. But we need to start with that sense of alertness and understanding, while still promoting the benevolent application and design of this technology. These are the three principles that Stanford's Human-Centered AI Institute is based on. And I feel very proud that, within the few months since the birth of this institute, more than 200 faculty on this campus have become involved in this kind of research, dialog, study, and education, and that number is still growing.

How do we make decisions with biased systems?

FL: Yeah, that's an excellent question. I'm not going to have the answers personally, but you touch on a really important point, which is, first of all, that machine learning system bias is a real thing.

You know, like you said, it starts with data. It probably starts at the very moment we collect data, with the type of data we collect, and runs through the whole pipeline, all the way to the application.

But biases come in very complex ways. At Stanford, we have machine learning scientists studying technical solutions to bias, like, you know, de-biasing data or normalizing certain decision-making. But we also have humanists debating what bias is, what fairness is, when bias is good, and when bias is bad. So I think you just opened up a perfect topic for research, debate, and conversation. And I also want to point out a closely related example you've already used: a machine learning algorithm has the potential to actually expose bias, right?
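To make "measuring bias" concrete, here is a minimal sketch of one simple (and much-debated) fairness check, demographic parity: compare a model's rate of favorable predictions across groups. All the data and group labels below are invented for the example:

```python
# Illustrative sketch: demographic parity, one simple fairness metric.
# All data here is invented for the example.
import numpy as np

# Hypothetical model predictions (1 = favorable outcome) and group labels.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group       = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

# Positive-prediction rate per group.
rate_a = predictions[group == "a"].mean()
rate_b = predictions[group == "b"].mean()

# A gap of 0 means both groups receive the favorable outcome at the same
# rate. Whether this is the *right* notion of fairness is exactly the kind
# of question the humanists in the dialog debate.
print(f"group a rate: {rate_a:.2f}, group b rate: {rate_b:.2f}")
print(f"parity gap: {abs(rate_a - rate_b):.2f}")
```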

You know, one of my favorite studies was a paper from a couple of years ago analyzing Hollywood movies. It used a machine learning face-recognition algorithm, which is a very controversial technology these days, to show that Hollywood systematically gives more screen time to male actors than to female actors.

No human being could sit there and count all the frames of faces to determine whether there is gender bias; this is a perfect example of using machine learning to expose it.
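A minimal sketch of how such a screen-time analysis might work, assuming OpenCV's bundled Haar-cascade face detector and a hypothetical classify_gender model; this is not the actual method of the paper Li mentions:

```python
# Minimal sketch of a screen-time-by-gender pipeline, NOT the method of the
# study Li mentions. Face detection uses OpenCV's bundled Haar cascade;
# classify_gender is a hypothetical stand-in for a trained classifier.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_gender(face_image):
    """Hypothetical placeholder: a real study would use a trained model."""
    raise NotImplementedError

def screen_time_by_gender(video_path):
    counts = {"male": 0, "female": 0}
    video = cv2.VideoCapture(video_path)
    while True:
        ok, frame = video.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            label = classify_gender(frame[y:y + h, x:x + w])
            counts[label] += 1  # tally face-frames as a proxy for screen time
    video.release()
    return counts
```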

So in general, there's a rich set of issues we should study, and again: bring in the humanists, the ethicists, the legal scholars, the gender-studies experts.

FL: Well, we've opened the dialog between the humanists and the technologists, and I want to see more of that.

Edited by AI Analytics