Episode #5: David Gunkel

The above image was remixed from this NIU Newsroom article.

Dr. David Gunkel challenges us to think otherwise. In this podcast, we touch on a wide range of topics: the problem of other minds, how to teach students scalability, the use of battlefield robots, robot rights, what it means to be human today, moral agency, Kurzweil, Žižek, Wikileaks, and we wrap it up with his Lonely Island picks.

The questions he's raising right now are the questions we'll have to grapple with in the coming years, and because of him, we'll have a better framework to operate within.

David's Piece of Advice: "Shaking in your pants isn't a bad thing; it shows you know what's valuable and that you're worried about something. But you have to transform that into something actionable. Fear is a beginning, not an end point. You can always transform your fear into a positive action, and it's my hope everyone will be able to do that."

Quote of the Show: "When the robot invasion comes, and it's not whether or not it comes, it's here already, it's not going to look like how we've been told...It's going to be like the fall of Rome, where we'll invite the barbarians into our home and they'll slowly but surely take more and more of our occupations, and we'll wake up one day and say, 'My god, where did all the robots come from?'"

A week before the podcast episode, I put up a post on Facebook asking friends to submit questions for the show. The questions are below; the answers start around the 52-minute mark.

Facebook Q&A:

  • To what extent are ethics determined by our technological capabilities or limitations?
  • What's your opinion on machine learning with regard to AI and consumer privacy?
  • Does AI have the capacity to replace white-collar workers?
  • How do you prepare the next generation of students?
  • What do you think of the idea of creating fonts that simulate emotions?
  • AI is often depicted in pop culture as evil, but what are some of its positive implications?

Guru™ Show Notes (What's Mentioned in the Podcast, In Order):