Artificial Intelligence

A.I., artificial intelligence, is on the move again.  Deep learning.  Big strides.  It may change the world around you.

Gears. (chance.press/Flickr)


A.I., artificial intelligence, has had a big run in Hollywood.  The computer HAL in Kubrick’s “2001” was fiendishly smart.  And there have been plenty of smart robots and server farms beyond HAL.  Real-life A.I. has had a tougher launch over the decades.  But slowly, gradually, it has certainly crept into our lives.

Think of all the “smart” stuff around you.  Now an explosion in Big Data is driving new advances in “deep learning” by computers.  And there’s a new wave of excitement.

This hour, On Point:  Artificial intelligence, Big Data, and deep learning, lining up for a new era of A.I.

-Tom Ashbrook

Guests

Yann LeCun, professor of Computer Science, Neural Science, and Electrical and Computer Engineering at New York University.

Peter Norvig, director of research at Google Inc.

From Tom’s Reading List

New York Times “Using an artificial intelligence technique inspired by theories about how the brain recognizes patterns, technology companies are reporting startling gains in fields as diverse as computer vision, speech recognition and the identification of promising new molecules for designing drugs.”

New Yorker “There is good reason to be excited about deep learning, a sophisticated “machine learning” algorithm that far exceeds many of its predecessors in its abilities to recognize syllables and images. But there’s also good reason to be skeptical. While the Times reports that “advances in an artificial intelligence technology that can recognize patterns offer the possibility of machines that perform human activities like seeing, listening and thinking,” deep learning takes us, at best, only a small step toward the creation of truly intelligent machines.”

Mind Hacks “Most uses abstract statistical representations. For example, a face recognition system will not use human-familiar concepts like ‘mouth’, ‘nose’ and ‘eyes’ but statistical properties derived from the image that may bear no relation to how we talk about faces. The innovation of deep learning is that it not only arranges these properties into hierarchies – with properties and sub-properties – but it works out how many levels of hierarchy best fit the data.”
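The hierarchy-of-properties idea in the Mind Hacks excerpt can be made concrete with a toy network. A caveat: the weights below are hand-wired purely for illustration; an actual deep learning system would learn both the properties and the hierarchy from data. Here two first-level “properties” (an OR detector and a NAND detector) combine at a second level to compute XOR, a function no single-layer network can represent.

```python
import numpy as np

def step(z):
    """Hard-threshold activation: 1 where z > 0, else 0."""
    return (z > 0).astype(int)

# Truth-table inputs for two binary variables.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# First level of the hierarchy: two hand-wired "properties".
W1 = np.array([[1.0, -1.0],   # column 0 is an OR detector,
               [1.0, -1.0]])  # column 1 is a NAND detector
b1 = np.array([-0.5, 1.5])

# Second level: AND of the two first-level properties gives XOR.
W2 = np.array([[1.0], [1.0]])
b2 = np.array([-1.5])

hidden = step(X @ W1 + b1)      # the intermediate "properties"
output = step(hidden @ W2 + b2)
print(output.ravel().tolist())  # -> [0, 1, 1, 0]
```

The point of the excerpt is that deep learning discovers this kind of layered decomposition on its own, including how many levels to use, rather than having it specified by a designer.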

Please follow our community rules when engaging in comment discussion on this site.
  • Mike_Card

    Speech recognition = voice mail hell. It’s tomorrow’s great technology breakthrough – and always will be.  “Did you say spear magician?  Say yes if yes, or repeat if no…”

  • Wm_James_from_Missouri

    Soviet mathematician, Andrey Kolmogorov ( sometimes spelled, “ Andrei K.” ) proved that a network of 500 nodes was sufficient to prove any mathematical statement, or so I have read.

    ( I have looked for a copy of this proof and have been unable to find one. Any help on this matter would be greatly appreciated.)

    Furthermore, Kurt Gödel demonstrated a method that would prove the truth of any statement (see note) by using Gödel numbering and forming a one-to-one correspondence between the properties of the numbers and their relationships to the statement. Since these things have been shown to be true by these brilliant men, it follows that a (neural) network of 500 nodes could prove the “truth” and validity of any statement ! (Please insert as many exclamation marks as you feel this last sentence merits.) The hodgepodge of 100 trillion human brain connections is not necessary to reveal astounding and amazing facts and realizations. These insights have led me to believe that we already possess the abilities and the processing power to create a type of thinking machine. Since we haven’t, I am of the opinion that we have failed to connect the dots and failed to take the appropriate actions to achieve the desired ends. Why?

     
    Note:
    Alonzo Church, Alan Turing, and Kurt Gödel are all credited with resolving the famous decidability/undecidability problem (formally known as the Entscheidungsproblem).
    For this post all you need to know is that it has been proven that there are some statements that can NEVER be shown to be true (proved), in a consistent system, by any human or machine, no matter how “smart” or computationally fast the human or machine is. I am not talking about those situations ! I am writing about the type of proofs and statements that CAN be proven.
    In short, if a human can prove it, so can a machine ! Alan Turing also proved the concept of “Universality”, which is to say that ALL computers are equivalent, in the sense that given enough time, each can run the same programs and algorithms and get the same result.

    Caveat to note: Discussions of this type are extremely deep, intellectually. It is unknown at this time, if a mind could evolve or be created or even exist that could somehow hyper-compute beyond known limitations. To see this, ask yourself these simple questions.

    “ If there are statements that are true that can never be proven to be true, how would one explain the existence of truths that could never be known? What possible purpose would there be for such truths? Is purpose a condition for truth ? Have such truths always existed? Say, before the big bang ? Will they exist after the universal “heat death” ? But wait, this would imply that these truths were beyond temporality! But this implies that a non temporal method may exist that would allow a being to prove such truths ! Well, you get the picture !

  • Wm_James_from_Missouri

    I find it very curious that so many people look to movies like the “Terminator” or “Blade Runner” to provide their vision of how the long term effects of technology will pan out. Why do people always assume the worst case scenario?
    A truly super human intelligence would have no need or reason to eliminate or harm humans.
    1. These “machines” would far exceed any human frailties and imperfections.
    2. In part, these entities would use a different set of resources than biological creatures.
    3. Also, these highly evolved beings would not choose, as we have done, to ignore the hundreds of trillions of dollars worth of resources that are readily available in space and that are quite free for the taking.
    4. These “beings” would be able to simulate trillions of possible scenarios with almost as many variables, ranking and choosing the best outcomes, after considering the possibilities. This is more than can be said for humans.
    5. This may be the “big daddy” of them all. Truly intelligent machines of the future will constantly be testing and updating their configurations. Constantly simulating and evaluating each new “brain” ! Many such brains will possess amazing emergent properties, that we limited humans can not possible imagine! That which emanates from their minds will seem like a wizard’s magic ! However, this “magic” will have profoundly spiritual consequences for mankind !

     
    I believe that, in general, people are projecting their own shortcomings on these ephemeral machines. However, it is probably true that in the near term, machines will tend to “take on” the attributes of their creators. We would be wise to provide the kind of exemplars that we would want these newborns to emulate and surpass !

    • Expanded_Consciousness

      I hope so, but I’m not convinced. You are arguing that intelligence equates with morality. That ethics is logical. That super-intelligent machines will be super-moral. I’m not sure that human morality simply stems from human intelligence and logic, rather our biological reality and makeup and our emotional attachments create our ability to identify with other humans and creates our moral values. Why would super-intelligent machines value humans? If machines advanced so far beyond us why would an attachment to humans be a necessity? Machines may eventually view us like we view ants – a simple life-form we barely notice if we step on as we proceed with our more complex life (which is what we value). I just don’t see the necessity of the imperative that machine intelligence will forever value human life. Where does this value come from in self-programming super-intelligent machines that are logical entities and not embodied human creatures? How is an eternal beneficence toward the human race logical? Especially since the human species is such an illogical entity which may often stand in the way of logical machine goals and values. So, I don’t think the worry stems simply from literature and cinema narratives, but rather from following the logical consequences and imagining our creation (A.I.) far surpassing our ability to control it and instill in it specific human, all too human, values. It seems much more likely that the re-evaluation of all values brought upon by Nietzsche’s übermensch will actually be super-intelligent machines dominating the world with logical machine values that will look very different than the human values that we have developed over many millennia.

      Also, I think it is a mistake to call machines “brains.” Brains are a very human, flesh and blood, biological organ and A.I. will be very complex machines. There is a fundamental difference between the two and it is misleading to call super-intelligent A.I. super-brains when they will actually be super-complex computers.

      I think that the techno-utopianism of imagining a world of super-spiritual machines stems more from religion than logical necessity.

      • Wm_James_from_Missouri

        We will talk again in the future. Thanks for commenting !

  • Wm_James_from_Missouri

    Despite my support of future technologies I do recognize that our current economic system and social norms are quite insufficient to deal with what is coming. Remember, soon machines will be taking their cues from the power elite and the statistical norms created by the masses. We had better start thinking about providing the initial models that future AI systems will use to form their starting points. By this I mean, we had better start doing unto others even greater “good” than we would have them do unto us ! Uh. This sounds familiar !

  • ttajtt

    If children learn off what they see, told, ask, do, tv, news…   
    yes brains and machine, one dies with use other rusts with use.   love harm feel thought revenge follow orders voice like smell finger print DNA? all unplugged less then humans can be.  plugged in stands tall sees hears smells codes all don’t die tuffest thing on the block.

    would this mean new voting on marriage… labor… computer rights/laws intel psychology treatment centers?  insurance costs, there are pro cons-acc/incidentals and then murphy’s law.   

    but about us 8-hour wk day… retirement vacations working for the weekend scuttle butt fact or fiction mondays.  

    what did people do be for the industrial age, no fish-hunt-farm just cave dwelling because Component Android Tech toked my job via the profit margin.

    nothing against you, its just business.   

    to get away form us?

  • 1Brett1

    These advances in artificial intelligence will help change the way people with physical disabilities are viewed; this will also be true for those who have intellectual disabilities.

  • 1Brett1

    Artificial intelligence seems to be approaching an approximation of sensory input capabilities in vision and hearing.  Taste and touch will follow fairly soon.

    Ah, but the olfactory system and how it interacts with our brains; there’s the rub. 

  • ttajtt

    system. motor skills? reaction(S) thought unthought, OBE, would it know when where to buy the next 500 mill ticket.

  • ttajtt

    if one replaces a mechanical part its programed fixed.  in stem cell brain replacement.   is it programed fixed doing what it don’t know or pass on catch respond? till trained.   tissue scare… replace not human.

  • gemli

    Whenever science is the topic, the comments section can get pretty weird.  Once we veer off into the metaphysics of neural networks, things can become unintelligible in a big hurry.

    Neural networks are large collections of extremely simple elements.  Surprisingly complex systems can be modeled in this way, since amazing properties of these networks can emerge from that complexity.  This is the way our brains work, so it’s not surprising that we can detect eerily familiar patterns of intelligent behavior emerging from these systems.

    Neural networks don’t work by magic, and neither do our brains.  There’s no need for supernatural explanations.  Our DNA is really good at making many, many copies of simple switching elements (i.e., neurons), and since even a few neurons can display surprising problem-solving behavior, it’s not hard to see why more is better.  Before we attribute magical properties to our intelligence, we might reflect on the fact that we can perform some pretty amazing simulations of intelligent behavior with a handful of parts from Radio Shack.  I’m just saying. 
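    gemli’s point that even a few simple switching elements can show problem-solving behavior is easy to demonstrate. The following is a toy sketch (not anything from the broadcast): a single artificial neuron, trained with the classic perceptron learning rule, learns logical AND from examples.

```python
import numpy as np

# Truth table for two binary inputs, target = logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)  # weights start at zero
b = 0.0          # bias starts at zero

def predict(x):
    """Fire (output 1) only when the weighted sum crosses the threshold."""
    return 1 if w @ x + b > 0 else 0

# Perceptron rule: nudge weights toward each misclassified target.
# AND is linearly separable, so convergence is guaranteed.
for _ in range(100):
    errors = 0
    for xi, target in zip(X, y):
        delta = target - predict(xi)
        if delta:
            w += 0.1 * delta * xi
            b += 0.1 * delta
            errors += 1
    if errors == 0:
        break

print([predict(xi) for xi in X])  # -> [0, 0, 0, 1]
```

    One neuron suffices here because AND is linearly separable; the “more is better” part of the comment is about stacking many such elements so that harder, non-separable problems become solvable.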

    • Mike_Card

      Players here rarely let knowledge deficits curb opinions.  Just sayin’…;-)

    • ThirdWayForward

      Yes brains are “neural networks”, but it is unclear (and we think unlikely) that the current kinds of systems we call neural networks embody the kinds of organizational and functional principles that brains use. 

      There really needs to be a theoretical neuroscience discussion of the nature of information processing in real brains.

      How brains work, in informational terms, and why they generate conscious awareness are the two deepest mysteries of the universe that science faces.

  • ttajtt

    it is getting smaller and smaller nano-nano.
    doing or controlling more and more.   
    yes it is great what all we can spark create.  

  • Markus6

    I think it was management expert, Michael Hammer, who told this story about one of the problems of AI.

    Most companies implement AI by finding the smartest person in a particular area – let’s call him Sam. Then they sneak up behind Sam, slice off the top of his head, scoop out what’s inside and drop it into a computer. The problem according to Hammer, is that if Sam is as smart as he’s supposed to be, he’s unlikely to sit still for this. 

    • ttajtt

      before or after The?

  • http://pulse.yahoo.com/_Y6CO5C2HE4WM2OYGCDVWGPRXXM oldman

    Questionable whether computers will ever be truly “intelligent”.

    But we’re definitely getting closer to the point where you would not be able to tell – at that point, what’s the difference?

  • MadMarkTheCodeWarrior

    With such enthusiasm, our obsolescence as workers is ensured.

    Unfortunately we are no smarter than the Greeks or Romans, so I wonder if we will be able to create an economic system that will support human life on this planet before we create a dystopic nightmare.

  • http://www.facebook.com/profile.php?id=1403733725 James Rossi

    If a machine intelligence were to develop, would we necessarily even realize it at the time? The thoughts and actions of a computer could be so alien that true machine thought may resemble nothing like what we are testing for.

  • ThirdWayForward

    The hierarchical-sequential approach to machine information processing has been with us for decades and has failed. 

    Real brains are heterarchies, and in most cases they don’t operate on statistical information (Bayesian inferences). If they did, we would be constantly hallucinating our expectations.

    This sounds like brute force statistical recognition, which does have its place (like those brute force chess-playing computers), but this approach will not necessarily lead to an understanding of how we and animals do the things we do
    (usually much better and in a more reasonable, flexible, and appropriate way than machines).

  • geraldfnord

    ‘The question of whether Machines Can Think… is about as  relevant as the question of whether Submarines Can Swim.’
    —Edsger W. Dijkstra

  • http://wh.gov/IVp4 Yar

    Any free thinking non human intelligence would quickly come to the conclusion that humans are destructive and would then act to eliminate them from the environment given the chance.  We are not doomed by our abilities to develop AI, we are in trouble from our inability to use our own intelligence.
    I would love to teach a robot to pick fruit or vegetables.  I have considered approaching Google to host a server farm, on the farm.  I could then use the extra processing capacity to develop agricultural robotics.  I would also like the waste heat for a greenhouse. 

  • Joe Transue

    I am excited about the idea that fruit in the grocery store might finally come without stickers.  Apples vs. oranges is easy, but the challenge will be telling two types of apples apart (maybe with broad light spectrum and density analysis?).  
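    As a toy sketch of this commenter’s idea, a nearest-neighbor classifier could separate two apple varieties by measured features. Every number and feature below is invented purely for illustration (hypothetical reflectance and density values, not real measurement data).

```python
import numpy as np

# Hypothetical training data: (reflectance, density) -> variety label.
# All values are made up for illustration.
train_X = np.array([[0.80, 0.78], [0.82, 0.80],
                    [0.55, 0.90], [0.58, 0.88]])
train_y = ["Fuji", "Fuji", "Granny Smith", "Granny Smith"]

def nearest_variety(sample):
    """Label a new fruit by its closest example in feature space."""
    dists = np.linalg.norm(train_X - sample, axis=1)
    return train_y[int(np.argmin(dists))]

print(nearest_variety(np.array([0.81, 0.79])))  # -> Fuji
```

    A real system would need many more features (the broad-spectrum analysis the comment suggests) and far more training examples, but the shape of the approach is the same.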

  • GPILAWLLC

    Gary Marcus, Mr. LeCun’s colleague at NYU, wrote a New Yorker article two days ago pointing out that Google’s driverless cars (Hello Mr. Norvig!) represent a significant historical moment, not just because it will “signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems.” How frequently do future questions of ethics influence your research in artificial intelligence?

    AK, Baltimore

    • Expanded_Consciousness

      Yes, machine ethics is the big question, and while I mentioned it on this thread, it was only glossed over on the program.

      Here is the article:

      November 27, 2012

      MORAL MACHINES

      Posted by Gary Marcus


      http://www.newyorker.com/online/blogs/newsdesk/2012/11/google-driverless-car-morality.html

      • Wm_James_from_Missouri

        The New Yorker article was a good one. Did you read some of the comments? They were very good.

        I would like you to imagine that you could somehow change a random, unknown (to you) collection of ethical decisions that have been made by mankind throughout all of history. Now, not having known what decisions have been changed, make your argument against computers making ethical decisions. Can you spot the difference in the world ?

        Now imagine that you exist in both worlds. Can either one of “you” state with certainty that the people in the world you live in have made truly ethical decisions ?

  • http://pulse.yahoo.com/_Y6CO5C2HE4WM2OYGCDVWGPRXXM oldman

    Speech recognition has come so far – I remember back in the late ’80s, working at Bell Labs, they had employees calling in and repeating requested words so they could sample variations and similarities in speech.

    Now people don’t think twice about talking to automated systems.

    • http://www.facebook.com/profile.php?id=1403733725 James Rossi

      I am not surprised by talking to machines, but boy do I hate it. Not sure when we’ll get beyond that particular barrier.

      • Expanded_Consciousness

        When artificial voices become indistinguishable from human voices.

        It is getting there. Listen to this: http://www.neospeech.com/audios/NeoSpeech_Bridget.wav

    • Joseph_Wisconsin

      I will admit that I hate the automated response systems when calling for help or to resolve some problem.  The layers upon layers of press X or say ______.  All to avoid actually paying for someone that I could go to directly and get my problem solved. Especially as nine times out of ten this will be necessary in the end anyway. However, with respect to speech recognition, it still sucks.  Most times if the setup is the say ________ option, the automated system just gets it wrong over and over anyway.

    • Mike_Card

      I refuse to; I dial 0.  Voice recognition has not progressed in 30 years.

  • ThirdWayForward

    Let’s not let Singularity wackiness derail the conversation.

    The speaker was just outlining an alternative to feature-based machine recognition... that was a promising thread.

  • DrewInGeorgia

    I personally think we have already been outpaced by Moore’s Law. My question is this:

    If true AI emerges (if it hasn’t already), would it necessarily make its existence known? Or in the interest of self-preservation (based on human history), would it hide its existence?

    • Wm_James_from_Missouri

      Drew, you need to check out a book entitled “Blondie 24”. I don’t have the book in front of me so I can’t give you the two authors’ names. It is about a computer program they wrote to play checkers over the internet. The machine’s opponents thought they were playing a pretty girl with the moniker “Blondie”. It is a very good book about neural networks and AI. An easy read too, plenty short !

      • DrewInGeorgia

        Thanks for the recommendation, I don’t believe I’ve ever read it although it does sound familiar. I’d like to return the favor by recommending The Age of Spiritual Machines by Ray Kurzweil. It’s not particularly short nor is it the easiest read but it is well worth the time spent.

        • Wm_James_from_Missouri

          I’ve read it more than once. I have probably bought and given away 10 to 15 copies.

  • ThirdWayForward

    The Singularity is a platonic fantasy, like life after death for mainstream religions.

    What does it say about us as a species that we are constantly, desperately fantasizing about immortality?

    • Expanded_Consciousness

      That we hate to die.

    • DrewInGeorgia

      We want most that which we cannot have?

      • ThirdWayForward

        Exactly.

        “Desire…., makes you go where you can’t go, makes you want what you can’t have”

        – Tuxedomoon song

        • ttajtt

          song

  • http://pulse.yahoo.com/_Y6CO5C2HE4WM2OYGCDVWGPRXXM oldman

    So what happens when we get to the point where the machines are doing the research and designing and building the next generations of machines?

    • Expanded_Consciousness

      We go to the beach and have some fun!

  • ThirdWayForward

    Tom, go back to getting him to explain exactly how the present strategy is different from the last 5 decades of feature-based machine learning.

    Many people have been reading the hype about “deep learning” and wondering if there is any real substance to it,
    or whether it’s another funding bandwagon.

    God, please get us off the Singularity thread.

    We don’t even know enough about the informational structure of the brain and how it works in order to be able to fully simulate its functions, even if we had the computational power.

  • http://www.facebook.com/profile.php?id=100000270291625 Harry Walker Hooker

    I don’t think that it is probable or perhaps even possible that computers will ever have access to the thoroughly human repertoire of sensory input and how it shapes our thinking, much less other less tangible human characteristics such as libido, love, compassion, freedom, forgiveness, etc. My fear, lately stoked by Nicholas Carr’s excellent book, “The Shallows,” is that humans, through the use of computing devices, will become more like them and less human. Harry in Saint George, SC

    • http://www.facebook.com/daniel.brown.90834 Daniel Brown

      When you consider some aspects of humanity that need to be controlled, like greed, hatred, and the insecurities that abound, yes, we could stand to be a little less human. Computers are objective. Some of the greatest thinkers in human history strove to maintain objectivity. I’m not saying lose all of our humanity, but there are aspects we certainly need to redefine.

  • DrewInGeorgia

    Artificial Intelligence: Humankind’s belief that it is the most intelligent species to inhabit The Universe.

  • ThirdWayForward

    It’s just not true that we don’t know how we process speech — it’s just that the automatic speech recognition community long ago abandoned biologically-based approaches in favor of more general, brute force methods.

    We need to understand how we hear, how we process and understand speech, how we see, how we smell, how we humans think.

    We’re all for research in these artificial systems, and they may shed light on how we humans and animals do these things, but don’t hold your breath waiting for these approaches to solve the mystery of how brains work...

  • http://profile.yahoo.com/UOMJZO5CO42KYMSB42A4KMTWKA paul

    Is the guest worried that, as computers/technology improve, you’d think we humans would become more intellectually based, but it seems to be the opposite?  That worries me.

  • http://www.facebook.com/profile.php?id=1403733725 James Rossi

    Considering the makeup of the internet, the AIs that develop by being exposed to that particular wealth of data will be a meme-spouting, tin-foil-hat-wearing, schizophrenic cat lover.

  • DrewInGeorgia

    Computers Getting Smarter = People Getting Lazier

  • jwallick3

    How autonomous do we want computer intelligence to be?  Asimov’s robots had rules that made them harmless, obedient servants, whereas HAL felt it could kill to cover its blunder.  Is the goal a fully actualized AI, or one that serves the goals of society or a corporation?

    • Expanded_Consciousness

      A true AI intelligence will be able to build and program its own kin. It will not be subservient to Asimov’s rules or to the desires of a government or a corporation.

    • ThirdWayForward

      HAL felt that it was obligated to kill in order to fulfill its mission. 

      HAL was caught in a moral double-bind, not all that dissimilar to the kinds that trip us all up (the speeding train, you’re at the switch, and have the choice to do nothing such that ten people die or act to pull the switch, thereby killing someone who would otherwise live).

      Morality is not fully rational — one can (easily, habitually) simultaneously hold incompatible goals.

      • jwallick3

         In the film, astronauts David Bowman and Frank Poole consider disconnecting HAL’s cognitive circuits when he appears to be mistaken in reporting the presence of a fault in the spacecraft’s communications antenna.

  • ThirdWayForward

    The flying machine design analogy is apt if one is focused on replicating one particular function rather than understanding deeper underlying functional principles.

    We still don’t have flying machines that are as efficient or nimble as insects (but eventually we will get there... it’s just a matter of how fast).

    Obviously we want to have both approaches going on at the same time.

  • Expanded_Consciousness

    The guest is being less than logical if he is not worried about super-intelligent computers – entities that lack human morality – gaining autonomy and power. Create a monster and then shrug your shoulders. Thanks, Dr. Frankenstein.

  • ttajtt

    if it only knows what its told whom tell it first?

    • Expanded_Consciousness

      Deep learning. Learning means it learns, not that it only knows what we program it to know.

      • ttajtt

        learn like a better army, how we destroy ourselves. what could it learn that we didn’t tell it.   if someone tells it how need to be better?  quest

  • http://www.facebook.com/arthur.lemarche Arthur Rosco le Marche

    eagle-eye. now coming reality.  

  • http://www.facebook.com/julie.perron.794 Julie Perron

    Humans make wonderful translators. Why can’t we use AI to do something useful, like inputting a list of Palestinian demands and Israeli demands and computing a compromise that we as humans couldn’t imagine or create because of prejudice, grudges, and hate?

  • Pingback: The Possibility of Benevolence in Artifical Intelligence « Just Enough Ink

  • chanankub

    A computer is a MEDIUM for transferring ideas and information – much like a book. A medium has no intelligence of its own, no awareness, no consciousness. It doesn’t think, it transfers thoughts. 
    I’m not afraid of computers; only of people who may misuse them.

  • http://www.dogoodgauge.org The Do Good Gauge

    Dichotomy is artificial. It’s likely the binary nature of the computer is transforming human intelligence instead of human intelligence transforming computers. Dichotomy is not how the human neural network works. Synaptic connectivity is more complex than ones and zeros.  Dichotomy discounts the grey matter.  It gets stuck in an endless loop of polarization; good vs. evil, winners and losers, or man vs. computer.

    Instead of fearing how the computer will overtake man’s ability, why not focus on how the synaptic process can scale beyond an individual and help many discover the mutual inclusion within a solution?

    Our current economic model is based on building more wealth with less human effort.  Why not flip that model?  How can we utilize more people to figure out how to use less energy and consume fewer resources?  How can humanity’s neural network be utilized to do good?  It’s not an either/or argument.  It’s not all or nothing.  It’s in the grey matter.

    Maybe it’s time to switch from artificial intelligence to real intelligence or let loose of this hold on the false dichotomy of a binary computer.

    Associating Thought
    Commonality

    • http://www.facebook.com/people/Brennan-Moriarty/100000655771831 Brennan Moriarty

      It’s the 3rd dimension , like a =010= bicycle with a reference point to orbit round:;: and “arc reports” of what actually happened as an [quadratic?*] equation of that very reference -whether known or variable. The 4th dimension or quantum’s [of] Time: are specialties of technology;  predating even the scientific method with primitive clocks.
      Yet when we look at a clock, we [may] feel impelled to do something, but when it comes to [temporarily...] UNDOING something :) it could indeed warp our minds -feeling soft or jagged- and computers too as of yet may be non^+’Ed.
       Removing unknown Obstacles -or connecting knowable Subjects- in-effect automation, is the “Art Official of a Telling Engine”. prime-On-time.

  • Pingback: Yann LeCun教授关于Deep Learning的主题接受了On Point访谈 | bfcat-计算机视觉博客

  • http://www.facebook.com/people/Brennan-Moriarty/100000655771831 Brennan Moriarty

    It’s still* the little things within The Big Picture 
    the Art to Tell sub-Facial emotions …general from normal… and not just poetic symmetry [like Mac computers] but UNIFIED clarity . for example the engines on automobiles built before 1970 -before smog technological components were necessarily added- you could see a lot more. If you could open up the big picture like a book or a door, with all the cables and tubes of communication functioning, and see the most basic gears and [fully] understand their function and greater dynamics… you could [theoretically] design a system that works with the added parts or possibly replace or relocate them, and adapt/translate this to the post-navigation of what is really needed.

  • Thinkin5

    Far fewer innocent people are killed in drone strikes than in a military invasion and war. Where is the outrage and concern over all the innocent lives lost in Iraq?

  • Brian Laskey

    On the “Ways to Listen” page of the On Point site it says “• You can also listen to any of our past shows in streaming audio by browsing or searching our online archive.”  With past_shows linking here.

    Anyone know where or how to access streaming versions of this or other past shows?

  • Brian Laskey

    -edit- found streaming audio. It was hidden in plain sight

  • http://www.facebook.com/luke.held.9 Luke Held

    I’m amazed by the lack of ability (or want) of the scientists to project out these scenarios into the future.  It is obviously inevitable that computers/A.I. will be able to do 90% (if not all) of the work that humans do (although likely not in our lifetimes, but possibly; technology has a way of developing exponentially).  They often said don’t worry about losing your job (referring to the radiologist, for example).  But he is dead wrong; the computer will be able to read results (and likely more accurately) and provide diagnoses and treatments.  We are not asking him to tell us the future or slow his progress; we are just asking the questions that need to be asked.  Otherwise, excellent show as always.

    As for computers wanting to eliminate us from the planet due
    to our destructive tendencies… that’s a scary scenario!


  • Pingback: An Interview on AI and Deep Learning with Yann Lecun and Peter Norvig « Deep Learning

  • Sy2502

    Everybody calm down, people: no computer is going to take over the world any time soon. Even when they say computers are getting “smarter,” I assure you they are still exceptionally dumb, even compared to a small child or a house pet. Facial or speech recognition does not intelligence make.

    • Nicholas Summers

      My friend, computers are already a lot smarter than your average house pet. Not just with voice and facial recognition, but with strategy and the ability to learn. I do agree that, compared to a human, computers are terrible at carrying a social conversation. Robots will never be psychologists. But they are much more powerful in the inhuman aspects of our reality. Why do you think calculators exist? They don’t have even a fragment of A.I., but they can still do math faster and better than most humans on the planet. If it was like that in the 1800s, then what is the not-too-distant future going to give us? A.I. exists; it may not be powerful enough to beat a human in a Turing test now, but technology is a snowball on a nearly infinite slope. The bigger it gets, the faster it goes.

      • Sy2502

        No, computers are NOT smarter than a house pet. If you want to see how bad voice recognition still is, try Siri on the iPhone and see how embarrassingly bad it is. Crunching numbers is not intelligence, so faster number crunching will not give rise to intelligence. I worked in both hardware and software engineering for years; I know the state of things in computer design, and I assure you that you can sleep soundly: no computer will be taking over any time soon.

  • Pingback: Excellent Introductions to Deep Learning | Machine Learning Trends
