Sunday, December 14, 2014

What Artificial Intelligence Is Not




First of all, AI is nothing to be frightened of. It’s not a sentient being like Skynet or an evil red light bulb like HAL. Fundamentally, AI is nothing more than a computer program smart enough to accomplish tasks that typically require human-quality analysis. That’s it, not a mechanized, omnipresent war machine.
Secondly, AIs are not alive. While AIs are capable of performing tasks otherwise performed by human beings, they are not “alive” like we are. They have no genuine creativity, emotions, or desires other than what we program into them or they detect from the environment. Unlike in science fiction (emphasis on the fiction), AIs would have no desire to mate, replicate, or have a small AI family.
Next, AIs are generally not very ambitious. It’s true that in very limited contexts, an AI can think similarly to us and set tasks for itself. But its general purpose and reason for existence are ultimately defined by us at inception. Like any program or technology, we define what its role in our society will be. Rest assured, they will have no intention of enslaving humanity and ruling us as our AI overlords.
___________________

Thanks to Rob Smith (@robpecabu) for a welcome infusion of reality-based thinking.

As to AI and sentience, though... It seems to me that the door is very much open. 
Merriam-Webster tells us that sentient means "able to feel, see, hear, smell, or taste."
My own work bears directly on what might be called artificial sentience. 

Since this blog is all about exactly that, though, I will refrain from repeating myself this one time.

Monday, November 17, 2014

Jaron Lanier on the Myth of AI




Views from The Edge

Lanier is always thoughtful and often provocative in a constructive way. I happen to agree with him, here:
The usual sequence of thoughts you have here is something like: "so-and-so," who's a well-respected expert, is concerned that the machines will become smart, they'll take over, they'll destroy us, something terrible will happen. They're an existential threat, whatever scary language there is. My feeling about that is it's a kind of a non-optimal, silly way of expressing anxiety about where technology is going. The particular thing about it that isn't optimal is the way it talks about an end of human agency.
But it's a call for increased human agency, so in that sense maybe it's functional, but I want to go a little deeper into it by proposing that the biggest threat of AI is probably the one that's due to AI not actually existing, to the idea being a fraud, or at least such a poorly constructed idea that it's phony. In other words, what I'm proposing is that if AI was a real thing, then it probably would be less of a threat to us than it is as a fake thing. 
What do I mean by AI being a fake thing? That it adds a layer of religious thinking to what otherwise should be a technical field. Now, if we talk about the particular technical challenges that AI researchers might be interested in, we end up with something that sounds a little duller and makes a lot more sense.
For instance, we can talk about pattern classification. Can you get programs that recognize faces, that sort of thing? And that's a field where I've been active. I was the chief scientist of the company Google bought that got them into that particular game some time ago. And I love that stuff. It's a wonderful field, and it's been wonderfully useful.
But when you add to it this religious narrative that's a version of the Frankenstein myth, where you say well, but these things are all leading to a creation of life, and this life will be superior to us and will be dangerous ... when you do all of that, you create a series of negative consequences that undermine engineering practice, and also undermine scientific method, and also undermine the economy.
The problem I see isn't so much with the particular techniques, which I find fascinating and useful, and am very positive about, and should be explored more and developed, but the mythology around them which is destructive. I'm going to go through a couple of layers of how the mythology does harm. 
The most obvious one, which everyone in any related field can understand, is that it creates this ripple every few years of what have sometimes been called AI winters, where there's all this overpromising that AIs will be about to do this or that. 

Thursday, November 06, 2014

Lowe's Robots




In the near future, you might be surprised on a visit to the giant hardware store in your town to find yourself greeted by a chatty robot rather than a human sales assistant. A harbinger of this age of robotic shopping is being trialled in the form of two OSHbot robot sales assistants at an Orchard Supply Hardware store in San Jose, California. Built by Lowe’s Innovation Labs and Silicon Valley technology company Fellow Robots using "science fiction prototyping," the OSHbots are designed not only to identify and locate merchandise, but to speak to customers in their own languages.

The personal touch makes visits to cavernous megastores less intimidating – especially when you’re a novice in the world of U-bends and junction boxes. But human sales assistants cost money, which can often be more effectively spent by concentrating human talents on tasks more complex than hunting down a self-tapping drywall screw. To allow this while still keeping customers happy, Orchard Supply, a subsidiary of Lowe’s, is seeing how well robots can take up the slack.

Saturday, September 06, 2014

Automata





"If we go back to the city, we will die."

"To die, you have to be alive first. You're just a machine."

"Just a machine? That's like saying you're just an ape."

Friday, August 15, 2014

The Robots of Dawn



The narrator makes a number of compelling points, but the analogy to horses is lame.

Horses don't vote, stage mass demonstrations, discomfit robber barons or sabotage factories.

On a larger note, economic determinism of whatever kind is fundamentally flawed in that it assumes people behave like so many cogs in a social machine.

But inanimate objects do not reflect on their situations, nor do they seek to change them.

We need to consider what sort of civilization we want.

We must never become slaves to the machine.

Tuesday, June 17, 2014

Field FX @ 1 Million Cups




I presented a multimedia extravaganza to an audience of techies and business people recently, courtesy of the good people at 1 Million Cups.

People told me it went well, despite my rustiness and an initial glitch. There were excellent questions and some good laughs.

The video is hosted at Livestream. If you're interested, I suggest you fast-forward past the first 8 minutes or so, at which point the show really gets under way.

The brainchild of the Kauffman Foundation in Kansas City, 1MC is all about entrepreneurship. Groups meet every two weeks to hear people pitch their startups for 6 minutes. The rest of the hour consists of a Q&A session. 

Our meetings are highly informative and lots of fun. So it's no surprise that there are now branches all over the US.


Saturday, May 24, 2014







I'm taking my multimedia presentation on the road! It's all about AI, robotics, neural nets and quantum theory.

A business friend told me that at this year's SXSW conference, the talk was all about robots. 

Physics in Mind: The Quantum Brain was recently named book of the year by Physics World.

And I have been approached by a major university publisher. So the time seems right.

The presentation is geared for a college-educated audience, with ample time for Q&A.

If interested, please contact me for details: bjflanagan[at]fieldfx.biz


Tuesday, April 29, 2014


For all their sophistication, computers pale in comparison to the brain. The modest cortex of the mouse, for instance, operates 9,000 times faster than a personal computer simulation of its functions.

Not only is the PC slower, it also takes 40,000 times more power to run, writes Kwabena Boahen, associate professor of bioengineering at Stanford, in an article for the Proceedings of the IEEE.

"From a pure energy perspective, the brain is hard to match," says Boahen, whose article surveys how "neuromorphic" researchers in the United States and Europe are using silicon and software to build electronic systems that mimic neurons and synapses.
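Putting the two quoted factors together gives a sense of just how lopsided the energy comparison is. A back-of-the-envelope sketch, using only the 9,000× speed figure and the 40,000× power figure cited above (the exact ratios in Boahen's article may be stated differently):

```python
# Rough brain-vs-PC energy comparison, using the two factors quoted above:
# the mouse cortex runs ~9,000x faster than its PC simulation,
# while the PC draws ~40,000x more power.

speed_ratio = 9_000    # cortex speed / simulation speed
power_ratio = 40_000   # simulation power / cortex power

# Energy per unit of computation scales as power divided by speed,
# so the simulation spends speed_ratio * power_ratio times more
# energy than the cortex to accomplish the same work.
energy_ratio = speed_ratio * power_ratio

print(f"{energy_ratio:,}")  # 360,000,000 -> ~360 million times more energy
```

On this rough accounting, the PC burns on the order of a third of a billion times more energy per unit of cortical computation, which is exactly the gap neuromorphic hardware aims to close.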

http://stanford.io/1tZkat4