It's a rather strange question, isn't it? A camera doesn't have as much intelligence - still less self-awareness - as a flatworm. So what's the point of asking a question like this?
It's because the answer leads to a surprising and profound conclusion. So please suspend your puzzlement for a moment and stay with me while I explain.
I want to start with the question of what makes anything sentient. What does it mean to be sentient? It seems to me there's a spectrum of definitions. Like everything to do with consciousness, it quickly gets fuzzy. You have practitioners of utter woo at one extreme and physicists arguing that consciousness is a quantum effect on the other. But there's no need for me to go into that here. It's much easier for me simply to define what I mean by "sentient", which is that an entity is sentient if it is aware of itself. Why is that important? Think about it: if you can be aware of yourself, then it means that you are aware of the distinction between you and the outside world - everything that is not you.
A camera sensor sees the world that surrounds it. But does it know that it's doing that? To know something, you have to be sentient; otherwise you're just storage.
So what would not just a camera, but any "machine", have to be for it to be self-aware? There are some specific requirements.
First, to generate awareness, it would need to know about the world. It would need at least some senses. Cameras are off to a good start. A camera obviously has an eye. If it has a microphone, then it has ears. But it wouldn't be very self-aware if it only ever looked at the same scene, so it would have to move. Would it be enough to lash it to a Land Rover or a motorbike? I doubt it because it would be utterly dependent on the whims or inclinations of the driver.
So it would need to be self-propelled and in control of its movements. That would probably get us a long way. But sometimes, "being there" in the sense of "location" isn't enough. In fact, if that's all you have to understand your place in the world, it's sadly lacking. I think we would need to give our camera more senses, like touch, for example. Not just a proximity or contact sensor, but proper, flexible touch, of the type that comes with having fingers attached to hands attached to arms and a body. Why? Because exploring the world is a crucial part of our ability to understand it. We don't just understand the world by looking at it or listening to it. We need to touch things too. Sometimes we'll get burned or stung. But even with these hazards, we learn, all the time.
Theories of the mind
Some of this comes down to our theories about the mind, which are crucial to self-awareness. Children likely start out without a "theory of mind", developing one only later. This doesn't mean that they haven't read any philosophy books yet. It's about whether inexperienced young individuals understand that other people have minds: that you're not the only living, intelligent being in the universe, and that those crawling objects at the daycare centre that look a bit like you have thoughts just like you do. It's reasonable to think that kids make this leap of imagination quite quickly. Having arms and legs and feeling the carpet push back against them generates a powerful impression.
It's also about more than moving around, seeing, hearing and touching things. You also need to know what your own body is doing in relation to the world out there. Nobody expects that when we shake hands with someone, our own hand will pass through theirs. We all know what it's like when we bump into someone in a crowded place (if you're from the UK, you'll automatically say "sorry" even if it was the other person's fault).
Knowing where our limbs and body are in relation to the world is actually a sense in itself. It's called proprioception, and you can see it at work when you close your eyes and try to touch your two index fingers together in front of you. It's also how I can play an E flat major ninth on a keyboard without looking at it. My "muscle memory" puts my fingers into the correct position relative to each other, but it's proprioception that puts my hand in the right place on the piano. I assume that machines have a good sense of proprioception because they are full of sensors and - actually - couldn't work at all if they didn't track where their movable parts were at all times.
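Robots already do something very like this. As a minimal sketch - assuming a hypothetical two-joint planar arm with angle encoders, and link lengths chosen purely for illustration - this is roughly how a machine can "know" where its hand is from its joint readings alone:

```python
import math

# A minimal sketch of machine "proprioception", assuming a hypothetical two-joint planar arm.
# Each joint reports its angle (as an encoder would); from those readings and the known
# link lengths, the machine can always compute where its "hand" is, without looking.

LINK_1 = 0.5  # metres, shoulder to elbow (illustrative value)
LINK_2 = 0.4  # metres, elbow to hand (illustrative value)

def hand_position(shoulder_angle: float, elbow_angle: float) -> tuple[float, float]:
    """Forward kinematics: joint angles (radians) -> hand position in the arm's plane."""
    elbow_x = LINK_1 * math.cos(shoulder_angle)
    elbow_y = LINK_1 * math.sin(shoulder_angle)
    hand_x = elbow_x + LINK_2 * math.cos(shoulder_angle + elbow_angle)
    hand_y = elbow_y + LINK_2 * math.sin(shoulder_angle + elbow_angle)
    return hand_x, hand_y

# With the shoulder at 30 degrees and the elbow at 45 degrees, the machine "knows"
# where its hand is from its own joint sensors - which is roughly what proprioception gives us.
print(hand_position(math.radians(30), math.radians(45)))
```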
Perhaps the final part of what you would need to make a machine sentient is agency: the ability to make decisions. I don't mean algorithmic decisions ("if the traffic light turns green, start moving") but creative and common-sense decisions.
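To make the contrast concrete, the algorithmic kind of decision is trivially easy to write down. Here's a hypothetical sketch (the rule and names are mine, not taken from any real traffic system):

```python
# A purely algorithmic "decision": a fixed rule maps an input to an action.
# There is no understanding here, just a lookup - which is exactly the point.
def decide(traffic_light: str) -> str:
    if traffic_light == "green":
        return "start moving"
    return "stay stopped"

print(decide("green"))  # start moving
print(decide("red"))    # stay stopped
```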
That's a tough one. It's not at all clear what the pathway to agency is. And what if you need sentience to achieve agency, and not the other way round?
Ultimately, we don't know if machines will ever be sentient. If they do achieve this level of consciousness, will we even know it? New animal welfare laws are making it illegal to boil lobsters alive - as of course they should. But the laws talk about lobsters being sentient. How can we possibly know whether such a different life form is self-aware? Perhaps by analogy: we observe their behaviour and conclude that they're feeling pain.
And perhaps that's all we can ever do. It's likely that sentience will be an "emergent property": that it will emerge out of complexity. And when it does, we might not recognise it.
All of this leads me to a curious conclusion: a sentient camera is not just a bit like a robot, but is the very definition of one.