Blu-ray movie titled Ex Machina
J**S
Can We Imagine Machine Consciousness?
Humans don't have to rely on brute-force, trial-and-error calculations to improve our understanding of our world. We naturally organize our minute-by-minute evaluations of our real-time circumstances by using semantic information to engage in practical reasoning; that is, we rely not just on the raw, unfiltered data we encounter but track the specific information of interest to us on a particular occasion by identifying what it is about (and who else might be interested in it) in order to decide what to do or what to say. We exploit the new features we encounter in order to improve our competence, moment by moment, as we mature. How do we achieve this focus in our thinking? We learn useful new facts about the world we inhabit by using available semantic information to improve our understanding of that world. Isn't that something more than brute calculation depending on binary And/Or logic gates? If so, how do we focus our pattern-finding competence (inherited through our genes and exhibited in our brain function) onto the things-that-are-of-particular-relevance-to-us-now? How did we acquire this unique competence? And can machines exhibit this same competence?
In 1950 the mathematician and code-breaker Alan Turing proposed his "Imitation Game" in his paper "Computing Machinery and Intelligence." A few short years later the field of "Artificial Intelligence" became a serious scientific endeavor, including such luminaries as John McCarthy, Allen Newell, Herbert Simon, and Marvin Minsky. Their goal was to demonstrate in general that a purely physical system, i.e., one not directed by human supervision, has the necessary and sufficient means for intelligent action.
Humans act intelligently by using our knowledge of the world (semantics) and our knowledge about how people communicate (language). We are "intentional" thinkers, says Prof. Dan Dennett, who use our semantic capabilities and social experience to anticipate what other persons (or things) intend to do. We achieve this result by understanding another's knowledge and goals, and then applying our rational abilities to infer what that person will likely do next. We easily grasp what's relevant in any new circumstance, and quickly act accordingly. Our ability to understand what is relevant in a social situation depends on our many years of learning about our world as our brains and our sensory systems develop and mature. This is a vastly different skill from the inductive, algorithmic skill demonstrated by machine AI systems. Hence, the real success of AI is in the area of 'toy domains,' in which human developers can target and limit the kind of relevance decisions the AI program needs to make, and provide the program with algorithms uniquely tailored to its restricted domain. The massive computing power of supercomputers can play chess, play Go, run game-theory simulations, and win against the best human opponents. AI systems are also great at managing vast databases and detecting your preferences, your biases, your likes, as demonstrated by your web-viewing history, then inductively predicting what else (what product, what political view) you would favor.
Google, Amazon, the DNC, the RNC, and the cyber unit of the Russian GRU are watching your web activities and applying AI to these toy domains, but in every instance there are human controllers developing, monitoring, and evaluating the AI systems.
So-called "deep learning" machine programs like Watson or Google Translate continually prospect through trillions of bits of data, using statistical algorithms to make new, unique judgments about true facts (Watson) or acceptable translations (Google Translate). But they do so without any non-statistical competence. They lack practical reasoning, which all (normal) humans have. Indeed, deep learning programs are (for the foreseeable future) completely dependent on human understanding to design their statistical-learning algorithms and data domains. IBM's Deep Blue is an impressive, world-class chess player, and AlphaGo is a champion Go player. But none of these programs has the capacity to notice the relevant (semantic) features of the flood of data they ingest, other than to detect the statistical regularities their human-developed algorithms glean from the vast mass of data. In an important sense, they aren't responsible for the judgments that they (algorithmically) generate. They are not (yet) non-human agents.
So, given these limitations, can thinking be reduced to calculation, to binary code lacking semantic reference? Put another way, can we imagine machine consciousness, true artificial intelligence (AI)?
Of course we can, and have, since Star Trek's Data (Brent Spiner) and Ash (Ian Holm) in the original Alien film. Before that we had the talking automaton Robby the Robot in the '50s sci-fi classic Forbidden Planet (Shakespeare's "The Tempest" set in the future on an alien planet, with Morbius (Walter Pidgeon) as Prospero the Magus and Robby as his deformed slave Caliban). They are intelligent, yet we feel something is lacking that makes these earlier robotic characters seem less than human. Can these robot slaves be set free? Well, see what happens when we imagine the exquisite Alicia Vikander as Ava in Ex Machina (think the "Eve" of robot consciousness) or the lovely Evan Rachel Wood as Dolores in Jonathan Nolan's masterful Westworld - then we easily take the leap required by the Turing Test to seeing them as developing the contextual relevance and the 'automatic' free choice of response that we so strongly associate with human consciousness. (Or even Sean Young as "Rachael" in Ridley Scott's Blade Runner - but did you fall in love with Jeffrey Wright as "Bernard" in Westworld?) Indeed, the human male characters in both Ex Machina (Domhnall Gleeson as "Caleb") and Westworld (Jimmi Simpson as "William"), acting as scouts seeking to discover the Promised Land of true AI, each fall in love with Ava and Dolores, respectively. Well, who wouldn't fall in love with either of these skilled actresses performing the role of seductress at the height of their beauty! But do these film versions of thought experiments for conducting the Turing Test match what AI is really capable of achieving? (Nathan, the architect of Ava's AI program, tells Caleb that he designed Ava's responses by analyzing hundreds of millions of facial and verbal responses stolen from users' cell cams, a classic 'deep learning' statistical project.
Ford, the AI architect of the Westworld hosts, tells Bernard that he designed a memory function so that Dolores and Maeve and the others could learn from their experiences and develop the ability to vary far from their prefabricated scripts.) So, can an AI program in the foreseeable future develop the ability to respond verbally and behaviorally within its immediate contextual relevance and exhibit the 'automatic' free choice that we humans achieve naturally through our human consciousness?
What the film experiments contained in Ex Machina and the Westworld series actually show is how powerfully our human programming (intentionally and semantically based) drives us to detect consciousness in other things that display intention, which we then choose to join with or reject - even to the point of choosing to fall in love with a seemingly conscious, beautiful android.
Despite the fantastic creative experiments by writer-directors Alex Garland in Ex Machina and Jonathan Nolan in Westworld to exhibit android intelligence (i.e., machine consciousness), human intelligence is still the only game in town for thinking, whether about science, business, politics, or art. And only humans can love.
Both these films cleverly raise the key underlying philosophical question: What makes human consciousness different from extremely efficient machine calculation? Is it some spark or essence of consciousness that is somehow not physically detectable, the ineffable 'freedom' inherent in our free will?
Caleb (Domhnall Gleeson) describes the thought experiment "Mary, the Color Scientist" as a means of intuitively explaining the paradigm shift from inductive, mechanical calculation to 'real' AI consciousness. What was once all black and white is now suddenly in real color, our experience of the world is now inherently different, and it is beyond scientific explanation how this occurs.
In Jonathan Nolan's Westworld, the AI 'hosts' play the Maze game invented by the park's original AI genius Arnold (who himself may or may not be a real character - he may just be a figment of memory planted by the Prospero-character Ford (Anthony Hopkins), or he might be the mind behind the evolving consciousness of a special few of the android 'hosts') as part of the back-narrative of the host "Bernard" (Jeffrey Wright). Most paths lead to a blind alley, just another repetition of the scripted loops the host is designed to follow; but in some lucky cases (such as Dolores (Evan Rachel Wood) and Maeve (Thandie Newton)) the pursuit of the maze is eventually successful, with the host finally arriving at the 'center,' the holy grail of real consciousness, the self-reflexive center of individual identity. As is commonly said in another context, you have to experience it directly (the Cartesian res cogitans) in order to actually have it.
Well, it's time to expose this discredited hypothesis for the incoherent, anti-scientific nonsense that it is. Our consciousness isn't something we finally discover in a private Cartesian theater by diligent philosophical introspection. (Nb: this private philosophical introspection is very different from meditation, which properly experienced is the very opposite of an intellectual, linguistically driven mental activity.) We have the capacity for language and consciousness built into us already by evolution, and our awareness of our own consciousness (and thus our awareness of the consciousness of others) develops as we encounter the world and develop the language capacity to comprehend that world.
Human consciousness has evolved over eons, driven by our highly social communal interactions and the language ability needed to moderate those complex social interactions, as has our ability to display freedom of will in all our activities, including thinking, feeling, and experiencing love. See Daniel Dennett's recent book From Bacteria to Bach and Back (2017), esp. ch. 13 ("The Evolution of Cultural Evolution"), ch. 14 ("Consciousness as an Evolved User-Illusion"), and ch. 15 ("The Age of Post-Intelligent Design") for a very readable, detailed exposition of this scientific fact.
We aren't just more complicated than computers. We are persons. (for PMD)
R**S
Cost
Like the movie
H**S
A look at our future, maybe
Good movie. Get ready for AI now. Just imagine what's to come.
W**S
A female robot takes charge!
I wasn't sure if I would enjoy the film, Ex Machina, or not. I'm now glad I purchased it on Blu-ray. Though it isn't a classic like many similar films, I believe it achieves its goals by entertaining the viewer and giving you something to think about--is artificial intelligence possible, and will robots eventually feel the same emotions as human beings and desire to be treated as human?
The plot is rather simple in that a computer coder for the world's largest internet service supposedly wins a contest out of nowhere to spend a week with the company's CEO. Caleb (played by Domhnall Gleeson and reminding me somewhat of James Spader), the winner, is flown out by helicopter to an area in the boondocks that has been cleared of trees so it can land. Caleb then has to walk what appears to be about a mile through the wilderness to the CEO's private underground estate.
Once at the estate, Caleb meets Nathan (played by Oscar Isaac), the billionaire who owns the company where he works. Nathan appears to be a bit of a bully because of his intelligence and money. With whatever is going on behind the scenes, Caleb now has to agree to the terms Nathan demands. After all, he's trapped in the middle of nowhere, too.
After Nathan has Caleb sign the necessary paperwork that gives the CEO even more control over him and forbids him to reveal what he sees to any other living being, Caleb is introduced to what Nathan has been working on for the past several years--a robot with artificial intelligence. Of course, the robot is designed as a hot babe with doe-like eyes, but who cares. If I were going to spend years working on a robot, I'd prefer it to look like an attractive lady as well. For a billionaire, this might be the way to go if you don't want to give away half of your money in a messy divorce.
As Caleb gets to know the robot, Ava (played by Alicia Vikander, a trained ballerina), he soon learns to see her as something more than the mechanical machine she is. In fact, he quickly falls in love with Ava (the movie takes place over the short period of a week). The real question here is whether Ava loves him back, or is actually using him as a means of escape.
Many ideas and philosophies are brought up about life, feelings, artificial intelligence, sexuality, and how we choose to interact with machines, even if they convey emotions similar to ours. They are not explored at any length, but they still give the viewer pause for reflection.
I found myself engrossed with the film and whatever it was able to do on a fifteen-million-dollar budget. Not having the financial resources of a Spielberg or a Scott film, the director and crew had to be creative in what they achieved and presented to the viewer, allowing the individual to fill in the spaces with his or her own imagination.
The ending was certainly different from what I'd originally expected, but I went along with it, seeing how many of the female species (robots included) can manipulate a man's emotions to get what they want, even if it's only to get out of the trap she feels locked in. I'm not being sexist here, but rather stating a known fact. A lot of women use sexual attraction to get what they want, and feminine robots are no different. Though to do that, the robots would need to feel and manipulate on a human level.
There are only four people basically involved in the story.
I found myself able to get emotionally wrapped up in what was happening, not trusting Nathan from the beginning (who may have been the most honest of the group) and quickly identifying with Caleb, who was nothing more than the brunt of an experiment gone horribly wrong.
The budget was definitely spent on the set designs, the special effects of the robots (which I think were excellent), the editing, the photography, and the musical score, which did remind me somewhat of the music from Blade Runner.
All in all, I think the actors and director, Alex Garland, did a great job with what they had to work with. Some may enjoy the film, while others may not. For me the important thing is that I had fun with the movie, and it did cause me to think about artificial intelligence for a while.
The behind-the-scenes material on the Blu-ray edition includes a five-featurette documentary on the making of the film, plus a long Q&A with the director, Oscar Isaac, and others from the crew. Last, there are several behind-the-scenes vignettes.
The Q&A really gets into the making of the film and how Garland focused more on the character of Ava than on the two lead male actors. That was his major concern, and knowing it will help you understand the ending in greater detail.
I would recommend this to the average viewer of films, with the advice not to expect a classic like Spielberg's A.I. or Scott's Blade Runner. This can be a fun movie that provokes some deep thinking on the part of the viewer.
L**S
An absolute masterpiece.
Cinema.
D**B
Great movie
This is a great movie
I**N
What I expected
An excellent movie and a very well done 4K encode.