Robots That Care

Prompt: In 2010, we rely on machines for many of our daily activities. Some argue that this reliance on machines can enhance our lives. Others argue that it may diminish human interactions. Both views are expressed in the article you’ve read, “Robots That Care.” Based on evidence from the article and your own views, write an argument that addresses the question: “What role should machines play in our lives?”

According to Webster’s Dictionary, a machine is “an apparatus consisting of interrelated parts with separate functions, used in the performance of some kind of work.” Under this broad definition, it can be argued that machines are all around us and already play an enormous role in our lives. However, a robot is defined more narrowly—“a machine that resembles a human and does mechanical, routine tasks on command.” It is important, then, to distinguish between intelligent and unintelligent machines when examining what role they should play in 21st-century daily life.

The difference between an unintelligent machine and a pseudo-intelligent one lies in the definition of the word “robot”; the key phrase is “resembles a human.” Because machines that resemble humans have different implications for society than machines that do not, two separate terms are required. For the sake of this essay, we will use Webster’s definition of “robot” while reserving the word “machine” for an apparatus that does not resemble a human being. Under these definitions, the prompt of this essay becomes twofold: “What role should machines play in our lives?” and “What role should robots play in our lives?”

It is undeniable that our modern world is saturated with machines. The use of machines is ingrained in our daily lives. Most of the time we do not realize how many machines we rely upon until their use is taken away from us, such as during a power outage. However, it would be naïve to think that all machines are electronic, or that during a power outage all machines vanish from our lives. The wind-up clock, the hunter’s bow and arrow, the screwdriver, the wheel and axle, even the ramp for wheelchairs—these all fit our definition of “machine,” as they help us perform some sort of work. Because they help us accomplish tasks, machines are inherently good; abuse by the user, however, remains a definite possibility.

The use of machines in our daily lives, then, should be a means to a greater good, not merely an end in itself. An individual has the obligation to determine for himself whether his use of machines fits this criterion. If the president of a manufacturing company, for example, is considering replacing human workers with machines, it is his responsibility to first examine the costs and benefits of this proposal to ensure that a greater good will result for everyone. If the president finds that the expanded role of machines will reduce manufacturing costs at his gain and the workers’ expense, he should reflect on whether the expansion demonstrates an ordered or disordered use of machines in light of his true motives. If, on the other hand, the president finds that the expansion cuts costs, creates jobs in machine design and maintenance, and increases the quality of the finished product, the increased role of machines is justified, as the company, its workers, and its consumers all benefit. Thus, it is not the machine itself but how we use it that produces positive or negative consequences. Therefore, we must closely monitor the role machines play in our lives to ensure that the way in which we use them truly promotes the common good of society.

Determining the proper role of robots in our society is dramatically more complex. Because they “resemble a human,” robots have further-reaching social, emotional, and ethical implications than do machines. As M.I.T. professor Sherry Turkle warns in Jerome Groopman’s New Yorker article “Robots That Care: Advances in technological therapy,” “Robots…risk distorting the meaning of relationships, the bonds of love, and the types of emotional accommodation required to form human attachments.” In other words, robots are not capable of being a true companion, expressing true love, or showing true concern because they are, in essence, wires and transistor chips pre-programmed with software. Just as there is little satisfaction in programming a computer to display “I love you,” a robot programmed to love, encourage, or enliven is essentially devoid of authentic meaning. It is the fact that robots attempt to portray emotions that only humans can authentically express that makes them so potentially dangerous.

Despite this apparent harm, it is important to recognize that robots in and of themselves pose no danger; rather, their responsible use requires the mutual understanding of both the creator and the user. As Groopman’s article suggests, the creator must “observ[e] Isaac Asimov’s First Law of Robotics: the robot must not injure the patient.” That is, the robot—like machines, as previously discussed—must help the user. Equally important, the user must understand the robot for what it is—a machine that “resembles a human,” not a machine that is a human. The user must be perfectly clear that any emotions conveyed by the robot are nothing more than bits and bytes, ones and zeros, and if/else statements, so as to mitigate the chance of the user forming an emotional attachment to the robot. This principle may seem easy to follow; most healthy adults would laughingly dismiss the idea of a robot as a person. However, we must examine the settings in which robots are being employed today. According to Groopman’s article, many robots are currently assisting the elderly and persons with autism and other psychological vulnerabilities. Therefore, it is not inconceivable that for such people a robot could become a companion equal to a human one, a companion who, if removed, broken, or reprogrammed, could cause serious emotional grief. “What happens if a robot breaks down, or is taken away, after the person invests the robot with the qualities of a grandchild or a companion?” asks Maja Matarić, a leading robot developer and professor of computer science at the University of Southern California, quoted in Groopman’s article. Thus, any perceived benefits of mixing robots with elderly or mentally unstable patients are outweighed by the potentially drastic emotional consequences.

Nevertheless, studies referenced by Groopman suggest that robots have had limited success in treating those suffering from strokes, limb injuries, and—despite the dangers—even autism. However, serious problems must still be worked out before robots can have as active a role as machines in our lives. The possibility of subjective confusion about the objective nature of a robot is still too great. While the responsible use of machines should continue to play an active role in our everyday lives, at present the safest place for robots is still on the LCD screen.
