Broken Christmas Toys
Wednesday, December 26, 2012, by Steven H. VanderLeest
They could make toys better. They could make them stronger, less prone to wear and damage. They could make them safer, with fewer dangerous small parts, with fewer toxic materials, with more comprehensive testing. They could make them more educational, smarter, more sophisticated. They could make toys better. But they don’t.
When our children were young, it was not unusual during the days and weeks after Christmas to find a Christmas present already broken and discarded. It started out life beloved and cherished right out of the gift wrapping. The doll joined a tea party. The Hot Wheels car joined a parade and then a race. The new watch went on the wrist for the rest of the week. Some toys came back for repair within minutes, while others lasted days. A few sturdy stalwarts lasted long enough to be handed down to a sibling. Why weren’t all the toys made that sturdy? Why were some made of flimsy materials that easily broke in the hands of an industrious four-year-old child?
Toy designers and manufacturers do have a choice. They could make better toys. Why don’t they? Because we consumers so often choose lower price over higher quality. Imagine a toy seller who produces two models of the same toy. The first model is made of inexpensive materials, with little attention to durability. Costs are reduced further by slimming down the thickness of each part and minimizing the number of fasteners by using an inexpensive sealing process. This makes the toy not only more frangible, but also less repairable. The second model is made to last, with high-quality materials. The designer pays attention to likely wear patterns and beefs up the parts where weakness might otherwise lead to breakage. More expensive fasteners are used so that the toy can be repaired, should any problems occur. From the outside, the two toys appear quite similar. A Christmas shopper in a hurry probably couldn’t spot the higher quality of the second toy without close examination. The only clear difference is the price, which is almost three times higher for the second model than for the first. Towards the end of the shopping season, the first model has sold out, yet stacks of the second remain. Why don’t they make toys better? It isn’t some insidious toy conspiracy. It is because we ourselves won’t pay for the higher quality. You get what you pay for. We choose to pay little, so we get little.
The forced choice in making a toy is not unusual. Trade-offs are implicit in most engineering designs, requiring a balance between multiple goals that each appear to be good, yet more of one requires less of the other. Balancing cost and quality is just one example. We trade off weight (and indirectly safety) against high gas mileage in automobiles. We trade off time to market against thoroughness of clinical testing for new pharmaceutical drugs. We must often prioritize the competing goods of aesthetics, performance, reliability, safety, recyclability, and more. I once asked my students in an engineering class about the difference in the rigor one should use in designing electronics for a portable MP3 music player when compared to designing a medical instrument to monitor an infant’s vital signs. At the one extreme, some students indicated there should be no difference. They thought that Christians should do their best and produce the most excellent and safe designs regardless of the intended use. This position, advocating equal attention to all designs regardless of intended use, has some scriptural support. Colossians 3:23 tells us “Whatever you do, work at it with all your heart, as working for the Lord, not for men.” No matter where we find ourselves, every occupation is worthy of our best efforts as an offering to the Lord. At the other extreme, some students indicated that the infant monitor should be designed with the utmost care and much more attention, compared to the music player. This position, advocating for more care when the intended use is more critical, also has some scriptural support. Philippians 4:8 tells us “Finally, brothers, whatever is true, whatever is noble, whatever is right, whatever is pure, whatever is lovely, whatever is admirable—if anything is excellent or praiseworthy—think about such things.”
Can you ever go overboard on safety? Is there ever an acceptable risk? I believe so. Consider two examples. First, look at the common nail hammer. It is designed to pound nails into wood. This purpose leads to a design with a hard striking surface, a relatively heavy weight to provide momentum when the striking surface is swung, and a long handle that amplifies the speed of the swing into a powerful impact on the head of the nail. The design is appropriate to the need. The design is also deadly. That same powerful impact on the head of a person will kill. We could alleviate that risk by reducing the weight of the head, softening the striking surface, shortening the handle to reduce the swinging force, and so forth. The resulting pillow on a stubby stick would no longer be able to kill, but it wouldn’t be able to pound nails either. Second, look at making your car safer by adding steel plating to protect you during a crash. However, plating makes the car heavier, so gas mileage plummets. Plating in place of fragile windows would be even more protective, but then you wouldn’t be able to see out very well, making driving less aesthetic and probably more accident-prone. If we add even more plating to make the car even safer, it may not fit in the lane anymore, nor fit in your garage. That extra plating will also cost you—so much that it might price the car out of reach of most budgets.
Good designs are thus a balance of competing goods. If the balance is distorted, favoring one goal to the exclusion of all others, the resulting product is usually dysfunctional, because proper function depends on meeting multiple goals simultaneously. Not only are products the result of a trade-off, but the engineering design process itself is also a trade-off. The old saw “Better, faster, cheaper—pick any two” is a reflection of the balance between the scope, schedule, and cost of a project. Does this mean that one must always accept less of one goal in order to achieve more of another? Not necessarily. Sometimes we find a clever new way to achieve both lower cost and higher quality, e.g., by reducing waste. Sometimes we find an innovation that lets us achieve both environmental stewardship and corporate profit, e.g., by reuse and recycling. Sometimes we find a way to make a part both lighter and stronger, e.g., by using composite materials. I think such combinations are particularly excellent and praiseworthy.
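This idea—that neither of two competing designs simply “wins”—is what engineers call Pareto optimality. A minimal sketch, using hypothetical toy designs and made-up scores (none of this is from the essay itself), shows how two designs can each be better on a different goal so that neither dominates the other:

```python
# Toy illustration of an engineering trade-off (hypothetical names and numbers).
# A design "dominates" another if it is at least as good on every goal
# and strictly better on at least one. Higher scores are better.

def dominates(a, b):
    """Return True if design a dominates design b."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Goals: (durability, affordability) on a 0-10 scale.
cheap_toy = (3, 9)   # flimsy, but inexpensive
sturdy_toy = (9, 3)  # durable, but costs almost three times as much

print(dominates(cheap_toy, sturdy_toy))   # False
print(dominates(sturdy_toy, cheap_toy))   # False
# Neither design dominates: each is better on one goal and worse on the
# other, so the buyer must weigh one good against the other.
```

The rare innovations praised above—lighter *and* stronger, cheaper *and* better—are precisely the ones that escape this stalemate by dominating the old design on every goal at once.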
When Machines Think
Wednesday, December 05, 2012, by Steven H. VanderLeest
It wasn’t really the president, it was a machine. When I was young, my family took a summer vacation trip to Walt Disney World in Orlando, Florida. One of the memorable exhibits was the Hall of Presidents, where Animatronic likenesses of the presidents speak to the audience. This was no static, stale wax museum where a few stiff movements might be jury-rigged into an arm or leg in a few of the displays. This was all the US presidents, displaying life-like movement that looked quite real, at least to a young boy from the distance of a seat mid-way back in the amphitheater. Of course even young children knew these were not truly real men but merely robotic impersonators. Nevertheless it was fascinating to watch the show unfold and enjoy the android replicas.
About that same time I started reading science fiction, a pastime that would become a lifelong appreciation for the genre. I read every single science fiction book the Grandville, Michigan library had to offer (Dune, by Frank Herbert, was one of my early favorites). I bought more books at garage sales. I borrowed more from friends. I signed up for a mail-order book club that offered a special deal on a bonanza of books when you joined, adding dozens more books to my collection like Isaac Asimov’s Foundation series. My enjoyment of science fiction was not limited to the written word, but spilled over to television and the cinema, where Star Trek and Star Wars quickly became favorites.
The thing about science fiction is that it doesn’t always stay fiction. The fantastical babies grown in jars and the abhorrent eugenically-produced societal castes of Huxley’s Brave New World were imaginative stories of technology. However, only a few generations after his 1932 novel, those technologies became reality. The first test tube baby was born in 1978, the first genetically modified crop appeared in 1982, and Dolly, the first cloned mammal, was born in 1996. Another imaginative take on futuristic technology was Steve Austin, the eponymous main character of the 1970s television show “The Six Million Dollar Man”. Just a couple decades later, the technology of bionic limbs has become reality in the incredible robotic prosthetics that provide delicate control and feedback to amputees.
Perhaps the most interesting science fiction technologies are machines that think. Human-looking robots that also act human are no strangers to the silver screen of science fiction. The replicants of Blade Runner and the android Lt. Cmdr. Data of Star Trek: The Next Generation are just two examples. Have those imaginative stories become reality? Not yet. There are certainly fast computational devices with large databases of information, such as IBM’s Watson, which beat two human Jeopardy! champions recently. Can Watson really think? I think not. Could a machine ever think? Possibly.
Machines that could think could also be machines that are dangerous. Asimov considered that possibility in many of his science fiction stories and thus formed his famous three laws of robotics:
- A robot may not injure a human, nor through inaction allow a human to come to harm.
- A robot must obey orders from humans, except if they conflict with the First Law.
- A robot must protect itself as long as doing so does not conflict with the First or Second Law.
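The three laws form a strict precedence hierarchy: each law applies only when the laws above it are not at stake. A minimal sketch of that ordering (the function, the dictionary fields, and the scenarios are all my invention, purely illustrative):

```python
# Illustrative sketch of Asimov's three laws as a precedence check.
# All names and the structure of `order` are hypothetical.

def may_obey(order):
    """Return True if a robot may carry out `order`, applying the
    three laws in strict priority order. `order` describes the
    predicted outcomes of the action."""
    # First Law: never injure a human, nor allow harm through inaction.
    if order["harms_human"]:
        return False
    # Second Law: obey orders from humans, unless that conflicts
    # with the First Law (already ruled out above).
    if order["from_human"]:
        return True
    # Third Law: protect itself, unless that conflicts with the
    # First or Second Law (both already handled above).
    return not order["endangers_self"]

# A human's harmless order must be obeyed, even at risk to the robot itself:
print(may_obey({"harms_human": False, "from_human": True, "endangers_self": True}))   # True
# An order that would harm a human is refused, whoever gives it:
print(may_obey({"harms_human": True, "from_human": True, "endangers_self": False}))   # False
```

Note how the ordering itself makes the tension discussed below concrete: self-preservation sits at the bottom, overridden by any human command.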
These laws seem to be reasonable protections for humans, but I see an interesting contradiction. If even sophisticated robots are simply deterministic automatons, then it seems odd to bother with the last law. Why grant self-preservation to a machine? I suppose such a law might simply reflect the interests of the robot’s owner in protecting valuable property. But that third law could also imply that the robot might really be thinking and not simply following a computational recipe. If we believe that we ourselves are really thinking, and not simply following a deterministic genetic and biological recipe, then we might grant some measure of self-protection to a thinking robot as well. But if we think the robot thinks, then the second law seems rather like slavery. I don’t think we can have it both ways: a convenient mechanistic slave to obey my every command but also smart enough to interpret the world around it and creatively respond to the nuances and complexities of real world situations. If I own a human-looking robot that is smart enough to also act human, may I hurt it? May I torture it? What does that say about the status of the robot? More importantly, what does that say about my own humanity?
Perhaps as a way to avoid any uncomfortable questions, we might simply define humans carefully so that such human-like machines are obviously not in the club, so that we might treat them however we wish. However, I am hesitant to draw lines around human-like androids, thus naming them simply machines with no obligations attached and no attendant responsibilities to worry me. Why does it worry me? As machines become more human-like, I wouldn’t want to be so stingy in defining what it means to be human that my rubric not only disenfranchises the machine but also boxes out the most vulnerable of humans, allowing us to treat them carelessly too, such as the unborn child, the accident victim lying in a coma, the student with a learning disability, the poor, or the terminally ill. God calls his people to protect the weak, as a matter of justice. God calls his people to be generous to the vulnerable, as a matter of mercy. God calls his people to guard against pride that causes us to treat others shabbily, as a matter of humility.