On Hollow Ground by Donald Gavron
My rating: 4 of 5 stars
If it seems clear, you haven’t understood this book. The author weaves a tale that revolves around the protagonist Troy Fox and a mysterious book of knowledge. There’s a lot of ambiguity as Troy proceeds on an underworld odyssey. Does whoever controls the book control the world? Or is the book just a string of self-help aphorisms worthy of a late-night infomercial? The author takes us on a journey where we may find this out (or not). It’s a short, fast read, closer to a novella than a novel. There are four short stories included at the end, and they are pretty good: one is horror and a couple seem autobiographical.
View all my reviews
The heart of this book is the question of how non-speaking people with autism can communicate. The authors detail a potential breakthrough method: Spelling to Communicate (S2C). In brief, a non-speaking person with autism answers questions by pointing to one letter at a time on a letterboard held by an assistant. The reported results are amazing; non-speaking people with autism are able to communicate complex thoughts for the first time. The authors of the book are a father-son pair; the son, Jamison, is a non-speaker. It touched me how deeply the entire family wanted the best for Jamison. Reading the book, I realized that if my son couldn’t speak, I would certainly embrace S2C.
A major issue with S2C is that it doesn’t land within the domain of current speech therapy science. The best science I can find to support S2C (cited in the book) is by V.K. Jaswal and colleagues at the University of Virginia. It would be great to see additional supporting papers using other measurement techniques from the neuroscience toolbox. If the results can be substantiated, S2C would be a paradigm shift for non-speaking people with autism. Below is a reference to the paper in question; I would urge those interested in the science of S2C to read it.
Jaswal, V.K., Wayne, A. & Golino, H. Eye-tracking reveals agency in assisted autistic communication. Sci Rep 10, 7882 (2020). https://doi.org/10.1038/s41598-020-64553-9
I searched for a paper that would provide a factual counterbalance. This essay by Stuart Vyse provided some cogent discussion of additional experiments that would affirm or deny the usefulness of S2C.
Vyse, S. Of Eye Movements and Autism: The Latest Chapter in a Continuing Controversy.
I’m an engineer, but I have spent a few years educating myself in neuroscience. In particular, I have studied advances in brain-computer interface research, where locked-in quadriplegic patients have regained some movement capabilities using brain-computer systems. The best suggestion I have for validating S2C is to develop a system that takes the human assistant out of the loop after some training.
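To make the “assistant out of the loop” idea concrete, here is a minimal sketch of how a gaze-only letterboard decoder might work: track where the user is looking, and select a letter whenever gaze dwells on the same cell long enough. Everything here (the board layout, the dwell threshold, the function names) is an illustrative assumption of mine, not a description of any real S2C or brain-computer system.

```python
# Hypothetical sketch: map a stream of gaze samples to letterboard
# selections with no human assistant in the loop. Layout and thresholds
# are illustrative assumptions only.

from typing import List, Optional, Tuple

# A 4x7 letterboard (rows of letters), purely illustrative.
BOARD = ["ABCDEFG", "HIJKLMN", "OPQRSTU", "VWXYZ.?"]
CELL = 1.0  # each letter occupies a 1.0 x 1.0 region in gaze coordinates

def letter_at(x: float, y: float) -> Optional[str]:
    """Return the letter under a gaze coordinate, or None if off-board."""
    col, row = int(x // CELL), int(y // CELL)
    if 0 <= row < len(BOARD) and 0 <= col < len(BOARD[row]):
        return BOARD[row][col]
    return None

def decode(samples: List[Tuple[float, float]], dwell: int = 3) -> str:
    """Emit a letter when gaze stays on the same cell for `dwell`
    consecutive samples; gaze must leave the cell before the same
    letter can be selected again."""
    out: List[str] = []
    current: Optional[str] = None
    run = 0
    armed = True  # re-armed once gaze moves to a different cell
    for x, y in samples:
        letter = letter_at(x, y)
        if letter == current:
            run += 1
        else:
            current, run, armed = letter, 1, True
        if armed and letter is not None and run >= dwell:
            out.append(letter)
            armed = False
    return "".join(out)

# Three samples dwelling on 'H', then three on 'I', decodes to "HI".
print(decode([(0.5, 1.5)] * 3 + [(1.5, 1.5)] * 3))
```

A real validation study would of course replace the simulated samples with calibrated eye-tracker output, but the design point is the same: once the decoder, not an assistant, holds the board and interprets the gaze, the authorship question becomes directly testable.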
Book Review Army of None: Autonomous Weapons and the Future of War by Paul Scharre (reviewed 8 July 2019)
We are witnessing the evolution of autonomous technologies in our world. As with much technological evolution, military needs drive much of this development. Paul Scharre has done a remarkable job of explaining autonomous technologies and how military establishments embrace autonomy: past, present and future. A critical question: “Would a robot know when it is lawful to kill, but wrong?”
Let me jump to Scharre’s conclusion first: “Machines can do many things, but they cannot create meaning. They cannot answer these questions for us. Machines cannot tell us what we value, what choices we should make. The world we are creating is one that will have intelligent machines in it, but it is not for them. It is a world for us.” The author paints a clear picture of what an autonomous world might look like.
Scharre spends considerable time defining and explaining autonomy; here’s a cogent summary:
- “Autonomy encompasses three distinct concepts: the type of task the machine is performing; the relationship of the human to the machine when performing that task; and the sophistication of the machine’s decision-making when performing the task. This means there are three different dimensions of autonomy. These dimensions are independent, and a machine can be “more autonomous” by increasing the amount of autonomy along any of these spectrums.”
These two quotes summarize some concerns about making autonomous systems fail-safe. (Spoiler alert: it can’t be done…)
- “Failures may be unlikely, but over a long enough timeline they are inevitable. Engineers refer to these incidents as “normal accidents” because their occurrence is inevitable, even normal, in complex systems. “Why would autonomous systems be any different?” Borrie asked. The textbook example of a normal accident is the Three Mile Island nuclear power plant meltdown in 1979.”
- “In 2017, a group of scientific experts called JASON tasked with studying the implications of AI for the Defense Department came to a similar conclusion. After an exhaustive analysis of the current state of the art in AI, they concluded: [T]he sheer magnitude, millions or billions of parameters (i.e. weights/biases/etc.), which are learned as part of the training of the net . . . makes it impossible to really understand exactly how the network does what it does. Thus the response of the network to all possible inputs is unknowable.”
Here are several passages capturing the future of autonomy. I’m trying to summarize a lot of the author’s work in just a few quotes:
- “Artificial general intelligence (AGI) is a hypothetical future AI that would exhibit human-level intelligence across the full range of cognitive tasks. AGI could be applied to solving humanity’s toughest problems, including those that involve nuance, ambiguity, and uncertainty.”
- The “intelligence explosion”: a concept first outlined by I. J. Good in 1964: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” (This is also known as the Technological Singularity.)
- “Hybrid human-machine cognitive systems, often called “centaur warfighters” after the classic Greek myth of the half-human, half-horse creature, can leverage the precision and reliability of automation without sacrificing the robustness and flexibility of human intelligence.”
In summary, “Army of None” is well worth reading to gain an understanding of how autonomous technologies impact our world, now and in the future.