An interesting topic which deserves better treatment than a collection of Vox-style op-eds. This is not a book that wants to teach you how mathematical models can fail; it's a book that wants you to feel OUTRAGED about UNFAIRNESS.
Here's how it works. There's some area that's supposed to be improved by using a mathematical model (say, teacher evaluation in public schools). But after implementing this system there are some casualties (say, an unfairly fired teacher who was well liked and respected by both students and parents), which is bad and leads to a lengthy discussion of the perils of capitalism.
Don't get me wrong, all the things discussed in the book (which include recidivism, future job performance, and insurance) are indeed hard to model, but that's not a good way to discuss these models. One of the book's ideas is that you should forgo some of a model's accuracy to make it more fair. However, it's hard to talk about trade-offs without talking about how much accuracy and utility we actually gain or lose. Did this teacher evaluation model improve overall school performance? If it did, would it be fair to students to send them back to their horribly unimproved previous school performance? Or was it actually not that bad, and their test results improved simply because of better lunches (or even less lead in the water)?
The chapter on credit scores grudgingly admits that human curation wasn't perfect (painting the expected picture of a banker discussing loans with his golf partners). Skip ten pages, and there's a friendly woman who helps clean up the mess made by an automated system that confused a client with a criminal namesake. Humans are winning again!
Except that humans still have their own models, which are also bad (albeit in a different way). However, it is much easier to fix biases in algorithms and data than in people. One of the book's common complaints is that computers can only project past data onto the future, preserving all those biases. That problem can be fixed. Humans can't.
More generally, it may be fun to complain about a model's issues, but it's only useful if you compare it to the alternatives. An implicit message of the book seems to be that we should ban the use of some algorithms and data (as expected, there's no discussion of second-order effects: if credit becomes more expensive, what happens to the economy? Is that trade-off worth it?). However, we can't simply ban things and forget about them; we can only replace them with something else.
I don't think a book that is strictly about the negative sides of something necessarily needs to strive for objectivity. However, I would like to see fewer diatribes against greed and more interviews with the people who designed the models. What do they think about these problems?
(By the way, if you explain something by greed, you're already wrong.)
Some quotes are amazing, though.
fairness is squishy and hard to quantify. It is a concept. And computers, for all of their advances in language and logic, still struggle mightily with concepts. They “understand” beauty only as a word associated with the Grand Canyon, ocean sunsets, and grooming tips in Vogue magazine. They try in vain to measure “friendship” by counting likes and connections on Facebook. And the concept of fairness utterly escapes them. Programmers don't know how to code for it, and few of their bosses ask them to.
But I would argue that the chief reason has to do with profits. If an insurer has a system that can pull in an extra $1,552 a year from a driver with a clean record, why change it?
The clumsiest worldbuilding I've seen in a while (even though the result isn't as bad as the process); overexplanation still sucks; it's easy to predict everything that's going to happen; still mostly fun.
Amazingly, one can be considered an expert and still write a book based on the it-worked-for-my-friend's-company type of argument.
Supposedly, the framework goes like this: if the company has some issues, we should just switch to Scrum and then follow it strictly. If something gets in the way of the switch, we should call it an “impediment”, fix it, and rest happily assured that these impediments were the main issues in our company. Why this should be the case is never discussed.
Well, maybe this book isn't supposed to convert me. So it quickly moves on to describing how one should transition to Scrum (create a Scrum team that will guide other teams to Scrum; it should also fix the impediments) and to some discussion of possible problems. Then there are solutions.
Scrum master (spelled as “ScrumMaster”) isn't good enough? Replace him. Product Owner can't cope with the backlog? Replace him. Team still works slowly? Replace everyone.
Legacy code is discussed for several pages in the most obvious possible way. Proposed solution? Fix it or deal with it.
Finally, there's redundancy, more redundancy and some redundancy again. Also, redundancy.
“Any liveliness comes solely from the ideas,” hilariously writes Mr. Egan in his review of [b:A New Kind of Science|238558|A New Kind of Science|Stephen Wolfram|https://d.gr-assets.com/books/1386925097s/238558.jpg|231083]. However, it's not an issue per se that Distress itself doesn't go further than that.
Neal Stephenson, another author famous for including lots of exposition, claims that “story is everything”; Greg Egan also “wants to tell a story”. However, while Mr. Stephenson writes great fiction, Mr. Egan merely tries to, and although there's an interview where he rants against the standard “development” of characters, Distress has a lot of remarkably unmotivated stuff that looks like “I was told a novel needs this”. Andrew Worth's supposed transformation is the best (and the main) example: he has a break-up, he's bored with his whole life, he talks to interviewees and Stateless locals (whose Ayn Rand-ish transhumanist speeches are probably the second main source of liveliness), he falls ill, he's on the verge of death, he's reborn as another person. Yet in fact he doesn't change: maybe because he had no identity before, maybe because all of this is incredibly superficial.
It would be great if one day Mr. Egan started writing non-fiction on the sociology and ethics of technology adoption, but sadly, that day will probably never come.