Thursday, August 5, 2010

Health Care Stats from Time

Time Magazine, my regular bathroom reading material, continues to report statistics from academic journals with insufficient context or explanation. Here are three recent clips.

* 165% - Increase in risk of STDs -- including HIV -- in men who take erectile-dysfunction drugs.

Obviously, men with ED taking ED drugs are going to have more sex overall than if they did not take the drugs, and having more sex means more risk of STDs (unless this sample consists entirely of men who only ever have sex with disease-free partners or partners who already share the participants' STDs). But that isn't necessarily what this clip says. This clip could also mean that ED drugs increase the risk of contracting STDs even when the amount of sex is held constant. The amount of sex study participants had is a confound in this study, and researchers usually try to control for confounds using statistical techniques so that we can better see what direct effect a determinant (the drugs) has on an outcome (STDs). Maybe the ED drugs affect membrane permeability, body fluid viscosity, pore dilation, or immune system function. If the drugs affect any or all of those physiological factors, they could increase STD risk even if the men did not have more (or different) sex while taking the drugs. This clip is unclear, and therefore not useful by itself in informing decisions regarding ED drug use or endorsement.
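To make the confound concrete, here is a minimal simulation sketch. Everything in it is invented: in the fake data the drug has no direct effect at all, yet a naive model blames it, and controlling for sex frequency reveals the truth.

```python
import numpy as np
import statsmodels.api as sm

# Invented data: STD risk depends only on how much sex a man has, not on the
# drug itself -- but drug users have more sex, creating the confound.
rng = np.random.default_rng(0)
n = 5000
takes_drug = rng.integers(0, 2, n)            # 0/1 indicator for ED drug use
sex_freq = rng.poisson(2 + 3 * takes_drug)    # drug users have more sex
p_std = 1 / (1 + np.exp(-(-4 + 0.4 * sex_freq)))
std = rng.binomial(1, p_std)

# Naive model: the drug looks risky because it proxies for more sex.
naive = sm.Logit(std, sm.add_constant(takes_drug.astype(float))).fit(disp=0)
# Adjusted model: with sex frequency included, the drug coefficient shrinks
# toward zero, exposing the confound.
X = sm.add_constant(np.column_stack([takes_drug, sex_freq]).astype(float))
adjusted = sm.Logit(std, X).fit(disp=0)
print(naive.params)      # large "drug" coefficient
print(adjusted.params)   # drug coefficient near zero; sex frequency carries it
```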

* 767,000 - Number of lives saved through improvements in cancer care over the past 20 years.

"Lives saved" is a terrible term. Everyone dies. If you stop someone from dying from cause A today, you have extended that person's life until he dies from cause B later. A much more useful thing to do is measure life-years saved, which of course requires estimations based on a large sample of data, but is frequently done. Usually even better than that is measuring risk-adjusted life-years saved, when possible, quality life-years (arguably subjective, but useful and there is good consensus), or both together. I am appalled by how much money is spent on certain cancer treatments that extend a person's life by a few months of immobility, pain, and/or mindlessness. That money could do vastly more good for people spent elsewhere. If cancer treatment lets a person live long enough to die from a heart attack, kidney failure, infection, pneumonia, or whatever, it has still "saved that person's life." Baloney.

I want to see a list of each treatment's cost per quality-adjusted life-year extended. All treatments. Our health care system has finite resources. Organize the list in increasing order by that cost. Then have our system provide treatments to people in that order until it is out of resources. This is not a crazy idea. This maximizes the benefit that people get from our system. Real people with families, friends, jobs, and lives. When you or someone you care about gets a condition with an inefficiently high cost, how many other people are you willing to deny more-effective care so that you or your person can have care? Are you more important than other people? Who would you kill so you can have a few more months?
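The allocation rule is simple enough to sketch. All numbers below are invented, and a greedy sort by cost per QALY is only a first approximation to real budgeting (it ignores partial funding and lumpy costs), but it shows the principle:

```python
# Greedy sketch of the rule above: sort by cost per QALY gained, fund in
# that order until the budget runs out. All figures invented.
treatments = [
    # (name, total_cost_usd, total_qalys_gained)
    ("vaccination drive",  1_000_000, 5_000),
    ("statin program",     2_000_000, 4_000),
    ("late-stage chemo X", 5_000_000,   200),
]
budget, funded, total_qalys = 4_000_000, [], 0
for name, cost, qalys in sorted(treatments, key=lambda t: t[1] / t[2]):
    if cost <= budget:
        budget -= cost
        funded.append(name)
        total_qalys += qalys
print(funded, total_qalys)  # ['vaccination drive', 'statin program'] 9000
```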

Is 767,000 good or bad? How many people had life-threatening cancer? How many people died from other things, and what would it have cost to save them? What would have happened if we spent that money on nutrition programs, or clean water for Africa? Just examples.

* 13.2 million - Estimated number of people who will die from cancer each year by 2030, double the number who died from it in 2008.

A couple of obvious things pop up. One, there will be more people in 20 years. Quantities are not useful to us by themselves; we want to know proportions. Two, what will the people dying from cancer not be dying of in 20 years? Are we expecting more people to die from cancer because people won't be dying so much from heart disease or accidents or infections? Everyone dies. Causes of death are a zero-sum game. As we die less often from heart attacks, we're just living long enough for the cancer to get us. Heck, if we cured all kinds of cancer today, would you be surprised the next day when the forecast was for triple the rate of deaths by heart disease? No.
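A quick back-of-the-envelope shows the difference between the count and the proportion. The death counts follow the clip's own figures; the world-population numbers are rough placeholders:

```python
# Counts vs rates. Death counts follow the clip (13.2M is double 2008's);
# the world-population figures are approximate placeholders.
deaths_2008, pop_2008 = 6.6e6, 6.7e9
deaths_2030, pop_2030 = 13.2e6, 8.3e9   # projected population, approximate
print(f"2008 rate: {deaths_2008 / pop_2008:.3%}")   # ~0.099%
print(f"2030 rate: {deaths_2030 / pop_2030:.3%}")   # ~0.159%
print(f"count x{deaths_2030 / deaths_2008:.1f}, "
      f"rate x{(deaths_2030 / pop_2030) / (deaths_2008 / pop_2008):.1f}")
```

Even under these rough assumptions, the count doubles while the rate rises by considerably less.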

Monday, July 12, 2010

Has Science Failed Us With the BP Oil Spill?

I came across a quote in Time from Elizabeth Rosenthal, originally in the New York Times on May 28, 2010, in which she says that America's belief that technology will save us (in general) is apparently misplaced because "scientists" haven't been able to stop the spill.

This is so many kinds of absurd that I felt motivated to write again. Remember that science is only a method by which we determine the probability of certain outcomes given specific circumstances. The more that circumstances are changed, the less able we are to predict outcomes.

Does America believe that science will save us?
I can actually go along with this, but it takes different forms when you look at subgroups of Americans. For example, your typical Republican or Libertarian thinks it's a great idea to remove pollution restrictions from products and companies because they believe science will help us fix the environment and cure cancer and so forth in the future (and they hold the thoroughly disproven belief that consumers will force companies to be environmentally and health conscious by only buying responsible products). Your more liberal person may have the more accurate idea that science will certainly tell us what should be done to save us from various threats, as it already largely has, but that people have to actually take action and engage in the prescribed behaviors to be saved. For example, science has informed us that obesity is very bad, and that we can be saved from obesity by mindfully eating less, mostly plants, and getting exercise, but some folks keep stuffing themselves at McDonald's and waiting for a drug to be invented that will save them. So, science can certainly save us from threats when we actually use it to guide behavior, and when we have done enough science to make sure we have good results across real circumstances.

How does science relate to the spill?
There have been many oil wells and oil rigs over the last century. Some have spilled. Information has been collected and used scientifically to develop specific pipes, valves, computers, etc., to minimize the risk of spills. It seems that there is a disconnect, though, between the greedy businessmen who control the oil (BP) and construction (Halliburton) companies, and the government (MMS), which they influence through lobbying and campaign contributions to ignorant shill politicians, and the actual scientists who sit several rungs down the ladder in each organization. The greedy, irresponsible guys found out from their geologists that there was a huge oil deposit way deep in the Gulf. It's relatively new to drill so far under water and down into the earth. Since this type of drilling is newer, there is less information, and we have less ability to predict what will happen. Science would inform us to be cautious, and maybe start with only a couple of wells and monitor them for a long time before making more. The government authorized a whole bunch, though, because money told them to. There was no good oversight, no good monitoring, and an insufficient adjustment of the approach to the new conditions. The business approach was not scientific enough. Abnormally high-pressure oil combined with an unscientific approach to analyzing situations contributed to the failures that led to this spill.

Hasn't science failed to fix the spill?
Trying all of the different (seemingly crazy and often mocked) ideas for fixing the spill IS SCIENCE. Try a bunch of stuff and see what happens, then adjust your ideas based on the results and try new stuff. Has that process failed to fix the spill so far? Yes, but so what? There is nothing better to do. Doing nothing would certainly not fix the spill, and would give us no information for the future. If we already knew how to fix the spill, we would have done that, and that knowledge would have been the result of earlier applications of science. We're having a hard time because of the new circumstances, but that doesn't make science the wrong thing to do. When we eventually fix the spill, it will be because of the scientific method.

Should we pray to God instead of using science?
Louisiana just tried that. How did it work out? By the way, that's also science. The scientific method has shown us (in many contexts) that prayer doesn't do anything (oil spills are not subject to placebo effects). There is no evidence of an interventionist god at present.

Is this Obama's fault?
No. That doesn't even make sense. Why would the president, a former lawyer, know more about fixing oil spills than British Petroleum? What do you expect him to do? Tell the armed forces to fix it? Appoint a responsible person to head MMS? He already did that, and she was so overwhelmed by what a screwup agency it was that she left. We are facing a very large, systemic problem with how our government interacts with our corporate overlords. I'm amazed that Obama was even able to demand $20 billion from BP to compensate people for how the spill has affected their jobs, and Republican congressmen said BP shouldn't have to! I don't agree with everything Obama does, but he's doing some good things for American people, and it doesn't make a lick of sense to hold him responsible for this mess.

Tuesday, January 26, 2010

Dangerous Dowsing Deceit

I just found out via SomethingAwful about something that completely blows my mind. I am flabbergasted. Some jackass sold thousands of glorified dowsing rods to the Iraqi government for $85,000,000, claiming that they detect explosives. The Iraqis made no effort to verify that the equipment did what it was supposed to do. They spent $85 million on devices they relied on to protect their citizens (and some of our soldiers) without any evidence that they worked, and people have potentially been avoidably blown up since. This level of irresponsibility and stupidity astounds me. The guy also sold junk to Thailand, Pakistan, and Lebanon, not countries that come to mind when thinking about scientific cultures.

The BBC article continues with information that highlights our ubiquitous cultural need for an environment of skepticism and accurate evaluation of information:

Major General Jehad al-Jabiri said, "Whether it's magic or scientific, what I care about is it detects bombs," while expressing his belief that his opinions are more correct than those of the company that evaluated the bogus devices. Obviously he does not care if the devices detect bombs anywhere near as much as he cares about his pride.

Read this quote: "They don't work properly," Umm Muhammad, a retired schoolteacher, said. "Sometimes when I drive through checkpoints, the device moves simply because I have medications in my handbag. Sometimes it doesn't - even when I have the same handbag." Someone responsible for educating children can't tell the difference between correlation and causation. There's an applicable Latin legal term for this fallacy: post hoc ergo propter hoc, "after this, therefore because of it," long recognized as illogical and as poor evidence. This teacher thought the device sometimes responded to her medicine. A rational person would not make such a statement. We see quite a dearth of reason all around.

Our own FBI had to be told in 1995 to stop using bogus devices, and reminded in 1999. At least it seems that they get some independent verification of devices now.

No one has gotten James Randi's money yet! There's been $1 million on the table for decades waiting for anyone to demonstrate real dowsing, ESP, or whatever else.

Demand evidence! Don't just believe marketers! Don't blow money on dietary supplements and Airborne and fortune tellers and security measures that don't improve security.


Thursday, January 14, 2010

New York City Murders - 2009

The Jan 11, 2010 issue of TIME, in its "Numbers" section, reports that NYC had 461 murders in 2009 as of Dec 27. It then says that this is the lowest number of murders there "since the city began keeping records in 1962." Is that good? We can't really tell without additional information that is not presented.

Remember, when you see raw numbers presented like this, that they are almost meaningless alone. We need to calculate proportions, for starters, and also consider contextual changes. For example: if the population of NYC decreased, then it is possible that the murder rate per capita actually increased, despite a decrease in the raw number of murders.
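A toy comparison makes the point. The 2009 figures come from the article; the second row is a purely hypothetical shrinking city:

```python
# Raw counts vs per-capita rates. The 2009 figures come from the article;
# the second row is a hypothetical city with a smaller population.
scenarios = {
    "NYC 2009 (reported)":            (461, 8_400_000),
    "hypothetical city, smaller pop": (400, 6_000_000),
}
for name, (murders, pop) in scenarios.items():
    print(f"{name}: {murders} murders = {murders / pop * 100_000:.1f} per 100,000")
# Fewer murders (400 < 461), but a higher rate (6.7 > 5.5 per 100,000).
```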

Some poking around at the Census Bureau and the NYC Dept. of City Planning shows that NYC's population has been growing recently (from about 8.0 million to about 8.4 million between 2000 and 2008). A look farther back, however, shows that the population dropped significantly in the 1970s, and did not return to its 1960 level until perhaps late in the 1990s. Because of this, it is less interesting that the raw murder count is the lowest since 1962 than that it is the lowest since around 1980, or whenever the population was at its lowest. These comparisons are still inferior to knowing the true murder rates per capita.

Why was NYC singled out? No information is presented that makes NYC seem special for its decrease in murders. In fact, the Department of Justice's National Crime Victimization Survey (I love it) shows that the country's murder rate in 2004 was down to where it was in the mid-1960s.

It is interesting to note the sharp decline of the murder rate during the Great Depression of the 1930s, eh? That bit of context helps put the current dip into perspective as well. Perhaps affluence permits a greater demand for illegal drugs, which contributes to homicide among competing sellers. Or affluence permits more carousing, which presents more opportunities for interpersonal conflict than if people only stayed home. Affluence certainly permits a greater ability to purchase guns. Most murders are impulsive acts of anger by armed persons with poor inhibition.

My remaining question concerns the consistency of data collection. Over time, have there been changes in the classification of murder versus manslaughter, for example? Did the numbers (not the DOJ numbers) come from individual police precincts (and did they all report), from the CDC, or from somewhere else?

Hopefully this helped put Time's poorly presented number in a context that provides it with more meaning and utility.

Wednesday, December 9, 2009

Church Safety

I was extremely disappointed in Time magazine for its November 30, 2009 article about church safety and crime. This article was two pages long, which is two too many. It is just more media shock sensationalism, preying on people's irrational fears and lack of understanding of prevalence and probability to get attention. Here is some perspective and context for the numbers that the magazine so irresponsibly presented as "a flurry of violent crimes".

# of murders in churches since 2008 as reported: 5
# of murders in 2006 (most recent year for complete data, CDC): 18,573
# of violent crimes in churches in 2009 (10 months): 40
# of violent crimes in 2006 in USA (DoJ): 5,858,840

If the average church visit lasts about an hour and a half (~1/6000 of a year), we see that the murder rate is about average, and the violent crime rate in churches is minuscule. When you factor in that many people staff and visit churches beyond the weekly services, the relative rates of violence are even smaller. When we further examine the murders that did happen, two were going to happen regardless of the church setting (the spurning wife and the abortion provider); the church was just convenient. The situation with the stabbed priest is a mystery, but the remaining shooting was some ignorant redneck who wanted revenge on liberals for his unemployment and targeted a Unitarian Universalist church. Bizarrely, Time doesn't specify the type of church, and goes on to say that a conservative Christian group reacted with polls of church security measures. As I discuss elsewhere, conservatives tend to be overly fearful and less able to usefully evaluate information. "Security experts" go on to talk about churches' vulnerability, but they stand to make money off of scared congregations, so their biased comments should be taken cautiously.
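For anyone who wants to redo the exposure arithmetic, here it is spelled out. The national totals are the ones listed above; the weekly attendance figure is a loudly hypothetical placeholder, so treat the output as illustrative only:

```python
# Exposure-adjusted comparison using the figures quoted above. The weekly
# attendance number is a HYPOTHETICAL placeholder; swap in your own estimate.
us_population = 300e6
violent_crimes_per_year = 5_858_840           # 2006 DoJ total, quoted above
weekly_attendees = 100e6                      # hypothetical assumption
church_hours = weekly_attendees * 1.5 * 52    # 1.5-hour visit, once a week
total_person_hours = us_population * 24 * 365

share = church_hours / total_person_hours
expected = violent_crimes_per_year * share
observed_per_year = 40 * 12 / 10              # 40 crimes over 10 months
print(f"share of all person-hours spent in church: {share:.2%}")
print(f"expected violent crimes there if risk were uniform: {expected:.0f}")
print(f"observed: ~{observed_per_year:.0f} per year")
```

Even with a generous attendance assumption, the observed violent crime count is a tiny fraction of what a uniform-risk world would predict.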

The article includes an anecdote about a church in Houston that experienced many burglaries. Far, far from being representative of all churches, this story probably serves more as a warning for churches that sit in bad neighborhoods. If you're in an area riddled with drug addicts, you're going to get robbed whether you're a church or not.

This pathetic fear-mongering is shameful. There is no crime epidemic among churches, and churches are not at high risk for violence. I expected better from Time.

Friday, December 4, 2009

Determinants of Good Parenting

There was a decent Time article in the November 30, 2009 issue about how "helicopter parents" need to chill out. The first page talks about how irrational it is that parents have become so intrusive and safety-conscious over the last couple of decades despite drastic decreases in injuries and violent crime. What the article fails to address is the possibility that injuries and violent crime have decreased BECAUSE parents have become more intrusive and safety-conscious. There is no evidence presented in the article of causality in either direction, nor of possible confounds that could explain both correlated phenomena. This is terrible and misleading writing.

The rest of the article is great. It correctly points out that parents have been generally irrational when it comes to risk evaluation. It is vastly more dangerous to drive your kid to school than to let him walk to the store alone. It is worse to take your kids to visit family than to let them eat Halloween candy that hasn't been x-rayed. The sensationalistic media has thoroughly confused people who do not understand or seek out real information about event probabilities.

Finally, the article references the Freakonomics authors Dubner and Levitt, who say that three of the biggest determinants of well-raised kids are: parental education, spouse selection, and waiting to have kids. This is also misleading.

There are very clear factors that contribute to all three of these variables and to child-raising. As I write about repeatedly, people are on a continuum of what psychologists call "executive function," the abilities of the frontal lobe: planning, inhibition, predicting consequences, problem-solving. People at the low end (due to complex interactions between genetics and early experiences) are more impulsive and have trouble understanding information. These people are more likely to get pregnant early, do poorly in school, have rocky relationships, be hostile, etc. Of course their children are raised poorly and have the same genetic predispositions and vulnerabilities, perpetuating a cycle that cannot be interrupted by visits to museums or reading books. Change has to come from long-term exposure to positive relationships with other people that provide models for security, patience, reflection, and compassion. This rarely happens, even when social services are involved, because impulsive, ignorant people are often oppositional to services. These people drive away good spouses with hostility, and are more likely to end up in bad relationships due to impulsivity and a lack of understanding of their options and of the effects of their own behavior. There is a lot of perceived futility, because they lack exposure to positive behaviors and the ability to accurately evaluate behavior and consequences in general. These people are more inconsistent due to impulsivity, and more authoritarian because they can't handle complexity.

People at the higher end of the continuum are more thoughtful, understanding, planful, and calm. They have better relationships because they are in the habit of engaging in intentional, goal-oriented behavior that weighs probable consequences. They can think about people's feelings, including their own, and take effective action instead of relying on maladaptive impulsive reactions. They do better in school, are better at delaying and planning parenthood, and are more likely to raise their kids with compassion and productive interactions. They are more consistent with their kids, and less authoritarian.

The saddest part is that the bad parents tend to blame all of their children's failures and problems on the children, and refuse to accept their own roles in their children's development. They often refuse to change because they believe they do everything right. They tell schools and therapists to fix their kids, then blame everyone but themselves for the inevitable failures.

Don't worry so much about museums and reading books and whatnot. Just be a calm, patient, compassionate, responsive, thoughtful, empathetic, planning person, and the rest will tend to fall into place.

Saturday, November 28, 2009

Beer and Relative Rates

Someone forwarded along this article about marketing research for beer. The article and its reader comments highlight the importance of understanding what relative rates mean. Remember, just because one event is more likely than another does not mean that it always happens. It just happens more often.

The article starts out with the unfortunate title, "What Your Taste in Beer Says About You". Your taste does not say anything definite about you. It only says what you are more or less likely to be like than people with different tastes. Projective psychological tests, such as the famous Rorschach, are the same way. The subtitle "How Choice of Brew Relates to Personality, Politics and Purchases" is better in that it uses the word "relates".

The mix continues. "The beer you drink says a lot about you..." Not necessarily. "Your choice of beer can be as telling about your personality as what kind of clothing you wear or the car that you drive." Yes, that is correctly written. Again, this form of marketing research is similar to some psychological tests. It finds patterns among people's choices and behaviors, and these patterns show up in the forms of relative rates. "People who do A tend to also like B more than people who do not do A" means just what it says, and does NOT mean that all people who do A like B, nor does it mean that all people who do not do A hate B.
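A tiny numerical example (counts invented) shows how a genuine "tends to" difference coexists with plenty of individual exceptions:

```python
# Invented counts: a real group-level difference ("craft drinkers tend to
# buy organic more") coexists with many individual exceptions.
craft_organic, craft_total = 60, 100   # 60% of craft-beer drinkers buy organic
other_organic, other_total = 30, 100   # 30% of everyone else does
relative_rate = (craft_organic / craft_total) / (other_organic / other_total)
print(f"relative rate: {relative_rate:.1f}x")   # 2.0x: a genuine trend...
print(f"exceptions: {craft_total - craft_organic} of {craft_total} "
      "craft drinkers still do not buy organic")
```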

Depending on what teacher a psychologist had, psychological reports may be written with definitive language, or written more accurately to portray the true nature of the information's relationship to the client. This article goes back and forth. Overly definitive: "There's a slang term that could sum up Heineken drinkers: posers." Accurate: "The personality traits of people who prefer Blue Moon... tracked similarly to the same type of people who prefer craft beers...." I won't even get into the argument about whether such a thing as personality exists or how it should be defined.

The comments left for this article show the confusion and ignorance that I want to try to correct. MattCrill wrote, "Wow...what a bunch of hogwash. Couldn't be further from the truth. I'm a craft beer lover and absolutely none of your descriptions fit my profile." He failed to understand that the study results are about trends among large groups of people, though his confusion was aided by the inconsistently definitive wording in the article. DarcyBaily had an equally wrong understanding of the article: "This couldn't be further from the truth. I am a craft beer drinker and none of that fits me." Being in group A while not doing activity B does not make it a lie to say that most As do B.

Msalup said, "This article falls squarely on the "Uri Geller/Pseudoscience" arena. Can't believe that someone takes this kind of "segmentation" seriously." These are real statistics based on practices that multi-billion dollar corporations have used for decades because these segmentation practices are effective at guiding marketing and product development decisions. I don't know if it qualifies as science, but it's not just made up nonsense. GaryBuck commented on this article's "meaningless generalizations". Though the article does make some overly definitive statements, they are still not meaningless. The differences between the groups of beer-drinkers are meaningful, which is why the research was conducted. There are some good comments farther down the page.

So, the article could have been written more accurately, but I think many people would have had the same misunderstandings even if it were. Many people do not understand the qualifying language of statistics. This misunderstanding causes problems in people's decision-making and evaluations of the world around them. I am sure I will have more examples in the future.

For the record, I am a major explorer of craft beers (I keep a spreadsheet of what I've had with my reviews), and I do fit the mentioned trends except for buying organic.