New Study on Puberty Blockers, Cross-Sex Hormones and Youth Suicide and a Response to Turban and Singal

Last week the Heritage Foundation released my new study on the effect of state policies easing access to puberty blockers and cross-sex hormones on youth suicide rates. The study generated a large amount of attention from policymakers, the traditional media, and on Twitter. Given that the topic is highly emotional and politicized, and given the low quality of discourse on Twitter, much of the social media response was inaccurate and ad hominem. Fortunately, Twitter is not the real world, and the serious response of a number of policymakers, who are taking concrete steps to address the risks posed by these drugs, makes responding to most Twitter critics pointless.

The reactions of two prominent commentators, Jack Turban and Jesse Singal, however, warrant a response, not for the merit of their criticisms but because they have enough influence outside of Twitter that their mistaken criticisms might undercut the positive policy responses my study has facilitated. Turban is a professor at Stanford Medical School and the author of two of the three studies claiming that puberty blockers and hormones are protective against suicide and therefore must be made widely and readily available. Singal is a journalist who has engaged in extensive criticism of Turban’s work and is therefore someone whom skeptics of these medical interventions might look to for guidance on what to think about my new study.

Other than dismissive and ad hominem comments, Turban’s main objection to my research is that minors are not supposed to get these drugs without parental consent, which would make variation in state policies on minors’ ability to access health care without parental consent irrelevant. He writes, “One thing to note is that @TheEndoSociety and @wpath guidelines require parental consent to access gender-affirming hormones. This entire report is based on the incorrect assumption that minors can easily access hormones without parental consent.” He adds, “Since trans youth account for about 1.9% of teens, only a fraction of those desire hormones, only a few percent of those are able to access them, and of those even fewer would access without parental consent, the logical jump made by the Heritage people doesn’t make sense.”

I never claim that “minors can easily access hormones without parental consent.” My study is based on the natural policy experiment that results from some states having one fewer barrier to minors getting these drugs: a provision in law that allows minors to access health care without parental consent, at least under some circumstances. My claim requires only that it be easier for minors to get these drugs in those states, not that they can do so “easily.”

But Turban seems to suggest on Twitter that this virtually never happens. Where would I get the idea that minors can access these drugs without parental consent? It comes from Turban’s own research. In his 2022 study on the effects of cross-sex hormones on suicidal ideation, he analyzes the results of a survey given to a convenience sample of adults who identify as transgender. In Table 1, he reports that 3.7% of those who say that they received cross-sex hormones between the ages of 14 and 17 are still not “out” to their families as transgender. These respondents must have obtained those drugs without parental consent since their parents do not even know that they identify as transgender. In addition, we see in that same table that only 79.4% of those adults who got hormones between the ages of 14 and 17 say that their families are supportive. We might reasonably assume that unsupportive families would not have given consent to their teenage children getting hormones, especially if they continue to be unsupportive several years later when those children are now adults. Yet somehow, nearly a fifth of those who got the drugs as teenagers did so despite the lack of support from their families.

If we accept Turban’s claim that 1.9% of teenagers identify as transgender, that translates into 1,900 out of every 100,000 teenagers. My finding is that easing access by reducing the parental consent barrier increases youth suicide rates by 1.6 per 100,000 young people. For my result to be plausible, only 0.08% of teenagers who identify as transgender would have to seek these drugs, find it easier to access them in states with minor consent provisions, and then kill themselves to produce an additional 1.6 suicides per 100,000 young people. And this assumes that the entirety of the increase in suicides occurs among individuals who identify as transgender and get the drugs, even though we know that there is a contagion effect with suicide, so the 1.6 increase would include young people who did not get the drugs but were influenced by the deaths of others.
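As a back-of-the-envelope check, the arithmetic above can be verified directly. This is purely illustrative and uses only the figures quoted in the text:

```python
# Back-of-the-envelope check of the magnitudes discussed above,
# using only the figures quoted in the text.
trans_teens_per_100k = 1900       # 1.9% of teenagers, per Turban's figure
extra_suicides_per_100k = 1.6     # estimated increase from the study

# Share of trans-identifying teens whose deaths would account for the
# entire estimated increase, assuming no contagion effect.
required_share = extra_suicides_per_100k / trans_teens_per_100k
print(f"{required_share:.2%}")    # prints 0.08%
```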

The bottom line is that the magnitude of my study’s estimated increase in suicide risk associated with these drugs is entirely plausible given Turban’s own numbers about the frequency with which minors are accessing these drugs without parental consent.

Other than name-calling with words like “misleading,” “crude,” and “shodd[y],” Singal has two seemingly substantive objections to offer. First, he claims “this isn’t even about blocker and hormones — it’s about which states have lower ages of medical consent. Then he says ‘Well, around the time blockers became available, the suicide rates go up.’ This is an exceptionally crude approach.” Second, he embraces a criticism expressed by Elsie Birnbaum that it is ridiculous to describe states like Texas and Utah as having “easier” access to these drugs, adding: “This is a good catch and should immediately cause anyone with any knowledge of this subject to deeply question the study.”

With both of these criticisms, Singal appears to think that the proper way to study the effects of puberty blockers and cross-sex hormones would be to compare suicide rates across places based on the number of prescriptions being dispensed, the number of clinics offering these treatments, or states’ reputations for being more or less permissive on transgender issues. While I understand why it is tempting to think that these direct comparisons would be better, if our goal is obtaining an unbiased, even if imprecise, estimate of causal effects, it is far better to focus on state minor access provisions. Because modern research designs for isolating causal effects are not necessarily intuitive, let me try to offer a brief explanation for readers (and Singal) who are not trained researchers.

The gold-standard research design for isolating causal effects is a randomized controlled trial (RCT), in which a lottery would assign some people to get the drugs and others not to get the drugs. Researchers would then compare the outcomes for the two groups over time. Any significant differences in their outcomes would have to be caused by the drugs and not by some preexisting difference between the treatment and control groups, since the two groups would be identical, on average, at the start of the experiment. Unfortunately, the effects of puberty blockers and cross-sex hormones in treating what is called gender dysphoria have never been studied with an RCT. Turban, the Biden administration, and others claiming with confidence that these drugs save lives should support an RCT to prove what they say, but they do not, and we are left without the kind of rigorous evidence that is normally required for initial approval of drugs by the FDA.

Short of an RCT, a number of research designs have been developed that attempt to approximate, imperfectly, the causal rigor of a true experiment. They do so by looking for ways in which exposure to treatment is “exogenous.” That is, they look for settings in which some people get the drugs while others do not for reasons unrelated to factors that might separately influence the outcomes. A lottery is perfectly exogenous because chance determines who gets the treatment, and chance has nothing to do with causing outcomes. If the reasons that some people get the treatment while others are in the control group are related to the outcomes, however, then we have an “endogeneity” problem and the results are biased.

It is easy to illustrate this endogeneity problem in Turban’s research on this issue. Turban examines a survey of adults who identify as transgender and compares those who sought and got these drugs to those who sought but did not get the drugs in terms of their more recent thoughts about suicide. One reason some people who sought these drugs would be unable to get them is that they were not considered psychologically stable, since being psychologically stable at the time is supposed to be a criterion for prescribing the drugs. Rather than being random and unrelated to later outcomes, the reason that some people end up in Turban’s treatment or control groups is caused by their psychological condition when they sought treatment, which is related to the suicide outcomes being measured; that is, it is endogenous. It is obviously biased and unhelpful to compare treatment and control groups that begin with different average psychological health in terms of their later psychological outcomes.
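The selection problem described above can be shown with a small simulation. This is a hypothetical sketch, not a model of any actual dataset: the treatment here has a true effect of zero, yet because assignment depends on baseline psychological health, a naive comparison of groups finds a spurious difference anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical sketch of the selection problem: baseline distress affects
# BOTH who receives treatment and later outcomes.
baseline_distress = rng.normal(0, 1, n)

# Screening for "psychological stability" means less-distressed people are
# more likely to be approved for treatment, so assignment is endogenous.
p_treated = 1 / (1 + np.exp(baseline_distress))
treated = rng.random(n) < p_treated

# Suppose the treatment truly has ZERO effect; later outcomes depend only
# on baseline distress plus noise.
later_distress = baseline_distress + rng.normal(0, 1, n)

# A naive comparison finds the treated group "better off" anyway.
naive_diff = later_distress[treated].mean() - later_distress[~treated].mean()
print(round(naive_diff, 2))  # clearly negative despite a zero true effect
```

The naive difference is driven entirely by who was allowed into the treatment group, which is exactly the bias at issue.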

The same endogeneity problem applies to how Singal seems to think this issue should be examined. We know that there is significant comorbidity between gender dysphoria and other challenges that young people have, including depression, anxiety, and autism spectrum disorder. To the extent that demand for puberty blockers and cross-sex hormones is related to people having other psychological challenges, comparing places based on the number of prescriptions or clinics would be endogenous and misleading. It would be biased by the likelihood that places with more of these drugs being dispensed would also be places with a higher prevalence of other psychological challenges, which would be related to suicide rates in those places independent of whether the drugs helped, hurt, or made no difference. Similarly, comparing states based on whether they had reputations for being permissive or “blue” states would be endogenous and misleading. It would be comparing treatment and control groups that differed at the start in ways that are related to suicide outcomes.

To find something closer to the true causal effect, we would need exogenous sources of variation in the treatment other than a lottery. My study takes advantage of plausibly exogenous variation in exposure to treatment with respect to WHERE there is a lower barrier to accessing treatment, WHEN that treatment is available, and WHO is affected by the treatment. States adopted policies about the ability of minors to access healthcare without parental consent for reasons that had nothing to do with, and generally long preceded, the transgender issue. On the margin, parental consent is one additional barrier to minors getting puberty blockers and cross-sex hormones.

Singal believes it is a defect of my study that this variation in minor consent policies is not “about blocker and hormones,” but he fails to understand that this is a virtue of the research design. Because minor consent provisions are a barrier to accessing puberty blockers and cross-sex hormones that is not “about” this issue, variation in the existence of this barrier is exogenous and helps isolate causal effects. It’s true that minor access provisions are not the most important or direct barrier to access, but they help generate unbiased estimates. To the extent that these provisions were entirely unrelated to the issue, they would be random noise and would bias effects toward zero, but they would not bias the direction of the results.

In addition to exogenous variation regarding where these drugs could be accessed with or without an extra barrier, we have exogenous variation in when the drugs became available. This is why it is important that we observe that there is no difference in youth suicide rates between states with or without minor access provisions before the drugs are introduced but there is after.
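A minimal sketch of this before/after logic looks like a difference-in-differences comparison. The numbers below are entirely made up, and the actual study uses regression with controls; this only illustrates the shape of the design.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative difference-in-differences sketch with made-up rates;
# not the study's data or model.
years = np.arange(2000, 2021)
post = years >= 2010                 # treatment becomes available around 2010

shared = 10 + rng.normal(0, 0.3, len(years))           # common year-to-year shocks
control = shared + rng.normal(0, 0.3, len(years))      # states without the provision
treat = shared + rng.normal(0, 0.3, len(years)) + 1.6 * post  # effect only post-2010

pre_gap = (treat[~post] - control[~post]).mean()   # should be near zero
post_gap = (treat[post] - control[post]).mean()    # should be near 1.6
did_estimate = post_gap - pre_gap
print(round(pre_gap, 2), round(post_gap, 2), round(did_estimate, 2))
```

The key diagnostic is the pre-period gap: if the two groups already differed before 2010, the design would be suspect.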

Lastly, we have exogenous variation in who would be affected. If states with a minor access provision began to differ systematically with respect to suicide only after 2010, we should observe this pattern also among a slightly older population that would not be affected by minor consent provisions. The fact that there is no effect for a slightly older population is also important.

Obviously, it would be far better to have an RCT if we wanted to isolate the causal effects of puberty blockers and cross-sex hormones on youth suicide. But absent an RCT, my study uses quasi-experimental research design features that generate credibly causal effects. It’s imperfect, but it is a huge improvement over the obviously endogenous research design that Turban and Green use and much better than the direct but endogenous approaches that Singal criticizes my study for lacking.

Channeling all of his Twitter erudition and a penchant for research nihilism, Singal asserts that my study is no different in its defects from prior ones: “The dude makes perfectly fair comparisons of some of the past work on this subject, most notably Jack Turban’s, and then he reaches into basically the same bag of tricks. It’s SUCH a bad article.” While I like being compared to The Dude, claiming that my study is comparable to those by Turban and Green is just incorrect.

Twitter is a dangerous place for young people with gender dysphoria, but it is also a dangerous place to discuss the merits of different studies. If Turban or Singal were willing, I’d be happy to get together in a public forum to discuss these issues at greater length. An audience would benefit far more from such a discussion than Twitter name-calling and drive-by research critiques.


I’ll respond to one more substantive objection that Jesse Singal echoes and that was raised initially by Dave Hewitt, an English Substack writer. Hewitt claims that my results are sensitive to outlier states, such as Alaska or Wyoming, that have above-average youth suicide rates. He attempts to illustrate this concern by switching whether AK and WY are classified as having a minor access provision. He then produces a graph showing that the unadjusted difference in suicide rates between states based on the existence of a minor access provision shrinks to zero if both AK and WY are switched in how they are classified.

Importantly, Hewitt only shows us the unadjusted difference, not the final results adjusting for baseline differences in state suicide rates, as displayed in Chart 3 and Appendix Tables 2-5 in the study. States differ in their average suicide rate across the entire time period studied, including the years before puberty blockers and cross-sex hormones were introduced as a therapy for gender dysphoria around 2010. Hewitt is aware of this fact when he notes, “The suicide rate of this age group in Alaska is far higher than any other state… Wyoming has the highest overall suicide rate in the country across all age groups.”

Because there are time-invariant factors that might make some states have higher or lower youth suicide rates, my analysis controls for each state’s suicide rate at baseline. And to the extent that there are time-variant but age-invariant factors that affect the change in suicide rates over time, I control for the annual suicide rate in each state in each year for a slightly older population that should be unaffected by a minor access provision.

Yes, Alaska and Wyoming have high suicide rates and switching states with high rates from the treatment to the control group would alter the unadjusted difference between those groups of states. But this is irrelevant to the question of whether states experience a change in youth suicide rates when cross-sex treatments become available based on having one fewer exogenous barrier to minors accessing those treatments — especially when we control for time-invariant and age-invariant factors that make the rate higher in some states than in others.

Switching states from one category to the other is also the wrong way to test whether the results are sensitive to one or two states. The proper way to test for sensitivity would be to run the regression with all of the controls and to drop individual states from the analysis to see if the result still holds. If any single state is driving the result, then dropping it from the analysis should substantively change the result.

I’ve done this and Hewitt’s (and Singal’s) concerns about sensitivity to outlier cases are unfounded. If I drop Alaska from the analysis presented in Appendix Table 2, the result remains unchanged. If I drop Wyoming, the result remains unchanged. If I drop Alaska and Wyoming at the same time, the result remains unchanged. In fact, I’ve dropped each of the 50 states and DC one by one and the results remain statistically significant and virtually identical in magnitude across all 51 robustness checks.
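The leave-one-out procedure described here is straightforward to sketch. The following uses synthetic data and a simple difference in means rather than the study’s regression with controls, so it only illustrates the mechanics of the check.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic illustration of a leave-one-out robustness check; not the
# study's data or regression specification.
n_states = 51
treated = rng.random(n_states) < 0.5
outcome = 2.0 + 1.6 * treated + rng.normal(0, 0.5, n_states)

def effect(keep):
    """Difference in mean outcomes, treated minus control, among kept states."""
    t, y = treated[keep], outcome[keep]
    return y[t].mean() - y[~t].mean()

full_estimate = effect(np.ones(n_states, dtype=bool))

# Drop each state one at a time and re-estimate.
loo_estimates = []
for i in range(n_states):
    keep = np.ones(n_states, dtype=bool)
    keep[i] = False
    loo_estimates.append(effect(keep))

# If no single state drives the result, every leave-one-out estimate
# stays close to the full-sample estimate.
max_shift = max(abs(e - full_estimate) for e in loo_estimates)
print(round(full_estimate, 2), round(max_shift, 2))
```

A result driven by one outlier state would show a large shift when that state is dropped; stable estimates across all drops are the signature of robustness.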

I’m working on revising this “working paper” and submitting it to a peer-reviewed journal, and I will comply with the replication data set policies of that journal. In the meantime, Hewitt, Singal, or anyone else interested in replicating my analysis and trying other robustness checks can easily do so by downloading and analyzing the data. The study lists the handful of data sources and provides links. The full model specifications are also provided in the appendix tables.

21 Responses to New Study on Puberty Blockers, Cross-Sex Hormones and Youth Suicide and a Response to Turban and Singal

  1. John R says:

    I don’t think you are responding to one of the primary criticisms of your original study.

    You claim, “Obviously, it would be far better to have an RCT if we wanted to isolate the causal effects of puberty blockers and cross-sex hormones on youth suicide. But absent an RCT, my study uses quasi-experimental research design features that generate credibly causal effects. It’s imperfect, but it is a huge improvement over the obviously endogenous research design.”

    However, in your original study, you make the argument that “given the danger of cross-sex treatments demonstrated in this Backgrounder, states should tighten the criteria for receiving these interventions, including raising the minimum eligibility age.”

    It is highly problematic to essentially say “since we don’t know if there is a positive correlation between minor access provisions and increased suicide, we ought to create state policies which assume they cause increased suicide based on a study that looks at overall suicide rates.”

    You have a point in arguing that current research is insufficient in illustrating a cause-and-effect relationship between minor access provisions and decreased suicide. But, in my estimation, it is a far greater sin to advocate for undoing state policies when the study used as the basis of this worldview doesn’t actually evaluate the exact population who is being affected by the legislation (trans youth).

    You have to do a lot better if you want to convince people of your position.

    • Greg Forster says:

      You do not allege any defect in the method or results of Jay’s study here, you only disagree with his policy preferences. Which you’re free to do, but it does not constitute a criticism of the study.

  2. Greg Gentry says:

    A more fundamental error in your study comes from not consulting with a lawyer about whether the Schoolhouse Connection’s list of states with minor consent laws was an appropriate one to use in this case. The list warns that they do not consider states whose minor consent doctrines come from judicial decisions, like the “mature minor” doctrine. They also do not consider any laws that empower minors to consent to “other kinds of treatment” like mental health treatment, or reproductive health.

    At least two states in your “control” group subscribe to the “mature minor” doctrine, which gives them a much BROADER minor consent policy than many states in your “experimental” group. Tennessee, for example, presumes that any minor 14 or older has enough maturity to consent to any medical procedure without a parent. So a provider could provide care as long as there wasn’t contrary evidence of immaturity. West Virginia adopted Tennessee’s rule, but without the age-based presumption. If a minor presents as mature to a provider, that provider may provide care.

    The extent of this oversight is clear from an article published two years ago in the Journal of Medical Ethics (“Medically assisted gender affirmation: when children and parents disagree,” Dubin, S., et al.). It wrote about how to treat minors when parents wouldn’t consent. It identified specific carve-outs in the law (like mental health or reproductive health exceptions), the mature minor doctrine, and neglect statutes. It does not mention the general health laws that SchoolHouse Connection relies on.

    At least 8, and as many as 10, of your control-group states include carve-outs for mental health. At least seven have a carve-out for reproductive health. As the BMJ article put it, “menstrual manipulation (ie, menstrual cessation through ‘birth control’) could be considered gender-affirming care but could be accessible to a minor without parental consent.”

    On the flip side, Schoolhouse Connection’s list is targeted at homeless youth, not those seeking gender-affirming care. So, their list of states that allow minors to access general care is broader than a list of states whose laws would allow access to GENDER CARE! Many of those states’ laws would NOT allow a minor to access gender care. Kansas, for example, only allows minors 16+ to consent to care if the parents are unavailable. If the provider CAN reach the parents on the telephone, then the minor cannot consent. Delaware’s statute, similarly, is understood by providers to only apply in emergencies.

    The law of minor consent to health care is complicated (one 2010 survey of the 50 states runs to 293 pages), and if one designs a study where the experimental variable seems on its face to be an ill fit (the control group has 21 gender clinics, fully one-third of the total number in the nation), one should be extra careful in assessing whether the division between experimental and control groups makes sense. You failed utterly in this instance.

    • Greg Forster says:

      It’s not my study, but the number of gender clinics in a state does not seem to me to provide much evidence either way on whether the state has policies that make it easier for minors to access medical services without parental consent. I can’t immediately judge the merits of your other criticisms of Jay’s state classifications; that will bear looking into.

      But the conclusion “you failed utterly” seems to me overdrawn in light of the fact that Jay’s state classifications do appear to correlate with a noteworthy spike in teen suicide rates. If I read the study rightly, teens in these states had similar suicide rates to teens in the other states for a long time, then recently saw a spike in suicide rates which was not apparent in the other states.

      Is it plausible that this is just a massive statistical fluke? So plausible that we shouldn’t even investigate? Because if it’s worth following up to investigate and find out whether something is going on, the study can hardly be said to have “failed utterly.”

      “We should run more studies to see if this result is robust across multiple methods for classifying states” would have been a more apt conclusion.

  3. Greg Forster says:

    Regarding your addendum, Jay, it’s outrageously unfair for you to point out that we can easily download all the data for ourselves and run our own robustness checks. You fail to take into account the fact that we are lazy, and would rather take speculative potshots at you instead of doing the work for ourselves!

    Hewitt’s posting of the unadjusted table seems particularly irresponsible. It’s like using a price graph that doesn’t adjust for inflation. Wow, the price of a new car really went way, way up from 1970 to 1990! Car buyers must have really gotten screwed!

  4. H.L. Mitchell says:

    The treatment, as you define it, is that (some) minors are able to access healthcare without the consent of their parents/guardians. This raises two questions I have yet to see addressed in your responses to criticisms:

    1.) These laws do not apply uniformly across the treated states. Some, such as Arkansas, effectively apply only to minors who have been emancipated, while others apply to a broader swath of minors. Given this substantial variation in who is treated, why shouldn’t estimated treatment effects be weighted by the proportion of youths who would be treated? How would your estimates then change? This should be easy to estimate and should be presented as a robustness check.

    2.) The treatment effect you are capturing is that states which allow minors to obtain medical care have higher rates of youth suicide. This appears more endogenous than the framing would imply. While you may claim that the pseudo-DiD identification strategy you employ accounts for unrelated impacts, you do not (1) address recent challenges to estimation in the DiD literature and (2) establish common pre-trends in a meaningful way.

    Looking forward to your response.

    • Greg Forster says:

      Jay is not really obligated to respond to these quibbles, since they’re not actually criticisms of his study, they’re either calls for additional studies using different methods or changing the subject to ongoing methodological controversies (and research cannot stop until all methodological controversies are resolved, as they are never resolved).

      But I think he’s already provided the only important response, which is that all the data are public and he’s fully specified his method in the study, so if you are genuinely concerned about this, feel free to rerun the study with any methodology you like!

      • H.L. Mitchell says:


        Certainly I do not expect Jay to reply, but I do look forward to dialogue if he were willing.

        While you might characterize the points I raise as quibbles, it’s worth noting that both points undermine his identification strategy and reduce the results presented to being no more causal than the studies he criticizes. What I proposed amounts to robustness checks that would be easily implemented and would respond to these criticisms. I do so not to undermine the study but to advance a free and honest discussion of the issues. In the past, this has been something Jay himself has advocated for.

        As you note, however, the data used in this analysis are public. What I find when attempting to recreate these results is that they are highly sensitive to both the specification of treatment intensity and the definition of the groups.

        Given that his treatment is mischaracterized in the backgrounder and his results are quite sensitive, I would love to advance this dialogue with a response to my original questions. That’s what an honest discourse would entail.

      • Greg Forster says:

        I look forward to seeing these findings!

    • Greg Gentry says:

      I think you are much more charitable toward Dr. Greene than this study deserves. I’ll leave an analysis of the math to those more capable than me, a lawyer, but, as a lawyer, I find his engagement with the laws of the various states to be laughably shallow. As is clear from his write-up, the sole source on state law he consulted was Schoolhouse Connection’s list. That list was targeted at homeless youth and had significant caveats on its face that should have warned Dr. Greene that he was treading in areas more complex than they appear at first blush. A review of the legal or medical literature about the legal basis for treating minors for trans issues, or for treating minors without their parents’ consent in general, would have suggested much further study was warranted. Simple surveys of the different state laws run to hundreds of pages.

      The most glaring example of the effect of his lack of engagement with this area of law is missing the “mature minor” doctrine. The Schoolhouse Connection list warns that they do not list states with that doctrine, as it is mostly judicially created. However, most of the literature on treating minors without their parents’ consent grounds the authority for such treatment IN THAT doctrine. (See “The Legal Authority of Mature Minors to Consent to General Medical Treatment,” Lambelet Coleman, D., Pediatrics, 2013.) There are 14 states that implement the mature minor doctrine, including at least 2 of Dr. Greene’s control-group states.

      As you pointed out, the laws in his experimental group also apply with extremely varied effect. A close examination of the state laws suggests that some of the states listed there don’t actually allow minors to consent to treatment without their parents’ involvement and consent. Illinois requires a minor to be designated BY THEIR PARENTS as “a minor seeking treatment” and allows doctors to override a minor’s consent if it conflicts with a parent’s wishes. Missouri only allows those minors living apart from their parents WITH THE PARENTS’ CONSENT to consent to treatment.

      This is a difficult, state-specific area of law, with statutes interpreted by courts, attorney general opinions, and state-level regulations. As just one example, Kansas, identified as an experimental state, limits its law to minors aged 16 or 17, to emergencies, and to cases where the parents are unavailable. So it is clear almost no minor could get puberty blockers BEFORE puberty. And it would require state-specific research to determine whether emergencies could encompass HRT.

      At a minimum, a researcher who purported to study states where a minor could or couldn’t access care without their parents’ consent should accurately identify those two groups. This study is so flawed, on its face, that I have no confidence Dr. Greene can ever do that.

      • Greg Forster says:

        So do you think it’s a mega-colossal statistical fluke that Jay’s states have a big spike in teen suicides and the others don’t, or do you have an alternative explanation for the spike? For the reasons outlined above, that’s the question that matters here.

      • Greg Gentry says:

        He describes EXACTLY that sort of statistical fluke above in replying to Hewitt. The results are extremely sensitive to switching one state from control to experiment. Switching Wyoming or Alaska from experiment to control eliminates the effect. He argues, though, that we can’t do that. We can only drop states from the experimental group and must leave the control as it is. That is nonsensical if, in fact, there are experimental states that did not have a law allowing minor access (which there are) or if there are states that did have minor access laws where he says they don’t (which there are).

      • Greg Forster says:

        No, switching those states does not change the results – Jay already said that above, where he explained at length why that’s bogus. Have a nice life!

      • Greg Gentry says:

        No, he absolutely does not say that switching the states does not affect the outcome, he rejects the analysis that switching the states is appropriate:

        “ Switching states from one category to the other is also the wrong way to test whether the results are sensitive to one or two states. The proper way to test for sensitivity would be to run the regression with ALL OF THE CONTROLS and to drop individual states from the analysis to see if the result still holds.” (Emphasis added)

        At no point does he rerun the analysis with Alaska or Wyoming in the control group. His argument that we can check the importance of a state only works if he’s right about the control group.

        He’s wrong.

      • Greg Forster says:

        He actually says: “I’ve done this and Hewitt’s (and Singal’s) concerns about sensitivity to outlier cases are unfounded. If I drop Alaska from the analysis presented in Appendix Table 2, the result remains unchanged. If I drop Wyoming, the result remains unchanged. If I drop Alaska and Wyoming at the same time, the result remains unchanged. In fact, I’ve dropped each of the 50 states and DC one by one and the results remain statistically significant and virtually identical in magnitude across all 51 robustness checks.”

      • Greg Gentry says:

        Read it again:

        “If I DROP Alaska from the analysis…

        If I DROP Wyoming …

        If I DROP Alaska and Wyoming at the same time…”

        Nowhere does he SWITCH Alaska or Wyoming INTO the control group. At no point does he adjust the control group at all. He explains:

        “The proper way to test for sensitivity would be to run the regression with ALL OF THE CONTROLS and to DROP individual states from the analysis to see if the result still holds.”

      • Greg Forster says:

        But if the finding with his original control group survives robustness checks, changing the control group wouldn’t negate that finding. It’s just changing the subject. The question remains: Are we seeing a colossal statistical fluke or is there some other explanation for the spike in suicide rates?

      • Greg Gentry says:

        If, as Hewitt says, “If you swap Alaska to the other group, the rise almost entirely disappears. If you move the next-highest state (Wyoming) over, the trend vanishes entirely,” it suggests the study may be picking up something about Alaska and Wyoming rather than minor access to hormones. He gave one example of what it could be showing.

        Check Hewitt’s blog post linked above; he added a response on June 24 addressing Greene’s addendum. Putting different states in the control group absolutely changes the results. Looking at which states spent more on mental health post-2010 shows a falling suicide rate. Randomly choosing 10 states from the control and experimental lists also eliminates the effect.

        And, as I’ve pointed out, the control and experimental groups are completely wrong. And, if Greene has NOT divided the states based on the variable he states (and he most certainly did not) then what variable is he testing? We have no idea.

      • Greg Forster says:

        Jay found a quarter under Couch A and you’re arguing it’s not a real quarter because if you look under Couch B there’s no quarter!

      • Greg Gentry says:

        Jay’s discovered a correlation between the number of storks in a state and the birth rate and concluded storks deliver babies.

      • Greg Gentry says:

        To stick with couches, Jay’s divided the couches into mine and yours. He’s found there’s more dust under my 30 couches than your 15. Except, many of my couches are yours and some of your couches are mine and some couches belong to someone else entirely.

        And you’re arguing, it doesn’t matter who owns which couch, you’re still worse at housekeeping.
