Terror Training Grounds

From 1970 to 1980, Bernardine Dohrn was a terrorist fugitive on the FBI’s Ten Most Wanted List. Now she’s a law professor at Northwestern University Law School, where she teaches a course on law and the Palestinian conflict with Israel.


During her time as co-commander of the Weather Underground terrorist organization with her husband, education professor William Ayers, the group committed some sixteen bombings. Among its targets were the State Department, the Queens Courthouse in New York, military recruiting offices in Brooklyn and the home of New York State Supreme Court Justice John Murtagh, which was attacked while Murtagh was asleep inside with his wife and children.


Dohrn, her husband and their colleagues emerged from hiding in 1980 when terror charges against them were dropped on a technicality. None of them ever served hard time for their crimes. Neither Dohrn – whom FBI director J. Edgar Hoover once referred to as “the most dangerous woman in America” – nor Ayers ever expressed the slightest remorse for their actions.


Indeed, in a 1998 interview with ABC News, Dohrn said that, looking back on her career as a terrorist, her only regret was that she hadn’t committed more attacks. As she put it, “We’d do it again. I wish we had done more. I wish we had been more militant.”


But then, those who cannot do, teach. And now Dohrn is passing on her knowledge of terrorism to a new generation of students through her seminar on Palestinian terrorists.


Dohrn is no newcomer to the subject. In 1974, together with her Weather Underground colleagues, she published a book called Prairie Fire: The Politics of Revolutionary Anti-Imperialism. As the David Horowitz Freedom Center’s Discoverthenetworks.org website makes clear, Prairie Fire was the Weather Underground’s declaration of war against the United States. Among other things, it asserts, “Our intention is to disrupt the empire, to incapacitate it, to put pressure on the cracks, to make it hard to carry out its bloody functioning against the people of the world, to join the world struggle, to attack from the inside.”


Dohrn and her co-authors dedicated their book to a who’s who of terrorist murderers. Among their honorees was Sirhan Sirhan, the Palestinian terrorist who murdered Robert F. Kennedy. No doubt, for Dohrn the unapologetic terrorist, teaching a course on the Palestinian terror war against Israel is like closing a circle.


Dohrn’s class is notable because it is likely the only current instance of a former terrorist teaching a class about terrorists at a U.S. university. But by paying someone like Dohrn to teach a course on terrorists, Northwestern is not pushing the envelope very far.


In both major and minor universities throughout the U.S., supporters of Palestinian terrorists are teaching American students the history, sociology, politics, law and culture of the Arab and Islamic world’s war against Israel. The fact that the narrative they teach has little to no connection to historical truth or to objective, measurable reality is of no interest to anyone.


ACTUALLY THAT is not true. In many cases, the professors who stir up the most controversy for their incendiary support for terrorists and condemnation of Israel and its supporters are of great interest to university administrators. Indeed, they arguably owe their careers to their hostile stance toward Israel.


A case in point is newly tenured Columbia professor Joseph Massad, a little-accomplished scholar of Middle East studies whose résumé includes no scholarly achievements. Massad rose to prominence with the David Project’s 2005 release of its documentary “Columbia Unbecoming,” in which Jewish students at Columbia related the intellectual intimidation they suffered in the classroom at the hands of Massad and his colleagues. A university panel formed to investigate the students’ allegations found that Massad exceeded “commonly accepted bounds” of behavior in his treatment of one of his students.


As Jacob Gershman reported in the New York Post in June, Columbia quietly granted Massad tenure earlier this year in a secret procedure that was likely unprecedented in its lack of transparency or justification.


Massad’s “academic” achievements to date consist of diatribes against Israel, Jews, gays and feminists. Jews and Israel are guilty of stealing Palestinian land and murdering Palestinians, and of being Jewish “Nazis” to the Palestinian “Jews.” Homosexuals – whom Massad refers to as the “Gay International” – are involved in a nefarious plot to force otherwise happy gays in the Arab world out of the closet. And “Imperialist Feminists” are involved in a conspiracy to destroy the Arab way of life by objecting to so-called “honor killings,” which in Massad’s view are nothing more than “crimes of passion.”


As Gershman reported, Massad was granted tenure this year despite the fact that a panel convened two years ago had rejected his tenure application. Apparently, the 2007 tenure committee was unconvinced that Massad’s work excusing Arab males for murdering their womenfolk for the “crime” of “dishonoring” them was up to Columbia’s traditional academic standards.


The first tenure committee’s loyalty to traditional academic standards apparently didn’t sit well with Columbia President Lee Bollinger and Provost Alan Brinkley. The two men must have felt that in light of the celebrity Massad brings to the school, Columbia stood to benefit from granting him a lifelong appointment. And so Massad was given an unheard-of second hearing before a different committee this past year.


All of this naturally raises the question of why American universities embrace men and women like Dohrn and Massad. Why do America’s greatest institutions of higher learning bend over backward to make terrorists and crackpots, bigots and intellectual pygmies feel welcome to spew their bile in the classroom in front of students whose families pay upwards of $50,000 per year to enroll in their substandard courses?


The answer, unfortunately, is that in the long history of academia, bad scholarship and bad ideas flourish until two conditions are met. First, the bad ideas must be so widely discredited that professors become too embarrassed to espouse them openly or to associate with colleagues who do. Second, funding for work in the discredited field must dry up before professors stop engaging in it. The occasions on which both conditions are met are so rare that only one example from the last century comes to mind: eugenics.


Eugenics, the pseudo-science of racial pecking orders, was so much the rage in academic circles from the late 19th century through the 1930s that even a towering liberal figure like Supreme Court Justice Oliver Wendell Holmes believed in it. In 1927 Holmes upheld a Virginia law imposing forcible sterilization of the mentally disabled, arguing that “Three generations of imbeciles are enough.”


It took the Holocaust to force American academics to desert their support for eugenics and American financiers to stop endowing professorships and underwriting conferences in the “scientific” quackery they propounded.


We cannot know what it will take for the current fashion of Israel bashing in campus classrooms to be discredited. But we can safely assume that whenever it is finally abandoned, it will be replaced by something equally reprehensible and intellectually dishonest.


Originally published in The Jewish Press.


5 Comments

  • Mike Berman 08/02/2009 at 10:28

    I have come to rely on Caroline Glick for her analysis of all things Israel and the Middle East. Her depth of understanding and storehouse of knowledge are unparalleled. Ms. Glick is a world treasure.
    It saddens me, though, to learn from this article that on another issue she has cast her lot with the liberals. Here is the quote where Ms. Glick identifies herself as a race denier:

    Eugenics, the pseudo-science of racial pecking orders, was so much the rage in academic circles from the late 19th century through the 1930s that even a towering liberal figure like Supreme Court Justice Oliver Wendell Holmes believed in it. In 1927 Holmes upheld a Virginia law imposing forcible sterilization of the mentally disabled, arguing that “Three generations of imbeciles are enough.”

    It took the Holocaust to force American academics to desert their support for eugenics and American financiers to stop endowing professorships and underwriting conferences in the “scientific” quackery they propounded.

    This statement will play well in the political, legal, and media communities. It will be most welcome in academia’s social science departments. But it is a statement based on emotion. The hard scientific evidence directly contradicts your sentiment. What I am stating here is not controversial among psychometricians and other scientists who study intelligence. Some of the best books concerning the subject are: A Question of Intelligence by Daniel Seligman, The Bell Curve by Richard J. Herrnstein and Charles Murray, Why Race Matters by Michael Levin and Understanding Human History by Michael Hart. It might interest Ms. Glick that these books were written or co-written by NYC Jews. Even one of the great pioneers in this area, Arthur Jensen, is part Jewish.
    The only positive result of the Holocaust was the birth of Israel. There is nothing good about the leftist conquest of academia, and a victory of emotion over logic is not cause for celebration.

  • Mike Berman 08/03/2009 at 9:50

    Here is a non-angry primer on race differences in intelligence that Newsweek’s then science writer wrote after The Bell Curve was published. It’s excellent, not available on the net, and would never appear in the MSM today.
    Newsweek, Volume 124, Issue 17, October 24, 1994
    “Testing the Science of Intelligence”
    Contrary to popular wisdom, IQ has not been discredited. Neither has the idea of differences among races and classes.
    Author: Geoffrey Cowley
    Section: Society; Pages: 56-60
    IF YOU’VE FOLLOWED THE NEWS for the past 20 years, you’ve no doubt heard the case against intelligence testing. IQ tests are biased, the argument goes, for they favor privileged whites over blacks and the poor. They’re unreliable, because someone who scores badly one year may do better the next. And they’re ultimately worthless, for they don’t predict what a person will actually achieve in life. Richard Herrnstein and Charles Murray defy that wisdom in “The Bell Curve,” and they’re drawing a predictable response. The New York Times Magazine asks whether they’ve “gone too far” by positing race- and class-based differences in IQ. A Harper’s editor, writing in The New Republic, places Herrnstein and Murray within an “eccentric and impassioned sect,” whose view of IQ and inequality is out of touch with “mainstream scientific thinking.” A columnist for New York Magazine notes that “the phrenologists thought they were onto something, too.”
       
    The rush of hot words is hardly surprising, in light of the book’s grim findings and its baldly elitist policy prescriptions. But as the shouting begins, it’s worth noting that the science behind “The Bell Curve” is overwhelmingly mainstream. As psychologist Mark Snyderman and political scientist Stanley Rothman discovered in a 1984 survey, social scientists have already reached a broad consensus on most points in the so-called IQ debate. They may disagree about the extent to which racial differences are genetically based, and they may argue about the prospects for narrowing them. But most agree that IQ tests measure something real and substantially heritable. And the evidence is overwhelming that IQ affects what people accomplish in school and the workplace. In short, cognitive inequality is not a political preference. It’s a simple fact of life.
     Problem solving: Mental testing has a checkered past (chart). The tests of the 19th century were notoriously goofy, concerned more with the shapes of people’s skulls than with anything happening inside. But by the turn of the 20th century, psychologists had started measuring people’s aptitudes for reasoning and problem solving, and in 1904 the British psychologist Charles Spearman made a critical discovery. He found that people who did well on one mental test did well on others, regardless of their content. He reasoned that different tests must draw on the same global capacity, and dubbed that capacity g, for general intelligence.
     Some psychologists have since rejected the concept of g, saying it undervalues special talents and overemphasizes logical thinking. Harvard’s Howard Gardner, for example, parses intelligence into seven different realms, ranging from “logical-mathematical” to “bodily-kinesthetic.” But most of the 661 scholars in the Snyderman survey endorsed a simple g-based model of mental ability. No one claims that g is the only thing that matters in life, or even in mental testing. Dozens of tests are now published every year. Some (like the SAT) focus on acquired knowledge, others on skills for particular jobs. But straight IQ tests, such as the Wechsler Intelligence Survey, all draw heavily on g.
     The Wechsler, in its adult version, includes 11 subtests and takes about an hour and a half. After recalling strings of numbers, assembling puzzles, arranging cartoon panels and wrestling with various abstractions, the test taker gets a score ranking his overall standing among other people his age. The average score is set at 100, and everyone is rated accordingly. Expressed in this currency, IQs for a whole population can be arrayed on a single graph. Roughly two thirds of all Americans fall between 85 and 115, in the fat midsection of the bell-shaped curve, and 95 percent score between 70 and 130.
     IQ scores wouldn’t mean much if they changed dramatically from year to year, but they’re surprisingly stable over a lifetime. IQ at 4 is a good predictor of IQ at 18 (the link is nearly as strong as the one between childhood and adult height), and fluctuations are usually negligible after the age of 8. For reasons no one fully understands, average ability can shift slightly as generations pass. Throughout the developed world, raw IQ scores have risen by about 3 points every decade since the early part of the century, meaning that a performance that drew a score of 100 in the 1930s would rate only 85 today. Unfortunately, no one has discovered a regimen for raising g at will.
     Genetic factors: Fixed or not, mental ability is obviously a biological phenomenon. Researchers have recently found that differences in IQ correspond to physiological differences, such as the rate of glucose metabolism in the brain. And there’s no question that heredity is a significant source of individual differences in IQ. Fully 94 percent of the experts surveyed by Snyderman and Rothman agreed with that claim. “The heritability of IQ differences isn’t a matter of opinion,” says University of Virginia psychologist Sandra Scarr. “It’s a question of fact that’s been pretty well resolved.”
     The evidence is compelling, but understanding it requires a grasp of correlations. By computing a value known as the correlation coefficient, a scientist can measure the degree of association between any two phenomena that are plausibly linked. The correlation between unrelated variables is 0, while phenomena that vary in perfect lock step have a correlation of 1. A correlation of .4 would tell you that 40 percent of the variation in one thing is matched by variation in another, while 60 percent of it is not. Within families, the pattern among test scores is striking. Studies find no IQ correlation among grown adoptive siblings. But the typical correlations are roughly .35 for half siblings (who share a quarter of their genes), .47 for full siblings (who share half of their genes) and .86 for identical twins (who share all their genes).
     So how much of the variation in IQ is linked to genetic factors and how much to environmental ones? The best way to get a direct estimate is to look at people who share all their genes but grow up in separate settings. Four years ago, in the best single study to date, researchers led by University of Minnesota psychologist Thomas Bouchard published data on 100 sets of middle-aged twins who had been raised apart. These twins exhibited IQ correlations of .7, suggesting that genetic factors account for fully 70 percent of the variation in IQ.
     Obviously, that figure leaves ample room for other influences. No one denies that the difference between a punishing environment and an adequate one can be substantial. When children raised in the Tennessee mountains emerged from premodern living conditions in the 1930s, their average IQ rose by 10 points. But it doesn’t follow that the nongenetic influences on IQ are just sitting there waiting to be tapped. Scarr, the University of Virginia psychologist, has found that adoptees placed with educated city dwellers score no better on IQ tests than kids placed with farm couples with eighth-grade educations. “As long as the environment is adequate,” she says, “the differences don’t seem to have much effect.”
     Worldly success: IQ aside, qualities like motivation and diligence obviously help determine what a person achieves. People with low IQs sometimes accomplish great things – Muhammad Ali made the big time with an IQ of 78 – but rare exceptions don’t invalidate a rule. The fact is, mental ability corresponds strongly to almost any measure of worldly success. Studies dating back to the 1940s have consistently found that kids with higher IQs complete more years of school than those with lower IQs – even when they grow up in the same households. The same pattern emerges from studies of income and occupational status.
     Can IQ be used to predict bad things as well as good ones? Until now, researchers haven’t focused much on the role of mental ability in social problems, such as poverty and crime, but “The Bell Curve” offers compelling new data on that front. Most of it comes from the National Longitudinal Survey of Youth (NLSY), a study that has tracked the lives of 12,000 young people since 1979, when they were 14 to 22 years old. The NLSY records everything from earnings to arrests among people whose IQs and backgrounds have been thoroughly documented. The NLSY participants come from various racial and ethnic groups. But to keep the number of variables to a minimum, Herrnstein and Murray looked first at how IQ affects the social experience of whites. Their analysis suggests that although growing up poor is a disadvantage in life, growing up with low mental ability is a far greater one.
     Consider the patterns for poverty, illegitimacy and incarceration. Poverty, in the NLSY sample, is eight times more common among whites from poor backgrounds than among those who grew up in privilege – yet it’s 15 times more common at the low end of the IQ spectrum. Illegitimacy is twice as common among the poorest whites as among the most prosperous, but it’s eight times as common among the dullest (IQ under 75) as it is among the brightest (IQ over 125). And males in the bottom half of the IQ distribution are nearly 10 times as likely as those in the top half to find themselves in jail.
     Race gap: If the analysis stopped there, the findings probably wouldn’t excite much controversy. But Herrnstein and Murray pursue the same line of analysis into the painful realm of racial differences. Much of what they report is not new. It’s well established, if not well known, that average IQ scores differ markedly among racial groups. Blacks, like whites, span the whole spectrum of ability. But whereas whites average 102 points on the Wechsler test, blacks average 87. That gap has changed little in recent decades, despite the overall rise in both groups’ performance, and it’s not simply an artifact of culturally biased tests. That issue was largely resolved 14 years ago by Berkeley psychologist Arthur Jensen. Jensen is still notorious for an article he wrote in 1969, arguing that the disappointing results of programs like Head Start were due partly to racial differences in IQ. His lectures were disrupted for years afterward, and his name became publicly synonymous with racism. But Jensen’s scientific output continued to shape the field. In a massive 1980 review titled “Bias in Mental Testing,” he showed that the questions on IQ tests provide equally reliable readings of blacks’ and whites’ abilities. He also showed that scores had the same predictive power for people of both races. In 1982, a panel assembled by the National Academy of Sciences reviewed the evidence and reached the same conclusion.
     If the tests aren’t to blame, why does the gap persist? Social scientists still differ sharply on that question. Poverty is not an adequate explanation, for the black-white IQ gap is as wide among the prosperous as it is among the poor. Some scholars, like University of Michigan psychologist Richard Nisbett, argue that the disparity could simply reflect differences in the ways children are socialized. Writing in The New Republic this week, Nisbett cites a North Carolina study that found working-class whites more intent than blacks on preparing their children to read. In light of such social differences, he concludes, any talk of a genetic basis for the IQ gap is “utterly unfounded.” Jensen, by contrast, suspects a large genetic component. The gap between black and white performance is not restricted to literacy measures, he says. It’s evident even on such culturally neutral (but highly g-loaded) tasks as repeating number sequences backward or reacting quickly to a flashing light.
     Jensen’s view, as it happens, is more mainstream than Nisbett’s. Roughly two thirds of those responding to the Snyderman survey identified themselves as liberals. Yet 53 percent agreed that the black-white gap involves genetic as well as environmental factors. Only 17 percent favored strictly environmental explanations.
     Herrnstein and Murray say they’re less concerned with the causes of the IQ gap than they are with its arguable consequences. Are Black America’s social problems strictly the legacy of racism, they wonder, or do they also reflect differences in mental ability? To answer that question, Herrnstein and Murray chart out the overall black-white disparities in areas like education, income and crime. Then they perform the same exercise using only blacks and whites of equal IQ. Overall, blacks in the NLSY are less than half as likely as whites to have college degrees by the age of 29. Yet among blacks and whites with IQs of 114, there is no disparity at all. The results aren’t always so striking, but matching blacks and whites for IQ wipes out half or more of the disparities in poverty, welfare dependency and arrest rates.
     Social scientists traffic in correlations, and these are strong ones. Correlations are of course different from causes; low IQ does not cause crime or illegitimacy. While Herrnstein and Murray clearly understand that vital distinction, it’s easily lost in their torrent of numbers. There is no longer any question, however, that IQ tests measure something hugely important in life. It’s also clear that whatever mental ability is made of – dense neural circuitry, highly charged synapses or sheer brain mass – we didn’t all get equal shares.

  • Marc Handelsman, USA 08/03/2009 at 12:42

    Unfortunately, many American campuses are infiltrated by liberal ideology. That explains why a former terrorist with a “seared conscience” can teach a law course on the Arab-Israeli conflict. What academics need to realize is that Israel-bashing and anti-Semitism are not mutually exclusive. Only when genuine intellectual honesty returns to academia will views like Israel-bashing be discredited. And today’s students are truly cheated out of a first-rate education by some professors who have a second-rate intellect.

  • Brian017 08/13/2009 at 3:16

    Caroline
    The case you refer to involving Holmes had nothing to do with racial pecking orders. I think you are being a little dishonest in conflating this with ideas of racial differences. A Jewish academic, John Glad, has actually written an e-book about eugenics with a foreword by Seymour Itzkoff. Itzkoff comments:
    “The term eugenics has been on an ideological hit list both by the irrational left as well as by an intimidated public. However, as Dr. Glad points out, clearly and authoritatively, there is virtually no factual basis for what can only be seen as a totemic reaction. The mere mention of eugenics elicits knee-jerk reaction—“Nazi genocide, forced sterilization.” Yet by any standard of rational analysis, this vision of improvement for the human species has a strong humanistic tradition to support its further application.
    The real history of eugenics, as Dr. Glad points out, is rich in a truly liberal vision for the improvement in the state of all of humankind. And modern research in the biological nature of human function is opening up opportunities for the enhancement of both the physical as well as the mental condition of the human species. This, at a blazing speed of discovery. Thus, we need thinkers such as John Glad who will step up to challenge blind prejudice with fact and possibility….
    Eugenics, a vision of human betterment, with real scientific and then social-policy potential for enhancing the evolutionary future of our species, is buried within a demonization of language and misunderstanding. Critical to the linguistic and semantic morass that surrounds this paralysis of understanding is the spectral memories of the German and European perpetration of the Holocaust…
    I would like to add a comment to Dr. Glad’s clear and decisive puncturing of the balloon of myth that argues that the Nazis claimed to have actually engaged in a program of eugenics. The Nazis also claimed to be a party of socialism! If we define eugenics as encompassing programs of human betterment, physical as well as mental, practices that benefit community in the local sense as well as the species in general, we can say that the Holocaust was the antithesis of eugenic practice. Not only did the Nazis not argue for their participation in the eugenics movement, but they knew that they were practicing dysgenics.
    They hid their practices, as do all totalitarian regimes, within a babble of propaganda that presumably validated to the naïve, this mirage of self-justification. A careful reading of their mission statements, and, of course, their unspeakable practices, clearly reveals that they recognized that they were eliminating a people who they knew to be superior to themselves, a millennial threat to German dominance. They covered these actions by heaping slime on the Jewish people, their racial heritage, their ghetto and post-ghetto cultural behavior, their arrogance and purported economic conspiracies, above all their dominance in all walks of life, quickly attained only a brief moment beyond the ghetto. This became a universal challenge to German pretensions to leadership. And this from a people that in Germany was a scant one percent of the population, in the entire Austro-Hungarian Empire, about four percent.
    One has only to read the literature of polemics arising from the German/Austrian political/cultural scene, from the mid-nineteenth century on, to realize that the hatred of the Jews was not a hatred of religion, but rather of race. The solution, clearly and early bandied about by a wide variety of European hate groups, was one of potential cleansing of the Jews from Europe, if not the world. Simply, the polemics of hate was engendered to facilitate the elimination of a dangerous contender for dominance in this self-same continental environment.
    Thus the genocide of the Jews, in which all of Europe became eager participants, was not an example of eugenics gone astray, as Dr. Glad suggests. I here gently demur. Rather, the Holocaust was a vast dysgenic program to rid Europe of superior intelligent challengers to the existing Christian domination by a numerically and politically minuscule minority.”
    http://www.whatwemaybe.org/

  • Brian017 08/13/2009 at 3:22

    Further to Mr Berman’s comments regarding IQ, I note William Saletan’s recent articles in Slate observing:
    “research is constantly finding new gene-trait correlations and group differences. If your faith in equality depends on an ethnically or racially even distribution of all ability-influencing genes, you’re in trouble.”
    Professor Steve Hsu has commented on recent research by Professor Neil Risch at Stanford showing that human groups form genetically identifiable clusters. Hsu comments:
    “Two groups that form distinct clusters are likely to exhibit different frequency distributions over various genes, leading to group differences.
    This leads us to two very distinct possibilities in human genetic variation:
    Hypothesis 1: (the PC mantra) The only group differences that exist between the clusters (races) are innocuous and superficial, for example related to skin color, hair color, body type, etc.
    Hypothesis 2: (the dangerous one) Group differences exist which might affect important (let us say, deep rather than superficial) and measurable characteristics, such as cognitive abilities, personality, athletic prowess, etc.
    Note H1 is under constant revision, as new genetically driven group differences (e.g., particularly in disease resistance) are being discovered. According to the mantra of H1 these must all (by definition) be superficial differences.
    A standard argument against H2 is that the 50k years during which groups have been separated is not long enough for differential natural selection to cause any group differences in deep characteristics. I find this argument quite naive, given what we know about animal breeding and how evolution has affected the (ever expanding list of) “superficial” characteristics. Many genes are now suspected of having been subject to strong selection over timescales of order 5k years or less. For further discussion of H2 by Steve Pinker, see here.
    The predominant view among social scientists is that H1 is obviously correct and H2 obviously false. However, this is mainly wishful thinking. Official statements by the American Sociological Association and the American Anthropological Association even endorse the view that race is not a valid biological concept, which is clearly incorrect.
    As scientists, we don’t know whether H1 or H2 is correct, but given the revolution in biotechnology, we will eventually. Let me reiterate, before someone labels me a racist: we don’t know with high confidence whether H1 or H2 is correct.
    Finally, it is important to note that any group differences are statistical in nature and do not imply anything about particular individuals. Rather than rely on the scientifically unsupported claim that we are all equal, it would be better to emphasize that we all have inalienable human rights regardless of our abilities or genetic makeup.”

