Here are two true and famous stories, and one (true!) personal observation, about biased sampling -- two about polling, one about sampling from events; two about nonrandom bias, one about stratum bias.

In 1936, the Democratic candidate for President was Franklin Delano Roosevelt (1882-1945), just finishing his first term as President. The Republican candidate was Alfred M. Landon, Governor of Kansas. (Landon's daughter, Nancy Landon Kassebaum, concluded, in 1996, a distinguished career as Senator for Kansas.) Editors of a well-known magazine, The Literary Digest, decided to conduct a poll to predict the winner of the election. (No, not The Reader's Digest, which wasn't founded until 1922, but The Literary Digest, which no longer exists, for reasons you'll soon understand.)

The problem facing the editors of The Literary Digest was how to obtain the names of those to be queried. The Digest editors took the easy and foolish way out. Telephone books were available for cities, towns, and regions over the country. And lists could be obtained of those buying state license plates. So the names of those to be polled were taken from these two types of lists. It was concluded, from the poll, that Landon would win the election for the Presidency. However, Roosevelt won overwhelmingly in all states except Maine and Vermont, the biggest Electoral College victory in U. S. history up to that time. What went wrong with The Digest's polling?

Well, at this time, in "The Great Depression", more Republicans owned telephones and cars than did Democrats. So the sample was BIASED, with a proportion of Republicans not representative of the country at large. The pollsters thought they were asking the question, "Who are most American citizens going to vote for as President?" But, due to the bias, they were really asking, "Who are most Republicans going to vote for as President?" And the election's answer to that second question was the same as the poll's answer: most Republicans did vote for Landon. But there were, at this time, many more Democratic voters than Republican voters; hence the editors got the right answer -- to the wrong question!
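To make this frame bias concrete, here is a minimal simulation sketch in Python. The party split and the ownership rates below are invented for illustration; they are not the actual 1936 figures.

```python
import random

random.seed(1936)

# A hypothetical electorate: 60% Democrats, 40% Republicans.
# (Illustrative numbers only -- not the actual 1936 proportions.)
population = ["D"] * 60_000 + ["R"] * 40_000

# Suppose the sampling frame (phone books and license-plate lists)
# reaches, say, 60% of Republicans but only 20% of Democrats.
frame = [v for v in population
         if random.random() < (0.60 if v == "R" else 0.20)]

# Even a large poll drawn from this biased frame misleads.
poll = random.sample(frame, 2_000)

print("Republican share of electorate:", population.count("R") / len(population))
print("Republican share of poll:      ", poll.count("R") / len(poll))
# The poll "predicts" a Republican landslide although Democrats
# dominate the electorate -- the right answer to the wrong question.
```

Note that no amount of extra polling from the same lists would have fixed this: enlarging a biased sample only repeats the bias more emphatically.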

The other famous sampling fiasco concerns an enquiry, conducted in 1956, as to whether seeding clouds with silver iodide crystals can cause rain. An extensive experiment was to be conducted by Captain Orville of the Navy, who had a good reputation at the time.

You'll read elsewhere that I spent nearly five years as a weather observer and weather forecaster in the Army Air Corps before, during, and after American participation in World War II ("The Big War", as Archy calls it). Later I taught college courses in meteorology. And, for a period in Puerto Rico, I supervised a weather station operated by students. (It's excellent scimath discipline!) So I was very interested in this seeding enquiry, although then, as now, I'm skeptical about the statistics seeming to indicate that rainmaking is possible. As a physics teacher, I was put on the list of those who would receive the final report.

I began to hear disturbing criticisms about the report long before it came to me, which puzzled me. For I'd read that many distinguished statisticians had been consulted, among them John Tukey of Princeton U., a statistician I very much admire and one whose papers have taught me much. (Tukey is the author of a delightful and informative paper, "Quick and Dirty Methods in Statistics", describing conditions under which statistical shortcuts are permissible.) But when I received the report, I immediately understood the criticism. BIAS!

As too often happens, funding for the experiment was insufficient to do it properly -- which -- DAMMIT! DAMMIT! DAMMIT! -- means that the money ends up wasted, because the experiment becomes suspect! To eke out the funds, two shortcuts were employed. Seeding was to be done only in clouds randomly chosen on randomly chosen days. But -- cheap! cheap! -- seeding was done on clouds "at the experimenter's convenience", spoiling the sampling plan. The snapper, however, was the padding of the survey statistics with data taken from private companies in the business of selling "rainmaking" to farmers! CONFLICT OF INTEREST! One of the consulting statisticians said that Congress might as well have thrown the money into the ocean!

(Here's an aside on that rainmaking subject. After Orville's fiasco, a massive study on rainmaking was carried out in 19xx by Bernard Vonnegut, who happens to be a brother of the writer Kurt Vonnegut. The rumor in "the field" was that not only a brief unhappy work period in a research company but also sibling rivalry soured Kurt on science, to the point of his being anti-science.)

There is another type of bias, related to the two cases cited in the above stories, which I can describe by a personal (true) anecdote. A colleague asked me to prepare a set of 100 random numbers so he could draw a 10% sample (a size usually thought to be "representative") from the student body of our university, to poll for opinions on certain matters. I said I would do this as soon as I had statistics on the make-up of the student body -- numbers of men, numbers of women, numbers of native Puerto Ricans, numbers from the States or other countries, numbers of "freshmen", sophomores, juniors, and seniors, etc. My colleague became irritable and refused to allow me to explain the need for these statistics, saying that he wanted the random numbers "right away". So I gave him a set of 100 random numbers from a Random Number Table, and he proceeded with his sampling.

Later, he returned, very incensed. "I thought those numbers would give me a random sampling. But this sampling has many more women students than men, although the student population has somewhat more men than women. And there are too many non-Puerto Ricans in it. And it is distorted toward juniors and seniors. I thought a random sampling was supposed to take care of those things!"

"Look. You've tossed a coin -- a two-sided coin, with "heads" and "tails"; and you've seen such tossings by others. A "fair" coin is supposed to land "heads" 50% of the time -- IN THE LONG RUN! But you've seen "runs" of "heads" before "tails", and vice versa. Also, you've played bridge. Spades make up a fourth of the pack, so -- IN THE LONG RUN! -- should appear about one-fourth of the time. But you've seen bridge hands with no spades, and hands with all spades. A small random sample can be biased. Bias of this kind disappears or becomes insignificant only with a LARGE SAMPLE." The only surety is TOTAL SAMPLING -- looking at everything.

This explanation made him even more irritated. "Do you mean I have to poll the entire, or nearly entire, student body to get an answer about this matter?"

"No. There's a way to avoid the kind of bias you observed in your sampling, without going beyond the 100 number. I was trying to tell you about it, but you woulldn't listen. The procedure is called 'Stratified sampling'. You use the statistics of the population to determine the important strata of the population and the numbers in each stratum. For this case, it's the numbers of men and women students, the numbers of Puerto Ricans and non-Puerto Ricans, the numbers of students ineach of the four yearly classes, etc. If, say, the population shows 51% men (49% women), then you keep count of the men and women student names drawn in your sampling. If you get to the 50th woman student's name (more than 49% of 150) before you have drawn 51 (51%) men's names, YOU REJECT THAT NAME AND ANY FUTURE WOMAN STUDENT'S NAME UNTIL YOU HAVE BUILT THE GENDER STRATUM IN THE IMAGE OF ITS STRATUM IN THE POPULATION. Similalry, with the other STRATA of origin, class-year, etc."

When I had obtained the relevant statistics and another set of 100 random numbers, he proceeded with his sampling and felt satisfied.

This procedure introduces the very minor bias of REJECTION into the sampling, but it is considered far less important than STRATUM BIAS for certain statistical enquiries.