A **multiple comparisons** procedure is a statistical method that guards against the multiple inference problem: when a large number of tests are carried out, some null hypotheses are likely to be rejected by chance alone.
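One widely used multiple-comparisons procedure is the Bonferroni correction, which divides the family-wise significance level by the number of tests. A minimal sketch (the p-values below are made up for illustration):

```python
def bonferroni(p_values, alpha=0.05):
    """Return, for each p-value, whether it remains significant after
    dividing the family-wise alpha by the number of tests."""
    threshold = alpha / len(p_values)
    return [p <= threshold for p in p_values]

# Three hypothetical p-values at family-wise alpha = 0.05:
# each individual test must now clear 0.05 / 3, about 0.0167.
print(bonferroni([0.001, 0.04, 0.20]))  # [True, False, False]
```

At the usual per-test level of 0.05 the second p-value (0.04) would have been declared significant; the correction trades that away to keep the chance of *any* false positive near 0.05.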

**Multiple testing** can produce “false positives”: when a large number of significance tests are conducted, some individual tests will be deemed significant purely by chance, even if the null hypothesis is true for every one of them.
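To see how quickly this becomes a problem, one can compute the chance of at least one false positive among m independent tests, each run at level 0.05. A short sketch:

```python
alpha = 0.05  # per-test significance level

# If every null hypothesis is true and the m tests are independent,
# P(at least one false positive) = 1 - (1 - alpha)^m.
for m in (1, 10, 100):
    p_any = 1 - (1 - alpha) ** m
    print(f"m = {m:3d} tests: P(at least one false positive) = {p_any:.3f}")
```

With 10 tests the chance of at least one spurious “discovery” is already about 40%, and with 100 tests it is nearly certain.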

It is important to distinguish **statistical significance from practical significance**, and to see how that distinction relates to both the large-sample caution and the small-sample caution. With a large sample, even small, unimportant deviations from the null hypothesis will be statistically significant (though of no practical significance). With a small sample, differences that might be of real practical importance can go undetected (and would not be **statistically significant**).
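The large-sample caution can be illustrated with a one-sample z-test: the same tiny deviation from the null (here 0.01 standard deviations, a hypothetical value chosen to be practically negligible) is non-significant at n = 100 but highly significant at n = 1,000,000. A sketch using only the standard library:

```python
from math import erf, sqrt

def z_test_p(effect_sd, n):
    """Two-sided p-value for a one-sample z-test where the observed
    mean sits `effect_sd` standard deviations above the null value."""
    z = effect_sd * sqrt(n)
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

# The same practically negligible effect (0.01 sd)...
print(z_test_p(0.01, n=100))        # ~0.92: not significant
print(z_test_p(0.01, n=1_000_000))  # essentially 0: "significant", yet unimportant
```

Nothing about the effect changed between the two calls; only the sample size did, which is exactly why a significant p-value alone does not establish practical importance.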

With a small sample, important effects may go undetected as **undetected differences**, producing **Type II errors**. The large variability inherent in small samples leaves the null hypothesis as a plausible explanation of the data even when a true effect exists.
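A quick power calculation shows the same mechanism from the other side. For a hypothetical true effect of 0.5 standard deviations, a two-sided z-test at alpha = 0.05 has low power at n = 10 but high power at n = 100 (the approximation ignores the negligible chance of rejecting in the wrong tail):

```python
from math import erf, sqrt

def approx_power(effect_sd, n, z_crit=1.96):
    """Approximate power of a two-sided z-test at alpha = 0.05,
    ignoring the tiny probability of rejecting in the wrong tail."""
    shift = effect_sd * sqrt(n) - z_crit
    return 0.5 * (1 + erf(shift / sqrt(2)))  # standard normal CDF

print(approx_power(0.5, n=10))   # ~0.35: Type II errors are likely
print(approx_power(0.5, n=100))  # ~0.999: the effect is reliably detected
```

At n = 10 the test misses this real, sizable effect roughly two times out of three, which is precisely the Type II error risk the small-sample caution warns about.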

Institutions that conduct research, such as universities and major hospitals, have an **Institutional Review Board (IRB)**: a diverse panel of scientists and community members whose job is to evaluate the ethical conduct of the research carried out at that institution.

When research involves human subjects, investigators must show that each subject has given **informed consent**. The **informed consent document** must describe in plain English i) the research and what is requested of the subjects, ii) any anticipated risks and potential benefits, iii) any alternatives to participation, iv) provisions for maintaining the subject’s privacy and the confidentiality of records, and v) the right to leave the study at any time without any detriment to the subject.