In this post, I will cover some basics of conducting psychological research. Specifically, I will briefly outline its different methods and types, and then discuss the five steps involved in any research project.
Types of research
The type of research depends on its goals. If a study aims to describe individual variables as they occur in their natural environment, descriptive methods should be applied. Correlational and experimental methods, on the other hand, examine the relationships between variables.
a) Correlational design describes the relationship between variables simply by observing how they occur naturally; no manipulation of variables is involved. This design shows whether such a relationship exists, but it says nothing about the cause of that relationship; it does NOT prove causality (in other words, that a change in one variable is caused by a change in another). This is very important to remember: it can be tempting to argue for causality, especially when a strong correlation is found, but this is not necessarily the case, as there could be another, unaccounted-for variable that caused the changes in both.
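To make the third-variable problem concrete, here is a minimal Python sketch with invented data: ice-cream sales and drowning incidents correlate strongly, yet neither causes the other - both are driven by temperature.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented monthly figures: both series rise with temperature
temperature = [15, 18, 21, 24, 27, 30]
ice_cream   = [12, 15, 19, 25, 31, 38]   # sales, driven by temperature
drownings   = [2, 3, 4, 6, 8, 9]         # incidents, also driven by temperature

# Ice cream and drownings correlate strongly, yet neither causes the other
print(round(pearson_r(ice_cream, drownings), 3))
```

The strong correlation between ice cream and drownings disappears as a causal claim once the confounding variable (temperature) is taken into account.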
b) Experimental design is used to detect cause-and-effect relationships. There are always at least two variables involved in an experiment; the experimenter manipulates one of them (the independent variable, or IV) to see whether it causes changes in another (the dependent variable, or DV).
Let us consider the following example. Say we want to find out whether a new drug relieves headaches. In this case we would need two groups - experimental and control - which gives us the opportunity to manipulate our IV: in this case, taking the drug. The experimental group would take the real drug, while the control group would take a placebo instead.
If the only difference between the groups is the presence or absence of the IV, then any difference in the DV between them must be due to our manipulation of the IV. Thus, if headaches went away in the experimental group but remained in the control group, we can quite safely conclude that the treatment worked. Control groups are essential for establishing causality.
There are several important points to remember, though. As I said, it is essential to make sure that the IV is the only difference between the groups. In real life this is hardly possible, of course - we simply cannot find people with the same genes, in the same mood, with the same health history, and so on. This is why it is very important to have a substantial sample and to assign participants to groups completely at random (so that each subject has an equal chance of being in either group). This equalises individual differences between the groups.
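Random assignment like this can be sketched in a few lines of Python (the participant IDs are hypothetical):

```python
import random

def random_assign(participants, seed=None):
    """Randomly split participants into experimental and control groups,
    giving each subject an equal chance of being in either group."""
    rng = random.Random(seed)      # seeded for reproducibility
    shuffled = participants[:]     # copy, so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical subjects
experimental, control = random_assign(participants, seed=42)
```

With a large enough sample, this procedure balances unmeasured individual differences (genes, mood, health history) across the two groups on average.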
Research Cycle
Five steps can be identified in any psychology research project.
1. Review of the literature
This step is necessary in order to identify a gap or problem in previous research in the area and to state why a new experiment is needed.
2. Formulate a hypothesis
A hypothesis is a formal statement of predictions derived from earlier research or theory, and it needs to be operationally stated. In other words, it predicts the relationship between variables. An operational definition states exactly what actions/operations need to be carried out in order to detect or measure a phenomenon. An example of an experimental hypothesis could be: 'The later students wake up on average, the worse their exam grades are.'
3. Design the study
The study can be designed in several ways depending on how the IV(s) will be manipulated.
a) Within-subjects design. This means collecting data from the same sample, but on multiple occasions, perhaps under different conditions.
b) Between-subjects design. This means having two mutually exclusive groups (just like in our drug experiment).
c) Mixed design. This combines both approaches: some IVs are manipulated within subjects and others between subjects.
It is important to think about the sample, which should be representative of the population. It should be random and of a substantial size. An appropriate sample size can be determined by looking at similar studies and the effect sizes found in them. As for the importance of randomising your sample, I talked about it in more detail in the Statistics section. In short, all members of the population should have an equal chance of being represented in the sample.
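As a quick illustration (the population of student IDs here is made up), Python's standard library can draw a simple random sample in which every member of the population has an equal chance of selection:

```python
import random

population = list(range(1, 1001))    # e.g. 1,000 hypothetical student IDs
rng = random.Random(7)               # seeded for reproducibility
sample = rng.sample(population, 50)  # each ID is equally likely to be drawn

print(len(sample))
```

Note that `random.sample` draws without replacement, so no participant can appear in the sample twice.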
In general, the experiment should be designed so that it addresses the posed question (i.e., tests the hypothesis), controls for extraneous variables, and produces generalisable results.
4. Analyse the data
After we find a relationship (if any) between the variables in the sample, the results need to be generalised to the population. This is where we use inferential statistics: a process which, based on the sample data, evaluates whether the experimental hypothesis is likely to be true for the population.
In order to accept the experimental hypothesis, we first need to reject the null hypothesis (H0), which predicts no systematic difference between the groups/conditions; it states that any observed differences occurred by chance. We test the null hypothesis using statistical procedures; if it cannot be rejected, it is not credible to generalise the findings to the population.
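One simple, assumption-light way to test a null hypothesis is a permutation test: shuffle the group labels many times and count how often a difference at least as large as the observed one arises by chance alone. A sketch in Python, with made-up headache-severity scores for our drug example:

```python
import random
from statistics import mean

def permutation_test(group_a, group_b, n_perms=10_000, seed=0):
    """Estimate the probability (p-value) of seeing a group difference
    at least as large as the observed one if the null hypothesis is true."""
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perms):
        rng.shuffle(pooled)  # reassign labels at random (null hypothesis)
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perms

# Hypothetical headache-severity scores after treatment (lower = better)
experimental = [2, 1, 3, 2, 1, 2, 3, 1]
placebo      = [5, 6, 4, 5, 6, 5, 4, 6]
p = permutation_test(experimental, placebo)
```

A small p (conventionally below 0.05) means a difference this large would rarely arise by chance, so we reject the null hypothesis.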
5. Draw conclusions
After the results are analysed, there are several things that need to be considered:
1. Theoretical and applied implications of findings
2. Possible limitations of the study (e.g., generalisability, representativeness of the sample used, research method and protocol)
3. Suggestions for future research in the area