Deception In Research

“There are no circumstances in which deception is acceptable if there is a reasonable expectation that physical pain or emotional distress will be caused.” (Howitt & Cramer, 2011, p.151). Despite this, there are situations where deception can be used. It depends on the value of the research: if it has scientific or educational value, then the use of deception may be considered. Before an ethics board would accept this method, it would make sure that there was no other way of conducting the research that avoided deception. It may also be important to get the opinion of an independent researcher, so that researcher bias can be eliminated, or at least reduced.

If the research is then given the go-ahead, the psychologist must reveal the deception to the participants as soon as possible. This is likely to take the form of a debrief, explaining why they were deceived and giving them the opportunity to withdraw their data. If a participant is not entirely happy that the deception occurred, the research must be put on hold until a further review is conducted. Fortunately, ethics boards have put in place guidelines and restrictions on the degree to which deception can be used. Nevertheless, throughout psychology deception has played a key role. For instance, in Loftus’ eyewitness testimony experiment, participants sat outside a laboratory; a loud noise was heard, and in one condition a scientist emerged sweating and holding a pen, while in the other condition the scientist emerged holding a bloodied knife. Participants were then asked to identify the scientist. This study will have caused anxiety to the participants; however, if the aim had been outlined from the beginning, the participants would not have been subject to the ‘weapons effect’.

Another study that demonstrates deception would be Milgram’s study (which everyone knows about, so I won’t go into detail). The participants may have suffered from anxiety and minor psychological harm. However, I believe the findings of the study far outweighed the possibility of harm to the participants. In addition, the harm could be addressed through a counselling session. This viewpoint may seem somewhat unsympathetic; however, Christensen (2012) argues that deception is not as unacceptable as it is made out to be. Christensen states that participants enjoy deception experiments more than non-deception experiments, report more educational benefit from the research, and do not mind being deceived or having their privacy breached. Christensen concludes in favour of continuing to allow deception in studies.

I am also in favour of deception in studies. Certainly from a SONA participant’s view, studies are much more interesting when there is something to them other than pressing buttons. I agree with the BPS and other bodies that pain and emotional distress should be minimised; however, if the ends justify the means, as in Milgram’s study, I think deception should be acceptable.

Grounded theory

Grounded theory was developed by the sociologists Glaser and Strauss in 1967 as a reaction against unbridled empiricism and grand theory uninformed by research evidence (Howitt & Cramer, 2011). It is mainly used in qualitative research and is usually documented as a reaction against the dominant sociology of the twentieth century. It is a systematic methodology used in the social sciences, involving the discovery of theories through data analysis. Grounded theory works as a mirror image, or reversal, of the dominant sociology of the 1960s, which may appear to contradict the traditional scientific method; because of this, I cannot see it as being a reliable means of research. Despite this, it works on the basis of collecting data through various methods rather than forming a hypothesis and working from that. The data are then coded, the codes are grouped into similar concepts, categories are formed from these concepts, and theories are devised from the categories.

The reason for this approach was that qualitative research was becoming a more widely accepted domain in its own right, not merely a precursor to quantitative research, and that case studies in themselves were not considered to achieve the full potential of qualitative research. However, grounded theory does not share all the features of other qualitative research methods. For instance, some of those who use grounded theory reject realism, i.e. the idea that somewhere there is a social reality, while many others accept it. Likewise, some followers of grounded theory aim for objective measures, as opposed to measures subjective to the researcher. The founders of grounded theory eventually went their separate ways, mainly because of disputes regarding the extent to which researchers should be directed by their pre-existing ideas.

Like many approaches, it has its flaws. For instance, this method can lead to pointless data collection, because there is no consensus on which areas to research, given that the theory is developed only after the data have been gathered. Another problem Potter (1998) identified is that the method usually revolves around common-sense theories, and notions which go beyond this are usually dismissed; the process of grounded theory merely codifies what ordinary people already think about something they engage in. There is also a risk that the method is used to excuse inadequate qualitative analysis (Howitt & Cramer, 2011). The area to look into is a matter of choice, with no guarantee of any significant finding. The method requires much effort and, as a result, may be difficult to abandon if nothing is found.

I consider this method to be inadequate as it lacks scientific reasoning. In comparison to other methods it is mediocre, as it only looks into basic ideas, whereas other methods offer much deeper insight. I believe that scientific approaches, for instance laboratory research, are a much more accurate and decisive way of producing research. I deem it necessary to form a hypothesis or theory before conducting data collection.

Howitt, D., & Cramer, D. (2011). Introduction to research methods in psychology (3rd ed.). Harlow: Pearson.

Potter, J. (1998). Qualitative and discourse analysis. In A. S. Bellack & M. Hersen (Eds.), Comprehensive clinical psychology: Volume 3. Oxford: Pergamon.

Why is the “file drawer problem” a problem?

The file drawer problem is a publication bias which can affect whether or not a study is published. It depends largely on the significance of the results obtained, or on whether they match the expectations of the researcher or sponsor. For instance, the file drawer problem will often occur when a study fails to reject the null hypothesis, in other words when it shows no statistical significance; such studies are much less likely to be published than significant ones.

 

I believe that the file drawer problem can be a serious problem, because if only statistically significant studies are published, it creates a misrepresentation of the actual situation. I consider the main problem to be that the effects shown, despite being supported by published research, may not be real. If a study is replicated over and over, but only the statistically significant replications are published, then researchers can state a claim that is in fact not true. I would argue that this could have a massive impact on some areas of research and in some cases may be dangerous. For example, if a drug company sponsors a researcher to study the effects of a new drug, and the effects are not what the company expected, these results may be locked away in a ‘file drawer’ and never published; the same research may then be replicated over and over until some sort of relationship is established. Another problem is that a certain topic may have been covered by one researcher who finds no significance and therefore does not publish it, and another researcher may then waste their time studying the same topic, to no avail. I feel this is an unfair and unprofessional way of practising research.

 

There have been many suggestions as to how to counter this problem. One of the more recognised methods was developed by Rosenthal (1979) and was coined the ‘fail-safe file drawer’ analysis. The file drawer problem can create trouble for reviews and is a difficulty when producing a meta-analysis; the fail-safe approach involves calculating a fail-safe number to estimate whether the file drawer problem is likely to affect these reviews or meta-analyses. However, Rosenberg (2005) comments that fail-safe calculations are unweighted and are not based on the common framework within which most meta-analyses are performed. Another problem is that the method fails to incorporate the bias contained in the ‘file drawer’ of unpublished studies, and it can therefore give misleading results (Scargle, 2000).
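To make the idea concrete, here is a minimal sketch of Rosenthal’s fail-safe N, assuming made-up Z-scores for five hypothetical published studies; the figures are purely illustrative and not taken from any real meta-analysis.

```python
# A minimal sketch of Rosenthal's (1979) fail-safe N.
# The Z-scores below are hypothetical, purely for illustration.

def fail_safe_n(z_scores, z_alpha=1.645):
    """Number of unpublished null-result (Z = 0) studies that would be needed
    to pull the combined (Stouffer) Z down to the one-tailed .05 cut-off."""
    k = len(z_scores)
    sum_z = sum(z_scores)
    return (sum_z ** 2) / (z_alpha ** 2) - k

published = [2.1, 1.8, 2.5, 1.4, 2.9]      # hypothetical Z-scores of published studies
print(round(fail_safe_n(published), 1))    # about 37 studies hidden in the 'file drawer'
```

If that number is small, the published effect could easily be an artefact of selective publication; if it is very large, the result is usually treated as more robust.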

 

To sum up, I believe that the file drawer problem is a considerable hindrance to research. Despite attempts to reduce it, more needs to be done. Fortunately, since 2004 several medical journals have stated that they will not publish drug research sponsored by pharmaceutical companies unless it was registered in a public database from the outset. However, not all journals have agreed to do this, so some unreliable research may still get through. I feel there is a clear solution to this: similar to the medical journals, all research should be registered from the beginning. I feel it is necessary for researchers to publish all findings. At the very least, I think this ‘file drawer’ of unpublished research should be made accessible for others to view if they so wish.

 

Rosenthal, R. (1979). The “file drawer problem” and tolerance for null results. Psychological Bulletin, 86(3), 638–641.

Rosenberg, M. S. (2005). The file-drawer problem revisited: A general weighted method for calculating fail-safe numbers in meta-analysis. Evolution, 59, 464–468. doi: 10.1111/j.0014-3820.2005.tb01004.x

Scargle, J. (2000). Publication bias: The “file-drawer” problem in scientific inference. Journal of Scientific Exploration, 14(1), 91–106.

Multiple choice questions…for better or for worse?

Multiple choice questions (MCQs) were developed by E. L. Thorndike, although Frederick J. Kelly was the first to use them as a large-scale assessment. They are a common way of testing, especially when a large number of people are sitting the test. After sitting the end of semester one exams, I was interested in finding out how reliable MCQs really are.

Various research suggests that as long as they are set out following the correct procedure, they are valid. For instance, Considine, Botti and Thomas (2005) state that MCQs need three components for reliability: the stem (the stimulus), the correct answer (or key), and several incorrect but plausible answers (or distractors). Haladyna argues that MCQs are considered to have a high degree of reliability because they have an objective scoring process (as cited in Considine et al., 2005). Haladyna also noted problems with other methods of testing, such as essay tests. For instance, essay tests may be influenced by subjectivity, or by differences between the views of each marker, i.e. different grades depending on who is marking. Essays can also be subject to a number of biases, for example handwriting or answer length, both of which have been shown to affect essay grades (as cited in Considine et al., 2005). As regards the validity of MCQs, there is no way to measure it directly; instead it requires judgement, which is why MCQs need to go through a panel consisting of at least three experts (Polit & Hungler, 1999).

However, this view of MCQs as an effective testing method is not embraced by all. Frederiksen (1984) argues that MCQs are directed at lower-level cognitive processes, for instance memorising simple factual knowledge. Frederiksen (1984) states that elaborate learning activities are not needed for passing an MCQ test; instead the learning involves only superficial engagement with the MCQ topic. It is claimed that people with low-quality knowledge can still arrive at an adequate score in the test (Frederiksen, 1984). Shepard (as cited in Haladyna, 1994) pointed out that by paying too much attention to memorising and testing knowledge, we can overlook more important things, for instance the application of knowledge and skills in real-life situations.

In my opinion, MCQs are a hindrance rather than a help; I do not think they are an appropriate method of testing. This is because if a student has some knowledge of the question but answers incorrectly on an MCQ, they get no marks, whereas in a free-response test the student may still get some marks for their knowledge. Also, if a student has no idea, they can guess and still get the right answer. It is possible, although very unlikely, that a student could get full marks on a topic they know nothing about merely by guessing, which I feel is an unfair way of assessing ability.
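As a rough illustration of just how unlikely ‘full marks by guessing’ really is, here is a quick calculation for a hypothetical 20-question paper with four options per question (the numbers are made up purely for the sake of the example).

```python
# Hypothetical MCQ paper: 20 questions, 4 options each.
n_questions, n_options = 20, 4

# Probability of guessing every question correctly.
p_full_marks = (1 / n_options) ** n_questions
print(f"P(full marks by guessing) = {p_full_marks:.2e}")   # about 9.1e-13

# Expected score from pure guessing is far more modest.
print(f"Expected score = {n_questions / n_options:.0f} / {n_questions}")   # 5 / 20
```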

 

Considine, J., Botti, M., & Thomas, S. (2005). Design, format, validity and reliability of multiple choice questions for use in nursing research and education. p. 21.

Polit, D. F., & Hungler, B. P. (1999). Nursing research: Principles and methods.

Frederiksen, N. (1984). The real test bias: Influences of testing on teaching and learning. American Psychologist, 39, 193–202.

Haladyna, T. M. (1994). Developing and validating multiple-choice test items. New Jersey: Lawrence Erlbaum.

Ethics and research with animals

A contentious topic that many have strong feelings about.

There are several ethical precautions that are deemed necessary for animal testing to be ethical. Psychologists need to be experienced when working with animals, and they need to take into consideration the ‘comfort, health and humane treatment’ of the animals. They need training so that they can deal with things in an appropriate manner, and they need to minimise any adverse effects, for instance pain, and ensure freedom from infection and illness. In order for the work to be ethical, there must be no other way of conducting the research. In addition, the process needs to be justified, in the sense that it benefits education or science and has an applied value. It is deemed necessary for anaesthetics to be used before and after procedures so that any pain is kept to a minimum, and it is required that, if necessary, the animal’s life is ended in a quick, painless way (Howitt & Cramer, 2011).

However, regardless of whether these methods are employed, is animal testing right or wrong?

Generally, those in the scientific community are in favour of using animals in research. Their main claim is that humans are superior to animals; they would argue that, even with the pain to animals kept to a minimum, it is preferable to cause pain to animals than to humans. Another reason given for continuing animal testing is the medical breakthroughs it has produced. For instance, a drug to control HIV was developed by testing on dogs, resulting in many being killed; despite this, the drug has prevented many people dying, so the ends, it is argued, justified the means.

However, this view is not supported by all. Some would argue that one main issue with animal testing is that animals cannot consent to being part of the research. The other major concern is that it causes animals a considerable amount of pain and suffering: despite anaesthetics being necessary for ‘ethical testing’, some tests require pain to be felt. Thus, many would consider this to be unethical.

My opinion is that animal testing is right under certain circumstances. For instance, I think that as long as the ends justify the means, then it is appropriate for it to be carried out. Seligman (1967) used dogs in his learned helplessness experiment, which involved dogs being put in cages and administered electric shocks; even when the dogs could avoid the shocks they did not, as this was what they were used to. Despite this experiment being considered unethical, it has been influential in many areas, especially the study of depression. I also feel that the conditions the animals are kept in should be improved, as some of them are horrendous. As of 2009, cosmetic testing on animals was banned. I completely agree with the banning of cosmetic testing on animals, as cosmetics are a luxury; they are not a necessity, unlike medicine, which is required to save many lives. I also feel that testing household products on animals is wrong; again, it is not a necessity.

This video shows some of the conditions the animals are subjected to: http://www.youtube.com/watch?v=nKhFFcQag1M

 

Carmelite Nuns

This week I have chosen to talk about Carmelite nuns and their neurological components. The main study here is by Beauregard and Paquette (2006). The title, ‘Neural correlates of a mystical experience in Carmelite nuns’, is reasonably accurate, seeing as the main aim of the study was to discover which parts of the brain are active during a mystical experience (a sense of union with God). The results showed that mystical experiences produced significant activity in certain areas of the brain. However, I believe that the conclusions are not entirely justified, for several reasons. For instance, the study looks only at the biochemical explanation and is therefore reductionist; it could be argued that other factors played a role. It was also a correlational study, meaning that cause and effect cannot be established. In Beauregard and Paquette’s defence, though, they never claimed that identifying the neural correlates of mystical experiences would diminish the value of God; they state that the neural findings neither confirm nor disconfirm the reality of God.

All of the participants gave written informed consent and the study was approved by an ethics committee. Regardless, there are further limitations of the study, one being the sample. To Beauregard and Paquette’s credit, they followed the correct procedure in choosing participants: there was a varied age range (23–64), none had any history of psychiatric or neurological disorders, and the nuns were non-smokers who did not use psychotropic medication. Despite this, the sample size was relatively small; only 15 participants took part, which makes it harder to generalise to the whole population. Another limitation is the procedure of the study: the nuns were asked to draw on previous experiences while being assessed in the fMRI scanner to detect brain activity. This could arguably be different from the way the brain responds during an actual mystical experience.

Highfield, an editor at The Telegraph, wrote an article about the Beauregard and Paquette (2006) study. However, the headline made it out to be something it is not: ‘Nuns prove God is not figment of the mind’. This suggests there is empirical evidence showing that God is real, which is not true. The study, as I mentioned, was correlational; there is no evidence that increased activity in certain areas of the brain shows God is real. I would argue the headline is wrong because it only tells half the story. The article itself describes the study reasonably accurately, as it includes the procedure, findings and so on, but it sugar-coats them to make the headline seem more plausible. On the whole topic, I am a man of science and think that the study downplays science, which is further exacerbated by the news article.

Depression

Something we should all avoid, unless you believe it to be hereditary, in which case there’s not a lot you can do…

Depression is a mood disorder which can affect how someone functions normally. It can affect a person’s perception, the way in which they think and the way that they behave. There are many mood disorders, but depression is the most common. There are several clinical characteristics of the disorder. Physical symptoms include insomnia or hypersomnia, a change in appetite, and pain (often headaches). Emotional symptoms include feeling sad, mood changes throughout the day and anhedonia (no longer enjoying activities that were once found pleasant). Cognitive symptoms include negative beliefs, suicidal thoughts and difficulty concentrating.

There are many explanations of depression, which would take more than one blog to cover, so for this reason I am only discussing the biological approach. There are two strands to the biological explanation of depression, the first being genetic factors, i.e. inherited traits. It seems that being related to someone with depression increases a person’s chances of developing the disorder. McGuffin et al. (1996) found that if one monozygotic twin had major depressive disorder (MDD), then in 46% of cases their co-twin was also diagnosed with MDD; in dizygotic twins the concordance rate was 20%. Wender et al. (1986) studied adopted children who had MDD and found that their biological parents were eight times more likely to have depression than their adoptive parents.

The second biological explanation involves biochemical factors. The permissive amine theory developed by Kety (1975) states that serotonin controls levels of noradrenaline: when serotonin levels are low, noradrenaline levels fall too, resulting in depression. Antidepressant drugs such as SSRIs work by increasing the availability of serotonin by preventing its reuptake, which suggests that low levels of serotonin could be an explanation of depression. Post-mortems of suicide victims have also shown abnormally low serotonin levels.

However, both of these biological explanations have weaknesses. With the genetic factors, the concordance rates are not 100%, so we cannot reach a firm conclusion that genes are the cause; it could be that environmental factors play a significant role. Another problem is that genetic factors only explain depression caused by internal factors (endogenous depression); psychological factors seem to better explain reactive depression. The problem with the biochemical factors is that just because antidepressants relieve the symptoms does not mean they treat the cause; this can be seen as a ‘chemical straitjacket’. Another difficulty is that depression may be the cause of low serotonin levels, not the result. The final drawback is that there are many other psychological explanations which offer different accounts of depression.

Of course, without looking at other explanations this account is reductionist, and there may be explanations that offer much sounder reasoning. Nevertheless, of the factors I have described, I believe that the biochemical factors hold a stronger argument than the genetic factors.

Sampling

There are several methods for sampling participants into research groups; they fall under three main categories: random sampling, stratified random sampling and non-random sampling.

Random sampling: under this category there are three main types of sampling. The first is the simple random sample, which is merely picking your sample at random from a list of names. The second is systematic sampling, in which every ‘nth’ name in a list is chosen. The third is the multi-stage sample, where, for instance, schools are chosen at random and then pupils are again chosen at random from those schools (Howitt & Cramer, 2011). The advantages of random sampling are the ease of assembling the sample and its fairness, as it gives everyone an equal opportunity to be chosen. It is also representative; the only thing that hinders representativeness is luck, which cannot be avoided. However, there are disadvantages too, the main one being that to be truly random it requires a list of the whole population, which for large populations is unfeasible.
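To make the first two types concrete, here is a minimal sketch of a simple random sample and a systematic sample, assuming a made-up sampling frame of 1,000 names and a target sample of 50 (the names and numbers are purely illustrative).

```python
import random

frame = [f"person_{i}" for i in range(1000)]   # hypothetical list of the whole population
n = 50

# Simple random sample: every name has an equal chance of selection.
simple = random.sample(frame, n)

# Systematic sample: every kth name after a random starting point.
k = len(frame) // n
start = random.randrange(k)
systematic = frame[start::k][:n]

print(simple[:3], systematic[:3])
```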

Again, there are three types of stratified random sampling. The first is the stratified sample, which is structured, for instance taking random samples of males and females in proportion to the population. The second is the disproportionate stratified sample, which involves oversampling from groups of interest that are relatively uncommon. The final one is the cluster sample, which requires choosing geographical areas and then picking random samples from within each chosen cluster (Howitt & Cramer, 2011). The main advantage of this approach is that it captures key population characteristics and often creates a proportionate sample. However, it only works when the relevant population characteristics are known in advance, which is not always the case, and it is deemed ineffective when clear subgroups cannot be formed.
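Here is a similarly minimal sketch of a proportionate stratified sample, assuming hypothetical strata of 600 women and 400 men and a total sample of 50 (again, the figures are invented for illustration).

```python
import random

strata = {
    "women": [f"woman_{i}" for i in range(600)],
    "men":   [f"man_{i}" for i in range(400)],
}
total_n = 50
population = sum(len(group) for group in strata.values())

sample = []
for label, group in strata.items():
    share = round(total_n * len(group) / population)   # this stratum's share of the sample
    sample.extend(random.sample(group, share))

print(len(sample))   # 50 in total: 30 women and 20 men
```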

Finally, there is non-random sampling, which has many types. The first is the quota sample, in which participants are chosen by the researcher because they fit specific criteria of the research. Another is convenience sampling, in which the researcher selects participants because they can easily be found. The third is the snowball sample, in which the participants chosen are asked to nominate people similar to themselves. Purposive sampling is a method of choosing people because they are of theoretical interest to the research. And finally there is theoretical sampling, which is choosing participants throughout the research as the emerging theory develops (Howitt & Cramer, 2011). The obvious advantage of these methods is that they are easy to use, inexpensive and convenient; however, it is argued that this advantage is heavily outweighed by the disadvantage of researcher bias.

The three main categories all have advantages and disadvantages. Regardless, I think that the advantages of random sampling are more compelling than those of the other methods. To sum up, I believe it is more beneficial to use one of the random sampling methods, for instance the systematic random sample, as it comes closest to a representative sample.

Alcoholism

After the introduction to university nightlife many of us have had, I thought this could be a fitting topic.

The neurological approach to addiction suggests that the highs and lows of addiction are caused by fluctuating levels of the neurotransmitter dopamine. Dopamine is released at synapses in the brain and can affect motivation and pleasure, and drugs such as alcohol affect these levels. Once the dopamine has been removed through reuptake, the feelings disappear; to regain them, more of the substance is needed. Repeated use can cause tolerance, so that more of the substance is needed to gain the same effect. If the user stops taking the substance, they feel withdrawal symptoms, which are the opposite of the drug’s effect.

The genetic approach suggests that addictions are inherited. Sayette and Hufford (1997) concluded that MZ twins show a higher concordance rate for alcoholism than DZ twins, which would suggest that to some degree alcoholism is caused by genes. This would explain why, despite a large number of people drinking alcohol, only a minority become alcoholics. However, the concordance rates were not 100%, so alcoholism cannot be fully attributed to genes.

The cognitive model of addiction suggests that alcoholism can be explained by looking at the thought processes behind the decision to drink. If someone has faulty processing when it comes to dealing with social situations, they might fall back on alcohol. This can be affected by several things: the addict’s perception of the behaviour, for example thinking that alcohol makes them feel more confident; their perception of others’ opinions, for example feeling that they need to drink to fit in; and their perception of their ability to control their own behaviour, for instance not being able to cope in social situations without drinking.

The learning approach to alcoholism looks at the role the environment plays. Repeatedly using a substance, in this case alcohol, in the same environment leads to associations between the alcohol and stimuli in that environment, in this case a pub or club: for example, loud music and lighting. When an alcoholic is faced with one of these stimuli, the body compensates for the effect it expects to receive; when the effect is not felt, the opposite feelings occur. This is why recovering alcoholics often relapse.

In conclusion, there are several theories as to why we can become addicted to substances such as alcohol; however, it is hard to conduct research on this topic because it would be ethically wrong to induce alcoholism. In my opinion the learning approach is the most likely explanation, because studies have shown that the same principle applies to heroin addicts, which is why anti-drug campaigns no longer use posters showing drug paraphernalia.

What is the best method when designing a psychological investigation, correlational research, or experiments?

Investigations are critical to psychologists; without them we would be left with mere theories and speculation. There are many ways in which investigations can be conducted, some arguably more effective than others. One example is correlational research, which investigates the relationship between two variables. There are positive correlations, where as one variable increases so does the other, and negative correlations, where as one variable increases the other decreases. An example of a correlation is Akers and Lee’s study of the effect of social learning over time, in which they aimed to find a relationship between how frequently school students smoked and ‘social learning variables’; they found significant positive correlations between the social learning variables and smoking (Akers & Lee, 1996). The main advantage of a correlation is that it allows us to study areas that would otherwise be unethical to manipulate, for instance the number of cigarettes a person smokes and their chances of cancer. However, a major disadvantage is that no causal relationship can be established, as another variable may have an influence.
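As a small illustration of what a positive correlation looks like, here is a sketch using made-up scores for ‘social learning variables’ and cigarettes smoked; the data are invented purely to show the calculation, not taken from Akers and Lee.

```python
import numpy as np

# Hypothetical scores: exposure to social learning variables vs cigarettes per week.
social_learning = [1, 2, 3, 4, 5, 6, 7, 8]
cigarettes      = [0, 1, 1, 3, 4, 4, 6, 7]

r = np.corrcoef(social_learning, cigarettes)[0, 1]
print(f"Pearson's r = {r:.2f}")   # close to +1: as one variable rises, so does the other

# Note: even a very strong r says nothing about cause and effect.
```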

Another way in which investigations are designed is through experiments in a laboratory setting. Studies conducted in a laboratory have high control over all variables apart from one, which is manipulated to see what the outcome is. An example of a laboratory experiment is Zimbardo’s study showing that anonymity affects behaviour (Zimbardo, 1969); this study was tightly controlled and a causal relationship was established. An advantage of this method is that it can be replicated, so the experiment can be retested to make sure the same outcome is reached. Another advantage is that cause and effect can be established. However, there are several disadvantages. Because the setting is artificial, there is a lack of ecological validity. The tests may also be open to demand characteristics, as participants’ behaviour can often change if they know they are being watched. The final downside is the ethical implications, as deception is often used, which makes informed consent very difficult.

Both of the methods described have pros and cons; however, I believe that laboratory experiments are a more effective way of gathering data. There may be more drawbacks to this method, but the fact that it can establish a causal relationship is very important. Without this the research remains speculation, because without proper controls in place, not every variable will be identified.

This is my opinion; what do you think?