Research Missing the Point…

This week I stumbled across an article on the benefits of playing Tetris for PTSD (Post-Traumatic Stress Disorder) flashbacks.  The article is titled “Can Playing the Computer Game ‘Tetris’ Reduce the Build-Up of Flashbacks for Trauma? A Proposal from Cognitive Science”.  This seemed an excellent idea, until I read it…

The research, in my view, is essentially pointless.  Firstly, it focuses on the immediate aftermath of a traumatic event (six hours), on the theory that this is the window in which traumatic memories are consolidated into long-term storage.  This is theoretically logical; however, its practical value is limited.  If an individual experiences a traumatic event, the chance of them being able to play a game of Tetris within six hours is slim. (Think of the people at high risk of experiencing PTSD flashbacks, such as those in the army – is playing Tetris during battle realistic?)

Secondly, the ecological validity of this experiment is seriously compromised.  The researchers showed participants a distressing video and then separated them into two groups: one was assigned to play the spatio-visual game Tetris, while the other had no intervention.  For the researchers to compare a video to the traumatic events sufferers of PTSD go through is not valid; it has no basis in reality.

Finally, I contend that as well as being impractical and invalid, the research is also unethical.  The researchers put the participants in a position that was, by their own admission, intentionally distressing and meant to cause traumatic flashbacks, and all this for research that is, in my opinion, pointless.

Posted in Uncategorized | 4 Comments


A study by Schwartz and Susser (2011) argues that when we use control groups in research we make a crucial mistake: selecting participants who are TOO healthy.  To illustrate their position, below are two adverts that I have made up:


People with schizophrenia who live in highly urban or rural areas are required to undertake a brief psychological experiment.  Males and females permitted, between the ages of 18-60.


People who live in highly urban or rural areas are required to undertake a brief psychological experiment.  Males and females permitted, between the ages of 18-60.  Volunteers must not have any ongoing medical conditions and/or a history of psychiatric problems.

Say I received plenty of applicants for both the study and the control group, and I want to test whether there is a relationship between where people live and a diagnosis of schizophrenia.  The problem Schwartz and Susser highlight is that more exclusions are made on the control group than on the experimental group.  For example, suppose the majority of people with schizophrenia also have diabetes: any relationship I found with where they live could then be driven not by schizophrenia, as I was trying to establish, but by diabetes.  Any conclusions I drew would therefore be invalid.

As you can see, this poses a huge problem: we need a control group to compare against to see whether there is a genuine link, and without one our results would be limited.  The control group has to be healthy to ensure there are no confounding variables, yet it seems this classic case-control design has serious flaws.  One way to combat this is to apply the same restrictions to the experimental group as to the control group, as shown below.


People with schizophrenia who live in highly urban or rural areas are required to undertake a brief psychological experiment.  Males and females permitted, between the ages of 18-60.  Volunteers must not have any ongoing medical conditions and/or a history of psychiatric problems OTHER THAN schizophrenia.


People who live in highly urban or rural areas are required to undertake a brief psychological experiment.  Males and females permitted, between the ages of 18-60.  Volunteers must not have any ongoing medical conditions and/or a history of psychiatric problems.

This, to me, seems the only way to tackle the issue.  However, it would create its own problems: there is an extremely high rate of comorbidity among people diagnosed with schizophrenia, so this restriction would limit the sample size considerably, making the results in this instance less reliable.
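The shared-restriction approach can be sketched as a simple eligibility filter. (This is a made-up illustration, like the adverts above; the names, ages, and conditions are invented, and the point is only that cases and controls pass through the *same* rule, with the case group's own diagnosis the one permitted exception.)

```python
# Hypothetical sketch: one shared eligibility rule for cases and controls,
# so the control group is not screened more strictly than the case group.
def eligible(person, allow_diagnosis=None):
    conditions = set(person["conditions"])
    if allow_diagnosis:
        conditions.discard(allow_diagnosis)  # the studied diagnosis itself is allowed
    return 18 <= person["age"] <= 60 and not conditions

volunteers = [
    {"age": 30, "conditions": ["schizophrenia"]},              # eligible case
    {"age": 45, "conditions": ["schizophrenia", "diabetes"]},  # excluded: comorbidity
    {"age": 25, "conditions": []},                             # eligible control
    {"age": 70, "conditions": []},                             # excluded: age
]

cases = [p for p in volunteers
         if "schizophrenia" in p["conditions"]
         and eligible(p, allow_diagnosis="schizophrenia")]
controls = [p for p in volunteers
            if "schizophrenia" not in p["conditions"] and eligible(p)]

print(len(cases), len(controls))  # 1 1
```

The case with comorbid diabetes is now excluded by the same rule that would have excluded a diabetic control, which is exactly the symmetry Schwartz and Susser call for.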

What do you think, should we carry on as we are with the classic controls or do we need to look at an alternative way?

Posted in Uncategorized | 3 Comments

Psychology students detrimental to psychological research…

This week’s blog discusses whether or not psychology students are limiting the validity of the results gathered in universities.


I found an article by Witt, Donnellan and Orlando (2011) in which they evaluated whether demographic and personality variables affected the mode and timing of participation in a subject-pool study.  The mode was whether participants chose to complete the study online or in person; the timing was the week of the semester in which they took part.


The evaluation focused on psychology undergraduates, their reasoning being that most universities use students as participants, and many studies use psychology students who take part in order to gain academic credit (think SONA experiments).


Witt, Donnellan and Orlando found some interesting results:

  • The students who chose in-person experiments rather than online were more extroverted


  • Women and more conscientious students were more likely to complete the studies at the beginning of the semester


The implication for further research that they outlined was that researchers using student samples need to consider how these participant characteristics may affect the conclusions drawn from their work.


One of the problems with this study is the personality test used to decide which students are more extraverted.  Block (1995) in particular was sceptical of the validity of such tests.  If, as Block suggests, these personality tests do not measure what they are meant to, then this casts serious doubt on what Witt, Donnellan and Orlando found regarding the mode of experiments.  However, other articles (e.g. Egan, Deary and Austin, 2000) contradict this and show that this particular personality test is reliable and a useful tool for determining personality traits.  Taking both of these into account, and drawing on personal experience, it makes sense that more extraverted students would prefer experiments where they get to see the experimenter in person, whereas more introverted students may feel less comfortable in that situation.


The second finding also raises some issues: to test whether students were conscientious, the researchers gave them a self-report questionnaire, and there is no way to know whether the responses were accurate.  Again, however, I believe that in this case the results can be trusted; they are statistically significant and make logical sense.


Overall I believe the results of this study and feel they are something that needs to be considered.  Especially when the data collected are relied upon so heavily, they should be thoroughly examined.  What do you think – do you agree with the results they found?

Posted in Uncategorized | Leave a comment

Likert them or not likert them?

This week’s blog is inspired by last week’s Research Methods seminar.  As I was typing the data into SPSS, my mind drifted to how useful the Likert scale is.  The questionnaire we were looking at, and indeed a lot of questionnaires, uses Likert scales to try to measure participants’ attitudes towards a particular subject.  What I want to know is how reliable they are as a research tool.

Likert scales have the advantage that they provide a range of responses rather than just a yes/no, black/white answer.  They allow for degrees of opinion (those grey areas) and even an option for no opinion at all (though this isn’t the case for ALL forms of Likert scale).  I think providing this range of responses is essential, especially when dealing with humans, whose attitudes aren’t clear-cut: we may think something like abortion is wrong but allow it in certain circumstances, such as when the mother has been raped.  The Likert scale allows for this aspect of human nature.

And as we all know from Friday’s seminar, we can even obtain quantitative data from Likert scales, which makes it easy to analyse and to see whether the results are statistically significant.  The Likert scale, to me, bridges the gap between qualitative and quantitative data.  It captures the range of human attitudes but allows them to be scientifically analysed: the best of both worlds!

HOWEVER, there are some aspects of using the Likert scale that aren’t so great.  Like all surveys, the validity of a Likert scale can be compromised by social desirability.  Individuals do not want to paint themselves in a bad light, so they may lie, or tone down what they strongly believe, in order to appear more socially acceptable.  This is a recognised problem within psychology and the other social sciences; one method that seems to help is self-administration of questionnaires with no self-identifying questions included (Nederhof, 2006).

There has also been research suggesting that Likert-scale responses differ by culture and nationality (Lee et al., 2002).  This is a variable that could influence the results and is not reflected in the evaluation of Likert-scale responses.  However, this would not be an issue if the questionnaire were conducted within a single culture at a specific time.

One final criticism of the Likert scale that I am going to outline is the problem of pseudoneglect: an attentional bias in normal individuals that makes the left-sided features of a stimulus more noticeable than the right.  Nicholls et al. (2006) found that pseudoneglect influences participants’ responses to Likert scales: they are more likely to choose the answer on the left of the page than the right.  This causes issues for the reliability of the scale, and for the validity of any research that uses it.  There is a way to overcome this bias, though; the most favoured suggestion is that half of the respondents complete the survey with an ascending scale (1-5, Disagree –> Agree) and the other half with a descending scale (5-1, Agree –> Disagree).  Taking an average of the responses across the two versions should provide an accurate indication of opinion.
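The counterbalancing fix is easy to see with a toy example. (This is my own sketch, not code from Nicholls et al.; the response values are invented.) Responses given on the descending layout are reverse-coded back onto the ascending 1-5 scale before the two halves are pooled, so any left-side bias pushes the two versions in opposite directions and averages out:

```python
# Reverse-code a response from a descending 5-1 layout onto the ascending
# 1-5 scale: a "5" ticked on the descending form means the same attitude
# as a "1" on the ascending form.
def reverse_code(response, scale_min=1, scale_max=5):
    return scale_max + scale_min - response

ascending = [4, 5, 3, 4]   # half the sample: 1=Disagree ... 5=Agree
descending = [2, 1, 3, 2]  # other half: same attitudes on the reversed layout

pooled = ascending + [reverse_code(r) for r in descending]
print(sum(pooled) / len(pooled))  # 4.0
```

If pseudoneglect nudged everyone slightly towards the left-hand option, the ascending group would be nudged down and the descending group up, and the pooled mean would sit close to the unbiased value.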

In conclusion, Likert scales have both benefits and drawbacks; however, I believe the benefits outweigh the possible negatives.  That said, it is important that researchers are aware of these undesirable aspects and address them properly; some ways of doing so have been outlined in this blog.  What do you think: are Likert scales a good or bad research method?

Posted in Uncategorized | 3 Comments

When Correlations aren’t all they’re cracked up to be…

This week’s blog is on a research paper named “Puzzling high correlations in fMRI studies of emotion, personality, and social cognition” (Vul et al., 2009), previously titled “Voodoo Correlations”.

Vul et al observed that the correlations reported between brain imaging (fMRI) and measures of emotion, personality, and social cognition were extremely high (approximately >.8) and wanted to work out why.  After a thorough analysis of the previous studies, two major points came up:


  1. The initial problem they identified was that the correlations reported across these different studies were “impossibly high”.  Vul et al claimed that the reliability of both fMRI measures and personality measures is limited, and that this places an upper bound on the possible correlation that can be observed between the two.  Vul et al used a well-known equation to test this assumption:

rObservedA,ObservedB = rA,B × √(reliabilityA × reliabilityB)

This basically means that the strength of the correlation observed between Measures A and B[1] (rObservedA,ObservedB) reflects not only the strength of the relationship between the traits underlying A and B (rA,B) but also the reliability of the measures of A and B (reliabilityA and reliabilityB).  Vul et al estimated that the reliability of fMRI measures computed at the voxel level is no greater than .7, and that current measures of personality have a reliability of between .7 and .8.  With these reliability estimates placing an upper bound on the possible correlations, values above .8 do seem “impossibly high”, as Vul et al suggest.

  2. The second problem identified was the methodology used by 53% of the studies analysed.  When analysing the fMRI data, these researchers selected the voxels whose response to the particular behaviour exceeded a certain threshold, and then correlated only the data from those voxels.  With only the statistically significant voxels being selected, the analysis was guaranteed to produce inflated, “impossibly high” correlations.
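This circularity can be demonstrated with a small simulation. (My own sketch, not Vul et al's code: every number below is pure random noise, so the true brain-behaviour correlation is zero, yet selecting voxels by their sample correlation and then re-correlating them produces a large value.)

```python
import math
import random

random.seed(0)

def pearson(x, y):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

n_subjects, n_voxels = 20, 2000
behaviour = [random.gauss(0, 1) for _ in range(n_subjects)]  # noise "personality scores"
voxels = [[random.gauss(0, 1) for _ in range(n_subjects)]    # noise "voxel activations"
          for _ in range(n_voxels)]

# The circular step: keep only voxels already correlated with behaviour
# above a threshold in this same sample.
selected = [v for v in voxels if pearson(v, behaviour) > 0.4]

# Average the hand-picked voxels per subject, then correlate with behaviour.
signal = [sum(v[i] for v in selected) / len(selected) for i in range(n_subjects)]
print(f"{len(selected)} voxels selected, r = {pearson(signal, behaviour):.2f}")
```

Because the voxels were chosen for their chance agreement with the behaviour scores, the final correlation comes out high even though the data contain no real signal, which is exactly the non-independence Vul et al complain about.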


The histogram shows the correlation values reported in the studies Vul et al analysed.  The red squares indicate studies that used the threshold-based voxel-selection methodology, the orange squares studies whose authors declined to describe their methodology, and the green squares studies that used a different methodology which does not hand-pick the highest-scoring voxels.




There have been many comments in reply to Vul et al’s paper:

  1. Lieberman (2009) responded to the methodological criticism by saying that the researchers were confused by the questions Vul et al posed to them about their methodology, and that the described procedure is not how they actually retrieved their data.
  2. Lieberman (2009) also commented that although Vul et al used the correct formula to determine whether these results were “impossibly high”, they did not make appropriate estimates of the reliability of the measures, and therefore their conclusions are false.
  3. Fiedler (2011), however, defended Vul et al, stating that voodoo correlations are everywhere, not just in neuroscience research: researchers are always manipulating elements of their research to achieve the most visible results.

There has been outrage over this publication, primarily due to the sinister connotations of its original title, “Voodoo Correlations”, which insinuates that the studies involved are fraudulent and unscientific.  Secondly, there are the implications the paper has for the field: if the conclusions drawn from these studies are incorrect, where do we go with further studies in the area?  The methodology will have to be completely overhauled, and any progress that has seemingly been made will have to be discounted.

Having read these papers, I have to side with Vul et al and the conclusions they made.  It is important as psychologists that any results we find are as unbiased as they possibly can be, and if Vul et al’s account of the methodology is correct, that isn’t the case here.  There needs to be a way to understand the mind-brain connection without manipulating the data collected.

What do you think: is Vul et al’s paper just a personal attack on neuroscience, an attempt to undermine its credibility, or have they identified a true issue?

 [1] Observed A = fMRI studies, Observed B = personality studies

Posted in Uncategorized | Leave a comment

It isn’t always the Media to blame

We are all now aware of the issues with the media misrepresenting scientific research.  This was shown in an article in The Telegraph titled “Nuns prove God is not a figment of the mind”.  The original report does nothing of the sort: it is not trying to use brain imaging to prove the existence of God, nor is it trying to disprove the notion that a God may exist.  It is simply looking at which parts of the human brain are activated during a mystical state.  The research was conducted in light of previous observations that temporal lobe epilepsy can cause apparent religious hallucinations.  That said, the article itself is truer to the original research report than the title that heads it; it is an accurate account of the research for the average layman.


However, my issue is not necessarily with the newspaper report (with the exception of the obviously misleading title!) but with the original research itself.


The research does not show anything about mystical experiences.  It shows that different parts of the human brain are activated when a certain group of people experience emotions.  Yes, they call these mystical experiences because the participants were a group of nuns, but I could feel the same while eating a bar of chocolate, with the same fMRI readings, and I am not going to call that a mystical experience!  Their definition of a mystical experience is as follows:


“Mystical experience is characterized by a sense of union with God. It can also include a number of other elements, such as the sense of having touched the ultimate ground of reality, the experience of timelessness and spacelessness, the sense of union with humankind and the universe, as well as feelings of positive affect, peace, joy and unconditional love”

These feelings can be felt while completing various activities that are not mystical experiences.  It seems somewhat presumptuous of the researchers to call them mystical experiences with no scientific evidence that mystical experiences exist, beyond what is simply a range of emotions, and emotions will of course activate different areas of the brain.

Finally, my last issue with the original research is the acknowledgements section, in which the researchers thank The John Templeton Foundation: an organization that funds research into ‘The Big Questions’ of spirituality and science.  Somewhat biased funding, I feel!

Having looked at the original research report, although I feel that The Telegraph sensationalized the research for media impact, I suspect this was the original researchers’ intention.  The report is written as though certain conclusions are meant to be drawn, even though the authors are unable to state them explicitly.

Posted in Uncategorized | Leave a comment

Unethical or Entertainment?

This blog is inspired by my growing addiction to watching I’m a Celebrity – Get Me Out of Here!  As we all know, ethical considerations are an essential part of our psychology degree, and of being a psychologist.  While watching yet another episode of I’m a Celebrity, I sat there thinking to myself: if this were an experiment, there is no way it would be allowed, so why can the TV industry get away with it?


The BPS sets out a code of conduct and ethics that must be upheld by all psychologists.  The most relevant section here is Section 3.3, Standard of protection of research participants.  Watching any episode of I’m a Celebrity, it is evident that if TV producers had to follow a similar set of guidelines to those the BPS produces, the show would not be allowed.  For example:


Section 3.3(iv) states,

Refrain from using financial compensation or other inducements for research participants to risk harm beyond that which they face in their normal lifestyles.

The participants on I’m a Celebrity are given a huge financial incentive to take part in the game show, and on numerous occasions are at risk of harm, both physical and mental.  This also bears on Section 3.3(vi), regarding the right to withdraw at any time without repercussions.  The celebrities are given the option to leave at any time; however, that decision is heavily influenced by the fact that a proportion of their fee will be deducted if they do.


I see two solutions to the discrepancy between the restrictions placed on psychological research and the free rein of the TV industry:


  1. An external body should govern the TV industry, working in a similar way to how the BPS oversees psychologists, to ensure all TV broadcasts are ethical.
  2. Psychologists should film and broadcast every research project they conduct so that it is classed as ‘entertainment’.


What do you all think?

Posted in Uncategorized | 8 Comments