Response to reviewers
Dear Editor,
We wish to thank the reviewers for their thoughtful suggestions to improve our manuscript.
After careful consideration, we have substantially revised the manuscript. We have adopted many of the reviewers’ suggestions, with the exception of modifying our approach to statistical modelling. Below we detail the revisions made to the manuscript and provide justification for our statistical modelling techniques.
1. Both reviewers questioned whether we should consider analysing the data in a different way. In our manuscript, we used scale-level confirmatory factor analysis (CFA) rather than item-level analysis; that is, we fit the five MASQ scale totals, rather than the 77 individual items, to the postulated latent constructs. We have justified this decision and included new sections in the manuscript to support it. After consultation with experienced statisticians (Professor Andrew Mackinnon and Dr Sue Cotton) at our organisation, we concluded that scale-level rather than item-level analysis would be the more appropriate and valid technique for a number of reasons. On page 5, we note:
This approach [scale-level analysis] has several advantages compared to item-level data analysis in terms of higher reliability, higher communality, a greater ratio of common-to-unique factor variance, and less chance of distributional violations [33]. Further, such an approach has the benefit that fewer parameters are required to define a construct, which is a particular advantage for smaller sample sizes [33].
And, on page 9, we note:
There were two potential options when analysing the latent structure of the MASQ: at item level or at scale level. It was determined that the present analyses would be most valid if the MASQ scales (i.e. AD, AA, GD:A, GD:D, GD:M) were entered rather than the 77 individual items. By entering scales rather than items, we maximised our ratio of participants to variables, thereby increasing the validity and interpretability of our subsequent results. Both previously published clinical studies that have used CFA to explore the Tripartite model used similar methodology [29, 31].
Boschen and Oei [31] also used item-level analyses but reported a substantially better fit when the scale-level analyses were conducted. The psychometric benefits of CFA on scale-level data have already been discussed [33]. Further, the focus of the current study is not on how individual MASQ items map onto the five scales, but rather how the scales fit the Tripartite theory.
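The participants-to-parameters argument above can be made concrete with a quick count of the free parameters each specification requires. The sketch below assumes a conventional simple-structure CFA (each indicator loads on exactly one factor, with one loading per factor fixed to 1 for identification); the counts are illustrative, not the exact parameterisation of our final models.

```python
def cfa_free_parameters(n_indicators: int, n_factors: int) -> int:
    """Free parameters in a simple-structure CFA with marker-variable
    identification: (p - m) free loadings, p residual variances,
    m factor variances, and m(m-1)/2 factor covariances."""
    p, m = n_indicators, n_factors
    return (p - m) + p + m + m * (m - 1) // 2

# Scale-level model: 5 MASQ scale totals on 3 tripartite factors.
scale_level = cfa_free_parameters(5, 3)    # 13 free parameters
# Item-level model: 77 MASQ items on the same 3 factors.
item_level = cfa_free_parameters(77, 3)    # 157 free parameters
print(scale_level, item_level)
```

For the same number of participants, the scale-level specification therefore estimates roughly one-twelfth as many parameters, which is the sense in which entering scales maximises the ratio of participants to estimated quantities.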
29. Burns DD, Eidelson RJ: Why are depression and anxiety correlated? A test of the tripartite model. Journal of Consulting and Clinical Psychology 1998, 66(3):461-473.
31. Boschen MJ, Oei TPS: Factor structure of the Mood and Anxiety Symptom Questionnaire does not generalize to an anxious/depressed sample. Australian and New Zealand Journal of Psychiatry 2006, 40:1016-1024.
33. Little TD, Cunningham WA, Shahar G, Widaman KF: To parcel or not to parcel: exploring the question, weighing the merits. Structural Equation Modelling 2002, 9(2):151-173.
To the best of our knowledge, only three studies have employed CFA in the assessment of the Tripartite model. The two studies that used clinical samples (which are comparable to the sample in our manuscript) used scale-level analysis, making our results more directly comparable. Although one of these studies also used item-level analysis, it reported a poorer fit of items to constructs than of scales to constructs. We briefly replicated this in the present manuscript by testing the original structure of the Tripartite model using item-level analysis. When the individual items were fit to their hypothesised constructs, a significantly less parsimonious model was found compared with any of the models tested at scale level. We therefore believe that this finding justifies our choice of methodology. We have revised our introduction to more thoroughly review the literature surrounding the assessment of the Tripartite model through CFA and have revised our aims accordingly. Our discussion was also modified to better link our findings to those obtained in previous studies.
In addition to inserting this justification for our choice of methodology, we have also substantially revised our results. Previously, we presented nine competing models. To refine and simplify our manuscript, we have retained only five models in the present manuscript (four at scale level and one at item level). We believe that these models are more theory-driven and in line with our stated aims. We have expanded the data analysis section to more explicitly compare these models with those investigated in previous studies (see page 10).
2. In addition to this issue, Reviewer 1 also questioned the accuracy of a statement that we had made in the previous version of this manuscript. We previously stated that the MASQ was the only self-report instrument that we were aware of that was specifically designed to measure all components of the Tripartite model. This line has been removed from the present manuscript. In accordance with the Reviewer’s suggestion, we have improved our introduction and discussion to more directly tie our study and its results with those obtained in previous studies. Specifically, we have placed a greater emphasis upon those studies that have previously used CFA to assess the Tripartite model in clinical samples (Boschen and
Oei, 2006; Burns and Eidelson, 1998). We feel that by relating our manuscript more closely to these studies, we have enhanced the interpretability and generalisability of our findings.
Congruent with Burns and Eidelson, we found that a 2-factor rather than a 3-factor structure best fit our data; like Boschen and Oei, we found no support for a 3-factor structure.
Though we did not follow Reviewer 1’s advice and use item-level analysis, we hope that we have sufficiently explained our rationale (see above) for our decision to use scale-level analysis. We hope that our improved introduction/discussion makes our choice of analysis more acceptable to the reviewer.
3. Reviewer 2 questioned whether our findings were due to our statistical methodology (using scale-level analysis vs. item-level analysis) rather than the uniqueness of our sample. We found that a 2-factor, rather than 3-factor, model best represented the data in our sample of
young help-seekers, a finding that is divergent from much of the literature surrounding the
Tripartite model of affect. However, much of this research has been conducted in non-clinical samples or used exploratory factor analysis. We are aware of only three studies that have used the more sophisticated confirmatory factor analysis. Two of these studies used clinical samples and failed to support the postulated 3-factor structure of affect. Our results are supportive of these previous findings. Reviewer 2 also questioned whether a method effect
(correlated errors) may be responsible for our findings. The items from the depression-specific scale (AD) fall into two distinct clusters of symptoms: loss of interest (negatively valenced) and high positive affect (positively valenced). Previous studies have found that the high positive affect items load distinctly from the loss of interest items and have postulated that the latent depression factor may in fact be best represented by these items exclusively. Although we did not specifically test this possibility (see above for our rationale for using scale-level analysis), we did test the possibility that these oppositely valenced items do not belong together. We report excellent internal consistency for the AD scale (alpha = .93), indicating that all items load meaningfully onto the same scale. In addition, we tested a number of alternative models (not presented in the manuscript) in which we split the AD scale and fit ‘loss of interest’ and ‘high positive affect’ to a combination of constructs. There was a significant decrease in model parsimony when these items were split into the potential subscales, indicating that the solution fit the data most appropriately when the items were included together. These findings lead us to conclude that a method effect does not underpin our results. We therefore believe that our choice of methodology (using scale-level analysis) is valid and that our findings reflect the instability of the Tripartite model across different samples.
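For completeness, the internal-consistency statistic cited above (Cronbach’s alpha) follows directly from the standard formula applied to a respondents-by-items score matrix. The sketch below uses the textbook formula; the toy data are invented for illustration and are not the MASQ responses.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: two perfectly correlated items yield alpha = 1.0.
print(cronbach_alpha(np.array([[1, 1], [2, 2], [3, 3]])))  # → 1.0
```

A high alpha, such as the .93 we observed for the AD scale, indicates that the oppositely valenced item clusters nonetheless covary strongly enough to be treated as a single scale.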
We thank both the reviewers for their comments and hope that they find our revisions acceptable. We look forward to hearing from you soon.
Kind regards, Joe Buckby.