Editorial: Self & Identity

Self & Identity, the flagship journal of the International Society for Self and Identity, has published
innovative research on an abundance of topics related to the self for over a decade. The journal has
been in the extremely capable hands of Mark Leary, Carolyn Morf, Mark Alicke, and Rick Hoyle.
I am honored and humbled to be taking over as editor along with associate editors Ken
DeMarree, Cami Johnson, Mark Seery, Ericka Slotter, and Virgil Zeigler-Hill.
My main goal as editor is to continue along the impressive trajectory established by past editorial teams.
Were that not daunting enough, I also strongly believe that specialized journals like Self & Identity have
an essential role to play in our field now more than ever. It is my opinion that some of the issues
the field has faced in the last few years stem, in part, from unhealthy levels of competition to publish in
just three or four “top” journals where rejection rates can hover around 85-90%. The intense pressure
to publish in these journals, and these journals alone, has led, I believe, to a creeping and insidious rise
in standards for data, such that manuscripts have had to present only “perfect” data that fit neatly within
well-established theoretical frameworks. I believe that it has led to, among other things, a decreased
acceptance of innovation and creativity.
It is here that Self & Identity (and other specialized journals) can play a special role. Every good paper
should both provide novel results and build on previous research. At issue is where the balance should
be. Some papers make small, incremental advances over existing theory, whereas others take work in a
new direction and thus have less existing research to reference. Existing data suggest that, when
seriously considering papers for publication, reviewers tend to favor work that makes smaller advances
and stays closer to what has already been established over work that is more novel (Crandall
& Schaller, 2002). My hope is that we will be able to alter that bias at Self & Identity. Of course, it is of
the utmost importance that work be theoretically grounded. However, it is currently very difficult to
publish work that strays from the status quo and moves research in a new direction. This is unfortunate
because it leads to a stifling of creative and novel thought. When we study such an amazing, fun area,
why should we stifle creativity? My hope is that Self & Identity can be a home to both incremental steps
and broader and riskier research with the potential of saying something new and exciting.
Indeed, Self & Identity is a journal with a history of encouraging innovation and risk taking. In her
impressive tenure as the journal’s second Editor, Carolyn Morf pushed the journal to “become the place
for new ideas and new directions.” Her emphasis was on innovation and the encouragement of risk and
new ideas without pretending that any one paper could provide all of the answers. Her own words
describe it best: “This does not mean that Self and Identity will accept work that is ‘half-baked’ or
clearly flawed. It does mean that a serious attempt will be made to weigh genuine promise and potential
excitement against the size of faults, and to try to rule with a tendency toward creativity, if
imperfections are small and/or can clearly be pursued in subsequent research.”
I also believe that the focus on only three or four scholarly outlets has contributed to the practice called
“p-hacking.” As previously mentioned, intense competition for limited slots in just a few journals has
allowed the journals and their reviewers to demand “perfect” data before publishing. This is troubling
because all of our studies are done on humans -- and humans sometimes do weird things. Anyone who
collects experimental data knows that occasionally a condition that should affect participants one way
leads to a different behavior instead. Furthermore, it is not unusual for one cell in a multi-cell experiment
to look slightly different from what is expected, or for one contrast to fall short of significance. This is the nature
of studying complex organisms with multiple, and sometimes unpredictable, motivations. Thus, at Self
& Identity we understand that not every analysis in a multiple-study paper is going to reach significance
at the .05 level, not every contrast will be significant, and sometimes one cell in a multiple-cell design will
look odd without any reasonable, available explanation. A recent paper suggests that the majority of
p-hacking occurs when researchers are trying to push a marginally significant p-value below the
significance threshold (Head et al., 2015). Perhaps, if we make it ok to sometimes have a p of .07 in a
multi-study paper, we can reduce the pressure to p-hack. This is a luxury that Self & Identity, as a
journal that has no short-paper format and thus focuses on multiple-study papers, can afford.
We would also like to encourage papers that tell a story while avoiding HARKing (hypothesizing
after results are known). Although all papers should tell a coherent story, it is ok to present hypotheses
as tentative in the introduction, if they were, in fact, tentative. It is also ok to have a footnote
mentioning that the initial findings were surprising and then the rest of the studies were done to
confirm the pattern. Many of the most exciting findings in psychology were not initially predicted. As
long as results are consistent and replicated across multiple studies, it is not essential for the hypotheses
to have been present before Study 1 was run. As long as the authors are honest about that in the text,
or even in a footnote, it will not disqualify a multiple study paper from publication.
As a field, we have learned a lot in the last few years about power and its importance. We will keep
power in mind as one very important factor when evaluating a paper. For example, individual studies
should be adequately powered to find effects. In addition, good papers should make mention of
statistical power. However, I am not sold on the idea of solid and inflexible rules when it comes to
sample size. Although the days of publishing studies with tiny sample sizes are over (as they should be),
power estimates are -- like any other statistical estimates -- estimates, and they should not be applied as
if they contain some kind of magical formula for evaluating research. No one criterion will determine a
good paper from a bad one -- there are multiple criteria that we will balance (novelty/innovation,
statistical significance, statistical power, etc.). So, rather than using hard and fast rules, we will attempt
to find a balance that increases the likelihood of both interesting and replicable research.
Finally, the goal of our editorial team is to work with researchers to create the strongest journal
possible. In service of that goal we aim to balance doing careful work with not wasting authors’ time.
Thus, we plan to use the desk reject (i.e., rejection without peer review) in any case in which we are
fairly certain that the paper will not be a good fit for the journal. Manuscripts will only be sent out for
review if we truly believe that they have a chance (with appropriate reviews) of being published. We
also plan to be generous with the offer to revise and resubmit (R&R). The tendency in the field is to only
give R&R on papers that are already close to being ready. However, I learned from my time as an
associate editor that most authors are highly responsive to feedback. There were times I read
reviewers’ feedback and thought, “I should reject this paper. There is no chance the authors are going
to be willing to collect all that new data and/or reanalyze all their data and/or reframe their findings.”
However, I often found that when I took the chance, authors were willing to do the hard work necessary
to produce research worthy of publication in a quality academic/scholarly journal. Indeed, some of my
favorite papers as an editor are those that went through multiple revisions, had the most negative initial
reviews, and took the most time. I hope we can move away from the traditional model and offer R&R
on papers that are potentially exciting and wonderful, even if they still have a lot of room for
improvement. Of course, this is a risky course for authors. The more an editor asks for, the more
chance there is that the paper ultimately won’t be accepted. However, this also provides a real
opportunity for both the author and the editor to work as a team with the reviewers to make the best
paper possible and make a meaningful contribution to the field.
In closing, I suspect there will be people who balk at some of the approaches I have described. I suspect
some of them may balk loudly. One possible criticism of my approach is that we may be encouraging
Type I error. I don’t think that is the case. I think what we are encouraging is honesty and
open-mindedness and a focus on innovation. Is it possible that we will end up publishing some false
positives? Sure. No matter what the editorial policy, every journal faces that risk. That risk has to be
weighed against the risk of missing innovative, good, and important science due to fear and
closed-mindedness. I don’t want us to do that either. In other words, Type II error matters as well, and has real
costs. I am sure at some point we will, through error or misjudgment, miss out on a paper that could
have been great. It is also possible that we will publish some papers that won’t replicate. That is how
science works -- it is imperfect and people are sometimes wrong, but we do our best. And we do our
best because we love our science. We do it because we think understanding the self and how it
interacts with, and is affected by, the social world is of utmost importance.
I hope you all will strongly consider Self & Identity as an outlet for your work.
Shira Gabriel