PERSPECTIVE AND MEMORY

The Effects of Visual and Verbal Perspectives in Action and Perspective Memory

Ahmet Efe Ekici, Aylin Gündoğdu, Fadime Esen, Hüma Gedemenli and Melisa İşbilen

Boğaziçi University

Abstract

Previous studies have investigated perspective and its relation to other concepts such as autobiographical memory and sense of identification. While the frame of reference from which we observe our surroundings constitutes the first-person perspective, the third-person perspective refers to the position from which individuals see themselves through the eyes of an outsider. The current study was conducted to examine the effects of first-person and third-person perspectives on memory by using both visual and verbal stimuli. Fifty-four undergraduate students were presented with 16 video-sentence pairings that were either consistent or inconsistent in terms of the perspectives of the videos and sentences. Videos included simple actions, whereas sentences were formed by first-person or third-person conjugation of these actions' descriptions. After the video-sentence pairings, participants were given a free recall task in which they wrote down the actions they had encountered in the videos and sentences. Participants were also asked to indicate the perspectives from which the videos were presented in a perspective recall task. The results showed no significant difference between consistent and inconsistent pairings in participants' action recall and perspective recall performances. Interestingly, participants remembered significantly more actions when actions were given from a third-person perspective. These results demonstrate that stimuli presented from a third-person perspective are better recalled than stimuli presented from a first-person perspective.
Keywords: first-person, third-person, perspective, memory, action identification

The Effects of Visual and Verbal Perspectives in Action and Perspective Memory

The perspective from which a stimulus is presented or remembered has been linked to many cognitive domains in the literature. Even though we have a natural first-person perspective on our actions as we go about our daily lives, we are not necessarily presented with stimuli in the first-person perspective in all domains. When we observe an action that we do not participate in, we take an observer's viewpoint. This variation in perspective also exists in the verbal domain. The perspective in which a sentence is constructed changes from context to context and with individual preference. A sentence that we come across or produce in daily life can be formulated either in first-person or in third-person conjugation. Moreover, we are exposed not only to visual or verbal content that is a direct product of our own perception and experience, but also to manipulated stimuli that are presented in the first-person or third-person perspective. The perspective in which a visual stimulus is presented, first-person or third-person, can be manipulated in virtual stimuli such as games, movies, photographs, and illustrations (Lindgren, 2012; Marcotti & St. Jacques, 2021). In other words, both in real-life experiences and in virtual stimuli, we are surrounded by scenes on which we have a third-person or first-person perspective. This variation in perspective and its implications have been widely studied in the literature. Researchers have mostly focused on the perspective from which an autobiographical event is remembered by individuals.
In the autobiographical memory literature, third-person memories are characterized by a novel visual perspective on the recalled incident, in which the individuals who recall the event see themselves in the scene, analogous to a camera framing the event from an outsider's perspective (Sutton, 2014). Studies show that the perspective from which an incident is recalled depends on the goals and emotions that are brought up by that specific memory (Sutin & Robins, 2008). As Nigro and Neisser (1983) suggest, when individuals are asked to focus on their emotions in a specific event, they tend to recall it from a first-person view, whereas they take a third-person view when asked to focus on its objective and physical elements. In line with this, avoidant individuals with trauma tend to switch to the third-person perspective when recalling traumatic events, as the first-person perspective intensifies the emotional load (Kenny & Bryant, 2007). On the other hand, the implications of the perspective in which a stimulus is presented under laboratory conditions have also been widely studied. One commonly used technique for creating variation in visual perspective is the manipulation of the camera angle: images of an agent's actions from both first-person and third-person visual perspectives are used as stimuli. Studies using this technique have found that variation in perspective is related to reaction time in imitation (Vogt et al., 2003), whether the causality of an event is attributed to situational or dispositional factors (Storms, 1973), how rich the resulting narrative is (Brunye et al., 2009), and whether the presented action is interpreted in an abstract or concrete manner (Libby & Eibach, 2011). In a remarkable study conducted by Kockler et al.
(2010), it was found that when individuals viewed a novel action from the first-person perspective of an agent who was competent in performing that specific action, they became less hesitant and more self-confident about performing the same action. In parallel with these findings, Lindgren (2012) found that participants in the first-person perspective condition were more involved in the event than participants in the third-person perspective condition. Participants in the first-person perspective condition also felt more competent and sought less help than those in the third-person perspective condition. Lindgren argued that these findings indicate that individuals identified better with the domain when the stimuli were presented in the first-person perspective. Other studies have focused on the identification dimension of different perspectives. For instance, in a study by Libby et al. (2009), researchers asked whether individuals identify themselves with actions more when the actions are presented in the first-person perspective. They measured the level of action identification in both conditions and found that the first-person perspective results in a higher level of identification with the actions. Following these findings, one might wonder how the presented perspective of the stimuli affects recall performance, given the self-relevance effect (Bower & Gilligan, 1979), which suggests that when an item is relevant to one's own experience, it is more likely to be remembered.
In addition, given the rich literature on perspective shift in autobiographical memory (McIsaac & Eich, 2004), examining the effects of perspective, and its implications for action identification, on recalling the perspective of a previously viewed stimulus might offer fruitful insight. Not only the visual perspective but also the verbal perspective created by syntactic and semantic cues affects how a specific memory is encoded. The literature on the implications of the perspective of verbal and visual stimuli is rich to date; since we are exposed to both verbal and visual stimuli in everyday life, whether their mode of integration in terms of perspective has a meaningful effect on our cognition is an interesting question. Aziz-Zadeh et al. (2006) examined whether linguistic phrases invoke the same sensory-motor representations that are generated, through the activation of the relevant mirror neurons, by visual observation. They suggested that when people read a sentence, a visual representation of the action in that sentence is produced in the relevant sensory-motor areas. If so, language produces the same representation of an action that is generated when a different person is observed performing that action. In light of these findings, which show an important connection between verbal and visual stimuli, our study aims to examine how the perspectives of stimuli in these two domains interact. We designed a study in which we look at how the visual perspective and the verbal perspective, together with the consistency between them, affect individuals' recall performance. Hence, we first hypothesized that the actions shown in the stimuli are better recalled when the separately shown verbal and visual stimuli for the same action are consistent in terms of their perspective.
Secondly, we hypothesized that when an action is presented in the first-person perspective in both the visual and the verbal stimuli, it will be better recalled than when it is consistently presented in the third-person perspective. Finally, our third hypothesis was that the perspective of the visual stimulus will be better recalled when the verbal stimulus for the same action is presented in the same perspective.

Method

Participants

Fifty-eight undergraduate students from Boğaziçi University participated in the experiment via the Boğaziçi University Research Participation System in exchange for course credit. Four participants did not complete the experiment, and one further participant was excluded due to missing data. Therefore, all analyses were carried out with 53 participants: 34 female and 17 male participants, together with one genderfluid and one agender participant. The average age was 21.2 (SD = .36), with ages ranging from 18 to 36. Participants received 0.5 course credit for their participation.

Measures

The entire experiment was created in Qualtrics Survey. Participants gave demographic information such as age, gender, and department in an open-ended format. There were a total of 16 video and sentence pairings. The videos were taken from the Chieti Affective Action Videos Database by Di Crosta et al. (2020), in whose study participants also rated the videos in terms of arousal and valence. According to Di Crosta et al.'s (2020) results, for the 16 videos used in our experiment, the average arousal rating was 2.17 (SD = .46) and the average valence rating was 5.07 (SD = .42). We also conducted a pilot study with a Turkish sample of twelve undergraduate students in which participants rated the degree of valence and arousal of the 16 videos included in our study.
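Comparing the two samples' ratings video by video amounts to a paired (repeated-measures) t-test over the 16 clips. The following is a minimal SciPy sketch of that comparison; the ratings are placeholder values generated for illustration, not the study's actual data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder mean arousal ratings per video (16 clips), one value
# from each sample; these numbers are illustrative, not the study's.
original_sample = rng.uniform(1.5, 3.0, size=16)
pilot_sample = original_sample + rng.normal(1.9, 0.5, size=16)

# Pairing across the 16 videos gives df = 16 - 1 = 15.
t, p = stats.ttest_rel(pilot_sample, original_sample)
d = t / np.sqrt(16)  # Cohen's d for a paired design: t / sqrt(n)
```

Note that the unit of pairing here is the video, not the participant, which is why the degrees of freedom follow the number of clips rather than the sample sizes.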
The aim of this pilot study was to check whether these videos evoke different arousal and valence levels in our target population. In the pilot, the average arousal rating was 4.07 (SD = .65) and the average valence rating was 5.00 (SD = .79). A repeated-measures t-test on the pilot results showed that participants' emotional valence ratings of the videos did not differ from those of Di Crosta et al.'s sample, t(15) = -.32, p > .05, Cohen's d = -.08. However, a second repeated-measures t-test showed that the emotional arousal ratings in our pilot study did differ from those of Di Crosta et al.'s sample, t(15) = 10.29, p < .001, Cohen's d = 2.57. The details of these results are discussed in a later section. Half of the videos were filmed from the first-person perspective and the other half from the third-person perspective. Figure 1 shows examples of videos from the first-person and third-person perspectives. Videos included simple actions such as putting on a jacket and getting up from a chair; the list of actions can be found in Appendix A. The numbers of male and female actors were equal across the first-person and third-person perspective videos. All videos lasted 15 seconds and were shown at 556 × 311 pixels. Sentences were formed from Di Crosta et al.'s (2020) action descriptions, which were translated into Turkish and conjugated in the present tense, either in the first-person or the third-person singular. For the distraction task, participants were presented with an n-back task that involved 69 numbers in total. There were action recall and perspective recall tasks: the former was in free recall format, while the latter was designed in a table format. Participants' learning style was measured using Akgün et al.'s (2014) Turkish adaptation of the Style of Processing Scale by Childers et al. (1985).
This scale included items such as "I generally prefer to use a diagram rather than a written set of instructions." (Turkish: "Genellikle yazılı bir yönerge yerine görsel bir diyagramı kullanmayı tercih ederim.") and "My thinking often consists of mental 'pictures' or images." (Turkish: "Düşünürken çoğunlukla aklıma resimler veya görüntüler gelir.") There were 16 items, and participants were asked to choose the most appropriate of four options ranging from "always true" to "always false". In order to measure participants' perspective-taking abilities, Engeler's (2005) Turkish adaptation of the Interpersonal Reactivity Index (Davis, 1980) was used. The scale consisted of 28 items and four subscales: Perspective Taking, Fantasy, Empathic Concern, and Personal Distress. The Perspective Taking factor included items such as "I try to look at everybody's side of a disagreement before I make a decision." (Turkish: "Bir karara varmadan önce diğerlerinin anlaşamadığı yönlerden olaya bakmaya çalışırım.") Participants were asked to choose from four options ranging from "does not describe me well" to "describes me very well".

Procedure

The sample was recruited through the website of the Boğaziçi University Research Participation System. Students taking the courses PSY 101 (Introduction to Psychology), PSY 111 (Fundamentals of Psychology), or PSY 241 (Social Psychology) enrolled in the experiment and participated online via their computers, tablets, or cellphones. In order to continue with the experiment, participants were required to answer each question. Participants filled out the consent form before the experiment. After giving their consent, participants provided demographic information including age, gender, department, and e-mail address. The survey included only visual materials, consisting of written sentences and videos.
The order of videos and sentences was randomized within individuals, meaning that half of the pairings started with a sentence while the other half started with the video of the same action. Each sentence stayed on the screen for 5 seconds, and the duration of each video was 15 seconds. After each sentence-video pairing, a blank screen was shown for two seconds. All videos were set to autoplay; videos, sentences, and blank screens advanced by themselves without any input from the participants. The sixteen video-sentence pairings were followed by the distraction task, a 2-back task in which participants were asked to click on the screen if the presented number was the same as the number shown two items earlier. Each number stayed on the screen for 1500 ms, with a 45 ms blank screen between numbers. The distraction task was followed by the action recall task, in which participants were given two minutes to write down, in the infinitive form, all the actions they could remember. After the action recall task, the experiment continued with the perspective recall task. In this task, participants were presented with all the action descriptions and asked to select whether each action had been shown in the first-person or the third-person perspective in the videos. There was no time limit for this task. After participants finished the perspective recall task, they answered the questions of the Turkish version of the Style of Processing Scale (Childers et al., 1985), which was used to determine whether participants were verbal or visual learners. The Style of Processing Scale was followed by the Interpersonal Reactivity Index (Davis, 1980). Neither scale had a time limit.
Before moving on to the final part of the experiment, participants answered a question asking whether they had had any internet connection problems during the experiment. Finally, participants indicated their student number and the course for which they would like to receive credit.

Results

Age was not normally distributed among participants, as indicated by a Shapiro-Wilk test, p < .05. In addition, a chi-square test showed that the gender distribution of participants was significantly unbalanced, p < .001. In order to examine whether the gender of the video actors had an effect on the number of actions participants correctly recalled, we conducted a paired-samples t-test. The results revealed no difference in the number of correct action recalls between videos with male actors (M = 3.85, SD = 1.52) and videos with female actors (M = 3.64, SD = 1.56), t(52) = -.78, p > .05. Therefore, the gender of the actors in the videos did not affect participants' correct recall of actions. Moreover, to examine the effect of the order of sentences and videos on action recall, we ran another paired-samples t-test and found no significant result: there was no significant difference between trials where the sentence came before the video (M = 3.66, SD = 1.24) and trials where the video was displayed before the sentence (M = 3.83, SD = 1.65), t(52) = .74, p > .05. When analyzing participants' scores on the Style of Processing Scale (Childers et al., 1985), we computed two separate scores for visual and verbal learning styles. According to Shapiro-Wilk test results, participants' visual and verbal learning style scores were normally distributed, all ps > .05. For the Interpersonal Reactivity Index (Davis, 1980), we calculated only participants' scores on the Perspective Taking subscale, which included seven items.
Participants' perspective-taking scores were also normally distributed, as shown by Shapiro-Wilk results, p > .05. For outlier detection, we checked box plots and z-scores computed for participants' performances on the action recall task, the perspective recall task, the Style of Processing Scale (Childers et al., 1985), and the Interpersonal Reactivity Index (Davis, 1980). For all task scores, a performance more than three standard deviations below or above the mean was considered an outlier; no outliers were found. Our first hypothesis was that pairings with consistent verbal and visual perspectives (e.g., a first-person perspective video paired with a first-person singular sentence) would result in more correct action recalls than pairings with inconsistent verbal and visual perspectives (e.g., a first-person perspective video paired with a third-person singular sentence). A one-way repeated-measures ANOVA was conducted to investigate this hypothesis. Before the analysis, we checked the assumptions. Our dependent variables were continuous, and the sphericity assumption was met because our factor consisted of only two levels. The dependent variables were not normally distributed, as shown by Shapiro-Wilk results, both ps < .05. Since we did not detect any outliers and prioritized the representativeness of our data, we took no further action that could compromise the integrity of the data. The results of the one-way repeated-measures ANOVA showed no difference between participants' recall of consistent and inconsistent pairings, F(1, 52) = 2.93, p > .05, η2 = .02 (see Table 1). These results indicate that the consistency or inconsistency of perspectives in the video and sentence pairings did not affect action recall, which was incompatible with our hypothesis.
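The |z| > 3 screening described above can be sketched in a few lines of NumPy. The scores below are made-up values with one deliberately planted extreme case, not the study's data:

```python
import numpy as np

# Hypothetical recall scores for 53 participants, with one planted
# extreme value (illustrative only, not the study's data).
scores = np.concatenate([np.full(50, 4.0), [4.5, 3.5, 20.0]])

# Flag values more than three standard deviations from the mean,
# mirroring the |z| > 3 outlier criterion used above.
z = (scores - scores.mean()) / scores.std(ddof=1)
outliers = scores[np.abs(z) > 3]  # here, only the planted 20.0
```

With the study's actual scores, this screening returned an empty array, matching the report that no outliers were found.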
To test our second hypothesis, we performed another one-way repeated-measures ANOVA comparing the number of correct action recalls in consistent pairings with the first-person perspective (first-person perspective video and first-person singular sentence) to consistent pairings with the third-person perspective (third-person perspective video and third-person singular sentence). Our dependent variables were continuous, and since our repeated-measures factor had only two levels, the sphericity assumption was satisfied. Both dependent variables were not normally distributed, as shown by a Shapiro-Wilk test, p < .05. As in the previous analysis, we took no further action. To see whether participants' perspective-taking scores were related to their number of correct recalls in consistent pairings with the first-person or the third-person perspective, we checked their correlations. According to Pearson's correlation coefficients, the number of correct action recalls in consistent pairings with the first-person perspective was not correlated with participants' perspective-taking scores, r(51) = .24, p > .05, and neither was the number of correct action recalls in consistent pairings with the third-person perspective, r(51) = -.03, p > .05. Therefore, we decided not to include participants' perspective-taking scores as a covariate. Contrary to our hypothesis, the one-way repeated-measures ANOVA showed that participants remembered more actions when the consistent pairings were in the third-person perspective (M = 2.34, SE = .15) than in the first-person perspective (M = 1.64, SE = .16), F(1, 52) = 10.60, p = .002, η2 = .09. Figure 2 shows the results of this analysis.
These results indicate that participants recalled more actions correctly when the videos and sentences were in the third-person perspective than in consistent pairings where the videos and sentences were in the first-person perspective. Finally, a one-way repeated-measures ANOVA was conducted to examine our third hypothesis, which claimed that the number of correct perspective recalls for videos would be higher for consistent pairings than for inconsistent pairings. Our dependent variables were participants' correct perspective recalls for consistent and inconsistent pairings, both of which were continuous. The sphericity assumption was again met because our factor consisted of two levels. Both dependent variables were not normally distributed, as shown by a Shapiro-Wilk test, p < .05. We did not intervene with the data, in order to protect its representativeness. To check whether participants who scored higher as visual learners on the Style of Processing Scale (Childers et al., 1985) also gave more correct answers when recalling video perspectives, we used Pearson's correlation coefficient. According to the results, participants' visual learning style scores were not correlated with their correct perspective recalls for consistent pairings, r(51) = .21, p > .05, nor with their correct perspective recalls for inconsistent pairings, r(51) = .15, p > .05. The results of the one-way repeated-measures ANOVA revealed no significant difference between participants' correct perspective recall for consistent and inconsistent pairings, F(1, 52) = 3.22, p > .05, η2 = .02 (see Table 1). These results suggest that the consistency or inconsistency of perspectives in the video and sentence pairings did not affect participants' perspective recall for the videos, which was inconsistent with our third hypothesis.
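All three hypothesis tests above use a within-subject factor with two levels, in which case a one-way repeated-measures ANOVA is equivalent to a paired-samples t-test: F(1, N-1) = t², and sphericity is trivially satisfied. A minimal sketch of this equivalence, with synthetic per-participant counts rather than the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic per-participant correct-recall counts for two
# within-subject conditions (illustrative, not the study's data).
cond_a = rng.normal(2.3, 1.1, size=53)
cond_b = rng.normal(1.6, 1.1, size=53)

# With two levels, the repeated-measures ANOVA F statistic is the
# square of the paired t statistic, with df = (1, N - 1) = (1, 52).
t, p = stats.ttest_rel(cond_a, cond_b)
F = t ** 2
```

This equivalence is why the sphericity assumption is described as automatically met in each analysis: sphericity concerns the variances of pairwise difference scores, and with two levels there is only one such difference.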
Discussion

The present study was conducted to investigate how first-person and third-person perspectives in visual and verbal stimuli affect individuals' action and perspective recall performance. We found that consistent pairings, in which the same perspective was present in the videos and sentences, did not differ significantly from inconsistent pairings, which had different perspectives in the videos and sentences, in the number of actions participants correctly recalled. This suggests that individuals are not affected by the consistency or inconsistency of the pairings when remembering previously shown actions. Our second hypothesis was also unsupported; the results revealed the opposite pattern. Participants correctly remembered significantly more actions when the actions were presented from the third-person perspective in both videos and sentences than in consistent trials where the first-person perspective was used throughout. Lastly, we found that consistent and inconsistent pairings did not result in significantly different perspective recall scores, which was also incompatible with our third hypothesis. Our results contradict the findings of Lindgren's (2012) study, in which participants were virtually placed in a factory environment and asked to complete tasks leading to the accomplishment of a process called "cold rolling", in either a first-person perspective or a third-person perspective group. While the former group observed their actions from the first-person view, the latter group perceived their actions from an outsider's viewpoint. Afterwards, participants took part in a recall task that required them to write down the tasks they had been asked to do during their time in the virtual factory.
Lindgren's results showed that participants in the first-person perspective group recalled more tasks than participants in the third-person perspective group. Although this result supports our second hypothesis, we did not obtain it in our study. One reason for the mismatch may be the virtual-environment experience of the participants in Lindgren's (2012) study. In our study, participants were exposed to the first-person and third-person perspectives only through watching videos rather than through an interactive simulation. As indicated by Lindgren (2012), participants in a first-person perspective condition may identify with the task more strongly and thus show a larger self-relevance effect than the participants in our experiment. Furthermore, Lindgren (2012) created a rich environment for participants, involving additional items and locations that may have served as retrieval cues during the recall task. Since the videos in our study were specifically designed for experimental procedures, they were controlled for various elements, including extraneous items. Therefore, the lack of an environment that gives participants a feeling of relatedness may have reduced memory performance in our experiment. Another study, by Marcotti and St. Jacques (2021), found that participants' spatial memory for photographed events diminishes when the photographs are presented from the third-person perspective during retrieval. In their study, Marcotti and St. Jacques (2021) took pictures of their participants from either the first-person or the third-person perspective while the participants performed several tasks. After approximately one week, participants were shown their photographs from a first-person or third-person perspective and asked to recall details of the particular task depicted in the image, such as the location of objects or the names of the tools.
The methods of Marcotti and St. Jacques (2021) and Lindgren (2012) are similar in that participants were actively engaged during the experiment, in contrast to our passive, observational design, which may account for the differing results. Our study had a few limitations. Firstly, our experiment was carried out entirely online, so we could not control whether participants really paid attention to the videos and sentences. In addition, participants might have used other means of remembering the actions, such as taking notes. Due to both the online nature of our experiment and the possibility of attending it on devices such as mobile phones or tablets, participants who used mobile phones or tablets might have encountered visual stimuli of a different size and screen location than participants who used computers. Secondly, our sample is not very large and consists solely of college students, so our results may not be generalizable. Finally, the results of our pilot study showed that the Turkish participants differed significantly from Di Crosta et al.'s (2020) sample in their emotional arousal ratings of the 16 chosen videos. The reason may be the different numbers of videos shown in the two studies: while in our study we presented only videos with average arousal and valence ratings in Di Crosta et al.'s (2020) study, participants in their study watched all the videos, including those with high and low arousal ratings. Therefore, participants in our pilot study might have lacked a basis for comparing arousal levels between videos. Nevertheless, this aspect of our study still poses a limitation regarding the possible effects of participants' emotional arousal levels. Our study also has some strengths. First of all, the conceptual background of our study is unique.
Although first-person and third-person perspectives in visual and verbal stimuli have been studied considerably, this subject has rarely been linked to action memory for events. This study contributes to the literature by revealing possible connections between first-person and third-person perspectives and memory. It may therefore open new opportunities for researchers to find out which properties make certain events more memorable than others, which in turn contributes to understanding the mechanisms underlying memory. Moreover, we controlled for several possible confounds, such as participants' visual or verbal learning styles and their perspective-taking abilities. Finally, when choosing the videos from the Chieti Affective Action Videos Database (Di Crosta et al., 2020), we selected videos with average valence and arousal scores, and to guard against cultural differences we also collected arousal and valence ratings for the chosen videos from a small Turkish sample. As a result of this pilot study, we were able to control for the emotional valence of the videos. Future studies on this topic could replicate this experiment in another language. Because Turkish is agglutinative, person is marked on the verb and the subject need not be given explicitly in a sentence, whereas in languages such as English the subject appears as a separate, isolated word. For instance, the Turkish equivalent of the sentence "I closed the bag." is "Çantayı kapattım.", whereas "He closed the bag." is "Çantayı kapattı." Therefore, first-person and third-person perspective differences in a sentence may be seen more clearly in another language.

References

Akgün, Ö. E., Küçük, Ş., Çukurbaşı, B., & Tonbuloğlu, İ. (2014). Sözel veya Görsel Baskın Öğrenme Stilini Belirleme Ölçeği Türkçe formunun geçerlik ve güvenirlik çalışması.
Bartın Üniversitesi Eğitim Fakültesi Dergisi, 3(1), 277-297. http://dx.doi.org/10.14686/BUEFAD.201416218

Aziz-Zadeh, L., Wilson, S. M., Rizzolatti, G., & Iacoboni, M. (2006). Embodied semantics and the premotor cortex: Congruent representations for visually presented actions and linguistic phrases describing actions. Current Biology, 16, 1818-1823.

Beveridge, M. E. L., & Pickering, M. J. (2013). Perspective taking in language: Integrating the spatial and action domains. Frontiers in Human Neuroscience, 7, 577.

Bower, G. H., & Gilligan, S. G. (1979). Remembering information related to one's self. Journal of Research in Personality, 13(4), 420-432.

Childers, T. L., Houston, M. J., & Heckler, S. E. (1985). Measurement of individual differences in visual versus verbal information processing. Journal of Consumer Research, 12(2), 125-134. https://doi.org/10.1086/208501

D'Argembeau, A., & Van der Linden, M. (2008). Remembering pride and shame: Self-enhancement and the phenomenology of autobiographical memory. Memory, 16(5), 538-547.

Davis, M. H. (1980). A multidimensional approach to individual differences in empathy. JSAS Catalog of Selected Documents in Psychology, 10, 85.

Di Crosta, A., La Malva, P., Manna, C., Marin, A., Palumbo, R., Verrocchio, M. C., ... & Di Domenico, A. (2020). The Chieti Affective Action Videos database, a resource for the study of emotions in psychology. Scientific Data, 7(1), 1-6.

Engeler, A. (2005). Psikopati ve antisosyal kişilik bozukluğu [Unpublished doctoral thesis]. İstanbul Üniversitesi, Adli Tıp Enstitüsü.

Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27(1), 169-192.

Kenny, L. M., & Bryant, R. A. (2007). Keeping memories at an arm's length: Vantage point of trauma memories. Behaviour Research and Therapy, 45(8), 1915-1920.

Kockler, H., Scheef, L., Tepest, R., David, N., Bewernick, B. H., Newen, A., ... & Vogeley, K. (2010).
Visuospatial perspective taking in a dynamic environment: Perceiving moving objects from a first-person-perspective induces a disposition to act. Consciousness and Cognition, 19(3), 690-701.

Libby, L. K., & Eibach, R. P. (2011). Visual perspective in mental imagery: A representational tool that functions in judgment, emotion, and self-insight. In Advances in Experimental Social Psychology (Vol. 44, pp. 185-245). Academic Press.

Libby, L. K., Shaeffer, E. M., & Eibach, R. P. (2009). Seeing meaning in action: A bidirectional link between visual perspective and action identification level. Journal of Experimental Psychology: General, 138(4), 503.

Lindgren, R. (2012). Generating a learning stance through perspective-taking in a virtual environment. Computers in Human Behavior, 28(4), 1130-1139.

Marcotti, P., & St. Jacques, P. L. (2021). Third-person perspectives in photographs influence visual and spatial perspectives during subsequent memory retrieval. Journal of Cognitive Psychology. Advance online publication. https://doi.org/10.1080/20445911.2021.1935972

McIsaac, H. K., & Eich, E. (2004). Vantage point in traumatic memory. Psychological Science, 15(4), 248-253.

Nigro, G., & Neisser, U. (1983). Point of view in personal memories. Cognitive Psychology, 15(4), 467-482.

Storms, M. D. (1973). Videotape and the attribution process: Reversing actors' and observers' points of view. Journal of Personality and Social Psychology, 27(2), 165.

Sutin, A. R., & Robins, R. W. (2008). When the "I" looks at the "Me": Autobiographical memory, visual perspective, and the self. Consciousness and Cognition, 17(4), 1386-1397.

Sutton, J. (2014). Memory perspectives. Memory Studies, 7(2), 141-145.

Vogt, S., Taylor, P., & Hopkins, B. (2003). Visuomotor priming by pictures of hand postures: Perspective matters. Neuropsychologia, 41(8), 941-951.

Figure 1

Examples of First-Person and Third-Person Perspective Videos

Note.
The upper image is taken from a video with a first-person perspective and shows the action "Drawing a triangle." The lower image is taken from a video with a third-person perspective and shows the action "Watering a plant."

Figure 2

Action Recall Performances of Participants for First-Person and Third-Person Perspectives

Note. The error bars represent the standard error.

Table 1

Perspective Recall and Action Recall Test Means for Consistent and Inconsistent Pairings

Consistency of Videos and Sentences    Perspective Recall Test, Mean (SD)    Action Recall Test, Mean (SD)
Consistent                             5.72 (1.85)                           3.98 (1.55)
Inconsistent                           5.19 (2.22)                           3.55 (1.49)
Total                                  10.9 (3.48)                           7.49 (2.39)

Note. "Consistent" refers to situations where video and sentence pairs are given from the same perspective; "Inconsistent" refers to situations where video and sentence pairs are given from different perspectives.

Appendix A

Complete list of actions that were presented in videos and sentences:

Closing an umbrella.
Sharpening a pencil.
Closing the bag.
Drawing a triangle.
Wearing a bracelet.
Stapling together sheets of paper.
Wearing a jacket.
Measuring something with the tape measurer.
Highlighting a written passage.
Getting up from a chair.
Making a call on the phone.
Setting the hands of the clock.
Tying shoelaces.
Watering a plant.
Setting the table.