Collecting primary (qualitative?) data

Questionnaires
…techniques of data collection in which each person is asked to respond to the same set of questions in a predetermined order
+ large samples
+ quantitative analysis
- hard to produce a good questionnaire
- only one opportunity to ask the sample you are interested in

Questionnaire design
…affects reliability, validity and response rates
• Careful design of individual questions;
• Clear layout of the questionnaire form;
• Lucid explanation of the purpose of the questionnaire;
• Pilot testing;
• Carefully planned and executed administration

When to use
• …for descriptive research – to identify/describe the variability of different phenomena (e.g., attitudes, opinions, practices)
• …for explanatory/analytical research – to examine/explain relationships between variables; cause-and-effect relationships
Suitable in a multiple-methods research design

Types of questionnaires
• Self-administered questionnaires (internet, email, post, delivered by hand)
• Interviewer-administered questionnaires (telephone; face-to-face or structured interviews)

The choice of questionnaire
…depends on various factors:
• Characteristics of the respondents from whom you wish to collect data
• Importance of reaching a particular person as respondent
• Possible contamination of respondents' answers
  - insufficient knowledge/experience – uninformed responses
  - socially desirable answers
  - discussing answers with others
  - poor recording of answers
• Size of sample required for analysis (likely response rate)
• Types and number of questions you need to ask

Questionnaire design requirements
You should think of…
- representativeness/accuracy of the sample –> generalising; comparing/relating results to earlier research
- whether your measurement tools are compatible
- a good overview of the relevant literature – define the theory you wish to test
- an understanding of the research context (e.g., the organisation)

Designing the questionnaire
• Validity and reliability – do the questions and answers of the questionnaire make sense?
The question must be understood by the respondent in the way intended by the researcher, and the answer given by the respondent must be understood by the researcher in the way intended by the respondent.

Assessing validity
• Internal validity – the ability of your questionnaire to measure what you intend to measure
• Content validity – refers to the extent to which the measurement tool provides adequate coverage of the questions you study.
How to check: 1) definition of the research through the literature; 2) discussion with experts/your supervisor; 3) a panel of individuals to assess whether each measurement question is essential or not
• Criterion-related validity / predictive validity – concerned with the ability of the measurement tool to make accurate predictions.
How to check: compare the data you gathered with data specified in the criterion in some way (e.g., correlation)
• Construct validity – refers to the extent to which your measurement questions actually measure the presence of the constructs you intended them to measure (attitude scales, ability/personality tests).
How to check: how well can you generalise from your measurement questions to your construct?

Testing for reliability
…whether or not your tool will produce consistent findings at different times and under different conditions (different samples, different interviewers, etc.)
1) Test–retest – correlating data with those from the same questionnaire collected under conditions as near equivalent as possible
2) Internal consistency – correlating the responses to each question with those to other questions in the questionnaire (e.g., Cronbach's alpha – we, psychologists, all love it; see the sketch below)
3) Alternative form – comparing responses to alternative forms of the same question / groups of questions – check questions (however, respondents may get mad)
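These reliability checks come down to simple statistics on the collected scores. The following is a minimal sketch, assuming item responses are stored as a respondents × items matrix of numeric scores; the data, the second-wave totals and the helper name cronbach_alpha are hypothetical, made up only to illustrate the idea.

```python
# Illustrative sketch only: hypothetical questionnaire data, numpy only.
import numpy as np

def cronbach_alpha(item_scores):
    """Internal consistency for a respondents x items matrix of scores."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering 4 Likert-type items (coded 1-4).
scores = [[4, 3, 4, 4],
          [2, 2, 3, 2],
          [3, 3, 3, 4],
          [1, 2, 1, 2],
          [4, 4, 3, 4]]
print("Cronbach's alpha:", round(cronbach_alpha(scores), 2))

# Test-retest: correlate total scores from two administrations of the same
# questionnaire (the second-wave totals below are invented for illustration).
wave1 = np.array(scores).sum(axis=1)
wave2 = np.array([14, 10, 12, 7, 15])
print("Test-retest correlation:", round(np.corrcoef(wave1, wave2)[0, 1], 2))
```

The same correlation idea also serves the criterion-related validity check mentioned above: correlate the questionnaire data with the data specified in the criterion.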
Designing individual questions
• Adopt, adapt, or develop
• Open questions (open-ended questions) – allow respondents to give answers in their own way
• Closed questions (forced-choice questions) – provide a number of alternative answers from which the respondent is instructed to choose
  - quicker/easier to answer
  - easier to compare the responses
  - easier to analyse
  - must be clearly interpretable

Types of closed questions
• List – a list of items, any of which may be selected.
Useful: when you need to be sure that the respondent has considered all possible responses.
For example: „What kind of music styles do you listen to?“ indie / rock / folk / indie / jazz / easy listening / indie / world / classical / indie
• Category – only one response can be selected from a given set of categories.
Useful: if you need to collect data about behaviour or attributes; categories should not overlap and should cover all possible responses!
For example: „How often do you go to the cinema?“ 2 or more times a week / once a week / a couple of times a month / never
• Ranking – the respondent is asked to place something in order.
Useful: when you want to discover the relative importance of things to the respondent. Seven/eight items maximum -> respondents rank accurately only when they can remember all the items.
For example: „Rank the factors listed in order of importance to you in your choice of field of study“ other people's advice / personal interest / potential future income / career possibilities / possibility to travel, etc.
• Rating – a rating device used to record responses.
Useful: when you need to collect opinion data; Likert-type rating scales; 4-, 5-, 6- or 7-point rating scales.
For example: „I feel that my opinion is important in decision processes in the organisation“ agree / tend to agree / tend to disagree / disagree
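Closed questions are easier to analyse because every answer maps onto a fixed code. A minimal sketch, using the rating example above with hypothetical responses and illustrative 1–4 codes, shows the kind of coding step that precedes quantitative analysis.

```python
# Illustrative sketch only: coding closed rating-question responses numerically.
RATING_CODES = {"agree": 4, "tend to agree": 3,
                "tend to disagree": 2, "disagree": 1}

# Hypothetical responses to: "I feel that my opinion is important
# in decision processes in the organisation"
responses = ["agree", "tend to agree", "disagree",
             "tend to agree", "agree", "tend to disagree"]

codes = [RATING_CODES[r] for r in responses]          # numeric coding
mean_score = sum(codes) / len(codes)                  # simple summary
distribution = {label: responses.count(label) for label in RATING_CODES}

print("Coded responses:", codes)
print("Mean rating:", round(mean_score, 2))
print("Distribution:", distribution)
```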
Interviews
• …a purposeful discussion between two or more people
• Structured interviews – questionnaires based on a predetermined/standardised/identical set of questions -> interviewer-administered questionnaires (already discussed). The social interaction should not mediate the responses; the questions should be read exactly as written and in the same tone of voice – be as computer-like as possible. Also known as the quantitative research interview
• Semi-structured interviews – non-standardised; a list of themes/questions to be covered, partly varying from interview to interview; additional questions may be added; the order of questions may vary. Also known as the qualitative research interview
• In-depth interviews – informal; to explore in depth a general area of interest; no predetermined list of questions; only a clear idea about the aspect(s) you want to explore. The interviewee is given the opportunity to talk freely. Also known as the non-directive interview

Links to the purpose and research strategy
• Semi-structured and in-depth interviews:
  - „what“, „how“ and „why“ questions
  - exploratory studies – to find out what is happening and to seek new insights
  - explanatory studies – semi-structured interviews -> to understand the relationships between variables
• Useful:
  - to help identify the questions for your questionnaire
  - to explore/explain themes that have emerged from your questionnaire
  - to validate findings from your questionnaire

Situations favouring non-standardised interviews
• The purpose of the research – to understand the reasons for participants' decisions; to understand the reasons for their attitudes/opinions; to see whether interviewees use words/ideas in a particular way; to lead the discussion into areas you had not previously considered; to formulate new questions
• The significance of establishing personal contact – managers are more likely to agree to be interviewed than to complete a questionnaire. A questionnaire received through the post is problematic: providing sensitive information, trusting the way in which the information will be used, understanding the meaning of the questions
• The nature of the questions – a large number of questions; complex or open-ended questions; the order and logic of questioning may need to be varied
• Length of time required and completeness of the process – issues that are complex, unclear, or large in number; unanswered questions in a questionnaire give no hint as to why they were left blank; in a personal interview you will form some indication of why a response could not be provided -> an incentive to modify the question

Data quality issues
• Forms of bias – interviewer bias – the comments, tone or non-verbal behaviour of the interviewer create bias in the way that interviewees respond to the questions being asked; imposing the interviewer's own beliefs/frame of reference; bias in interpreting responses
• Response / interviewee bias – the respondent's sensitivity to the unstructured exploration of certain themes -> not revealing or discussing the topic in question; providing a partial „picture“ of the situation that casts him/herself in a socially desirable role
• Validity and generalisability – getting the right sample: some parts of the sample remove themselves from taking part because of lack of time -> a biased sample; no generalisations can be made about the population from small and unrepresentative samples

Successful interview preparation
• Level of knowledge about the organisational/situational context – your prior knowledge raises your credibility
• Level of information supplied to the interviewee – credibility through the supply of relevant information to participants before the interview; the possibility for them to consider the information being requested and assemble supporting organisational documentation if needed
• Appropriate location – comfortable; unlikely to be disturbed; quiet
• The beginning of the interview – shaping credibility:
  - explaining your research
  - gaining the consent/confidence of the interviewee
  - supplying information on how the data will be used and how anonymity is guaranteed

Successful interview preparation 2
• Approach to questioning:
  - phrase questions clearly and understandably, with a neutral tone of voice;
  - avoid too-long questions;
  - avoid questions made up of two or more questions;
  - avoid too many theoretical concepts;
  - ground your questions in the real-life experiences of your participant rather than in abstract concepts;
  - use the critical incident technique – the respondent is asked to describe in detail a critical incident that is key to the research question;
  - leave sensitive questions until near the end – more time for building up trust; if a question upsets the respondent, the irritation will not affect further questions :P

Successful interview preparation 3
• Interviewer's behaviour during the interview – comments/non-verbal behaviour indicating any bias in your thinking should be avoided;
  - a neutral, but not uninterested, response;
  - do not look bored!
  - listen attentively – hold back your own thoughts, give enough time to develop the response, avoid projecting your own views. Careful listening allows you to identify comments that may be significant to the research topic.
• Test your understanding – e.g., summarise an explanation provided by the respondent so that he/she can evaluate the adequacy of your interpretation

Observation
• If your research question/objective is concerned with what people do, an obvious way in which to discover this is to watch them do it.
• Definition – the systematic observation, recording, description, analysis and interpretation of people's behaviour

Participant observation
The researcher…
- participates fully in the lives and activities of the subjects and thus becomes a member of their group, organisation or community
- not only observes, but also feels, what is happening
The meanings and context of the phenomena are discovered (hopefully).

Researcher roles
• Complete participant – you as a researcher attempt to become a member of the group in which you conduct research. Example: studying lunchtime drinking in a particular work setting. The question of ethics.
Threat: in gaining the trust of your „colleagues“, you might value that trust so much that you lose sight of your research purpose
• Complete observer – you do not reveal the purpose of your activity to those you are observing, and you do not take part in the activities of the group. Example: studying consumer behaviour in supermarkets – how people tend to react to the „rush hours“

Structured observation
• systematic
• has a high level of predetermined structure
• quantifies behaviour – how often things happen rather than why they happen
• …moves more and more onto the Internet – „indirect observation“, traces of behaviour
• personal digital assistants

Threats to validity and reliability
• Subject error – observing a situation that is not representative, but you do not know it –> choose subjects who in as many respects as possible are „normal“ examples of the population under study
• Time error – at different times of the day and the week there may be very different processes occurring
• Observer effect – the process of the observer's observation of behaviour changes the nature of that behaviour, owing to the fact that the subject is conscious of being observed. It has been demonstrated experimentally that even cockroaches run faster when other cockroaches are watching.
To overcome this effect:
1) observe in secret
2) minimal interaction – the observer tries to melt into the background, has little or no interaction, avoids eye contact, sits in an unobtrusive position
3) habituation – subjects become so familiar with the process of observation that they take it for granted. Several observation sessions are necessary in the same setting with the same subjects

Homework 3
• You have by now chosen your research strategy. Now, specify your data collection method (write a short passage to extend and clarify what your measurement tool should look like. For example, if you want to use a questionnaire, design some questions and explain why you chose to ask them. If you want to interview someone, design some topics/questions and specify whether it would be a structured or semi-structured interview, etc.)
• Email it… before our next lecture