2013 Online Research Conference Evaluation Summary

Overall Program Rating
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 57%, 28%, 13%, 2%]
Mean Rating = 7.89
(N = 61, out of 200, 31% response rate)
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate this CASRO program overall?
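For context on how figures like the mean rating, response rate, and rating bands reported throughout this summary could be computed, here is a minimal sketch in Python. The ratings list, invitee count, and band labels below are illustrative assumptions, not the actual survey data.

```python
# Minimal sketch: summarizing 1-10 ratings the way this report does.
# The ratings and invitee count below are hypothetical placeholders.
ratings = [9, 8, 10, 7, 6, 9, 8, 5, 9, 10]
invited = 200  # assumed number of people invited to the survey

bands = {
    "Excellent (9/10)": (9, 10),
    "Good (7/8)": (7, 8),
    "Satisfactory (5/6)": (5, 6),
    "Fair (3/4)": (3, 4),
    "Poor (1/2)": (1, 2),
}

mean_rating = sum(ratings) / len(ratings)
response_rate = len(ratings) / invited

print(f"Mean Rating = {mean_rating:.2f}")
print(f"(N = {len(ratings)}, out of {invited}, {response_rate:.0%} response rate)")
for label, (lo, hi) in bands.items():
    share = sum(lo <= r <= hi for r in ratings) / len(ratings)
    print(f"{label}: {share:.0%}")
```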
Total Experience & Insights
Positive Comments*
• Good Topics/Content (22 mentions)
• Good Presenters (16 mentions)
• Good Venue/Food (8 mentions)
• Informational (6 mentions)
• Well Organized (5 mentions)
• Relevant (3 mentions)
• Good Networking (3 mentions)
• Good Mix of Attendees (3 mentions)
• Interesting (2 mentions)
Negative Comments*
• Poor Presenters (13 mentions)
• Poor Topics/Content (5 mentions)
• Repetitive Presentations (5 mentions)
• Not Relevant (3 mentions)
• Exhibition Issues (2 mentions)
• Poor Networking (2 mentions)
When asked for additional insights, comments included:
Positive Comments*
• Good Presenters (45 mentions)
• Good Format (2 mentions)
• Good Food (2 mentions)
• Good Content (2 mentions)
• Good Networking (2 mentions)
Negative Comments*
• Poor Networking (12 mentions)
• Rushed/Too Much in Too Little Time (4 mentions)
• Exhibit Issues (3 mentions)
• Poor Content (3 mentions)
*2+ Mentions Shown
What specifically about the conference caused you to rate it [INSERT]?
Please provide us with any additional insights regarding your experience at the conference.
Other Comments & Suggestions
Positive Comments*
• Good Overall (10 mentions)
• Good Venue (2 mentions)
Negative Comments*
• Need More/Better Networking (10 mentions)
• Improve Content (9 mentions)
• Rushed/Need Better Schedule (4 mentions)
• Poor Hotel (2 mentions)
• Exhibit Issues (2 mentions)
*2+ Mentions Shown.
What other comments or suggestions do you have concerning this CASRO event?
Other Topics/Issues
Multiple Mention Topics Shown
• Mobile Market Research (20 mentions)
• Sample/Community Panels (6 mentions)
• Social Media Market Research (6 mentions)
• New Techniques/Trends (5 mentions)
• Data Quality (4 mentions)
• Sampling (3 mentions)
• Big Data (3 mentions)
• Data Integration (3 mentions)
• DIY (3 mentions)
• Qualitative (2 mentions)
• Privacy Issues (2 mentions)
Educational Content
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 54%, 31%, 12%, 3%]
Mean Rating = 7.87
N = 61
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate the educational content?
Amount of Information Learned
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 53%, 28%, 18%, 2%]
Mean Rating = 7.61
N = 61
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate the amount of information learned?
Networking Opportunities
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 51%, 25%, 20%, 5%]
Mean Rating = 7.15
N = 61
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate the networking opportunities?
Why Register for This Event
N = 61
[Chart: reasons for registering; Subject: 72%, Educational Value: 67%, Meeting with Peers: 56%, Other: 17%, Location: 15%]
Other mentions included Presenter (6 mentions), Exhibitor (2 mentions), and Become CASRO Member (1 mention).
Why did you register for this event?
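Because the shares above sum to well over 100%, respondents could evidently choose more than one reason. A minimal sketch of how such multi-select shares could be tallied, using made-up responses rather than the actual data:

```python
from collections import Counter

# Hypothetical multi-select answers; each respondent may name several reasons.
responses = [
    {"Subject", "Educational Value"},
    {"Subject", "Meeting with Peers", "Location"},
    {"Educational Value", "Other"},
    {"Subject"},
]

n = len(responses)
counts = Counter(reason for picks in responses for reason in picks)
for reason, count in counts.most_common():
    # Shares are computed per respondent, so the column can total more than 100%.
    print(f"{reason}: {count / n:.0%}")
```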
Gayle Fuguitt
Mean Rating = 8.43
N = 53
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 59%, 34%, 4%, 2%, 2%]
Comments:
• Thoroughly enjoyed her insights.
• Really moving.
• Very engaging with fun examples.
• Loved her general suggestions on where the industry is headed and specific suggestions on how to keep up with the trends.
• Loved you can only have 2 out of 3: Better, Faster, or Cheaper.
• Great speaker.
• One of the best speeches.
• Not very interesting for me.
• Incredible business insights, really enjoyed speaking with her privately as well. What a brilliant mind.
• Great panorama and insights.
• She's impressive.
• I thought this was a great presentation to kick off the conference.
• Sounded like it was a mash-up of MBA clichés; was hard to follow the narrative of the presentation (i.e., was "all over the place").
• Energizing, female, smart, future-focused.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Pete Cape
Mean Rating = 8.09
N = 56
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 48%, 41%, 5%, 4%, 2%]
Comments:
• Great topic!
• Insightful.
• Informative and directly impactful on the dangers of throwing out these respondents.
• Good information, good speaker.
• Very engaging.
• Really regretted missing this. Was a reason I went.
• Interesting.
• Dynamic speaker.
• Good reflection.
• Meandering. No practical suggestions or recommendations.
• Didn't feel much additional learning was gained from what we've seen before in other presentations.
• Awesome. Loved the technical analysis.
• Very engaging presentation; enjoyed it very much.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
David Bakken
Mean Rating = 6.88
N = 57
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 40%, 28%, 19%, 9%, 4%]
Comments:
• Interesting research.
• Interesting approach - but complicated.
• Good reminder that you need at least 100, but hard to be practically applicable.
• I think he lost most people so it may have needed to be more practical in the application than academic...
• Lot of info and numbers to take in first thing in the morning, but good information.
• Old school.
• Sort of a silly topic.
• Was like sampling 101.
• Little too data heavy for me.
• Too technical with unclear implications. Smart guy though.
• A little hard to follow the argument, but it's a difficult subject.
• He's a living legend!
• And the conclusion was n=100 is better than n=50 or n=25! Really? If it doesn't challenge conventional wisdom then a very short "what you've always thought has been re-confirmed" is enough.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Courtright & Pettit
Mean Rating = 7.23
N = 53
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 53%, 21%, 19%, 6%, 2%]
Comments:
• Valuable information as we move into global work.
• Very interesting.
• A lot of this research has already been done in the AAPOR literature.
• Well done, fun, relaxed presentation.
• There was almost no reference to past research on the subject.
• Interesting paper, but not a huge takeaway.
• Expected much more from these authors.
• The speakers didn't connect with the audience.
• No new news. This is perhaps the most researched area in survey research. They did not seem to build on the past.
• Didn't seem to really tell us anything. It was even said in the presentation that they weren't looking to make recommendations on scale, so I'm not sure I got it.
• Annie's work is highly respected, so this was enjoyable.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Craig Overpeck
Mean Rating = 6.23
N = 47
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 45%, 23%, 13%, 13%, 6%]
Comments:
• Not relevant to my work.
• Okay.
• I did really enjoy the empty chair.
• Marketing pitch for standards. Wasn't interested.
• It was really about ISO so wasn't sure where or how the title and the overview related. Could have been tied together a little more.
• A good subject, but the case for ISO was not clearly made.
• What he spoke about did not seem relevant to the topic; it just felt like a sales pitch for CIRQ.
• A bit of a marketing pitch for the certification program, but well delivered.
• This really was the wrong name for this session. It should have just been called ISO 26252 or something like that. He did not address the question above at all.
• Was a sales pitch for ISO - not educational.
• Not applicable to most.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
DIY
Mean Rating = 7.51
N = 55
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 42%, 35%, 11%, 11%, 2%]
Comments:
• Loved the dialogue and opinions of the implications of DIY for the future.
• Interesting parties on this panel - somewhat of a battle.
• A lot of attitude from one of the presenters.
• Provocative yet somehow not especially informative.
• Great understanding of everyone there of the benefits of DIY and how it is changing the industry to be faster and cheaper.
• Excellent and riveting discussion.
• A little less arrogance on the part of one of the speakers would have been helpful.
• This panel seemed more of an us vs. them conversation towards the end.
• I wasn't impressed with this one; seems there were two panelists that were there to sell their product rather than to participate in the discussion. Happens all the time: someone comes from technology and says we have the next best thing so get on our platform or be left behind.
• Sales pitch.
• One bombastic speaker, but overall this was informative and useful.
• Excellent. I expected Ryan from Qualtrics to pull out a sword and start swinging at the guy from GFK. I felt I was watching the new world (Qualtrics) battling the old order (GFK) and the new world was winning. Ryan dominated that conversation while everyone else did not seem prepared.
• Certainly one of the highlights of the conference.
• I thought the panelists were great - on both sides of the argument. I thought the audience was super defensive with their questions.
• Fun discussion, but so many people in our industry have a mental block about some of these types of products. This was evident in some of the audience questioning.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
DIY (Continued)
Mean Rating = 7.51
N = 55
Comments:
• It was sort of a self-defense position.
• Mixed. Great introduction by Bremer. Great stories by Terhanian. Smith from Qualtrics off topic and self-serving. Difficult to understand Garland.
• Not sure that we learned anything except the guy from survey monkey was extremely defensive and the guy from Qualtrics kept trying to bail him out. Could have been really good if it kept a more professional tone.
• Loved seeing all the industry heavyweights together.
• This was more of a sales pitch than a discussion.
• Lively discussion due to inappropriate statements, but I don't think this forum did the topic justice.
• Even though this wasn't what I expected based on the title, it was interesting/engaging. Garage techies came off as cocky though, and researchers as a little whiney.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Pingitore & Cavallaro
Mean Rating = 7.00
N = 50
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 52%, 30%, 14%, 4%]
Comments:
• Good presentation, but not very relevant to my work.
• Good.
• Important issues to look at in a realm of passive data collection.
• Some good information but I didn't learn a lot.
• Not actionable information.
• Just OK. Could have gone deeper into the subject of privacy and its relation to engagement and response rates.
• Made some good points.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Carol Haney
Mean Rating = 7.40
N = 50
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 66%, 18%, 12%, 4%]
Comments:
• Very good.
• Oversold it a bit.
• Good approach.
• Impressive.
• A bit ironic having this so close to the discussion of privacy.
• Felt like a Webinar where she was reading her notes.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Roslyn Ku
Mean Rating = 6.02
N = 44
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 39%, 34%, 14%, 7%, 7%]
Comments:
• Techniques too proprietary.
• Good case study.
• Fascinating insight into how they optimized their website.
• Relevance?
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Piet Hein van Dam
Mean Rating = 7.84
N = 44
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 52%, 34%, 11%, 2%]
Comments:
• Cool!
• Excellent presentation.
• Behavioral algorithms are important and a better way to achieve balancing.
• A good presenter.
• Interesting presentation. Not sure of the practicality as it applies to my business.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Margie Strickland
Mean Rating = 7.56
N = 43
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 47%, 30%, 14%, 9%]
Comments:
• Awesome!
• Good idea, needs to be scalable though. Need a much bigger panel size.
• Too proprietary.
• A solution looking for a problem?
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Kelly & Stevens
Mean Rating = 7.71
N = 55
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 53%, 31%, 15%, 2%]
Comments:
• Very important topic... need to continue to do research in this area.
• A bit complicated.
• Great information here and a trend that will happen in the industry.
• An important topic, generally well done.
• Far too complex for most research organizations that don't have the resources to tackle such an undertaking... but interesting nonetheless.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Johnson & Siluk
Mean Rating = 7.83
N = 54
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 48%, 33%, 19%]
Comments:
• Excellent analysis and very relevant.
• Again, complicated and time consuming idea.
• Complex area. Nice work although they did not go that deep and they did not address the issues.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Inna Burdein
Mean Rating = 8.15
N = 53
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 64%, 32%, 4%]
Comments:
• Great research to better understand the length issue.
• Good.
• I liked how difficulty drove abandon rates better than length.
• My favorite session.
• I appreciated the candor, but wondering if her bosses did.
• Title doesn't fit findings.
• Excellent presentation. She was quite well prepared too.
• I object to the premise that we should care most about "good respondents."
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Sweeney, Goldstein & Becker
Mean Rating = 7.39
N = 51
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 45%, 27%, 22%, 4%, 2%]
Comments:
• Very interesting, would have liked to see more examples.
• Dan was by far the best speaker of the conference and I enjoyed the topic. He was also very effective in the Q&A and had humorous and on-target comments.
• A very good and enjoyable speaker, although I didn't agree with the conclusions.
• Fun presentation that left me wondering if it truly has validity.
• I liked Terry, but the other two being from the same company was odd. Needed another viewpoint.
• The measurements obtained through gamification approaches don't seem to match those obtained through other methods.
• Seems like you wanted a presentation on gamification in the worst way and you achieved that goal.
• Excellent presentation style, but content was soft compared to the other presentations.
• Good presentation style, but too little detail to be relevant.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Jamie Baker-Prewitt
Mean Rating = 8.09
N = 53
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 45%, 42%, 10%, 4%]
Comments:
• Extremely important and relevant topic.
• Very interesting.
• Good tests and made me feel better about what was happening.
• Too much reading of the content of her own slides.
• Good content and information.
• Great presenter; didn't have great answers to audience questions, though.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Lattery & Bartolone
Mean Rating = 7.92
N = 53
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 51%, 36%, 13%]
Comments:
• Extremely relevant and important research.
• Very good.
• Good information and test design. Informative results and made us feel better about our current designs.
• Too much time spent prepping and not enough time for explaining and understanding.
• Their conclusions didn't seem to follow from their data, but solid presentation.
• Great topic, but too few practical details.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Greg Peterson
Mean Rating = 7.98
N = 47
[Chart: rating distribution across the Excellent (9/10) to Poor (1/2) bands; segment values: 58%, 33%, 6%]
Comments:
• Great presentation.
• Good.
• Too much time spent prepping and not enough time for explaining and understanding.
• Interesting content but I was disappointed that he was so scripted and read a lot of his presentation.
On a 1 – 10 scale with 10 being Excellent and 1 being Poor, how would you rate [insert]?
Reasons for Overall Rating
Excellent (9/10):
• Content and presenters were great!
• Excellent organization of the event, thoughtful presentations and networking opportunities.
• High level content.
• Just about all of the presentations had practical implications that are relevant to the work we do.
• The presentations were awesome and the food was very good.
• Quality of presentations.
• The event for networking was great, but it was not as much for networking.
• Strong follow-up to last year. I liked that there was a greater focus on new methods as well as greater participation from full service research companies.
• Mostly interesting speakers, variety of relevant topics. I did not give it a "10" because I wanted to hear about online communities.
• Great content and well thought out agenda.
• Quality of the presentations and networking opportunities.
• Great content and venue. Good attendance as well.
• It was extremely informational, the content was great, the organization was good but not perfect. Everything else was just done very well.
• The FOCUS of the agenda on methodology issues. There's no other place to go for this.
• I enjoyed 90% of the presentations and panels and learned a lot about mobile research.
• The content and venue were very good.
• Content and relevance of the papers.
What specifically about the conference caused you to rate it [INSERT]?
Reasons for Overall Rating
Good (7/8):
• Overall I really enjoyed the event; however, a few sessions weren't extremely relevant to me.
• Good set of papers - right on the money topic-wise.
• Great content. Good speakers. Very quant focused.
• It was more geared towards panel survey companies. However, there was some valuable mobile information presented. I would have liked to see more actionable recommendations from the research on research in terms of "This is what you should do" to go mobile.
• Generally good content. I liked that the organizers kept things moving and on track. Good downtown location with easy transit access. Quite dissatisfied that the DIY panel turned into a sales pitch.
• Good presentations.
• More presentations are based on R&D than at any other conference.
• While I found benefit in almost all of the conference, I was disappointed in a couple of the speakers, especially one in particular who read her presentation.
• Solid group of speakers and topics, great hotel and location. One of the better conferences I have been to put on by CASRO in the last 15+ years.
• Overall, it was quite excellent. A few of the presentations were weak and I think an outing that more people could participate in and enjoy would be excellent.
• The subject of mobile research. Apps on mobile devices.
• Overall the presentations were engaging, but there were a few that were not.
• I enjoyed most of the speakers and felt it was well organized overall.
• Only saw Friday, but would have liked to see CASRO connect more dots, summarize trends, etc.
What specifically about the conference caused you to rate it [INSERT]?
Reasons for Overall Rating
Good (7/8) - Continued:
• Mix of a few great papers and a lot of weak ones.
• Some very good and innovative papers were presented.
• It was a good conference, with good content. I was missing more business and end user perspective.
• Many of the presentations were the same thing we see over and over.
• The topics were interesting and relevant.
• The talk about mobile is very interesting. I found the sessions on the second day very good and the sessions on the first day slightly better than average.
• Very informative sessions, good location; wish there had been some more exhibitors. It was interesting that some of the papers presented really contradicted each other in some of the specifics.
• Had great speakers, but others not. Some panels were boring.
• Didn't love all the sessions.
• Very good but not great. Some of the presentations were repetitive.
• The quality and variety of the presentations.
• It was a good conference, but some of the presentations could have been a bit shorter.
What specifically about the conference caused you to rate it [INSERT]?
Reasons for Overall Rating
Good (7/8) - Continued:
• It was a good conference and in a great location, but I felt some of the presentations fell a little short of their description.
• Good stuff on mobile, great keynote, some good stuff on online, but some real stinkers as well.
• I expected the conference in general to be more geared towards mobile research and moving forward in that direction, and not online in general. I felt like the mobile portion was saved for the very end.
• I think there were some good sessions, but then there were also some mediocre sessions. Also, I don't think the conference gave enough time with the exhibitors.
• Attendees were high quality but not high quantity, which somewhat limited networking opportunities. Most speakers were great but many made "public speaking 101" mistakes (e.g., too much text on a page) that could have been quickly fixed by having a professional review each presentation the day before. Overall, it was definitely worth attending.
• A few of the presentations were too data heavy, which overshadows the important takeaways. Most presentations were very good, insightful.
• It was good.
Satisfactory (5/6):
• Presentations were informative, but for the most part pretty dry. Visuals were fairly boring. Especially when it comes to mobile, most presenters seemed to be a few steps behind the industry.
• Some good sessions - some very repetitive sessions (heard it before at other conferences).
• It was too long on the first day and the content was not entirely applicable for me until day 2, which was much more usable information.
• Some presentations were a bit redundant with the ones I've seen in New Orleans in 2009-2010.
What specifically about the conference caused you to rate it [INSERT]?
Reasons for Overall Rating
Satisfactory (5/6) - Continued:
• Presentations were not as strategic as in past years; more in the weeds, so to speak.
• Lack of energy. Many ideas rehashed. Mobile session in particular was very sub par and far from the cutting edge.
• Speakers didn't seem very polished, prepared.
Fair (3/4):
• The attendance was meager and the content, with a couple of exceptions, seemed dated and not reflective of the dynamic changes happening right now.
What specifically about the conference caused you to rate it [INSERT]?
Additional Insight Verbatims
Excellent (9/10):
• The vendor exhibit hall was too small. The time also seemed short.
• Please include some topics on the B2B side of research.
• First day was long. I suggest adding a 1/2 day or going to full days to shorten the first day.
• Everything was great. Conference was great. I would have liked more networking breaks.
• Please speak about online communities and community panels at future conferences. I manage one, and it's really hard to find webinars and conferences on the topic.
• I think the conference could be spread out over a bit longer time.
• Good format.
• The connections are important at the conference.
Good (7/8):
• I really liked the panel discussions.
• In terms of networking, we're healthcare based so it was more networking with potential fielding vendors as opposed to clients.
• Need a black list for speakers who sell.
Please provide us with any additional insights regarding your experience at the conference.
Additional Insight Verbatims
Good (7/8) - Continued:
• Keep up the good work with getting great speakers.
• It would be fun if there was a better networking event, like a speed networking event or something that really pushed attendees to meet new people... it felt like for the most part people stuck with other folks they already knew.
• Networking was hard to break into as a newcomer. Seemed like an old school reunion. Conference could have offered more active peer to peer engagement.
• Great venue, great group of participants, great to hear more on mobile as the next step of market research.
• Right amount of time, size was manageable to find and talk to people. Topics were very relevant to today's market.
• The presentations were generally good, with only a few that were sub par.
• Liked to hear about online from senior researchers, not only newcomers.
• If this is a tabletop exhibit event, it would be great if you enforced tabletop exhibits only -- "nothing on the floor except the table" would be a good start.
• I thought in general it was good, but I thought many of the papers that were presented were on specific niche projects that one company was working on, and less general information to help the field move forward with the future of mobile research.
• I thought the DIY panel discussion was interesting. Also, the opening session was good.
• More content catered to client-side would be great.
• I thought the panel was one of the best panels I've ever seen because the people were more frank and direct than most panels, which tend to be very polite and boring.
Please provide us with any additional insights regarding your experience at the conference.
Additional Insight Verbatims
Satisfactory (5/6):
• Food was great! Networking opportunities were pretty good. Presentations just seemed a bit stagnant, although a few were relevant.
• More networking opportunities - not enough break out sessions.
• Day 1 was too long. Very few vendors to talk to at the exhibitor section, which was a little disappointing.
• Networking opportunities are always great; I liked the program over the past 3 years better as they fostered more discussions about strategic issues, which are important given the dynamic changes in our industry. The topics were good but R on R was a bit granular and redundant.
• Wish there were more networking opportunities; more breaks....? An evening event that isn't as exclusive; hence includes all conference goers...
• Was most interested in the DIY Research Panel Discussion, which I had interpreted as a discussion about DIY panels.
Fair (3/4):
• CASRO has some superior shows, so the standard is high in my mind. This show seems to need a rejuvenation though.
Please provide us with any additional insights regarding your experience at the conference.
Other Comments & Suggestions
Excellent (9/10):
• The hotel was not great.
• Please try and include more of the B2B side of the research. Currently it is absolutely consumer focused. I would also like to see an event for the CATI business, if possible.
• Slightly shorter schedule - felt a bit jam packed.
• I think this was the best conference I've been to in terms of practical, useful, relevant topics/information being presented and discussed.
• The food was good. Would like the conference to be longer - three days.
• More networking.
• More social media promotion for the event.
• Overall, event was good. Would like exploration of online communities (not just an introduction, but best practices for those who already have one). Also it was really hard to rate the individual speakers because I didn't remember all of them... a picture of each speaker would have helped.
• More on mobile would be nice.
• Nothing else. Really good conference.
• One idea: I loved the Academy of Arts & Sciences event, but it didn't facilitate meeting other delegates. A cocktail party with dinner BUFFET would be a way to maximize connections. Keep everyone together and encourage round robin discussions.
Good (7/8):
• First afternoon was very long. Like the number of papers but would prefer to finish an hour later on the Friday.
• Balance some of the content with qualitative. This was really heavy on the quant side... would have been nice to see more rich texture of Qual to explain the numbers presented on the Quant side. Great conference though.
What other comments or suggestions do you have concerning this CASRO event?
Other Comments & Suggestions
Good (7/8) - Continued:
• Just providing more actionable recommendations in terms of advancing into the mobile research space.
• It was odd to not have an afternoon break but it works when lunch is later. I think I prefer that.
• I liked that similar presentations were grouped. I can’t remember the name of the presentation that turned out to be a commercial for CIRQ. That was not at all what I was expecting nor did I feel it was appropriate for this conference.
• I do not have a thing to complain about. Well done.
• Would like a social event where all are invited.
• Free coffee mugs and bumper stickers (I love CASRO - if you can read this, you're too damn close).
• Keep up the good work.
• More ice breaker or mingling events.
• I was glad that an app was created for the event, but I couldn't figure out how to take advantage of all the pieces.
• Longer happy hours (more than an hour).
• Good venue.
• Stick to R&D-based papers using generally available data sources.
• Good all around.
• Spend more time on the results in balance with the methodologies.
• I really wish you'd have this event on the east coast at some point. There are warm places people can go in March, like Florida.
• Cheaper hotel.
• The CASRO team did a fantastic job.
• I think this conference was very good--no additional comments.
• is to network. I think the event should be planned with that in mind.
What other comments or suggestions do you have concerning this CASRO event?
Other Comments & Suggestions
Good (7/8) - Continued:
• Cocktail hour at hotel too short. Networking event was neat but failed to deliver on actual networking because everyone dispersed.
• It would have been nice to have the more mobile specific presentations on Day 1 or spread out throughout the two days.
• Overall, let's have more interesting content.
• Try to get more booth exhibitors next time - that will increase networking opportunities. Also, add a panel/speaker about startups in the market research space, so industry leaders can see what's “coming next”.
• Focus on more client-side related content.
What other comments or suggestions do you have concerning this CASRO event?
Other Topics/Issues
• More concentration on mobile applications.
• Apps vs. HTML5 vs. Web-based surveys. More on DIY and how we can best integrate it into the industry. 'Big data' applications, such as scraping Facebook graphs - where else are people finding data? Meters are showing up everywhere. Is anyone doing anything useful with them? (I think not!)
• As long as you keep up with current issues and technologies and keep the conferences at least partly focused on these new trends, along with the norm which is data quality, survey quality, etc., I am good.
• Automotive owner sample.
• B2B side of the research.
• Behavioral economics.
• Better use of technology.
• Big data - how online is working WITH rather than against big data.
• Blending the results gathered from social MR and traditional research methods (includes mobile).
• Continue discussing gamification, dealing with missing data, DIY and mobile research.
• Continue with mobile. Online communities (and beyond just Communispace... I want to hear best practices from a number of different companies that do this, e.g., VisionCritical and Toluna).
• Data and insight dissemination.
• Development of the panel business; updates on mobile survey technology; blending of survey data with other data.
• Gen pop vs. targeted sample.
• Handling privacy issues across borders. Challenges for privacy when using clients' lists.
• I wonder if we should move away from the concept of online research. Is that even relevant given where we are headed?
• I would like to see more about what techniques people are using to get response rates up. This will start to plague us more as people respond to fewer surveys. This is what has happened to us with telephone surveys, so how do we get in front of this?
Other Topics/Issues (Continued)
• Just more on the future of sample panels and data collection techniques.
• Mobile data collection and the quality of dynamic sources, plus the impacts of routing on data quality.
• Mobile research as INTENTIONAL and as the natural way people engage and communicate.
• Mobile research recommendations - step-by-step.
• Mobile research, respondent engagement, quality measures.
• Mobile. Social CRM. Social media research.
• Mobile, qualitative, storytelling, rich media, social validation.
• More about market research firms as a business and the impact of today's realities on tomorrow's MR model. I think there are many angles to pursue here.
• More about the issues of having a community vs. a panel.
• More about mobile.
• More business strategy discussions.
• More end user case studies of research implementation.
• More mobile.
• More mobile and better stuff on big data.
• More of the same.
• More on DIY, Mobile, and integration of social media, online and behavioral data; more on impact of Big Data.
• More on mobile and its progression would be great. Sample alternatives to traditional panels would be good. Big data is also useful.
• More on mobile survey design, some ground breaking case studies?
• More on mobile. Continue a focus on data quality in general.
• More on trends - some of the talks were blatant sales pitches.
Other Topics/Issues (Continued)
• Market research company loyalty - from the perspective of the employee and employer and what that means to each group. Hey, just to mix things up a bit.
• New techniques in market research and how to implement and use them... Future research, crowd sourcing, predictive research.
• Passive data integration. Blend of Marketing and Market Research.
• Possibly more end-client involvement and what they are seeing/wanting to do with market research in future.
• Research design.
• Respondent engagement on a global level.
• Sample representativeness in online research.
• Smartphones in surveys again.
• The issues were good, the speakers just didn't go deep enough. Seemed they skimmed the surface of the hard topics.
• Update on mobile.
• What about an individual organization's researchers setting up and implementing a panel (DIY)?
• What startups are innovating in the market research space.
• Whatever is cutting edge.
Respondent Profile
• Title:
  – C-Level (10 mentions)
  – Partner (1 mention)
  – Vice President (14 mentions)
  – Manager/Director (29 mentions)
  – Research Analyst (4 mentions)
  – Consultant (2 mentions)
  – Business Development (1 mention)
• Years of Experience in this Position
  – On average, respondents have held their current job position for 5.07 years.
• Years of Experience in Survey Research Industry
  – On average, respondents have worked in the survey research industry for 15.57 years.