Paper ID #9608
Work in Progress: Development of an Android-based Student Mobile Application for the AIChE Concept Warehouse
Rachel M. White, Oregon State University
Rachel White is a junior in chemical engineering at Oregon State University. Her interest in engineering
education comes from being a student and observing fellow classmates struggling with their studies. She
is interested in promoting conceptual understanding in the chemical engineering core curriculum so that
students can perform better both in the classroom and beyond.
Dr. Bill Jay Brooks, Oregon State University
Bill Brooks is a postdoctoral scholar in the School of Chemical, Biological, and Environmental Engineering at Oregon State University. His Ph.D. used written explanations to concept questions to investigate technology-mediated active learning in the undergraduate chemical engineering classroom. His current interests involve using technology to enhance educational practices in promoting conceptual understanding.
He is the primary programmer of the AIChE Concept Warehouse and his current focus is on its continued
development, specifically creating and integrating Interactive Virtual Labs.
Dr. Debra M. Gilbuena, Oregon State University
Debra Gilbuena is a postdoctoral scholar in the School of Chemical, Biological, and Environmental Engineering at Oregon State University. Debra has an M.B.A., an M.S., and four years of industrial experience, including a position in sensor development, an area in which she also holds a patent. Her current research focuses on student learning in virtual laboratories and the diffusion of educational interventions and practices.
Dr. Milo Koretsky, Oregon State University
Milo Koretsky is a Professor of Chemical Engineering at Oregon State University. He received his B.S.
and M.S. degrees from UC San Diego and his Ph.D. from UC Berkeley, all in Chemical Engineering.
He currently has research activity in areas related to engineering education and is interested in integrating
technology into effective educational practices and in promoting the use of higher-level cognitive skills
in engineering problem solving. His research interests particularly focus on what prevents students from
being able to integrate and extend the knowledge developed in specific courses in the core curriculum to
the more complex, authentic problems and projects they face as professionals. Dr. Koretsky is one of the
founding members of the Center for Lifelong STEM Education Research at OSU.
© American Society for Engineering Education, 2014
Work-in-Progress: Development of an Android-based Student Mobile
Application for the AIChE Concept Warehouse
Abstract
Incorporating user feedback to continually improve educational innovations is imperative for the
adoption and sustained use of those innovations. We report on the development of a user-suggested improvement to the AIChE Concept Warehouse: incorporation of an Android
operating system based Student Mobile Application. Our intent is to share what we have learned
through our improvement process, such that other innovators can benefit from the lessons
learned through our experience.
The AIChE Concept Warehouse was developed with the goal of fostering a community of
learning within chemical engineering. The Concept Warehouse’s cyber-enabled database
infrastructure is designed to promote concept-based instruction through the use of concept
questions in core curriculum courses like Material/Energy Balances, Thermodynamics, Transport
Phenomena, Kinetics and Reactor Design, and Materials Science. Concept questions, both as
Concept Inventories and as ConcepTests, are available to help lower the barrier of using concept-based instruction and assessment. This instruction and these assessments can be used to promote
and evaluate student learning in real-time. The instructor can then adjust the pace of lecture in
response to student understanding, spending more time on more difficult concepts. This tool also
allows for reflective assessments such as the “muddiest point.”
A Student Mobile Application is being developed to make it easier for students to submit
answers and written explanations to these assessments using mobile devices. Previously, students
could input their answers to conceptual questions using clickers, smartphones, and laptops.
However, input via smartphones was cumbersome because it depended on a student’s web
browser and the full-size web page. The improved student interface will facilitate student
participation by making it easier for them to submit responses via smartphone.
Once the application is developed, we will conduct initial usability testing with students who
have been using the previous web-based options for answer submissions. In order to assess usability, usage statistics and student responses to usability surveys will be collected. Survey
responses will be used to identify student likes and dislikes and compare the different available
options for answer submission. Results from usage statistics will be used to improve the design
of the application.
Introduction
Both engineering educators and industry professionals express a need for students to have the
ability to apply their knowledge to new and challenging problems1. Traditional lecture-based
courses in science and engineering, however, encourage and promote rote memorization over the
conceptual understanding needed to apply knowledge in new situations2,3. Instructors need to
place greater emphasis on conceptual understanding and utilize concept-based learning in their
classrooms.
The AIChE Concept Warehouse was designed to lower one of the biggest barriers that prevents
instructors from using concept-based instruction: access to high quality concept
questions. Construction of good concept questions is often difficult and time-intensive4. The
Concept Warehouse alleviates this barrier by giving instructors access to a variety of concept
questions in the core chemical engineering curriculum along with providing a variety of ways to
utilize these questions in their courses. Instructors can either assign these questions as homework
or use them in class as part of active learning pedagogies (e.g. peer instruction). If using concept
questions in class, the instructor can have students respond using their clickers, laptops, or
smartphones and receive a distribution of student responses in real time. This is useful for
determining student understanding of new topics and whether students hold misconceptions
about new course content.
Currently, use of the Concept Warehouse on a smartphone is cumbersome and inconvenient for
students. The web pages are not optimized for use on mobile devices, and students must resize
pages in order to read and input text. Navigation menus also are not touch-friendly. The AIChE
Concept Warehouse Student Application for Android-based devices seeks to improve the student
user experience with touch-friendly navigation and optimization for small screens. In this paper,
we present a description of the current status and available features of the student
application. We also provide a detailed description of the design and development process to
provide a reference for future design processes. Finally, we report on future plans and activities
for the student application.
Related Work
Applications for mobile devices have been used as learning aids before. Pikme, an iPhone
specific application, was developed to help the instructor manage student class lists to learn
student names, randomly select students to participate in class discussion, and rate these solicited
student responses5. The application can then store these ratings to aid the professor in grading
student participation. Overall student and instructor feedback was positive: use of the
application in class led to increased student motivation, improved engagement, increased overall
participation, and more balanced feedback. Students felt that random selection through the use
of Pikme was fair and provided good motivation to prepare outside of class. Some students
reported an increased willingness to volunteer regardless of the random selection tool.
Another application, made for iPads specifically, was developed to aid instructors in the rubric-based grading of assignments and presentations. Called evaluA+, the application allows
instructors to create a rubric, import student assignments, and then view the rubric and
assignment simultaneously in both online and offline modes6. The instructor is also able to share
the rubric with students as they work on their assignments. There is also a presentation mode
where an instructor can use a rubric to grade a student as he or she gives an oral presentation.
Pikme and evaluA+ are examples of mobile applications designed to aid the instructor in the
classroom. Mobile applications have also been developed with the intent of providing formative
assessment of student understanding in real-time. InkSurvey is a free, web-based tool that
collects student responses to open format questions. Students “ink” their responses through use
of pen-enabled Android devices, iPads, iPhones, and/or tablet PCs7. The instructor can pose
questions during lecture, and students can respond with words, drawings, graphs, or equations
through InkSurvey. Creating these responses gives students an opportunity to interact with the
subject material and increases student metacognition. The instructor, in return, gains a real-time
window into what students are thinking and can address misconceptions and further questions.
Mobile applications like InkSurvey help promote active learning by encouraging students to
reflect on subject material and explain concepts in their own words. Studies of more than 5,000
science and engineering students have found that active learning methods double conceptual
learning gains8 and yield a 25% higher pass rate9 than traditional lecture. Active learning
methods, like peer instruction, help place greater emphasis on concept-based
instruction. Though traditional engineering courses often reward rote learning2,3, a lack of
conceptual understanding can inhibit a student’s ability to solve new problems because of the
inability to use new knowledge in new situations10.
Concept-based Instruction and Peer Instruction
The AIChE Concept Warehouse, along with the student mobile application, is designed to make
it easier for instructors to implement concept-based instruction and use of concept questions in
class. Concept questions can be difficult to create and are perceived as time-consuming to use,
deterring many instructors from adopting a more concept-based approach to instruction4,11,12. The AIChE Concept Warehouse is a database-driven website that gives instructors access to
approximately 2000 questions related to the core chemical engineering curriculum. The tool also
gives instructors the infrastructure to write and share their own questions as well as assign these
to students either in class or as homework. Students can then use their laptops, clickers, or cell
phones to answer these concept questions in class. The Concept Warehouse then records student
answers and presents an answer distribution to the instructor, who can use this as formative
assessment to determine how well students grasp new material.
Students have a variety of options when it comes to submitting their answers to concept
questions posed in class, as the Concept Warehouse accepts answers from clickers, laptops, and
cell phones. Many higher education institutions have chosen to adopt clickers into their curricula. This technology is not without its own set of disadvantages, however. Clicker
limitations include difficulty in assessing students’ ability to answer more open-ended questions
involving higher level thinking13, distortion of student responses by only providing a set menu of
possibilities, and the inability to simulate job-like environments14. Clickers are not compatible
with one of the Concept Warehouse’s more powerful features – short answer written
explanations. Instructors can choose to ask students to include a written explanation of their
answer selection to multiple-choice concept questions. While laptops support students writing
explanations to justify their answers, some students have also reported the inconvenience of
carrying their laptops from class to class15. This leaves mobile devices such as smartphones and
tablets as the tool of choice.
The goal of the AIChE Concept Warehouse Student Application is to provide an improved
student experience. Currently, web pages are slow to load on mobile devices and require manual
resizing on the part of the user. With the Student Application, users will have access to touch-friendly navigation and a version of the student interface that is optimized for mobile devices.
The Student Application Design Process
The design process of the AIChE Concept Warehouse Student Application mirrors that of the
AIChE Concept Warehouse user interface16. This is logical since the application is an extension
of the student user interface. Specifically, the process includes the following steps:
1. Develop a function list for each screen.
2. Create a storyboard or mockup for each activity that includes the necessary functions.
3. Implement the mockup concepts into the live application.
4. Conduct internal testing via an emulator and a developer-enabled phone.
5. Test usability with students in a classroom setting.
We have completed the storyboarding part of the process and are currently iterating between
implementations of features and internal testing. Design conversations have led to changes in
the user interface layout. Usability testing is planned for Spring 2014.
The AIChE Concept Warehouse Student Application is designed to have the same functionality
as the student version of the web interface. Thus, the application must present the student with
assigned question sets, allow the student to select an answer, provide a written explanation and
confidence follow-up if prompted, and then submit that answer to the Concept Warehouse so
participation is recorded. Table 1 lists the functions that the student application will have, which are identical to the functions that the student web interface performs.
After determining the required functionality, mock-ups of the user interface were created. The
application is optimized to run on a mobile device and this was the driving factor in the
development of the application’s user interface. Screen space is at a premium on smartphones,
so the main menu had to be hidden when not needed. When the student is viewing and
answering questions, the question text, image, and answer options need to be the most important
objects on the screen and therefore must take up the majority of the available screen space.
Menus and navigation must also be optimized for touch input, which means limiting the number
of submenus for the user to navigate. Finally, there are some design considerations for mobile
devices that did not manifest in the mockups. These considerations include checking for network connectivity before initializing the application and providing low-power states so the application conserves battery.
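As a concrete illustration of the connectivity consideration, a minimal sketch of a pre-launch network check is shown below. This is our own illustration, not code from the actual application; it uses the standard Android ConnectivityManager API and assumes the app declares the ACCESS_NETWORK_STATE permission.

```java
import android.content.Context;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;

// Illustrative helper, not part of the Concept Warehouse codebase.
// Requires the ACCESS_NETWORK_STATE permission in the manifest.
public final class NetworkCheck {

    // Returns true if the device currently has an active, connected network.
    public static boolean isConnected(Context context) {
        ConnectivityManager cm =
                (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo activeNetwork = cm.getActiveNetworkInfo();
        return activeNetwork != null && activeNetwork.isConnected();
    }
}
```

An activity could call NetworkCheck.isConnected(this) in onCreate() and show a retry prompt instead of attempting to load question sets while offline.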
The application is currently in development, where the functions and concepts realized by the
mockups are being implemented into a version of the application for internal testing. Once this
phase is complete and any necessary changes are implemented, a stable version of the student
application will be made available to students to test out in their own classes through the Google
Play Store and publicity on the AIChE Concept Warehouse webpage.
Table 1. Summary of functionality in the AIChE Concept Warehouse Student Application. This functionality mirrors the capabilities of the student web interface.

Screen: Log In
  Log In: The user can log into the application. There is a checkbox to allow the app to remember the login information.

Screen: Home
  Class Question Sets: The user can click the names of the classes that appear to access the questions assigned. If there are no questions available, the application tells the user this.
  Touch Menu**: A hidden menu can be opened by swiping from left to right.

Screen: Questions
  Navigation Between Questions: The user can move between a set of assigned questions.
  Answer Submission: After selecting the answer (and providing a written explanation and/or confidence follow-up if requested), the user can submit his/her response.
  Touch Menu**: A hidden menu can be opened by swiping from left to right.

Screen: Profile
  Demographic Information: The user can edit his/her personal information such as birth year, year started at the university, gender, race, and major.
  Touch Menu**: A hidden menu can be opened by swiping from left to right.

Screen: Settings
  Clicker Registration: The user can select clicker type and register his/her receiver for use as an alternative input device.
  Touch Menu**: A hidden menu can be opened by swiping from left to right.

Screen: Informed Consent
  Accept/Decline Inclusion in Study: The user can view important information regarding the ongoing AIChE Concept Warehouse study and choose whether to participate.
  Touch Menu**: A hidden menu can be opened by swiping from left to right.

**Touch Menu
  Access to Home: Clicking this brings the user back to the home screen.
  Access to Questions: Clicking this brings the user to the questions screen.
  Access to Informed Consent: Clicking this brings the user to the informed consent screen.
  Access to Profile: Clicking this brings the user to the profile screen.
  Access to Settings: Clicking this brings the user to the settings screen.
  Log Out: This allows the user to log out of the application.
Functionality
In many ways, the AIChE Concept Warehouse student application is an extension of the
webpage student user interface. After initializing the application, students come to a log in
screen where they must supply their username and password. The application will have a feature
that will allow students to save their login credentials for later use.
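One common way to implement this "remember me" behavior on Android is with SharedPreferences. The sketch below is illustrative only (the class and key names are ours), and in practice a session token should be stored rather than a raw password.

```java
import android.content.Context;
import android.content.SharedPreferences;

// Illustrative only; class and key names are hypothetical, not from the actual app.
public class LoginStore {
    private static final String PREFS = "cw_login";
    private final SharedPreferences prefs;

    public LoginStore(Context context) {
        prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
    }

    // Persist the username and, ideally, a session token rather than the raw password.
    public void save(String username, String token) {
        prefs.edit().putString("username", username).putString("token", token).apply();
    }

    public String savedUsername() { return prefs.getString("username", null); }
    public String savedToken()    { return prefs.getString("token", null); }

    // Called on log out so the next launch returns to the log in screen.
    public void clear() { prefs.edit().clear().apply(); }
}
```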
Once they are logged in, students will come to a home screen. If questions have been assigned,
question sets will be listed by class with the number of questions available to answer (see Figure
1a). If no questions have been assigned, the home screen will tell students this (Figure 1b). The
Home screen is also the first experience students will have with the hidden menu (Figure 2).
To conserve screen space, the menu will exist as a small image at the top of the screen. Students will be able to access this menu by either tapping the image or swiping from left to right. The menu will serve as the main method of navigation through the application; students will be able to access their profile, settings, informed consent information, and log out of the application. More importantly, students will be able to answer questions. The Profile screen will allow students to update their demographic information. This screen will contain the same fields seen in the student webpage version: birth year, year started at the university, gender, race, and major. The Settings screen will allow students to register their clickers as a secondary method of submitting answer choices. The Informed Consent screen will provide students with information about the ongoing AIChE Concept Warehouse study, Integration of Conceptual Learning Throughout the Core Chemical Engineering Curriculum, and allow them to opt-in.

Figure 1. Mockup of the Home screen, both (A) with questions and (B) when no questions have been assigned.
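A swipe-from-left hidden menu of this kind is typically built with the Android support library's DrawerLayout. The snippet below is a generic sketch under that assumption; the layout and view IDs are ours, not taken from the actual application.

```java
import android.app.Activity;
import android.os.Bundle;
import android.support.v4.view.GravityCompat;
import android.support.v4.widget.DrawerLayout;
import android.view.View;
import android.widget.ImageView;

// Generic sketch of a slide-out navigation menu; layout and IDs are hypothetical.
public class HomeActivity extends Activity {

    private DrawerLayout drawerLayout;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_home);  // layout with a DrawerLayout as its root

        drawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout);
        ImageView menuIcon = (ImageView) findViewById(R.id.menu_icon);

        // Tapping the small image at the top of the screen opens the same menu
        // that a left-to-right swipe reveals (DrawerLayout handles the swipe itself).
        menuIcon.setOnClickListener(new View.OnClickListener() {
            @Override public void onClick(View v) {
                drawerLayout.openDrawer(GravityCompat.START);
            }
        });
    }
}
```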
When questions are assigned, students will be able to navigate the Questions screen (Figure 3) by
clicking on the names of the classes with assigned questions or by clicking the Questions button
in the navigation screen. This screen will present the question text, associated diagram if
applicable, and the answer options available to the student. If the instructor has chosen to ask for
a written explanation and/or confidence follow-up, these will be available as well. Students will
also be able to use the arrows at the top of the screen to navigate between linked or grouped
questions as well as use the dropdown menu to navigate between classes with assigned
questions.
Figure 2. Students can swipe from left to
right to access the hidden menu.
Figure 3. The Questions screen shows question
text, applicable images, and answer options. If
the instructor has requested a written
explanation and confidence follow-up, these
fields will appear as well.
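The submission step amounts to an authenticated HTTP request back to the Concept Warehouse server. The sketch below shows only the general shape of such a request from an Android client; the URL and parameter names are placeholders of our own, not the real Concept Warehouse API.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Shape sketch only: the endpoint and field names are placeholders, not the
// actual Concept Warehouse API. Run off the main thread (e.g., in an AsyncTask).
public class AnswerSubmitter {

    public int submit(String questionId, String answer,
                      String explanation, int confidence) throws Exception {
        String body = "question_id=" + URLEncoder.encode(questionId, "UTF-8")
                + "&answer=" + URLEncoder.encode(answer, "UTF-8")
                + "&explanation=" + URLEncoder.encode(explanation, "UTF-8")
                + "&confidence=" + confidence;

        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://example.edu/conceptwarehouse/submit").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        OutputStream out = conn.getOutputStream();
        out.write(body.getBytes("UTF-8"));
        out.close();

        int status = conn.getResponseCode();  // e.g., 200 once participation is recorded
        conn.disconnect();
        return status;
    }
}
```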
Instructor and Student Usability Feedback
Once the AIChE Concept Warehouse Student Application has passed internal design testing, it
will undergo usability testing in a classroom setting. The application will be made available on
the Google Play Store, and links on the AIChE Concept Warehouse homepage will notify
instructors and encourage students to try it out. Specifically, students will be encouraged to use
the application throughout the course and then provide feedback in the form of a usability
questionnaire at the end of the term. Table 2 outlines what questions will be on the usability
questionnaire. The questionnaire is designed to elicit both quantitative results and provide an
open-ended space for students to discuss what they like about the application and problems they
think should be fixed. This data will be used to inform future updates and features of the AIChE
Concept Warehouse Student Application.
Table 2. Questions to elicit feedback from students at the conclusion of usability testing.

Usability Questions
  My overall opinion of the AIChE Concept Warehouse Student Application is: (Likert Scale)
  I found the application easy to use. (Likert Scale)
  I would recommend the application to a fellow student. (Likert Scale)
  I will likely continue using this application to answer questions through the AIChE Concept Warehouse. (Likert Scale)
  I find the application easier to use and/or more convenient than using my clicker. (Likert Scale)
  I find the application easier to use and/or more convenient than using my laptop. (Likert Scale)
  One aspect I like about the application is: (Open-ended)
  One aspect I would change about the application is: (Open-ended)
  Other comments that I have: (Open-ended)
Conclusions and Future Work
Initial development of the AIChE Concept Warehouse Student Application will culminate at the
end of March 2014. With a live version of the application available to students, the first round of
usability testing will begin in April and end in June (approximately when the Spring 2014 term
ends). This will give developers an opportunity to implement any changes and/or additional
features identified through student feedback.
If the student application is well received, this could pave the way for future Concept
Warehouse mobile applications. Instructors may benefit from a tablet-friendly mobile
application that allows them to control the AIChE Concept Warehouse instructor interface
remotely while moving about the classroom to listen to student conversation. This application
could let them assign and close questions, view student answers, and re-poll students after class
discussion.
Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant No.
DUE 1023099. Any opinions, findings, and conclusions or recommendations expressed in this
material are those of the authors and do not necessarily reflect the views of the National Science
Foundation.
References
1. Educating the engineer of 2020: Adapting engineering education to the new century. The National Academies Press: Washington, DC, 2005.
2. Elby, A., Another reason that physics students learn by rote. American Journal of Physics 1999, S52.
3. Felder, R. M.; Brent, R., Understanding Student Differences. Journal of Engineering Education 2005, 57-72.
4. Crouch, C.; Watkins, J.; Fagen, A.; Mazur, E., Peer Instruction: Engaging Students One-on-One, All At Once. Research-Based Reform of University Physics 2007.
5. Bakrania, S., "Getting Students Involved in a Classroom with an iPhone App," Proceedings of the 2012 ASEE Conference and Exposition, San Antonio, TX, June 2012.
6. Bakrania, S., "A rubric-based grading app for iPads," Proceedings of the 2013 ASEE Conference and Exposition, Atlanta, GA, June 2013.
7. Kowalski, F.V., Kowalski, S.E., & T.Q. Gardner, "Using Mixed Mobile Computing Devices for Real-Time Formative Assessment," Proceedings of the 2013 ASEE Conference and Exposition, Atlanta, GA, June 2013.
8. Hake, R. R., Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics 1998, 64-74.
9. Poulis, J.; Massen, C.; Robens, E.; Gilbert, M., Physics lecturing with audience paced feedback. American Journal of Physics 1998, 66, 439-441.
10. Hestenes, D.; Wells, M.; Swackhamer, G., Force Concept Inventory. The Physics Teacher 1992, 141-158.
11. Fagen, A.; Crouch, C.; Mazur, E., Peer Instruction: Results from a Range of Classrooms. The Physics Teacher 2002, 206-209.
12. Dancy, M., & Henderson, C., "Pedagogical practices and instructional change of physics faculty," American Journal of Physics, 78, 2010, 1056.
13. Yorke, M., "Formative Assessment in Higher Education: Moves towards Theory and the Enhancement of Pedagogic Practice," Higher Education 45, 2003, pp. 477-501.
14. Kowalski, S.E., F.V. Kowalski, & E. Hoover, "Using InkSurvey: A Free Web-Based Tool for Open-Ended Questioning to Promote Active Learning and Real-Time Formative Assessment of Tablet PC-Equipped Engineering Students," Proceedings of the 2007 ASEE Conference and Exposition, Honolulu, HI, June 2007.
15. Koretsky, M. D., & Brooks, B. J., "Student Attitudes in the Transition to an Active-Learning Technology," Chemical Engineering Education, 46(1), 41-49, 2012.
16. Brooks, B.J., Gilbuena, D., Falconer, J.L., Silverstein, D.L., Miller, R.L., & M.D. Koretsky, "Preliminary Development of the AIChE Concept Warehouse," Proceedings of the 2012 ASEE Conference and Exposition, San Antonio, TX, June 2012.
Paper ID #9993
Work-in-Progress: Developing Online Graduate Courses in Electrical Engineering
Petr Johanes, Stanford University
Larry Lagerstrom, Stanford University
Larry Lagerstrom is the Director of Online Learning for the School of Engineering at Stanford University.
He has eighteen years of experience teaching engineering and physics classes, including in blended and
MOOC formats. He holds degrees in physics, mathematics, interdisciplinary studies, and history.
© American Society for Engineering Education, 2014
Work-in-Progress: Developing Online Graduate
Courses in Electrical Engineering
A. Introduction
The Department of Electrical Engineering at Stanford University has a long history of
teaching large-enrollment master’s level and advanced undergraduate courses with broad
appeal and applicability. At present twelve such courses are offered, each with annual
enrollment of more than 80 students. Another dozen or so courses have somewhat smaller
enrollments. These courses are taken by Electrical Engineering students as well as students
from other departments within the School of Engineering and the rest of the University.
Many of the courses also make up the core of a professional development program offered to
working engineers. In order to test the learning efficacy of online education, develop a set of
best practices, and provide more scheduling flexibility to students by offering multiple instances of a course during the year, the Department proposed to develop online versions of
a number of these courses over a three-year period. The proposal was accepted and the “EE
Online Program” started in academic year 2012-2013 with course planning and development.
Student learning patterns, outcomes, and satisfaction are being measured both quantitatively
and qualitatively. This work-in-progress paper reports on the mid-point results of the EE
Online (EEO) program.
B. Program and Course Development
The proposed plan for EE Online course development included four courses the first year, six
courses the second year, and up to nine courses the third year (all of which already existed in
traditional course formats). The initial four courses—applied quantum mechanics, digital
signal processing, digital image processing, and convex optimization—were chosen based
primarily on the interest and availability of the regular instructors to develop online course
materials. All were graduate courses, though at the introductory level, and therefore open to
advanced undergraduates.
Funding for the program came from the University’s recently created Office of the Vice
Provost for Online Learning (VPOL), which had requested proposals from departments that
went beyond single-course projects. Each of the four initial courses was budgeted in the
range of $35,000 to $50,000 for development of online materials as well as the actual
delivery of the course. These amounts included teaching fellow salaries and benefits
(advanced graduate students who would assist in teaching the new version of the course) and
200 hours of videographer and editor time at $90 per hour, as well as funds for a teaching
assistant who would focus on assessment issues across all the courses. Each course also had
its regular assignment of one or more teaching assistants, which were not part of this budget.
The faculty involved did not receive extra compensation or release time, even though each
spent 200-300 hours or more on course development. (This lack of an incentive may change
in the future, as it is a clear obstacle to the participation of faculty, most of whom are already
overscheduled. It was also discovered that teaching assistants were very capable of performing the video editing required, and therefore much of the budget for video editing was not needed and could be better spent on increasing the amount of general TA assistance available.)
In addition, there were significant instructional development resources available to each
instructor at no cost through the VPOL Office. The Office currently has several full-time
instructional designers as well as dedicated teams for media production and for online
platforms. A number of classrooms are available that are outfitted with full video capture
capabilities and staffed by student operators. A video studio with a green screen is also
available for instructor use.
The instructors were given great leeway in how they chose to structure and develop the
online versions of their courses, including traditional classroom teaching supplemented with
online material, flipped classrooms, tutored online education (of which more below), and a
MOOC. In the latter case, the MOOC was to be offered in addition to the regular for-credit
course. The University views its MOOCs both as a public service and as laboratories for
exploring online teaching and learning—the School of Education at the University has an
active learning analytics group with a focus on online learning. But the University has no
plans at present for granting credit to non-Stanford students taking MOOCs.
Although instructors were encouraged to experiment with different forms of online
instruction, the Department’s Academic Affairs Committee (consisting of five faculty, two
graduate students, and two staff members) reserved the right to determine when and in which
forms (offline, online, blended, etc.) a course would be offered for credit, in order to ensure
the overall quality of the EE Department’s academic program. In addition, the Department’s
Associate Chair for Graduate Education took the lead in overseeing the EE Online program.
Three of the initial EE Online courses were offered during the Autumn quarter of 2013:
quantum mechanics, digital signal processing, and digital image processing. The quantum
mechanics course was taught as a flipped, for-credit course using online video modules and
assessment quizzes in place of traditional lectures. It was made available to the public as a
MOOC at the same time. The digital signal processing course was also taught in a flipped
format, and the digital image processing course was taught in an “online with tutored
instruction” format. All three courses assigned offline problem sets.
In the online-with-tutored-instruction format used in the digital image processing course,
students viewed video modules and worked through assessment quizzes online, while a
teaching fellow (an advanced graduate student) offered in-person Q&A times to supplement
the videos and regular teaching assistants provided additional help to students. In other
words, the main teaching role of the faculty person was via the online videos (although in this
pilot version the faculty instructor did hold weekly office hours). In order to compare
teaching methods and learning outcomes, this course is being offered in a more traditional
format by the faculty instructor during the Winter quarter 2014, and it will be offered again
by a teaching fellow in the online-with-tutored-instruction format in the Spring. The quantum
mechanics course may also be offered in the teaching fellow format in the Spring or Summer
of 2014. The fourth of the initial EE Online courses—convex optimization—is being offered
as a regular course with online materials as well as a MOOC open to the public in Winter
2014.
Though the overall program plan called for the development of online materials for six more
EE courses for the 2014-2015 academic year, at this point in time it is unclear which and how
many EE courses will be included. As mentioned previously, a significant obstacle is the
200+ hours of course development required of the instructor, though we are investigating
ways to reduce this time.
C. Surveying Student Perceptions
In order to provide a preliminary assessment of the use of online learning materials, during
the Autumn 2013 term we conducted mid-course student surveys for five classes with online
components, three of which are part of the EEO Program (applied quantum mechanics,
digital signal processing, and digital image processing) and two of which are not
(nanomanufacturing and introduction to computer networking). (Engineering faculty teaching the two non-EEO courses asked to be included, and because it allowed us to gather data from a more diverse set of courses, we agreed to extend the service to them.)
The survey sought to gather both quantitative and qualitative information from students,
while at the same time not being overly burdensome in terms of the time needed to complete
it. Taking the survey was optional, though in one case it counted toward the course
participation grade. The survey was online and contained twelve questions (not counting
three demographic questions concerning degree-level, year-in-school, and department). Both
open-ended questions and choose-a-response questions were included:
1. How are you finding the course so far? What would you like more of? Less of?
2. What do you like/dislike about the online videos?
3. What do you like/dislike about the other online components of the course?
4. What do you like/dislike about the homework assignments distributed so far?
5. On average, how many hours per week are you spending on the online materials
(videos, quizzes, etc.)? [A range of possibilities was given from which to choose, such
as <1, 1-2, 3-4, 5-8, and so on.]
6. On average, how many hours per week are you spending on the problem sets or other
homework assignments? [A range of possibilities was given.]
7. What is your typical practice in terms of the videos and assessment quizzes? [A list of
possibilities was given, such as attempting the quiz before watching the video.]
8. When you take an online assessment quiz, are you usually confident of your answers?
[A list of possibilities was given, including “usually,” “sometimes,” and “much of the
time I am guessing.”]
9. What is your typical practice in terms of the online materials and the problem sets or
other homework? [A list of possibilities was given, such as viewing all the materials
before starting the homework.]
10. When do you normally start working on the problem sets or other homework? [A list
of possibilities was given.]
11. Which of the following support resources have been helpful for finishing the problem
sets and other homework? [A list of possibilities was given.]
12. Anything else you would like us to know or that you would find helpful?
We received an average 35% response rate from all courses generally and 40% from EEO
courses specifically, as summarized in Table 1 below.
Course         Enrollment (n)   Response Count (n)   Response Rate (%)
QM             82               28                   34%
DSP            13               7                    54%
DIP            25               13                   52%
Nanomfg        41               18                   44%
Networking     160              46                   29%
Totals         321              112                  35%

Table 1. Enrollment, response counts, and calculated response rates for the surveyed courses. (QM = applied quantum mechanics; DSP = digital signal processing; DIP = digital image processing; Nanomfg = nanomanufacturing; Networking = introduction to networking.)
D. Survey Methodology: The Quantitative Data
Using the responses from the “hours-per-week” survey questions, we aimed to better
understand how much time students were spending on the individual class components and
the class as a whole. For the individual components, we summed up the counts in each
weekly time duration range. For the class as a whole, however, we created a range for each
responding student by summing the minimum and then the maximum values. If a student
reported spending 3-4 hours per week on online materials, 1-2 hours per week on online
quizzes, and 3-4 hours per week on paper-based problem sets, the calculated range would be
7-10 hours per week. We then classified each responder’s range as low, normal, high, or very
high based on the rules outlined in Table 2. For comparison purposes, we compiled historical
data for the courses involved from the University’s course and section evaluation reporting
website. This data went back several years and as far back as 2008 in one case. The results
are discussed in Section G.1 below.
Workload Category: LOW
  Rule Notation: xMAX < 10
  Rule Statement: If the student's reported maximum weekly workload is close to or below the minimum of the expected range.
  Range Example: The student reports 5-8 hrs/week.

Workload Category: NORMAL
  Rule Notation: xMAX < 13
  Rule Statement: If the student's reported maximum weekly workload is close to or below the maximum of the expected range.
  Range Example: The student reports 7-12 hrs/week.

Workload Category: HIGH
  Rule Notation: xMIN ≤ 12 & xMAX ≥ 13
  Rule Statement: If the student's reported minimum weekly workload is close to or below the maximum of the expected range and the maximum weekly workload is above the maximum of the expected range.
  Range Example: The student reports 11-16 hrs/week.

Workload Category: VERY HIGH
  Rule Notation: xMIN > 12
  Rule Statement: If the student's reported minimum weekly workload is above the maximum of the expected range.
  Range Example: The student reports 13-18 hrs/week.

Table 2. Categorization of students' reported workload for quantitative analysis.
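A minimal sketch of the calculation described in Section D and Table 2 follows; the thresholds of 10, 12, and 13 hours come directly from the rule notation, while the class and method names are ours.

```java
// Sketch of the workload categorization from Section D and Table 2.
public class WorkloadClassifier {

    // Sum the per-component minimums and maximums, e.g.
    // (3-4) + (1-2) + (3-4) hrs/week -> 7-10 hrs/week.
    public static int[] totalRange(int[][] componentRanges) {
        int min = 0, max = 0;
        for (int[] r : componentRanges) {
            min += r[0];
            max += r[1];
        }
        return new int[] { min, max };
    }

    // Apply the rules in Table 2 to the summed range (min, max).
    public static String categorize(int min, int max) {
        if (max < 10) return "LOW";
        if (max < 13) return "NORMAL";
        if (min > 12) return "VERY HIGH";
        return "HIGH";  // min <= 12 and max >= 13
    }

    public static void main(String[] args) {
        int[] total = totalRange(new int[][] { {3, 4}, {1, 2}, {3, 4} });
        // A reported 7-10 hrs/week falls in the NORMAL category.
        System.out.println(categorize(total[0], total[1]));
    }
}
```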
E. Survey Methodology: The Qualitative Data
To analyze the free-response and qualitative questions on the mid-course survey, we
employed primarily low-level coding and pattern matching. Because the data would be used
to produce reports for the teaching staff in each course, we analyzed the student responses in
terms of (1) course component, (2) component specifics, and (3) student sentiment. Course
component refers to the substance of the response we were coding, as when students are
referring to online videos or the flipped classroom model, for instance. Component specifics
refers to the student reporting a particular aspect of the component, such as the long length or
large quantity of online videos. This part of the code is not always present and we included it
in our analysis primarily when we began to notice a pattern in the student responses that we
specifically wanted to explore. The student sentiment refers to whether the comment is
positive or negative. Table 3 showcases several examples of these codes.
Because we were analyzing the data sequentially and not all at once, we developed our
coding repertoire as we analyzed more and more student responses. Consequently, we also
developed certain codes that are at present unique to an individual course (and thus helpful to
the instructor), but we expect that they will become more prevalent as our analysis expands.
Using the coded student responses, we not only counted the frequency of individual codes,
but also looked for broader patterns within an individual course and across a group of
courses. Although we discussed all of these results as we produced the report for each course,
we did not explicitly formulate any hypotheses about the broader patterns until we had
finished analyzing the data from all courses.
Code: Course :: Positive
  Explanation: Comment regarding the overall course that is positive.
  Example: "I'm enjoying the course--really learning a lot."

Code: Online Videos :: Positive (Transcript)
  Explanation: Comment regarding online videos that is positive, specifically about the video transcript.
  Example: "I really like the transcript that goes along the side."

Code: Online Videos :: Negative (Time-Consuming)
  Explanation: Comment regarding online videos that is negative, specifically referencing the videos' time-consuming nature.
  Example: "They [the online videos] are extremely long (if you consider that you're replacing one Wednesday class with 3 online videos approximately 25-30mins each at regular speed)."

Code: Course Materials :: Negative (Not Enough Detail)
  Explanation: Comment regarding the course materials that is negative, specifically describing their lack of detail.
  Example: "The lecture notes on their own are too sparse to be useful for review and reference."

Code: Online Assessments :: Positive (Thorough Explanations)
  Explanation: Comment regarding online assessments that is positive, specifically referencing the thorough explanations of the assessments' answers.
  Example: "The answers to the quizzes are well written, so, when I truly don't understand a question, I feel like the answer really helps w/ understanding the solution."

Table 3. Samples of codes used for qualitative analysis.
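For bookkeeping, each coded response can be represented as a simple record of the three parts described above (component, optional specifics, and sentiment). This is an illustrative structure of our own, not the actual analysis tooling used.

```java
// Illustrative record for one coded student comment; not the actual tooling.
public class CodedComment {
    public enum Sentiment { POSITIVE, NEGATIVE }

    public final String component;    // e.g., "Online Videos"
    public final String specifics;    // e.g., "Time-Consuming"; may be null
    public final Sentiment sentiment;
    public final String excerpt;      // the student's own words

    public CodedComment(String component, String specifics,
                        Sentiment sentiment, String excerpt) {
        this.component = component;
        this.specifics = specifics;
        this.sentiment = sentiment;
        this.excerpt = excerpt;
    }

    @Override
    public String toString() {
        // Reproduces the "Component :: Sentiment (Specifics)" code notation.
        return component + " :: " + sentiment
                + (specifics != null ? " (" + specifics + ")" : "");
    }
}
```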
F. Sharing Insights with Instructors
Once we finished analyzing the student responses for a particular course, we produced a
report that summarized our key findings for the course teaching staff. The reports include an
executive summary, question-by-question brief data breakdown, additional feedback, and
screenshots of the survey itself. Throughout each report and especially in the one-page
executive summaries, we highlighted any suggestions and recommendations that we felt were
relevant for the teaching staff to notice.
G. Trends and Analysis
From the mid-course survey results in the three EEO and two non-EEO courses, we identified
three major areas that students are particularly sensitive to and that instructors should be
aware of:
• Time usage
• Structure of online assessments
• Offline-online integration
For the EEO classes, we also contextualized these trends through end-of-quarter student
course evaluations.
G.1. Student Sensitivity to Time Usage
The most immediate trend noticed from the survey results concerns students’ reporting of
how much time they spend on a given course. From data gathered across all five courses, we
found that on average students spend 3-4 hours per week on online materials/videos, 1-2
hours per week on online quizzes/assessments, and 3-4 hours per week on paper-based
problem sets (if they are part of the course). (See Figure 1 below.)
The total time spent outside of class time is therefore 7-10 hours per week. Given that these
courses are 3-4 units apiece, this is consistent with the definition of a Carnegie unit, which
states that 1 unit of academic credit reflects approximately 3 hours of work per week inside or
outside of class. To confirm this conclusion, we calculated the hourly range that each student
reported spending on the course overall, and defined that range as low, normal, high, or very
high. (See Figure 2 below. Section D and Table 2 above give the definitions of “low,”
“normal,” etc.) For the EEO courses we found that 14% of the responders fit the very high
category, and 13% for all five courses. More importantly, more than half of the students in
each case responded as experiencing a low or normal workload: 58% for EEO courses and
60% for all courses. From a purely numerical standpoint, therefore, these courses provided a
manageable workload for more than 80% of their students.
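As a rough consistency check of the Carnegie-unit comparison above (the 2-3 hours per week of scheduled class meetings is our own assumption, supplied only for illustration):

```latex
\underbrace{(3\text{--}4\ \text{units}) \times 3\ \tfrac{\text{hr}}{\text{unit}\cdot\text{week}}}_{\text{expected}}
  \approx 9\text{--}12\ \tfrac{\text{hr}}{\text{week}},
\qquad
\underbrace{7\text{--}10}_{\text{outside class}} + \underbrace{2\text{--}3}_{\text{in class}}
  \approx 9\text{--}13\ \tfrac{\text{hr}}{\text{week}}.
```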
Figure 1. Reported times students spent weekly on online videos, online quizzes and assignments, and offline (paper-based) problem sets.

Figure 2. Occurrence of low, normal, high, and very high workloads as calculated from student responses in three EEO courses (left) and two non-EEO courses (right). (Note: Rounding yields a total greater than 100% for the left plot.)
Despite these quantitative results, the subjective experience of the students reveals a different
perspective, with students communicating an acute sensitivity to the time spent on a course.
In their responses, multiple students specifically referenced how much time they were
spending on the online portion of the class. This is not completely surprising, since the
average student was spending 4-6 hours per week on the online portions. Given the reported
total of 7-10 hours, this represents slightly more than half of the average student’s course
time.
It is unclear if the online components add time that was not previously used by any offline
component or if the student is now simply more aware of this time. Students can see the
length of each video, and so they are probably conscious of how much time they spend
watching them. The time the students spend online is also strictly delineated by activity:
watching videos and answering quizzes. In the absence of these online course components,
the students might have still used that time to read textbooks, check notes, and so on. The
difference in perception might be that the offline activities are not as explicitly differentiated
or that students had no way to consciously track and remind themselves of the time they were
spending on them. This is a topic of particular interest that we hope to explore in the future.
The students' general dissatisfaction with the time-consuming nature of these courses does not always mean that they desire less work. There is a small minority of students who desire a lighter workload so that they can process and elaborate on what they are learning. The following
response illustrates this desire:
“In most classes, I review the material several times, make sure I understanding
everything, then explore extensions to what was taught. This is how I learn. There is no
opportunity to do so in this course - I simply do not have enough time to do any extra
work. The result is that despite spending more time doing stuff for the class, I learn (in a
‘retain for longer than the quarter’ sense) significantly less than I would in a normal class.”
As the same student aptly summarized, the additional workload creates a situation in which
the student is “spending twice as much time to learn half as much.” The student wanted to
spend more time on the course, but not in the structured manner that the teaching staff
intended for all students. Although we have seen only a few responses that mention or hint at
a similar distinction, it is important to note that by structuring more of a student’s time, less
time is available for unstructured learning.
G.2. Student Sensitivity to the Structure of Online Assessments
This tension with forced structure also revealed itself in student comments regarding the
online and in-video assessment quizzes. In the quantum mechanics course, for example, the
teaching staff used multiple-submission as well as single-submission questions. Several
students reported an appreciation for the multiple-submission questions, and not only because
multiple attempts allow them to get the question right and earn points. As one student
explained:
“I put a lot more thought into an answer if I try once, get it wrong, and then have the
chance to get it right than I do if I get it wrong and it becomes immediately
inconsequential to my grade.”
The incorrectness of the initial attempt prompts further thought, whereas the restriction of a
single submission may encourage the student simply to move on.
In addition, many students reported that they thought the online quizzes would be more
effective without points attached to them (in those courses where points were given). The
students report various reasons for why this is the case, but the following response highlights
a common sentiment: “Since we only get two guesses [in this case], there is little room for
experimentation and exploration. It transforms a useful learning tool into yet another attempt
to grade us.” The tension here is whether the quizzes are for the benefit of the students (a
“learning tool”) or for the benefit of the teachers (“yet another attempt to grade us”). As the
student also points out, between examinations, homework assignments, and other forms of
assessment, the online quizzes represent “yet another” method by which to grade students.
Another common complaint across most of the courses was the poor wording and lack of
clarity in the questions. Although the questions were perceived to be clear and
straightforward by the instructors, many students felt otherwise. Students seemed to notice
every grammatical and syntactic inconsistency. They also noticed any inconsistency between
the crafted questions and the material presented in the video. Commenting on this mismatch,
one student wrote that students can “easily spend at least the amount of time it took to watch
the video (or more) answering quiz questions.” Several students from across multiple courses
reported having to rewind videos, reference texts, and even search elsewhere online to be able
to answer the quiz questions. The result is that, as another student wrote, “[I] spend so much
time trying to get the question right that I lose sight of what’s really important, the material.”
Students in most of the courses also requested more comprehensive explanations to the quiz
answers, in order that they might become better opportunities for learning.
G.3. Student Sensitivity to Offline-Online Integration
The various types of mismatch and inconsistencies in online assessment are indicative of a
broader and deeper need for online-offline integration. Simply put, the online and offline
components in a course need to fit together. On one hand, the online components must
complement the offline ones (which, in the case of these courses, came first). On the other
hand, the online components cannot render the offline components (such as class time)
superfluous. One student summarized the situation well in noting that the course was in
danger of becoming “just a really time consuming TV show as opposed to a fun class.” From
our data, it seems that even if students viewed the offline and online components positively,
unless the two integrate well with each other, the overall experience becomes negative. The
quality of integration is at least as important as the quality of the components.
Probably the most significant point of misalignment centered on homework and other forms
of assessment. Expressing a sentiment we found across most of the courses, one student
wrote that “it sometimes feels like the treatment [in videos] is too simple for us to do the
homework.” This mismatch, compounded by other previously stated problems such as poor
wording, can result in students feeling that “the course staff doesn’t put as much detail into
the course material as they expect out of the students” (especially when the course does not
have a textbook).
G.4. Overall Student Evaluation of the Courses
Apart from the mid-course survey through which we collected student responses, students
also filled out university-administered end-of-term course and section evaluations. From
these, we found that students rated all three EE Online courses above the School of
Engineering mean for the Autumn 2013 term. Technological and pedagogical
experimentation, therefore, does not necessarily imply negative student rating of the course
and its instructor. On the other hand, of course, this does not mean that turning an offline
course into an online-offline blended one automatically produces an above-average student
rating. In the case of one EEO course, for example, the end-of-term evaluations produced the
highest rating in the history of the course since 2008. For another course, however, there was
a drop in the course rating from the prior year’s course, although it still remained above the
School of Engineering average. Unsurprisingly, one significant factor leading to more
negative student evaluations was a mismatch between instructor-communicated course
expectations and the actual student experience, such as when the online videos and
assessments took much more time than advertised.
H. Analysis of Video Usage
We are also currently analyzing the details of student viewing behavior in the digital image
processing course. The video player and data collection system that the course employed is
an internal system developed by the Stanford Center for Professional Development. The
system allows the collection of data on a variety of behaviors, including usage of the
pause/play and speed changing buttons, jumping from one segment to another (whether
forwards or backwards), and the day of week and time of day when students viewed the
videos the most frequently.
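The kinds of events described above could be captured with a record along the following lines; the field names are ours, since the actual schema of the Stanford Center for Professional Development system is not documented here.

```java
import java.util.Date;

// Illustrative schema for one logged playback event; field names are hypothetical.
public class PlaybackEvent {
    public enum Type { PLAY, PAUSE, SPEED_CHANGE, SEEK_FORWARD, SEEK_BACKWARD }

    public final String studentId;
    public final String videoId;
    public final Type type;
    public final double positionSeconds;  // where in the video the event occurred
    public final double playbackSpeed;    // e.g., 1.0, 1.5, 2.0
    public final Date timestamp;          // gives day of week and time of day

    public PlaybackEvent(String studentId, String videoId, Type type,
                         double positionSeconds, double playbackSpeed, Date timestamp) {
        this.studentId = studentId;
        this.videoId = videoId;
        this.type = type;
        this.positionSeconds = positionSeconds;
        this.playbackSpeed = playbackSpeed;
        this.timestamp = timestamp;
    }
}
```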
We have found several preliminary patterns in our analysis. For example, significant changes
in viewership behavior occur when videos exceed a certain length. Viewership begins to drop
at the 3-4 minute mark and even more so when the 5 or 6 minute mark is reached. To explore
viewer behaviors at the levels of individual videos, we are isolating time segments of
significance (e.g., very high or very low viewership) and correlating them with the content,
and presentation of the content, in the video at that point. We are also examining the
sequence of interaction. What did student X do right after watching video Y? What did
student X do after answering quiz A? After watching video Y, did student X go straight to
quiz A? And after answering quiz A, did the student return to video Y? The aim is to explore
which videos and which parts of videos receive higher viewership, and why, as well as to
connect viewership data to quiz/assessment data.
I. Analysis of Learning Outcomes
At this point in time (March 2014), the analysis of learning outcomes from the EE Online
courses is still in process. We are using the digital image processing course, in particular, to
compare the efficacy of the online-with-tutored-instruction method (described in Section B
above) with a more traditional lecture-based class. During the summer of 2013 the faculty
instructor for the course spent 200+ hours developing short online video modules and
associated assessment quizzes for the course material. The structure of the course was
designed to cover all of the material during the first seven weeks of the ten-week quarter,
with weekly offline homework assignments. During the final three weeks of the quarter
students work in small teams on projects. A take-home midterm exam is given in week 9.
There is no final exam. The final course grade is based on the final project (40%), the midterm (30%), homework (20%), and participation (10%).
The course was offered during the Autumn 2013 quarter via the OpenEdX platform using the
online-with-tutored-instruction method. Students watched the online videos and then took
short assessment quizzes (1-5 questions) that tested their understanding of the material. There
were 80 quizzes (one for each video) and a total of 199 questions, and most of the questions
were multiple-choice or true-false. Taking the quizzes counted for a student’s participation
grade in the course, but the actual performance on the quizzes did not count for or against a
student’s overall course grade. The instructor and two TAs for the course held regular office
hours, and the TAs also conducted weekly problem sessions related to the homework. An
online discussion forum was set up using Piazza.
The course is being offered again during the Winter 2014 quarter by the faculty instructor
using a more traditional lecture format (though with some time devoted to in-class exercises).
Students were again required to take the online assessment quizzes for their participation
grade, but watching the online videos was optional, given that the material was covered in
class. (Though most students did watch the videos, at least in part.) Class attendance was
required and counted in the participation grade. The quiz questions were identical for each
quarter (aside from one or two corrections).
Final results are still pending for the Winter 2014 course, but a preliminary comparison
shows modest differences between the outcomes of the Autumn and Winter versions. The
largest gap was in homework: the average homework score was 84% (n = 26 students) for the
online Autumn course versus 96% (n = 24) for the more traditional Winter course. Midterm
exam results were closer: 83% for Autumn students and 87% for Winter students.
For both courses, the student performance on the online assessment quizzes was much lower
than expected. The quiz questions were intended to be relatively straightforward in most
cases, assuming that the student watched the associated video (and paid attention). Yet the
average quiz score was 45% for the online Autumn course and 43% for the traditional Winter
course. (These figures include a correction to take into account the possibility of student
guessing on the multiple-choice questions; raw averages were still low, however: 63% and
61%, respectively.) Possible reasons for these low scores, besides student inattention, include
the well-documented tendency of experts to underestimate how difficult a question is for
novices in a field, video modules that did not explain the material well enough, and/or poorly
worded questions and answer choices.
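The paper does not state the exact correction procedure used; the sketch below shows the standard correction-for-guessing formula that one might apply, which rescales a raw proportion-correct score by subtracting the expected score from random guessing. The class, method, and variable names are ours, not the authors'.

/** Hypothetical sketch of a standard correction-for-guessing on quiz scores. */
public class GuessingCorrection {

    /**
     * Rescales a raw proportion-correct score so that pure random guessing maps to 0
     * and a perfect score maps to 1: corrected = (raw - 1/k) / (1 - 1/k),
     * where k is the number of answer choices (k = 2 for true/false).
     */
    public static double corrected(double rawProportionCorrect, int numChoices) {
        double chance = 1.0 / numChoices;
        double corrected = (rawProportionCorrect - chance) / (1.0 - chance);
        return Math.max(0.0, corrected);   // clamp: below-chance scores count as 0
    }

    public static void main(String[] args) {
        // e.g., a raw 63% average on four-choice questions corresponds to about 51%
        // after this correction; on true/false questions it would correspond to 26%.
        System.out.printf("4-choice: %.2f%n", corrected(0.63, 4));   // ~0.51
        System.out.printf("true/false: %.2f%n", corrected(0.63, 2)); // ~0.26
    }
}

Because the quizzes mixed multiple-choice and true/false items, the reported corrected averages would depend on how the two item types were weighted.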
In addition, a multiple-choice pre-course test was given to all students to establish a baseline
of incoming knowledge, and an essentially identical test was given again at the end of the
course. Results from these tests in each quarter will provide another way to compare the
learning outcomes of the two instructional methods. (Post-course test results for the Winter
2014 course are not yet available.)
Given just the two course offerings at this point and the relatively small number of students,
we cannot draw statistically rigorous conclusions. But our primary goal is to check whether
the online-with-tutored-instruction method in introductory EE graduate classes can produce
learning outcomes that are approximately equivalent to a more traditional face-to-face class
with some online elements. The tentative conclusion is that it can. Clearly more iterations and
fine-tuning are required. But given the preliminary results, it seems viable to offer students
more flexibility by scheduling multiple instances of a course throughout the year without
requiring the faculty instructor to be directly involved in all of them. (It also allows for the
possibility of the faculty member not being involved in any of them, of course; whether that
is desirable is another question.) If it is important to students to take a course with
face-to-face interaction with the instructor, then they may enroll in the course during the term
in which the instructor is directly involved. On the other hand, if scheduling flexibility is a
higher priority for them, then students would have the option to take it during another term
via the online-with-tutored-instruction method.
A secondary benefit of the online-with-tutored-instruction method that should be mentioned
is the opportunity it gives to advanced graduate students to gain teaching experience beyond
that of the typical TA experience.
J. Conclusions
At this point in the development of the EE Online program, we have identified three major
areas that students are particularly sensitive to and that instructors should be aware of: extra
requirements on the students’ time, the structure of online assessment exercises, and the
integration of the online and offline components of a course. There are more moving parts in
a blended course compared to a typical traditional course, and managing student expectations
and then delivering on those expectations with a well-integrated online-offline pedagogy is a
crucial skill in online and blended instruction. Instructors should also keep in mind that, for
many students as for instructors, online and blended instruction is a new experience; students
sometimes need to learn how to learn differently and, it is hoped, more effectively.
Graduate students and advanced undergraduates tend to be set in their study habits, and
therefore change does not come easily.
Given the preliminary and roughly equivalent results from the two offerings of the digital
image processing course, one using the online-with-tutored-instruction model and the other a
more traditional in-class model, it is concluded that the online-with-tutored-instruction
approach shows promise for allowing departments to schedule more instances of a course
than a regular faculty member would normally teach. The initial investment of time and
resources is considerable, however, and appropriate incentives for faculty members to make
that investment should be considered. In the long run, though, the increased flexibility for the
department and for students may make the investment worthwhile.
Paper ID #10347
Work-in-Progress: Learning Embedded Smartphone Sensing technology On
a Novel Strategy (LESSONS): A novel learning labware design, development
and implementation
Dr. Kuosheng Ma, Southern Polytechnic State University
Kuo-Sheng Ma, Ph.D., is currently an Assistant Professor in the Department of Electrical Engineering at
Southern Polytechnic State University. His research interests include MEMS and embedded systems
design for biomedical applications, mobile health, and the use of technology in engineering education.
Dr. Liang Hong, Tennessee State University
Dr. Kai Qian, Southern Polytechnic State University
Professor of computer science
Dr. Dan Lo, Southern Polytechnic State University
© American Society for Engineering Education, 2014
Work-in-Progress: Learning Embedded Smartphone Sensing
technology On a Novel Strategy (LESSONS): A novel learning
labware design, development and implementation
I. Motivation
Mobile devices and applications have evolved rapidly and now play important roles in all
aspects of our society1. In addition, advances in embedded hardware and software have
demonstrated the capability of these systems to influence the physical world through complex
functionality2. By combining embedded systems, wireless communication, and mobile
technology, remote sensing has shown significant promise for improving services in healthcare,
environmental protection, national security, and other areas. Because of their importance at the
nexus of future global economic competition, these technology trends are fueling a growing
demand for a well-trained workforce in mobile embedded systems.
Embedded systems are no longer isolated electronic devices; they increasingly influence the
physical world through complex functionality. Many universities have recognized their
importance and have worked to promote related courses at the college level3,4. Because the
subject spans both hardware and software, an embedded systems course is typically listed as
essential in both electrical engineering and computer science (or software engineering)
curricula. However, one main challenge in these major-oriented curricula is that students do not
gain comprehensive knowledge of both hardware and software design by taking only one course
in their home department. For example, EE majors may learn material focused on hardware
design but receive little training in programming, while CS majors may learn only
microcontroller programming without exploring the hardware. As a result, students are not
exposed to the full range of current technologies and cannot draw on the strengths of both
disciplines.
In addition, most engineering curricula require lab sections5. Students must attend a physical
laboratory session and complete a specific project there, finishing all pre-set lab activities in a
limited time under many constraints and considerable pressure. This instructional model can
undermine learning effectiveness by reducing student interest, blocking creative thinking, and
hindering innovation. Training in emerging mobile embedded sensing systems is even scarcer,
and in many programs it is simply unavailable.
II. Portable labware design
In response to these dilemmas, we are developing a labware to be implemented in our embedded
systems curriculum without further increasing students' learning burden. The labware is
proposed and built based on our experience with mobile learning environments6, so that students
can learn the material and work on lab activities anywhere and at any time7. It features mobile
embedded sensing system design with student-centered learning, multidisciplinary applications,
and sustainability characteristics, and it is comprised of modules ranging from introductory
topics through complete system design modalities. The learning scheme is carried out through
collaborative activities in which students build mobile embedded sensing systems for various
environmental and biomedical applications. Inexpensive hands-on tools help students acquire
authentic experience without a physical lab setting. Through this labware training, students are
expected to demonstrate their ability to build mobile applications, construct embedded sensing
systems, and perform remote sensing in different applications. The project will be hosted in a
repository to ease dissemination to the whole academic community.
We have developed the pilot modules of this labware. As an example, Figure 1 shows the
repository page of the prototype design. The labware is comprised of modules that progress
from introductory mobile device programming to complete embedded sensing modalities. Six
modules have been developed so far, and each module contains three major components. The
"pre-lab" introduces concepts, background, and some activities for lab preparation. The "in-lab"
activities provide instructions for the required hands-on practice. The "post-lab" activities
include student add-on labs and open-ended projects. The labware will be delivered as an
integrated package and deployed on a Google site to provide a "ready-to-adopt" model.
Fig. 1 Different modules of the labware for mobile embedded system engineering education
Within these six modules, module 6, "Android apps with external environmental sensors," is the
one in which students work on a real-world problem. The sensor platform is built by integrating
an external dust sensor with a microcontroller, which communicates with an app on an
Android-based smartphone through a Bluetooth device. In this lab (module), students need
comprehensive knowledge of how to build an Android app, how to use the phone's wireless
communication to receive the signal from the microcontroller, and how to connect the dust
sensor to the microcontroller and program it to acquire data. To avoid a high financial burden,
we adopted an off-the-shelf dust sensor for this lab activity; the total cost of the whole platform
is around $100. After completing the lab activity, in the "post-lab" students can substitute
alternative sensors, such as CO, CO2, or CH4 gas sensors, for the dust sensor, further extending
their knowledge of real-world topics and increasing their hands-on experience. Figure 2 shows
the components of the environmental sensor platform used in this module activity, and a brief
code sketch of the Bluetooth link follows the figure.
Fig. 2 Components of the environmental sensor platform used in the module 6 activity: (a) Android
phone; (b) Bluetooth module connected to the MCU; (c) environmental dust sensor
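The paper does not include the lab's source code; the following minimal sketch (ours, not the authors') illustrates the kind of Bluetooth serial link students build in module 6. It assumes the microcontroller is already paired with the phone and streams one reading per line over the standard Serial Port Profile; the device address, line format, and class name are hypothetical.

// Minimal sketch (not the authors' code): reading dust-sensor lines sent by a
// microcontroller over a Bluetooth SPP link to an Android app. Requires the
// BLUETOOTH permission and should run off the UI thread in a real app.
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothSocket;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.UUID;

public class DustSensorReader {
    // Well-known UUID for the Bluetooth Serial Port Profile (SPP).
    private static final UUID SPP_UUID =
            UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

    /** Connects to the paired MCU (hypothetical MAC address) and logs readings. */
    public void readDustValues(String mcuMacAddress) throws IOException {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        BluetoothDevice mcu = adapter.getRemoteDevice(mcuMacAddress);
        BluetoothSocket socket = mcu.createRfcommSocketToServiceRecord(SPP_UUID);
        socket.connect();  // blocking call

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                // Assumed line format from the MCU, e.g. "DUST:123.4" in ug/m3.
                if (line.startsWith("DUST:")) {
                    double ugPerM3 = Double.parseDouble(line.substring(5).trim());
                    android.util.Log.i("LESSONS", "Dust concentration: " + ugPerM3);
                }
            }
        } finally {
            socket.close();
        }
    }
}

In the post-lab extension, only the parsing of the assumed line format would need to change when the dust sensor is replaced by a CO, CO2, or CH4 gas sensor.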
III. Preliminary evaluation
The prototype of the labware (the first five modules) was used, for a preliminary evaluation,
with students who participated in the NSF Peach State Louis Stokes Alliance for Minority
Participation summer research sessions at SPSU. Several students in this program worked on
mobile embedded system design and development projects led by the authors and learned the
material through the labware prototype. At the end of the summer program, these students were
asked to evaluate how effectively the labware improved their learning and to provide feedback
for further improvement.
Most feedback was positive and encouraging. Students were excited about the new learning
approach embodied in the labware modules and enjoyed working on a real-world problem with
the Android-based mobile sensing platform. They felt that their learning efficiency improved
dramatically when practicing the hands-on activities without the tight time constraints of a
normal lab setting. In addition, they were able to seize free moments to learn course material
through the mobile-enabled labware wherever they liked. Finally, they were confident and
motivated to work on more advanced topics and on various applications in mobile sensing
system development.
IV. Evaluation plan
For long-term evaluation of the learning approach, we have designed a comprehensive
qualitative and quantitative evaluation plan to assess project development and progress
periodically and to gather ongoing feedback from participants so that the project can be
improved. All evaluation criteria focus on implementation progress, the effectiveness of the
learning approach and materials, and the facilitation of faculty development with the new tool.
All evaluations will be performed using standard testing instruments, surveys, and interviews.
Qualitative Evaluation: Qualitative evaluations will be conducted to assess the effectiveness of
the labware in improving student learning and faculty participation in mobile learning
education. They will be documented with surveys, interviews, case studies, and analysis of the
evaluation data. One important assessment is the improvement in student knowledge of mobile
embedded system design with and without the assistance of our labware. A comparative
analysis of student instructional evaluations (SIR II) will also be examined thoroughly to
understand the labware's impact on students' learning behavior.
Quantitative Evaluation: Quantitative evaluations will include the number of modules
developed, the number of examples in each hands-on practice, the number of students involved
in the learning approach, and similar measures. Students in the class will be surveyed with a
Likert-type questionnaire (sample items are shown in Table 1), which will help us better
understand behavioral changes in students' learning interest. We will also use quantitative
measures to assess learning outcomes, such as average grades on exams, quizzes, and
assignments, and student satisfaction with the labware design, the pedagogical setting, and the
assessment method.
Table 1. Likert-type questionnaire items for the participating-student survey evaluation
(each item rated on a scale from 5 = Strongly Agree to 1 = Strongly Disagree)
I am interested in learning mobile embedded system design.
I have previously learned how to design an embedded system.
I was good at either hardware or software design for controlling embedded systems before this class.
I like to use the labware to learn class materials actively.
I finished all the hands-on examples in each module and understand how to solve each question.
I worked on the labware whenever I wanted to learn.
I like the challenge of all the hands-on questions.
I learned more by working on this labware.
I want to learn other engineering courses with the method used in this course.
V. Project implementation
Upon completion of the project, the labware will cover both basic and advanced mobile
embedded system design topics, including Java programming, wireless local area network
(WLAN) communication (e.g., WiFi, Bluetooth), hardware (sensors, actuators) design, real-time
software programming, and I/O interfacing. The labware is thus constituted of modules that can
be used as an integrated, sequential set of lab materials in a single embedded systems course, or
implemented as learning supplements for specific courses across different engineering curricula
by employing selected modules.
The authors are currently following the 2004 IEEE/ACM model curriculum8, redesigning the
curricula in electrical engineering, computer engineering, and software engineering, and
gradually implementing the developed labware in the related courses they offer. We are
pursuing a longitudinal implementation strategy to maximize the labware's influence on student
training. With this approach, we expect to substantially improve students' learning outcomes
and increase their proficiency in mobile embedded system design.
VI. References
1. C. Albanesius, "IBM predicts computers that embrace the 5 senses,"
http://www.pcmag.com/article2/0,2817,2413300,00.asp, Dec., 2012.
2. Y. Adu-Gyamfi, “Embedded systems in our everyday life,” http://tekedia.com/46019/embedded-systems-in-oureveryday-life/, Aug., 2012.
3. X. Hu, M. Wang, Y. Xu, and K. Qian, “Modular design and adoption of embedded system courseware with
portable labs in a box,” Proc. World Congress on Engineering and Computer Science (WCECS), 2012.
4. M. Agarwal, A. Sharma, P. Bhardwaj, and V. Singh, "A survey on impact of embedded system on teaching" MIT
International Journal of Electronics and Communication Engineering, vol. 3, no. 1, Jan. 2013, pp. 36-38.
5. Nielsen, M.L., Lenhert, D.H., Mizunol, M., Singh, G., Staver, J., Zhang, N., Kramer, K., Rust, W.J., Stoll, Q., and
Uddinl, M.S., “Encouraging interest in engineering through embedded system design”, ASEE Annual Conference
& Exposition, 2004.
6. K. Ma, M. Yang and K. Qian, “Contradistinction and relevant learning for transform processing with smartphones
in engineering education”, Proc. IEEE 13th International Conference on Advanced Learning Technologies,
Beijing, China, July 2013.
7. L. Hong, K. Qian and C. Hung, “Multi-faceted penetration of fast Fourier transform by interactively analyzing
real-world objects via mobile technology,” ASEE/IEEE Frontiers in Education Conference, Seattle WA, Oct.
2012.
8. Computing Curricula for Computer Engineering Joint Task Force, “Computer Engineering 2005: Curricular
Guidelines for Undergraduate Degree Programs in Computer Engineering.”, Oct. 2004.
Paper ID #9633
Work-in-Progress: A Novel Approach to Collaborative Learning in the Flipped
Classroom
Dr. Neelam Soundarajan, Ohio State University
Neelam Soundarajan is a faculty member in the Computer Science and Engineering Department at the
Ohio State University. His research interests include software engineering and engineering education.
Swaroop Joshi, The Ohio State University
Swaroop is a PhD student in Computer Science and Engineering at the Ohio State University. His interests
include a range of problems in software engineering as well as the use of technology in the classroom.
Dr. Rajiv Ramnath, Ohio State University
Dr. Rajiv Ramnath is Director of Practice at the Collaborative for Enterprise Transformation and Innovation (CETI), and an evangelist for AweSim, a consortium that seeks to bring high-performance computing based modelling and simulation to small and medium enterprises in the Midwest. He was formerly
Vice President and Chief Technology Officer at Concentus Technology Corp., in Columbus, Ohio, and
led product-development and government-funded R&D – notably through the National Information Infrastructure Integration Protocols program funded by Vice President Gore’s ATP initiative. He is now
engaged in developing industry-facing programs of applied R&D, classroom and professional education
and technology transfer. His expertise ranges from wireless sensor networking and pervasive computing to business-IT alignment, enterprise architecture, software engineering, e-Government, collaborative
environments and work-management systems. He teaches software engineering at OSU and is involved
in industry-relevant and inter-disciplinary curriculum development initiatives. Dr. Ramnath received his
Doctorate and Master’s degrees in Computer Science from OSU and his Bachelor’s degree in Electrical
Engineering from the Indian Institute of Technology.
© American Society for Engineering Education, 2014
Work-in-Progress: A Novel Approach to
Collaborative Learning in the Flipped Classroom
Abstract
The flipped classroom is widely regarded as an excellent approach to exploit the affordances of
digital and on-line technologies to actively engage students and improve learning. The traditional
lectures “covering” course content are moved to on-line videos accessible to students before the
class meetings, with the class meeting times being devoted mostly to discussion and application
of the new ideas, and other active learning tasks. The expectation has been that this will make the
courses much more effective and students will be able to achieve the intended course outcomes to
a much greater extent than in the traditional classroom. But the results have been disappointing.
Although students find the flipped classroom engaging, student achievement of course learning
outcomes, as reported by most researchers who have used the approach, has been roughly the
same as in traditional classes.
How do we tailor the flipped classroom to achieve its full potential? That is the question our
work-in-progress attempts to address. The thesis underlying our approach, based on classic work in the
area of how people learn, is that it is not enough to have students watch the on-line videos before
the class meeting. They should also engage in serious, structured discussions with other students
and thoughtfully consider ideas that may conflict with their own understanding of the topic in
question, both to help them develop a deeper understanding of the topic and to highlight problem
areas that need further elaboration by the instructor. We discuss the theoretical
basis behind the work, provide some details of the prototype implementation of an on-line tool that
enables such structured discussions, and describe our plans for using it in an undergraduate course
on software engineering and for assessing the approach.
1. Introduction
The most widely accepted definition of the flipped classroom is one where “events that have traditionally taken place inside the classroom now take place outside the classroom and vice versa”,
see, e.g., Lage et al. 1 . Thus the knowledge transfer that the traditional lecture tries to achieve is
instead intended to be achieved, typically, via on-line video lectures which the students are responsible for viewing before attending the in-person class meeting. The in-person meeting is devoted
to answering questions (that students may have based on their viewing of the corresponding video
lecture(s)), joint problem solving activities, as well as other active learning tasks that provide individual and group practice. The expectation is that, given the ability of active learning tasks to
engage students in learning, the approach will help students better achieve the intended learning
outcomes of the course; and, as an added bonus, students’ abilities with respect to such important
professional skills as team work and communication will also be improved.
A number of researchers have reported on their experiences with the approach of the flipped classroom, henceforth abbreviated FC. We will briefly summarize some of this work. Lage et al. 1
were among the earliest to use the approach of the flipped or inverted classroom. They used the
approach in an introductory economics course. They argue that a main problem with traditional
courses is that they do not allow the instructor to match the teaching style to the learning style of
the student; and that by posting the usual lectures as on-line videos that students would be responsible for viewing before class meetings, the instructor would be able to tailor the class meetings
to match the needs of particular groups of students. They report that students were satisfied
with the course and expressed a preference for this type of course over traditional courses. The
instructors were also satisfied with the inverted course. Lage et al. do not, however, report on how
the students in their inverted classroom performed with respect to achieving the intended learning
outcomes of the course compared to students in the traditional version of the course. Foertsch
et al. 2 , another group of early adopters of the flipped approach, report on their experiences with it
in a large, previously lecture-based, computer science course for engineering majors. The lectures
were converted to online videos that students were responsible for viewing before class meetings
to free up class time for working on problems similar to ones that were previously assigned as
homeworks. Students who took the course reported satisfaction with a number of aspects of the
course including, in particular, the ability to view/review the lectures as needed and on their own
schedule although a few felt that in-class lectures, being more “formal”, would have encouraged
them to pay fuller attention. Foertsch et al., like Lage et al., do not report on how the students
in their class performed in comparison to students in the traditional version of the course. Zappe
et al. 3 , similarly, report on student satisfaction with their FC course but do not say how the class
performed in comparison to a traditional version of the course.
Thomas and Philpot 4 present a detailed description of their experience, over several years, using
the FC approach in a course on the mechanics of materials. They present a detailed evaluation
of what seems to work and what needs improvement in terms of assessing which concepts in the
course students seem to have problems with, as well as what kinds of materials (on-line videos,
worked homeworks, etc.) the students seem to prefer to learn from. More importantly, from the
point of view of our work, they also present results comparing the performance of students in
the FC version of the course with students in the traditional course. They report that there were
“no statistically significant differences between the two groups . . . based on performance on the
common final exam”.
Redekopp and Ragusa 5 , while they report on student satisfaction with the FC approach in their
computer organization course, also focus specifically on comparing performance, with respect to
achieving intended course outcomes, of students in the FC version of the course with the performance of students in the traditional version. They divide the performance into two categories,
one with respect to “higher-order outcomes” which they identify as such abilities as team work in
projects, and the other with respect to “lower-order outcomes” which they identify with student
performance with respect to conceptual and technical knowledge as measured by performance in
course exams. They report that student performance in the course projects in the FC version of the
course was better than student performance in the traditional version by an average of 12 percent.
They also report that this improvement was not seen in one section of the FC version of the course;
and they attribute this to the fact that the instructor in that section “neglected to utilize modeling
and demonstration techniques . . . ”. This, of course, raises the question, which the authors do not
consider, of whether the performance of the students in the projects in the traditional version of
the course would have matched the performance in the other FC sections of the course if the instructors in the traditional course had used these techniques. In any case, the point remains that
the performance of students in the FC courses, at least with respect to conceptual and technical
knowledge, is no better than in the traditional courses.
We conclude this brief survey by considering the work of Swartz et al. 6 . They use a fairly broad
conception of FC on the basis of which they consider three different approaches. What is different
about their conception of FC, compared to those of some others, e.g., Bishop and Verleger 7 , is
that the course lectures do not necessarily have to be on-line; instead, students may be asked
to do relevant focused reading and review of worked problems before class meetings. The key
requirement is that the class meetings are mainly devoted, as in all models of FC, to active learning
tasks rather than lectures by the instructor. In their first approach, used in a small class on the
mechanics of materials, the lectures are in the form of videos; in addition, audio explanations are
added to the pdf files of class notes. In the second approach, used in a large class on environmental
engineering, the weekly lectures are in the form of 90-minute on-line videos, divided into 10-minute
chunks. In the third approach, used in a medium-sized class on structural design of foundations,
instead of lectures, students are expected to perform focused readings as noted above. In each case,
there is a “quiz” before the class meeting which allows the instructor to identify specific problems
that students might have in understanding the videos/readings, that can be addressed briefly at
the start of the class meeting. Much of the class meeting is devoted to working on individual or
group problem-solving activities. The authors report, for each of their approaches, benefits such
as students being better prepared for class and faculty having time to discuss applications and
develop deeper-level thinking (as well as time for guest lectures, field trips, etc.). They do not
present any information on the performance of students with respect to understanding course
concepts and technical knowledge.
The key question raised by the work summarized above, and by other work in the literature, is
this: why is student achievement of the learning outcomes related to concepts and technical
knowledge roughly the same as (and in some cases 5 even worse than) in traditional classes?
Given the increased level of active learning components in the FC, shouldn't the levels of
achievement, not just of outcomes such as team-working abilities but also of those related to
understanding the technical material in the course, be superior to those of students in the regular
versions of the same course? Or, to put it differently, how do we fine-tune the FC model to
improve achievement of outcomes related to the technical content of the FC courses? This is the
question that our work-in-progress tries to address.
In the next section, we outline the theoretical framework underlying our approach to FC to help
improve student performance with respect to conceptual and technical knowledge. The key idea,
as we will see, is to engage the students in on-line discussions, about the underlying technical
material, with each other in a carefully orchestrated manner. Widely accepted models of learning
stress the importance, to student learning, of detailed discussions with peers, especially when those
peers have conflicting ideas about the topic in question. Our approach extends the FC model to take
account of this important notion to improve the quality of learning. In Section 3, we will present
details of our prototype implementation of the approach in the form of an on-line tool, CONSIDER,
that will be available on smart phones, etc., as well as on the web. In Section 4, we briefly describe
our plans for using the approach in a course on software engineering; and for assessment of the
approach. Section 5 concludes the paper by summarizing why our approach has the potential to
significantly enhance the effectiveness of the FC model with respect to student achievement of
course outcomes related to conceptual and technical knowledge.
2. Theoretical Framework
The main thesis underlying our work is that in order to exploit the full potential of the FC model,
it is not sufficient to use the class meeting times, freed up by having students access on-line videos
(and other materials) of the lectures prior to the class meetings, in various active learning tasks such
as problem solving and project work; the thesis is that we must also use the capabilities of on-line
systems to have students engage in serious discussions with their peers about specific questions
related to the technical content of the on-line videos.
Over the years, a number of researchers have investigated the importance of interactions among
students in order to best enable learning. Bishop and Verleger 7 summarize results from many of
these researchers’ investigations. For our work, a key notion is that of cognitive conflict, introduced
by Piaget 8 . Although Piaget was concerned mainly with the intellectual growth of children, his
ideas are very relevant for adult learners as well, including undergraduate engineering students. A
key point in Piaget’s theory was that peer interaction was a potent component of a learner’s grasp of
new concepts; in particular, cognitive conflict, i.e., disagreements with other learners’ conception
of the same problem or topic was fundamental since it highlighted alternatives to the learner’s own
conception. The learner is forced to consider and evaluate these alternatives on equal terms. This
is quite different from a teacher telling a learner that his or her conception is incorrect because
then, given the authority of the teacher, the learner simply accepts this without critical evaluation.
By contrast, when the (cognitive) disagreement is with peers, the learner is forced to evaluate the
alternatives critically and pick one after careful deliberation (although, naturally, how critical this
evaluation is will depend on the level of maturity of the learner). As Howe and Tolmie 9 put it,
“conceptual growth depends on equilibration, that is the reconciliation of conflicts between prior
and newly experienced conceptions.” A distinction between Piaget and the work of Vygotsky 10
is that the latter stresses the importance of a “more competent other” in the interaction. In other
words, according to Vygotsky, interaction among peers is most fruitful when one of the members of
the group is more competent than the others. Interestingly, while some researchers have confirmed
the importance of Vygotsky’s “more competent other”, the results of other researchers suggest
that what seems to matter most is the cognitive conflict that a student experiences because of
disagreements with other students’ conception of the same problem or topic. It is indeed possible
that the importance of having a somewhat more competent peer in the group may depend on the
level of development of the learners. Since much of the research that has been carried out thus far
has focused on relatively young children, it may be useful to investigate this question carefully as
it applies to groups of undergraduate engineering students. As we will see later in the paper, our
system is designed to allow us to do so.
We should also mention a more general framework that concerns the importance of interaction
among students to ensure effective learning. Fig. 1 depicts the Community of Inquiry (CoI) model
created by Swan and Ice 11 . The CoI was designed for analyzing on-line educational systems.
Fig. 1. The Community of Inquiry (CoI) model: overlapping social presence, cognitive presence,
and teaching presence (structure/process) together constitute the educational experience.
But it is also appropriate for learning environments that are partly face-to-face and partly online. The three principal elements of the CoI model are social presence, cognitive presence and
teaching presence. Social presence may be defined as the degree to which participants in the
learning environment feel affectively connected one to another; cognitive presence represents the
extent to which learners are able to, via interactions with each other, construct and refine their
understanding of important ideas through reflection and discussion; and teaching presence is the
design of various instructional activities such as lectures as well as activities intended to facilitate
interactions among students to help their learning. In terms of the CoI framework, the focus of our
work is to effectively use on-line systems to improve the cognitive presence component in the FC
approach. The social presence component, as we will see later, raises interesting questions that we
plan to investigate in our work.
3. Approach and Prototype System
For the last several years, we have used a flipped approach in our undergraduate junior/senior-level
course on software engineering. Our approach has some aspects in common with the approaches
described in the papers we briefly considered in Section 1. Lectures are made available as online videos which are typically 20 minutes in length. Students are expected to watch the relevant
video(s) prior to the class meeting. The class meeting starts with a 15-minute quiz on the topic; below, we consider a typical quiz from the course. Students may post questions about the topic on the
class's electronic forum before the class meeting, and the instructor or grader answers them; typically,
however, no students post any questions. Following the quiz, the instructor presents a brief
summary of the topic, focusing on the highlights. Then the instructor asks students what choice
they made for a particular question on the quiz, picks an individual student, and asks him/her to
explain the choice; other students who picked other choices are then asked to explain their choices.
The intent is that the resulting discussion will help address any misconceptions that students may
have about the topic. But there is no further follow-up; so it is unclear how effective this is in
helping students develop a good understanding of the topic.
The main purpose of our software engineering course, like that of similar courses elsewhere, is
to help students understand the importance of a systematic approach to understanding the overall
domain in which the software system to be built is intended to operate, understand the problem, in
the context of the domain, that the software system will help address, and the solution approach
to be adopted in the software system. Quite often, however, students want to jump straight into
designing and coding the software system without going through a careful analysis of the domain,
the problem in the context of the domain, etc. Indeed, frequently there is confusion between the
domain problem and specific algorithmic or data-structure related problems that might be encountered when developing the software system. The quiz below is intended to help identify such
misunderstandings:
Quiz 6: Your team has been asked to build a campus wayfinding system to help visually impaired
students at OSU. Several items identified during analysis are listed below. Identify which category of analysis (domain, problem, or solution) each element falls under.
Briefly explain why.
1. A catalog of the various types of buildings on a college campus;
2. The list of hard-to-find buildings on campus;
3. The range of visual and cognitive impairments that people suffer from;
4. Strategies by which people find their way in an unknown area, such as asking passersby or identifying major streets.
Item (3) is especially interesting. A casual reading might suggest that it should be classified under
the problem category. But, in fact, it is part of the domain because it provides information about
the overall range of impairments (including cognitive impairments) that people suffer from; the
software system is not intended to solve the problem of visual impairments (e.g., by developing an
artificial eye or something along those lines).
Different students come up with different answers to that item and with different justifications.
While the class discussion helps clarify the issues for some students, many others remain unclear
about the distinction between the concepts of domain, problem, and solution. How do we help the
students overcome the underlying misconceptions? Based on the theoretical framework outlined in
the last section, in particular on the basis of Piaget’s notion of cognitive conflict, a good approach
would be to divide the class into small groups of 3 or 4 students each, and have them discuss
the problem and convince each other of their point of view. But such a discussion cannot take
place in class. A key reason is that the students in a group need time to mull over the arguments
of their peers, especially of those peers whose opinions they disagree with, in order to convince
themselves of the validity or invalidity of those arguments. They need time to present their
counterarguments, and the other students, in turn, need time to think about the validity of those
counterarguments. Moreover, a class discussion would be ephemeral and there would be no record
of it that the students in the group could refer back to at a later point to remind themselves of the
arguments and counterarguments.
The goal of our work, then, is to implement a system that enables engineering students in a
flipped classroom to engage in deep discussions with their peers, especially peers whose ideas
conflict with their own, about the concepts and technical details that are the subject of a given
video lecture. In more detail, in preparing for a class meeting, each student in the class is required
to individually watch the corresponding video lecture(s). The student is then required to
electronically submit answers to a quiz posted to the course's website. The quizzes will be
analogous to the example quiz considered above; that is, each question will require the student to
make a specific choice (such as "domain," "problem," or "solution") and, in addition, to include a
justification of his or her choice. Once the students have submitted their answers by a specified
time, the system will automatically form heterogeneous groups of 4 or 5 students, with each
group containing students who chose different answers (see footnote 1 below); a sketch of one
possible grouping procedure follows this paragraph.
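The paper does not specify the grouping algorithm; the following is a minimal sketch of one straightforward way such heterogeneous groups could be formed (bucket students by their quiz choice, then deal them round-robin into groups of the target size). Class and method names are hypothetical.

import java.util.*;

/** Hypothetical sketch of heterogeneous group formation for CONSIDER-style quizzes. */
public class GroupFormer {

    /** Forms groups of roughly targetSize, mixing students who chose different answers. */
    public static List<List<String>> formGroups(Map<String, String> answerByStudent,
                                                int targetSize) {
        // Bucket students by the answer they chose (e.g., "domain", "problem", "solution").
        Map<String, Deque<String>> buckets = new LinkedHashMap<>();
        answerByStudent.forEach((student, answer) ->
                buckets.computeIfAbsent(answer, a -> new ArrayDeque<>()).add(student));

        int groupCount = Math.max(1, answerByStudent.size() / targetSize);
        List<List<String>> groups = new ArrayList<>();
        for (int i = 0; i < groupCount; i++) groups.add(new ArrayList<>());

        // Deal students round-robin, cycling through the answer buckets so that
        // each group receives students who picked different answers.
        int g = 0;
        boolean remaining = true;
        while (remaining) {
            remaining = false;
            for (Deque<String> bucket : buckets.values()) {
                if (!bucket.isEmpty()) {
                    groups.get(g % groupCount).add(bucket.poll());
                    g++;
                    remaining = true;
                }
            }
        }
        return groups;
    }

    public static void main(String[] args) {
        Map<String, String> answers = Map.of(
                "S1", "domain", "S2", "problem", "S3", "solution",
                "S4", "domain", "S5", "problem", "S6", "domain",
                "S7", "solution", "S8", "problem");
        System.out.println(formGroups(answers, 4));
    }
}

When most students choose the same answer, this dealing scheme degenerates, which is exactly the situation footnote 1 below addresses by having the instructor or grader intervene.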
Each student in a group will receive an e-mail indicating that he/she has been assigned to a group
and should start engaging in an electronic discussion on the topic with the group; the e-mail will
provide a link to the particular group’s forum for this discussion. In order to encourage free-flowing
discussion, the students in a group will not know who the other students in the group are. Instead,
students in the group will simply be identified as S1, S2, etc. Let us assume that a given group
has four students, S1 through S4. At the start of the discussion, the initial posts in the forum will
be the answers submitted by each of S1 through S4 in response to the question in the quiz. As
the discussion proceeds, each student will be expected to argue in favor of or against the ideas in
the posts that have been made thus far. A student will have three distinct ways to react to a given
post. The student could respond by creating a completely new post; the student could indicate that
he/she supports the position expressed in the given post and provide an explanation why; or the
student could indicate a conflict with the position in the given post and provide an explanation as
to why.
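CONSIDER's internal data model is not described in the paper; the sketch below illustrates one simple way the post and reaction structure just described could be represented, with hypothetical class and field names.

import java.util.ArrayList;
import java.util.List;

/** Hypothetical data model for a CONSIDER-style discussion post. */
public class DiscussionPost {
    public enum Reaction { NEW, SUPPORT, CONFLICT }

    private final String authorAlias;      // anonymized, e.g. "S1", "S2"
    private final String text;             // the student's argument or explanation
    private final Reaction reaction;       // how this post relates to its parent
    private final DiscussionPost parent;   // null for an initial quiz-answer post
    private final List<DiscussionPost> replies = new ArrayList<>();

    public DiscussionPost(String authorAlias, String text,
                          Reaction reaction, DiscussionPost parent) {
        this.authorAlias = authorAlias;
        this.text = text;
        this.reaction = reaction;
        this.parent = parent;
        if (parent != null) parent.replies.add(this);
    }

    /** Number of replies that conflict with this post. */
    public long conflictCount() {
        return replies.stream().filter(r -> r.reaction == Reaction.CONFLICT).count();
    }

    /** Number of replies that support this post. */
    public long supportCount() {
        return replies.stream().filter(r -> r.reaction == Reaction.SUPPORT).count();
    }
}

The support and conflict counts shown later in Figure 2 could then be computed directly from supportCount() and conflictCount().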
The students in the class will be required to submit their original answers to the quiz approximately
4 days before the class meeting where the topic will be discussed. Within 24 hours of that
submission, each student will receive an e-mail from the system indicating his or her group
assignment, with a link to the discussion forum for that particular group. During the next 48 hours,
the group will be expected to engage in their discussion. At the end of that period, the discussion
forum will no longer accept new posts, but the students in the group will still be able to read all the
posts. At that point, each student will be required to individually submit a three-part report
consisting of: a summary of the starting position of each student in the group; a summary of the
discussion/debate in the group and any conclusions that were reached; and the particular student's
final answer (which may or may not be the same as the group's conclusion). The quality of this
report, including especially the summary of the discussion, will contribute toward the student's
grade for the quiz. The report will not be made available to the other students in the group until
after all students in the group have submitted their reports (see footnote 2). These reports, which
the instructor will have available about 24 hours prior to the class meeting, will give the instructor
detailed information about common misconceptions, how the students tried to resolve them, and
what issues remained after the discussions. The timeline for the activity is tentative and will be
adjusted as we gain experience with the system.

Footnote 1: It is possible that, for a particular quiz, most students choose the same set of answers.
In that case, the system will not be able to form such heterogeneous groups automatically, and the
instructor or the grader will have to intervene to form suitable groups on the basis of differences in
the students' explanations for their answers. If there are no substantial differences in the
explanations either, that may itself be informative: if most students pick the right answer and give
the right explanations, the topic is probably simple and the instructor can move on to the next
topic; if most students get the answers or explanations wrong, the topic may be too difficult, which
may indicate that additional video lectures and other resources are called for.
Figure 1: Initial screens: (a) Login; (b) Quiz
Since many students use smartphones regularly, we are implementing our system to be accessible
both on smartphones and via the desktop. Figure 1 displays the initial screens as seen on an
Android device. The login screen is standard and authenticates the user. Once the student has
logged in, he/she will see the current quiz, as in Fig. 1(b). The student will then be able to submit
an answer, along with an explanation, for each part of the quiz. The system is named CONSIDER,
an acronym for CONflicting Student IDeas Explored and Resolved.

Footnote 2: It is not clear that the report should be available even after all students in the group
have submitted their reports. The question is, what effect does having or not having access to the
other students' reports have on the learning of an individual student? This is a question we plan to
explore.
In Fig. 2, we present the screens seen during the discussion. Figure 2(a) shows a post made by
student S4. The red block indicates that there are 3 posts that conflict with this post; i.e., since the
time that S4 made this post, other students in the group have made three posts that conflict with
the position of this post; the green block on the right similarly indicates that there are two posts
that support this post. Clicking on the “SUPPORTING POSTS” tab on the top right (partially
obscured) will bring up the supporting posts, seen in Fig. 2(c); clicking on the “CONFLICTING
POSTS” tab on the top left (partially obscured) will bring up the conflicting posts, seen in Fig. 2(b).
A student in the group can read any of these posts at any time and can respond by creating a new
post (which brings up a screen that allows the student to specify whether the new post is
supporting or conflicting), report a conflict with an existing post, or report support for an
existing post.
Figure 2: A post with its conflicting and supporting posts: (a) initial post; (b) conflicting posts; (c) supporting posts
As indicated earlier, the final report that each student will be required to submit individually (we
have not shown the screens for that step) will be available to the instructor about 24 hours prior to
the class meeting and will give the instructor detailed information about common misconceptions,
how the students tried to resolve them, and what issues remained after the discussions.
4. Assessment
We will briefly summarize our plans for using the CONSIDER system and assessing the approach.
Several sections of the software engineering course are typically offered each semester. The FC
approach is used in all the sections of the course. In order to evaluate the system to improve it
and in order to assess the effectiveness of the approach, we will compare performance of students
in two sections of the course, one that uses the CONSIDER system and one that does not. We
expect that the students in the section using the system will indeed perform better than the control
group in terms of the grades they receive in final examination questions related to topics discussed
in the quizzes in both sections of the course. But there are additional questions that we need to
consider. As we noted earlier, the Community of Inquiry framework suggests that social presence
is an important component of learning in student groups. We intend to explore this by comparing
student performance in quizzes in which students in each group in the CONSIDER system know
each others’ identities and are encouraged to interact socially either on-line or in person with their
performance in quizzes in which students in each group in the CONSIDER system are anonymous
and know each other as “S1” or “S2”, etc. This part of our work will allow us to assess exactly
how important the social presence component of the CoI framework is.
There are still more issues to be explored. As noted earlier, Vygotsky's theory of learning suggests
that discussion in a group of students is most effective when there is a "more competent other" in
the group. And, indeed, the students in the group, according to this theory, should know who the
competent other is. On the other hand, Piaget’s theory suggests that students in a group of peers
learn as long as they experience cognitive conflicts because of differing ideas from their peers
about the concept being learned; and it is not necessary that any member of the group be more
competent than the others. Our work will enable us to investigate this question carefully. We will
seed some groups with contributions from the course grader who is the “competent other”; and
will compare the performance of students in such groups with students in other groups that do not
have such members. Our system and approach will, in fact, allow us to research a number of other
similar questions. And since all the posts made by students in the various groups, along with the
order in which they were made, will be available, we expect to have a rich source of data to answer
these questions.
5. Conclusion
McClelland 12 presents the experience of using the FC approach in a large-enrollment fluid
mechanics course. Interestingly, students in this course did worse than students in a traditional
version of the course! The author also reports that the students did not watch the assigned videos, etc.
We believe that the FC approach has a lot of potential to improve student learning, not just by
freeing up class time to spend on activities that contribute to soft skills such as teamwork,
but also by improving the extent to which individual students attain the course outcomes related to
the technical content of the course. But in order to reach this potential, it is important to go beyond
what has been done so far in most FC classrooms. In particular, it is necessary to engage small
groups of students in deep discussions about the technical material; and to organize these groups
on the basis of well understood theoretical principles. Our approach, and the CONSIDER system
are designed to do that. We plan to use our approach initially in our software engineering course;
and, over time, in other courses in our curriculum.
References
[1] M Lage, G Platt, and M Treglia. Inverting the classroom: A gateway to creating an inclusive
learning environment. Journal of Economic Education, 31(1):30–43, 2000.
[2] J Foertsch, G Moses, J Strikwerda, and M Litzkow. Reversing the Lecture/Homework
Paradigm Using eTEACH Web-based Streaming Video Software. Journal of Engineering
Education, 91(3):267–274, 2002.
[3] S Zappe, R Leicht, J Messner, T Litzinger, and H Lee. Flipping the classroom to explore
active learning in a large undergraduate course. In Proc. of ASEE Annual Conf., pages 1–21.
ASEE, 2009.
[4] J Thomas and T Philpot. An inverted teaching model for a mechanics of materials course. In
Proc. of ASEE Annual Conf., pages 1–25. ASEE, 2012.
[5] W Redekopp and G Ragusa. Evaluating flipped classroom strategies and tools for computer
engineering, Paper ID #7063. In Proc. of ASEE Annual Conf., pages 1–18. ASEE, 2013.
[6] B Swartz, S Velegol, and J Laman. Three approaches to flipping CE courses: Faculty perspectives and suggestions, paper id #7982. In Proc. of ASEE Annual Conf., pages 1–18. ASEE,
2013.
[7] J Bishop and M Verleger. The flipped classroom: A survey of the research, Paper ID #6219.
In Proc. of ASEE Annual Conf., pages 1–17. ASEE, 2013.
[8] J Piaget. The early growth of logic in the child. Routledge and Kegan Paul, 1964.
[9] C Howe and A Tolmie. Productive interaction in the context of computer-supported collaborative learning in science. In Learning with computers, pages 24–46. Routledge, 1999.
[10] L Vygotsky. Mind in society: The development of higher psychological processes. Harvard
University Press, 1978.
[11] K Swan and P Ice. The community of inquiry framework ten years later. Internet and Higher
Education, 13:1–4, 2010.
[12] C McClelland. Flipping a large-enrollment fluid mechanics course: Is it effective?, Paper ID
#7911. In Proc. of ASEE Annual Conf., pages 1–9. ASEE, 2013.
Paper ID #10088
Work-in-Progress: The Platform-Independent Remote Monitoring System
(PIRMS) for Situating Users in the Field Virtually
Mr. Daniel S. Brogan, Virginia Tech
Daniel S. Brogan is a PhD student in Engineering Education with BS and MS degrees in Electrical Engineering. He has completed several graduate courses in engineering education pertinent to this research.
He is the key developer of the PIRMS and leads the LEWAS lab development and implementation work.
He has mentored two NSF/REU Site students in the LEWAS lab. He assisted in the development and
implementation of curricula for introducing the LEWAS at VWCC including the development of pre-test
and post-test assessment questions. Additionally, he has a background in remote sensing, data analysis
and signal processing from the University of New Hampshire.
Dr. Vinod K Lohani, Virginia Tech
Dr. Vinod K Lohani is a professor in the Engineering Education Department and an adjunct faculty
in the Civil and Environmental Engineering at Virginia Tech. His research interests are in the areas of
sustainability, computer-supported research and learning systems, hydrology, and water resources. In a
major ($1M+, NSF) curriculum reform and engineering education research project from 2004 to 2009,
he led a team of engineering and education faculty to reform engineering curriculum of an engineering
department (Biological Systems Engineering) using Jerome Bruner’s spiral curriculum theory. Currently,
Dr. Lohani leads an NSF/REU Site on ”interdisciplinary water sciences and engineering” which has
already graduated 56 excellent undergraduate researchers since 2007. This Site is renewed for the third
cycle which will be implemented during 2014-16. He also leads an NSF/TUES type I project in which a
real-time environmental monitoring lab is being integrated into a freshman engineering course, a senior-level Hydrology course at Virginia Tech, and a couple of courses at Virginia Western Community College,
Roanoke for enhancing water sustainability education. He is a member of ASCE and ASEE and has
published 70+ refereed publications.
Dr. Randel L. Dymond, Virginia Tech
Dr. Randy Dymond is an Associate Professor of Civil and Environmental Engineering at Virginia Tech.
With degrees from Bucknell and Penn State, Dr. Dymond has more than 30 years of experience in academics, consulting, and software development. He has taught at Penn State and the University of Wisconsin-Platteville, and has been at Virginia Tech for 15 years. Dr. Dymond has published more than 50 refereed
journal articles and proceedings papers, and been the principal or co-principal investigator for more than
110 research proposals from many diverse funding agencies. His research areas include urban stormwater
modeling, low impact development, watershed and floodplain management, and sustainable land development. Dr. Dymond has had previous grants working with the Montgomery County Public Schools and
with the Town of Blacksburg on stormwater research and public education. He teaches classes in GIS,
land development, and water resources and has won numerous teaching awards, at the Departmental,
College, and National levels.
© American Society for Engineering Education, 2014
Work-in-Progress:
The Platform-Independent Remote Monitoring System
(PIRMS) for Situating Users in the Field Virtually
Abstract: A recent report on Challenges and Opportunities in the Hydrologic Sciences by the
National Academy of Sciences states that the solutions to the complex water-related challenges
facing society today begin with education. Given the increasing levels of integration of
technology into modern society, how can this technology best be harnessed to educate people at
various academic levels about water sustainability issues? The Platform-Independent Remote
Monitoring System (PIRMS) interactively delivers integrated live and/or historical remote
system data (visual, environmental, geographical, etc.) to end users regardless of the hardware
(desktop, laptop, tablet, smartphone, etc.) and software (Windows, Linux, iOS, Android, etc.)
platforms of their choice. The PIRMS accomplishes this via an HTML5-driven web-interface.
One of the strengths of such a design is the idea of anywhere, anytime access to live system data.
In this research, weather and water quantity and quality data and time-stamped imagery from the
LabVIEW Enabled Watershed Assessment System (LEWAS) have been integrated with local
geographical data in the PIRMS environment in order to situate users within a small urban
watershed virtually. Previous studies using exposure to the LEWAS showed increased levels of
student motivation. The current research investigates increases in student learning related to
water sustainability topics. Bloom’s Revised Cognitive Taxonomy is used to link components of
PIRMS to water sustainability topics on different learning levels. Using the framework of
situated learning, longitudinal true-experimental and pre-test-post-test quasi-experimental
designs are applied to students in a senior level undergraduate course and freshmen engineering
community college courses, respectively, in order to compare student learning from physical
field visits, virtual field visits via PIRMS and/or virtual field visits via pre-recorded videos. In
addition to these physical and/or virtual field visits, all students are given LEWAS imagery files
and measurement data in spreadsheet formats. Pre- and post-test assessments entail the students
writing narrative responses to prompts. These narrative responses are assessed using rubrics to
look for increases in student learning. Preliminary results are presented. This work is ongoing.
Introduction
A recent report on Challenges and Opportunities in Hydrologic Sciences by the National
Academy of Sciences states that the solution to the complex water-related challenges facing
society today begins with education.1 The realization of the need to educate people about water
sustainability is not new. At least as far back as 1974, there was a realization that water quality
was difficult for people to describe.2 Around the same time, various indices of water quality were
developed to help quantify water quality in a way that could be more easily understood.3 More
recently, Covitt, Gunckel and Anderson assessed students’ understanding of water quantity and
quality relationships in both natural and man-made hydrologic systems for students in grades 3-12.4 By coding5 a subset of student results, they developed a rubric that was used to assess a
random sample of other students’ work. They determined that water literacy is not sufficiently
taught in schools, and recommended that, “Instruction should first address the structure and
movement of water and other substances in individual systems, and then it should gradually
move toward building connections among these systems to help students develop deep,
meaningful understanding.” 4 p. 50 This progressive instructional approach suggests the
implementation of a spiral curriculum.
Given that the sustainability of water resources is one of the major engineering challenges facing
us in this century6 and that humans play a major role (for both good and bad) in this process1, it
is vital that students on every level are exposed to this challenge. Spiral curricula7-9 allow for
water sustainability education to be integrated into academic programs by introducing
increasingly difficult water sustainability concepts as students progress academically. However,
prior to this integration, it is imperative to determine how students at various academic levels are
best able to learn such material.
In this research, we focus on students at the freshman and senior undergraduate levels. At these
levels, Armstrong and Bennett proposed MoGeo (mobile computing in geographic education) to
integrate mobile computing technology and field visits in order to bring geospatial capabilities to
the field using location-aware mobile computers.10 Iqbal supplemented classroom learning for
senior-level hydrology students by having them visit on-campus and off-campus habitats and
analyze the chemical, biological and hydrological characteristics of various water samples.11
Habib et al. discuss the use of HydroViz, a “web-based, student-centered, educational tool
designed to support active learning in the field of Engineering Hydrology.”12 p. 3778 They integrate
geospatial, in-situ and model-generated data in a “highly-visual and interactive” web-based
interface with the goal of creating “authentic and hands-on inquiry-based activities that can
improve students’ learning.” 12 p. 3771 However, their study investigates only water quantity rather
than water quality and its relationships to water quantity as is essential for water sustainability
education. They found that student learning of hydrologic concepts was impacted by the learning
environment and that using HydroViz increased students’ motivation.
Two themes emerge from these studies. One is the desire to provide students with a more
authentic learning experience by exposing them, either physically or virtually, to the physical
environments where their theory becomes practice. The other entails the utilization of
technological advances in order to integrate this exposure into the students' learning experiences.
This leads to the question, “Given the increasing levels of integration of technology into modern
society, how can this technology best be harnessed to educate people at various academic levels
about water sustainability issues?”
The present research, developed by an interdisciplinary team of faculty and graduate students
from Virginia Tech (VT) and two community colleges in Virginia (i.e., Virginia Western
Community College (VWCC) and John Tyler Community College (JTCC)), examines the
potential of a Platform-Independent Remote Monitoring System (PIRMS) in water sustainability
education for students pursuing various academic pathways within engineering.13 The PIRMS
uses real-time (delivering data to end users within a few seconds), high-temporal-resolution
(sampling at least once every three minutes) water quantity, quality and weather data from a
small urbanized watershed to generate various water sustainability learning scenarios in a
platform-independent environment. The research is accomplished by deploying the PIRMS into
courses at VT, VWCC and JTCC. Before discussing the development and classroom
implementation of the PIRMS, we briefly discuss our prior work that has led to the development
of the PIRMS.
The LabVIEW Enabled Watershed Assessment System (LEWAS)
The LabVIEW Enabled Watershed Assessment System (LEWAS) was developed starting in
2009 as a practical implementation of LabVIEW for use in a large freshman-level engineering
course at VT.14-15 The LEWAS is a unique real-time water and weather monitoring system which
is installed at the outlet of a water quality impaired creek that flows through the campus of VT.
The watershed measured at the LEWAS field site contains about 2.78 km2 with approximately
95% urban/residential land use. This creek was chosen as the site of the lab because of its
location and its environmental significance. This creek was found to be benthically impaired for
8 km starting at the outfall of the pond immediately below the LEWAS field site. Some of the
stressors of the stream include sedimentation, urban pollutants, increased development, and
stream channel modifications.16 Examples of stressors include specific conductivity rising from a
normal range of 600-800 µS/cm to nearly 5000 µS/cm during a winter storm salt wash17, turbidity
in the stream ranging from 0 to 450 NTU, flow varying from 0.02 m3/s at base flow to a peak
flow of 13.2 m3/s on July 3, 2013, and water temperature jumping 4.7 deg. C in 3 minutes on
July 23, 2012.
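To make the kind of event detection implied by these stressor examples concrete, the short Python sketch below flags samples whose specific conductivity rises far above the reported 600-800 µS/cm base-flow range. The threshold rule, baseline window, and function name are illustrative assumptions and are not part of the LEWAS software.

    # Minimal sketch (not part of the LEWAS codebase): flagging a possible
    # road-salt wash event from specific-conductivity readings, using the
    # normal base-flow range of 600-800 uS/cm reported above. Threshold and
    # window choices here are illustrative assumptions.
    from statistics import mean

    NORMAL_HIGH_US_CM = 800     # upper end of the reported base-flow range
    SALT_WASH_FACTOR = 3        # assumed multiplier marking an acute event

    def flag_salt_wash(conductivity_us_cm):
        """Return indices where conductivity jumps far above base flow."""
        baseline = mean(conductivity_us_cm[:10])   # assume first readings are base flow
        threshold = max(NORMAL_HIGH_US_CM, baseline) * SALT_WASH_FACTOR
        return [i for i, c in enumerate(conductivity_us_cm) if c > threshold]

    # Example: a storm pushes conductivity toward the ~5000 uS/cm level noted above.
    readings = [650, 700, 680, 720, 690, 710, 705, 695, 700, 688,
                900, 2400, 4800, 4950, 3100, 1500, 900, 750]
    print(flag_salt_wash(readings))   # indices of the spike samples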
The LEWAS has sensors to measure water quality and quantity data including flow rate, depth,
pH, dissolved oxygen, turbidity, oxidation reduction potential, total dissolved solids, specific
conductivity, and temperature. In addition, weather parameters (temperature, barometric
pressure, relative humidity, precipitation and wind speed and direction) are measured at the
LEWAS outdoor site. All of these environmental parameters can be accessed by remote users in
real-time through a web-based interface for education and research. The LEWAS is solar
powered and uses the campus wireless network through a high-gain antenna to transmit data to
remote clients in real-time. This lab has provided research opportunities to a number of graduate
and undergraduate students, and to date 1 PhD, 3 MS, and 10+ undergraduate researchers have
graduated from this lab. In addition, this lab has had 5 NSF/REU students. Currently, 3 PhD
students, 1 MS, and 4 undergraduate students work in this lab.
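As a purely illustrative way to picture the measurement stream described above, the sketch below models one LEWAS observation as a Python dataclass containing the water-quantity, water-quality, and weather parameters listed in this section; the field names and units are assumptions rather than the lab's actual schema.

    # Sketch of one way to represent a single LEWAS observation in code.
    # Field names and units are illustrative assumptions, not the lab's schema.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class LewasObservation:
        timestamp: datetime
        # water quantity and quality
        flow_rate_m3_s: float
        depth_m: float
        ph: float
        dissolved_oxygen_mg_l: float
        turbidity_ntu: float
        oxidation_reduction_potential_mv: float
        total_dissolved_solids_mg_l: float
        specific_conductivity_us_cm: float
        water_temperature_c: float
        # weather
        air_temperature_c: float
        barometric_pressure_hpa: float
        relative_humidity_pct: float
        precipitation_mm: float
        wind_speed_m_s: float
        wind_direction_deg: float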
To study the educational applications of the LEWAS, an observational study was conducted as
the system was gradually introduced to engineering freshmen at VT between 2009 and 2012.14, 18
Positive student attitudes on the role of the LEWAS to enhance their environmental awareness
led to an experimental design which was implemented to study the motivational outcomes
associated with the system. Accordingly, appropriate educational interventions and a hands-on
activity on the importance of environmental monitoring were developed for both control and
treatment groups, with only the latter given access to the LEWAS to retrieve the environmental
parameters for the activity. An instrument was developed on the theoretical foundation of the
expectancy value theory19-20 of motivation and was administered to control and experimental
groups in the course. Altogether, 150 students participated in the study. After conducting
parametric and nonparametric statistical analyses, it was determined that providing real-time
access to environmental parameters can increase student interest and their perception of the
feasibility of environmental monitoring – both major components of motivation to learn about
the environment.19, 21
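The paper does not specify which parametric and nonparametric tests were applied, so the following sketch only illustrates one common nonparametric option (a Mann-Whitney U test from SciPy) for comparing control and treatment interest scores; the scores shown are invented placeholders, not study data.

    # Illustrative sketch only: one common nonparametric comparison of
    # control vs. treatment Likert-style interest scores. The data below
    # are fabricated placeholders for demonstration.
    from scipy.stats import mannwhitneyu

    control_interest = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3]      # placeholder data
    treatment_interest = [4, 5, 4, 4, 5, 3, 5, 4, 4, 5]    # placeholder data

    stat, p_value = mannwhitneyu(control_interest, treatment_interest,
                                 alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p_value:.3f}")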
Motivated by the outcome of PhD research discussed above, the LEWAS was incorporated into a
senior level hydrology course at VT and an introductory engineering course at VWCC. Results
of pre-and post-tests in both courses show positive learning gains, and students’ blogs22-23 show
their active participation in the LEWAS-based water sustainability learning modules.24 This prior
LEWAS research provided motivation for the development of the PIRMS.
The Platform-Independent Remote Monitoring System (PIRMS)
Development of the Platform-Independent Remote Monitoring System (PIRMS) was undertaken
with the goal of interactively delivering integrated live and/or historical remote system data
(visual, environmental, geographical, etc.) to end users regardless of the hardware (desktop,
laptop, tablet, smartphone, etc.) and software (Windows, Linux, iOS, Android, etc.) platforms of
their choice. The PIRMS accomplished this via an HTML5-driven web-interface, as discussed
below. One of the strengths of such a design is the idea of anywhere, anytime access to live
system data. Another strength is the graphical and visual integration of the data that virtually
situates the user at the remote measurement site. The PIRMS addresses four shortcomings of the
LEWAS system: 1) it adds the ability to use historical data, 2) it does not require users to install
the LabVIEW runtime engine, 3) it does not crash when accessed simultaneously by a large
number of users and 4) it virtually situates users at the LEWAS field site.
The PIRMS was developed via the storyboarding process25 (including the development of a
process book and a design document) as an open-ended learning environment.26 Figure 1 shows
the site map view of the PIRMS storyboard in the context of the LEWAS. The camera icon in
the upper right allows the user to capture the current view as an image for later use. The
SPLASH SCREEN transitions automatically to the HOME SCREEN, from which the user is
able to navigate in any of eight different directions. For example, from the HOME SCREEN, a
user can follow arrow number 5 to select an overhead view. From, e.g., the STREET MAP VIEW,
the user can see the watershed boundary, waterways in the watershed and data collection sites
within the watershed all overlaid on a local street map. By selecting one of these data sites, the
user will be taken to the DATA SITE SUMMARY, which includes information about which
instruments are measuring which parameters. From this view, the user can go to the SINGLE
GRAPH VIEW to plot these parameters. From this graph, the user can access a time stamped
image of the field site in the SINGLE IMAGE VIEW. In this way, the imagery data serves as the
user's eyes into the remote system. This spatial and visual situational context serves to increase
the user’s insights into the meaning of the data displayed in the single graph and six graph views.
Case studies then use this integrated environment to investigate particular events that occur in the
system being studied.
Figure 1. Site Map view of the LEWAS-specific PIRMS storyboard. In addition to the
connections shown, every block except GLOSSARY TOPIC, SPLASH SCREEN and HOME
SCREEN links to a specific glossary topic. Every block except SPLASH SCREEN and HOME
SCREEN links to the HOME SCREEN.
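The navigation just described can be summarized as a small adjacency map. The sketch below encodes only the example path discussed above (HOME SCREEN through SINGLE IMAGE VIEW); it is a simplification for illustration, not the full eight-direction site map.

    # Simplified sketch of the storyboard navigation described above, written
    # as an adjacency map from screens to reachable screens. Screen names come
    # from the text and Figure 1; only the example path is modeled.
    SITE_MAP = {
        "SPLASH SCREEN": ["HOME SCREEN"],          # automatic transition
        "HOME SCREEN": ["STREET MAP VIEW"],        # one of eight navigation directions
        "STREET MAP VIEW": ["DATA SITE SUMMARY"],  # select a data collection site
        "DATA SITE SUMMARY": ["SINGLE GRAPH VIEW"],
        "SINGLE GRAPH VIEW": ["SINGLE IMAGE VIEW"],
    }

    def path_from_home(target, site_map=SITE_MAP):
        """Follow the (linear, simplified) links from HOME SCREEN to a target view."""
        path, current = ["HOME SCREEN"], "HOME SCREEN"
        while current != target:
            current = site_map[current][0]
            path.append(current)
        return path

    print(path_from_home("SINGLE IMAGE VIEW"))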
Suppose that an instructor wants to use the PIRMS to discuss a water sustainability case study
with her or his students. He or she would ask students to go to a website to access the PIRMS on
the platforms of their choice and follow arrow 2 to select a case study. Figure 2 shows an
example case study in the single graph view. In this view, the user is able to select up to six of
the measured parameters in the system for display on either of the two vertical axes, which
allows for the display of parameters on highly different scales. In this case study, precipitation
on the LEWAS’ watershed began as rain around 2PM on April 4, 2013, before quickly changing
to snow and changing back to ice and rain between 6PM and 7PM. Another small rain storm
passed over the LEWAS site around midnight on April 5.
This case study shows several examples of the types of insights into the system that users can
gain by the data integration of the PIRMS. For example, specific conductivity usually drops and
turbidity usually rises during rain events as compared to base flow conditions. This allows the
user to estimate that precipitation began around 2PM despite the absence of temperature and
precipitation data from just before 12PM to just before 4PM on April 4. A time-stamped image
at 3:42 PM confirms that it was snowing at the LEWAS site and appears to have been doing so
for some time (Figure 3). Around 4PM the specific conductivity level began to climb rapidly,
which is the result of road salt being washed into the stream. This road salt resulted in an acute
chloride toxicity event in the stream, which would have gone unnoticed if not for the high-temporal-resolution of the data.17 The small rainfall around midnight on April 5 resulted in some
residual salt being washed into the stream. Finally, the specific conductivity and turbidity levels
suggest a precipitation event between 9AM and 6PM on April 5, but negligible precipitation
occurs during this period. Rather, the air temperature indicates that the event is the result of
melting snow from the previous day’s storm. These insights can be gained by the integration of
the data when users are virtually situated at the field site. A camera currently being integrated
into the LEWAS will provide regular imagery.
Figure 2. Single graph view of the LEWAS-specific PIRMS storyboard using measured case
study data.
Figure 3. Single image view of the LEWAS-specific PIRMS storyboard.
In addition to the clickable time shift and zoom buttons shown in Figure 3, the interface allows
the user to alter the time axis using one and two finger motions on touch screens. The six graph
view (Figure 4) acts in a similar way to the single graph view except that each of the axes can
display only a single parameter but can display that parameter from multiple measurement sites.
All six graphs and the imagery axis move synchronously in time.
Figure 4. Six graph view of the LEWAS-specific PIRMS storyboard using artificial data.
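To illustrate the two-vertical-axes idea of the single graph view, the sketch below plots two parameters with very different scales against a shared time axis using matplotlib. The values are invented, and the actual PIRMS interface renders its graphs with the HTML5 canvas rather than Python.

    # Sketch of the single-graph-view idea: two vertical axes so parameters on
    # very different scales can share one time axis. Data are invented; this is
    # an illustration only, not the PIRMS rendering code.
    import matplotlib.pyplot as plt

    hours = list(range(12))
    conductivity_us_cm = [700, 690, 710, 800, 2500, 4800, 4200, 2600, 1500, 1000, 850, 760]
    flow_m3_s = [0.02, 0.02, 0.03, 0.4, 1.8, 3.5, 2.9, 1.6, 0.8, 0.3, 0.1, 0.05]

    fig, ax_left = plt.subplots()
    ax_right = ax_left.twinx()                 # second vertical axis
    ax_left.plot(hours, conductivity_us_cm)
    ax_right.plot(hours, flow_m3_s, linestyle="--")
    ax_left.set_xlabel("Hours since start of event")
    ax_left.set_ylabel("Specific conductivity (uS/cm)")
    ax_right.set_ylabel("Flow (m^3/s)")
    plt.show()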
The PIRMS is an adaptable system that can be used with other watersheds or generalized to other
remote measurement systems. Without the watershed context, the site map in Figure 1 can be
adjusted such that the watershed becomes simply a system and the overhead views of the
watershed become various system views.
Conversion of the storyboard into an end product is ongoing. During the summer of 2013, a
database was developed to act as an interface between LabVIEW and the PIRMS in order to
automate data storage and retrieval.27 The PIRMS uses HTML5 to allow for device and platform
independence. In this setting, the canvas object is being used to generate interactive graphs.28-29
The PIRMS focuses on the benefits of integrated visualization rather than on computational
power. However, the final product will allow the user to save data locally for further
computational analysis.
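A minimal sketch of the kind of storage-and-retrieval layer described above is shown below, using SQLite: an acquisition-side call stores each measurement, and a query function returns a time window of one parameter for plotting. The table layout and function names are assumptions for illustration, not the actual LabVIEW-PIRMS database design.

    # Hypothetical sketch of a measurement store between acquisition and the
    # web interface. Schema and function names are assumptions for illustration.
    import sqlite3

    conn = sqlite3.connect("lewas_demo.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS measurements (
                        ts TEXT, parameter TEXT, value REAL)""")

    def store(ts, parameter, value):
        """Called by the acquisition side to automate data storage."""
        with conn:
            conn.execute("INSERT INTO measurements VALUES (?, ?, ?)",
                         (ts, parameter, value))

    def window(parameter, start_ts, end_ts):
        """Called by the front end to retrieve data for one graph axis."""
        cur = conn.execute(
            "SELECT ts, value FROM measurements "
            "WHERE parameter = ? AND ts BETWEEN ? AND ? ORDER BY ts",
            (parameter, start_ts, end_ts))
        return cur.fetchall()

    store("2013-04-04T16:00:00", "specific_conductivity_us_cm", 2400.0)
    print(window("specific_conductivity_us_cm",
                 "2013-04-04T00:00:00", "2013-04-06T00:00:00"))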
PIRMS classroom implementation
Theoretical framework
While the prior studies of water sustainability education do not subscribe to specific theoretical
frameworks, they contain a recurring theme that students learn more about the environment they
are studying if they have the opportunity to connect classroom learning to experiences in that
physical environment. Furthermore, several of these studies indicate that these experiences can
be a combination of physical field visits and virtual field visits. These results fit under the
framework of situated learning, which argues that knowledge is “distributed among people and
their environments.”30 p. 17 This definition divides situated learning into two primary areas, i.e.
knowledge is distributed across people, e.g. a community of practice31, and knowledge is
dependent on the learning environment.32 The former follows the sociocultural tradition, while
the latter follows the sociocognitive tradition.33 While no two learning environments are exactly
alike,34 we are able to make judgments about the best previously learned knowledge to apply to
new learning environments based on common features.30, 35
According to Newstetter and Svinicki, “Effective learning environments support the learner in
developing an ability to integrate the external environment structures and internal knowledge in
problem solving.”36 p. 39 Graphs and images are types of data representations that engineers often
use to help them understand systems, and these representations are increasingly being
communicated via digital technology. Within the context of water sustainability, technology
advances have increased our ability to integrate remotely sensed environmental data into the
learning environment.37 The ways by which physical objects and data representations alter the
learning environment is called mediation.33 One of the strengths of the PIRMS is its ability to
interactively integrate graphs and images in order to virtually situate users at the LEWAS field
site. In this way, the PIRMS can be used as a remote lab. Remote labs, which allow users to be
situated at the study site without physically being present, are spreading within engineering
curricula.38-41 Additionally, it has been estimated that more than half of U.S. internet users will
access the internet via mobile devices by 2015,42 and platform-independence allows the PIRMS
to reach a larger number of people by working across mobile platforms.
Remote labs rely on digital technology to provide remote access to users, and this technology is
especially powerful when it is interactive.43 Multimedia uses digital technology to reach users via
multiple types of content, e.g. text, imagery, video and audio. Many types of interactive
multimedia can be used in learning, e.g. open-ended learning environments, tutorials and serious
games.44 However, according to Johri, Olds and O’Connor, “The role of technological tools,
particularly digital tools, is extremely under-theorized in engineering education and a perspective
of mediation can prove useful to develop a deeper understanding of technology use and
design.”33 p. 53 They have listed “Empirical studies of mediation by tools used in learning and
practice” as a potential engineering learning research topic, which is an excellent match for the
present research.33 p. 55
According to Prus and Johnson, it is essential to choose assessment methods that are relevant,
accurate and useful, i.e. those that provide detailed measures of the desired outcomes and indicate
areas for improvement.45 Since this research seeks to assess student learning, it is more
appropriate to use a direct measure of learning, i.e. measure what students learned, rather than an
indirect measure of learning, i.e. measure what students think they learned.5, 46 When used with
performance measures, rubrics provide a direct assessment of student learning when a judgment
of quality is required.47 Analytic rubrics allow for multiple learning objectives to be assessed
using a single rubric. Rubrics have the advantage of being “more objective and consistent” than
are other assessment methods.5, 48
In its report on the Challenges and Opportunities in the Hydrologic Sciences, the National
Academy of Sciences states that, “Ensuring clean water for the future requires an ability to
understand, predict and manage changes in water quality.”1 p. 8 These three abilities can be
aligned with the levels of Bloom’s revised cognitive taxonomy.49-50 Understanding, as evidenced
by an ability to explain the occurrence of changes in water quality, fits with the second level of
this taxonomy i.e. understanding. Predicting what is going to happen as the result of a particular
event in a watershed fits with the fifth level of this taxonomy, i.e. evaluating. Developing
management plans for a watershed requires the synthesis of diverse factors impacting this
system. This ability fits with the top level of the revised taxonomy, i.e. creating. As students
progress through various academic levels, they should likewise advance through all six levels of
cognition. Having a high level of cognition about such water systems allows individuals to move
beyond solving water sustainability problems to defining water sustainability problems, which
allows them to effectively manage water systems.51
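The alignment stated in this paragraph can be written out as a small lookup table, which is convenient when tagging prompts or rubric rows by cognitive level. The sketch below restates only the three mappings given in the text; anything beyond them would be an assumption.

    # The Bloom's-revised-taxonomy alignment described above, written as a
    # small lookup table. Only the three mappings stated in the text are included.
    BLOOMS_ALIGNMENT = {
        "explain changes in water quality": ("Understanding", 2),
        "predict the result of a watershed event": ("Evaluating", 5),
        "develop a watershed management plan": ("Creating", 6),
    }

    for ability, (level_name, level_number) in BLOOMS_ALIGNMENT.items():
        print(f"{ability!r} -> level {level_number} ({level_name})")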
Both research designs described below seek to assess students’ learning of water sustainability
topics. Using Bloom’s revised cognitive taxonomy as a guide, Figure 5 suggests topics that are
appropriate for each course level and components of the PIRMS that can be used to help students
learn these topics.
Figure 5. Lesson plan guide including examples of water sustainability education topics
appropriate for each level of Bloom’s revised cognitive taxonomy49-50 and the corresponding
PIRMS components that are appropriate for learning these topics. Levels 1-2 are applicable to
the freshman-level community college courses, and levels 1-5 are applicable to the senior-level
hydrology course. Level 6 would apply to a graduate-level hydrology course.
Research question
Within this theoretical framework, the overall question of this research is
1) How effective is the PIRMS at increasing student learning of water sustainability topics at
different academic levels?
Research methods
The current research uses the theoretical framework of situated learning by using the PIRMS to
virtually situate students at the LEWAS field site for both the freshman-level and the senior-level
courses. However, due to differences in the learning levels of the courses, the research designs
used are not identical. The senior-level hydrology course at VT typically consists of one section
of roughly ten female and twenty male students with about 10% graduate students. The LEWAS
was integrated into this course during the fall 2012 semester using three learning modules as part
of a TUES grant. The first module entailed characterizing the water quantity relationships
between rainfall and runoff for rain events in the LEWAS watershed. The second module related
water quantity and quality during rain events to landcover within the watershed. For the third
module, students assessed the watershed on a rotating weekly basis by visiting the field site,
analyzing data and writing on a course wiki about their observations. Overall, student
assessment results indicated that students believed exposure to the LEWAS was beneficial for
learning hydrologic concepts.24
Table 1 outlines the details of the longitudinal true experimental research design52-53 in the
senior-level Hydrology course. Since this course has only a single section, random assignment
will be used to split the students into groups A and B, each consisting of roughly five subgroups of about three students. Following the pre-test, one subgroup from each group will complete treatment 1 (see Table 1) for one week, post-test 1 at the end of the first week, treatment 2 for one week, post-test 2 at the end of the second week, and post-test 3
during the third week. A new subgroup from each group will begin treatment 1 every two weeks
until all students have completed the process. For the pre-test and each post-test, the students
will write narrative descriptions in response to the same set of prompts. Using the same prompts
for the pre-test and each post-test allows for direct comparison of the students’ results. Prompts
and scoring rubrics will be developed using Figure 5 as a guide. Sample prompts are shown in
Table 1.
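As an illustration of the random-assignment step, the sketch below splits a roster into groups A and B and then into subgroups of about three students. The seed, roster size, and helper name are arbitrary choices; the paper specifies only that assignment is random.

    # Sketch of the random-assignment step: split one course section into
    # groups A and B, then into subgroups of about three students each.
    # Names and the fixed seed are illustrative assumptions.
    import random

    def assign(students, subgroup_size=3, seed=None):
        rng = random.Random(seed)
        shuffled = students[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        groups = {"A": shuffled[:half], "B": shuffled[half:]}
        return {g: [members[i:i + subgroup_size]
                    for i in range(0, len(members), subgroup_size)]
                for g, members in groups.items()}

    roster = [f"student_{i:02d}" for i in range(1, 31)]   # ~30 students in the section
    print(assign(roster, seed=42))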
Several factors have been taken into consideration in the development of this research design. It
was originally considered to use students from a previous year as a control group. However, this
was considered to be a poor choice since this would likely introduce several confounding
variables.52 One significant threat to the internal validity of this research is imitation of
treatments, which will occur if students in group B visit the field site prior to the completion of
post-test 1. In order to minimize the impact of this threat, students in both groups will be given
access to the PIRMS only when they first need it for their assignments. A design where all
students visited the field in treatment 1 and added the PIRMS in treatment 2 was considered.
However, this was rejected over concerns of the maturation threat to internal validity and the
absence of a comparison group. Finally, the students will not be given access to the blog/wiki
until after post-test 2 so that the sociocognitive aspect of situated learning, i.e. the PIRMS, can be
tested before the sociocultural aspect of situated learning is implemented. As for reliability, care
will be taken to minimize the threat of inter-rater reliability issues. The test-retest threat to
reliability52, 54 is minimal in this research because the increasing cognitive levels, rather than the
memorization of facts, are sought. That is, although the prompts are identical for each
assessment, the expected responses are not.
It was originally planned to implement this research design in the spring 2014 semester. In this
plan, the first two learning modules from the 2012 course would be retained and the third
learning module would be replaced by this research design. However, due to delays in the
technical development of the PIRMS, the full design could not be implemented in the
spring 2014 semester. Rather, a pilot test of the PIRMS will be implemented in week 13 of the
semester. In this pilot test, the students will be given access to the PIRMS and asked to write
about the parameter relationships that they see during a summer rain event. They will also
inform the technical development of the PIRMS by discussing the strengths and weaknesses of
the interface.
Table 1: Research Methods - Senior Undergraduate Level
Assessment sequence: Pre-test; Post-test 1; Post-test 2; Post-test 3 (blog/wiki)
Research Question 1 – Senior Level: Hydrology class at VT
Instrument Used in Data Collection - Pre-test and post-test prompts:
1) What value, if any, do you see in real-time monitoring of water quantity and quality?
2) How can the LEWAS system help you learn hydrologic concepts?
3) What types of unusual water quality events might this system detect?
4) Describe three limitations of the LEWAS system.
5) How can this system be used for advancing research questions relevant to hydrology?
6) Describe the relationship between water quantity and pH during and after a rain event.
7) What are the typical and extreme values of water flow at the LEWAS site in cfs?
8) What would be the added value of a product that delivers live and/or historical remote
system data (visual, environmental, geographical, etc.) to end users regardless of the
hardware (desktop, laptop, tablet, smartphone, etc.) and software (Windows, Linux, iOS,
Android, etc.) platforms of their choice?
9) What difficulties can you anticipate in your one week assignment to monitor the water
quantity, quality and weather parameters?
A rubric will be developed and applied to narrative responses to questions 3, 4, 6 & 7 in order
to convert qualitative data to quantitative data for analysis.
Timeline: Student recruitment: Spring 2015; Data collection: Spring 2015; Data analysis and
interpretation: Summer 2015.
Experimental Procedure: Pre-tests, post-tests 1, 2 & 3
Student Population: Students from a single section of the Hydrology course
Random Assignment and Treatments:
Group A - Treatment 1: Exposure to the field site and the LEWAS measurement data and imagery; Treatment 2: Exposure to the PIRMS, the field site, and the LEWAS measurement data and imagery
Group B - Treatment 1: Exposure to the PIRMS and the LEWAS measurement data and imagery; Treatment 2: Exposure to the PIRMS, the field site, and the LEWAS measurement data and imagery
Variables - Dependent: Student scores from rubric. Independent: Gender, race, group assignment.
Statistical Test and Inferential Hypothesis: One-way and multi-way ANOVA, ANCOVA, non-parametric and post-hoc tests to assess differences in response and interactions among independent variables.45, 55, 56
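The planned analysis in Table 1 could be carried out in several statistical packages; as one illustration, the sketch below runs a two-way ANOVA on rubric scores with statsmodels. The data frame contents are invented placeholders, and the study itself does not prescribe this tool.

    # Illustrative sketch only: one way a two-way ANOVA on rubric scores could
    # be run in Python. The data frame below is invented placeholder data.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.DataFrame({
        "score":  [3, 4, 2, 5, 4, 3, 4, 5, 2, 3, 4, 5],   # rubric scores (placeholders)
        "group":  ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
        "gender": ["F", "M", "F", "M", "F", "M", "F", "M", "F", "M", "F", "M"],
    })

    model = smf.ols("score ~ C(group) * C(gender)", data=df).fit()
    print(anova_lm(model, typ=2))   # two-way ANOVA table with interaction term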
The freshman-level research design will be implemented in first semester engineering courses at
VWCC and JTCC in the fall of 2014. Both of these courses typically have three sections of 15-20 students each. The LEWAS was previously used in the spring and fall 2013 semesters in a
freshman engineering course at VWCC as part of the NSF TUES grant. In these courses, four 50
minute lecture periods and multiple assignments were used to introduce students to the general
water sustainability concepts and the LEWAS, complete data collection from a local waterway,
and complete computational exercises. Overall, assessment results indicated that students in the
course believed that exposure to the LEWAS was beneficial for increasing public awareness of
human impacts on water quantity and quality. For example, one student believed that the
LEWAS could be used to “Show [the public] the effects of humans on the environment in simple
terms.” While another believed that the LEWAS can “Show negative side effects of
Runoff/uncontrolled urbanization.”
Table 2 outlines the details of the pre-test-post-test quasi experimental design.52-53 This design is
considerably simpler than that of the senior-level course, and it is constrained to two fifty-minute
class periods for each of three successive weeks. In these courses, the students will self-select
into the course sections of their choice. Students in all three sections will be given a common
pre-test and common post-tests. In these courses, the students will write narrative descriptions
that are assessed using rubrics, and the prompts will be the same for the pre-test and both post-tests as in the senior-level course. However, the prompts and rubrics will be appropriate to the
level of the course based on Figure 5. Sample prompts are included in Table 2. Since there is not
a natural sequential structure as in the senior-level course, no blog/wiki is included for this
group. Rather, focus groups will be used to assess the sociocultural aspect of situated learning.
Focus group prompts will be similar to those used in the pre-test and first post-test.
Table 2: Research Methods - Freshman Undergraduate Level
Assessment sequence: Pre-test; Post-test 1; Post-test 2 (focus groups)
Research Question 1 – Freshman Level: EGR 124 Intro to Engineering and Engineering
Methods at VWCC and EGR 120 Introduction to Engineering at JTCC (same for each school)
Instrument Used in Data Collection - Pre-test and post-test sample prompts:
1) Describe the sources of water arriving at the field site and where the water goes afterward.
2) How do the actions of people impact the watershed? Provide examples.
3) What are five water quantity/quality parameters, and what do they tell us?
A rubric will be developed and applied to narrative responses to questions in order to convert
qualitative data to quantitative data for analysis.
Timeline: Student recruitment: Fall 2014; Data collection: Fall 2014; Data analysis and
interpretation: Fall 2014/Spring 2015.
Experimental Procedure: Pre-test, post-tests and focus groups.
Student Population: Students from 3 sections of EGR 120 (self-selection into course sections)
Treatments by course section:
Course Section A: Exposure to the LEWAS measurement data and imagery (control)
Course Section B: Exposure to the PIRMS; exposure to the LEWAS measurement data and imagery
Course Section C: Exposure to video about the LEWAS field site; exposure to the LEWAS measurement data and imagery
Variables - Dependent: Student scores from rubric. Independent: Gender, race, group assignment.
Statistical Test and Inferential Hypothesis: One-way and multi-way ANOVA, ANCOVA, non-parametric and post-hoc tests to assess differences in response and interactions among independent variables.45, 54, 55
Several factors help to minimize the imitation of treatment threat to internal validity for this
course. These include that the treatments are applied to different course sections of first-semester freshmen who are primarily commuting students not living in close proximity to one another, and that the duration of the exposure is relatively short. However, the inter-rater threat to reliability
still exists for this course.54
Conclusion
The PIRMS has been developed in response to the need for increased water sustainability
education. Within the framework of situated learning, the PIRMS interactively delivers live
and/or historical remote system data (visual, environmental, geographical, etc.) to end users
regardless of the hardware (desktop, laptop, tablet, smartphone, etc.) and software (Windows,
Linux, iOS, Android, etc.) platforms of their choice to virtually situate users at the LEWAS field
site. As part of this Work-in-Progress, the PIRMS is being applied to water sustainability
education at multiple undergraduate levels. Initial results and a demonstration of the PIRMS will
be given in the presentation.
Acknowledgement
This work has been supported by an NSF/TUES type I grant (award #1140467). Any opinions,
findings, and conclusions or recommendations expressed in this paper are those of the author(s)
and do not necessarily reflect the views of the National Science Foundation.
References
1. National Research Council, 2012. Challenges and Opportunities in the Hydrologic Sciences. Washington, DC:
The National Academies Press.
2. Ditton, R. B., & Goodale, T. L., 1974. Water Quality Perceptions and Attitudes. The Journal of Environmental
Education, vol. 6, no. 2, pp. 21–27.
3. Schaeffer, D. J., & Janardan, K. G., 1977. Communicating Environmental Information to the Public: A New
Water Quality Index. The Journal of Environmental Education, vol. 8, no. 4, pp. 18–26.
4. Covitt, B. A., Gunckel, K. L., & Anderson, C. W., 2009. Students’ Developing Understanding of Water in
Environmental Systems. The Journal of Environmental Education, vol. 40, no. 3, pp. 37–51.
5. Spurlin, J. E., Rajala, S. A., & Lavelle, J. P., 2004. Designing Better Engineering Education Through
Assessment. Sterling, VA, USA: Stylus.
6. http://www.engineeringchallenges.org/cms/8996/9142.aspx accessed on Sept. 10, 2013.
7. Lohani, V. K., Wolfe, M. L., Wildman, T., Mallikarjunan, K., and Connor, J., 2011. Reformulating General
Engineering and Biological Systems Engineering Programs at Virginia Tech, Advances in Engineering
Education Journal, ASEE, vol. 2, no. 4, pp. 1-30.
8. Dowding, T. J., 1993. The Application of a Spiral Curriculum Model to Technical Training Curricula.
Educational Technology, vol. 33, no. 7, pp. 18-28.
9. Dibiasio, D., Clark, W. M., Dixon, A. G., Comparini, L. & O'Connor, K., 1999. Evaluation of a spiral
curriculum for engineering. Frontiers in Education Conference (FIE '99, 29th Annual), vol. 2,
pp. 12D1/15-12D1/18.
10. Armstrong, M., & Bennett, D., 2005. A Manifesto on Mobile Computing in Geographic Education. The
Professional Geographer, vol. 57, no. 4, pp. 506–515.
11. Iqbal, M., 2013. Field and Lab-based Activities for Undergraduate Students to Study the Hydrologic
Environment. In Proc. 2013 TUES PIs Conference (p. A135). Washington, D.C.
12. Habib, E., Ma, Y., Williams, D., Sharif, H. O., & Hossain, F., 2012. HydroViz: design and evaluation of a Web-based tool for improving hydrology education. Hydrology and Earth System Sciences, vol. 16, no. 10, pp. 3767–
3781.
13. http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=503584 accessed on Sept. 14, 2013.
14. Delgoshaei, P., and Lohani, V. K., 2012. Implementation of a Real-Time Water Quality Monitoring
Lab with Applications in Sustainability Education, Proc. 2012 ASEE Annual Conference, June 10 - 13,
2012, San Antonio, Texas.
15. www.lewas.centers.vt.edu accessed on Sept. 10, 2013.
16. Virginia Department of Environmental Quality, 2006. Upper Stroubles Creek Watershed TMDL
Implementation Plan Montgomery County, Virginia. Blacksburg, VA, USA.
17. Clarke, H., Mcdonald, W., Raamanthan, H., Brogan, D., Lohani, V. K., and Dymond, R., Investigating
the Response of a Small, Urban Watershed to Acute Toxicity Events via Real-Time Data Analysis,
2013. Proceedings of Research, NSF/REU Site on Interdisciplinary Water Sciences and Engineering
(under preparation), Virginia Tech
18. Delgoshaei, P., Green, C., and Lohani, V. K., 2010. Real-Time Water Quality Monitoring using LabVIEW:
Applications in Freshman Sustainability Education, Proc. 2010 ASEE Annual Conference, Louisville,
Kentucky, June 20-23, 2010.
19. Delgoshaei, P., 2012. Design and Implementation of a Real-Time Environmental Monitoring Lab with
Applications in Sustainability Education, PhD Dissertation submitted to Virginia Tech, Dec. 2012.
20. Wigfield A. and Eccles, J. S., 2000. Expectancy-value theory of achievement motivation.
Contemporary Educational Psychology, vol. 25, pp. 68–81.
21. Delgoshaei, P. and Lohani, V. K., Design and Application of a Real-Time Water Quality Monitoring
Lab in Sustainability Education. Paper accepted for publication, International Journal of Engineering
Education.
22. https://blogs.lt.vt.edu/cee4304f2012 accessed on Sept. 16, 2013.
23. http://vwcclewas.blogspot.com/ accessed on Sept. 16, 2013.
24. Dymond, R., Lohani, V. K., Brogan, D., and Martinez, M., 2013. Integration of a Real-Time Water and
Weather Monitoring System into a Hydrology Course, Proc. 2013 Annual Conference of American Society for
Engineering Education, June 23 - 26, 2013, Atlanta, GA.
25. Golombisky, K., & Hagen, R., 2010. White Space is Not Your Enemy: A Beginner's Guide to Communicating
Visually through Graphic, Web and Multimedia Design. New York: Focal Press.
26. Alessi, S., & Trollip, S., 2000. Multimedia for Learning: Methods and Development (3rd Edition). New York:
Allyn & Bacon.
27. Rai, A., Brogan, D., and Lohani, V. K., 2013. A LabVIEW Driven Real-time Weather Monitoring
System with an Interactive Database, Proceedings of Research, NSF/REU Site on Interdisciplinary
Water Sciences and Engineering (under preparation), Virginia Tech.
28. Zhu, Y., 2012, Introducing Google Chart Tools and Google Maps API in Data Visualization Courses, IEEE
Computer Graphics and Applications, vol. 32, no. 6, pp. 6-9, Nov-Dec 2012.
29. Grady, M., 2010. Functional programming using JavaScript and the HTML5 canvas element. Journal of
Computing Sciences in Colleges, vol. 26, no. 2, pp. 97–105.
30. Greeno, J. G., Collins, A. M., & Resnick, L. B., 1996. Cognition and Learning. In D. C. Berliner & R. C. Calfee
(Eds.), Handbook of Educational Psychology (pp. 15–46). New York, NY, USA: Macmillan Library Reference
USA.
31. Lave, J. and Wenger, E., 1991. Situated Learning: Legitimate Peripheral Participation. Cambridge University
Press.
32. Scribner, S., 1997. Studying Working Intelligence. In E. Tobach, R. J. Falmagne, M. B. Parlee, L. M. W.
Martin, & A. S. Kapelman (Eds.), Mind and social practice: Selected writings of Sylvia Scribner (pp. 308–318).
Cambridge: Cambridge University Press.
33. Johri, A., Olds, B. M. and O’Connor K. Situative Frameworks for Engineering Learning Research. In
Cambridge Handbook of Engineering Education Research (Chapter 3, pp. 47–66), Johri A. and Olds B. M. Eds.
Available Dec. 2013, Cambridge University Press. ISBN: 9781107014107.
34. Wertsch, J. V., 1998. Voices of the mind. New York:Oxford University Press.
35. Engle, R. A., 2006. Framing interactions to foster generative learning: A situative explanation of transfer in a
community of learners classroom. Journal of the Learning Sciences, vol. 15, no. 4, pp. 451-498.
36. Newstetter W. C. and Svinicki M. D. Learning Theories for Engineering Education Practice. In Cambridge
Handbook of Engineering Education Research (Chapter 2, pp. 29–46), Johri A. and Olds B. M. Eds. Available
Dec. 2013, Cambridge University Press. ISBN: 9781107014107.
37. Glasgow, H. B., Burkholder, J. M., Reed, R. E., Lewitus, A. J., & Kleinman, J. E., 2004. Real-time remote
monitoring of water quality: a review of current applications, and advancements in sensor, telemetry, and
computing technologies. Journal of Experimental Marine Biology and Ecology, vol. 300, no. 1-2, pp. 409–448.
38. Ma, J., & Nickerson, J. V., 2006. Hands-on, simulated, and remote laboratories: A Comparative Literature
Review. ACM Computing Surveys, vol. 38, no. 3, pp. 1–24.
39. Balamuralithara, B., & Woods, P. C., 2009. Virtual laboratories in engineering education: The simulation lab
and remote lab. Computer Applications in Engineering Education, vol. 17, no. 1, pp. 108–118.
40. Gomes, L., & García-zubía, J. (Eds.), 2007. Advances on remote laboratories and e-learning experiences.
Bilbao, Spain: Deusto Publications.
41. Nedic, Z., Machotka, J., & Nafalski, A., 2003. Remote Laboratories Versus Virtual and Real Laboratories. In
2003 33rd Annual Conference Frontiers in Education (pp. T3E1–T3E6). Boulder, Colorado: IEEE.
42. International Data Corporation’s Worldwide New Media Market Model, 1H11.
http://www.idc.com/getdoc.jsp?containerId=prUS23028711 accessed on Apr 25, 2012.
43. Crawford, C., 2002. Art of Interactive Design: A Euphonius and Illuminating Guide to Building Successful
Software. No Starch Press. ISBN-13: 978-1886411845.
44. Alessi, S., & Trollip, S., 2000. Multimedia for Learning: Methods and Development (3rd Edition). New York:
Allyn & Bacon.
45. Prus, J. and Johnson, R., 1994, A critical review of student assessment options. New Directions for Community
Colleges, vol. 1994, no. 88, pp. 69–83.
46. Pedhazur, E. J. & Schmelkin, L. P., 1991. Measurement, Design, and Analysis: An integrated approach.
Hillsdale, New Jersey: Lawrence Erlbaum Associates.
47. Brookhart, S. M., 1999. The art and science of classroom assessment: The missing part of pedagogy. ASHE-ERIC Higher Education Report, Vol. 27, No. 1. Washington, DC: The George Washington University Graduate
School of Education and Human Development.
48. Educational Technology Center - Kennesaw State University, n.d. Assessment Rubrics. Retrieved August 07,
2012, from http://edtech.kennesaw.edu/intech/rubrics.htm
49. A Committee of College and University Examiners., 1956. Taxonomy of educational objectives: The
classification of educational goals: Handbook I - cognitive domain. (B. S. Bloom, M. D. Engelhart, E. J. Furst,
W. H. Hill, & D. R. Krathwohl, Eds.). New York: Longmans, Green and Co.
50. Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., Raths, J.
and Wittrock, M. C. (eds.), 2001. A taxonomy for learning, teaching, and assessing: A revision of Bloom's
taxonomy of educational objectives. Addison Wesley Longman.
51. Downey, G. L., 2005. Keynote lecture: Are engineers losing control of technology? From “problem solving” to
“problem definition and solution” in engineering education. Chemical Engineering Research and Design, vol.
83, no. A8, pp. 1-12.
52. Creswell, J. W., 2009. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Thousand
Oaks, CA, USA: SAGE Publications.
53. Leedy, P. D., & Ormrod, J. E., 2005. Experimental and Ex Post Facto Designs. In Practical Research: Planning
and Design (8th ed., pp. 217–244). Upper Saddle River, NJ, USA: Prentice Hall.
54. Moskal, B. M., Leydens, J. A., & Pavelich, M. J., 2002. Validity, reliability and the assessment of engineering
education. Journal of Engineering Education, vol. 91, no. 3, pp. 351–354.
55. Ott, L. and Longnecker, M., 2008. An Introduction to Statistical Methods and Data Analysis (sixth edition).
Duxbury Press. ISBN-10: 0495017582, ISBN-13: 978-0495017585.
56. Howell, D.C., 2011. Statistical Methods for Psychology (8th ed.). Belmont, CA: Wadsworth.
Paper ID #10137
Work in Progress: Developing Senior Experimental Design Course Projects
Involving the Use of a Smartphone
Dr. Denise H Bauer, University of Idaho, Moscow
Dr. Denise Bauer is an Assistant Professor in the Department of Mechanical Engineering at the University
of Idaho. Dr. Bauer teaches both first-year and senior-level courses and is developing a new engineering
course for first-year students that are under-prepared in math. Her main research area is Human Factors
and Ergonomics where she is currently working on a pedestrian guidance system for the visually impaired.
She is also working on several initiatives to improve the undergraduate experience in the College of
Engineering and the retention rates of under-served students (women, underrepresented minorities, and
first-generation students).
Dr. Edwin M. Odom, University of Idaho, Moscow
Dr. Odom teaches courses that include introductory CAD, advanced CAD, mechanics of materials, machine design, experimental stress analysis and manufacturing technical electives within the Mechanical
Engineering program at the University of Idaho.
© American Society for Engineering Education, 2014
Work in Progress: Developing Senior Experimental Design
Course Projects Involving the Use of a Smartphone
Introduction
The Mechanical Engineering senior laboratory course at the University of Idaho is a project-based course that focuses on experimental design and requires students to design, perform and
analyze their own statistically based experiments. A difficulty that arises each semester,
especially in the Fall when there are 40-plus students, is finding enough appropriate experiments
that can be designed, run, and analyzed in the last two-thirds of the semester (the course is one
semester) with minimal funds. In the past, we used “canned” projects or Senior Capstone
projects; however, the canned projects were not interesting to the students and it is becoming
harder to develop short, fast statistical experiments (must use either confidence intervals,
factorial ANOVA, or regression) from the capstone projects. With the widespread use of
Smartphones and mobile computing devices, we thought using these devices would be an
interesting and inexpensive way to develop new projects each semester.
In the Fall 2013 semester, we had two student teams develop experiments to obtain engineering
data on human balance using a balance board and their Smartphone with a purchased app. The
purpose of the experiments was not to teach students to use Smartphone apps, they can already
do that effortlessly, but to have an inexpensive way for them to collect engineering data that they
could analyze and make statistical conclusions. We did anticipate using the app to show students
ways to collect data in addition to the traditional methods and how to transform the data into
something useful for analysis. Overall, what we hoped to accomplish in this pilot semester was to develop projects that could apply one of the three statistical designs taught in the course, would result in data related to engineering, could be reused each semester without being exactly the same each time, and, most importantly, that students would be excited about doing.
This paper will present the process the students took while developing and completing their
projects, the observations of the faculty during that process, the lessons learned, and the future of
the projects.
Student Experimental Design Development and Testing
It was decided to use a Fitter Rocker balance board (Figure 1) due to the ability to reuse it each
semester and the ease of attaching a holder for the Smartphone while testing. We also knew a
variety of tests could be performed with minimal time commitment and funds, two very
important aspects of the course. Eight students selected the “Stability” project in the Fall 2013
semester even after the class was told that they would be creating this project from scratch. They
formed two teams of four students each and were initially tasked with finding an appropriate app
(Accelerometer Monitor, Dev: Mobile Tools, Version 1.6, Android OS) to use (Figure 2) and
build a casing to hold the measuring device. Developing an app would have been more time
consuming and possibly limited to one Smartphone platform. Therefore, it was decided to
purchase an app instead to allow the use of any Smartphone and an app that has been thoroughly
tested. Purchasing traditional balance testing equipment, such as force plates, motion capture,
and force sensors, to use in the experiments is not possible due to limited funds. In contrast, the
purchase of an app on a Smartphone the students already own costs $1 to $5 per team. The
experiment can also be repeated each semester without wear and tear on expensive equipment
that would eventually need to be replaced. The balance board may need to be replaced every two
to three years, which translates to approximately $35 per year; much less expensive than other
equipment. Additionally, as apps are becoming more common as tools to collect and analyze
data, students should learn how to use them for more than just fun and games.
The seed research questions provided to the students were “what is human balance?” and “can
balance be measured and thereby understood?” From these two questions, the students began to
research the topic and develop their experiments. Based on previous studies on balance,1, 2, 3, 4, 5
the students formed their research questions and objectives. Both teams focused on using the
results as a possible way to design sports equipment, such as bicycles, for an individual based on
their balance profile. This guided their experimental design and selection of independent
variables. The faculty members mentored both teams in the development of the experiments, but
ultimately the students used the prior studies and their own experiences with the equipment to
create the final designs.
Figure 1. Balance board
Figure 2. Accelerometer App
Team 1 decided to test if gender and/or Body Mass Index (BMI) had an effect on time of balance
and the frequency; they tested balance in only the side-to-side direction. Team 2 wanted to test
the effect of height and direction (side-to-side versus front-to-back) on the amplitude and
frequency of an individual’s balance test. To keep the experiments simple, both teams decided to
use the 2² factorial experimental design they had previously used in a class assignment. Team 1
tested the two levels of gender (male/female) and divided BMI into low and high levels. A low
level of BMI was less than 25 and corresponds to a person that is considered normal weight or
underweight. Therefore, a high BMI level was any value 25 or over and corresponds to an
overweight or obese individual.6 The team tested three individuals in each category (three
females with a low BMI, three with a high BMI, etc.). Each subject completed the test once
resulting in 12 total data points for the experiment. Team 2 tested the two levels for direction
(side-to-side and front-to-back) and then tested only males with a “normal” BMI at two levels of
height. The cutoff height for the low/high levels was the average male height of 69.1 in.7 A
height below this value was considered “low” and anything above was considered “high”. Team
2 collected data from ten individuals in each category of height. Each subject performed both of
the tests, resulting in ten samples for each combination (side-to-side and low height, side-to-side
and high height, etc.), with a total of 40 data points.
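The two layouts can be generated mechanically; the sketch below builds both teams' two-factor, two-level designs with itertools.product, using the replicate counts reported above (3 per cell for Team 1, 10 per cell for Team 2). The variable names are illustrative only.

    # Sketch of the 2x2 (two-factor, two-level) layouts the teams used.
    # Replicate counts follow the text; the rest is bookkeeping for illustration.
    from itertools import product

    team1_cells = list(product(["male", "female"], ["low BMI", "high BMI"]))
    team2_cells = list(product(["side-to-side", "front-to-back"],
                               ["low height", "high height"]))

    team1_runs = [(g, b) for (g, b) in team1_cells for _ in range(3)]    # 12 data points
    team2_runs = [(d, h) for (d, h) in team2_cells for _ in range(10)]   # 40 data points

    print(len(team1_runs), len(team2_runs))   # 12 40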
Once the teams decided upon their experimental designs, they worked out a schedule to share the
board and collect their data. They then developed a method to convert the raw data into values
that could be used in Minitab 16.1 for their statistical analysis. Examples of the data from Team
1 and Team 2 can be seen in Figures 3 and 4, respectively. The only independent variable found
to have a statistically significant effect on balance from both experiments was height (p-value =
0.067) for Team 2 at a confidence level of 90%.
Figure 3. Subject data example from Team 1
Figure 4. Subject data example from Team 2
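The teams' exact conversion procedure is not documented here, but the sketch below shows one plausible transformation of a raw accelerometer trace into the kind of summary values (peak-to-peak amplitude and a dominant sway frequency) that could then be analyzed in Minitab; the signal, sampling rate, and function name are assumptions.

    # Hypothetical sketch of converting a raw accelerometer trace into summary
    # balance metrics before statistical analysis. The signal is synthetic and
    # this is not the teams' documented method.
    import numpy as np

    def summarize_sway(acceleration, sample_rate_hz):
        """Return (peak-to-peak amplitude, dominant frequency in Hz)."""
        a = np.asarray(acceleration, dtype=float)
        a = a - a.mean()                             # remove gravity/offset component
        amplitude = a.max() - a.min()
        spectrum = np.abs(np.fft.rfft(a))
        freqs = np.fft.rfftfreq(a.size, d=1.0 / sample_rate_hz)
        dominant = freqs[spectrum[1:].argmax() + 1]  # skip the DC bin
        return amplitude, dominant

    fs = 50.0                                        # assumed sampling rate, Hz
    t = np.arange(0, 10, 1 / fs)
    synthetic = 0.3 * np.sin(2 * np.pi * 0.8 * t) + 0.05 * np.random.randn(t.size)
    print(summarize_sway(synthetic, fs))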
Faculty Observations of the Process
During the first meetings with the teams, the faculty members discussed possible research
questions for the experiment such as is there a difference between front-to-back and side-to-side
balance, does it change over time (i.e., does the mere act of measuring it influence future
measurements), and does mild fatigue change balance and if so can it be measured? Some of the
considerations we were hoping the students would consider were sampling rates, length of
experiment, data recording and transmission, and transforming the data into something useful.
Since this was a new experiment for everyone involved, we left the project very open-ended and
let the students develop their own experimental design. Although the students met regularly with
the course instructor to make sure their statistical design was sound (it was required), they did
not take advantage of the other faculty member to help transform their data after collection. It
appeared that the students wanted to do their own thing and not what may have made their
projects more applicable and given them better results.
We as faculty hoped that the projects would result in the students finding what a balance profile
looks like, discovering how much movement there still is even if the person appears to be
balanced, how some factors such as gender, body mass index, and height may affect balance,
and being able to design a successful experiment. The students did develop fairly successful
experiments (as will be discussed in the next section) and determined that not just one or two
factors determine a balance profile, but not to the extent that they could have. It was also evident
that although they could use the app successfully in collecting the data, they had trouble with the
transformation and knowing how to extract the exact data they needed for the analysis. Although
we were not completely pleased with the results the students produced, we were pleased with the
possibilities we discovered throughout the process. We had used this as a trial-and-error process
to develop successful projects for future semesters, and that goal was achieved. However, there
are still a few things that should be adjusted and researched further.
Discussion of the Process and Future Use of the Project
The students included their opinions of what went wrong in their experiments as part of their
final reports. Some of their suggestions were to include more factors, increase the number of
subjects, be more restrictive in the testing process (e.g., make sure it is exactly the same for each
subject), increase the levels of the factors (e.g., not just low and high BMI), improve the
sampling rate and extraction of data, and improve the testing platform (e.g., one that is more
stable). These findings from the students reinforced to the faculty that the projects did indeed
teach the students what was intended even if there were issues with the end results. All the goals
of the pilot semester were met: the students applied one of the three statistical methods and
designed appropriate experiments; they collected engineering data using a non-traditional
method through their Smartphone and acceleration app; two teams used the same equipment but
developed two completely different experiments that could easily be repeated without
duplicating the exact same results or minutely tweaked to create additional experiments; the
students were engaged and invested in the projects throughout the process.
Overall, the students were very pleased with their experiences with the project. They stated that
it “improved [their] design outlook” and they enjoyed that the project allowed them to
“experience a full project through all the design stages.” The two teams also expressed that the
project helped with their scheduling skills because the two teams had to share the board: they had to test when they were scheduled or they might not get the chance again for a few days. Although the experiments did not yield statistically significant results, the teams mentioned that it was "interesting
to see how different people balanced and to figure out why they had better balance than others.”
These responses indicate that the project was a success in terms of student interest, learning the
experimental design process, and trying to understand how different factors affect balance.
The faculty member teaching the course had a unique insight into the projects as she had regular
meetings with both teams and was actually a subject in one of the experiments. None of the
comments and observations made by the students was a surprise to her, and she was pleased that the students recognized the shortcomings of their experiments yet felt that they learned a great deal
from the process. Since past senior capstone-related projects have not always received the same positive student feedback, we believe that this project will continue to be appropriate for the course and will keep the students interested. It will also allow for a minimum of two projects each semester, which will greatly reduce the burden of finding suitable projects. The use of the
Smartphone and app also showed that we could create additional projects with different apps,
further decreasing the number of outside projects needed. This project showed that although
students may know how to use various apps, they do not necessarily know what to do with the
data that is collected. Implementing projects that involve the use of mobile technology will not only "update" the course but also teach students that apps can be useful tools in engineering.
Although we attained results that met all our goals, there are still improvements that can be made to make the experience better for both the students and the faculty. First, there should be more
structure in the initial problem statement, especially if we want to create three or four unique
projects each semester. This would be as simple as including a question to be answered in the
project description. Examples are those that the faculty initially expected from the students: How
does fatigue affect balance? Does balance change over time? Is there a difference between front-to-back and side-to-side balance? With these preliminary questions, the students would have a
direction for what to measure and would need to figure out how to measure it. A second improvement would
be to further stress the importance of controlling variables that you are not directly measuring.
This would include the way each subject performs the test, how the data is recorded, etc. It is not
expected that a perfect experiment will be run, and failing to control some things actually teaches the students the importance of doing so, but these were items that the students in the pilot semester specifically mentioned. Making sure future students try to control at least one of these variables would illustrate the need for a structured plan while still demonstrating how environmental factors can affect results. The last main improvements are concerned with
collecting the data with the app and transforming it for further use.
Probably the biggest problem observed by both the students and the faculty members was how to collect the correct data with the app and how to transform it for analysis. This is still being examined. Since the students did not take advantage of the faculty member who had the expertise in this area, they had trouble first deciding exactly how to collect the "right" data and
then how to select and use the data they collected. Since a Smartphone can collect data through
accelerometers and gyroscopes, the students needed to first recognize what was being measured
while the person was balancing and which would be best for their analysis. They then needed to
understand the profiles of that data and recognize the events represented (e.g., when the balance
board hit the ground). The students took all of this on themselves during this pilot semester and
the data they collected was actually not as useful as they originally thought. This means that we
will need to collect more data with the balance board before we can include appropriate ways to
transform the data. The faculty will be working on this aspect during the Spring semester so that
the projects will be ready in Fall 2014.
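As a purely illustrative example of the kind of transformation the students struggled with, the sketch below (Python, with an assumed spike threshold and sample layout) shows one simple way to recognize events such as the balance board edge hitting the ground and to keep only the quiet stretches in between; the actual processing pipeline for the Fall 2014 projects is still being developed.

def split_on_impacts(accel, threshold=3.0):
    # Return (start, end) index pairs for stretches of the acceleration
    # trace that lie between large spikes, which are assumed here to mark
    # the board edge striking the ground.  The threshold value is assumed.
    segments, start = [], 0
    for i, a in enumerate(accel):
        if abs(a) > threshold:
            if i > start:
                segments.append((start, i))
            start = i + 1
    if start < len(accel):
        segments.append((start, len(accel)))
    return segments

Each returned segment could then be reduced to a balance metric or timed to see how long the subject stayed balanced before the board touched down.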
Conclusion
We believe that the use of Smartphones and mobile computing devices will provide us multiple
projects in the foreseeable future with a little more structure and research on how to transform
the data. This will reduce the stress of finding new projects each semester with minimal funds. It
will also allow students to gain experience with a new engineering tool: apps. The results of this
pilot semester indicate that simple projects with just a balance board and Smartphone can teach
engineering students the process of designing and analyzing a statistical experiment.
1. S. Deans, "Determining the validity of the Nintendo Wii balance board as an assessment tool for balance" (MS thesis, University of Nevada, Las Vegas, 2011). Accessed October 24, 2013, http://digitalscholarship.unlv.edu/cgi/viewcontent.cgi?article=2241&context=thesesdissertations.
2. J.D.G. Kooijman, J.P. Meijaard, J.M. Papadopoulos, A. Ruina, and A.L. Schwab, "A bicycle can be self-stable without gyroscopic or caster effects," Science, 332 (2011): 339-342. doi:10.1126/science.1201959.
3. C. Maurer, T. Mergner, B. Bolha, and F. Hlavacka, "Human Balance Control during Cutaneous Stimulation of the Plantar Soles," Neuroscience Letters, 302 (2001): 45-48.
4. T.A. McGuine and J.S. Keene, "The Effect of a Balance Training Program on the Risk of Ankle Sprains in High School Athletes," American Journal of Sports Medicine, 34 (2006): 1103-1111.
5. D. Winter, "Human Balance and Posture Control during Standing and Walking," Gait & Posture, 3 (1995): 193-214.
6. National Institutes of Health, National Heart, Lung, and Blood Institute, Aim for a Healthy Weight: Calculate Your Body Mass Index. Accessed December 31, 2013, http://www.nhlbi.nih.gov/guidelines/obesity/BMI/bmicalc.htm.
7. Henry Dreyfuss Associates, "Anthropometry," in The Measurement of Man and Woman, Revised Ed. (New York: John Wiley & Sons, 2002), 12.
Paper ID #8546
Work in Progress: Using Videotelephony to Provide Independent Technical
Critique of Student Capstone Projects
Dr. Walter W Schilling Jr., Milwaukee School of Engineering
Walter Schilling is an Associate Professor in the Software Engineering program at the Milwaukee School
of Engineering in Milwaukee, Wis. He received his B.S.E.E. from Ohio Northern University and M.S.E.S.
and Ph.D. from the University of Toledo. He worked for Ford Motor Company and Visteon as an Embedded Software Engineer for several years prior to returning for doctoral work. He has spent time at NASA
Glenn Research Center in Cleveland, Ohio, and consulted for multiple embedded systems companies in
the Midwest. In addition to one U.S. patent, Schilling has numerous publications in refereed international
conferences and other journals. He received the Ohio Space Grant Consortium Doctoral Fellowship and
has received awards from the IEEE Southeastern Michigan and IEEE Toledo Sections. He is a member
of IEEE, IEEE Computer Society and ASEE. At MSOE, he coordinates courses in software quality assurance, software verification, software engineering practices, real time systems, and operating systems, as
well as teaching embedded systems software.
Dr. John K. Estell, Ohio Northern University
John K. Estell is a Professor of Computer Engineering and Computer Science at Ohio Northern University.
He received his MS and PhD degrees in computer science from the University of Illinois at Urbana-Champaign, and his BS in computer science and engineering from The University of Toledo. His areas
of research include simplifying the outcomes assessment process, first-year engineering instruction, and
the pedagogical aspects of writing computer games. John currently serves as Chair of the Computers in
Education Division and was one of the principal authors of the Best Paper Rubric used for determining the
Best Overall Conference Paper and Best Professional Interest Council (PIC) Papers for the ASEE Annual
Conference. He is a past recipient of Best Paper awards from the Computers in Education, First-Year
Programs, and Design in Engineering Education Divisions. Dr. Estell is an ABET Commissioner, Vice
President of The Pledge of the Computing Professional, a Senior Member of IEEE, and a member of
ACM, ASEE, Tau Beta Pi, Eta Kappa Nu, Phi Kappa Phi, and Upsilon Pi Epsilon.
Dr. Khalid S. Al-Olimat P.E., Ohio Northern University
Dr. Khalid S. Al-Olimat is professor and chair of the Electrical & Computer Engineering and Computer
Science Department at Ohio Northern University. He obtained his BS in Electrical Engineering from Far
Eastern University in 1990, the MS in Manufacturing Engineering from Bradley University in 1994 and
his PhD in Electrical Engineering from the University of Toledo in 1999. Dr. Al-Olimat is the recipient
of Henry Horldt Outstanding Teacher Award in 2004. He is a senior member of IEEE and the chair of
IEEE-Lima section. His areas of interest are power engineering, adaptive, fuzzy and intelligent control.
Dr. Al-Olimat is a registered professional engineer in the State of Michigan.
© American Society for Engineering Education, 2014
Work in Progress: Using Videotelephony to Provide Independent
Technical Critique of Student Capstone Projects
Abstract
As part of the requirement for ABET accreditation, an engineering program is expected to have a
curriculum culminating in a major design experience, commonly referred to as either a “senior
design” or a “capstone” project, based on the knowledge and skills acquired in earlier course
work. One challenge that programs face is providing appropriate technical and professional
feedback to students on their capstone projects. For example, students may be working in an
application domain in which the faculty member has limited knowledge, or may be using newer
technologies that the faculty member has not used before. To overcome these problems, it is
often advantageous for the team to partner with an industrial mentor. The industrial mentor can
provide technical assistance to the project as well as provide impartial and unbiased feedback on
the status of the project. However, this is not always feasible, as not all campuses are located in
urban areas, and project stakeholders may not be local to the institution.
This work in progress article describes the first part of a two part project to overcome some of
these problems. In this portion of the project, capstone teams work directly with an alumnus
who serves as “External Industrial Reviewer” on the project using remote video conferencing.
As the students complete significant milestones, deliverables are critiqued by the External Industrial Reviewer, who is impartial to the project. Students receive feedback on the deliverable, which can then be incorporated into future work. This article describes the concept and provides a preliminary assessment of the technique.
Introduction
Managing capstone projects at small institutions represents a significant logistical challenge as,
with a limited number of engineering faculty, the specific technical knowledge necessary to
properly direct an industry-sponsored or cutting-edge capstone design experience might not be
present. This problem may be further complicated if the institution is located in a rural area, as
their students are often forced to work in physical isolation, their clients are not nearby, and
travel constitutes an inconvenience. The College of Engineering at Ohio Northern University, a private four-year institution located in the village of Ada, OH, exhibits these characteristics.
There are approximately 450 students in the college, and the nearest major cities (i.e., those with
a population of 100,000 or greater) are between one and two hours away from campus. While
this isolated environment offers many advantages from an academic standpoint, it also poses
many significant challenges. In the ECCS Department, which comprises approximately 140 students, one faculty member, the Senior Design Coordinator, is tasked with the coordination of all senior capstone projects, while the remaining nine faculty members serve as team
advisors1,2. Furthermore, there has been an increased emphasis on multidisciplinary projects,
requiring the standardization and coordination of many elements of the capstone experience and
fostering greater interaction with students and faculty in other departments3,4,5.
Capstone Project Scope and Milestones
The department solicits project proposals prior to mid-April of the junior year. A number of
projects are sponsored and/or provided by external industrial clients, whereas other projects are
proposed by faculty members of the department. Proposals are generally in the form of a one- or
two-paragraph statement that identifies an opportunity or a need and puts forth a concept that can
address that opportunity or need. All submitted project proposals are first reviewed in a
department faculty meeting to ensure that they have an appropriate technical level of complexity.
The approved project abstracts are then distributed to the junior students. Each student is asked
to choose three or four project proposals and submit them in ranked order of preference. This
feedback is reviewed by the faculty for their comments, after which the department chair and
senior design coordinator assign students to appropriate project groups based on student
capabilities, project needs, and placement preferences. Each team is advised by a faculty member, and students start interacting with their faculty project advisor prior to the end of the
junior year.
The senior capstone experience consists of a year-long sequence of two courses: ECCS 4711 Senior Design 1, offered in the fall semester, and ECCS 4721 Senior Design 2, offered in the spring semester. Projects are completed in teams of between 3 and 5 students, and there are approximately 10 teams. All teams are required to meet on a weekly basis with their
faculty project advisor. There are two required oral presentations (the proposal presentation and
project progress presentation) to be given before the Project Review Board (PRB), consisting of
the faculty members of the department, and a submission of the written proposal during the
duration of the first course. The PRB members evaluate the projects based on both the oral
presentations and written report using specific rubrics. The senior design coordinator provides
the industry mentor with the written proposal and the related rubric ahead of the teleconference (usually a week earlier). Both the PRB and the industry mentor provide
suggestions to improve the quality of the design proposals.
In the second senior design course, student teams continue in the implementation phase, working
closely with their advisors on their design project while employing all steps in the engineering
design process in the production of a working prototype. Teams are required to submit a written
progress report and give an oral presentation to the members of the PRB at the middle of the
semester. This way, the department ensures that there is sufficient time in the process for revisions to be made, thereby supporting the iterative nature of real-world design and making students better prepared for the college-wide design showcase event in April.
In the first week of April, student teams present their projects and working prototypes through a poster session in the college design showcase. The posters are judged jointly by members of the IEEE Lima Section, the department's Program Working Groups (which include the industry
mentors), and other representatives from industry.
After the poster session, final touches are made to the projects and student teams focus on the
documentation deliverables. Primary emphasis is placed on a professional quality written
technical report, which includes detailed design documentation, and is graded by multiple faculty
members. An oral presentation of their completed project to their peers and faculty is also a
major course component. Finally, each team develops a web site for their project, which serves as a document repository.
Previous Work
The management of capstone projects has been the topic of numerous papers, both at ASEE and
other conferences. Since ABET's adoption of an outcomes-based assessment process in 2000, increased emphasis has been placed on both the quality and the realism of the capstone experience.
Many authors have discussed mechanisms to improve the quality of capstone projects. Paliwal
and Sepahpour6 discuss the addition of significant review activities by faculty members as a
mechanism for improving the outcomes of the design experience. Green et al. 7 talk in depth
about the issues related to implementing an interdisciplinary senior design capstone experience,
focusing on the need for faculty specific roles as well as appropriate team composition on the
basis of technical skills and knowledge. Teng and Liu8 discuss the managerial models which can
be applied to capstone projects with industry collaboration. Fries et al.9 discuss the importance
of working directly with a practicing engineer in industry on a routine basis.
The Concept
In industry, it is often required that a project undergo an independent review. When a project is
independently reviewed, an outside expert is brought in to the project to examine the material
and determine the quality of the work. In 2007, the National Research Council asserted that
independent reviews are one of the means by which industrial firms become successful at project
management and continue to stay that way10.
Independent review processes have already been added into some capstone projects. Delaney11
reports on the advantages of having students independently reviewed on their capstone projects.
In this case, the independent review focuses more on technical presentation than technical
content, but there still is the element of independent review. McGinnis and Welch12 discuss how
feedback from an industrial partner works when projects are sponsored by an external entity.
However, not all projects have an industrial partner. Furthermore, in many cases the industrial
partner only reviews the scheduling and project management aspects of the project, not the
technical aspects of the project.
As has been stated previously, there are occasions where the faculty member who is assigned to
mentor the design team may not have full knowledge of either the technology or the domain of
the problem. This can be problematic, for if the students encounter difficulties, the faculty
member may not have the appropriate depth of knowledge to provide adequate guidance.
However, programs which have a strong working relationship with their industrial advisory
board have a great resource available to them. Besides being a valuable means for being in touch
with one’s constituent groups as required by ABET, industrial advisory boards offer programs a
mechanism for soliciting volunteers who are practicing engineers in industry. Often comprised
of alumni, these advisory board members are more than willing to contribute to the success of future students through active participation. To tap this resource, a new position was added to the
capstone project organization chart: the external technical evaluator. As illustrated in Figure 1,
the Senior Design Coordinator recruits an external technical evaluator from the program’s
industrial advisory board to the team, with an attempt to provide the best match between the
needs of the projects and the technical expertise of the selected board member. This evaluator is
tasked with providing a technical review of major project artifacts. The hypothesis being tested
is that this approach will lead to increased technical quality of capstone design projects,
increased professional quality of capstone design projects, and increased interaction with
institutional alumni while creating minimal administrative overhead.
Figure 1: Conceptual Roles
To be effective, the review process must attempt to minimize the amount of additional
administrative overhead incurred by the project. Because of location and scheduling issues,
reviews have been, and will continue to be, conducted entirely by Skype or Google Chat
mechanisms, thereby replacing the need for face-to-face meetings. This also, in some regards,
increases the realism of the process, for it is now common for large industrial companies to
conduct major technical reviews of projects distributed amongst several geographically-diverse
teams via teleconference.
Proof of Concept and Planned Assessment
Due to the experimental nature of this initiative, students enrolled in the 2013-2014 capstone
design sequence at Ohio Northern University were advised of the nature of the experiment and
given the option to request an independent reviewer. Interested teams were required to obtain
the concurrence of their capstone design advisor. For those teams that were chosen to
participate, the capstone course instructor selected an external technical evaluator from the pool
of volunteers from the ECCS Department’s industrial advisory board. The remaining capstone
design teams are being used as a control.
As each significant deliverable of the capstone design process is completed, the team reviews the
deliverable with their faculty advisor using already established processes. However, the team
also provides a copy of that deliverable to the external technical evaluator for critique. The team
and the evaluator then arrange to meet via teleconferencing applications such as Google Chat.
This allows the evaluator to provide technical feedback on the project while the team has the
opportunity to hone their technical communication skills. Communications are limited to only
one per significant deliverable to ensure that the role is not burdensome to either the external
technical evaluator or the capstone design team, and the design team is free to either accept or
reject the feedback as they see fit.
As of the time of publication, a thorough assessment of the technique has not yet been
conducted, as the projects have not yet been completed. However, preliminary comments, as
shown in Appendix B, have been favorable.
Following the completion of the 2013-2014 academic year, the effectiveness of this approach
will be assessed, with the intent of scaling up the approach if it is deemed to be successful.
Thus, as of the time of drafting this article, detailed assessment results are not available.
However, an assessment approach has been laid out which will involve a simple survey of the
participants. The survey for the participating capstone design students is given in Appendix A;
similar instruments will be developed to solicit input from the external technical evaluators and
associated faculty advisors. In addition to the survey, the projects of the participating teams will be compared against the control group at the completion of the capstone process to ascertain whether the projects that had external involvement ended up as higher quality, more complete, and/or more professionally polished projects. Unfortunately, a
purely objective measure may be difficult to obtain, given the extreme variability in capstone
project content.
Future Plans
If a favorable assessment of the first part of this project is obtained, then the second phase will be
implemented. In the second phase of the project, external review will be a required aspect of
capstone design projects, and all teams will be assigned an appropriate external technical
reviewer. Additionally, instead of using synchronous video techniques, the reviews may shift to
an asynchronous format. This asynchronous format will allow for the reviews to be directly
included in the project portfolio. The evaluation will be conducted again, and a further
comparison of the effectiveness of the technique will occur.
Conclusions
This article has presented a work in progress at Ohio Northern University, with the goal of
enhancing the quality of capstone design projects. Students are assigned an external technical
reviewer from the program’s industrial advisory committee, and this reviewer provides an
independent assessment of the capstone deliverables. Students then use this independent
feedback to improve their project. An assessment mechanism to determine the effectiveness of
the approach has been presented, as well as an overview of the next phase of the project if the
proof of concept is deemed to be successful.
Bibliography
[1] "Assessment of the Results of External Independent Reviews for US Department of Energy
Projects," National Research Council, 2007.
[2] K. Delaney, "Evaluating "Independent" Assessment of Capstone Projects by Mechanical Engineering
Students in DIT," in Proceedings of Edulearn, Barcelona, 2011.
[3] K. Al-Olimat, "An Industry Sponsored Capstone Project: A Story of Success," in Proc. 2010 ASEE
Annual Conference, Louisville, KY, 2010.
[4] J. K. Hurtig and J. K. Estell, "Truly Interdisciplinary: The ONU ECCS Senior Design Experience," in
Proc. 2005 ASEE Annual Conference, Portland, OR, 2005.
[5] J. K. Hurtig and J. K. Estell, "A Common Framework for Diverse Capstone Experiences," in Proc. of
the 2009 Frontiers in Education Conference, San Antonio, TX, 2009.
[6] J. K. Hurtig and J.-D. Yoder, "Lessons Learned in Implementing a Multi-disciplinary Senior Design
Sequence," in Proc. 2005 ASEE Annual Conference, Portland, OR, 2005.
[7] J. K. Estell, D. Mikesell and J.-D. Yoder, "A Decade of Multidisciplinary Capstone Collaboration," in
Proc. 2014 Capstone Design Conference, Columbus, OH, 2014.
[8] M. Paliwal and B. Sepahpour, "A Revised Approach for Better Implementation of Capstone Senior
Design Projects," in Proc. 2012 ASEE Annual Conference, San Antonio, TX, 2012.
[9] M. Green, P. Leiffer, T. Hellmuth, R. Gonzalez and S. Ayers, "Effectively Implementing the
Interdisciplinary Senior Design Experience: A Case Study and Conclusions," in Proc. 2007 ASEE
Annual Conference, Honolulu, HI, 2007.
[10] S. G. Teng and P. C.-H. Liu, "Collaborative Environments for Managing Industrial Projects," in Proc.
2003 ASEE Annual Conference, Nashville, TN, 2003.
[11] R. Fries, B. Cross and S. Morgan, "An Innovative Senior Capstone Design Course Integrating External
Internships, In-Class Meetings, and Outcome Assessment," in Proc. 2010 ASEE Annual Conference,
Louisville, KY, 2010.
[12] M. McGinnis and R. Welch, "Capstones with an Industrial Model," in Proc. 2010 ASEE Annual
Conference, Louisville, KY, 2010.
Appendix A: Senior Project Industrial Peer Review Assessment Survey
1) How much time did you spend working with your industrial reviewer?
a) None
b) 1 - 5 minutes
c) 6 - 10 minutes
d) 11 - 15 minutes
e) 16 - 20 minutes
f) 21 - 25 minutes
g) 26 - 30 minutes
h) 31 - 35 minutes
i) 36 - 40 minutes
j) 41 - 45 minutes
k) More than 45 minutes
2) How large was the artifact which your industrial reviewer reviewed?
a) No artifact was provided to the reviewer
b) 1 - 4 pages
c) 5 - 8 pages
d) 9 - 12 pages
e) 13 - 16 pages
f) 17 - 20 pages
g) 21 - 24 pages
h) More than 24 pages
3) How many major issues did your reviewer find in your artifact?
a) None
b) 1 - 2 issues
c) 3 - 4 issues
d) 5 - 6 issues
e) 7 - 8 issues
f) 9 - 10 issues
g) more than 10 issues
4) How many minor or other simple issues did your reviewer find in your artifact?
a) None
b) 1 - 2 issues
c) 3 - 4 issues
d) 5 - 6 issues
e) 7 - 8 issues
f) 9 - 10 issues
g) more than 10 issues
5) The review session with an industrial reviewer found defects which were not found by my senior design advisor.
a) yes
b) no
6) I found the review session by an industrial reviewer to be extremely useful.
a) strongly agree
b) agree
c) neutral
d) disagree
e) strongly disagree
7) I would highly recommend that future teams be able to have an industrial reviewer review their artifact as part of the senior design process.
a) Strongly agree
b) Agree
c) Neutral
d) Disagree
e) Strongly Disagree
8) Do you have any other comments on the industrial review process?
Appendix B: Sample Student Comments
1. When we had our Skype meeting, Nick went over several things that were really helpful in
making our paper better. He even touched on things we ourselves said we wanted to add but
didn't change before he looked at it. This made it all the more reason to fix those issues. I thought
getting feedback from Nick was great. I hope the ECCS department does this for senior's next
year. I do wish the next time it is done, we had the groups that weren't presenting at the time wait
outside so they don't distract the current team speaking with Nick. It would make it more
comfortable for the presenting team.
2. I feel like the Skype conversation with Nick. For our project I feel he gave some pretty
worthwhile advise and direction. It would have been nice to possibly have this conversation
happen a little bit sooner rather than so close to the PRBs. I feel this could be a great requirement
to add to the senior design process. I don't think it really adds too much work on the student's end
and has the potential to be very beneficial.
3. I do believe our Skype session with Mr. V. was a good feedback mechanism for our Sr. Design
project. Since Mr. V. has almost 10 years of industry related experience, he was able to point out
if we were "dreaming too big" or being unrealistic in some of our proposed deliverables for the
final presentation. Basically, he helped us in narrowing the scope of our project by encouraging
us to think more in short term success of our prototype, rather than what we ideally want to
create.
4. I did think the Skype conversation with Mr. V. was a good idea. Personally, I think he did make
some good points about our project, but unfortunately most of them went against what our
advisor was wanting so we did not get to incorporate many of his ideas. But, it was nice to get an
outsider's opinion on the project. I think it is a good idea to use this in the future. It didn't take
that long to actually have the conversation, and the information discussed could be valuable to
future groups.
5. In our experience with the professional in the field webcam chat, the advice did not seem to do
anything productive for the project. In fact, he was discouraging with his opinionated concerns of
even pursuing this capstone project that we were trying to accomplish. It seemed as if he did not
read our paper in its entirety or did not completely understand the outlined goals. On several
occasions he would try and convince us to do something either too complicated or attempt a
project that would take too much time. He was quoted as saying “I don’t buy it, its not going to
work.” However, aside from the negative comments he did help with paper formatting and
information that needed to be included.
As a group it was agreed that there shouldn't be just be one “catch-all” professional that helps all
the groups because although he may be intelligent in a couple specific fields he does not
necessarily have expertise to criticize our type of project. Instead, this should be done
independently by each group to obtain a professional in the field more involved with their
respective topics for better feedback. It is also recommended to not be a mandatory assignment
because the opinions of an outside source should not reflect on us as a grade, it should be a
beneficial experience, not a potentially harmful one. We thought that by doing a short, elevator
pitch of the project and having question period afterwards would provide better knowledge for
both ends and could clear up communication issues. Since it was early enough in the semester the
proposals were not very detailed anyways, so he wouldn’t need all of the extra information such
as planning, budget expenses, or literature review. Reading through all of the papers by the
different capstone groups takes too much time and they have already been critiqued multiple
times by other professors. This would help get straight to the point of the goals and objectives of
our project while minimizing confusion. The question period, open to both parties, would be an
opportunity to learn more about the project and possibilities rather than just listening to his
opinions that were formed while reading a drafted proposal.
Paper ID #10516
Work in progress: A first year common course on computational problem
solving and programming
Dr. Bruce W Char, Drexel University (Computing)
Bruce Char is Professor of Computer Science in the Department of Computing of the College of Computing and Informatics at Drexel University.
Dr. Thomas T. Hewett, Drexel University
Tom Hewett is Professor Emeritus of Psychology and of Computer Science at Drexel University. His
teaching included courses on Cognitive Psychology, Problem Solving and Creativity, the Psychology of
Human-Computer Interaction, and the Psychology of Interaction Design. In addition, he has taught one-day professional development courses at both national and international conferences, and has participated
in post-academic training for software engineers. Tom has worked on the design and development of several software projects and several pieces of commercial courseware. Some research papers have focused
on the evaluation of interactive computing systems and the impact of evaluation on design. Other research
papers have explored some of the pedagogical and institutional implications of universal student access to
personal computers. In addition, he has given invited plenary addresses at international conferences. Tom
chaired the ACM SIGCHI Curriculum Development Group which proposed the first nationally recognized
curriculum for the study of Human-Computer Interaction. Tom’s conference organizing work includes being Co-Chair of the CHI ’94 Conference on Human Factors in Computing Systems and Program Chair
for the 2013 Creativity and Cognition Conference.
© American Society for Engineering Education, 2014
A first year common course on computational problem solving and
programming
Abstract
This is a report on work-in-progress for an entry-level course, Engineering Computation Lab, in
which engineering and other STEM students learn about computational problem-solving and
programming. It provides a hybrid (on-line and in-person) environment for learning introductory
programming and computational problem-solving. It runs at scale, serving 800-1000 engineering
students per term. Pedagogically, it uses active and problem-based learning in contexts more
oriented towards the needs of engineering students than typical generic “intro programming”
courses. Autograded exercises and on-line access to information have been key to feasible
operation at scale. Learning how to operate effectively and smoothly at scale, across a variety of lead instructor preferences and with the particular computational needs of engineering students, has been the continuing challenge. We report on our experiences, lessons learned, and plans for the
future as we revise the course.
Course objectives
Use of computation is indisputably part of every engineer's foundational training. However,
there does not appear to be a consensus on the extent of such training, or its outcomes. Training
for professional software developers (as evidenced by what it would take to be seriously
considered for a professional software development position nowadays) would seem to include
the equivalent of at least several terms of courses to achieve a working knowledge of software
development: programming in two or more languages, data structures, performance analysis,
software design, and basic principles of software engineering such as testing, documentation, and
archival storage. However, the conventional undergraduate engineering degree is already full of
other mandated science and discipline-specific course work. Until the day arrives when enough time is given to establish mastery of software development, course designers would seem to need to settle for the goal of introduction: initial experience with the skills and knowledge needed to create and use software to solve problems typical of an engineer's work. This includes:
• Concepts of simple computation: command-based access to technical functionality;
scripting and introductory programming (variables, control statements, functions).
• Application of computation to engineering: concepts of simple mathematical modeling
and simulation, and their use to solve engineering design problems.
• Software engineering: how to build software, get it working quickly, and have confidence that it works well. Also, how to better amortize the cost of building software by designing for reuse.
Mastery of these concepts is clearly beyond a single course, or even a year-long sequence of courses. Yet postponement to the sophomore or junior year blocks access to even simple computation skills and concepts in the first years, which prevents more sophisticated use of software when it might be used by some students for educational benefit.
Engineering Computation Lab, 2006-2013
The course was originated and underwent pilot development in 2005 under Jeremy Johnson with
an enrollment consisting primarily of first-year Computer Science majors. The first full
deployment of the course began in 2006 at a scale of approximately 600 students/term. In 2008
we made the transition into a course that was a hybrid of in-person lab activities and out-of-class on-line autograded exercises, with approximately 800 students per term. The course operates during the three quarters of our institution's academic year as a series of 15 two-hour class meetings (one unit credit per quarter) to better achieve the benefits of spaced learning1. During a term there were four two-hour labs, with the fifth meeting being a two-hour proficiency exam. Each lab had an on-line pre-lab prep quiz and a post-lab on-line homework assignment. The course typically
ran as approximately 30 lab sections of 30-35 students, across 20 different time periods. This is
an example of a “flipped classroom,” in that most of the contact time was spent in active learning
from lab activities.
Choice of language
The first version of the course used Maple2 as the computation system and programming
language. Maple was selected for several reasons.
1. Maple is interactive, similar to systems such as Python, MATLAB or Mathematica that
allow immediate execution and display of a single operation without a compilation
phase. This leads to more immediate feedback and interaction, better suited to developing a "personal computing" style of work – getting results that are of interest and immediately useful for an individual's work. Pedagogically, it allows students to get at least simple
results immediately, with incremental growth from that point.
2. Maple has a large library of STEM procedures, permitting use of sophisticated technical
computing without extensive user programming. Typical small-scale software
development consists of writing the scripting connecting invocations of library
procedures, and providing the user interface programming that allows facile
comprehension of computed results through tabular listing, plots and animations, etc.
3. Maple's standard user interface is the interactive worksheet, which allows text, 2-D mathematics, code, and graphics to be combined in a WYSIWYG fashion. A publishing mechanism
allows generation of html or pdf from worksheets. This allowed us to write the lab and
course notes3 using the same tools and in the same environment that the students used for
their own work. For example, lab work could be entered by students within the same
document that distributed the directions. The availability of “clickable math”4 in the
interface allowed an incremental approach to the introduction of linear expression and
command syntax used in most conventional programming languages, providing a gentler
learning curve to the syntax that can often be a work bottleneck for beginners.
4. The course's on-line autograding system, Maple T.A.™, uses the same programming language. Since Maple T.A. allowed questions to accept student responses using Maple
syntax, this made it particularly easy to ask autograded programming questions that
required students to submit code as their answer.
Lab activities
Before each lab, students were expected to read course notes3, lab notes5 and take an on-line
pre-lab prep quiz, to ensure familiarity with the programming concepts needed and activities that
they would be asked to do in the lab. Labs typically started with a short (20 minute) lab
overview presented by the lab instructor. This was the instructor's opportunity to emphasize what the students should have begun to understand through the readings and the pre-lab quiz. Labs
typically led students through a number of tasks. Work was in groups of two or three students,
based on evidence-based research that such work enables better learning compared to individual
work6,7.
Illustration 1: Lab exercise from 5. In the previous lab, students had completed scaffolding of an HVAC time-step simulation. In this lab, they learn how to steer simulation runs and display graphical output by building a GUI in Maple that invokes the simulation.
Illustration 2: Sample coding problem from Maple T.A.
Often there would be "scaffold" programming for the students to complete. Typical labs would introduce some computational or programming concepts and ask the students to complete the programming for the following kinds of tasks:
1. Create a plot for the population growth of a species in an ecosystem, and make
predictions based on the graphical and numerical results.
2. Create an animation of the trajectory of a human cannon ball (from 8, pp. 462-465), based
on given formulae for velocity/position. Variations included an improved model taking
into account wind resistance, or generating a position plot of a bouncing ball.
3. Calculate a least-squares trendline from given time-temperature data, or estimated-versus-actual distance measured from a sensor, and answer situational questions.
4. Write a control program for an autonomous car simulator, and test it on varying terrain
with a common feature.
5. Calculate and plot the dynamic behavior of a chemical reaction involving four chemicals
described through difference equations (approximating differential equations).
6. Calculate the area under a curve by developing a piecewise polynomial spline fit to data and applying symbolic definite integration (originated by 9). Use this to calculate the power developed during a baseball batter's swing from time-velocity measurements.
7. Complete and extend a time-step simulation of an HVAC system (see Illustration 1; a minimal time-step sketch follows this list). Analyze a design space by varying heating/cooling parameters. Design a control program for a ventilation fan and observe homeostatic temperature behavior based on the fan control parameters.
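The sketch below is the minimal time-step simulation referenced in item 7. It is written in Python rather than Maple purely for illustration, and the one-room model, parameter values, and on/off fan rule are all assumptions; the course's actual scaffolding is given in the lab notes5.

def simulate_room(hours=8.0, dt=0.01, t_outside=35.0, t_set=22.0):
    # March a one-room temperature model forward with a simple Euler step;
    # the fan switches on whenever the room is above the setpoint, which
    # produces the homeostatic oscillation students are asked to observe.
    k_leak, k_fan = 0.2, 4.0      # assumed heat-gain and fan-cooling rates per hour
    t_room = 25.0
    times, temps = [], []
    t = 0.0
    while t < hours:
        fan_on = t_room > t_set
        dT = k_leak * (t_outside - t_room) - (k_fan if fan_on else 0.0)
        t_room += dT * dt
        times.append(t)
        temps.append(t_room)
        t += dt
    return times, temps

Students would vary parameters such as the fan cooling rate or the setpoint and plot the returned series to explore the design space.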
The lab work included getting the students to consider and answer questions that involved interpreting and steering the use of programs they had developed, rather than just getting code to pass specified tests. This is one of the major differences between this course and a generic "intro programming" course. It is the ability to design, implement, and then use computation to solve problems that is most important to STEM students who need to develop programming skills. We found that doing this does not occur "for free" – there are time tradeoffs involved in having students learn about using computation for problem-solving, rather than learning additional programming features.
Illustration 3: Autonomous Car Simulator Scenario. Students were given an API for the simulator and asked to write a Maple control program to navigate through a family of scenarios. The white space is the "road" while the green areas are road shoulders. The blue square is the target destination.
While this first course does not provide a complete education in computational engineering, we
believe it provided a way to get students engaged early with the application of computation
skills. This makes possible follow-on courses where the integration of programming and
problem-solving skills can be mastered through repeated, increasingly complex cycles of
instruction.
Autograded problems: pre-lab and post-lab activities
Maple T.A.10 is a proprietary on-line system for administering on-line exercises and tests. Like
other on-line systems, it allows instructor-constructed questions of the conventional sort –
multiple choice, fill-in-the-blank, matching, etc. Studies have indicated the effectiveness of on-line training for programming11. We also took advantage of the cost/time savings of centralized computerized grading, which reduced the staff resources needed for grading and its administration. The reason we chose it over more common systems is that the Maple engine can be called on for problem generation and answer checking, giving it particular strength for work in the STEM area.
Maple T.A. was used in the course as a vehicle for creating and delivering questions for pre- and
post-lab activities, as well as the in-class proficiency exam at the end of each term. After a few
years' development, we came up with a battery of prospective problems that we could offer on a
rotating basis, requiring only incremental or evolutionary revision from year to year. We relied on instructor programmability to develop questions appropriate for the dual problem-solving/programming nature of the work, at scale. In particular, we developed techniques for delivering problem variants to students using random selection, random parameter values, and random variation of the problem and its solution algorithm, as described further in12,13. Having
found a problem area, we would develop several questions that could be answered through
computation from a model or code. We would write the program that could check solutions for a
variant. When the student asked for their assignment, the system would generate the particular
version of the problem, along with the problem description and answer-checking specific to that
version.
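The sketch below illustrates only the variant idea, not Maple T.A. itself (which performs this generation and checking through the Maple engine); it is a hypothetical Python stand-in in which a per-student seed produces randomized parameters, the question text, and an answer checker.

import random

def make_trendline_variant(seed):
    # Generate one randomized variant of a simple trendline question and
    # return the question text plus the numeric answer used for checking.
    rng = random.Random(seed)        # reproducible for a given student
    slope = rng.randint(2, 9)
    intercept = rng.randint(10, 40)
    hour = rng.randint(3, 12)
    text = ("A sensor reads {} units at time 0 and rises {} units per hour. "
            "What is the reading after {} hours?").format(intercept, slope, hour)
    return text, intercept + slope * hour

def check_answer(submitted, answer, tol=1e-6):
    # Accept any response within a small tolerance of the stored answer.
    return abs(float(submitted) - answer) <= tol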
Illustration 3 shows a scenario for an autonomous car controller problem. Students in lab
became familiar with an API for a simulated car controller, which allowed a car to move and turn
based on “stylized” sensor readings of its proximate environment. In the autograded problems,
they built upon their experience to write control programs for obstacles more ambitious than
those they tackled in lab. The variant creation would change the obstacle course so that
different sequences of actions and turns were necessary. The presentation would include a dynamically generated textual description of the scenario, as well as animations illustrating the obstacle courses that provided the tests for the student controller.
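The class and method names below are hypothetical stand-ins (the real labs used a Maple API for the simulator); the sketch is meant only to show the general shape of a student control program that reads stylized sensor values and issues move commands until the target is reached.

class ToyCar:
    # A trivial stand-in simulator: a straight "road" ten cells long with
    # the target at the far end and no shoulders to avoid.
    def __init__(self):
        self.position = 0
    def at_target(self):
        return self.position >= 10
    def sense_ahead(self):
        return "road"            # stylized sensor reading
    def drive(self, cells):
        self.position += cells

def drive_course(car, max_steps=50):
    # Student-style control loop: stop at the target, stop if the sensor
    # reports a shoulder, otherwise keep advancing one cell at a time.
    for _ in range(max_steps):
        if car.at_target():
            return True
        if car.sense_ahead() == "shoulder":
            return False         # a fuller controller would turn instead
        car.drive(1)
    return False

print(drive_course(ToyCar()))    # True for this trivial course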
Because of the bi-weekly nature of the lab meetings, there was a week where students did not
meet staff in lab. This week was used for makeup labs and for completion of autograded
homework exercises. The nature of these exercises combined simple questions about knowledge
of language syntax, or one-line code solutions, with problems that required students to extend lab
programming to handle new situations, or to create small programs or scripts that were expected
to be modifications or augmentations of programming given in the lab or in the course notes.
Examples might include:
• Calculating the voltage across a resistor with a given amount of current flowing through it.
• Asking what angle and velocity would work to make the cannonball travel through a ring suspended a certain distance and height.
• Answering questions that predict time or quantity from a trendline formula constructed from given data.
• Figuring out how to compose given plotting functions to create a specified picture, and entering the function invocation.
• Creating control programs to handle additional scenarios for the autonomous car.
About 120 problems were created for the course. Many of these problems required the student to take multiple steps, intended as a kind of problem-based learning14.
Designing autograded questions
Instructor programmability of the autograder, as Maple T.A. allows, opens the door to more
sophisticated question design, as described in13. We have found the following to be important
considerations in our design activities:
1. What kind of question do you want to ask? Given the particular selection of topics for the
week, there were numerous possibilities: knowledge acquisition/review from readings
(where the humble true/false question was often good enough), problem-solving using problems similar to ones covered in lab or the readings, exercises that would require result interpretation or reflective thinking, problem-solving that would require adaptation and transference of learning, etc.
2. How much time should students expect the week's autograded work to take, and how
will you make your question selection fit within that time budget? Despite its use of
autograding, our course emphasizes learning through fixed amounts of lab time in social
interaction with staff and lab partners. There was not the development budget nor the
inclination to use autograding as a kind of “intelligent personal tutor15,16” whereby a
student works many hours being guided through programmed instruction until mastery of
a skill is detected. Nevertheless, it was easy to come up with questions that would
require far more time than the students thought they had for the course. In conventional instruction, limiting the assigned work is also a way to avoid overloading the staff with grading effort, but with autograding this is not the case. The "retry until
success” work ethic also may require more time than conventional homework. In a
course like this that combines both skill-building practice, software development, and
problem-based learning, it is important to be aware that problems may, by design, vary
greatly in the length of time to do them.
3. What application area should the question be about? The topics covered in a particular
lab cycle would often be fertile grounds for any number of questions. For example, in the
“human cannonball” portion of the course, one could come up with a number of variants
of the basic problem, requiring different formulas, different boundary or starting
conditions, different kinds of information. To keep current the idea that the knowledge
and skill-building in the course was intended to have transferability to situations not
literally the same as covered in lab, problems on other topics would be used. The
programming of the autograded problem would be used to present more information
about the new domain, rather than relying on student familiarity with the problem area,
sometimes including graphs, formulae, or animations.
4. How much scaffolding do you want to put into the question? Some students had a lot of
difficulty with problems requiring multiple steps to solve them, because they could not
figure out how to get to the goal from the start. The hinting mechanism of Maple T.A.
was better at giving short, one sentence general advice. It was also difficult to sense what
a student's conceptual problems were from the numeric or programming answers they
submitted. As we developed experience with certain problems, we were able to develop
on-line notes that were helpful to many (although at the cost of not providing as full an
experience of self-directed discovery of solutions). We found it useful to break up longer problems into multiple parts, asking for intermediate results or realizations before the
final answers. With this approach, students could see that they were making progress
even if not getting to the end. Visits to the walk-in clinics became more productive
because the staff had more information about where a student was running into problems.
Generating a multi-step problem just means modifying the solution algorithm for the final
result to output intermediate results it computes along the way. The sequence of results
can then be used in a multi-part question.
5. What information does this programming of a problem solution need, and where will the
students find it? In the later portions of the course, some of the problems relied on the
software packages developed for the course, such as the autonomous car simulator. To
avoid unintended or misinformed use of alternatives (e.g. older versions of the packages
that were somehow still on someone's web pages), the directions needed to be specific
about providing hyperlinks to the intended files.
6. How much effort will it be to test the questions before release? Testing questions for
comprehensibility and correct operation is an important fact of life when operating at
scale. Even if only 5% of the students run into problems or ask questions in our course,
that will generate 40-50 situations that require a staff response. We had to treat our
autograded questions like software engineering, allocating testers and time for testing
before release.
7. How much computer time does it take to generate the question, and to check the
solution? In this era of gigahertz/gigabyte computers, it is easy to think that any problem
suitable for introductory students would use negligible amounts of computing time.
However, larger problems (due to a realistic application, or for anti-gaming purposes; see the next section) that require dynamic generation can take a significant fraction of a
second to create on-the-fly. An example of this would be a problem generating a
sophisticated animation to help present an explanation of a new application area. While a
half second of computer time is no issue for autograding a class of 30 students, when
operating at scale it can lead to server saturation and unacceptable response times. One
can avoid problem generation bottlenecks by looking for alternatives that require less
computation, or by precomputing hundreds or thousands of variants and storing them on
the server (a small sketch of this precomputation idea follows the list). Then most of the generation time is spent in modest amounts of disk access, e.g. retrieving an animation file rather than computing many frames of a complex scene.
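The sketch below shows the precomputation idea mentioned in item 7 under stated assumptions: many variants are generated offline into a file, so serving a student's question becomes a cheap lookup rather than an on-the-fly computation. The file name and variant format are invented for illustration.

import hashlib
import json
import random

def precompute_variants(n=1000, path="variants.json"):
    # Generate n parameter sets ahead of time and store them on disk.
    rng = random.Random(0)
    variants = [{"slope": rng.randint(2, 9), "intercept": rng.randint(10, 40)}
                for _ in range(n)]
    with open(path, "w") as f:
        json.dump(variants, f)

def fetch_variant(student_id, path="variants.json"):
    # Serving a question is now a small file read plus a deterministic lookup.
    with open(path) as f:
        variants = json.load(f)
    idx = int(hashlib.md5(str(student_id).encode()).hexdigest(), 16) % len(variants)
    return variants[idx]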
Autograding and “gaming the system”
By “gaming”, we mean the activities of students who operate the autograding system but wish to
bypass learning or trying to solve a problem with an authentic general-purpose approach17. An
example of this might be a student who, after learning that a testing algorithm checks a student program on three fixed inputs, just writes a series of if statements that deliver the correct output in only those three cases, forgoing the programming that would provide the correct answer in any other case. The use of variants, and the requirement to solve several different versions of the problem, is itself a way to encourage students to work out the solution on their own rather than getting by with verbatim responses obtained from classmates or from web sites.
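As a hypothetical illustration (our own example, not an actual submission), a “gamed” answer to an assignment asking for a function that sums a vector might hard-code the inputs the student expects the autograder to use:

    % Hypothetical "gamed" submission: hard-codes the anticipated test inputs
    % instead of solving the general problem.
    function s = sumVector(v)
        if isequal(v, [1 2 3])
            s = 6;
        elseif isequal(v, [4 5 6])
            s = 15;
        elseif isequal(v, [7 8 9])
            s = 24;
        else
            s = 0;    % wrong for every other input
        end
    end

Randomly generated variants defeat this tactic, since a general loop (or simply sum(v)) is the only kind of submission that survives test vectors the student has not seen.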
While allowing retries is a good way of encouraging students to pursue correct answers if
success is not immediate, it can also be gamed by exhaustive trial and error. Unfortunately, in the initial years of the course Maple T.A. did not have a convenient way of setting up an assignment so that some parts allowed unlimited retries while others (such as true/false or multiple-choice questions, which are the obvious candidates for gaming through unlimited retries) did not. Other
examples: a math-oriented problem that asked for a fill-in-the-blank integer solution where it
was clear that the only sensible values would be between 1 and 10; an optimization problem
requiring a three digit answer that could be gamed by doing a plot and then exhaustively trying
all numbers around where the plot indicates the optimum occurs. Sometimes, in trying to simplify the students' work by presenting a situation simple enough to understand quickly, one creates a situation that can be solved through short-cuts that bypass the intended techniques. To avoid asking questions that can easily be solved through exhaustive trial and error, create problems where a large amount of manual effort would be needed for that strategy to succeed: for example, change the required precision or the range of plausibly correct answers so that there are at least hundreds of possibilities to try (a three-significant-figure answer anywhere between 100 and 999 already gives 900 candidates, far more than an impatient guesser will attempt).
Another kind of “gaming” comes from students who will try (authentically) to answer an autograded
question without using the techniques of the course. An example for this course is the desire of
some students to stick with finding answers with hand calculators (which they are experienced in
and confident about using from high school work) rather than learning programming. For this
reason we typically designed problems to a) require working with data sets that were too large to
solve through eyeballing or to key into a hand calculator (i.e. don't ask the student to write a program to add numbers together and then give them only three numbers to process), b) ask for solutions to two or three versions of the problem that differed only in minor ways parametrically, to favor techniques such as programming where procedural reuse is easy, and c) present situations where the use of computation would be authentically better than other approaches such as
logic, commonsense reasoning, visual inspection, or lookup. The vulnerability again arises from
the fact that autograders can't see the work the students do to get their answer, combined with an
instructor's natural inclination to give students a simple situation that does not make them work
too long to understand what is being asked.
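As a hedged sketch of point (a) above (the file name, threshold, and synthetic data are our own invention), a variant generator can hand each student a data file with hundreds of values, so that a calculator workflow is impractical and a short program is the natural tool:

    % Hypothetical variant generator: writes a per-student data file far too
    % long to key into a hand calculator.
    rng(4321);                               % in practice, seed from the variant id
    readings = 50 + 10*randn(500, 1);        % 500 synthetic sensor readings
    writematrix(readings, 'readings_variant_0001.csv');
    % Instructor-side reference answer for the autograder:
    expectedAnswer = mean(readings(readings > 55));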
Proficiency exam
Each of the three quarters of the original version of the course ended with a proctored
proficiency exam in the lab section. Use of computation tools and their on-line help, along with
information on the course web site during the exam was expected, but proctoring included
electronic and personal measures (network blocks, desktop monitoring, walk-by inspection) to
discourage unauthorized access to information.
The exam asked students to demonstrate their programming and problem-solving skills during the test itself. The exam given to each section consisted of a subset of the pre- and post-lab autograded exercises. All exercises for the term were made available for practice ahead of exam week, but students did not know which questions would be used for their section's exam until they took the exam. Thus, all sections across all time periods were on equal footing as far as knowing what would be asked and how to prepare. It also made it possible to give makeup
exams for excused absences or exceptional circumstances. During the exam, students were
allowed access to the electronic course and lab notes stored on the course web site, as well as
on-line help built into the version of Maple running on their computer. They were expected to
construct Maple scripts on their computer to calculate the results the Maple T.A. problems requested and then submit those results to Maple T.A. Since most exercises came in numerous variants, the exam tested whether students had become familiar enough with the programming and technical concepts to be able to recreate and solve a problem that they had supposedly mastered. Because the exam was
autograded, results were available to students as they exited the exam.
Illustration 4: Example Maple T.A. question for MATLAB-based version of the course. The
student response is translated from MATLAB syntax to Maple syntax automatically so that the
Maple engine within Maple T.A. can check the symbolic answer.
Having a proctored exam worth a significant component of the final grade was also an
anti-gaming measure, in that it detected the absence of genuine learning during the practice
afforded by the lab and autograded exercises.
Lessons learned from the first version of the course – strengths
Much of the effort in the past few years went into authoring the foundational materials. For
example, even though extensive use of autograding made it possible to run the course with the
staffing resources given, instructor authorship of autograded problems that come in variants is an
exercise in software engineering: each problem took approximately five hours to develop and
test to the point of release. While this amount of effort per problem would be prohibitively
expensive for a “use once” problem, the scale and reusability of the problems across multiple
exercises and tests, and across multiple course offerings, provided a solid economic rationale for
this work. In “steady state”, one would expect question development to be more incremental and
feasible within the time budget conventionally allocated for development for large “flagship”
courses. Development of the lab and course notes required several revisions to improve clarity
of exposition and to achieve appropriate pacing of the work. Topics were reordered to try to satisfy external needs, such as the need to cover control statements and user-defined procedures earlier to support a LEGO robot programming project handled in a concurrent Engineering Design course.
Because we viewed the course as constantly evolving over its first few years as we grew in
experience and understanding, we sought information expediently for formative rather than
summative evaluation.
We found that the package of activities (labs, reliance on written materials, and active learning) was effective at allowing students to acquire some of the skills the course set out to train. For example, we introduced the least squares data fitting concept in a lab problem
tackled by students in groups, with the same grade given collectively to each group member.
Subsequent autograded results for individuals found that the skill seemed to be acquired by over
80% of students, and with persistent results when the question was asked again a term later.
Event                                                      Test average
Fall 2010 post-lab quiz (874 students)                     87.0%
Fall 2010 end-of-term proficiency exam (551 students)      81.2%
Winter 2010 review quiz (802 students)                     86.3%
Table 1: Performance on a least squares data fitting problem. (Not all students were given the problem on the proficiency exam, to avoid predictability across exam time periods.)
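For readers unfamiliar with the exercise, a minimal sketch of the kind of least squares fit involved (shown in MATLAB for consistency with the current version of the course; the lab's actual data and required model differed):

    % Minimal least squares sketch: fit a line y = a*x + b to noisy data.
    x = (0:0.5:10)';
    y = 3*x + 2 + randn(size(x));    % synthetic data with noise
    coeffs = polyfit(x, y, 1);       % coeffs(1) is the slope, coeffs(2) the intercept
    yFit = polyval(coeffs, x);
    residualNorm = norm(y - yFit);   % one measure of fit quality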
These activities also prepared the students to pass the proficiency exams, with averages around 80% as indicated in Table 2. Scores declined across successive terms, which we attribute to the increasing difficulty of the work, similar to the pattern reported in other introductory programming courses18. However, the term-by-term achievement scores remained steady over several years' operation of the course.
CS 121 (first term)      CS 122 (second term)      CS 123 (third term)
89%                      80%                       76%
Table 2: Proficiency exam scores for the 2011-2012 year (approx. 800 students).
Our system logs for Maple T.A. indicated that the most popular times for work were evenings.
By allowing retries, we believe we encouraged an “attempt until success” work standard that is crucial to success in programming. The lack of sophisticated feedback from the autograder was frustrating at times to some students, since they did not have an easy way to proceed after submitting a solution and having it marked wrong. However, the use of autograding allowed us to reallocate staff resources from grading to walk-in clinic hours serving students who did have difficulties.
A typical year's operation saw over 122,000 problems graded automatically – not including the
additional grading resulting from student retries. We attempted to keep the entire class on a
single schedule of due dates, but this imposed significant swings in the load on the autograding
system. Fortunately, our system administrators were able to deploy adequate server power to handle a class of our size. Nevertheless, system performance requires careful attention in courses where significant resources are needed for autograding.
Lessons learned from the first version of the course – limitations of the original format
Maple T.A.'s grading of submitted programming had significant limitations. Input was inconvenient (cut and paste of the program into a small text box). If a program failed, the autograder result would not include any of the standard feedback and error messages that the desktop version of Maple would give for the same program. It also did not safely sandbox certain kinds of “runaway” computations, which could cause the server to become unresponsive.
Because of these issues, we tended to ask “applied” problems where the input to Maple T.A.
consisted of results computed by students after developing and executing code on their own
computers. This meant that the course was weak in the area of drilling students on mastery of
development of small programs, away from lab. While many students (as indicated by the
course marks) were able to learn programming despite the lack of good low-level automatic
feedback, we saw that there were some students who were not well-served by this deficiency.
The limited time available for course work, in class and outside, meant that even with
autograding there was not enough time to exercise all skills separately and then in integration.
This produced a less-than-ideal situation for a subject with lengthy chains of interdependent skills. In some cases, we had to fall back on the hope that the required work would be sufficient for some students, and that social reinforcement would prompt many of the rest to go beyond the required work to the level needed to complete their mastery.
Because of the difficulties of executing student program submissions, we did not pursue the
important notion that a program can be evaluated in several different ways – correctness,
run-time efficiency, quality of coding style, quality of design, and how well the computed answers satisfy the needs of the user. We think that the effort/feedback experience of trying to
satisfy an autograder on multiple criteria simultaneously would be worthwhile for students, but it
remains as yet unimplemented in our course.
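Though the idea remains unimplemented, a hedged sketch of what a combined correctness-plus-run-time check might look like (the stand-in submission, time threshold, and weights are our own invention):

    % Hypothetical multi-criteria check: score correctness and run time together.
    studentSort = @sort;                          % stand-in for a student-submitted function
    testInput = randn(1, 1e5);
    tic;
    result = studentSort(testInput);
    elapsed = toc;
    isCorrect = isequal(result, sort(testInput)); % correctness criterion
    isFastEnough = elapsed < 0.5;                 % run-time criterion, in seconds
    score = 0.7*isCorrect + 0.3*isFastEnough;     % weighted combined score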
Our anecdotal impressions, from the institutionally mandated course surveys and from discussions with individuals, are that some students struggle with the need for original, synthetic thought that is the basis of successful programming; they seem to be more comfortable with STEM work that is recall- or pattern-matching based. They also find the abstraction of both mathematical modeling and the dynamic state changes of a computer program hard to work with, with a resulting decline in motivation. We realized that finding ways to increase motivation and to reduce the cognitive load of abstraction might allow more students to succeed in the learning tasks.
Finally, the current limitations of autograding technology meant that the assigned work was
“directed” – doing particular things for particular results. While much of introductory college-level course work is of that nature, it meant that questions whose answers could not be anticipated or evaluated easily through an algorithm, such as open-ended problems or project-based work19–21, could not be accommodated. While this is no worse than courses that use problems from the back of a textbook, the appeal of project-based work is that some students respond better to circumstances where authentic original work is necessary.
Engineering Computation Lab 2.0: revision and refinement.
The course is now in the process of revision, keeping its strengths while addressing limitations
and weaknesses.
1. There are more contact hours. The class is now run two terms instead of three, but with
weekly instead of bi-weekly labs and a weekly one hour lecture. This increases the
number of contact hours from 30 to 60, of which at least 40 are active learning. This
should increase the inventory of skills that are explicitly exercised and discussed, which
should lead to further learning success by the types of students currently having
difficulty.
2. To address the limitations of the autograded feedback, there are bi-weekly assignments graded manually. This should allow better quality high-level feedback to students, which should improve the quality of learning for those who pay attention to the feedback.
3. The course has switched from Maple to MATLAB as the programming language, to
increase the potential of direct application in subsequent engineering work. Even though
we believe that the primary value of a first course in programming is the transferable
experiences about how programming languages and software design and development
work, there may also be more enthusiasm by some students at using a language that is
widely used professionally. The shift to MATLAB allows the course to more easily work
with data analysis (e.g. interpretation of statistical results), data acquisition, and device
control. This may broaden student interest and increase motivation. However, the course will continue to include some work on applications of mathematical modeling.
Maple T.A. has the ability to execute MATLAB as a side process as well as the ability to
execute modest amounts of MATLAB code by symbolically translating it and having the
Maple engine execute it. Thus we feel that there are similar capabilities for autograding
MATLAB work in Maple T.A. A simple MATLAB skill-building question is shown in
Illustration 4.
4. The course is adding project-based learning assignments22, allowing more open-ended
and self-directed practice with programming, even though there is a continuing need for
prefacing such work with smaller learning steps such as skill-building and “homework-sized” problem-solving23. This should create circumstances for authentic
student inventive thinking, potentially increasing student motivation to achieve success in
programming.
5. In order to provide better quality automated program testing, we have also introduced the use of Cody Coursework™24–26, which automatically tests and scores student-submitted MATLAB programming (a sketch of the style of assertion-based test involved appears after this list). Test result output includes the same
run-time error diagnostics that students see when running their programs on their own
computers, addressing one of the most serious weaknesses of Maple T.A. used as a
program tester. Cody Coursework is hosted in the cloud, so it removes the task of
maintaining an autograding system from the university staff.
6. We are conducting experiments with using MATLAB autograding for badge awards. We believe that one of the drawbacks of courses such as this, which strive to provide immediately useful skills to engineering students, is that it is difficult for “consumers of skill” such as employers or later courses to know exactly what the students should be able
to do. We also are extending the use of autograded problems in a proctored proficiency
exam to certification of specific MATLAB skills through an Open Badge-based
certification scheme27–30. This will allow students to advertise their achievement of such
skills in a verifiable and more detailed way than a course grade on a transcript. As a lasting artifact, we intend to provide review documentation and on-line exercises so that one could attempt badge certification even after the course has ended.
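As noted in item 5, here is a hedged sketch of an assertion-based test of the kind used to check a student-submitted MATLAB function (our own illustration; consult the Cody Coursework documentation24–26 for the exact test-suite format). The hypothetical assignment function sumOdds should return the sum of the odd entries of a vector:

    % Hypothetical assertion-based tests for a student function sumOdds(v).
    % Test 1: a small hand-checkable case
    assert(isequal(sumOdds([1 2 3 4 5]), 9));
    % Test 2: no odd values present
    assert(isequal(sumOdds([2 4 6]), 0));
    % Test 3: a randomized case checked against a reference computation
    v = randi(100, 1, 50);
    assert(isequal(sumOdds(v), sum(v(mod(v, 2) == 1))));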
As part of the course redesign an increased emphasis is being placed on systematic collection of
information that can be used in formative evaluation of the course materials and pedagogical
procedures. The goal is to guide further evolution of the course. This concern with metrics and
measures includes ongoing development and refinement of measurement instruments that allow
more complete understanding of the knowledge various students bring to their course and more
complete understanding of their views on such things as the usefulness of various course
components in helping them to learn. With the course being run at scale, these questionnaires are being developed to be administered on-line. The difficulty will be engaging enough students in the evaluation of course material to provide meaningful and useful data. To this end, the formative evaluation data collection instruments are being informed by the work of others and by a process of design, evaluation, and re-design intended to facilitate student engagement in providing feedback useful to course development. We also intend to analyze the autograding scores and interaction data to obtain formative evaluation measures based on actual student work.
Conclusions
Our course attempts to provide first-year students with an experience that combines learning
programming with using it for problem-solving and design in engineering, differing in emphasis
and in problem selection from a generic CS1-style <<ref>> course. It uses autograding for a
combination of proficiency-building, skill-assessment, and problem-based active learning.
Autograding has become an important feature of the course, shifting use of human resources to
face-to-face tutoring and higher-level formative and summative evaluation. We continue to
explore the curriculum design space investigating the effects of additional time and staffing
resources, additional varieties of computational engineering activities, project-based learning,
and badge certification.
Acknowledgments
We wish to thank our colleagues Jeremy Johnson, Nagarajan Kagasamy, Baris Taskin, Pramod
Abichandani, Frederick Chapman, David Augenblick, Mark Boady, John Novatnick, L.C. Meng,
William Fligor, and Kyle Levin for their contributions to the course's ongoing development. Part of
the changes for 2013-14 were supported by a grant from Mathworks. Furthermore, the course
has benefited from generous development support from the Department of Computer Science
(now the Faculty of Computer Science) and the College of Engineering of Drexel University.
Bibliography
1. Sallee, T. Synthesis of research that supports the principles of the CPM Educational Program. (2005). at <http://www.cpm.org/pdfs/statistics/sallee_research.pdf>
2. Maplesoft. Maple 17 User Manual. (Maplesoft, 2013). at <http://www.maplesoft.com/view.aspx?SF=144530/UserManual.pdf>
3. Char, B. Scripting and Programming for Technical Computation. (Drexel University Department of Computer Science, 2012). at <https://www.cs.drexel.edu/complab/cs123/spring2012/lab_materials/Lab4/ScriptingAndProgramming.pdf>
4. Maplesoft. Featured Demonstration - Clickable Math. (2013). at <http://www.maplesoft.com/products/maple/demo/player/ClickableMath.aspx>
5. Char, B. & Augenblick, D. Scripting and Programming for Technical Computation - Lab Notes. at <https://www.cs.drexel.edu/complab/cs123/spring2013/lab_materials/Lab3/LabsWithoutAnswersCS123Sp13.pdf>
6. Porter, L., Guzdial, M., McDowell, C. & Simon, B. Success in Introductory Programming: What Works? Commun. ACM 56, 34–36 (2013).
7. Herrmann, N., Popyack, J. L., Char, B. & Zoski, P. Assessment of a course redesign: introductory computer programming using online modules. SIGCSE Bull. 36, 66–70 (2004).
8. Anton, H., Blivens, I. & Davis, S. Calculus, Early Transcendentals. (2010).
9. Klebanoff, A., Dawson, S. & Secrest, R. Batter Up: The Physics of Power in Baseball. (1993). at <http://umbracoprep.rose-hulman.edu/class/CalculusProbs/Problems/BATTERUP/BATTERUP_2_0_0.html>
10. Maplesoft. Maple T.A. - Web-based Testing and Assessment for Math Courses - Maplesoft. (2013). at <http://www.maplesoft.com/products/mapleta/>
11. Miller, L. D. et al. Evaluating the Use of Learning Objects in CS1. in Proc. 42nd ACM Tech. Symp. Comput. Sci. Educ. 57–62 (ACM, 2011). doi:10.1145/1953163.1953183
12. Char, B. ASEE Webinar: Advanced Online Testing Solutions in a Freshman Engineering Computation Lab - Recorded Webinar - Maplesoft. (2013). at <http://www.maplesoft.com/webinars/recorded/featured.aspx?id=569>
13. Char, B. Developing questions for Maple TA using Maple library modules and non-mathematical computation. (2011). at <http://www.drexel.edu/~/media/Files/cs/techreports/DU-CS-11-04.ashx>
14. Hewett, T. T. & Porpora, D. V. A case study report on integrating statistics, problem-based learning and computerized data analysis. Behav. Res. Methods Instrum. Comput. 31, 244–251 (1999).
15. Anderson, J. R., Conrad, F. G. & Corbett, A. T. Skill acquisition and the LISP tutor. Cogn. Sci. 13, 467–505 (1989).
16. Rivers, K. & Koedinger, K. R. in Intell. Tutoring Syst. (Cerri, S. A., Clancey, W. J., Papadourakis, G. & Panourgia, K., eds.) 7315, 591–593 (Springer Berlin Heidelberg, 2012).
17. Rodrigo, M. M. T. & Baker, R. S. J. d. Coarse-grained detection of student frustration in an introductory programming course. in 75–80 (ACM, Berkeley, CA, USA, 2009).
18. Shaffer, S. C. & Rosson, M. B. Increasing Student Success by Modifying Course Delivery Based on Student Submission Data. ACM Inroads 4, 81–86 (2013).
19. Chinowsky, P. S., Brown, H., Szajnman, A. & Realph, A. Developing knowledge landscapes through project-based learning. J. Prof. Issues Eng. Educ. Pract. 132, 118–124 (2006).
20. Hewett, T. T. Cognitive aspects of project-based Civil Engineering education. in (2010). at <http://www.di3.drexel.edu/Orlando_PDFs/Hewett_Problem_Based_Final.pdf>
21. Detmer, R., Li, C., Dong, Z. & Hankins, J. Incorporating Real-world Projects in Teaching Computer Science
Courses. in Proc. 48th Annu. Southeast Reg. Conf. 24:1–24:6 (ACM, 2010). doi:10.1145/1900008.1900042
22. Lehmann, M., Christensen, P., Du, X. & Thrane, M. Problem-oriented and project-based learning (POPBL) as
an innovative learning strategy for sustainable development in engineering education. Eur. J. Eng. Educ. 33,
283–295 (2008).
23. Boss, S. Project-Based Learning: A Case for Not Giving Up | Edutopia. at
<http://www.edutopia.org/blog/project-based-learning-not-giving-up-suzie-boss>
24. Mathworks. MATLAB Cody Coursework. (2013). at
<http://www.mathworks.com/academia/cody-coursework/index.html>
25. Mathworks. Cody Coursework for Instructors - MATLAB & Simulink. (2013). at
<http://www.mathworks.com/help/coursework/cody-coursework-for-instructors.html>
26. Mathworks. About Cody. (2013). at <http://www.mathworks.com/matlabcentral/about/cody/>
27. Mozilla. Open Badges. Open Badges (2013). at <http://openbadges.org/>
28. Knight, E. et al. MozillaWiki -- Mozilla Open Badges. (2012). at <https://wiki.mozilla.org/Badges>
29. Carey, K. A Future Full of Badges. Chron. High. Educ. (2012). at
<http://chronicle.com/article/A-Future-Full-of-Badges/131455/>
30. Higashi, R., Abramovich, S., Shoop, R. & Schunn, C. D. The Roles of Badges in the Computer Science Student
Network. in Madison, WI, (ETC Press, 2012).