Current Semester's Projects - College of Engineering

Team 1: Equipment Rack Active Cooling System
Background: Highly sophisticated electronic systems are routinely deployed to remote areas of the world in
support of military, civilian, and humanitarian missions. Remote cell relay towers, local internet gateways for
remote schools, civilian and military surveillance systems, and remote hospitals deployed to disaster areas are all
examples of such systems. Some of these systems may be deployed in vehicles or small aircraft. Many such
systems are “rack mounted” electronics that end up being deployed to extremely hot and humid locations. Since
modern electronic systems generate a great deal of waste heat, this heat load must be dissipated in order to
maintain functionality of the electronics. As more and faster electronic components and processors are added to
various electronic communication, control, or processing systems, this heat load can become so substantial that the
underlying electronics cease functioning or fail outright. Small-scale thermoelectric cooling technology (Peltier,
etc.) has shown applicability for cooling individual electronic components. However, thermoelectrics show
diminishing returns as the heat load increases beyond 2000 W. The purpose of this capstone design project is
to develop an innovative “all purpose” ~3 ft³ enclosure that can dissipate 2 kW of continuously generated
electronics heat, while simultaneously protecting the contents from water condensation and any
particulate contamination during the cooling process.
A New Design Concept: This Senior Capstone Design project is to develop an “active cooling system” to
continuously dissipate a 2000W heat source inside a small (~3ft³) enclosure while maintaining the internal
temperature between 15°C and 45°C. The enclosure must be isolated from the outside, and thus cannot exchange
air with the outside environment. Any water that is precipitated out of the interior of the enclosure during cooling
must be evacuated or sequestered in a way that won’t allow it to wet or damage anything on the inside of the
enclosure. The apparatus (if any) inside the enclosure must evenly cool the air within the enclosure and not be
directly (physically) connected to any of the heat sources inside. Any supporting apparatus on the outside of the
enclosure must be minimized in size, preferably having a footprint smaller than the enclosure itself. The
supporting apparatus may be mounted on the enclosure, or may be separately installed as far as 20ft from the
enclosure itself. The active cooling system must draw its power from a source separate from the equipment
enclosure.
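As a rough illustration of the closed-loop control this spec implies, the sketch below models a minimal hysteresis (bang-bang) controller holding the enclosure air between the 15 °C and 45 °C limits. The on/off thresholds, function names, and the idea of a simple two-state controller are illustrative assumptions, not part of the sponsor's requirements.

```python
# Minimal hysteresis (bang-bang) controller sketch for the enclosure.
# The 15-45 °C band comes from the spec above; the thresholds below are
# hypothetical choices that keep the air well inside that band.

COOL_ON_C = 40.0   # start active cooling before the 45 °C upper limit
COOL_OFF_C = 20.0  # stop cooling before the 15 °C lower limit

def update_cooler(temp_c: float, cooling: bool) -> bool:
    """Return the new cooler on/off state given the current air temperature."""
    if temp_c >= COOL_ON_C:
        return True
    if temp_c <= COOL_OFF_C:
        return False
    return cooling  # inside the deadband: keep the previous state
```

A real design would likely replace this with proportional or PID control of the coolant loop, but the latching deadband shows why skill (2), closed-loop temperature control, is on the list.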
Since active cooling necessarily requires a power source, it is important that the system be as efficient as possible
in transporting heat from the enclosure to the outside environment. It will also require some type of medium for
removing the heat from the enclosure. The medium type is flexible, but should be of a type that can be easily and
safely transported with no hazardous materials or chemical compounds requiring special personal protection
equipment or handling. The medium and any connections should run between 5 and 20 ft from the enclosure. This is to
simulate a common installation of an active cooling system in a ground or airborne vehicle.
To be usable in austere environments, the system should be able to operate off different power sources. As a
minimum, the cooling system should be powerable from common 110 VAC (for testing), a 12 VDC automotive
system, and a 24VDC military vehicle standard. If possible, power interface options should be provided to allow
the system to be powered from a common vehicle alternator/inverter, solar panel, or motive generator like a small
wind turbine. Any other innovative ideas to power and control the system are highly desirable.
Skills Needed for This Work: (1) Basic understanding of thermodynamics. (2) Control system theory for closed
loop temperature control. (3) Mechanical and Electrical design skills for mounting and constructing components.
(4) Test planning and testing skills to ensure capability and durability of a finished design.
Sponsor: Air Force Research Laboratory – Sensors Directorate, WPAFB, OH
Contact: Mr. Ben Bosma, AFRL/RYZ, (937) 853-3043
Team 2: Moving Human Electromagnetic Scattering Simulator (MHESS)
1. Background:
Tracking humans by radio-frequency or radar means is becoming an increasingly important area of
research for the Sensors Directorate. Since many camera/visible sensors cannot operate in environments
of dust, dense smoke, or cloudy/rainy weather, it is important to be able to create sensors that can track
humans using radar technologies. These could be used for search and rescue, surveillance of military
bases (at home or overseas), and detecting the movement of people (criminals, terrorists, or combatants)
in remote locations of the world.
Electromagnetic (EM) scattering simulations that model the radar scattering phenomena related to
people walking or running are commonly available, and may be used to model fairly complex static and
dynamic radar scattering events related to people walking. Some commercial off-the-shelf analysis
codes include (1) Remcom XFDTD, (2) ANSOFT EM scattering software, and (3) CST Microwave
Studio. Typically, if you want to model the radar scattering caused by simple human movements like
walking or running, most scattering models require one to physically measure and manually input
typical “human parameters” of interest (arm length, torso size, leg length, etc.) into the code. These
analysis codes also require artificial motion parameters if one wants to truly examine how the human
EM scattering changes as a function of time during walking, etc. Manual measurements of human
anatomy (like arm/leg length) are time consuming and make it difficult to get a large “representative”
population of subjects for scientific research and analysis purposes.
CAPSTONE Project Purpose: This capstone project involves automating several important steps in
the process of analyzing different people’s walk or gait from a radar standpoint. AFRL wants to merge
the simulation aspect of the electromagnetic scattering with realistic human motion parameters obtained
rapidly, accurately, and automatically. In recent years, the gaming industry has progressed immensely in
the field of motion sensing input devices, and many game consoles now employ a blend of state-of-the-art hardware, software, and some old-fashioned ingenuity.
One of the more ingenious game systems is Microsoft's Xbox Kinect. The Kinect is a motion sensing
input device based around a webcam-style add-on peripheral for the Xbox 360 console that enables
users to control and interact with the Xbox 360 without the need to touch a game controller, through a
natural user interface using gestures and spoken commands. Since its launch in 2010, scientists and
entrepreneurs have found unique ways to modify this device to extend its applicability to fields such as
medicine, robotics, navigation, and fashion.
Microsoft's Alex Kipman acknowledges that they purposefully did not protect the USB
connection, thus allowing modifications that extend the Xbox Kinect to other
industries. He states,
“What has happened is someone wrote an open-source driver for PCs that essentially opens the USB
connection, which we didn't protect, by design, and reads the inputs from the sensor...”
This means that an engineer and/or programmer may be able to design software that uses the Xbox
Kinect as an interface to other technologies and applications, including AFRL’s need for an automated
device for capturing human movement metrics.
2. A new “human signature” device concept:
The purpose of this Senior Capstone Design project would be to develop a small, portable, and
lightweight motion sensing input that interfaces with AFRL provided EM scattering software that
produces an end-to-end Moving Human Electromagnetic Scattering Simulator. The Capstone Project
envisioned “end product” includes an X-box Kinect (or similar motion capture sensor identified by the
capstone team) that is connected via USB to a micro-controller equipped with an IP transport
(Ethernet, Bluetooth, 802.11, etc.) in a lightweight and portable enclosure.
The system should be capable of
(1) Connecting to a PC/laptop via TCP/IP over various transports;
(2) Recording, with software (COTS or custom written), human anthropometric data (arm length,
height, leg length, etc.) seamlessly from the Kinect (or equivalent) sensed human motion;
(3) Feeding the same data to the (AFRL-supplied) EM simulation software, which will then output a
Doppler radar image in real time of a moving human with the inputted anthropometric data.
The following are suggested milestones:
(1) Utilize an open-source driver to connect the motion sensing input to the micro-controller via a USB
port
(2) Design and implement an algorithm that detects a human in the motion sensor data and assigns
parameters that the EM software will utilize (parser)
(3) Design and implement an algorithm that initiates the EM software with the provided parameters
(data sort)
(4) Design and implement an algorithm that outputs the EM software data in the time domain (FFT transform from frequency to time and linearize data)
(5) Utilize a back-projection-based imaging algorithm that images the simulated data in real-time.
(Signals processing - Imaging)
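Milestone (4), converting the EM software's frequency-domain output to a time-domain signal, reduces to an inverse FFT. The sketch below illustrates the idea; the array size, sampling grid, and the stand-in test signal are assumptions for illustration, not properties of the AFRL software.

```python
import numpy as np

def freq_to_time(spectrum: np.ndarray) -> np.ndarray:
    """Convert a frequency-domain EM response to a time-domain signal.

    `spectrum` is assumed to be complex samples on a uniform frequency
    grid (an illustrative stand-in for the EM software's output)."""
    return np.fft.ifft(spectrum)

# Round trip on a toy signal: time -> frequency -> time recovers the input,
# which is the sanity check the milestone's "linearize data" step would need.
t_signal = np.cos(2 * np.pi * np.arange(64) / 8.0)
recovered = freq_to_time(np.fft.fft(t_signal))
```

The real pipeline would then feed such time-domain samples into the back-projection imaging step of milestone (5).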
A block diagram is provided below for visualization purposes.
3. Skills needed for this to work:
(1) Innovation and creativity (2) Computer programming skills (3) Enclosure design skills
(4) Circuit design skills and embedded/micro-controller programming
4. Sponsor:
Air Force Research Laboratory - Sensors Directorate, WPAFB, OH
Dr. Analee Miranda, Research Mathematician, 937-528-8118, Analee.Miranda@wpafb.af.mil
Mr. Kenneth Schafer, Computer Scientist, 937-798-8111, Kenneth.Schafer@wpafb.af.mil
Team 3: Jordyn’s Haptic User Interface (HUI)
Sponsor: Michigan State University - Resource Center For Persons with Disabilities
Sponsored by Marathon Oil, Chrysler, Artificial Language Laboratory
Figure 1: Jordyn feeding the animals tactilely
Figure 2: Tactos haptic system for the blind
Figure 3: TactiPen
Figure 4: Phantom Omni
Figure 5: Immersion Inc. haptic touch screen development kit
Jordyn is a second-year computer science student at MSU who experiences blindness. This project’s
goal is to construct a refreshable¹ haptic display for Jordyn that will enable her to interact with and alter
drawings and other graphic materials.
Many different haptic systems are available commercially and by custom construction as shown in Fig
2 to Fig 5. The Phantom Omni was used at MSU’s Robotics and Automation Laboratory and some of
the hardware for building the other haptic systems is available at the RCPD.
¹ Refreshable means a display that dynamically changes to present an unlimited number of objects, lines, or
shapes. It may also be used to communicate movement. Jordyn currently uses static tactile documents that
were prepared at the RCPD on a graphics printer.
The minimal tasks required for this project are to purchase or build the hardware components, such as
those shown in the figures, to construct a simple 2D haptic display that can be given a line drawing via
USB or similar connection from a computer. As Jordyn studies her physics and math materials she will
be able to send these graphic images to this display and then feel them.
Further, software developers can add features to this device that will enable embedding voice notes,
sounds, or other meaningful information in the drawings. Providing a way for Jordyn to create
and alter drawings via CAD or freehand methods should also be considered.
Participants include the ECE Student team, Jordyn Castor and the RCPD staff. We have also received
help from Dr. Mounia Ziat² and Dr. Vincent Lévesque³, who have published scholarly papers on haptics for
the blind. Dr. Ziat suggested the construction of a device she describes in one of her publications⁴.
These custom devices are shown in Fig. 2 and Fig. 4.
The custom devices that Dr. Ziat proposes offer an advantage for helping the blind: they use Braille
pin feedback, which Braille users perceive with heightened acuity. The typical haptic device for sighted
users (Fig. 3 and Fig. 5) may provide only single-point force feedback, which is not as informative as an
array of pins that can tell the user instantly the angle of a line as well as when other lines intersect. An
array of Braille cells could also provide textual feedback on command from the user, or vibratory
feedback.
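To illustrate how a line drawing sent over USB might map onto an array of raised pins, here is a sketch that rasterizes one line segment onto a small pin grid using integer Bresenham stepping. The grid size and coordinates are arbitrary assumptions for illustration, not parameters of any RCPD hardware.

```python
def raster_line(x0, y0, x1, y1, width, height):
    """Return the set of (x, y) pin positions a line segment raises.

    Integer Bresenham rasterization onto a width x height pin grid;
    the grid dimensions are illustrative, not hardware parameters."""
    pins = set()
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        if 0 <= x0 < width and 0 <= y0 < height:
            pins.add((x0, y0))  # raise this pin
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pins

# A diagonal across a 4x4 grid raises the four diagonal pins.
diag = raster_line(0, 0, 3, 3, 4, 4)
```

Rendering every segment of a drawing this way yields the full pin pattern, which is what makes a pin array convey line angle and intersections at a glance, as described above.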
Stephen Blosser
Assistive Technology Specialist
Michigan State University
Resource Center For Persons with Disabilities (RCPD)
120 Bessey Hall East Lansing, MI 48824-1033
Cell: 517 648-9191
² http://www.mouniaziat.com/
³ http://vlevesque.com/
⁴ http://myweb.nmu.edu/~mziat/Ziat-LNCS07.pdf
Team 4: Branden’s Detented Joystick
Sponsor: Michigan State University - Resource Center for Persons with Disabilities
Branden, seated, with part of his associate
team at Jackson Community College (JCC)
Mechanically detented joystick
Branden Bennett is a student at JCC with hopes to transfer from JCC to Michigan State University
(MSU).
Branden experiences athetoid cerebral palsy which causes many unwanted movements and interferes
with his speech, typing, pointing, and most other physical tasks. Branden used a custom mechanically
detented joystick and enter switch, created by John Eulenberg at MSU’s Artificial Language Laboratory
when he was an elementary student. Detenting is defined as providing areas of local stability to help
Branden stay on an item. It is similar to the old VHF tuners that clicked from position to position.
This joystick enabled Branden to independently select letters, words, or phrases and send them to a
computer for preparing documents or for spoken output. Because of the limitations of the mechanical
detenting mechanism, Branden experienced many errors in his typing. Adjustable springs, hydraulic
dampers, and a hardware-selectable number of detents made it difficult for Branden and his team to
fine-tune the device.
The goal of this ECE480 project is to update this joystick to a USB device that can be used with a PC or
Mac. This detented joystick must also be made programmable so that the detent force, number of
positions, and damping can be adjusted in software.
The skills needed for this project will be familiarity with microcontrollers, force sensors, linear variable
differential transformer (LVDT) or similar position sensors, and electromagnetic brakes. Knowledge of
currently available force feedback joysticks will also be helpful. These off-the-shelf or gaming
joysticks typically use motors, which do not provide sufficient force and position stability for Branden.
Stephen Blosser
Assistive Technology Specialist
Michigan State University
Resource Center For Persons with Disabilities (RCPD)
120 Bessey Hall East Lansing, MI 48824-1033
Cell: 517 648-9191
Team 5: Replacement Application for a) Transformer Load Metering and b) Natural Gas
Metering
Sponsor: ArcelorMittal Corporation
Description: ArcelorMittal is the #1 Steel producer in the world. This project concerns the integrated
steel mill in Burns Harbor, Indiana.
The Burns Harbor Central Load Dispatch depends on 50-year-old frequency conversion technology (GE
TF-10) to read and calculate load usage at the Finishing and Hot Mills, as well as a custom-built
proprietary system to read and calculate cumulative natural gas (NG) usage from incoming gas signals.
The project is to design two complete applications that can
(a) read a load from the main substation transformers
(b) read and calculate NG signals
The finished application will send the signals to Central Dispatch for monitoring and decision-making.
Features:
1) Must be able to effectively transmit load signals 1 mile, simulating the link from the remote substation
transformer to Central Dispatch.
2) Signals must connect as standard 4-20 mA or 1-5 V inputs to interface with a PLC.
3) Software (PLC or Calculation) must be provided to effectively measure load at transformer.
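Feature 2's standard 4-20 mA interface implies the usual linear scaling between loop current and the measured quantity, which the PLC or calculation software in feature 3 would apply. A sketch (the full-scale span value is hypothetical, not a plant figure):

```python
def current_to_load(current_ma: float, span_mw: float) -> float:
    """Convert a 4-20 mA loop current to a transformer load in MW.

    4 mA maps to zero load and 20 mA to full span; span_mw is a
    hypothetical full-scale load, not an ArcelorMittal value."""
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("loop current outside the 4-20 mA range")
    return (current_ma - 4.0) / 16.0 * span_mw
```

The same scaling applies in reverse for a 1-5 V input (1 V at zero, 5 V at full span), and an out-of-range reading usefully signals a broken loop.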
Aspects covered: Theory, applied technologies and software
Methodology: We would suggest having the team tour the Burns Harbor steel mill (about a 2½-hour
drive from East Lansing), meet with some of the young engineers, and see how the Central Dispatch
equipment is designed and how it currently functions.
Wm. J Sammon
ArcelorMittal Corporation
219-399-4650 (ofc) 219-929-8174 (cell)
william.sammon@arcelormittal.com
Team 6: ECG Demonstration Board
Project Sponsor: Texas Instruments-Precision Analog
The purpose of this project is to design, simulate, fabricate, test, and demonstrate a
TI-based ECG Demonstration Board. Individuals with a passion for any or all of the
following are desirable: analog circuit design/simulation, printed circuit board (PCB)
layout & fabrication, biomedical engineering, and hardware debugging.
The input signal will be generated by an ECG simulator (e.g. Cardiosim 2). The output
will be displayed on a Stellaris microcontroller evaluation module
(http://www.ti.com/tool/eks-lm3s3748). The microcontroller evaluation module is preprogrammed as a 2-channel oscilloscope. The team’s final deliverable is a PCB with
the analog circuitry that interfaces an ECG simulator with the microcontroller
evaluation module. Intermediate deliverables include the PCB layout, fabrication, and
testing of two analog circuits. The schematics for the two analog circuits will be
provided.
This project will utilize a variety of components from Texas Instruments’ broad
portfolio. This will qualify the project for participation in Texas Instruments' Analog
Design Contest.
Team members will have the opportunity to develop and/or strengthen the following
skills:
- Analog circuit design with op-amps, instrumentation amplifiers, and power devices
- SPICE simulation (TINA-TI)
- PCB layout, fabrication, and testing
- ECG fundamental principles
- Documentation
These skills will prepare group members well for post-graduation positions such as
applications & design engineering.
TI Contact: Pete Semig, Analog Applications Engineer
(semig@ti.com)
Team 7: Golf-Sierra User-Settable G-Switch
Sponsor: Instrumented Sensor Technology, Inc.
The g-switch is a small self-contained device that will open or close a relay contact in the event that a preset
g-level is exceeded. G-levels are to be measured whenever the device is turned on, using a built-in 3-axis
accelerometer and associated data acquisition and processing electronics.
The g-switch will operate in two modes: (A) peak detection; (B) rms level detection.
Whenever the unit is turned on and in either mode, the switch will activate (and a relay contact will be
closed/opened) whenever the preset peak g-level or rms level is exceeded. Once the relay has closed/opened, it
shall remain so (latched) until the unit is turned off and back on (reset).
There shall also be two flashing LEDs on the device, one red and one green. Once turned on and un-tripped, the
green LED shall flash once every 4 seconds. Once the device is tripped (latched), the green LED shall stop flashing
and the red LED shall flash once every 4 seconds to signal the condition of the switch.
Target operational specifications for the g-switch are as follows:
Bandwidth: DC to 50 Hz
Full scale range: 6g (approximate)
Measurement time resolution: 5msec (sampling interval)
RMS calculation window: 4 seconds
g-level trip settings: 0.25g, 0.50g, 1g, 2g, 4g, 6g
rms g-level trip settings: 0.0625g, 0.125g, 0.25g, 0.5g, 1g, 2g
Mode selection: By DIP switch on the instrument (peak/rms)
Trip level selection: By DIP switches on the instrument
Relay contact closure ratings: 125 volt/amp
Power supply: (1) either a single 9V battery or multiple AA batteries;
(2) optional plug for an AC adaptor, with battery backup from (1)
Battery powered operational life: 6 months
Mounting orientation: Fixed per accelerometer calibration to earth gravity
Size: NLT (not larger than) 12 cubic inches
Relay connection: Heavy duty screw terminal strip connection(s). One set normally open, one set normally
closed.
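The rms-mode trip logic implied by these specs (5 ms samples, a 4-second rms window, and a latched trip until power-cycle) can be sketched as an illustrative software model; this is not firmware for the device, and the class and method names are assumptions.

```python
from collections import deque
import math

class GSwitch:
    """Illustrative model of the rms-mode trip logic: 5 ms samples,
    a 4 s (800-sample) rms window, and a latching trip flag."""

    WINDOW = 800  # 4 s window / 5 ms sampling interval

    def __init__(self, trip_rms_g: float):
        self.trip_rms_g = trip_rms_g
        self.samples = deque(maxlen=self.WINDOW)  # sliding window of g^2
        self.tripped = False  # latches until the unit is reset

    def sample(self, g: float) -> bool:
        """Take one 5 ms acceleration sample; return the trip state."""
        self.samples.append(g * g)
        rms = math.sqrt(sum(self.samples) / len(self.samples))
        if rms > self.trip_rms_g:
            self.tripped = True  # latch: relay stays switched
        return self.tripped

    def reset(self):
        """Models turning the unit off and back on."""
        self.samples.clear()
        self.tripped = False
```

Peak mode would be the degenerate case of comparing each raw sample against the peak trip setting; the latching behavior is identical.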
Anticipated Applications: Earthquake machinery shutdown, industrial machinery excessive vibration failure
shutdown, blower fan monitoring, seismic activity warning device, traffic bridge seismic closure gates, paper mill
roll vibration monitoring, high value cargo transportation monitoring.
Contact: Greg Hoshal 517-349-8487
Instrumented Sensor Technology, Inc.
Email: hoshal@isthq.com
Team 8: Programmable User Interface to RFID Sensor Platforms
Sponsor: MSU Technologies
Of great concern to state and federal governments is the condition of our nation’s infrastructure.
For instance, the American Recovery and Reinvestment Act set aside almost $72 billion for repairs
and fresh infrastructure projects. According to the American Society of Civil Engineers, that is just
a down payment on the $2.2 trillion that should be spent through 2014. Assigning priorities to
infrastructure projects requires long-term autonomous health status and usage monitoring that
provides data-driven inspection, maintenance and replacement schedules. Sensors that can be
embedded in or retrofitted to structures and autonomously collect, process, save and communicate
pertinent information over the long-term would serve these unmet needs. The objective of the
senior design project will be to develop a programmable radio-frequency identification (RFID)
sensor interface to control an application specific chipset developed at MSU for embedded
structural health monitoring. The system level architecture of the entire system is shown in Fig. 1.
The description of each of the modules can be found below.
Figure 1: Block diagram of the modularization of the MSU Sensor IC. An RFID reader is used to provide power
to an Intel WISP device, which functions as the host communications device. (The diagram shows a User
Interface Module, comprising the RFID reader and a user interface device, and a User Sensor Module,
comprising the Intel WISP (energy harvesting, DC-to-DC up-converter, MSP430 µController) and the MSU
Sensor IC (DC-to-DC converter, digital state machine, and self-powered piezoelectric core), linked by digital
commands and an analog output.)
Statement of Work
2.1 Intel WISP
The Wireless Identification and Sensing Platform (WISP) will be the host communications device
from the perspective of the MSU sensor IC. The WISP integrates RF energy harvesting from the
RFID reader, DC-to-DC up conversion, an MSP430 series micro-controller, EEPROM, digital
I/O, several channels of analog voltage sensing and capacitive sensing. The Gen 2 protocol is
implemented on the MSP430 micro-controller so as to “look” like an RFID tag; the WISP can be
read and written by a Gen 2 RFID reader supporting the Low-Level Reader Protocol (LLRP). More details
on the Intel WISP platform can be obtained from http://www.seattle.intel-research.net/WISP.
The senior design group will use the WISP device to set up, control, and read analog data from the MSU
Sensor IC. Through instructions issued by the RFID reader, the user will initiate the sequence of operations
designed into the MSU Sensor IC’s digital state machine, control the number of times this sequence is run,
and issue the individual commands to the MSU Sensor IC (ERASE, PROGRAM, READ, and NEXT).
2.2 RFID Reader
The senior design group will use a COTS (commercial off-the-shelf) Gen 2 RFID reader to facilitate
programming, setup and reading of the MSU sensor IC via the WISP host. To accomplish this, the group
will have to gain access to low-level commands within the RFID reader. Fortunately, the EPC Gen 2
standard includes the LLRP application programming interface, allowing access to commands that write user
data to the WISP host as well as read data from the WISP host.
The LLRP interface protocol is low-level since it provides control of RFID air protocol operation timing
and access to air protocol command parameters. LLRP is specifically concerned with providing the formats
and procedures of communications between a Client and a Reader. Communications between the client
(the controlling application) and the reader (the RFID reader) are of a request-response type: requests/commands
from the client to the reader, and responses from the reader to the client.
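To make the request-response framing concrete, the sketch below packs an LLRP-style binary message header (as commonly described for LLRP: 16 bits of reserved/version/type, a 32-bit total length that includes the header, and a 32-bit message ID). This is a hedged illustration; the team should verify field widths against the LLRP specification before building on it.

```python
import struct

def llrp_header(version: int, msg_type: int, msg_id: int,
                payload_len: int) -> bytes:
    """Pack an LLRP-style 10-byte message header.

    Layout as commonly described for LLRP (verify against the spec):
    3 reserved bits, 3 version bits, 10 message-type bits, then a
    32-bit total length (header included) and a 32-bit message ID."""
    type_field = ((version & 0x7) << 10) | (msg_type & 0x3FF)
    total_len = 10 + payload_len
    return struct.pack("!HII", type_field, total_len, msg_id)

# A header-only message (empty payload) with an arbitrary message ID.
hdr = llrp_header(version=1, msg_type=1, msg_id=42, payload_len=0)
```

Each request the client sends carries such a header followed by type-specific parameters, and the reader's response echoes the message ID, which is how responses are matched to requests.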
2.3 User Interface Device
A computer directly connected to the RFID reader could provide the user interface. The senior design
group will investigate the design of a graphical user interface (GUI) using Microsoft Visual Studio or using
HTML. The GUI will permit access to the RFID reader’s LLRP command set thereby directly
communicating with the WISP module and indirectly communicating with the MSU Sensor IC.
Contact: Prof. S. Chakrabartty
(shantanu@egr.msu.edu)
Team 9: Robotic Transportation Vehicle
Sponsor: Texas Instruments
Abstract:
The device will be built on a 4-wheeled platform with electric motors driving 2 of the 4 wheels. The
device will have a control card, based upon a TI DSP and motor drive card, to control the operation of the
motors. The central processing element will be a Stellaris Cortex M3. This processor will interpret
commands and will communicate with the wheel motor controller to determine the direction and speed
of the wheel motors.
Specific Details:
Use a mobile robot mechanical base (available from robotshop.com) or similar.
Replace the DC brushed motors with brushless DC (BLDC) motors.
Use a DRV8312 EVM as the controller for the wheels. It will use a C2000 DSP control card.
Use a Stellaris Cortex M3 EVM as a central controller.
Develop an interface between the M3 EVM and the DRV8312.
This interface will allow commands from the M3 to be interpreted by the DRV8312 wheel controller.
Add an RF interface to a Chronos watch to allow commands to be sent to the robot for movement.
Movement:
The robot will operate in autonomous mode (code provided) so that it will move around the room on its
own, and it must be able to be switched from autonomous mode to user input to drive the motors.
Key development activities:
Put together the robot base (there should not be any significant mechanical requirements).
Put BLDC or PMSM motors on the wheels, replacing the brushed DC motors that are usually standard.
Develop an interface between the Stellaris LMS103 M3 EVM and the DRV8312 EVM (C2000
controller) so commands can be interpreted into motor movement.
Put together an RF interface using a TI interface (ideally use a Chronos watch to control the device).
Develop the SW to drive the robot around the room.
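At its core, the M3-to-DRV8312 interface translates high-level motion commands (from the watch or the autonomous code) into per-wheel speed setpoints for the two driven wheels. A differential-drive sketch follows; the command names and normalized speed units are illustrative assumptions, not part of the TI deliverables.

```python
def wheel_speeds(command: str, speed: float) -> tuple:
    """Map a high-level motion command to (left, right) wheel speed
    setpoints for a two-wheel differential drive.

    Command names and the normalized speed units are illustrative
    assumptions; the real interface would issue DRV8312 setpoints."""
    table = {
        "forward": (speed, speed),
        "reverse": (-speed, -speed),
        "left":    (-speed, speed),   # spin in place
        "right":   (speed, -speed),
        "stop":    (0.0, 0.0),
    }
    if command not in table:
        raise ValueError("unknown command: " + command)
    return table[command]
```

On the real hardware these setpoints would be sent from the M3 to the C2000 control card (e.g. over a serial or PWM link) rather than returned as a tuple.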
Items Provided:
1: DRV8312 EVM
2: Stellaris M3 EVM
3: BLDC motor control SW (use controlSUITE, available online)
4: Autonomous control SW for M3 (will have to be adapted to this platform)
5: SW development tools (Code Composer Studio)
The RF communication software must be created.
Stellaris LMS103 Autonomous Vehicle Navigation Software:
Utilize StellarisWare robot navigation SW. The vehicle navigation SW and prototype hardware have
already been completed by the Stellaris team. Plan: incorporate this design and SW into the Motor Lab
demo platform.
http://www.youtube.com/watch?v=M-7C7TIYJ8I
Software will come from this vehicle. The device should be similar, but will use a DRV8312 to drive the
motors and a Cortex M3 as the central controller.
It must have a battery, but REGEN (regenerative braking) is NOT required.
Concept diagram: wheel motor drive and control hardware and algorithms (developed by the Motor Lab);
battery management.
Contact: Timothy A. Adcock
Director, TI Motor Lab
zoot@ti.com