Quantitative Flow Visualization for Large Scale Wind Tunnels

R. Bömmels, M. Machacek, A. Landolt, and T. Rösgen
Institute of Fluid Dynamics, ETH Zurich, Switzerland

Introduction

Despite the increased use of numerical simulations in the development and performance optimization of aerodynamic vehicles, wind tunnel tests are still of fundamental importance in the related engineering design process. In order to keep the cost down and to increase the data return from the expensive measurement campaigns, there is an interest in improving and expanding the diagnostic tools available.

Recent developments in modern measurement technology, especially in the areas of digital imaging and photonics, have led to the introduction of a number of computer-based, quantitative flow visualization tools. While these techniques have rapidly gained acceptance in laboratory research, their introduction into the domain of commercial testing has been slower. This is due not only to the cost involved but also to a number of technical reasons. The reliability, precision and flexibility of the new methods still have to be improved to convert them into turn-key tools. Furthermore, there are problems associated with the scale-up from bench-top hardware to systems operating in considerably larger and less controlled environments.

This paper addresses some of those scaling issues. The specific requirements for large scale testing are analyzed, and a number of recent flow visualization and measurement techniques are described which should be well suited for such applications.

Generic Issues in Large Scale Flow Visualization

Image based flow measurement and visualization techniques provide a number of desirable features such as non-intrusiveness, remote access, and the simultaneous processing of multiple measurement stations. This last feature also enables the computation of spatial gradients in unsteady flows, which is useful in the detection and analysis of spatial flow structures. These properties have led to the widespread introduction of such techniques in the laboratory, where, for example, particle image velocimetry (PIV) has developed into an almost standard method within the last ten years (Kompenhans 2000). In large scale facilities such as wind tunnels, there are a number of generic problems which have delayed similar progress. Although these problems are at times simple and of a purely technical nature, they can still affect the functionality and performance limits of various imaging techniques at a very fundamental level.

Most camera-based measurement methods require some form of active scene illumination. Depending on the capabilities of the cameras (high speed or slow scan, direct integration or intensified, large or small dynamic range, etc.), this lighting may have to be pulsed or continuous, wide band (white) or spectrally narrow-band, and often of considerable intensity. Incandescent lamps, lasers, high-powered flash lamps or other, more specialized sources can usually be found to meet these requirements. However, when arbitrary aerodynamic models are inserted into the field of view, spurious surface reflections may arise which cannot easily be controlled or eliminated. Since it is often the flow close to the model surface which is of particular interest, the cameras' pixels can become saturated and the information is lost where it counts most.
Finding a locally optimized arrangement of illumination and observation angles may temporarily solve the problem, but it will immediately arise again if the configuration is changed.

Another fairly common requirement in air flow diagnostics is the generation of flow seeding. Usually, micron-sized particles or droplets have to be chosen so that the tracers can follow the flow with sufficient fidelity. Since the backscatter signatures tend to be small, fairly high tracer densities have to be generated, which may affect the overall visibility of the model and the optical access in general, defeating in part the original purpose. More importantly, in many imaging techniques the individual tracers have to be spatially resolved by the recording device in order to extract the desired information. This puts constraints on the cameras' resolution and optical magnification which may in turn severely limit the size of the field of view. In addition, there are issues regarding the chosen tracers' lifetime (too short, too long) and the environmental and health impact of the sometimes corrosive or poisonous substances involved.

For imaging techniques that require either long integration periods or comparisons with reference images, another problem may arise from a possible lack of configurational stability. Model movements on the supports, facility vibration and other uncontrolled effects may affect the precision with which the position of the measurement points on the model surface can be determined. In techniques such as pressure sensitive paint monitoring, where a pixel-by-pixel referencing is necessary, even very small displacements can create significant changes in the ratio images.

Finally, there are simple operational and financial constraints that have to be accommodated. While the actual data taking with an imaging setup may be done in a few seconds, the calibration and adaptation of that setup can be an hour-long activity. This may lead to the situation where a specific measurement technique is ruled out purely because of its operational complexity.

Taking into account these technical constraints, there remain a few basic choices which have to be made in the design and selection of a large scale, quantitative flow visualization system. In the laboratory, a certain preference has developed for imaging configurations which exploit the Eulerian view of the flow, as in PIV. This may be due to technical reasons (e.g. the need for strong illumination, available only in light sheets) but is also related to the interest in spatial flow structures and gradient properties. As a consequence, the flows under investigation are often "tailored" to accommodate an essentially planar image acquisition philosophy. Furthermore, the amount of raw data produced and the associated transfer and storage requirements for a full 3-dimensional measurement grid would become prohibitively large. In flows around complex model surfaces, however, three-dimensionality is almost always present and a volumetric view of the measurement domain is necessary. In order to keep the data complexity at a reasonable level, it may be more advantageous to rely on Lagrangian techniques, which track localized flow features in time rather than providing spatially resolved snapshots.

A similar duality is present when one looks at the data processing philosophies.
The established approach is time-global: all data acquired during a measurement run are stored and utilized in a post-processing step to extract the desired information. The need for fast, near real-time data visualization and efficient data storage, however, favors incremental strategies where the incoming data are processed "on the fly" with only limited knowledge of the temporal flow evolution. Especially for Lagrangian techniques (e.g. particle tracking methods), the choice of such an incremental approach has a significant impact on the overall processing scheme.

Finally, a balance has to be found between the requirements for fast data analysis / presentation and the achievable measurement accuracy. For visualization-oriented schemes, the temporal coherence of the data may be more important than the ultimate precision of each individual data point reading, creating a requirement for efficient error rejection, but not necessarily for error correction. In other situations, the priorities are reversed in that accuracy is more important than speed, and an increased temporal effort (including averaging over multiple measurements or recursive processing) is considered acceptable.

In view of these considerations, one may state that large scale diagnostic applications create their own set of requirements which may differ from those applicable to laboratory-style research. The measurement techniques to be applied have to be adapted, especially regarding their operational characteristics. At times, even the development of new methods designed specifically for large scale environments may become necessary.

Candidate Technologies and Techniques

At the Institute of Fluid Dynamics of ETH, a medium sized wind tunnel (2 m x 3 m test section, see Fig. 1), built in the 1930s by J. Ackeret, is operated for educational and research purposes.

Fig. 1: Medium scale wind tunnel at IFD / ETHZ

This facility is also used as a test bed for a number of advanced quantitative imaging techniques which are being developed with true large scale applications in mind.

Pressure sensitive paints are being studied in cooperation with RUAG Aerospace (CH). As the technique has obvious potential for wind tunnel applications, the research concentrates specifically on ways to eliminate the inherent temperature sensitivity of the fluorescent paints used and the wind-on / wind-off calibration procedure (see also Engler 2000).
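To put the wind-on / wind-off procedure in context, the sketch below shows the conventional pixel-by-pixel ratio evaluation based on the Stern-Volmer relation I_ref / I = A(T) + B(T) p / p_ref. It is a minimal illustration only; the coefficients are placeholders, and their temperature dependence is precisely the sensitivity that the research described above aims to eliminate.

    import numpy as np

    def psp_pressure(wind_on, wind_off, p_ref, A=0.2, B=0.8):
        """Convert a wind-on / wind-off image pair into a pressure map (units of p_ref).
        A and B are illustrative Stern-Volmer coefficients; in practice they come
        from a paint calibration and depend on temperature."""
        ratio = wind_off / np.clip(wind_on, 1e-6, None)   # I_ref / I, pixel by pixel
        return p_ref * (ratio - A) / B

    # At reference conditions the two images are identical, the ratio is 1 and,
    # with A + B = 1, the recovered pressure equals p_ref:
    img = np.full((2, 2), 850.0)
    print(psp_pressure(img, img, p_ref=101325.0))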
A pulsed infrared thermography system has been installed to provide a tool for the rapid visualization of laminar-turbulent transition and separation lines on model surfaces. It utilizes a high power stroboscope (7500 W, approx. 7 J / flash) in conjunction with a midwave (3-5 µm) infrared camera (20 mK NETD) to detect the subtle changes in surface heat transfer associated with those flow phenomena. The technical challenge lies in the development of imaging strategies which are independent of the model's surface structure and composition, both of which can strongly affect the infrared signature (Le Sant 2002).

In the area of flow velocimetry, a Doppler global velocimeter is being developed based on a custom narrow band pulsed laser with two independent oscillators. This permits its simultaneous use as a PIV system for comparison, as well as the measurement of 3-dimensional velocity vectors with a single camera / single view point arrangement.

At the same time, a particle tracking velocimeter is being used to provide a Lagrangian view of the flows under investigation. A pair of high speed cameras tracks the motion of helium-filled soap bubbles which serve as low-inertia tracers. The resulting three-dimensional path lines can be used to determine velocities and topological information.

Finally, activities are under way to develop a simple real-time visualization tool to enhance the information obtained from operating standard smoke probes. Here the emphasis is strictly on the fast processing of the visual information, not on the extraction of detailed quantitative data.

In the following, three of the above mentioned examples are described in more detail, as they highlight the points made about large scale diagnostics.

Doppler Global Velocimetry

Doppler global velocimetry is a planar imaging technique designed to measure three-dimensional velocity components. Instead of analyzing the displacement of individual tracers (as in particle tracking velocimetry, PTV) or of small tracer clusters (as in particle image velocimetry, PIV), one looks at the optical Doppler shift of the moving tracers (Mosedale 2000, Samimy 2000). The great advantage of doing this is that the individual particles no longer have to be resolved by the recording camera. Any feature that moves with the flow and crosses the illuminating light sheet will create the desired signature. The camera magnification can be set so as to image the whole test section, and one of the major drawbacks of PTV / PIV - the limited recording area - is avoided.

Fig. 2: Components of a DGV setup - seeded long pulse laser (left), dual camera / iodine filter cell (right)

The price one has to pay for this improvement is twofold. First, the lasers required to generate the light sheet have to be very stable and must emit very narrow-band radiation with an optical line width of, say, below 20 MHz. Such lasers are available as CW systems, but the illumination in large facilities demands higher intensities only available from pulsed systems (e.g. Nd:YAG lasers). Here a so-called injection seeder has to be used. A low power, highly stable ring laser injects seed radiation into the cavity of a Q-switched oscillator, which leads to longitudinal mode selection and stabilization. Since the optical bandwidth of the emitted radiation is inversely proportional to the pulse length / roundtrip time, the laser cavities can become fairly large (several meters, see Fig. 2) to support the desired bandwidth reduction.

The relative frequency shifts caused by the moving tracer clouds are on the order of the ratio of flow velocity to the speed of light, a very small number indeed. The resulting Doppler frequencies in the MHz regime cannot be detected directly since one is using an integrating device (a CCD camera) - imaging heterodyne detectors are not (yet) available for those frequencies. A molecular filter cell filled with iodine is therefore used to convert frequency changes into intensity variations. By tuning the laser frequency onto the edge of one of the absorption lines of iodine, any Doppler shift in the imaged scenery can be observed as a change in transmission (Fig. 3). A camera pair monitors the filtered and unfiltered images, and the ratio of the two images encodes the velocity component along the Doppler sensitive direction (Fig. 2). Since each pixel can be processed separately and independently, a very high spatial resolution can be achieved.
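The per-pixel evaluation can be illustrated with the short sketch below. It assumes a calibrated, monotonic transmission curve of the iodine cell around the laser frequency and a known Doppler sensitivity factor; all numbers and names are illustrative placeholders rather than parameters of the system described here.

    import numpy as np

    C = 2.998e8      # speed of light [m/s]
    NU0 = 5.64e14    # nominal laser frequency [Hz], order of magnitude for 532 nm

    # Hypothetical iodine cell calibration: transmission vs. frequency offset from NU0,
    # modeled here as a smooth absorption line edge.
    dnu_cal = np.linspace(-4e8, 4e8, 81)            # [Hz]
    T_cal = 0.5 + 0.4 * np.tanh(dnu_cal / 2e8)      # monotonically increasing

    def doppler_velocity(filtered_img, unfiltered_img, sensitivity=1.0):
        """Velocity component along the Doppler sensitive direction (o_hat - l_hat),
        per pixel; 'sensitivity' stands for |o_hat - l_hat| of the actual geometry."""
        ratio = filtered_img / np.clip(unfiltered_img, 1e-6, None)   # transmission estimate
        dnu = np.interp(ratio, T_cal, dnu_cal)       # invert the calibration curve
        return C * dnu / (NU0 * sensitivity)         # Doppler relation: dnu = NU0 * u / C

    # Synthetic 2 x 2 image pair: transmissions slightly above / below the line center.
    unfiltered = np.full((2, 2), 1000.0)
    filtered = np.array([[500.0, 540.0], [460.0, 500.0]])
    print(doppler_velocity(filtered, unfiltered))    # per-pixel velocity estimates [m/s]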
Fig. 3: Operating principle of a molecular filter cell in DGV

The system operated at IFD is more advanced in that it runs two seeded lasers in parallel to provide a double pulse capability as well. This facilitates the use of the lasers in a PIV mode for comparison. In fact, if the scale-up / resolution issue is not critical, a simultaneous DGV / PIV measurement is possible. In such an arrangement, the out-of-plane (DGV) and in-plane (PIV) velocity components in the light sheet can be measured with the same camera system, creating a true 3-component velocimeter. (Normally, different velocity components are measured with separate camera systems in an all-DGV configuration.)

Besides the advantages cited so far, DGV as a large scale imaging technique is also affected by some inherent limitations. First of all, the intensity ratioing approach for the computation of velocities makes DGV an essentially analogue technique. The intensity reading of a single camera pixel is affected by a number of factors (sensor non-uniformities, background noise, gain non-linearities, etc.) which have to be carefully compensated. In addition, pulsed laser illumination leads to the appearance of speckles in the images which have to be eliminated. Spatial filtering can achieve this, but at the cost of reduced spatial resolution.

Figure 4 shows an image sequence acquired with a special test target (rotating disc). The processed ratio image shows a component of the overall disc rotation velocity vector. Note also that the surface features can be used in connection with a second image pair to compute the in-plane velocity components with PIV.

Fig. 4: DGV images of a rotating disc target: filtered (left), unfiltered (center), processed (right)

3D Particle Path Line Tracking

The path line tracking system described next may serve as an example of a Lagrangian imaging technique. It is based on the tracking of helium-filled soap bubbles as they flow through the field of view of a stereo camera system (Fig. 5). In contrast to most established tracking schemes, the bubbles are not imaged with a short-time flash exposure but rather under continuous lighting. Their signature on the integrating CCD sensors (120 frames/s) is thus a short, continuous path line segment. The correspondence problem is solved by looking at the bubble signatures as they cross the boundaries between consecutive image frames. This creates a unique coordinate in space-time which can easily be linked to the complementary second camera view using the epipolar matching condition known from photogrammetry theory. Velocity information is derived from the measurement of the bubbles' displacement within and across the individual image frames (Machacek 2002).

While the approach is quite robust, the technique is affected by some of the problems listed above. Reflections from walls or models can become quite prominent, and the best tracking results tend to be obtained in the wake regions away from any solid surface. The situation may be improved by increasing the number of independent views of the scene (i.e. cameras), but other alternatives appear to be more promising in the long term.
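To make the epipolar matching step mentioned above concrete, a minimal sketch is given below: candidate bubble signatures from the two views are paired only if they satisfy the epipolar constraint to within a pixel tolerance. The fundamental matrix and the coordinates are illustrative placeholders (an ideally rectified camera pair), not data from the actual system.

    import numpy as np

    def epipolar_distance(F, x1, x2):
        """Distance (in pixels) of point x2 in camera 2 from the epipolar line F @ x1
        induced by point x1 in camera 1; x1, x2 are homogeneous coordinates (u, v, 1)."""
        l = F @ x1                                   # epipolar line [a, b, c]
        return abs(l @ x2) / np.hypot(l[0], l[1])

    def match_candidates(F, pts1, pts2, tol=1.5):
        """Return all index pairs (i, j) whose epipolar distance is below tol pixels."""
        return [(i, j) for i, p1 in enumerate(pts1)
                       for j, p2 in enumerate(pts2)
                       if epipolar_distance(F, p1, p2) < tol]

    # Illustrative fundamental matrix of an ideally rectified stereo pair
    # (the constraint then reduces to equal image rows, v1 = v2).
    F = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
    pts1 = [np.array([320.0, 240.0, 1.0])]
    pts2 = [np.array([405.0, 241.0, 1.0])]
    print(match_candidates(F, pts1, pts2))           # -> [(0, 0)]

In the actual system the space-time coordinate of the frame-boundary crossing provides the additional constraint that resolves ambiguous pairings along the epipolar line.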
Preliminary studies have been performed on the utility of fluorescent and / or smoke-filled bubbles, which could increase the image contrast in the surface reflection regions. However, issues regarding environmental safety and pollution have to be considered and have not yet been resolved.

Fig. 5: Schematic of the stereo path line tracking system

Another aspect of generic importance is the photogrammetric calibration of the camera systems involved. This can be divided into the tasks of finding the cameras' inner parameters (e.g. lens distortion, focal lengths) and the outer parameters such as camera orientation in space and relative positioning (Ruyten 2002). The measurement of the spatial orientation in particular requires a calibration or reference structure of known geometry which has to be inserted into the field of view. When large volumes have to be imaged, these calibration structures also tend to become very large and difficult to handle and position.

Fig. 6: Outer parameter calibration using a synthesized calibration target: calibration rod with point-source LEDs (left); cloud of accumulated reference points (right)

In the present arrangement, the outer parameters are determined using a synthetic target created by moving a calibration rod (length L = 0.5 m) through the measurement volume (see Fig. 6). Two point-source LEDs on the rod create a "cloud" of reference points with a known relative distance from which the unknown parameters can be deduced (Borghese 2000).

A typical result of a reconstructed path line field is shown in Fig. 7. The vortex flow behind a delta wing is clearly visible in this 3D rendering of the reconstructed bubble tracks. The color coding of the individual tracks indicates the relative velocity of the helium bubbles forming the path lines.

Fig. 7: Flow behind a delta wing visualized using 3D reconstructed path lines

Direct Digital Visualization

As a last example, some elements and results of a direct visualization technique will be described. Here the idea is to reduce the amount of data processing and, more importantly, data interpretation to a minimum and to simply produce an enhanced view of the measurement scene which is interpreted by the observer rather than the computer. This type of context-insensitive processing can usually be done in real time, that is, faster than the normal camera acquisition frame rate (25 frames/s). Since the data are updated rapidly, no sophisticated noise reduction and / or error handling is required: the human observer is usually very tolerant towards transient visualization faults.

The main advantage of such a tool, its true interactivity, is difficult to depict in static images. To give an indication, Fig. 8 shows a typical scene from a wind tunnel test where a smoke probe is used to visualize the flow around a car. The second image shows a streak line picture which is assembled out of the live camera frames in real time. As the smoke probe is traversed, the computer builds up a cumulative image of all streak lines as they are extracted from the smoke trails.
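A minimal sketch of this kind of context-insensitive, frame-by-frame processing is given below. The background-subtraction and running-maximum scheme, as well as the threshold, are illustrative assumptions and not the actual implementation used at IFD.

    import numpy as np

    def accumulate_streaks(frames, background, threshold=10):
        """Build a cumulative streak line image from live grayscale frames.
        frames: iterable of uint8 camera images; background: wind-off reference image."""
        streaks = np.zeros_like(background)
        for frame in frames:
            # Keep only pixels that are noticeably brighter than the static background ...
            smoke = np.where(frame.astype(np.int16) - background > threshold, frame, 0)
            # ... and accumulate them as a running maximum: no temporal averaging and
            # no error correction, in line with the tolerance for transient faults.
            streaks = np.maximum(streaks, smoke.astype(background.dtype))
        return streaks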
The challenges in this type of processing are once again primarily the lighting and the choice of seeding. In addition, fast processing hardware is required, but the rapid increase in computing power means that dedicated hardware such as pipelined image processors can successively be replaced by general purpose CPUs.

Fig. 8: Real-time processing of smoke trails into a cumulative streak line picture

Summary

Quantitative flow visualization based on the processing of remotely acquired image data can significantly enhance the understanding and analysis of engineering flows. Modern techniques such as Doppler global velocimetry, pressure sensitive (fluorescent) paints or infrared thermography make use of advanced lasers and image detectors to provide the optical raw data that are subsequently transformed into the desired flow information (velocity, temperature, pressure, etc.). While the different principles of operation have been successfully verified, there are still a number of generic technical issues which have to be resolved before one can expect routine use in commercial facilities.

Examples were given for a DGV system, a 3D particle tracking velocimeter and a simple yet powerful real-time visualization tool. All systems are designed to operate irrespective of the large scales involved (several meters field of view). The control of the illumination and of the seeding density remains a problem which is directly related to the geometrical size and complexity of the measurement environment. Such scaling issues will remain among the specific problems of large scale flow diagnostics which are not found on the laboratory scale and must thus be addressed separately.

References

Borghese NA; Cerveri P (2000) Calibrating a video camera pair with a rigid bar, Pattern Recognition 33: 81-95
Engler RH; Klein C; Trinks O (2000) Pressure sensitive paint systems for pressure distribution measurements in wind tunnels and turbomachines, Meas. Sci. Technol. 11: 1077-1085
Kompenhans J; Raffel M et al (2000) Particle Image Velocimetry in Aerodynamics: Technology and Applications in Wind Tunnels, J. of Visualization 2: 229-244
Le Sant Y; Marchand M; Millan P; Fontaine J (2002) An overview of infrared thermography techniques used in large wind tunnels, Aerospace Sci. Technol. 6: 355-366
Machacek M; Rösgen T (2002) Photogrammetric and Image Processing Aspects in Quantitative Flow Visualization, Ann. N.Y. Acad. Sci. 972: 36-42
Mosedale AD; Elliot GS; Carter CD; Beutner TJ (2000) Planar Doppler Velocimetry in a Large-Scale Facility, AIAA Journal 38 (6): 1010-1024
Ruyten W (2002) More Photogrammetry for Wind-Tunnel Testing, AIAA Journal 40 (7): 1277-1283
Samimy M; Wernet MP (2000) Review of Planar Multi-Component Velocimetry in High-Speed Flows, AIAA Journal 38 (4): 553-574