
Machine Learning for Fluid Mechanics

In the 21st century, machine learning is rapidly becoming an integral part of everyday life. Every month there are huge advances in artificial intelligence and machine learning, especially in the image sciences. Anytime you are dealing with images there is so much that machine learning has to offer,
and previously almost unimaginable tasks, like turning a static image of the Mona Lisa into a movie,
are now possible with some of these machine learning techniques. So this begs the question: how much of this technology, how much of machine learning, can actually carry over into the physical sciences and engineering,
and in particular into my field of engineering fluid mechanics? I'm Steve Brunton, and I'm going to walk you through how machine learning can be used for fluid mechanics in a number of vignettes and examples.
So this is a huge field with absolutely massive progress happening in machine learning
for the physical sciences, for dynamical systems, and especially for fluid mechanics, where we have vast and increasing amounts of data from simulations and from experiments. I'm going to walk you through, at a very high level, some of my favorite examples and applications of how machine learning is being used for fluid mechanics today.

I want to start by walking you through what I think of when I think of machine learning, especially applied to engineering systems. Fluids are absolutely everywhere: they play a central role in nearly every trillion-dollar industry, including health, defense, transportation, and energy. We literally live and work inside of a fluid, and so do all of our machines. Most of the tasks that we have in fluid mechanics, for example model reduction, or getting reduced-order models; controlling those fluid flows; figuring out where to put sensors to measure them; and closure models to reduce the complexity,
these really critical fluid dynamics tasks will be important in solving many of the technological challenges of the next century. If you think of any science-fiction vision of the future, a lot of it is going to be enabled by advanced fluid mechanics models and controls: better energy systems, better transportation systems, you name it; again, fluids are going to play a central part. What I want to point out is that all of these tasks, or almost all of these tasks in fluid mechanics, can naturally be written as hard optimization problems. These optimization problems in fluids are very challenging; that's why it's hard to make progress in these fields, and that's why they're so important.
These optimizations are nonlinear, non-convex, multiscale, and very high dimensional, but I want to point out that that is exactly what machine learning is getting really good at. So if I want to pigeonhole, or make a blanket statement about, machine learning,
I'm going to say that machine learning is building models from data using optimization. Or, more generally, machine learning is a growing set of techniques for optimization — for really high-dimensional, non-convex optimizations — based on a growing wealth of data. So machine learning is optimization with data, and most of the problems we have in fluid mechanics can be naturally written as optimization problems. These go together extremely naturally, and what we're going to see is that lots and lots of machine learning techniques can be directly applied to solving these fluids optimization problems: modeling, control, sensing, and so on. When I say high dimensional, I mean that the fluid flow itself has many, many degrees of freedom; I might need a million or a billion degrees of freedom to simulate a turbulent fluid in a computer, so it's very high dimensional. It's nonlinear because it's governed by a nonlinear partial differential equation, the Navier-Stokes equations, and those combine to make these optimization problems very non-convex, so there are tons of local minima. But again, we're going to use emerging techniques in machine learning to start gaining traction in these fluid dynamics tasks.
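To make the "models from data via optimization" framing concrete, here is a deliberately tiny, purely illustrative sketch (not from the lecture): fitting a two-parameter model to synthetic data by gradient descent on a mean-squared-error loss. Real fluids problems replace this convex toy loss with the high-dimensional, non-convex objectives described above.

```python
import numpy as np

# Synthetic "data": noisy samples of y = 2.0*x - 1.0 (values chosen only for illustration)
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x - 1.0 + 0.1 * rng.standard_normal(200)

# Model y ~ w*x + b; "learning" = minimizing the mean-squared error by
# gradient descent, i.e. building a model from data using optimization.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = w * x + b - y
    w -= lr * 2.0 * np.mean(err * x)   # dL/dw
    b -= lr * 2.0 * np.mean(err)       # dL/db

print(f"learned w={w:.2f}, b={b:.2f}  (true values were 2.0, -1.0)")
```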
I also want to point out that machine learning is not magic. A lot of people who are not in machine learning research, or who don't have a background in applied math or statistics — when I talk to business leaders, oftentimes in our initial conversations they think machine learning is like a Harry Potter magic wand: you have a problem, you point your machine learning at it, and it becomes solved. That's not at all how this works, so it's really important to dispel the myth that machine learning is magic. It is very understandable in terms of linear algebra, optimization, and statistics, and I think this is really important: we can't let machine learning become a black box that is indistinguishable from magic.

To that point, we need not just machine learning solutions in the physical sciences and in fluid mechanics; we desperately need models that are interpretable and generalizable — models that we can understand, communicate, and work with, and models that we can trust will work even in new situations that we haven't seen before. Instead of rigorously defining what I mean by these, I want to give you an example. I think of Newton's second law, F = ma, as the ultimate interpretable and generalizable model. It's interpretable because it's very simple: it's written in terms of force, mass, and acceleration. And it's generalizable in the sense that Newton could discover this law from an apple falling from a tree on Earth, and the law still works when we design a rocket mission from the Earth to the Moon. So this model is interpretable and generalizable; it covers a range of phenomena, oftentimes even phenomena that we hadn't seen before. That is what we want in our machine learning models, especially when we apply them to systems in engineering like fluid mechanics, where they might be safety-critical — part of a system on an airplane or in a car. And I love this quote; you've heard me quote Einstein before: everything should be made as simple as possible, but not simpler. This principle of parsimony, or simplicity, is one of the reasons we promote machine learning models that are both interpretable and generalizable. Mathematically, what we're looking for oftentimes are models that are sparse, so they only have a few terms, like F = ma or E = mc^2; low dimensional, so they're not in terms of a million degrees of freedom like a fluid dynamics simulation would be; and robust in lots of ways — robust to noise and disturbances, robust to bad information that goes into the training, and robust to new scenarios that we haven't seen before. That is what we want in machine learning for fluid mechanics in general.

Now, a lot of what I'm going to talk about in this video, and in related future videos, follows a review paper that I wrote with Bernd Noack and Petros Koumoutsakos on machine learning for fluid mechanics; it's in the Annual Review of Fluid Mechanics from 2020, and you can download the paper on the arXiv. Here's a figure that didn't make it into the review, but I like it a lot: at a very high level it points out that fluid dynamics generates tremendous amounts of data — we have theory, simulations, and experiments all generating fluid dynamics data — and machine learning can be thought of as a common information
processing framework, so that you can integrate this flow data and use it to start solving these really hard optimization problems in modeling, control, and reduction. That common framework for understanding all of this fluids data is really critical.

And just a point of history that I think is fun: when Bernd, Petros, and I started working on this review together, we had separate conversations about the history of machine learning in fluids. It turns out that fluid mechanics is one of the old and original fields that machine learning grew out of. Fluids has tons of big data, and a lot of computational architectures and algorithms were developed specifically to handle fluids problems. Here is a neat example showing that many of these concepts in optimization and stochastic optimization actually go back pretty far in fluids: in the 1960s and 70s in Berlin, Rechenberg and Schwefel pioneered the use of evolutionary algorithms, or genetic algorithms, to design and optimize better flow systems. In this experiment they have a multi-panel plate where you can change the angle of each panel, and they were trying to find an optimal configuration that would reduce the drag, running an optimization that included some aspects of stochasticity. This is a Galton board, or Galtonbrett, which is basically a Plinko game if you like The Price Is Right: you drop a little ball in, it bounces around, and if you sample from the distribution of where the ball falls, you are effectively sampling from a binomial or Gaussian distribution. They used that to introduce stochasticity into their optimization — a kind of stochastic gradient descent — to find an optimal configuration, and in the end it actually turned out not to be exactly flat, which is really interesting. You should read more about this history; it's pretty neat. There was a debate at the time about how this fit into the larger field of fluid mechanics research — was this fluid mechanics research? — and increasingly we're finding today that these optimizations and flow physics are very tightly linked, so the field of fluid mechanics is intrinsically linked to data-driven optimization.
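As a minimal sketch of the kind of stochastic search Rechenberg and Schwefel pioneered, here is a simple (1+1) evolution strategy in Python: perturb the panel angles with random noise and keep a change only if it lowers the drag. The drag function, panel count, and step size below are illustrative stand-ins, not the actual wind-tunnel experiment.

```python
import numpy as np

# Toy stand-in for the drag of a multi-panel plate as a function of its
# panel angles (radians). The real experiment measured drag in a wind
# tunnel; this quadratic-plus-ripple surrogate is purely illustrative.
def drag(angles):
    return np.sum(angles**2) + 0.1 * np.sum(np.cos(5.0 * angles))

rng = np.random.default_rng(0)
n_panels = 5
best = rng.uniform(-0.5, 0.5, n_panels)   # random initial configuration
best_drag = drag(best)

# (1+1) evolution strategy: perturb all angles with Gaussian "Galton board"
# noise, and keep the new configuration only if it reduces the drag.
sigma = 0.05
for step in range(2000):
    candidate = best + sigma * rng.standard_normal(n_panels)
    d = drag(candidate)
    if d < best_drag:
        best, best_drag = candidate, d

print("optimized angles:", np.round(best, 3))
print("optimized drag  :", round(best_drag, 4))
```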
There's also a really nice connection to the AI winter, in this video of Sir James Lighthill, which I'm just going to play. This is Lighthill essentially holding some of the leading AI researchers of the 1970s on trial. His argument was that machine learning and artificial intelligence had gone through a massive hype cycle — researchers had been hyping and hyping the promise of AI, how it was going to change everything: speech recognition, natural language processing — and that the research had fundamentally failed to deliver on those promises. This trial, and his subsequent report, the Lighthill report, played a pivotal role in the so-called AI winter that resulted in a massive cut in funding for artificial intelligence research for decades. It's kind of ironic — Petros pointed this out, and I think it's really funny — that one of Lighthill's criticisms of AI was that it didn't deliver on natural language processing, and here we are watching a YouTube video that is being automatically transcribed by a computer algorithm. So something has changed in the last twenty or thirty years. A lot of it is based on better algorithms, better computational hardware, and open-source frameworks where you can actually integrate these algorithms and data easily, efficiently, and in a community-oriented, reproducible setting. So a lot has changed since then, and we're able to do a lot more than we could before.

I'm going to walk you through some vignettes highlighting areas of application where I see machine learning being super important in fluids research, and later we'll do deeper dives into these topics. Again, this is very shallow — the tip of the iceberg; there is tons of good work out there that I'm not covering right now.

In my perspective on fluid mechanics in general, with or without machine learning, one of the things that makes fluids interesting and beautiful to humans, and somewhat tractable — that gives us some hope that we can model and control these things — is that patterns exist even in very complicated flows. In this cloud flow over Rishiri Island in Japan, there are big dominant patterns, or coherent structures, that resemble much simpler flows that we can actually model, simulate, and control quickly and robustly. This picture is from a review paper by Kunihiko Taira and co-authors in the AIAA Journal from 2017 — a really nice review that discusses how patterns exist in fluids and how you can extract them. This basic theme of patterns existing is one of the major entry points of machine learning into fluid mechanics; in fact, I would say that a lot of machine learning research is aimed at extracting, characterizing, or utilizing low-dimensional or dominant patterns in big data, like image data or flow field data. A lot of that carries over very directly: if you think of your flow field as a picture, and the time evolution of your flow field as a movie of moving pictures, then a lot of machine learning carries over directly.

One of our go-to algorithms for pulling out patterns in fluid flows is the proper orthogonal decomposition (POD). This is basically a principal components analysis done on flow data: you take a movie of your flow field — here, flow past a cylinder at a low Reynolds number of 100 — subtract off the average flow field, and do a singular value decomposition, or principal component analysis. You find that you can very efficiently represent the fluid flow as the sum of only a few eigen-flow-fields, these large-amplitude principal components. So patterns exist, you can pull out those patterns, and you get a much simpler representation of your flow: instead of needing a million degrees of freedom, if this were a megapixel image, I might only need 10 POD modes and how they vary in time.
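Here is a minimal numpy sketch of the POD recipe just described — subtract the mean flow, take an SVD, keep a few modes. The data matrix below is synthetic, and the array sizes and mode count are illustrative assumptions, not the cylinder-flow dataset from the lecture.

```python
import numpy as np

# Synthetic "movie": each column is one snapshot of a flow field that has
# been flattened into a vector (n_pixels x n_snapshots).
rng = np.random.default_rng(0)
n_pixels, n_snapshots = 4096, 200
t = np.linspace(0, 20, n_snapshots)
mode1 = rng.standard_normal(n_pixels)
mode2 = rng.standard_normal(n_pixels)
X = (np.outer(mode1, np.sin(t)) + 0.5 * np.outer(mode2, np.cos(2 * t))
     + 0.01 * rng.standard_normal((n_pixels, n_snapshots)))

# POD: subtract the mean flow, then take a singular value decomposition.
x_mean = X.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(X - x_mean, full_matrices=False)

# Keep the first r modes; U[:, :r] are the spatial "eigen-flow-fields",
# and S[:r] * Vt[:r] gives how each mode varies in time.
r = 10
X_r = x_mean + U[:, :r] @ np.diag(S[:r]) @ Vt[:r]
rel_err = np.linalg.norm(X - X_r) / np.linalg.norm(X)
print(f"rank-{r} POD reconstruction error: {rel_err:.2e}")
```

For a real flow movie you would simply stack the flattened snapshots as the columns of X; everything else stays the same.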
This idea that there are low-dimensional patterns in fluids allows you to do really powerful things — essentially, take algorithms directly from image processing and apply them to flow data. Here is work by Isabel Scherl, a PhD student working with Brian Polagye and myself. In this example she took that low Reynolds number coherent flow past a cylinder and added a ton of salt-and-pepper corruption, so each pixel has a random chance of being completely corrupted — not additive noise, but essentially blacked out. Using algorithms from robust statistics, modern optimization techniques that leverage patterns in the data, she is able to decompose this flow into the true low-rank component, which is characterized by only a few POD modes, and all of that salt-and-pepper noise. So she is able to split the data into the clean data that we want and the corruption. This is very interesting — you should check out her paper, there are many more examples and details — but we think it is particularly important for experimental flow measurements. A lot of fluid flows are still characterized in laboratories, because our computers just aren't powerful enough to simulate all of the degrees of freedom of flow past a submarine or a jumbo jet, so we do a lot of laboratory experimental testing. Those experimental measurements might be quite noisy, with lots of TV-static-looking noise on top, and Isabel's adaptation of this robust principal components algorithm allows you to clean up that experimental data and get much higher fidelity flow estimates. So that's one area where machine learning can be used: by leveraging patterns in the data as if the data were sequences of images.

Another area where patterns exist is in getting closure models for very complex flows. This is one of my favorite simulations, by my colleagues at KTH in Sweden, who are running a massive boundary layer simulation — an extremely computationally expensive simulation of a turbulent boundary layer. What you see is a massive separation of scales: there are small-scale structures, these horseshoe vortices rising up out of the fluid, but every time you zoom out you see more and more structure and detail across an incredible span of length scales and time scales. This is common in all real-world turbulent fluids — flow over a wing, a boat, a car, in an engine, over a windmill blade — you're going to get turbulent, separated structures over this range of scales. But again, patterns exist: there are large-scale structures that matter for drag, for lift, for mixing, for efficiency. So we might not want to characterize every single degree of freedom in this extremely expensive simulation; instead we might want a reduced-order model for how the big, energy-containing structures work together to change the drag on this boundary layer, for example. This is a classic picture for anyone who has studied turbulence: the Kolmogorov turbulent energy cascade, which basically shows that if you have a turbulent flow, you're going to have this massive scale separation. The x-axis is spatial scale on a log plot, so big vortical structures are on the left and tiny vortical structures are on the right, and real-world fluids might have many orders of magnitude of scales in both space and time. Instead of resolving all of this, which is very expensive — this is why the biggest supercomputers in the world are often built to solve these problems — is it possible to approximate how the small scales affect the big, energy-containing scales that we actually care about, since those are often what matter for lift and drag? This is a massive field called closure modeling.
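For reference, the scaling law that this cascade picture encodes — a standard textbook result, not something derived in the lecture — is the Kolmogorov inertial-range spectrum:

```latex
% Kolmogorov inertial-range energy spectrum: E(k) is the turbulent kinetic
% energy at wavenumber k, \varepsilon the dissipation rate, C a universal constant.
E(k) \;\approx\; C\,\varepsilon^{2/3}\,k^{-5/3}
```

Energy falls off slowly over many decades of wavenumber, which is exactly why resolving every scale is so expensive and why closure models instead try to approximate the net effect of the unresolved small scales on the large, energy-containing ones.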
It's really important — it has been a focus of fluids research for decades — and there's a great review paper by Duraisamy, Iaccarino, and Xiao called "Turbulence Modeling in the Age of Data," where they really dive into how machine learning and data-driven methods are being used to tackle this closure problem. I think this is one of the most exciting areas where machine learning research can make a practical impact on actual, everyday industrial flows. And here is just one example from one paper — one of my favorite papers on the topic of RANS modeling, that is, Reynolds-averaged Navier-Stokes modeling, where you need to close the model to account for small-scale and fast terms that you don't want to resolve fully. What Ling and colleagues did, instead of just using a big deep neural network — which would be the most obvious first thing you might try — was design a custom deep neural network with additional tensor input layers that essentially allowed them to encode, or enforce, prior physical knowledge about fluid flows. Fluid flows are not just unstructured images or movies: we know there is physics — conservation of energy, mass, and momentum — and there are symmetries. What they were able to do in this paper was embed some of those symmetries and invariants directly in their neural network, so that their closure is physical by construction. I think this is a beautiful example of what we all should be thinking about when we apply machine learning to physical systems: how do we bake in prior knowledge so that our models are not just accurate but also physical? Really great paper — you've got to check this out; it's a classic in the field now.

Another area that is really important is super-resolution, a technology that is really big in the image sciences, and of course we can apply it directly to flow fields if we think of them as images. This is data from the Johns Hopkins turbulence database, and if you train on a large collection of flow field images from a movie, then you can take very low-resolution or downsampled versions of the flow field and reconstruct a super-resolved, high-resolution image with all of the flow details. This is closely related to the closure problem: the super-resolution problem again takes low-resolution data, from experiments or maybe from a fast, cheap-and-dirty simulation, and fills in the details by leveraging a large offline training database that gives you the statistics of what flow fields actually look like. It's a very cool emerging field, and lots of work is being done here.
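Here is a minimal PyTorch sketch of the kind of convolutional super-resolution setup described above: a small network maps a coarse field to a 4x finer grid and is trained against matching high-resolution snapshots. The architecture, layer sizes, and placeholder data are illustrative assumptions, not the specific network or training pipeline used on the Johns Hopkins data.

```python
import torch
import torch.nn as nn

# Minimal convolutional super-resolution sketch: map a coarse (low-resolution)
# scalar field, e.g. vorticity, to a 4x finer grid. Layer sizes are illustrative.
class FlowSuperRes(nn.Module):
    def __init__(self, upscale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, upscale**2, kernel_size=3, padding=1),
            nn.PixelShuffle(upscale),   # rearranges channels into a finer grid
        )

    def forward(self, coarse):
        return self.net(coarse)

model = FlowSuperRes()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder training pair: coarse inputs and matching high-resolution
# snapshots (in practice these would come from a turbulence database).
coarse = torch.randn(8, 1, 16, 16)
fine_truth = torch.randn(8, 1, 64, 64)

for epoch in range(5):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(coarse), fine_truth)
    loss.backward()
    optimizer.step()
print("final training loss:", float(loss))
```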
And I'll point out one caveat — this is so important I have to tell you about it. The image I showed, which gives a beautiful, almost perfect reconstruction of the flow field from a low-resolution, downsampled input, is what happens if you run this super-resolution algorithm as an interpolator. What that means is that the dark gray columns are where you train: we randomly pepper training frames throughout the movie as time evolves, and then we test on a frame at one of the blue locations, which sits between data we have already sampled. So this is fundamentally an interpolation task, because that test frame is closely related to the two images right before and right after it in the movie, which are in the training data. But if we do an extrapolation task — and this is much more realistic for many flow problems, like if you wanted to predict the weather, how the flow field around planet Earth will look next month or next year — that's a future extrapolation problem. In that case you'd be using all of the past as your training data and trying to predict the future, and that extrapolation is much, much harder. You can see the true flow fields on top, and as you start to extrapolate farther forward in time, the first snapshot is actually pretty good, but as you go further and further you get terrible degradation in the reconstruction. So, huge caveat: extrapolation is always hard, and you have to add more physics to make it work. It's a much harder problem, and you have to be careful about whether you're interpolating or extrapolating.

We already talked about how POD, the proper orthogonal decomposition, can be used to great advantage to decompose a flow field into its dominant patterns. Decades ago it was shown that POD, or principal components analysis, can be written as a shallow, one-hidden-layer autoencoder neural network, so you can do principal components analysis in a neural network framework. What's really nice is that you can then generalize or expand that network to have many hidden layers and nonlinear activation functions, building deep autoencoders that give you much better signal compression and reduction. The input would be your high-resolution flow field; the middle layer holds the latent variables, the dominant coherent structures; and the decoder uses those few variables to reconstruct an estimate of the full flow field. It's an information bottleneck that compresses and then decompresses your flow field. I'll point out that Milano and Koumoutsakos, in 2002 I believe, were the earliest to apply this concept to fluid mechanics, in a really great paper that was way ahead of its time. People are now, two decades later, trying in full force to apply these autoencoders to fluids, and they had it in 2002 — so go check out that paper.

You can also use these concepts to build reduced-order models. Once you have those patterns extracted, through either POD or an autoencoder, you can get very simple representations: in those low-dimensional coordinate systems, the dynamics might behave very simply — they might live on a parabolic bowl, which is what Bernd Noack showed in 2003. Then you can use data-driven methods like SINDy, the sparse identification of nonlinear dynamics, to build very efficient models for how these modes evolve in time, purely from measurement data. There's a lot of great work on this; Jean-Christophe Loiseau in Paris has been innovating a ton of cool methods for applying SINDy to fluids problems, enforcing known physics and getting really accurate and efficient models of complex flow systems.
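Here is a minimal sketch of the SINDy idea using sequential thresholded least squares on a toy two-dimensional system. The system, candidate library, and threshold below are illustrative choices, not the cylinder-wake model from the papers mentioned above.

```python
import numpy as np

# Toy dynamical system (assumed for illustration): xdot = -0.1x + 2y,
#                                                   ydot = -2x - 0.1y
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))                   # sampled states
Xdot = np.column_stack([-0.1 * X[:, 0] + 2.0 * X[:, 1],
                        -2.0 * X[:, 0] - 0.1 * X[:, 1]])
Xdot += 0.01 * rng.standard_normal(Xdot.shape)           # measurement noise

# Library of candidate terms Theta(X): polynomials up to degree 2.
x, y = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
names = ["1", "x", "y", "x^2", "xy", "y^2"]

# Sequential thresholded least squares: fit, zero out small coefficients,
# refit on the remaining terms, and repeat.
def stlsq(Theta, dX, threshold=0.05, iters=10):
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dX.shape[1]):
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)[0]
    return Xi

Xi = stlsq(Theta, Xdot)
for k, var in enumerate(["xdot", "ydot"]):
    terms = [f"{Xi[i, k]:+.2f}*{names[i]}" for i in range(len(names)) if Xi[i, k] != 0]
    print(var, "=", " ".join(terms))
```

In practice you would build the state matrix from POD or autoencoder coordinates and estimate the derivatives numerically; the open-source PySINDy package implements this workflow and many extensions.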
Maybe one of the last applications: once you have your models and you've extracted your patterns, one of the ultimate goals is to actually control your fluid. You don't just want to understand and model it; you want to manipulate what your flow does. Flow control is not, as people often think, magic, where you magically manipulate what the flow does — that's not at all how it happens. It is a very principled optimization of your flow field and of your control law with some objective in mind, and again those objectives come from the real world: I want to increase my lift, I want to decrease my drag, I might want to increase mixing in a combustor. Those are all optimization problems, so there's nothing magic about this, and these optimization problems can be solved increasingly well with tools from machine learning. I really like this diagram from a paper by Katharina Bieker and Sebastian Peitz, which basically shows that if you have a really complicated flow system and you want to control it, instead of using the full Navier-Stokes equations, which are far too expensive, you can use machine learning to build surrogate models that are accurate and fast enough to use in real time for feedback control. It's a really nice diagram that works for lots of different types of machine learning models and lots of different types of control algorithms; in this case they're using model predictive control.
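To make that surrogate-model control loop concrete, here is a minimal sketch of model predictive control with a cheap surrogate. The linear surrogate model, cost function, and random-shooting optimizer are illustrative assumptions, not the setup from the Bieker and Peitz paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative surrogate of the flow's dominant dynamics: a cheap discrete-time
# linear model x_{k+1} = A x_k + B u_k standing in for an expensive solver.
A = np.array([[0.99, 0.10], [-0.10, 0.99]])
B = np.array([[0.0], [0.1]])

def cost(x, u):
    # Objective: drive the flow state (e.g. drag-related mode amplitudes)
    # to zero while penalizing actuation effort.
    return float(x @ x + 0.01 * u * u)

def mpc_action(x0, horizon=15, n_candidates=300):
    # Random-shooting MPC: sample candidate control sequences, roll each one
    # out through the surrogate model, and keep the first input of the best.
    best_u0, best_cost = 0.0, np.inf
    for _ in range(n_candidates):
        u_seq = rng.uniform(-1.0, 1.0, horizon)
        x, total = x0.copy(), 0.0
        for u in u_seq:
            total += cost(x, u)
            x = A @ x + B[:, 0] * u
        if total < best_cost:
            best_cost, best_u0 = total, u_seq[0]
    return best_u0

# Closed loop: apply only the first control input, then re-plan at each step.
x = np.array([1.0, 0.0])
for k in range(50):
    u = mpc_action(x)
    x = A @ x + B[:, 0] * u
print("final state norm:", np.linalg.norm(x))
```

In the real setting the surrogate would itself be learned from flow data, for example a neural network or a SINDy model, which is where the machine learning enters the control loop.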
One of the last things I want to point out is that much of what I'm talking about in machine learning and in fluid modeling and control is truly inspired by biology. A lot of machine learning came out of wanting to understand how biological systems — brains and nervous systems — work so seamlessly to make decisions and to understand and interact with the world. And in fluid mechanics we have some hope that we can actually interact with, understand, model, and control complex fluids because biology, in some sense, provides a proof by existence that it's possible to interact with these very complex turbulent flow fields. This eagle is performing incredibly accurate — let's say agile and adept — maneuvers without measurements of the full three-dimensional flow field, presumably without knowledge of the Navier-Stokes equations or any advanced degrees in fluid mechanics. The eagle is taking measurements on its wings, and through its experience it has learned to interact with this flow field in an extremely elegant way that goes far beyond what we can do in engineering systems. Even beyond the eagle, I take great inspiration from insects that fly, and lots of us at the University of Washington are actively working on understanding how insects measure and interact with complex flow fields, because they have very simple nervous systems compared to an eagle. All flying insects have strain-sensitive neurons on their wings, and they use this information to make estimation and control decisions about what to do in their flow environment — extremely fascinating learning systems — and we want to know how we can learn from them in our engineering machine learning for flow control.

So, that was a very high-level overview of how machine learning can be used for fluid mechanics. I'll go into more detail and depth on some of those vignettes later, but I really do want to point out that so much work is being done in this field; this is just the tip of the iceberg. In the future you should check and see what has changed — even six months from now, I guarantee there will be new technology that I didn't know about when I was making this video. So do go check out the references and stay tuned, because this is an incredibly interesting but also very important field of machine learning research. Thank you.