>> Phil Fawcett: So good afternoon. I'm Phil Fawcett. I help with technology
transfer out of Microsoft Research worldwide, and I invited Barbara and her
team here from the University of Washington Center for Information Assurance
and Cybersecurity, which she started several years ago. And it's an interesting
area, because anytime you deal with information, how do you make sure it's
credible, how do you make sure it's protected, all sorts of interesting things.
I have a lot of empathy with Barbara because we're both late career-stage PhD
students. I'm also a PhD student at the University of Washington, in
information science. And so it's great to have her and her team here. I hope
that everyone who sees this video will contact Barbara and her team and figure
out a way to contribute, because I think there are a number of really hot
issues and it's a really exciting area to be part of.
And so with that, I'm going to have Barbara start off. She's brought a team of
students here which I think will provide just an awesome overview of the work
they're doing as well as the field itself and where some of the really hard
problems are. So thank you very much for listening. And with that, I'll have
Barbara come up and take it away. Thank you.
>> Barbara Endicott-Popovsky: Thank you. Thank you, Phil. I feel your pain.
I'm so happy to be on the other side of it. Curiosity killed the cat. I don't
think I'll ever be curious about anything again, Phil.
I want to welcome people here this afternoon. We're going to talk about the
Center for Information Assurance and Cybersecurity. And as Phil mentioned,
we're interested in engaging Microsoft in a dialogue about how we at the
University of Washington might partner with Microsoft on a number of initiatives
that we are currently undertaking. I'm hoping that this sparks some thinking on
your parts in terms of ways we might collaborate. Maybe there are other projects
that you think are tangential that will occur to you as you see this presentation
unfold.
We are a center of academic excellence designated by the Department of
Homeland Security and the NSA. This is a little bit about me. I am the director
for the center, but I'm also a research associate professor in the information
school.
My research areas span a number of interests. I'm interested in deception as it
gets deployed in digital forensic tools. I'm also interested in forensic ready
networks. And I'm doing some work with the UBC in library and archival science
looking at how we might learn from librarians about what to store, how to store it,
how to authenticate digital records.
I'm also interested in compliance of all kinds because it's certainly a huge
problem overwhelming organizations. And then I'm looking at how do we define
the emerging sciences of information assurance and digital forensics. They're
not really well defined. We don't know how to talk to one another in the field. So
there's some very interesting work being done at the very beginnings of the
evolution of these fields as sciences. They're certainly well developed in some
ways as practices, but what about the science that's repeatable?
I have been the author of book chapters and publications. I'm in the academic
world so it's publish or perish. And I do currently have a number of PhD students
and master students. You'll meet a few of them this afternoon.
We're going to go over a little bit of background and history on the center to kind
of ground you in the context for what the University of Washington is doing. Also
want to give you a sense of the collaboration and partnerships that we've
engaged, as well as an overview of our current work. And toward that end,
there are two projects that I've selected to talk about specifically. One is the
there are two projects that I've selected to talk about specifically. One is the
information assurance body of knowledge. And it maps to a secure coding
project that we're doing. So we'll go into that in a little bit of depth. And also
what we're doing with Next Generation Honeypots or, in other words, how do we
stay ahead of the bad guys?
I mentioned that we are an NSA/DHS Designated Center of Academic
Excellence, but it's in both education and research. There have been a number
of universities with an education designation, but only a couple of handfuls of
universities have that designation in the research area. And of course the
University of Washington is a tier-one research institution, and we are able to
apply our research might to the cybersecurity problem.
The NSA/DHS program stresses domain partnership. Now, there's a lot of lip
service given to academics and industry and government working together,
easier said than done in many ways because those domains are so radically
different, different cultures, different expectations, different goals, different
motivations. But I like to think that we do this very well. And I think you'll see
that we have worked with Microsoft in various parts of the organization, worked
with a very large airplane development company in the pacific northwest, worked
with a number of areas of government.
The whole program that the NSA and the Department of Homeland Security
began to promote back in 1996 emphasizes domain partnership and how we
grow IA professionals. I'd like to acknowledge that Microsoft was involved in the
early days of establishing this movement. In fact, we had a major colloquium
back in the middle of the last decade, here at the Microsoft campus, where all
of the centers of excellence at that time congregated.
There have been various people who have been the point of contact within
Microsoft for this movement. Arkady Retik, who is here this afternoon, is one.
And David Ladd, whom some of you may know, had occupied that position.
So Microsoft has been an industry that's been pivotal, perhaps behind the scenes
for many of you, in spurring on this movement that's designed to help create IA
professionals.
What was the motivation behind the government getting into designating these
centers? Well, for one, I probably don't have to tell you folks that the threat
spectrum has certainly escalated. The days of the 12-year-old drinking Jolt all
night long causing mayhem are well behind us, and we're looking at very
organized nation-state and organized-crime attack vectors and a very serious
threat that certainly has gotten a lot of people's attention.
And secondly, it's the growing awareness that our critical infrastructure is
interdependent. And this is a national security concern because critical
infrastructure, the electric grid, the power, the water systems, the transportation
system, the banking system, all of these things are so critical to our way of life,
just have the ATM in your local neighborhood go down and you know how
inconvenienced you become.
These systems are now linked to public networks. And I know that there's
policy in place that says that doesn't happen, but I'm here to tell you, based
on research that I've done personally with NIST, that it does indeed happen.
There are work-arounds everywhere. And so while it is not easy to talk about,
intrusion from a public network is a vulnerability, and it does happen.
And further, these critical infrastructure assets are owned and operated
primarily by private companies. This makes it very difficult for the federal
government to really have much say in how they're defended against cyber
attacks or to levy any kind of regulation that mandates improvement.
So the influence has to come from getting people aboard in the universities who
understand what the problems are and are prepared to go out in industry and
solve those particular problems. And then in 1996, when this program was
originated, we saw an emerging series of job categories that were going
unfilled because we didn't know where we were going to find those
professionals; the option was on-the-job training. You grew your own security
professionals. There have been emerging certifications, but nothing that's been
standardized and accepted as a standard that we can universally agree is the
appropriate one.
And there had been, at the time, little happening at the academic level in
universities to train faculty, to train students. And so the
government launched a program to create these centers to meet the demand for
experts and to help equip these experts to protect information infrastructure,
provide recruits for federal government employment and encourage IA research
in critical areas.
So that gives you a sense of where all this happened and how Microsoft
Research got involved a number of years ago and what prompted universities to
begin teaching this subject area.
And here's a list of some universities you might know. It's certainly not
exhaustive. There are over a hundred schools and universities, many of them a
lot smaller than these, that offer educational programs in information
assurance that have been qualified by peer review and oversight by the federal
government.
And of those institutions, as I mentioned, about a couple dozen of them are also
designated as research centers.
This is the way that picture looked on the globe in 2004, when the university
decided to apply to become a center. So this program had been underway about
eight years before UW stepped up. We were too busy making money in the
dot-com bubble to be concerned about all that security stuff, so we didn't
really see a big interest emerging at the universities until about 2004.
And you can see the opportunity that was perceived. Most of those centers of
academic excellence, those red stars, were concentrated in the Beltway. And
that kind of makes sense because it was the Federal Government that was hiring
most of these people. And universities generally train people to work within a
30-mile radius. I don't know if you were aware of that, but that tends to be typical
that students will work and stay near where they graduate. So it was pretty
obvious there was nothing going on in this area in the state of Washington.
There were a couple of centers in Idaho, but the pacific northwest was wide
open, so that spells opportunity.
And that led to my coming aboard at the University of Washington to breathe
life into this idea of a center of excellence. I was fresh with my PhD in
cybersecurity and computer science, and so I was very enthusiastic about what
we could do with the potential at the University of Washington. And this is the
conceptual model for how
our center operates and is managed. For those of you that come from academia,
you're very well aware that there are three things academics are graded on.
They're graded on their research: can you get grant money, do you perform
research that's recognized? On teaching: do you teach these subject areas, do
you teach them well? And on outreach: do you do outreach work to the community
or within your university to disseminate information about your field or make
contributions to your field?
And we do all three areas. We have activities going on in all three. And I'll get
into a little detail in just a moment.
I'd like you to note that at the center of our conceptual model is the Agora. I don't
know how many of you attend the Agora or have heard about it, but it's a unique
institution that isn't really an institution. It's a meeting on a quarterly basis of
security practitioners from all over the United States. They don't even have a
formal charter, so they're an organization that's a non-organization. But
they've been meeting to solve mutual security problems for over a dozen years. I think
it's going on about 15 years now. And I'd like to invite anybody who is listening to
this presentation or in our audience to the next meeting which is taking place on
the University of Washington campus in the Hub Auditorium June 18th in the
morning, it's a Friday. It's a half-day workshop. It's free. Plus all the free
donuts and coffee you can consume, which generally attracts students. But it's
nice to be able to go to a free conference. And there are usually three presentations on
current interest matters done by practitioners as well as academics who might be
doing research in this area. And so the next one again is coming up this June.
And there will be another one in September. But get aboard the Agora freight
train.
The Agora is a source for me of practitioner knowledge of what's happening on
the firing line. The problem with this field is that us pointy-headed academics
are often accused of being out of touch with reality. And it can happen very quickly
because you step away from the front lines in this cybersecurity space and you
quickly get behind in your knowledge. And so to stay on top of things we reach
into that community and make ample use of guest lectures in our academic
offerings, and in our research activities if an opportunity arises. So we work
closely with practice.
In any emerging field, there's always a huge gap between the academically
defined science and the practitioner community. And it exists for a long time,
and gradually we move towards convergence. Although in something as dynamic as
our field, I don't know if we'll ever arrive at the same space. So there's always
going to be that tension. We preserve that tension in the classroom. Yes, there's
a question.
>>: Do you have a link where we get information about the Agora program?
>> Barbara Endicott-Popovsky: Actually they don't have a website. You can find
out about it through me, and I'll put you on the mailing list. Very happy to do that.
Endicott, E-n-d-i-c-o-t-t, like Endicott, Washington, no relation, wish there was, at
uw.edu. Okay?
So moving on to look at our academic offerings, this might be of interest to you.
You'll see this pedagogical model a little while later in another presentation. But
we offer classes, workshops, certificates, and concentrations in master's
degrees in various departments on campus. So there's quite a wide range of
offerings in information assurance and cybersecurity. And we have a growing
number of professors who have taken an interest in this space. You'll see some
names in a little bit.
I'd like to point out in this particular slide that we are student centered in our
approach to designing curriculum. The goal is to take people on the front end
and turn them into IA practitioners, going through a process that's unique
every year. I revise curriculum every year because it is a fast-paced field. I make sure
things are refreshed. The goal is to turn students into experts.
This is a high-level description of two certificates that the center has
created and is offering, and a shameless plug: the certificate programs start
in autumn. We
will be recruiting our seventh cohort.
Here is a list of the three classes that make up each certificate. The first,
information systems security, is a hands-on kind of course that gives you the
basics of securing an environment from a technical, hands-on perspective. You
can look at the courses that are included, foundations of information security,
tools and technologies and applied solutions and emerging trends.
This course is offered downtown. We have labs down there. So that is our entry
level certificate or our first certificate offering.
And then for people who want to step up to responsibility for security, we have
the information security and risk management certificate, which recognizes you
will never have a one-hundred-percent secure system. You're always making
tradeoffs. So the courses include information security and risk management in
context. So we're looking at various industries, various conditions of a
company: is there a lot of mobile workforce, what issues are going on?
So we look at the problem from a variety of perspectives the first quarter, and
then we build an information risk management toolkit the second. And then I
bring in a practitioner who has live cases from his practice, real cases, they
change every year, and we bring them into the classroom and allow students to
solve them. The goal being that when they walk out that door, they should feel
empowered to take on problems.
That particular class -- the series of classes -- is offered both online and in
the Seattle classroom. You also can take it for credit and have those credits
applied towards a master's degree if you're so inclined. End of shameless plug.
In terms of outreach, here are some downloadables that might be of interest to
you. We have a lecture series called the unintended consequences of the
information age. You might recognize on the right-hand side there Ed Lazowska,
who chaired the first lecture series that we put on, on privacy, reconciling reality. Rob
McKenna was a keynote for the second on privacy versus free speech. And then
we had a wide ranging series of lectures that captured what we're doing with our
infrastructure. It's online, and it's vulnerable.
These are available for download. They're MP3 audio files as well as video files.
Just follow that link and store them on your computer for later viewing.
This is an interesting project that we've had that's been a wonderful outreach
opportunity that we have shared with the Microsoft Corporation. We have put on
the Pacific Rim Collegiate Cyber Defense competition every year now for the last
six years. It became a full-fledged competition three years ago. And Microsoft
graciously hosted it here on campus. We've now actually found a home at a
university that's willing to give up their labs because we can get in there over
spring break. But Microsoft remains an ardent supporter. And we are very
grateful. If you'd like to see the documentary that describes this, the link is below
that picture. And oh, I hope you can see this. We want to say thank you to Kim
Hargraves and to Kevin Sullivan who have been real supporters and champions
of this program. Thank you more than I can say. It's a huge learning
experience for students. They spend the weekend defending, in teams, similar
networks that are seeded with flaws. And we bring in a red team from the
outside that attacks them mercilessly, and their goal is to stay alive, keep
services up and running, and execute those pesky administrative chores that
bosses always make you do while you're up to your neck in alligators.
We also have the ISC/RMI conference that's now become an institution. We're in
our fourth year, I think, coming up this September, where we look at compliance
and risk management impacts from a variety of perspectives. It's here where
several of us began to work on the crossover between archiving and digital
forensics.
Now, in the research area, this is how we approach research. This is a
conceptual model similar to the model I showed you for how we develop
curriculum. This is how we think about doing research and disseminating
research at the university.
If you're familiar with how academics operate, they choose their own direction.
And so unless you can align with their interests, often it's very difficult to get their
attention. However, one thing that a center can do is offer dissemination for
what already exists. And so for those projects that are already underway in
the security space, we do exactly that through workshops and conferences.
With external partners we have a directed research capability where problems
are brought to us and we have the ability to assemble the talent that will address
those problems. This is a really exciting opportunity for us to partner with an
organization such as Microsoft. And I'm very interested in exploring other
opportunities we might have working with you along those lines.
Note from those gray boxes at the bottom that we are able to work across a wide
range of problems, including policy issues, procedural issues, technology issues,
and educational and awareness issues because we have partners at the
university who have agreed to participate in collaborations from a variety of
schools. This is our view of information assurance in the context of an
organization. This is how we see the problem. We don't just focus on the
mechanisms or the technologies. We're thinking about how it gets deployed in
actuality because in the end, unfortunately, it's the people that can circumvent
even the best technologies.
We have an active partnership with Pacific Northwest National Laboratory
through a memorandum of understanding that's been signed between our two
organizations. The national lab in Richland has stood up a capability in
information assurance.
And Deborah Frincke is heading up that activity as chief scientist of the
cybersecurity directorate.
There's a list of people who work for her, but that's by no means exhaustive.
This gives us incredible capability to bring to a problem. And this allows us, going
back to that previous model, to look at research problems across a spectrum
from high-level, pure research, all the way to applied problems. Certainly the
Richland folks are very oriented towards applied research.
These are the kinds of academic researchers and practitioners who have either
worked with us in the past or who have agreed to work with us. Radha
Poovendran is my go-to person for academic research collaboration. Whenever we
have a problem we're thinking about taking on, I sit down with Radha. He has
the network security lab at UW, and he's doing amazing work in wireless
security and in RFID. I'm the individual at the I school who has this on their
agenda.
And then in the computer science and engineering department, some of these
folks may be familiar to you. They've indicated an interest in being available
for interesting projects. Hank Levy and Steve Gribble have done interesting
work quantifying and getting their arms around spam.
Yoshi Kohno is scary. I don't know if you've seen his latest -- his latest
awareness effort where he's waking up the public to the challenges of
cybersecurity. He ended up showing that you can hack into a car. Did you see
that? He just presented his paper at IEEE Security and Privacy, and I'm telling
you, I was seeing the IRB all over us. He claims he got the guy who was driving
the car to drive at a reasonable speed when they hacked into it and made it
crash. But the guy had to sign a waiver to say he wouldn't sue. I don't know.
I mean Yoshi pushes the envelope. When he first came to the university he
hacked into a pacemaker, and that really got all the people in the medical school
on edge. So his whole point is we are embracing these technologies without
thinking about the unintended consequences. And he's made some statements.
Sam Chung from Tacoma is working with me on the secure coding project
integrating secure coding practices into entry level programming courses.
Neal Koblitz is a find. Neal is the co-inventor of elliptic curve cryptography.
He's a rather unassuming, quiet person. A marvelous lecturer, and certainly
available for collaboration on projects.
And Jane Winn is somebody from the law school who is looking at trust issues
along the supply chain. This is just a sample of the people who have raised their
hands and said, you know, if you have an interesting problem let's talk. Maybe
there's something we can do. So we're talking about multidisciplinary research
-- really cross-domain, multidisciplinary work.
On the practitioner research side, likewise some of those names might be
familiar to you. Mike Simon is a go-to guy for me when we have directed
research ideas that I want to vet.
Kirk Bailey is the CISO at the University of Washington who is doing some
amazing things in inventing the CISO position.
John Christiansen is doing some things in IT law. He's the nation's expert on
HIPAA, and he's taken on a new role with Rob McKenna, now on a part-time
basis, helping define what the Obamacare legal structure is going to look like
when it gets implemented.
Ilanko Subramanian is a gift from Microsoft. We've appreciated working with
Ilanko from the trustworthy computing department, and he's helped us craft a
compliance model that we're applying to drug trial outsourcing to China.
You may know Dave Dittrich who has done some work with DDOS who was
involved in the very early stages of helping to create this center. And so on down
the list.
There's a number of people. Joe Simpson is here today who is going to talk
about his contributions in the systems engineering area.
These are the funded projects currently underway -- the primary ones that
we're focused on -- as well as some white papers that we're circulating on
things that we are looking at getting into.
The Next Generation Honeypot project you're going to hear about in just a
moment. It's challenging the conventional design that has grown up out of
practice for these honeypot lures, and we're challenging some of the
assumptions that have been made in these designs.
I've mentioned the Secure Coding Project. This summer we're running a
workshop for local teachers. We will be working with Microsoft. There's a
number of people I've listed here who have offered to be advisors: Steve Lipner,
Mike Howard, Arkady Retik, Ryan Hefferman. And I just had another person,
David LeBlanc, indicate that he would join us. They'll have the opportunity to
vet our materials. We are mapping our materials to all the good work that
Microsoft has done. You're going to see a little bit in a minute how that looks.
But we want to acknowledge that work and use it and integrate it into the
classroom. And do it from the get-go. What's not happening is security in
those very beginning classes in programming, where you could be offering some
security concepts. These are largely at the community college level.
But we have an opportunity to change that paradigm here in the state of
Washington. And I mentioned that we have an NSF grant that's allowing us to do
that.
And I mentioned the China compliance project that we wrapped up and delivered
a model to the medical school.
On the white papers, the things that we're looking at, we want to study the Cyber
Warrior. That research agenda is just evolving, but it's very fascinating and it
needs to be examined.
We also are looking at virtual world security. And if there's time, I'm going
to play a little two-minute YouTube video that was just released to the White
House yesterday. It's a novel way of looking at security awareness training.
We are using systems engineering methods to create an information assurance
body of knowledge. We're dealing with IPsec interoperability and trust along
the supply chain. These are areas that we're exploring, that we're looking at
expanding to add to
our research agenda.
Now, what I'd like to do is turn the meeting over to the two projects I mentioned
that we would discuss, to give you a sense of the kind of work that we're doing.
This particular project I call it the Information Assurance Body of Knowledge
Using Systems Engineering to Define the Body of Knowledge, the BOK.
There are three people here who represent that team. Joe Simpson -- to give
you an idea of Joe's background -- has had long experience and interest
focused in the area of complex systems, system science, systems thinking and
systems management. Joe has professional experience in domain areas that
include environmental restoration, information systems, system security,
aerospace and defense. And his current activities and research interests are
associated with complex systems modeling, evolutionary programming, the
development of systems engineering language and organizational assessment
and improvement with a strong focus on adaptive security systems.
He's accompanied by two other team members, Mary Simpson, who has broad
interests. She likes music and languages, but has developed an expertise in
dealing with system domains, systems engineering, systems science, and
complexity. She was most recently with Battelle Memorial Institute. She worked
as chief systems engineer with [inaudible]. And while she was there, she did a
number of projects, including writing the Hanford Strategic Plan and Mission
Direction document. And working with executive managers in places such as the
Boeing Company.
She's also served in multiple capacities for the International Council on Systems
Engineering, called INCOSE. And she chaired the corporate advisory board and
was instrumental in expanding the membership. So she's a leader in the
systems engineering movement.
And also there is Dr. Viatcheslav M. Popovsky. If you folks wouldn't mind just
standing up. There's Dr. Popovsky and Mary Simpson. And Joe's going to come
up here in a minute.
Dr. Popovsky is an affiliate professor in the department of education at the
University of Idaho's Center for Excellence. And he has extensive research
experience in pedagogy, particularly the application of the theory of pedagogical
systems in higher education. He's a former associate professor in chief of
pedagogical practice with the St. Petersburg -- that's in Russia, not Florida --
Lesgaft State Physical Culture Academy, and is a lecturer, researcher, coach,
and consultant for elite sports teams. He's published over 70 publications in his field
throughout Russia, Europe and the US.
Now, the theme that's common to all three of these people is their interest in
general systems theory, the behavior of systems at that archetype level in the
abstract. And we're looking at how to incorporate principles from systems
thinking, systems engineering into informing us about the pedagogy of writing
code.
So, Joe, I'm going to turn this over to you.
>> Joseph J. Simpson: Thank you. Is my mic on? Can you hear me? Is
everyone awake? Okay. There's some heads nodding. All right. It's a pleasure
to be here today and to speak a little bit about research interest in complex
adaptive systems as they apply to information systems security. And I'm going to
work through these slides a little faster than I planned so that I can give the other
team a little more time. So we'll just start by -- see if I can use this.
Okay. So system security basically is an emergent property of a system. It's
not necessarily a characteristic of the system out of context. So if you take
a system and put it into a context, it's going to have an emergent property:
whether or not it's secure.
Value is also determined in the system context. And so most
people would think that there's some kind of relationship between security and
value. Especially if you're in an organization and you're responsible for
protecting the assets of that organization and using secure systems to protect
those assets.
And so one of the things that you've really got to ask yourself when you have that
type of position or type of job, you're going to say, well, how many resources
should I actually assign to the security component? And how do I know that
when I do these things within my systems development, I develop this kind of
software, these type of operations, this type of training, this with my people that
in this particular context I'm going to have a secure system? So that's one of the
very interesting areas of research that we're working on.
But we need some ways to talk about this across large groups of people. So
what we've done is we've developed this abstract model. And I'm
going to talk about four levels of models when I go through this particular talk.
I'm going to keep going back to the top and working back down again. But this
one is called the asset protection cube or asset protection model.
What it does is it's designed for the cognitive reasoning ability of the
average human being, who is able to understand these three things that are
basically the components of asset protection: we have a threat, we have a
system, and we have a target.
At the highest level we look at this at some level of abstraction. And then we
drill down one level. We drill down one and then we do it again, and we say,
okay, now we'll take a look at the system a little bit, then we'll take a look
at the threat a little bit, and we'll take a look at the target a little bit.
And we will, in a very structured way, go through and add more and more
complexity to these areas, but the target and focus in these cube areas always
make it so that people can reason about them. The system cube is set there so
that communities like the systems engineering community can have a focus area
for their interests. The target cube in this case -- the information assurance
cube -- is set there as a focus area for the information assurance community.
And the threat cube is set there as a focus for the justice, legal, and
intelligence communities. So these folks whose professions normally allow them
to have points of focus can structure their areas and then be able to drill on
down to more and more levels of detail.
So as I said, the asset protection model basically is a structured framework
for security topics. Very high level. It provides a focus for the affected
professional groups, so these professional groups can start getting some
common understanding. The key aspect of this is that it's totally independent
of technology and totally independent of organizational type. And so it's able
to be stable.
The information security cube itself, when you look at it, is based on a model
that's been out there about 20-plus years in the information assurance area.
And you know, in the last 20, 25 years the technology, organizations, threats,
everything have changed. But this particular cube, called the McCumber Cube,
is still valuable for information assurance people to talk about their area of
expertise.
So the first thing we're doing is just trying to say, okay, we're going to
have kind of a standard language, a common language that will stand the test
of time, stand the change of technology, stand the change of organizations.
All right. Then after we do this, now we have this kind of language and some
focus areas, and we have the organizations. We have these folks that actually
have to go and decide whether or not they're protecting their assets
correctly.
And so we say, okay, how are we going to do that? We're proposing a systems
security capability assessment model. A capability assessment model is
different from a capability maturity model; it has a different focus. This one
is based on the systems engineering capability assessment model that was
developed by the International Council on Systems Engineering in the middle
'90s. And what it does is it allows a group of people to look at an
organization and decide whether or not they meet the criteria for protecting
their assets well enough. And if they do -- you go through and you look at
this, you do your assessment, you say, oh, yes, we're doing well enough,
everything's cool, we're not going to spend any more resources -- then you
kind of move on. But you also have a baseline. You've just now baselined all
your security operations, okay?
Well, let's say, as a matter of fact, that that doesn't happen and you need to
do something. Well, then you need to evaluate the risk. So the system security
capability assessment model really allows you to tie risk and value and
context. And it does this in a very structured way. Now that we've got this
high-level asset protection model, which is really generic, and we drill down
into the organizational piece, now we're going to be able to tie risk and
value in any specific context.
Then we have a common view of security organizational management processes.
You notice the organizational support and management support processes are the
same for any domain. If we abstract them, it would be the same type of
management process in a hospital, for HIPAA, as it would be for the
government. And essentially you're going to do the same types of things;
however, you're going to have different information control systems.
So we need an adaptable security assessment structure for specific domain
activities. Once you've gone through, you've looked at your model, you've
analyzed your organization, and you decide, okay, now we're really going to do
something: we think we need a system to improve our security, some type of
system to improve the way that we're protecting our assets.
Then we would go ahead and we would start applying the systems perspective to
start making decisions about the operational effectiveness and the operational
suitability of anything that we would do there. Let's say, for example, a
mission function. Let's say we have a city that is going to deploy a 911 type
of emergency response. So what they have to do is they have to be able to have
communications -- people call in -- they have to be able to find out where
callers are, log their calls, record them in some way, then dispatch people to
help. So that's kind of the mission function.
And so then we would design and deploy systems to support the mission function
of the organization of the city. So we would get telephones, we would get radios,
we would basically get IT systems that had host and had back-end servers and
so forth. And all the software code and all of the communication could be
properly put together, and then those system functions would support directly the
mission function.
However, then we start looking at operational suitability. Let's say our
system works perfectly, it supports our mission exactly the way it's supposed
to, but someone took the SQL server that's in the back end and left the ports
open to the Internet, okay? Well, is that operationally suitable? Are we
letting threats in, right?
Or let's say I took the hardware that's supporting the system and I put it out
in an uncovered area and I'm giving it electricity with a generator, basically,
that gives it varying voltage. Well, is that operationally suitable? And in
many cases what we're looking at is that our information assurance issues and
our threat vectors are more in the operational suitability area than they are
in our system function area. We're very well prepared, mostly, within the
systems and software engineering requirements development area to really know
how to ask about functionality. And we're really well prepared to document our
systems with software engineering techniques and to run our life cycles to
produce certain functionality.
We're not really as well prepared within our operational suitability area. Those
are areas I think need some more focused research that helps us really balance
that type of risk.
Okay. So this is the security adaptive response potential, which is a system
security metric based on the analytic hierarchy process, which is a technique
for normalizing values across an organization. I don't know if anybody's
familiar with it. But essentially what you can do with the four categories of
this metric -- organizational, technical, operational, and content -- these
are the things we believe are important for information assurance and security
and asset protection in any specific organization.
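A minimal sketch of that analytic hierarchy process step, assuming made-up
pairwise judgments among the four categories; the real metric's weights would
come from an organization's own comparisons:

    import numpy as np

    # Hypothetical pairwise judgments (Saaty 1-9 scale) among the four
    # metric categories; A[i][j] says how much more important category i
    # is than category j. These numbers are made up for illustration.
    categories = ["organizational", "technical", "operational", "content"]
    A = np.array([
        [1.0, 3.0, 2.0, 4.0],
        [1/3, 1.0, 1/2, 2.0],
        [1/2, 2.0, 1.0, 3.0],
        [1/4, 1/2, 1/3, 1.0],
    ])

    # Standard AHP shortcut: normalize each column, then average the rows
    # to approximate the principal eigenvector of priority weights.
    weights = (A / A.sum(axis=0)).mean(axis=1)

    for name, w in zip(categories, weights):
        print(f"{name:>14}: {w:.3f}")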
So when we look at secure code, we say, okay, we're looking at what
contribution to security secure coding techniques would make. We took Lipner
and Howard, and I just kind of highlighted the areas that are common. So the
system functions and the operational effectiveness are probably the areas
marked in green, and your operational suitability is probably the areas that
are unmarked.
And so we're looking at some techniques where we can actually go in and start
saying okay, what things can we really address by doing things better with code,
what things can we really address by doing things better with deployment, and
are there some kind of categories of certain types of systems that if we -- if we
had a menu pick list that we matched the deployment context, how would that
work? Would that be valuable, would that make things more secure?
Okay. So the systems perspective basically provides a framework for structured
decision analysis when you're thinking about risk, operational effectiveness,
and operational suitability, because you can decide to do very risky things.
But I believe the most important responsibility of an information assurance
professional is to have informed the decision maker exactly what those risks
are.
I think most decision makers, if they have cost effective systems available to
them, they are going to avoid any real risk to their assets by applying cost
effective barriers and cost effective solutions if they're available. So it provides a
balance between the architecture function and risk assessment -- yes?
>>: What do you do about unintentional dependencies between low value
systems and high value systems? Like somebody might use the same
administrator password on both of them. One's well protected, the other's not.
Somebody gets the less well protected low value asset, then it just happens to
have a password that can be cracked and then away we go.
>> Joseph J. Simpson: Yeah.
>>: So a lot of times it's difficult to identify the risk because there are
these nearly invisible linkages that you may not be able to analyze easily.
>> Joseph J. Simpson: Exactly. So, there are architectures at different
levels, all right? So one: someone did the password wrong -- training, okay?
Basically, if I've designed that organization and I have the folks and I
haven't done controls by separation of processes or separation of
responsibilities, you know, I've allowed this type of password error to
happen. That's the first thing.
The second thing is when you go in and you look at the architecture of the
value of all your assets. In this particular case, did the organization value
each piece of information? You said this information is more valuable than
that, and I'm going to protect this more than that. That's kind of what I'm
getting from what you are saying. So with those interfaces, right, in that
context we now have at least two layers of value that we're talking about.
And I don't believe that we do those well. What I'm trying to say is we have a
system function, which is administration, right -- the administrative
password. We do these functions very, very well. But because of the value
involved, the attacker will take more effort; they will do whatever they have
to do to get to that. And we don't have the menu pick list of architectures or
information protection categories within these systems that are
self-reinforcing. And that's where the complexity comes in. And that's where
I'm very interested in that type of research. But to even talk about that --
and I'm struggling right now with the models within my own mind -- you need to
have a conceptual model topology that all the people involved can actually
understand: management, architecture, information, value folks. Those are, I
believe, the issues, and the solutions to those types of issues have really
little to do with software. Because every piece of software employed in the
system could be perfect, right? And if we put more money into making more
perfect software, that still is not going to help those things. So how do we
draw the line, right? Does that answer your question? Okay.
So the operational suitability and life cycle cost considerations basically
balance the decisions. And this is kind of an eye chart, but basically what it
says is, when we have our secure software education system down here, we are
taking the components from the models that we've developed, and we have a slot
for those: take the system architecture, put it over there. As you can see,
the threat -- the different threats and the different values -- goes in one
slot for our exposure cube, right? The system specification that comes out of
here talks about how these things are hooked together. And then the system
architecture, when we're trading them off, deciding which we're going to pay
for, that's the other column. So these are the types of triplets of things
that you really want to consider when you're thinking about how to analyze and
deploy secure patterns. And so we would actually start developing patterns,
low-level application patterns, at this level based on these higher-level
models that we're establishing.
This particular model really allows the communities involved to start
addressing things that are important to them. And once we put them into these
types of formats, any computer scientist in the room can tell that this is
just a recursive type of pattern that goes down and basically comes out with a
lattice. And so we have an ontology lattice developed here, which we can then
put into an artificial intelligence type of activity. So there's a lot of
software and things that would be behind this once these get fairly large,
because just at this level there's a little less than 20,000 interconnections
between all these cubes.
And so the relationships between these things are what we would then support
the analysis of, once the humans that are focused in these areas have put in
their information. So that's kind of the way that works.
This is the pedagogy model that [inaudible] talked about before, and it allows
us to design a structured delivery of these concepts and patterns once they
have been developed. And of course this is what I've just talked about now: a
series of patterns that we would start developing that would fit in any case.
And we can take those and refresh them as you go along, from the professional
organizations and the people that are affected, because the context is
changing all the time. So we're going to have this type of delivery mechanism,
which is reprioritized: once certain types of threats are taken care of and
certain types of exposures are removed, you don't see those anymore. They fall
down in priority, and then we probably won't emphasize those as much; we'll
get newer ones.
So essentially, secure coding techniques are important. Secure system
deployment is important. Cost-effective security is very important, because
everybody's going to make the decision: my asset is worth X; if it costs me 2X
to protect it, I'm not going to protect it. No one spends more money
protecting an asset than the asset's worth. The software technology is only
part of the solution, but it's a very important part. The operations processes
are only part of the solution. But cost-effective secure systems, I believe,
are the solution, and we will be able to produce and deliver those as a
community of practitioners once we are able to precisely talk with each other
about the structures necessary to do that.
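A minimal sketch of that cost-effectiveness rule, with entirely hypothetical
figures:

    # Toy version of the rule above: apply a control only while it costs
    # less than the loss it is expected to prevent. All figures made up.
    asset_value = 100_000        # what the asset is worth (X)
    breach_probability = 0.05    # chance of losing it without the control
    control_cost = 2_000         # cost of the protective control

    expected_loss = asset_value * breach_probability
    if control_cost < expected_loss:
        print(f"Protect: ${control_cost} < expected loss ${expected_loss:,.0f}")
    else:
        print(f"Accept the risk: ${control_cost} >= ${expected_loss:,.0f}")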
So, questions, comments? I think maybe the next presenter?
>> Barbara Endicott-Popovsky: Yeah, I think we'll go ahead. I do want to
comment that this is a work in progress. So there are a lot of questions on
the table. But you see what we're wrestling with -- exactly what you were
talking about, David.
And what we could use as a model -- if anyone was around when the Carnegie
Mellon CMM, the capability maturity model, was being developed 20-some years
ago -- that's what this is. And I think it's going to
take a partnership of industry, academics, and government folks to really start
thinking this through and developing the model so that it's robust and takes into
account a lot of different scenarios.
It's also capable, once we develop it, of being simplified. So cognitive
simplicity is the ultimate goal, although right now it looks pretty complex.
So next I would like to introduce our Next Generation Honeypot project, or
staying ahead of the bad guys. Julia Narvaez is project lead. This is a
project that we're doing with Pacific Northwest National Laboratory. Julia is
a graduate student at UW and is graduating this June, which is really cool. So
I'm glad we were able to get her here today.
She's a systems engineer. She's a developer herself. She has her own company
and has a degree in project management. She has extensive experience in the
software development industry and lifecycle application. And her research
interest has been this kind of thing for the last couple of years.
She's also going to be joined eventually by Ashish Malviya who is a first-year
graduate student at the I school who is currently working as a research associate
at the center. He has three years of experience in IT infrastructure. He's my
go-to guy whenever anything breaks. He's amazing.
His past professional experience involves working with financial firms and
managing and troubleshooting their data centers. And his research interest is
information security. So I'm going to turn the podium over to Julia and to Ashish.
>> Julia Narvaez: Thank you, Barbara.
Good afternoon. So one of the areas of interest at the center is the study of
cyber attacks, specifically client-side attacks. And this led to this project,
which is an assessment of virtualization as a sensor technique. This project
has been conducted in collaboration with the Pacific Northwest National
Laboratory.
And participants include Ashish; Douglas Nordwall from Pacific Northwest
National Laboratory; Chiraag Aval, who developed a honeypot that we call bare
metal; and Christian Seifert from Microsoft, who brings all the experience of
the honeynet alliance and has been an advisor from the beginning of the
process. And Barbara, of course.
And we created the Pacific Northwest honeynet project last November. So we
are active members of the project.
Today we are going to talk about some background information, what the
problem of study is, the objectives of our research, the approach and the
conceptual methodology of our project and the architecture, how we are going to
conduct the analysis, future work. And there will be I think maybe time for some
questions.
So how many of you are familiar with the concept of honeypots? Okay. So we
know honeypots are security resources, whose value is the information they
provide, and they are made to be compromised. And they are deceptive by
nature. And they frequently work in virtual environments.
There are many types of honeypots, and the type of honeypot that is deployed
depends on the type of attack that is going to be studied.
In our case, we are interested in client-side attacks. And we know malware
development is exploding, and the growth is attributed to the
professionalization of malware development.
So there is a particular type of attack, the client-side attack, that is
conducted against client applications such as a Web browser. The Web browser
sends a request to a malicious Web server; the Web server replies with a
malicious page that launches an attack against the browser. If the attack is
successful, the malicious Web server pushes malicious code that installs
itself on the client machine.
In order to know how the malicious attack is conducted, we use honeypots. And
this leads to a problem, because security researchers rely on virtualization
-- on virtual technology -- to install honeypots, to capture malware, and to
study the malware.
But we know virtualization is detectable. And we also know that malware is
becoming increasingly sophisticated.
So malware is able to detect if it's running in a virtual environment, and
when that happens, it's capable of hiding its malicious intent. And we have
seen it. And that is a problem for honeypots such as Capture-HPC, which needs
to run in a virtual environment and can therefore be detected.
So this leads to our research questions: given the sophistication of malware,
how adequate are the current designs of honeypots? And how do malware attacks
behave in the presence of virtualization and when there is no virtualization?
So in order to answer those questions, we have four objectives in our project.
One is to develop a conceptual framework of deception in which we evaluate the
honeypot design. The second is to propose a methodology for assessing design
of honeypots. The third one is to test the honeypot design, and we propose a
new architecture. And the fourth one is to compare the detection capabilities of
honeypots running in different environments.
So for objective one, we developed a deception framework for honeypot design.
I said that honeypots are deceptive by nature, and we adopt the conventional
definition of deception: to make somebody think that something is good or true
when it's actually bad or false.
In the design of honeypots, there are three main challenges: the deception
problem, the counterdeception problem, and the counter-counterdeception
problem. The deception problem is how to design honeypots that look like real
systems.
The counterdeception problem is: what techniques are used to identify that a
system is a honeypot? Because if a honeypot doesn't look like a real system
and the attacker is able to identify that it's not a real system but a
honeypot, the information that the honeypot is capturing might not be
reliable. It might be misinformation.
And the third, counter-counterdeception, is how to design honeypots that make
the attacker think that they are real systems. For time constraints, I'm just
going to focus on the counter-counterdeception problem.
So Bell and Whaley propose a theory of deception. They say that even though
deception is done in an intuitive manner, without thinking about how we do it,
deception planning does follow a process, which is the picture in the
deception planning loop. So we use it for a purpose, and that purpose is
supporting a deception goal, which in turn supports a strategic goal.
In order to conduct the deception, there are several characteristics that the
deception planner is going to use, and the deception planner can decide what
to show, what to hide, and how to conduct the deception. So the planner
selects a ruse. The ruse creates an illusion, and if the illusion is accepted,
the deception goal is accomplished. Accomplishing the deception goal fulfills
the purpose of the deception, and that supports the strategic goal. Relating
this to honeypots: there are many kinds of honeypots, and each type of
honeypot can be deployed depending on the type of attack that we want to
study.
But there was not a taxonomy to classify honeypots, and that delays the study
of honeypots. So some researchers, including Christian Seifert, proposed a
taxonomy of honeypots. The taxonomy has six areas of study: interaction level,
data capture, containment, distribution appearance, communication interface,
and role in multi-tier architecture. These areas facilitate the study of
honeypots.
So we believe that the systematic application of the deception theory,
combined with the taxonomy of honeypots, helps the deception planner -- which
is the researcher -- identify the characteristics and the variables that are
going to be considered every time a research project is planned or a deception
is being planned. So going back to our example: our deception goal here is to
make the attacker think that the honeypot is an actual system, and attack it.
According to the taxonomy of honeypots, we select the honeypot that we need.
We say, for example, that the honeypots we use are high-interaction client
honeypots that don't avoid attacks; they just capture the information of the
attacks -- they capture information about any events or any attacks. The ruse
is to give the illusion that a user is accessing a website, or a client is
sending a request to a server. And if that server accepts the illusion --
thinks that it's really a user sending a request to the server -- the server
is going to launch an attack against the browser, and the honeypot is going to
be able to capture the information of the attack, and that fulfills the
deception goal.
So we are actually applying this methodology, and that way we were able to
control what variables we want to use during our research. The variables that
we are controlling include the operating system, the type of honeypot, the
version of the browser, and the environment in which the honeypots are
deployed. We are deploying honeypots in two environments: one that uses
virtualization and one that does not use virtualization.
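A minimal sketch of that controlled design, with hypothetical factor levels
standing in for the ones the team actually chose, enumerates one run per
combination of variables:

    from itertools import product

    # Illustrative factor levels standing in for the study's real ones.
    factors = {
        "os": ["Windows XP"],
        "honeypot": ["Capture-HPC"],
        "browser": ["IE 6", "IE 7"],
        "environment": ["virtual", "bare-metal"],
    }

    # Every combination of levels is one experimental condition, so each
    # malicious URL can be visited once per condition and compared.
    for run, combo in enumerate(product(*factors.values()), start=1):
        print(f"run {run}:", dict(zip(factors, combo)))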
This leads to the second objective of our research, in which we compare the
detection capabilities of honeypots in the two different environments -- one
that uses virtualization and one that does not -- which Ashish is going to
explain.
>> Ashish Malviya: Thank you, Julia. So Julia already talked about the
background behind the research in honeynets. I would like to do a quick walk
through of our objectives for the research. We want to compare the detection
of malware in two environments: one is virtual, and the other is non-virtual,
that is, physical machines.
After that comparative analysis, we want to test the statistical significance
of the discrepancies between the two environments. And we want to analyze the
techniques used by malware developers to detect virtual machines and create
trouble for researchers.
With this research project, we want to inform and educate the community about
the work that we are doing in honeynet research and to develop the
requirements for next-generation honeypots.
So these are the specific objectives that we have covered this year. We have
developed bare-metal honeypots, in which we are running honeypots on actual
machines and performing an analysis of malware behavior on those machines.
>> Barbara Endicott-Popovsky: This is the first time anybody has ever built a
bare metal.
>> Ashish Malviya: Bare-metal honeypot. So [inaudible], an alumnus of the
Center for Information Assurance and Cybersecurity, developed this honeypot,
and we are successfully performing the research on this setup. And --
>>: Do you have [inaudible] virtualization?
>> Ashish Malviya: And we are in the process of performing the malware
analysis of samples collected from these two environments. In order to
perform this experiment, we have developed an open-source technique to
restore clean image snapshots in minutes, because we want to make this work
freely available. So we have developed the mechanism so that we can test
these environments independently.
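The talk doesn't name the hypervisor or the restore tooling, so as one
illustration, here is a minimal sketch of rolling a virtual machine back to a
clean snapshot between URL visits, assuming VirtualBox and its VBoxManage
command line; the VM and snapshot names are hypothetical.

    import subprocess

    VM_NAME = "xp-client-honeypot"  # hypothetical VM name
    CLEAN_SNAPSHOT = "clean"        # snapshot of the known-good image

    def restore_clean_vm() -> None:
        # Stop the (possibly infected) VM; it may already be off.
        subprocess.run(["VBoxManage", "controlvm", VM_NAME, "poweroff"],
                       check=False)
        # Roll back to the clean snapshot and boot for the next URL.
        subprocess.run(["VBoxManage", "snapshot", VM_NAME, "restore",
                        CLEAN_SNAPSHOT], check=True)
        subprocess.run(["VBoxManage", "startvm", VM_NAME,
                        "--type", "headless"], check=True)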
So this is the architecture of the virtual environment, where we have a
machine running with a virtual environment loaded on it, and Capture-HPC, a
client honeypot, is running in that virtual environment and accessing the
malicious URLs. The client honeypot server is controlling the operation of
Capture-HPC, monitoring the behavior of the client machine running in the
virtual environment, and collecting the logs from those malicious URLs.
Similarly, we have another environment, which is the bare-metal honeypot, a
physical environment, where we have two separate machines. One is the client
honeypot server, and one is the client honeypot client, which accesses the
malicious URLs. In the same way, the client honeypot server is controlling
the client honeypot client to access those URLs, and we are using a Windows
PE environment and a USB drive containing the Windows XP image to perform the
boot sequence.
When the client honeypot accesses a malicious URL, it needs to be rebooted
and reimaged before it goes on to access the next URL. That operation is done
by Windows PE and Windows XP running on two separate USB drives. The
performance and the operation are almost the same as in the virtual
environment; only the boot sequence differs. And it uses a wake-on-LAN signal
and a shutdown mechanism to control the client remotely.
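Wake-on-LAN itself is a simple, standard protocol: a "magic packet" of six
0xFF bytes followed by the target's MAC address repeated sixteen times,
broadcast over UDP. Here is a minimal sketch; the MAC address in the usage
example is hypothetical.

    import socket

    def wake_on_lan(mac: str, broadcast: str = "255.255.255.255",
                    port: int = 9) -> None:
        # Build the magic packet: 6 x 0xFF, then the MAC repeated 16 times.
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    # e.g. wake_on_lan("00:11:22:33:44:55")  # hypothetical MAC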
The third objective is basically about the experimental design. As we already
discussed, we assume that some malware identifies virtual environments and
restricts itself from downloading the malicious content. So we want to
analyze how it behaves with virtualization and without virtualization.
Our experiment is to run these two environments in parallel and to feed them
the malicious URLs at the same time. We have different data sources for
malicious websites; we are getting malicious URLs from Microsoft, Malware
Domain List, and Shadowserver to run our experiment. So here is a quick
snapshot of our experiment: we have the two environments running in parallel,
and we are feeding them the malicious URLs at the same time.
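A minimal sketch of that parallel design is below: each URL is handed to both
environments at roughly the same time, so any difference in detection can't
be blamed on the website changing between visits. The two checker functions
are hypothetical stand-ins for driving each honeypot environment.

    from concurrent.futures import ThreadPoolExecutor

    def check_in_virtual(url: str) -> bool:
        # Placeholder: drive the virtualized client honeypot at this URL
        # and report whether an attack was detected.
        return False

    def check_on_bare_metal(url: str) -> bool:
        # Placeholder: drive the bare-metal client honeypot at this URL.
        return False

    def run_experiment(urls: list[str]) -> list[tuple[str, bool, bool]]:
        results = []
        with ThreadPoolExecutor(max_workers=2) as pool:
            for url in urls:
                virt = pool.submit(check_in_virtual, url)
                bare = pool.submit(check_on_bare_metal, url)
                # Wait for both environments before the next URL.
                results.append((url, virt.result(), bare.result()))
        return results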
These are some preliminary results, which show that there is a difference
between the malicious URLs detected by the two environments. So we are in the
process of diving into the detail and analyzing why there are differences in
the detection capabilities of these two environments.
We'll be performing a statistical analysis of the detection capabilities of
the honeypots, and we are taking into consideration all the variables: the
environments we are using, the sources of the URLs, and the time at which
those URLs were run on both honeypots. Then we are proceeding towards malware
reverse engineering to capture the behavior of those malicious websites.
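The talk doesn't say which statistical test will be used, but as one
reasonable choice, here is a sketch of a chi-square test on a 2x2 table of
detected versus not-detected URLs per environment; the counts are made up
purely for illustration.

    from scipy.stats import chi2_contingency

    # Rows: virtual, bare metal; columns: detected, not detected.
    # These counts are hypothetical, for illustration only.
    table = [
        [34, 166],  # virtual environment
        [52, 148],  # bare-metal environment
    ]

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p={p_value:.4f}")
    if p_value < 0.05:
        print("Detection rates differ significantly between environments.")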
Along the same lines, our future work involves developing a proof of concept
which shows that bare metal works and that it captures the malware that
detects virtual machines.
The proof of concept is basically shellcode that detects the virtual
environment and restricts itself from downloading the malicious content.
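One well-known trick such code can use is checking whether the network
adapter's MAC prefix (OUI) belongs to a virtualization vendor; real malware
also uses lower-level checks. The sketch below shows only that one heuristic
and is in no way the team's actual proof-of-concept shellcode.

    import uuid

    # Common OUI prefixes assigned to virtualization vendors.
    VM_MAC_PREFIXES = {
        "00:05:69", "00:0c:29", "00:50:56",  # VMware
        "08:00:27",                          # VirtualBox
    }

    def looks_like_vm() -> bool:
        mac = f"{uuid.getnode():012x}"
        pretty = ":".join(mac[i:i + 2] for i in range(0, 12, 2))
        return pretty[:8] in VM_MAC_PREFIXES

    if looks_like_vm():
        print("Virtual environment suspected: skip the payload.")
    else:
        print("Physical machine suspected: proceed.")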
So our next step involves partnering with other research organizations to
perform this malware analysis with us.
That concludes our research work on honeynets, and now I would like to hand
it over to the professor to summarize the presentation.
>> Barbara Endicott-Popovsky: As you can see with the honeynet work, what
we're attempting to do is turn this area into genuine research and get it out
of the practitioner mode. The taxonomy of honeypots has given us the ability
to look at different types of honeypots, analyzing the effectiveness of each
type, and maybe there is some additional follow-on work that will be done
here this year.
So to summarize, we have approached the integration of domains, disciplines,
and our topics in very innovative ways, using systems engineering techniques
and systems thinking.
We've developed some collaborations: diverse disciplines working together,
working in emerging technologies, and looking at not just the technical side
of things but also the organizational impacts of security. Now, this is the
final chart. I don't know if we have access to the Internet here at the
podium, but this is a link, and you can try it at your leisure: Cybersecurity
Island, which was just presented at the White House yesterday.
We've developed a virtual world that's being used as a training theme park in
cybersecurity to raise awareness of cybersecurity issues. It's a lot of fun. So I
thought I'd leave that with you. It's a curtain raiser. This is a private URL. It is a
YouTube video, but it's not made public. So if you want to -- if you want to try it,
it's kind of fun.
>>: [inaudible].
>> Barbara Endicott-Popovsky: Should it be live?
>>: [inaudible].
>> Barbara Endicott-Popovsky: Okay.
[video played].
>>: Computer scams, identity theft, computer malware viruses.
>> Barbara Endicott-Popovsky: Get the volume.
>>: How could people be educated about the dangers of the varied and
persistent cybersecurity threats they face daily? Welcome to Cybersecurity
Island, an immersive [inaudible]. Imagine a traveller entering an immersive
learning experience --
>> Barbara Endicott-Popovsky: Is that okay?
>>: The quest for cybersecurity is symbolized by the oily snake representing
false security measures and the healing unicorn representing secure systems
built from the ground up. Cybersecurity Island is composed of several theme
areas in addition to true security land. Through simulated experiences the
traveller is immersed in learning. And an awareness unfolds that the beautiful
deep blue waters in the central fountain have healing powers. Watch as a virus
infected traveller drags herself to the healing waters to become well again.
Enter the house of illusion and deception for example to learn about cyber crimes
that involve both offensive and defensive illusion or deception. The traveller can
learn some of the history of cyber crime through both deactivated and active
examples of exploits. Learn about steganography where information is hidden
within objects by travelling to stego country. A giant stegosaurus may grab you
with its tongue.
A Times Square scene lets the traveller explore various dishonest business
practices. Without safeguards, your dips can escape or be stolen. Point out
fraudulent activity on the island to a cop. An action will be taken. Signs and
arrows lead the traveller to classrooms such as an underground training session.
Lessons learned on the island include security basics, password security, social
network safety, phishing scams, financial fraud, identity theft and malware
viruses and worms.
The traveller quickly senses the integration of the experiential, and the
academic lessons are delivered with humor. Cybersecurity Island is designed
to form a lasting impression and to stimulate the interest of a diverse group
of learners. Come visit Cybersecurity Island and learn to be cyber safe.
>> Barbara Endicott-Popovsky: At any rate, that gives you an idea of some of
the projects we have going at the center. That's -- that was fun to work on.
Are there any questions anybody has about what we're doing? I think we're
ending right on time. Somebody's having cookies. We don't want to stand
between people and their snack.
>>: How do you see the future of this lab developing, and what's your kind of
longer term --
>> Barbara Endicott-Popovsky: You mean this -- this particular --
>>: Well, not with this, but with the lab as a whole.
>> Barbara Endicott-Popovsky: Well, we're on our way --
>>: What's your vision for it?
>> Barbara Endicott-Popovsky: What's the vision?
>>: Yes.
>> Barbara Endicott-Popovsky: The vision's rather grandiose. I mean, we're on
our way to establishing an institute in cybersecurity in this region. The
capabilities and abilities of all of these various people and folks like
yours are astounding. And I've spent enough time with other centers, with
people on the East Coast whom I love, I'm from the East Coast, I don't want
to cast asparagus, but I think that there is a groupthink that people out
here in the Northwest are designed in their DNA to challenge. And I think
it's going to take out-of-the-box thinking to solve the cybersecurity
problems that we're facing.
It's going to take interdisciplinary approaches, it's going to take cross-domain
collaborations to solve our cybersecurity problems. I think there's tremendous
promise in what we're doing with systems engineering. I see our being able to
frame what we mean by cybersecurity and create a framework for conversations
across domains. I've been invited to participate in an international committee this
summer that is defining cybersecurity. They're interested in the contribution
we can make, the iSchool perspective and the systems engineering perspective.
We've had tremendous cooperation from Microsoft. And what I was looking to do
today was take that first step in introducing the kinds of things we're doing and
hopefully expanding our relationship. I'm very excited. I mean things are coming
together.
The systems engineering portion, making definitions of principles, is the
foundation of establishing a body of knowledge. And things grow from there,
integrating all of the things that you've done. I mean, there's a place for
all of the tremendous contributions you folks have made. You must feel like
prophets crying in the wilderness with all the work that Lipner and Howard
and LeBlanc have done, and nobody pays attention. Hello, is anybody
listening?
>>: Until there's a problem.
>> Barbara Endicott-Popovsky: Until there's a problem. But what we want to do
is incorporate all of that work and give it a platform for the kinds of audiences that
we think we're starting to see accumulate around some of these problems.
Does anybody on the team want to add anything?
>> Joseph J. Simpson: Thank you for having us and hosting us.
>> Phil Fawcett: You bet. It was great.
>> Barbara Endicott-Popovsky: Thank you.
[applause]