Pattern-Oriented Software Architecture
Concurrent & Networked Objects
Friday, March 18, 2016
Dr. Douglas C. Schmidt
schmidt@uci.edu
www.cs.wustl.edu/~schmidt/posa.ppt
Electrical & Computing Engineering Department
The Henry Samueli School of Engineering
University of California, Irvine
The Road Ahead
CPUs and networks have increased by 3-7 orders of
magnitude in the past decade:
• CPU clock speeds: 10 Megahertz to 1 Gigahertz
• Network links: 2,400 bits/sec to 1 Gigabit/sec

Extrapolating this trend to 2010 yields:
• ~100 Gigahertz desktops
• ~100 Gigabits/sec LANs
• ~100 Megabits/sec wireless
• ~10 Terabits/sec Internet backbone

These advances stem largely from standardizing
hardware & software APIs and protocols, e.g.:
• Intel x86 & PowerPC chipsets
• TCP/IP, ATM
• POSIX & JVMs
• CORBA ORBs & components
• Ada, C, C++, RT Java

In general, software has not improved as rapidly or
as effectively as hardware. Increasing software
productivity and QoS depends heavily on COTS.
Overview of Patterns and Pattern
Languages
www.posa.uci.edu
Patterns
•Present solutions to common software
problems arising within a certain context
•Help resolve key design forces
•Capture recurring structures & dynamics
among software participants to facilitate
reuse of successful designs
•Generally codify expert knowledge &
“best practices”
Pattern Languages
• Define a vocabulary for
talking about software
development problems
• Provide a process for the
orderly resolution of these
problems
• Help to generate & reuse
software architectures
Example: the Proxy pattern helps resolve key design
forces such as:
•Flexibility
•Extensibility
•Dependability
•Predictability
•Scalability
•Efficiency
Overview of Frameworks &
Components
Framework
•An integrated collection of components
that collaborate to produce a reusable
architecture for a family of related
applications
• Frameworks differ from conventional
class libraries:
Frameworks                      Class Libraries
“Semi-complete” applications    Stand-alone components
Domain-specific                 Domain-independent
Inversion of control            Borrow caller’s thread of control
Class Library Architecture
Framework Architecture
•Frameworks facilitate reuse of successful software designs & implementations
•Applications inherit from and instantiate framework components
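The inversion-of-control row above is the crux of the framework/class-library distinction, and can be sketched in a few lines of C++ (all class names here are illustrative, not part of JAWS or ACE): the framework owns the control flow and calls "down" into application-supplied hooks.

```cpp
#include <string>

// Framework side: the "semi-complete" application owns the
// control flow and calls down into application hooks
// (inversion of control).
class Event_Handler {            // hook interface the framework calls
public:
  virtual ~Event_Handler() = default;
  virtual std::string handle_event(const std::string &ev) = 0;
};

class Mini_Framework {
public:
  explicit Mini_Framework(Event_Handler &h) : handler_(h) {}
  // The framework's event loop drives the application, not vice versa.
  std::string run_once(const std::string &ev) {
    return "dispatched:" + handler_.handle_event(ev);
  }
private:
  Event_Handler &handler_;
};

// Application side: inherits from a framework component and
// only fills in the hook.
class Echo_Handler : public Event_Handler {
public:
  std::string handle_event(const std::string &ev) override { return ev; }
};
```

A class library, by contrast, would expose `handle_event`-style helpers for the application's own `main()` loop to call, borrowing the caller's thread of control.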
The JAWS Web Server Framework
Key Sources of Variation
• Concurrency models
  • e.g., thread pool vs. thread-per-request
• Event demultiplexing models
  • e.g., sync vs. async
• File caching models
  • e.g., LRU vs. LFU
• Content delivery protocols
  • e.g., HTTP 1.0+1.1, HTTP-NG, IIOP, DICOM
Event Dispatcher
• Accepts client connection request events, receives HTTP
GET requests, & coordinates JAWS’s event demultiplexing
strategy with its concurrency strategy.
• As events are processed they are dispatched to the
appropriate Protocol Handler.

Protocol Handler
• Performs parsing & protocol processing of HTTP request
events.
• JAWS Protocol Handler design allows multiple Web
protocols, such as HTTP/1.0, HTTP/1.1, & HTTP-NG, to be
incorporated into a Web server.
• To add a new protocol, developers just write a new
Protocol Handler component & configure it into the
framework.

Cached Virtual Filesystem
• Improves Web server performance by reducing the
overhead of file system accesses when processing HTTP
GET requests.
• Various caching strategies, such as least-recently used
(LRU) or least-frequently used (LFU), can be selected
according to the actual or anticipated workload &
configured statically or dynamically.
Applying Patterns to Resolve Key
JAWS Design Challenges
Patterns help resolve the following common challenges:
•Encapsulating low-level OS APIs
•Decoupling event demultiplexing &
connection management from protocol
processing
•Scaling up performance via threading
•Implementing a synchronized request
queue
•Minimizing server threading overhead
•Using asynchronous I/O effectively
•Efficiently demuxing asynchronous
operations & completions
•Enhancing server configurability
•Transparently parameterizing
synchronization into components
•Ensuring locks are released
properly
•Minimizing unnecessary locking
•Synchronizing singletons correctly
Encapsulating Low-level OS APIs
Context
A Web server must manage a variety of OS services,
including processes, threads, socket connections,
virtual memory, & files. Most operating systems
provide low-level APIs written in C to access these
services.
Problem
The diversity of hardware and operating
systems makes it hard to build portable
and robust Web server software by
programming directly to low-level
operating system APIs, which are
tedious, error-prone, & non-portable.
Wrapper Facade
[Class diagram: the Application calls methods method1() …
methodN() on a Wrapper Facade, which holds the API’s data
& forwards each call to the underlying API functions.]
Solution
Apply the Wrapper Facade design
pattern to avoid accessing low-level
operating system APIs directly.
Intent
This pattern encapsulates data &
functions provided by existing non-OO
APIs within more concise, robust,
portable, maintainable, & cohesive
OO class interfaces.
  void method1() {
    functionA();
    functionB();
  }

  void methodN() {
    functionA();
  }

[Sequence diagram: the Application invokes method() on the
Wrapper Facade, which calls functionA() on APIFunctionA and
functionB() on APIFunctionB.]
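As a concrete sketch of the pattern, the following C++ class wraps the low-level `pthread_mutex_*()` C API in the style of ACE's mutex wrapper facades (a minimal illustration, not the actual ACE implementation):

```cpp
#include <pthread.h>

// Wrapper facade: encapsulates the non-OO pthread_mutex_*()
// API in a concise, portable C++ class interface.
class Thread_Mutex {
public:
  Thread_Mutex()  { pthread_mutex_init(&mutex_, nullptr); }
  ~Thread_Mutex() { pthread_mutex_destroy(&mutex_); }
  int acquire()   { return pthread_mutex_lock(&mutex_); }
  int release()   { return pthread_mutex_unlock(&mutex_); }

private:
  pthread_mutex_t mutex_;   // the underlying low-level OS type

  // Copying a raw OS mutex is an error the C API can't prevent;
  // the wrapper facade forbids it at compile time.
  Thread_Mutex(const Thread_Mutex &) = delete;
  Thread_Mutex &operator=(const Thread_Mutex &) = delete;
};
```

Note how initialization and destruction, which are easy to forget with the raw C API, become automatic via the constructor and destructor.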
Decoupling Event Demuxing and
Connection Management from Protocol
Processing
Context
• A Web server can be accessed simultaneously by multiple clients, each of which has
its own connection to the server.
• A Web server must therefore be able to demultiplex and process multiple types of
indication events that can arrive from different clients concurrently.
• A common way to demultiplex events in a Web server is to use select().
Problem
• Developers often tightly couple a Web server’s
event-demultiplexing and connection-management code
with its protocol-handling code that performs HTTP 1.0
processing.
• In such a design, the demultiplexing and
connection-management code cannot be reused as black-box
components, neither by other HTTP protocols, nor by other
middleware and applications, such as ORBs and image
servers.
• Thus, changes to the event-demultiplexing and
connection-management code will affect the Web server
protocol code directly and may introduce subtle bugs.
  • e.g., porting it to use TLI or WaitForMultipleObjects()

[Diagram: multiple clients issue HTTP GET and connect
requests to the Web server, whose Event Dispatcher uses
select() over socket handles.]
Solution
Apply the Reactor pattern and the Acceptor-Connector pattern to separate the
generic event-demultiplexing and connection-management code from the web
server’s protocol code.
The Reactor Pattern
Intent
The Reactor architectural
pattern allows event-driven
applications to demultiplex
& dispatch service requests
that are delivered to an
application from one or
more clients.
[Class diagram: the Reactor (handle_events(),
register_handler(), remove_handler()) owns a handle set,
uses the Synchronous Event Demuxer (select()), and
dispatches to Event Handlers (handle_event(), get_handle()),
which own Handles and are subclassed by Concrete Event
Handlers A & B.]
Observations
[Sequence diagram: in the initialize phase, the Main Program
registers a Concrete Event Handler with the Reactor
(register_handler(), get_handle()); in the event-handling
phase, the Reactor’s handle_events() loop blocks in the
Synchronous Event Demultiplexer’s select() and dispatches
handle_event() callbacks, which perform the service().]
•Note the inversion of control
•Also note how long-running event handlers can degrade
the QoS, since callbacks steal the reactor’s event
thread!
The Acceptor-Connector Pattern
Intent
The Acceptor-Connector design pattern decouples the connection &
initialization of cooperating peer services in a networked system from the
processing performed by the peer services after being connected & initialized.
[Class diagram: a Dispatcher (select(), handle_events(),
register_handler(), remove_handler()) notifies Connectors
(connect(), complete(), handle_event()), Acceptors (open(),
accept(), handle_event()) & Service Handlers (open(),
set_handle(), handle_event(), peer_stream_); Connectors &
Acceptors use Transport Handles, own passive/active
endpoints (peer_acceptor_), and <<create>> & <<activate>>
Concrete Service Handlers A & B via Concrete Connectors &
Concrete Acceptors.]
Acceptor Dynamics
[Sequence diagram: Application, Acceptor, Service Handler, &
Dispatcher]
1. Passive-mode endpoint initialize phase: the Application
open()s the Acceptor, which registers its passive-mode
handle with the Dispatcher for ACCEPT events
(register_handler()); the Dispatcher then runs
handle_events().
2. Service handler initialize phase: on a connection event
the Acceptor’s accept() creates a Service Handler, open()s
it, & registers the new data-mode handle with the
Dispatcher.
3. Service processing phase: the Dispatcher delivers events
to the Service Handler’s handle_event(), which performs
the service().

• The Acceptor ensures that passive-mode transport endpoints
aren’t used to read/write data accidentally
  •And vice versa for data transport endpoints…
• There is typically one Acceptor factory
per-service/per-port
  •Additional demuxing can be done at higher layers, a la
CORBA
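The acceptor role in the dynamics above can be illustrated with BSD sockets (simplified, hypothetical classes: a real acceptor would also register with a dispatcher rather than block in accept()):

```cpp
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// The service handler is initialized with the data-mode handle.
class Service_Handler {
public:
  void open(int data_handle) { handle_ = data_handle; }
  int handle() const { return handle_; }
private:
  int handle_ = -1;
};

// The acceptor owns the passive-mode endpoint; accept_into()
// factories a connected data-mode handle, so the two endpoint
// roles can't be confused.
class Acceptor {
public:
  // Initialize the passive-mode endpoint; port 0 lets the OS pick.
  bool open(unsigned short port = 0) {
    listen_fd_ = socket(AF_INET, SOCK_STREAM, 0);
    if (listen_fd_ < 0) return false;
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(port);
    if (bind(listen_fd_, (sockaddr *)&addr, sizeof addr) < 0) return false;
    return listen(listen_fd_, 5) == 0;
  }
  // Report the port the OS assigned to the passive-mode endpoint.
  unsigned short port() const {
    sockaddr_in addr{};
    socklen_t len = sizeof addr;
    getsockname(listen_fd_, (sockaddr *)&addr, &len);
    return ntohs(addr.sin_port);
  }
  // Accept one connection & initialize a service handler with
  // the new data-mode handle.
  bool accept_into(Service_Handler &sh) {
    int data_fd = accept(listen_fd_, nullptr, nullptr);
    if (data_fd < 0) return false;
    sh.open(data_fd);
    return true;
  }
  int listen_handle() const { return listen_fd_; }
private:
  int listen_fd_ = -1;
};
```

Note that the Service_Handler never sees the passive-mode handle, which is how the pattern prevents accidental reads/writes on it.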
Synchronous Connector Dynamics
Motivation for Synchrony
• If connection latency is negligible
  •e.g., connecting with a server on the same host via a
‘loopback’ device
• If multiple threads of control are available & it is
efficient to use a thread-per-connection to connect each
service handler synchronously
• If the services must be initialized in a fixed order & the
client can’t perform useful work until all connections are
established

[Sequence diagram: Application, Connector, Service Handler,
& Dispatcher]
1. Sync connection initiation phase: the Application asks
the Connector to connect() a Service Handler to an Addr;
the Connector blocks until the connection completes.
2. Service handler initialize phase: the Connector open()s
the Service Handler & registers its handle with the
Dispatcher (get_handle(), register_handler()).
3. Service processing phase: the Dispatcher’s
handle_events() loop dispatches handle_event() callbacks to
the Service Handler, which performs the service().
Asynchronous Connector Dynamics
Motivation for Asynchrony
• If the client is establishing connections over
high-latency links
• If the client is a single-threaded application
• If the client is initializing many peers that can be
connected in an arbitrary order

[Sequence diagram: Application, Connector, Service Handler,
& Dispatcher]
1. Async connection initiation phase: the Application asks
the Connector to connect() a Service Handler to an Addr; the
Connector registers itself with the Dispatcher for CONNECT
events & returns immediately.
2. Service handler initialize phase: when the connection
completes, the Dispatcher notifies the Connector’s
complete() method, which open()s the Service Handler &
registers its handle (register_handler()).
3. Service processing phase: the Dispatcher dispatches
handle_event() callbacks to the Service Handler, which
performs the service().
Applying the Reactor and Acceptor-Connector Patterns in JAWS
The Reactor architectural
pattern decouples:
1. JAWS’s generic
synchronous event
demultiplexing &
dispatching logic from
2. The HTTP protocol
processing it performs
in response to events
[Class diagram: the Reactor (handle_events(),
register_handler(), remove_handler()) owns a handle set,
uses the Synchronous Event Demuxer (select()), and
dispatches to Event Handlers (handle_event(), get_handle());
the HTTP Acceptor & HTTP Handler are the concrete event
handlers that JAWS registers.]
The Acceptor-Connector design pattern can use a Reactor as its
Dispatcher in order to help decouple:
1.The connection & initialization of peer client & server HTTP services
from
2.The processing activities performed by these peer services once
they are connected & initialized.
Reactive Connection
Management & Data Transfer in
JAWS
Scaling Up Performance via
Threading
Context
• HTTP runs over TCP, which uses flow control to ensure
that senders do not produce data more rapidly than slow
receivers or congested networks can buffer and process.
• Since achieving efficient end-to-end quality of service
(QoS) is important to handle heavy Web traffic loads, a
Web server must scale up efficiently as its number of
clients increases.

Problem
• Processing all HTTP GET requests reactively within a
single-threaded process does not scale up, because each
server CPU time-slice spends much of its time blocked
waiting for I/O operations to complete.
• Similarly, to improve QoS for all its connected clients,
an entire Web server process must not block while waiting
for connection flow control to abate so it can finish
sending a file to a client.
Solution
Apply the Half-Sync/Half-Async
architectural pattern to scale up
server performance by processing
different HTTP requests
concurrently in multiple threads.
This solution yields two benefits:
1. Threads can be mapped to separate CPUs
to scale up server performance via multiprocessing.
2. Each thread blocks independently, which
prevents one flow-controlled connection from
degrading the QoS other clients receive.
The Half-Sync/Half-Async Pattern
Intent
The Half-Sync/Half-Async architectural pattern decouples
async & sync service processing in concurrent systems, to
simplify programming without unduly reducing performance.
The pattern introduces two intercommunicating layers, one
for async & one for sync service processing.
[Layer diagram: Sync Services 1-3 in the Sync Service Layer
<<read/write>> a Queue in the Queueing Layer; the Async
Service in the Async Service Layer <<dequeue/enqueue>>s that
Queue and is <<interrupt>>ed by an External Event Source.]
This pattern defines two service
processing layers—one async and
one sync—along with a queueing
layer that allows services to exchange
messages between the two layers.
The pattern allows sync services,
such as HTTP protocol processing, to
run concurrently, relative both to each
other and to async services, such as
event demultiplexing.
[Sequence diagram: the External Event Source notifies the
Async Service, which read()s & work()s on the message, then
enqueue()s it onto the Queue; the Queue notifies a Sync
Service, which read()s & work()s on the message
synchronously.]
Applying the Half-Sync/Half-Async
Pattern in JAWS
[Diagram: Worker Threads 1-3 in the Synchronous Service
Layer <<get>> requests from the Request Queue in the
Queueing Layer; the HTTP Handlers & HTTP Acceptor in the
Asynchronous Service Layer <<put>> requests there when the
Reactor signals <<ready to read>> on its socket event
sources.]

• JAWS uses the Half-Sync/Half-Async pattern to process
HTTP GET requests synchronously from multiple clients, but
concurrently in separate threads.
• The worker thread that removes the request synchronously
performs HTTP protocol processing & then transfers the file
back to the client.
• If flow control occurs on its client connection, this
thread can block without degrading the QoS experienced by
clients serviced by other worker threads in the pool.
Implementing a Synchronized Request
Queue
Context
• The Half-Sync/Half-Async
pattern contains a queue.
• The JAWS Reactor thread is a
‘producer’ that inserts HTTP
GET requests into the queue.
• Worker pool threads are
‘consumers’ that remove &
process queued requests.
Problem
A naive implementation of a request queue will incur race
conditions or ‘busy waiting’ when multiple threads insert
and remove requests.
• e.g., multiple concurrent producer and consumer threads
can corrupt the queue’s internal state if it is not
synchronized properly.
• Similarly, these threads will ‘busy wait’ when the queue
is empty or full, which wastes CPU cycles unnecessarily.

[Diagram: Worker Threads 1-3 <<get>> requests from the
Request Queue, into which the HTTP Handlers & HTTP Acceptor
<<put>> them via the Reactor.]

Solution
Apply the Monitor Object pattern to
implement a synchronized queue.
[Class diagram: a Client invokes sync_method1() …
sync_methodN() on the Monitor Object, which uses a Monitor
Lock (acquire(), release()) & 2..* Monitor Conditions
(wait(), notify(), notify_all()).]

• This design pattern synchronizes concurrent method
execution to ensure that only one method at a time runs
within an object.
• It also allows an object’s methods to cooperatively
schedule their execution sequences.
Dynamics of the Monitor Object
Pattern
[Sequence diagram: Client Threads 1 & 2 invoke
sync_method1() & sync_method2() on the Monitor Object, which
uses the Monitor Lock & a Monitor Condition.]
1. Synchronized method invocation & serialization:
sync_method1() acquire()s the monitor lock and does its
work.
2. Synchronized method thread suspension: when it must
wait(), the OS thread scheduler atomically releases the
monitor lock and suspends the client thread.
3. Monitor condition notification: sync_method2() acquire()s
the lock, does its work, notify()s the condition, & then
release()s the lock.
4. Synchronized method thread resumption: the OS thread
scheduler atomically reacquires the monitor lock and
resumes the client thread and the synchronized method.
Applying the Monitor Object Pattern in JAWS
The JAWS synchronized request queue implements the queue’s
not-empty and not-full monitor conditions via a pair of ACE
wrapper facades for POSIX-style condition variables.

[Class diagram: the HTTP Handler <<put>>s & 2 Worker Threads
<<get>> requests through the Request Queue’s put()/get()
methods, which use a Thread Condition (wait(), notify(),
notify_all()) & a Thread_Mutex (acquire(), release()).]
•When a worker thread attempts to dequeue an HTTP GET request
from an empty queue, the request queue’s get() method
atomically releases the monitor lock and the worker thread
suspends itself on the not-empty monitor condition.
•The thread remains suspended until the queue is no longer empty,
which happens when an HTTP_Handler running in the Reactor
thread inserts a request into the queue.
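A minimal sketch of such a monitor-object queue, using std::mutex & std::condition_variable in place of the ACE wrapper facades named above (put()/get() mirror the slide's interface, but this is not JAWS's actual code):

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <string>

// Monitor Object: every method acquires the monitor lock, so
// only one method at a time runs within the object, and the
// two monitor conditions schedule producers & consumers.
class Request_Queue {
public:
  explicit Request_Queue(std::size_t max_size) : max_size_(max_size) {}

  // Synchronized method: blocks on the not-full condition.
  void put(const std::string &request) {
    std::unique_lock<std::mutex> lock(lock_);        // monitor lock
    not_full_.wait(lock, [this] { return queue_.size() < max_size_; });
    queue_.push_back(request);
    not_empty_.notify_one();                         // wake a consumer
  }

  // Synchronized method: wait() atomically releases the monitor
  // lock and suspends the caller until the queue is non-empty.
  std::string get() {
    std::unique_lock<std::mutex> lock(lock_);
    not_empty_.wait(lock, [this] { return !queue_.empty(); });
    std::string request = queue_.front();
    queue_.pop_front();
    not_full_.notify_one();                          // wake a producer
    return request;
  }

private:
  std::mutex lock_;                       // monitor lock
  std::condition_variable not_empty_;     // monitor conditions
  std::condition_variable not_full_;
  std::deque<std::string> queue_;
  std::size_t max_size_;
};
```

The predicate-taking `wait()` overload re-checks the condition on every wakeup, which is what prevents both busy-waiting and the lost-wakeup races a naive implementation would suffer.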
Minimizing Server Threading Overhead
Context
Socket implementations in certain multi-threaded operating
systems provide a concurrent accept() optimization to accept
client connection requests and improve the performance of
Web servers that implement the HTTP 1.0 protocol as follows:
•The operating system allows a pool of threads in a Web
server to call accept() on the same passive-mode socket
handle.
•When a connection request arrives, the operating system’s
transport layer creates a new connected transport endpoint,
encapsulates this new endpoint with a data-mode socket
handle and passes the handle as the return value from
accept().
•The operating system then schedules one of the threads in
the pool to receive this data-mode handle, which it uses to
communicate with its connected client.

[Diagram: several threads blocked in accept() on the same
passive-mode socket handle.]
Drawbacks with the Half-Sync/
Half-Async Architecture
Problem
Although the Half-Sync/Half-Async threading model is more
scalable than the purely reactive model, it is not
necessarily the most efficient design.
•e.g., passing a request between the Reactor thread and a
worker thread incurs:
  •Dynamic memory (de)allocation,
  •Synchronization operations,
  •A context switch, &
  •CPU cache updates
•This overhead makes JAWS’ latency unnecessarily high,
particularly on operating systems that support the
concurrent accept() optimization.

[Diagram: Worker Threads 1-3 <<get>> requests from the
Request Queue fed via <<put>> by the HTTP Handlers, HTTP
Acceptor, & Reactor.]
Solution
Apply the Leader/Followers
pattern to minimize server
threading overhead.
Dynamics in the Leader/Followers Pattern
[Sequence diagram: Threads 1 & 2 join() the Thread Pool; the
leader calls handle_events() on the Handle Set while
followers sleep.]
1. Leader thread demuxing: thread 1 waits for a new event in
handle_events(); thread 2 sleeps until it becomes the
leader.
2. Follower thread promotion: when an event arrives, thread
1 deactivates the handle (deactivate_handle()) and promotes
a new leader (promote_new_leader()), so thread 2 wakes up
and calls handle_events().
3. Event handler demuxing & event processing: thread 2 waits
for a new event while thread 1 processes the current event
via the Concrete Event Handler’s handle_event(), then
reactivates the handle (reactivate_handle()).
4. Rejoining the thread pool: thread 1 join()s the pool and
sleeps until it becomes the leader.
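The promotion protocol above can be sketched with a condition-variable-based pool (illustrative names and an in-memory event queue standing in for the handle set; a real implementation demultiplexes OS handles):

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

// Leader/Followers sketch: threads take turns being the leader;
// only the leader waits for an event, then promotes a follower
// before processing it, so no request hand-off queue is needed.
class LF_Thread_Pool {
public:
  // Called by each pool thread: become leader, take one event,
  // promote a new leader, then process the event outside the
  // lock. Returns the processed result.
  int join_and_handle_one() {
    std::unique_lock<std::mutex> lock(lock_);
    // Followers sleep until no other thread is the leader.
    followers_.wait(lock, [this] { return !has_leader_; });
    has_leader_ = true;                    // this thread leads now
    // Only the leader demuxes; stand-in for handle_events().
    event_ready_.wait(lock, [this] { return !events_.empty(); });
    int ev = events_.front();
    events_.pop_front();
    has_leader_ = false;                   // promote_new_leader()
    followers_.notify_one();
    lock.unlock();
    return process(ev);                    // event processing, unlocked
  }

  void post_event(int ev) {
    std::lock_guard<std::mutex> g(lock_);
    events_.push_back(ev);
    event_ready_.notify_one();
  }

private:
  int process(int ev) { return ev * 2; }   // stand-in for real work
  std::mutex lock_;
  std::condition_variable followers_, event_ready_;
  std::deque<int> events_;
  bool has_leader_ = false;
};
```

The key contrast with Half-Sync/Half-Async is visible in `join_and_handle_one()`: the same thread that demultiplexes the event also processes it, so no allocation, queueing, or context switch separates the two steps.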
Applying the Leader/Followers
Pattern in JAWS
Two options:
1. If the platform supports the accept() optimization then
the OS implements the Leader/Followers pattern.
2. Otherwise, this pattern can be implemented as a reusable
framework.

[Class diagram: the Thread Pool synchronizer (join(),
promote_new_leader()) works with a Handle Set
(handle_events(), deactivate_handle(), reactivate_handle(),
select()) that demultiplexes Handles to Event Handlers
(handle_event(), get_handle()), concretized by the HTTP
Acceptor & HTTP Handler.]

Although the Leader/Followers thread pool design is highly
efficient, the Half-Sync/Half-Async design may be more
appropriate for certain types of servers, e.g.:
• The Half-Sync/Half-Async design can reorder and prioritize
client requests more flexibly, because it has a synchronized
request queue implemented using the Monitor Object pattern.
• It may be more scalable, because it queues requests in Web
server virtual memory, rather than the operating system
kernel.
The Proactor Pattern
Problem
Developing software that achieves the potential efficiency &
scalability of async I/O is hard due to the separation in
time & space of async operation invocations and their
subsequent completion events.

Solution
Apply the Proactor architectural pattern to make efficient
use of async I/O. This pattern allows event-driven
applications to efficiently demultiplex & dispatch service
requests triggered by the completion of async operations,
thereby achieving the performance benefits of concurrency
without incurring many of its liabilities.

[Class diagram: an Initiator <<uses>> the Asynchronous
Operation Processor (execute_async_op()) to <<execute>>
Asynchronous Operations (async_op()) on Handles; completed
operations <<enqueue>> events on the Completion Event Queue,
which the Asynchronous Event Demuxer
(get_completion_event()) <<dequeues>> for the Proactor
(handle_events()), which <<demultiplexes & dispatches>> to
Completion Handlers (handle_event()) & their Concrete
Completion Handlers.]
Dynamics in the Proactor Pattern
[Sequence diagram: Initiator, Asynchronous Operation
Processor, Asynchronous Operation, Completion Handler,
Completion Event Queue, & Proactor]
1. Initiate operation: the Initiator calls async_operation()
with its Completion Handler.
2. Process operation: the Asynchronous Operation Processor
executes the operation (exec_async_operation()).
3. Run event loop: the Initiator calls the Proactor’s
handle_events().
4. Generate & queue completion event: when the operation
finishes, its Result is enqueued as a completion event on
the Completion Event Queue.
5. Dequeue completion event & perform completion processing:
the Proactor dequeues the event & dispatches the Result to
the Completion Handler’s handle_event(), which performs the
service().

Note similarities & differences with the Reactor pattern,
e.g.:
•Both process events via callbacks
•However, it’s generally easier to multi-thread a proactor
Applying the Proactor Pattern in JAWS
The Proactor pattern structures the JAWS concurrent server
to receive & process requests from multiple clients
asynchronously. JAWS HTTP components are split into two
parts:
1. Operations that execute asynchronously
  • e.g., to accept connections & receive client HTTP GET
requests
2. The corresponding completion handlers that process the
async operation results
  • e.g., to transmit a file back to a client after an async
connection operation completes

[Class diagram: the Web Server initiates async operations
via the Windows NT Operating System (execute_async_op()),
which <<executes>> Asynchronous Operations (AcceptEx(),
ReadFile(), WriteFile()) on Handles; completions are
<<enqueued>> on an I/O Completion Port, which the
Asynchronous Event Demuxer (GetQueuedCompletionStatus())
<<dequeues>> for the Proactor (handle_events()), which
<<demultiplexes & dispatches>> to Completion Handlers
(handle_event()) concretized by the HTTP Acceptor & HTTP
Handler.]
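The proactor's completion-queue dynamics can be sketched portably with std::async standing in for the OS's asynchronous operation processor (an illustrative approximation, not the Windows I/O completion port mechanism shown above):

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <future>
#include <mutex>
#include <string>

// Proactor sketch: an initiator starts an async operation; its
// completion event is queued; handle_events() dequeues & then
// dispatches to the completion handler.
class Proactor {
public:
  using Completion_Handler = std::function<void(const std::string &)>;

  // Initiate the operation: it runs on another thread and
  // enqueues its result as a completion event when done.
  void async_operation(std::function<std::string()> op,
                       Completion_Handler handler) {
    pending_.push_back(std::async(std::launch::async,
        [this, op, handler] {
          std::string result = op();             // the async work
          std::lock_guard<std::mutex> g(lock_);  // queue completion
          completions_.push_back({result, handler});
          ready_.notify_one();
        }));
  }

  // Event loop step: block for one completion event, then
  // dispatch it to its handler via callback.
  void handle_events() {
    std::unique_lock<std::mutex> lock(lock_);
    ready_.wait(lock, [this] { return !completions_.empty(); });
    Event ev = completions_.front();
    completions_.pop_front();
    lock.unlock();
    ev.handler(ev.result);                       // completion callback
  }

private:
  struct Event { std::string result; Completion_Handler handler; };
  std::mutex lock_;
  std::condition_variable ready_;
  std::deque<Event> completions_;
  std::deque<std::future<void>> pending_;        // keep ops alive
};
```

Unlike the reactor sketch earlier, the callback fires on the completion of the operation rather than on readiness of a handle, which is the essential difference between the two patterns.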
Proactive Connection
Management & Data Transfer in
JAWS