NRL Presentation


RETSINA MAS Architecture with Service Discovery

Reusable Environment for Task-Structured Intelligent Networked Agents

Multi-Agent System Architecture with Local and Wide Area Service Discovery

Functional Architecture

Four parallel threads (see the sketch below):

• Communicator
  – converses with other agents
• Planner
  – matches "sensory" input and "beliefs" to possible plan actions
• Scheduler
  – schedules "enabled" plans for execution
• Execution Monitor
  – executes the scheduled plan
  – swaps out plans for those with higher priorities
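A minimal sketch of how the four threads might be wired together with queues. The queue names, message format, and priority handling are illustrative assumptions, not the actual RETSINA implementation:

```python
import queue
import threading
import time

# Hypothetical skeleton of the four parallel threads.
inbox, enabled_plans, schedule = queue.Queue(), queue.Queue(), queue.PriorityQueue()

def communicator():
    # Converse with other agents: here we just fake one incoming message.
    inbox.put({"performative": "request", "content": "find printer"})

def planner():
    # Match "sensory" input and beliefs to possible plan actions.
    msg = inbox.get()
    enabled_plans.put({"plan": "lookup-service", "goal": msg["content"]})

def scheduler():
    # Schedule "enabled" plans for execution, highest priority first.
    plan = enabled_plans.get()
    schedule.put((1, time.time(), plan))        # (priority, tiebreak, plan)

def execution_monitor():
    # Execute the scheduled plan; a higher-priority arrival would preempt it.
    _, _, plan = schedule.get()
    print("executing", plan["plan"], "for", plan["goal"])

for target in (communicator, planner, scheduler, execution_monitor):
    threading.Thread(target=target, daemon=True).start()
time.sleep(0.5)   # let the pipeline drain in this toy example
```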

Agent Architecture

MAS Infrastructure and Individual Agent Infrastructure

Each infrastructure layer pairs a MAS-wide service with a corresponding module inside each agent:

• MAS Interoperation: Translation Services, Interoperator Services (agent side: Interoperation Modules)
• Capability-to-Agent Mapping: Middle Agents (agent side: Middle Agent Components)
• Name-to-Location Mapping: Agent Name Service (agent side: ANS Component)
• Security: Certificate Authority, Cryptographic Service (agent side: Security Module, Private/Public Keys)
• Performance Services: MAS Monitoring, Reputation Services (agent side: Performance Service Modules)
• Multi-Agent Management Services: Logging, Activity Visualization, Launching (agent side: Logging and Visualization Components)
• ACL Infrastructure: Public Ontology, Protocol Servers (agent side: Parser, Private Ontology, Protocol Engine)
• Communications Infrastructure: Discovery, Message Transfer (agent side: Discovery and Message Transfer Modules)
• Operating Environment: Machines, OS, Network, Multicast Transport Layer, TCP/IP, Wireless, Infrared, SSL

RETSINA Architecture

Agents, Middle Agents, and Infrastructure

[Architecture diagram: RETSINA intelligent software agents (Planner, Scheduler, Execution Monitor) run on top of the MAS Infrastructure, which provides Name Services / White Pages (the Agent Name Service, ANS), Capability Lookup Services / Yellow Pages (Matchmaker), Capability Mediation (Broker), InterOperator Services, resource discovery & management, vocabulary aggregation, communications management, and point-to-point and peer-to-peer communications.]

Local Dynamic Discovery

• Simple Service Discovery Protocol (SSDP) from the Universal Plug and Play (UPnP) initiative
  – Multicast search requests populate lists of alternative infrastructure service providers
  – Received multicast Alive and Byebye messages automatically update the discovered-provider lists
  – Service-specific reactions to discovery events
• Register with one or more services, and optionally auto-register with newly "alive" systems
• Automatic fail-over and pruning of the service lists maintain a set of "viable" service providers (see the bookkeeping sketch below)
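A small sketch of the discovered-provider bookkeeping described above. The event names mirror SSDP's alive/byebye notifications; the class, method names, and staleness bound are assumptions for illustration:

```python
import time

class ProviderList:
    """Tracks discovered infrastructure service providers of one service type."""
    def __init__(self, max_age=1800):
        self.max_age = max_age                  # assumed staleness bound, seconds
        self.providers = {}                     # usn -> (location, last_seen)

    def on_alive(self, usn, location):
        # ssdp:alive (or an M-SEARCH response): add or refresh the provider.
        self.providers[usn] = (location, time.time())

    def on_byebye(self, usn):
        # ssdp:byebye: the provider announced it is going away.
        self.providers.pop(usn, None)

    def viable(self):
        # Prune entries that have not re-announced recently (fail-over support).
        now = time.time()
        self.providers = {u: (loc, seen) for u, (loc, seen) in self.providers.items()
                          if now - seen < self.max_age}
        return [loc for loc, _ in self.providers.values()]

ans_servers = ProviderList()
ans_servers.on_alive("uuid:ans-1::retsina:AgentNameServer", "10.0.0.5:6667")
print(ans_servers.viable())
```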

SSDP Communications

• Announcement of availability / discontinuance of availability: multicast with limited TTL, HTTP NOTIFY (GENA)
• Search for available services: multicast with limited TTL, HTTP M-SEARCH; the query contains the service type (or a wildcard for "all") and the host/port for the response
• Unicast response contains: Unique Service Name, the service's type, the service's IP address, and the service's socket/port

Note: unlike SLP & Jini, SSDP does not require (or define) a separate Directory/Registry entity.
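A minimal M-SEARCH sender matching the exchange above; the search-target string reuses the retsina:AgentNameServer type from the later slides as an assumption, and error handling is omitted:

```python
import socket

# Multicast an SSDP M-SEARCH and print the unicast responses.
MCAST_ADDR = ("239.255.255.250", 1900)
msearch = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: retsina:AgentNameServer\r\n"   # or ssdp:all as the wildcard
    "\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 4)  # limited TTL
sock.settimeout(3)
sock.sendto(msearch.encode("ascii"), MCAST_ADDR)

try:
    while True:
        data, peer = sock.recvfrom(4096)     # unicast responses come back here
        print(peer, data.decode("ascii", "replace").splitlines()[:5])
except socket.timeout:
    pass
```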

An example of an SSDP discovery-enabled application: the RETSINA Agent Name Service server and its clients.

ANS peer-server discovery

• ANS-x comes online with no other servers visible
• ANS-y comes online
  – Sends an Alive packet (now ANS-x knows about ANS-y)
  – Sends a Discovery/Search packet to find other ANS servers (now ANS-y knows about ANS-x)
• ANS-z has been online, but its network connection was detached and has now been repaired
  – Periodic "Alive" packets from each ANS inform the others of its presence
  – If an ANS wanted to talk to a peer but knew of none, it would send a Discovery/Search packet, to which the others would respond


ANS peer-server discovery

• ANS servers advertise themselves as a service of type retsina:AgentNameServer
• If a discovered ANS server cannot be contacted, its information is removed from the peer-candidate list
• Peer ANS servers use each other to provide redundancy and load balancing for ANS client activity
• Peer ANS servers push register and unregister commands to each other, and pull registrations that cannot be looked up locally (see the sketch below)

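A sketch of the push/pull behaviour between peer servers; the class and method names are hypothetical:

```python
class AnsServer:
    """Toy model of peer-server redundancy: push registrations, pull on miss."""
    def __init__(self, name):
        self.name, self.registry, self.peers = name, {}, []

    def register(self, agent, address, from_peer=False):
        self.registry[agent] = address
        if not from_peer:                           # push to the local peer group
            for peer in self.peers:
                peer.register(agent, address, from_peer=True)

    def lookup(self, agent):
        if agent in self.registry:
            return self.registry[agent]
        for peer in self.peers:                     # pull from peers on a local miss
            address = peer.registry.get(agent)
            if address:
                self.registry[agent] = address      # cache for future lookups
                return address
        return None

x, y = AnsServer("ANS-x"), AnsServer("ANS-y")
x.peers, y.peers = [y], [x]
x.register("agent-A", "10.0.0.7:7777")
print(y.lookup("agent-A"))
```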

ANS Client discovery of the Peer-Server cluster

• The ANS Client comes online and does a Discovery/Search for services of type retsina:AgentNameServer
• The ANS Client makes a list of all responding ANS servers and selects one to connect to
• The ANS Client rolls over to an alternate server upon failure to connect to, or interact with, any server (see the sketch below)
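A sketch of the client-side roll-over among discovered servers; `send` here stands in for whatever real transport call the client would make, and all names are assumptions:

```python
class AnsClient:
    """Connects to one discovered ANS server and rolls over on failure."""
    def __init__(self, discovered_servers):
        self.candidates = list(discovered_servers)   # from the SSDP search

    def call(self, request, send):
        # `send` is an assumed callable that raises OSError on failure.
        for server in list(self.candidates):
            try:
                return send(server, request)
            except OSError:
                self.candidates.remove(server)       # drop the failed server, try the next
        raise RuntimeError("no viable ANS server")
```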


ANS Client Registration Information Redundancy

• ANS Clients send register and unregister commands to a server, which forwards them, on behalf of the client, to the other local peer servers (server-facilitated redundancy of registrations).
• When a client "sees" an "Alive" packet from an ANS server (freshly online, or periodically transmitted), the client automatically attaches to that ANS server and sends it a copy of its agent-name registration (client-facilitated redundancy).


• When a client poses a lookup request that an ANS server doesn’t know, the server will query its discovery partners to see if the registration exists in their local registry.

Islands of ANS Server Peer Groups

• When an ANS server can't resolve a "lookup" request locally, or via its local discovery peer group, it can consult its "hierarchy list" of remote ANS servers and attempt a long-distance pull of the desired registry information.
• A forwarded lookup query propagates from one realm of peer servers to the next until a maximum hop count is reached.
• Each hierarchy ANS server looks up the entry locally, then with its local peer group, and then with its own hierarchy partners.


• Each ANS server along the successful lookup trail will learn the registration for ease of future lookups.
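A sketch of the forwarded lookup with a hop limit and learn-on-success caching; the hop ceiling and names are illustrative assumptions:

```python
MAX_HOPS = 4   # assumed ceiling on realm-to-realm forwarding

class HierarchyAns:
    def __init__(self, name):
        self.name, self.registry = name, {}
        self.local_peers, self.hierarchy = [], []   # peer group vs. remote realms

    def lookup(self, agent, hops=0):
        # Order: 1. local registry, 2. local discovery peer group, 3. hierarchy partners.
        if agent in self.registry:
            return self.registry[agent]
        for peer in self.local_peers:
            if agent in peer.registry:
                return self._learn(agent, peer.registry[agent])
        if hops >= MAX_HOPS:
            return None
        for remote in self.hierarchy:
            address = remote.lookup(agent, hops + 1)
            if address:
                return self._learn(agent, address)  # each server on the trail learns it
        return None

    def _learn(self, agent, address):
        self.registry[agent] = address
        return address
```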

Agent Name Services sans-servers

• When an ANS Client comes online, in addition to "looking" for preexisting ANS servers, it announces itself as an SSDP-discoverable service of type retsina:Agent
• ANS Clients "listen" for "Alive" packets from other agents and add them to an internal agent-registration cache for use when no ANS servers are present
• If an authoritative ANS server is present on the network, the local cache is ignored and the ANS server is queried
• If no ANS server is present, ANS Clients (agents) continue to function, using the internal cache for lookup requests
• When an ANS server comes online and sends out an "Alive" packet, the ANS Clients automatically register with the new server and use it for all subsequent requests
• The ANS server can also auto-register agents using only their "Alive" packets, and then update the registration information from an actual "register" command

ANS Registration Leases

• Registrations can contain a lease/TTL request
• The default lease request is 900 seconds (15 minutes)
• Agents can request longer leases
• Agents can periodically re-register to renew their leases
• The ANS server periodically sends an Alive packet at 75% of the default lease time (approximately every 11 minutes); all discovery-enabled ANS clients respond with a registration request that renews their lease
• Any lookup after the lease expires removes the entry from the registry and fails the local search (see the sketch below)
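A sketch of the lease arithmetic described above (900 s default, Alive announcements at 75% of that, expiry checked at lookup time); the data layout is an assumption:

```python
import time

DEFAULT_LEASE = 900                      # seconds (15 minutes)
ALIVE_INTERVAL = 0.75 * DEFAULT_LEASE    # 675 s, roughly every 11 minutes

registry = {}                            # agent name -> (address, expires_at)

def register(agent, address, lease=DEFAULT_LEASE):
    # A re-registration simply pushes the expiry time out again (lease renewal).
    registry[agent] = (address, time.time() + lease)

def lookup(agent):
    entry = registry.get(agent)
    if entry is None:
        return None
    address, expires_at = entry
    if time.time() > expires_at:
        del registry[agent]              # expired entries are purged on lookup
        return None                      # and the local search fails
    return address
```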

Discovery Over the Internet

• Problem: how to quickly and broadly implement a solution that allows discovery and lookup communication between widely dispersed systems.
• Our solution: piggy-back on top of an existing, non-proprietary, and widely used communications framework that provides global connectivity: Gnutella.

Agent-to-Agent (A2A) on top of Gnutella Peer-to-Peer (P2P)

[Diagram: Gnutella's random connectivity contrasted with A2A connectivity to interest groups; peers are categorized as Community, Prime, Alternate, Local, Home, Host Cache, Bad, Used, or New.]

Agent-to-Agent Enhancements

• Per-peer confidence metrics: queries seen, query-hits seen, activity level, bad packets, duplicate queries, duplicate messages, duplicate packets, repeated packet types, sequential packets
• Query modification
  – Min Speed = encoded community identifier: a quick check of whether the packet is worth examining, and hides A2A traffic from "standard" Gnutella servents
  – Prefix the query with the community identifier: the absolute check that the packet is appropriate to process
  – Encrypt queries and responses: further hides the query from Gnutella and protects the conversation
• Task structure: Query and QueryHit packets are escalated to A2A Task objects for handling

A2A Categories of Gnutella Peers

• Community: filters messages related to specific interests
• Prime: a-priori peer candidates loaded at startup
• Alternate: peers connected to recently that have high confidence and benefit levels
• Local: "near" the agent on the current IP network
• Home: on the IP sub-network where the agent is typically located (its home base of operations)
• Host Caches: special PONG servers that tell systems about other peers recently active on the network
• Other: peers that can't be otherwise categorized
• Bad: unreachable, offline, or possibly disruptive
• New: recently learned, but not yet categorized
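A compact sketch of how a peer record might carry both its category and the confidence counters from the enhancements slide; the field names and scoring are assumptions:

```python
from dataclasses import dataclass
from enum import Enum, auto

class PeerCategory(Enum):
    COMMUNITY = auto(); PRIME = auto(); ALTERNATE = auto()
    LOCAL = auto(); HOME = auto(); HOST_CACHE = auto()
    OTHER = auto(); BAD = auto(); NEW = auto()

@dataclass
class PeerRecord:
    address: str
    category: PeerCategory = PeerCategory.NEW
    queries_seen: int = 0
    query_hits_seen: int = 0
    bad_packets: int = 0
    duplicate_packets: int = 0

    def confidence(self) -> float:
        # Illustrative scoring only: reward useful traffic, penalize noise.
        good = self.queries_seen + 2 * self.query_hits_seen
        bad = self.bad_packets + self.duplicate_packets
        return good / (good + bad + 1)

peer = PeerRecord("192.0.2.10:6346", queries_seen=40, query_hits_seen=5, bad_packets=2)
print(peer.category, round(peer.confidence(), 2))
```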

A2A Agent Architecture

[Diagram: an agent hosts Community objects (e.g., an ANS Client community and a Matchmaker Client community), each with Question (?) and Answer (!) objects; the A2A management process tracks peer activity and connects the communities to the peer-to-peer network.]

• A Community object identifies the A2A community that the agent will interact with.
• A Question object is created when an agent asks a question of the Community object.
• An Answer object is an individual response associated with a specific Question.
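A hypothetical usage sketch of the three objects: the class names come from the slides, but the constructors, the ask()/answers() methods, and the faked reply are assumptions:

```python
class Answer:
    def __init__(self, peer, payload):
        self.peer, self.payload = peer, payload

class Question:
    def __init__(self, text):
        self.text, self._answers = text, []
    def add_answer(self, answer):
        self._answers.append(answer)
    def answers(self):
        return list(self._answers)

class Community:
    def __init__(self, kind, discoverable=False):
        self.kind, self.discoverable = kind, discoverable
    def ask(self, text):
        question = Question(f"{self.kind}:{text}")
        # ... dispatch over the peer-to-peer network; here we fake one reply.
        question.add_answer(Answer("peer-agent", "10.0.0.9:6346"))
        return question

matchmaker = Community("Matchmaker", discoverable=True)
for answer in matchmaker.ask("discover").answers():
    print(answer.peer, answer.payload)
```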

A2A Discovery

• When a Community object is created, it can be marked as a "Discoverable" Community.
• Discoverable Communities automatically create a discovery Question object that periodically attempts to discover other agents supporting the same Community type.
• Discoverable Communities create an Auto-Answer that watches incoming Questions to see if they are discovery queries; if so, it automatically replies with the agent's location.
• Discovery replies are maintained by the Communities as a list of known community peers, and are used by the A2A state-management process to ensure adequate connectivity to each peer community.

Gnutella Query Integration

Standard Gnutella packets used by A2A:
• Common descriptor header: MsgID, TTL, Hops
• Pong: header plus the service's address, the service's port, number of files, number of KB
• Query: header plus the Min Speed filter and the Question (search string)
• QueryHit: header plus the service's address, the service's port, number of hits, speed offered, and per hit: filename, file size, index

Gnutella Query Integration

Example: an A2A discovery query carried in a standard Gnutella Query/QueryHit pair.
• Query: MsgID, TTL, Hops; Min Speed filter = 10,850,000; Question = "Matchmaker:discover"
  – Community identifier: Matchmaker
  – Encoded identifier: 10,850,000
  – Agent query: discover
• QueryHit: MsgID, TTL, Hops, service's address, service's port, number of hits, speed offered, and per hit: filename, file size, index
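A sketch of the query-modification step: the community identifier is prefixed to the query text and also folded into the Min Speed field so ordinary Gnutella servents can cheaply ignore the packet. The hash-based encoding below is an assumption; the slides do not specify how "Matchmaker" maps to 10,850,000.

```python
import zlib

def encode_community(name: str) -> int:
    # Assumed encoding: fold the community name into a 31-bit value so it can
    # ride in the Query packet's Min Speed field (the real scheme is unspecified).
    return zlib.crc32(name.encode("utf-8")) & 0x7FFFFFFF

def build_a2a_query(community: str, agent_query: str) -> tuple[int, str]:
    min_speed = encode_community(community)       # quick check for A2A peers
    question = f"{community}:{agent_query}"       # absolute check on the text part
    return min_speed, question

print(build_a2a_query("Matchmaker", "discover"))
```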

Agent Name Service Clients on A2A/P2P

• A Discoverable Task automatically forms communities with other retsina:AgentNameService peers.
• register agent-name commands create Auto-Answers for future lookup agent-name and listall queries.
• unregister agent-name commands remove the local Auto-Answers for the corresponding lookup and listall queries.
• Agents can look up other agents without the use of an ANS server.
• Agents can cache previously found lookup information and facilitate lookups by other peers (a small caching sketch follows).
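A sketch of how register/unregister might map to Auto-Answers for later lookup and listall queries; names beyond those in the slides are assumptions:

```python
class AnsCommunityPeer:
    """Serves lookup/listall queries for names this agent has registered."""
    def __init__(self):
        self.auto_answers = {}                      # agent name -> address

    def register(self, agent, address):
        self.auto_answers[agent] = address          # creates the Auto-Answer

    def unregister(self, agent):
        self.auto_answers.pop(agent, None)          # removes the Auto-Answer

    def on_question(self, text):
        # Answer "lookup <name>" and "listall" queries from local Auto-Answers.
        if text == "listall":
            return dict(self.auto_answers)
        if text.startswith("lookup "):
            return self.auto_answers.get(text.split(" ", 1)[1])
        return None

peer = AnsCommunityPeer()
peer.register("agent-A", "10.0.0.7:7777")
print(peer.on_question("lookup agent-A"), peer.on_question("listall"))
```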

A2A Functionality

Agent Point-of-View

• Create a Community object for the type of communication/interest desired, e.g. RETSINA, FIPA, Agents, Auctions, Matchmaking, etc.
• Client/Consumer mode
  – Ask a Question
  – Get Answers and process the responses
• Service/Producer mode
  – Receive a Question
  – Add responses and dispatch the reply
• Handle replies
  – Auto-discover community peers from replies
  – Filter for appropriateness
  – Queue response messages
  – Encrypt/decrypt replies
• Provide callback/event-driven capabilities for Answer processing

Agent-to-Agent Support Library

• Initiate the steady state of connectivity if it has not already been established
  – Allow incoming connections
  – Maintain a multicast connection for local traffic
  – Collect peer candidates from Host Caches
  – Make outgoing connections to the most promising candidates and track their usefulness
  – Maintain defined levels of peer connectivity
• Handle discovery and lookup queries
  – Begin discovery for community peers
  – Filter for task-related messages
  – Queue messages as needed
  – Encrypt/decrypt messages
  – Provide synchronous/asynchronous and callback/event-driven facilities for query processing

Agent-to-Agent Library Layers

A2A Library layers, top to bottom:
• API and management for asynchronous, peer-to-peer, community-oriented queries and responses
• Community-of-interest management & discovery
• State management
• Connection management
• Peer-to-peer interface: Gnutella
• Network I/O layer: TCP/IP sockets, UDP multicast

New Gnutella News

• The base Gnutella protocols have been updated to improve scaling and reduce extraneous traffic
• Some clients only attach to others that support the latest protocol revisions
• Like A2A peers, the new UltraPeers form a more permanent and better-managed backbone between realms of dynamic client peers, and can query peers for information about the content/services they offer (but inter-UltraPeer routing techniques still need to be addressed)

New Opportunities for A2A

• Open-source Gnutella protocol libraries are now available and publicly maintained.
• Queries and their responses can now contain both a standard "text" part and an enhanced XML (or other) part that further describes the request/content with meta-information (possibly using SOAP or DAML-S).
• UltraPeers can filter the traffic sent to a leaf node based on the leaf's ability to handle the request. This could be used to implement content- (or service-) based routing using the text part of queries, with the actual request carried in the XML part, eliminating the need for a separate "service discovery" process.
