The following is an excerpt from a draft chapter of a new enterprise architecture textbook that is currently under development, entitled "Enterprise Architecture: Principles and Practice," by Brian Cameron and Sandeep Purao.

Major Network Components
Routers & Switches
A router connects two or more networks and sends information between them. Routers
contain information on many possible routes through a network and determine the best
route for each data packet to take. Routers operate by reading incoming data packets
and examining their source and destination routing addresses.
Switches are network devices that provide connectivity between multiple
servers and storage devices. Switches allow multiple devices to communicate
simultaneously with no reduction in transmission speed, and they provide scalable
bandwidth. A switch provides a pair of network devices with a fast, segregated
connection that ensures that the communication between the devices does not enter
other parts of the network.
Firewalls
A firewall is a device that attempts to prevent unauthorized electronic access to a network. A firewall examines data packets at the network address level. There are
three general types of firewalls: stateful inspection proxies, packet screening, and proxy
servers.
• Packet-screening firewalls examine data packets for network address
information. This type of firewall permits and restricts access to and from
specific Internet sites.
• Proxy servers examine data packets for their destination and source address
information as well as for information stored in the data area of the data packet.
Because the proxy server examines the data area, individual programs can be
permitted or restricted.
• Stateful inspection proxies detect malicious additions to network signals by
monitoring the signals and ensuring that the signals are legitimate.
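To make the packet-screening idea concrete, the following is a minimal Python sketch, not taken from the text, in which hypothetical rules permit or deny a packet based only on its source network address and destination port, with a default deny when no rule matches. The addresses and ports shown are illustrative documentation values.

    import ipaddress

    # Illustrative packet-screening rules: each rule permits or denies traffic
    # based only on network address information (these values are made up).
    RULES = [
        ("deny",  ipaddress.ip_network("203.0.113.0/24"), None),  # block a specific external site
        ("allow", ipaddress.ip_network("0.0.0.0/0"), 80),         # allow web traffic
        ("allow", ipaddress.ip_network("0.0.0.0/0"), 443),
    ]

    def screen_packet(source_ip: str, dest_port: int) -> bool:
        addr = ipaddress.ip_address(source_ip)
        for action, network, port in RULES:
            if addr in network and (port is None or port == dest_port):
                return action == "allow"
        return False   # default deny when no rule matches

    print(screen_packet("203.0.113.7", 80))    # False: source network is blocked
    print(screen_packet("198.51.100.9", 443))  # True: allowed web traffic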
Storage Area Networks (SANs)
SANs are made up of servers and stand-alone storage devices connected by a
dedicated network. SAN storage devices do not contain any server functionality, and
they do not implement a file system. The hosts themselves implement and manage the
file system. Any server on a network has access to any storage device on the network.
This allows for independent storage operations and scalability.
Servers
A server is typically defined as a computer system that has been designated to run a
specific application or set of applications. Servers that run only one application are
commonly named for that application. For example, an email server runs only the
enterprise email application. Server applications can be partitioned among several
servers to distribute the workload. Some types of servers are as follows:
• Application Servers
• FTP Servers
• Mail Servers
• Web Servers
• Database Servers
Mainframes
A mainframe is defined as a high-performance computer that is typically utilized for
large-scale computing where high performance, security, and availability are required.
Traditionally, mainframes have been associated with centralized computing; however,
today mainframes have become more multi-purpose. A mainframe can handle such
tasks as multiple workload processing, utilization tracking, network analysis, control
centralization, and resource allocation.
Clients
Clients are applications or systems that access a remote service on a server via a
network. Types of clients include:
Fat clients have data processing capabilities and do not have to rely on the server. The
personal computer is the most common fat client today. Fat clients generally have high
performance.
Thin clients rely on the host server for most processing functions. They are used to graphically display information provided by applications running on the host
computer.
Hybrid clients are a mixture of both fat and thin clients. They typically do processing
functions locally and rely on a central server for data storage. Hybrid clients offer
features from both thin clients and fat clients, making them highly manageable, while
possessing higher performance than thin clients.
Network Topologies
The term “topology” refers to the physical layout of a network. It also refers to how the
network nodes communicate and how the nodes are connected. There are three types
of topologies: signal, logical, and physical.
• Signal topology is the mapping of the route that the signals take when traveling between the network nodes.
• Logical topology is the mapping of the path that data takes when moving between the network nodes.
• Physical topology is the mapping of the network nodes and their physical connections. This involves the locations of nodes, the layout of wiring, and the interconnections between the network nodes.
Some examples of physical topologies are as follows:
Point-to-Point
A Point-to-Point topology is a dedicated connection between two network nodes, such as one server and one storage device. This topology provides nearly guaranteed communications between the
two points of the network.
Star
All network cables in the Star topology are connected to a single central switch or hub.
With this topology, communication across the network is accomplished by passing data
through the central switch or hub. The central switch or hub is a potential point of
failure, and network communications would cease if the switch or hub stopped working.
Advantages of the Star topology include:
• The failure of a single computer or cable does not bring down the entire network.
• The centralized networking equipment can reduce costs in the long run by making network management much easier.
• A hub that accommodates multiple cable types allows several cable types to be used in the same network.
Disadvantages of the Star topology include:
• Failure of the central hub causes the whole network to fail.
• It is slightly more expensive than a Bus topology.
Ring
This topology connects each node on the network to two other nodes, with the
formation of the ring occurring by the connection of the first and last network nodes.
Network data is transmitted from one network node to the next, flowing in one direction
through the circular ring.
Advantages of the Ring topology include:
• One computer cannot monopolize the network.
• The network continues to function after capacity is exceeded, although at a slower speed.
Disadvantages of the Ring topology include:
• Failure of one computer can affect the whole network.
• It is difficult to troubleshoot.
• Adding and removing computers disrupts the network.
Bus
A central cable, known as a bus, is utilized in the Bus topology to connect all network
devices. There are two kinds of Bus topologies, Linear and Distributed.
In a Linear Bus topology, a central cable or bus is used to connect all network nodes.
This bus has exactly two endpoints and transmits all data between nodes in the
network. The transmitted data is received in a virtually simultaneous fashion by all
network nodes. A Distributed Bus topology is similar to a Linear Bus topology except
that the bus has branches added to the main bus to create multiple endpoints. Other
than having more than two endpoints, the Distributed Bus topology operates in the
same manner as does the Linear Bus topology.
Advantages of the Bus topology include:
• A bus is easy to use and understand and makes for an inexpensive, simple network.
• It is easy to extend the network by adding cable with a repeater that boosts the signal and allows it to travel a longer distance.
Disadvantages of the Bus topology include:
• A bus becomes slow under heavy network traffic with many computers, because the computers do not coordinate with one another to reserve times to transmit.
• It is difficult to troubleshoot a bus, because a cable break or loose connector causes reflections and brings down the whole network.
Mesh
In a Mesh topology, the network nodes are connected together with many redundant
network interconnections. In a pure Mesh topology, every network node has a
connection to every other network node. There are two types of Mesh topologies: Full
and Partial.
Figure – Full Mesh Topology
In a Full Mesh topology, all of the network nodes are connected to each other by a
point-to-point link, allowing for the simultaneous transmission of data from any network
node to the other network nodes. In a Partial Mesh topology, some of the network
nodes are connected to multiple nodes via a point-to-point link. Since a connection is
not made between every network node, the Partial Mesh design offers the benefits of
partial redundancy but with less complexity and expense when compared to the Full
Mesh topology.
Figure – Partial Mesh Topology
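One way to see the cost of full redundancy is to count links. The short Python sketch below, offered purely as an illustration, compares the n(n - 1)/2 point-to-point links required by a Full Mesh with the n links required by a Star of the same size.

    def full_mesh_links(n: int) -> int:
        # Every node connects to every other node: n * (n - 1) / 2 point-to-point links.
        return n * (n - 1) // 2

    def star_links(n: int) -> int:
        # Every node connects only to the central switch or hub.
        return n

    for n in (5, 10, 50):
        print(n, "nodes:", full_mesh_links(n), "full-mesh links vs.", star_links(n), "star links")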
Tree and Hypertree
The Tree topology is a hybrid of the Star and Bus topologies. The core design is similar
to a bus. The network nodes are connected over a central cable in sequence. The
branches of a Tree network may contain workstations connected in a star-like
configuration. As with a Bus topology, network signals from a transmitting node are
received by all other nodes and travel the length of the central cable. The Hypertree
topology is a combination of two or more Tree topologies to make one new Hypertree
topology.
Figure – Hypertree Topology
Hybrid
The Hybrid network topology connects two or more different network topologies
together. Hybrid topologies include Star-Bus, Hierarchical Star, Star-wired Ring, and
Hybrid Mesh.
Integration Opportunities
Enterprises must understand what technologies and resources they have in their
environment, and decide whether these products and technologies will meet their future
needs. Understanding how various networking components can be interdependent is an
essential prerequisite to performing this analysis. As the line between IT and the business blurs even further, and as IT practitioners in organizations are increasingly held accountable for business performance, the integrated network becomes an inextricable part of the chain that ensures business performance requirements are met.
The technologies that will have the greatest impact on enterprise communications will be
delivered through the convergence of voice, video and data technologies. Although the
principal business impact will come from new and innovative applications (and the
integration of these applications), an infrastructure that is unprepared for the demands of
these new applications will quickly negate any possible advantage. Understanding the
emerging future trends in this rapidly changing area will enable an enterprise to build an
infrastructure that is ready to support emerging applications.
Future Trends
Wireless Convergence
Increasingly, mobile devices are considered part of the corporate network strategy as
opposed to a silo, as has been the case. Wireless local area networks (WLANs) are
starting to focus more on running voice and video over the wireless medium. Some
cases of the all-wireless office are also starting to emerge. A WLAN has long been
thought of as a separate and distinct network architecture. To achieve the promise of
wireless in the enterprise, WLANs will need to become a more integral part of the entire
wired infrastructure. To navigate this new "minefield," enterprises will need to
understand the standards development process, the implementation hurdles, and the
growing number of potential technologies and vendors for wireless products and
services.
Networked Storage
As the value of information continues to increase, the significance of data storage
technologies continues to grow. Information is a strategic resource, and the efficient
storage, organization, and retrieval of data is crucial. The number of e-mail messages
alone has grown from 9.7 billion per day in 2000 to more than 35 billion messages per
day in 2007. Many e-mail messages contain a variety of media and file types, forcing a
focus on information sharing rather than server-centric data storage. Electronic
materials must be shared via storage networking environments to meet the current
enterprise-wide information needs. In addition, the increased storage and information
management demands related to the Sarbanes-Oxley Act, the Health Insurance Portability and Accountability Act (HIPAA), and other government regulations have created an enormous demand for enterprise storage and information management solutions.
The overall 50% annual increase in data creation is coupled with enterprises' increasing
interest in retaining digital, rather than physical, copies of materials, and storage needs
are growing exponentially over time. With the potential competitive advantage gained
from the access to and analysis of all of an organization’s data, information availability
becomes critically important. For example, the retail industry can lose over $1 million
per hour of downtime, while brokerages stand to lose more than $6.5 million per hour.
To meet growing demand for information storage and retrieval, new enterprise storage
technologies have been developed that address specific data storage and management
needs:
• Direct Attached Storage (DAS) systems attach storage drives directly to servers.
• Network Attached Storage (NAS) environments are made up of specialized servers dedicated to storage.
• Storage Area Networks (SANs) are highly scalable and allow hosts to implement their own storage file systems.
• Content Addressable Storage (CAS) systems are a mechanism for storing and retrieving information based on content rather than location (see the sketch after this list).
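To illustrate the content-addressing idea behind CAS, the following minimal Python sketch (an illustration, not a description of any particular CAS product) derives each object's address from a hash of its content and retrieves the object by that address.

    import hashlib

    store = {}

    def put(content: bytes) -> str:
        # The "address" is derived from the content itself, not from a location.
        address = hashlib.sha256(content).hexdigest()
        store[address] = content
        return address

    def get(address: str) -> bytes:
        return store[address]

    addr = put(b"quarterly report, final version")
    print(addr[:16], "...")   # content-derived address (truncated for display)
    print(get(addr))          # retrieval by content address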
Because the storage needs of all organizations are growing exponentially today, huge
investments are made each year in storage-related hardware, software, and skilled
employees to design and navigate through these complex enterprise solutions.
Virtualization
Virtualization refers to the abstraction of technology resources. The
physical characteristics of resources are hidden from the applications and end-users
that utilize them. Virtualization can make a single physical storage device or server
appear to operate as multiple resources. Virtualization technology can also make multiple
storage devices or servers appear to operate as a single resource.
There are three areas of IT where virtualization is most prevalent: server virtualization,
network virtualization, and storage virtualization:
• Server virtualization masks server resources (such as processors and operating systems) from the applications and users. This masking increases resource sharing and utilization and eliminates the need to understand and manage complicated server resources.
• Network virtualization combines network resources by splitting up the available network bandwidth into channels. Each channel is independent from the other channels, and each can be assigned to a particular network device in real time. Network virtualization hides the complexity of the network by dividing it into manageable components.
• Storage virtualization creates a single, centrally managed virtual storage device by combining the physical storage from multiple network storage devices. Storage virtualization technologies have become economical and efficient and can be used in organizations of all sizes.
Grid computing is a form of virtualization in which several computers run simultaneously to form a supercomputer of sorts. These pooled systems can perform computationally intensive operations, and virtualization enables this pooling of computing capabilities.
Cloud Computing
Cloud computing utilizes Internet Protocol (IP) technology to create high capability
computing environments that are highly scalable. The cloud symbol is typically used by
network architects to depict the Internet or other IP networks. Clients in cloud computing
environments are concerned with the services provided by the environment and not the
details of the underlying technologies. Often the computing resources are owned and
maintained by third party providers in centralized data centers.
Green Computing
Green computing explores the most efficient ways to utilize computing resources. Green
computing encompasses many aspects of computing, from the production of more
environmentally friendly products to better power management and the use of
virtualization technologies. Government certifications for green data centers are also
being developed. Criteria such as the use of alternative energy technologies, recycling,
and other green approaches are being considered for these certifications.
Factors Affecting Network Performance
Channel Availability
Since the enterprise network is probably the most fundamental IT service, high network
availability is a core concern. A high-availability solution is a network that is available to service requests whenever called upon. Network designers need to achieve as close to 100%
uptime as possible. While 100% uptime is virtually impossible, most networks strive for
99.999% uptime. To calculate the expected percent of uptime per year we can use the
following formula:
% of uptime per year = [(8760 – expected number of hours down per year) / 8760] × 100
So if four hours of downtime per month is acceptable to your organization, then 48 hours
of downtime per year is acceptable. Fitting that into the formula, 48 hours per year
equates to 99.452% uptime per year. In order to obtain 99.999% uptime per year, a network can tolerate only about 26 seconds of downtime per month, or roughly 5.26 minutes of total downtime per year. In order to design for high network availability, the
fault tolerance of different components of the network infrastructure must be understood.
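The following short Python sketch simply re-checks the arithmetic above; the function name and constants are illustrative.

    HOURS_PER_YEAR = 8760

    def uptime_percent(hours_down_per_year: float) -> float:
        # The same formula as above, expressed as a percentage.
        return (HOURS_PER_YEAR - hours_down_per_year) / HOURS_PER_YEAR * 100

    print(round(uptime_percent(48), 3))                    # 48 hours down per year -> 99.452
    print(round(HOURS_PER_YEAR * (1 - 0.99999) * 60, 2))   # "five nines" allows about 5.26 minutes down per year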
Fault Tolerance
Fault-tolerant configurations help keep systems running when an unexpected problem causes a device to fail.
The following are some examples of failures and solutions to those failures:
• Power Failure – Have computers and network devices running on a UPS.
• Power Surge – Utilize surge protectors.
• Data Loss – Run scheduled backups and mirror data to an alternate location.
• Device/Computer Failure – Have a second device available, along with replacement components.
• Overload – Set up alternate computers or network devices that can be used for load balancing or as alternate processors.
• Viruses – Maintain up-to-date virus definitions.
Data Storage and Retrieval
Data storage and retrieval is the process of gathering and cataloging data so that it can
be found and utilized when needed. Due to the exponential growth of data in recent
times, organizations of all sizes are facing issues related to storage growth, data
consolidation, and backup and recovery. These challenges have created the need for
the capabilities provided by storage networking. There are two main technologies
utilized for storage networking: storage area networks (SANs) and network-attached
storage (NAS).
Network-Attached Storage (NAS) devices are specialized servers that are dedicated to
providing storage resources. The devices plug into the local area network (LAN) where
servers and clients can access NAS storage resources. Any server or client on the
network can directly access the storage resources. Storage Area Networks (SANs) are
made up of servers and stand-alone storage devices connected by a dedicated
network. SAN storage devices do not have any server functionality and do not
implement a file system. Hosts implement and manage the file system. SANs are
typically used by larger organizations with complex data storage needs.
Bandwidth and Latency Considerations
Bandwidth refers to the data transfer rate supported by a network and is a major factor in
network performance. The greater the bandwidth capacity, the greater the network performance. However, network bandwidth is not the only factor that determines the
performance of a network. Latency is another key element of network performance.
Latency refers to several types of delays commonly incurred in processing of network
data. High latency network connections typically experience significant delay times while
low latency connections have relatively small delays.
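As a rough illustration of how latency and bandwidth interact, the following Python sketch uses a simplified model (a fixed latency delay plus serialization time, ignoring protocol overhead and congestion) to compare the same transfer over low- and high-latency links. The sizes and rates are invented for illustration.

    def transfer_time_seconds(size_mb: float, bandwidth_mbps: float, latency_ms: float) -> float:
        # Time to push the bits at the given bandwidth, plus a fixed latency delay.
        serialization = (size_mb * 8) / bandwidth_mbps
        return latency_ms / 1000 + serialization

    # The same 10 MB transfer over a 100 Mbps link, with low versus high latency:
    print(round(transfer_time_seconds(10, 100, 5), 3))     # about 0.805 seconds
    print(round(transfer_time_seconds(10, 100, 250), 3))   # about 1.05 seconds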
Theoretically, the maximum bandwidth of a network connection is dependent on the
technology utilized. Latencies affect the actual network bandwidth obtained and this
bandwidth typically varies over time. Network bottlenecks that decrease effective
network bandwidth are created by excessive latency. The source of the latency
determines whether the delay is temporary or persistent. Two common types of latency are router latency and architecture- and peer-induced latency.
Router latency exists when the routers in the network are the cause of network latency. Packets are typically routed through a series of routers in an IP network. The source and destination network addresses are stored in the header of each packet. Many routers may touch a data packet on its trip from source to destination. Each router in the journey receives the packet and reads the destination address from the header. The router may change the header depending on current congestion levels on the network and then sends the packet on to the next router. The efficiency and processing speeds of the routers in the network will have an impact on the latency of the
overall network.
Routers calculate the route options for a packet en route to a destination on a network. Different networks may have different sets of routing rules. Larger networks, which often contain the most sophisticated networking hardware, typically provide the most efficient routes from point to point. Smaller networks with less sophisticated equipment may take longer paths utilizing more routers, causing more network latency. This is known as architecture- and peer-induced latency.
Inter-Organizational Infrastructure
TCP/IP and Internetworking
Internetworking and Extranets
The term internetworking refers to the connection of two or more computer networks or
segments to form an internetwork (or internet). Devices such as switches and routers
are used to send data across the internetwork. These devices function at layer 3 of the OSI Basic Reference Model (the Network layer) (see the following figure). Within the network layer, data is passed in the form of packets and is processed and routed by layer 3 protocols, depending on the protocol suite utilized. IPv4, IPv6, ARP, ICMP, RIP, OSPF, BGP,
IGMP, and IS-IS are some examples of protocols that could be used for processing.
Figure – The OSI Basic Reference Model (source: http://wiki.go6.net/images/2/2b/Osi-model.png)
Intranets utilize the Internet Protocol (IP) to create a secure, private network that is used to share part of the enterprise's operations and information with employees. Intranets are very similar to internetworks in almost all aspects except that they are kept private. Extranets extend the organization's intranet to include stakeholders outside of the enterprise. Extranets securely share portions of a company's operations or information with external stakeholders such as vendors, suppliers, customers, and other partners.
Protocols
In relation to computing infrastructure, a protocol is the set of standards that governs
transmissions between two endpoints. Protocols are developed to define
communications, and each protocol has its own unique set of rules. Interpretation of a
protocol can occur at the hardware level, software level, or a combination of the two.
When designing software or hardware, engineers must follow the defined protocol if
they intend to successfully connect to other networked devices/programs. Some
examples of properties that protocols define may include but are not limited to:
• Negotiation
• Handshaking
• Message format
• Error handling
• Termination procedures
Several protocols are defined for each layer of the 7 Layer OSI Model. At the lower layers, protocols govern communication between hardware devices, while at the higher layers protocols are defined for the application layer of computing. In enterprise application integration, we are
typically more concerned with the higher level protocols defined in the application layer.
Common protocols in the application layer include DHCP, DNS, FTP, HTTP, IMAP,
LDAP, SIP, SMTP, and SOAP.
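As a concrete illustration of an application-layer protocol, the following Python sketch speaks HTTP/1.1 directly over a TCP socket. The host example.com and the exact request text are illustrative, and the example assumes outbound access on port 80.

    import socket

    # The request below is plain text whose format (request line, headers, blank line)
    # is dictated entirely by the HTTP protocol specification.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    # The first line of the reply is the protocol's status line, e.g. "HTTP/1.1 200 OK".
    print(response.split(b"\r\n", 1)[0].decode())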
TCP/IP
TCP/IP is the set of network communications protocols utilized over the Internet. It is
named for the first two protocols that were defined in the standard: the Internet
Protocol (IP) and the Transmission Control Protocol (TCP). TCP/IP spans several
layers of the 7 Layer OSI Model to create a link between two or more networked
devices. TCP/IP was created in the 1970s by DARPA to lay the foundation for a wide
area network later to become known as the Internet.
The original TCP/IP network model was developed for the DoD and consisted of three
layers, but the model that is now utilized generally is based around four layers: the data
link layer, the network layer, the transport layer, and the application layer.
The TCP/IP model does not distinguish the application, presentation, and session
layers of the OSI model separately and contains only the application layer. Within each of these layers, the TCP/IP model executes various procedures with regard to the task to
accomplish the overall communication link. The following table provides a comparison
of the OSI and TCP/IP network models.
Model Architecture Comparison

OSI Model                  TCP/IP Model
7. Application layer       4. Application layer - Telnet, FTP, SMTP, DNS, RIP, SNMP
6. Presentation layer
5. Session layer
4. Transport layer         3. Transport layer - TCP, UDP
3. Network layer           2. Network layer (Internet layer) - IP, IGMP, ICMP, ARP
2. Data Link layer         1. Link layer (Network Interface layer) - Ethernet, Token Ring, Frame Relay, ATM
1. Physical layer
Routing
The process of selecting routes to send traffic in a network is known as routing. In
packet switching networks, such as the Internet, routing directs the transit of data
packets from their source to their destination, primarily through hardware devices known as
routers. This process is directed by information contained in the routing tables on each
router.
Routing tables contain rules that determine where packets will be sent. This information
is often displayed in a tabular format. These tables contain information needed to
determine the best path forward for a packet to reach its destination. IP packets contain source and destination information. When a router receives a data packet, it
matches the information in the packet to the entry in the routing table that provides the
best route to the destination. The table also provides instructions for transmitting the
packet to the next router or “hop” as it continues its journey across the network.
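The following minimal Python sketch illustrates the routing-table idea with a longest-prefix-match lookup. The prefixes and next-hop addresses are invented for illustration, and real routers use far more efficient data structures and additional metrics.

    import ipaddress

    # Illustrative routing table: each entry maps a destination prefix to a next hop.
    routing_table = [
        (ipaddress.ip_network("10.0.0.0/8"), "192.168.1.1"),
        (ipaddress.ip_network("10.20.0.0/16"), "192.168.1.2"),
        (ipaddress.ip_network("0.0.0.0/0"), "192.168.1.254"),  # default route
    ]

    def next_hop(destination: str) -> str:
        addr = ipaddress.ip_address(destination)
        # Longest-prefix match: among all matching entries, prefer the most specific one.
        matches = [(net, hop) for net, hop in routing_table if addr in net]
        best_net, best_hop = max(matches, key=lambda entry: entry[0].prefixlen)
        return best_hop

    print(next_hop("10.20.5.9"))   # 192.168.1.2 (the more specific /16 entry wins)
    print(next_hop("8.8.8.8"))     # 192.168.1.254 (falls through to the default route)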
Integration Problems and Opportunities for Inter-Organizational Infrastructure
Complexity
The implementation of internetworks is challenging, and issues often arise in the areas
of network management, reliability, connectivity, and flexibility. These areas are
important components of an effective and efficient network. Often, many challenges arise when attempting to connect disparate technologies. For example, different systems or sites may use different types of communication protocols.
Organizations today rely heavily on data communications, and network reliability is
crucial to operations. Most large internetworks allow for redundancy in the network to
enable communications when problems occur. A robust network management
capability is needed that provides centralized troubleshooting and support capabilities for
the internetwork. Performance, security, and other issues must be addressed for the
smooth operation of the network. We live in a dynamic world, and internetworks must
be agile enough to change as business needs change.
Security
Security is defined as the condition of being protected against loss or danger. Security
is generally thought to be similar in concept to safety. The difference between these two
terms is that security has an added emphasis on protection from outside threats. In
terms of application integration, there are five main areas of security for consideration.
Computing security – focuses on the secure operation of
computers. The definition of secure operations can vary by organization and
application and is typically defined by a written security policy. While the content of
such policies varies, all typically address issues related to electronic information that is
stored and processed as well as confidentiality issues. Some good computing practices
include keeping unauthorized users from using your computer, changing passwords
regularly, and keeping business email separate from personal accounts.
Data security – refers to the process of ensuring that access to data is controlled and
that the data is kept safe from tampering or corruption. Data security is enforced with
profiles that restrict users from changing data. The subject of data security is very broad
but generally defines the actions taken to ensure the quality of data remains high with
few or no errors.
Application security - refers to the measures taken to prevent breaches in application
security that can occur through errors in the design, development, or deployment of an
application. Application security is generally addressed in the design of the application
and is used to prevent security holes from appearing within the system. For example, if an error occurs in a program and dumps someone’s bank account number into a stack trace, that would be a serious security concern for the
organization responsible.
Information security – refers to protecting information systems and the information they
contain from unauthorized use, modification, disruption, or destruction. Information
security is almost identical to computing security but focuses more on the information involved than on choices made by the end user. Good practices involve enforcing strict
business rules to prevent the alteration of data when the action is not desired.
Network security – refers to the protection of network resources from unauthorized use.
This unauthorized access could come from internal as well as external sources.
Internetwork security is essential. When most people consider network security, they
often think of security as protecting the network from outside attacks. However, most security breaches come from inside the network, and protection from internal security violations is crucial. There are a variety of strategies that organizations take in
order to secure data across the internetwork/extranet. Some of the more popular
include:
Isolation - this strategy isolates systems that require different types of access from each
other. Firewalls offer the isolation of network devices from outside or unauthorized
users. Firewalls can be configured based on requirements to limit access to certain
networks/ports. Another form of isolation is dependent on the architecture design. By
creating sub-networks or independent entities, network engineers can isolate all or any
part of a network that should not be accessible by the general user. By designing high
risk systems to only communicate with a small independent network, engineers can
limit the chance of unauthorized access or a failure scenario.
Strong authentication - a form of two-factor authentication should be implemented when
feasible. Two-factor authentication is the practice of utilizing two distinct methods for
determining whether or not the user is allowed to access the system. By enforcing a
strong form of authentication, organizations can reduce the risk of an unauthorized user
gaining access to a specific system. Authentication methods are grouped into three
categories with respect to human factors. These three categories are as follows:
• Something the user knows (e.g., a password)
• Something the user does or is, such as voice patterns, a DNA sequence, a fingerprint, or a retinal pattern
• Something the user has (e.g., an ID card)
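A minimal sketch of two-factor verification is shown below, assuming a salted password hash as the "something the user knows" factor and a one-time code delivered to a device as the "something the user has" factor. All names and the storage scheme are illustrative.

    import hashlib
    import hmac
    import secrets

    def hash_password(password: str, salt: bytes) -> bytes:
        # "Something the user knows", stored only as a salted hash.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    salt = secrets.token_bytes(16)
    stored_hash = hash_password("correct horse battery staple", salt)
    issued_code = secrets.token_hex(3)   # "something the user has": a one-time code sent to a device

    def authenticate(password: str, code: str) -> bool:
        knows = hmac.compare_digest(hash_password(password, salt), stored_hash)
        has = hmac.compare_digest(code, issued_code)
        return knows and has   # both factors must pass

    print(authenticate("correct horse battery staple", issued_code))   # True
    print(authenticate("correct horse battery staple", "000000"))      # False: second factor fails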
Recently the reliability of two-factor security has come into question, largely because attackers who want a user's information can deploy a variety of false interfaces to collect it, such as man-in-the-middle attacks and Trojan attacks. To avoid these occurrences, organizations must take extra measures to make it clear to the user that they are on the organization's page and not a spoofed page.
Granular access controls – These types of controls are important elements in the
security of complex systems. Many organizations today need to interact with a number
of outside business partners and must implement the principle of "least privilege".
Digital certificates are one of the most common ways for organizations to deal with
various levels of security/permissions. Digital certificates are usually configured in a public key environment in which the user holds a personal private key that corresponds to a public key and is ultimately used by the authentication routine. This method allows numerous privacy levels to function securely within the same system. At an organizational level, the security authority at the company certifies the user in a specific domain, and after that point the user is restricted to the actions associated with the profile under which they were approved. VeriSign, a trusted
certificate authority, eventually created classes of certification that can be applied to
users of a particular system.
The classes are as follows:
• Class 1 – intended for an individual’s email
• Class 2 – requires identity proof for organizations
• Class 3 – requires identity verification for software and servers using a certificate authority (CA)
• Class 4 – for online business transactions
• Class 5 – for governmental security
Encryption - Extranets require sensitive corporate data to be shared via the Internet.
Virtual private network (VPN) technology offers strong encryption for data transmitted
over unsecured networks such as the Internet. The basic task of encryption is to
transform information into an unreadable stream using a mathematical algorithm. Encryption approaches vary greatly by application, but the common trend is toward strong encryption schemes for data being passed out of a private network.
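As one illustration of symmetric encryption (not the mechanism VPNs themselves use, which rely on protocols such as IPsec or TLS), the following Python sketch assumes the third-party cryptography package is installed and encrypts a short message with a shared key.

    from cryptography.fernet import Fernet   # third-party "cryptography" package

    key = Fernet.generate_key()   # shared secret; key management is the hard part in practice
    cipher = Fernet(key)

    token = cipher.encrypt(b"quarterly sales figures")
    print(token)                    # unreadable without the key
    print(cipher.decrypt(token))    # b'quarterly sales figures'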
Future Trends Affecting Inter-Organizational Infrastructure
IPv6
Internet Protocol version 6 (IPv6) is the successor to IPv4, the current protocol for use on
the Internet. The main benefit of IPv6 is the flexibility in assigning addresses that the
much larger addressing space enables. Routing is more streamlined in comparison to
traditional IPv4 transport, but requires a bit more bandwidth, which can be a problem in
areas where bandwidth is extremely limited. Some of the major advantages of IPv6
include:
Larger Address Space - IPv6 relieves the threat of address space exhaustion, which is rapidly approaching in the current IPv4 protocol. More address space also
means that network administration will become much easier due to the
elimination of complex subnetting schemes.
More Efficient Routing Infrastructure - IPv6 also offers an auto-configuration
feature that allows a device to send a request once connected to receive an
address.
Better Security - As a result of decreased steps from addressing to browsing,
the overall security of the protocol is much better. Addressing also makes it
possible to better track hackers, since all addresses are unique. Based on a
variety of methods, cyber-crime can be better enforced once all devices are
converted to the IPv6 protocol.
Better Quality of Service (QoS) - The overall quality of service will increase due
to the simplified routing procedures and more direct routing schemes. The
protocol will likely reduce bandwidth usage, but this must be verified once a
large scale adoption of the system is in place.
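Returning to the larger address space noted above, the short Python sketch below uses the standard library ipaddress module to compare the sizes of the IPv4 and IPv6 address spaces; the prefix 2001:db8::/64 is a documentation value used purely for illustration.

    import ipaddress

    # Total addresses in the IPv4 space versus the IPv6 space.
    print(ipaddress.ip_network("0.0.0.0/0").num_addresses)      # 4294967296 (2**32)
    print(ipaddress.ip_network("::/0").num_addresses)           # 2**128

    # A single IPv6 /64 subnet, the size commonly assigned to one network segment,
    # already dwarfs the entire IPv4 address space.
    print(ipaddress.ip_network("2001:db8::/64").num_addresses)  # 2**64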
Event-Driven Architecture
Event-Driven Architecture (EDA) is defined as a software architecture pattern that
promotes the initiation, detection, consumption, and reaction to events. Event-driven
architecture is thought to complement service-oriented architecture (SOA) because
Web services can be triggered by events. EDA is comprised of different event layers
and processing styles depending on the event being processed. By implementing this
loosely coupled but well distributed structure, organizations can better
analyze/understand the actions that drive business.
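A minimal sketch of the event-driven pattern is shown below: an in-process publish/subscribe dispatcher, with event names and payloads invented for illustration, in which producers publish events and loosely coupled consumers react to them.

    from collections import defaultdict
    from typing import Callable

    # Handlers are registered per event type; publishers never call consumers directly.
    subscribers = defaultdict(list)

    def subscribe(event_type: str, handler: Callable[[dict], None]) -> None:
        subscribers[event_type].append(handler)

    def publish(event_type: str, payload: dict) -> None:
        for handler in subscribers[event_type]:
            handler(payload)   # each loosely coupled consumer reacts independently

    subscribe("order.placed", lambda e: print("billing: invoice order", e["order_id"]))
    subscribe("order.placed", lambda e: print("warehouse: pick order", e["order_id"]))

    publish("order.placed", {"order_id": 1042})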
SaaS / Web 2.0
In a software as a service (SaaS) application delivery model, a vendor hosts an
application for use by its customers. The application is typically a web application that
is accessed over the Internet. Web 2.0 technologies can be more generally defined as
networks where users determine what types of content they want to view or publish.
Currently there has been a huge drive from companies to implement solutions that fall
into the Web 2.0 category. The main advantages of these systems include flexibility and intuitiveness for the end user. The most popular instances of Web 2.0 technologies include:
• YouTube
• Digg
• Facebook
• MySpace
• Pandora
Web 3.0
The term Web 3.0 describes different evolutions of Web applications and usage.
Examples of these evolutions include the Semantic web, the use of various artificial
intelligence technologies, the use of 3D technologies, and the utilization of content by
non-browser based applications. The future of Web 3.0 is far from certain but the
transformation of the Internet into an all-encompassing data source with integrated
artificial intelligence is underway.
Ontologies
An ontology is a domain-specific representation or model of a set of concepts and their
relationships to one another. The term has its roots in philosophy, where it describes a
systematic account of existence. Enterprise ontologies are comprised of definitions and
terms that are relevant to the business. Semantic heterogeneity is a major issue in
enterprise integration. Most of the EI solution suites on the market today focus mainly
on technical integration and do not adequately address the issue of semantic
heterogeneity. Efforts to address the semantic issue are immature at present, but once
fully addressed, the semantic aspect will provide more robustness and consistency in
enterprise integration efforts.
One such effort is the development of the Web Ontology Language (OWL) to express
ontologies in a standard, XML based language. OWL is a set of XML-based markup
languages that are designed to enable computer applications to process the content of
web pages rather than just presenting pages of information. Ontologies created with
the OWL standard describe the organization of ideas in a particular domain in a format
that can be read and understood by computer applications.
The Semantic Web refers to an evolving, mostly theoretical, extension of the World
Wide Web in which the web can respond to requests from people and computers to use web content. The Semantic Web effort is comprised of collaborative working
groups, a set of design principles, and several enabling technologies, such as the Web
Ontology Language (OWL). Several elements of the semantic web are conceptual in
nature and are yet to be developed.