Managing user experience for MBB
In recent years, mobile broadband usage and demand have been growing exponentially. With the development of the Internet and multimedia applications, the use of social networking, online gaming, video streaming and other services has increased greatly among mobile users. In addition, the introduction of high-performance smartphones and other mobile terminals is further driving the growth in demand for mobile broadband resources and services.
Call for visualized network management
In tandem with the growth of the mobile market, user satisfaction has become a top priority for operators, and measuring it has never been more important. As networks, service offerings and business models grow more complex, understanding user experience and satisfaction becomes correspondingly harder.
Gaps in traditional measurement
The traditional method of measuring user experience via KPIs is no longer adequate, as KPIs alone cannot represent the user experience across the different services available today. There are two main gaps:
Measuring E2E performance: Current KPIs measure the performance of each network element independently, overlooking failures or poor performance caused by other network elements. Actual end-to-end performance is therefore likely to be lower than what the scorecard records.
Measuring service/application performance: Traditional KPIs measure performance from the network's point of view, such as connection success rate, dropped call rate and handover success rate, but what really matters to users are questions like "Can I open a web page? How long does it take to open? What is the quality of the streaming media? How long does buffering take?" In other words, the traditional measurement method cannot capture the performance of services at the application level.
KQI provides visibility into network
In order to better visualize service quality, a new set of measuring indexes, the Key Quality Index (KQI), has been established. The KQI measures service accessibility, retainability and integrity from end to end, and aligns service-independent quality indexes with service quality indexes. Service-independent quality indexes are trigger points that do not depend on the service used, and are also known as control plane messages. Service quality indexes, also known as user plane messages, are trigger points on the application layer. To capture these application layer messages, an external or internal probe with a Deep Packet Inspection (DPI) engine is required to identify the protocols passing over the network.
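As an illustration only, the short sketch below shows how two such KQIs, HTTP accessibility and page response time, might be derived from hypothetical event records exported by a DPI probe; the record fields, event names and statuses are assumptions for the example, not any particular probe's format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProbeEvent:
    # Hypothetical event record exported by a DPI probe
    subscriber_id: str
    timestamp: float   # seconds
    plane: str         # "control" or "user"
    event_type: str    # e.g. "PDP_ACTIVATE", "HTTP_REQUEST", "HTTP_RESPONSE"
    status: str        # "success" or "failure"

def http_kqis(events: List[ProbeEvent]) -> dict:
    """Derive two illustrative KQIs from user-plane events: accessibility
    (HTTP success rate) and integrity (mean request-to-response delay)."""
    pending = {}          # outstanding request timestamps per subscriber
    delays = []
    successes = failures = 0
    for ev in sorted(events, key=lambda e: e.timestamp):
        if ev.plane != "user":
            continue      # control-plane events would feed session-setup KQIs
        if ev.event_type == "HTTP_REQUEST":
            pending[ev.subscriber_id] = ev.timestamp
        elif ev.event_type == "HTTP_RESPONSE":
            start = pending.pop(ev.subscriber_id, None)
            if ev.status == "success":
                successes += 1
                if start is not None:
                    delays.append(ev.timestamp - start)
            else:
                failures += 1
    total = successes + failures
    return {
        "http_success_rate": successes / total if total else None,
        "mean_page_response_s": sum(delays) / len(delays) if delays else None,
    }
```

Retainability KQIs, such as streaming session drops, would be derived in the same way from the corresponding user-plane events, with control-plane events used to separate access-phase failures from service-phase failures.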
Managing service quality, assuring user experience
With the increasing complexity of the network, relying on KPIs and KQIs alone is still far from enough. The best way to manage network quality is to see problems directly through the eyes of the users. Four factors decide the end-user experience:
Network – service-independent quality, including coverage, network element
performance and capacity.
Service – service-dependent quality, or quality at the application level, such as transaction throughput, HTTP setup failures and HTTP setup time.
Device – with the growing demand for and advances in smartphones, the broadband data traffic generated by a smartphone is no longer lower than that generated by a PC.
End user – the best way to understand the end-user experience is directly from the user's point of view.
As these four factors cover widely diverse focus areas, different methods of
management have to be employed.
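As a rough sketch of how these four dimensions can be handled together, the example below tags each measured session with a network, service, device and user label and breaks one quality metric down by any of them; every record and value is hypothetical.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-session records tagging one quality metric with the
# four dimensions above: network (cell), service, device and user.
sessions = [
    {"cell": "C101", "service": "http",      "device": "smartphone_A",
     "user": "u1", "throughput_kbps": 850},
    {"cell": "C101", "service": "streaming", "device": "pc_dongle",
     "user": "u2", "throughput_kbps": 1900},
    {"cell": "C202", "service": "http",      "device": "smartphone_B",
     "user": "u3", "throughput_kbps": 300},
]

def breakdown(records, dimension):
    """Average throughput per value of one experience dimension."""
    groups = defaultdict(list)
    for r in records:
        groups[r[dimension]].append(r["throughput_kbps"])
    return {key: mean(values) for key, values in groups.items()}

# The same records can be sliced by any of the four factors.
for dim in ("cell", "service", "device", "user"):
    print(dim, breakdown(sessions, dim))
```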
Preventive management
Different service models and user behaviors affect network performance, and especially resource demand, in different ways. For example, push mail services and BlackBerry phones, which send periodic heartbeats and whose email arrival times are unpredictable, create a large signaling load on the network. We need to know what the impact on the network will be when a new service or a new marketing strategy is introduced.
Most network performance issues stem from increasing traffic and rising end-user demand and expectations. In most cases, service providers expand network capacity without much knowledge of how the existing resources are utilized. Service profiling is important because it provides better insight into how network services are used, giving an overview and a deeper understanding of how and when application services are consumed.
With a better understanding of user behavior and demand for services, a more accurate service model can be derived, covering traffic from HTTP, email, FTP, streaming, mobile TV and so on. Since every operator works with limited resources, traffic growth for each service can be forecast to give a clearer picture for resource planning, capacity expansion and subscriber acquisition. Close monitoring of changes in user behavior gives a better grasp of capacity breakpoints and bottlenecks, helps providers prepare in advance for surges in traffic, and supports better decisions on marketing strategy.
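The sketch below illustrates one simple way such per-service forecasting could be done: a least-squares trend is fitted to hypothetical monthly traffic volumes for each service and projected forward until an assumed capacity breakpoint is reached. All volumes and the breakpoint are invented for the example.

```python
# Hypothetical monthly traffic volumes (GB) per service from profiling.
monthly_traffic_gb = {
    "http":      [120, 135, 150, 168, 185, 205],
    "streaming": [60, 75, 95, 118, 150, 190],
    "email":     [20, 21, 22, 22, 23, 24],
}
CAPACITY_BREAKPOINT_GB = 600  # assumed monthly capacity limit

def linear_trend(series):
    """Least-squares slope and intercept over month indexes 0..n-1."""
    n = len(series)
    xs = range(n)
    mx, my = sum(xs) / n, sum(series) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, series))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def forecast_total(month_index):
    """Projected total traffic across all services at a future month."""
    total = 0.0
    for series in monthly_traffic_gb.values():
        slope, intercept = linear_trend(series)
        total += slope * month_index + intercept
    return total

# Walk forward until the projected total crosses the assumed breakpoint.
month = len(monthly_traffic_gb["http"])
while forecast_total(month) < CAPACITY_BREAKPOINT_GB:
    month += 1
print(f"Capacity breakpoint projected around month index {month}")
```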
Proactive management
Customer complaints are always a challenge for an operator. In most cases, the 2% of users who have a bad experience contribute more than 20% of total complaints. The objective of proactive management is to pay close attention to the call failures of end users, especially very important users (VIPs), who are typically users with greater influence and value, such as government agencies or high-traffic users. The aim is to solve network issues proactively and, where possible, before users lodge complaints, instead of reactively responding to them.
Another type of user we should pay attention to is the very annoyed person (VAP). These are users who experience the worst quality on the network, for example having 90% of their calls either fail to connect or drop. Traditionally, network performance has been measured with drive tests and OSS statistics. A drive test is a sampling method that cannot take all factors in the network into account, and frequent drive tests imply high OPEX. OSS statistics, meanwhile, are an averaging method. With a reasonably good network reaching a statistical performance KPI above 98%, we tend to assume the remaining 2% of failures are randomly distributed among all users. In reality, these failures may be isolated cases concentrated on a small group of users in a specific area, causing extreme dissatisfaction within that group. By proactively analyzing VAP users, we can identify such users or areas and conduct in-depth root-cause analysis and optimization, solving network performance issues that would be overlooked by traditional OSS statistics and drive tests.
The challenge of user-level quality monitoring is the privacy policy in each country. Where such a policy hinders measurement, we can step back from monitoring the performance of each user to monitoring a group of users instead, or, in the worst case, monitor only after a complaint ticket has been opened.
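A minimal sketch of this idea, assuming hypothetical per-attempt call records and an assumed VAP definition of a 90% failure rate: the same failure-rate calculation is run per user where the privacy policy allows it, and per cell as the group-level fallback.

```python
from collections import defaultdict

# Hypothetical per-attempt records: (user, cell, call_succeeded)
attempts = [
    ("u1", "C101", True),  ("u1", "C101", True),
    ("u2", "C202", False), ("u2", "C202", False), ("u2", "C202", True),
    ("u3", "C202", False), ("u3", "C202", False),
]

VAP_FAILURE_THRESHOLD = 0.9  # assumed VAP definition
MIN_ATTEMPTS = 2             # ignore keys with too few samples

def failure_rates(records, key_index):
    """Failure rate grouped by user (key_index=0) or by cell (key_index=1)."""
    totals, failures = defaultdict(int), defaultdict(int)
    for rec in records:
        key = rec[key_index]
        totals[key] += 1
        if not rec[2]:
            failures[key] += 1
    return {k: failures[k] / totals[k]
            for k in totals if totals[k] >= MIN_ATTEMPTS}

# Per-user view where the privacy policy allows it ...
vaps = [user for user, rate in failure_rates(attempts, 0).items()
        if rate >= VAP_FAILURE_THRESHOLD]
# ... otherwise fall back to per-cell (group-level) monitoring.
worst_cells = sorted(failure_rates(attempts, 1).items(),
                     key=lambda item: -item[1])

print("VAP candidates:", vaps)
print("Cells ranked by failure rate:", worst_cells)
```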
Effective E2E quality management
After we have a better view of network quality and users' demands, effective E2E quality management has to be approached from two directions: improvement and control.
Improvement: The fundamental quality improvement is still based on wireless optimization, core optimization, transport optimization and IMS optimization. What we do differently here is to align all network elements end-to-end in the optimization solution rather than focusing on a single network element. A solution like the smartphone impact solution actually optimizes parameters and features on the RAN side, yet the main improvement benefits the PS core. When we see low throughput or data speed, a wireless failure is the first thing that comes to mind, but there is also a great possibility that it was caused by the transport layer, or by a core network with a high retransmission rate or packet fragmentation. So, to improve network quality more effectively, end-to-end optimization of the network is necessary.
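As a rough illustration of how a transport or core bottleneck might be spotted rather than assumed, the sketch below estimates a TCP retransmission rate from hypothetical packet records by counting repeated sequence numbers per flow; a real diagnosis would run on an actual probe capture.

```python
# Hypothetical packet records from a transport/core capture:
# (flow_id, tcp_sequence_number, payload_length)
packets = [
    ("flowA", 1000, 1460), ("flowA", 2460, 1460), ("flowA", 1000, 1460),
    ("flowB", 5000, 1460), ("flowB", 6460, 1460),
]

def retransmission_rate(pkts):
    """Fraction of data segments whose (flow, seq) was already seen,
    a rough proxy for TCP retransmissions in the trace."""
    seen = set()
    retransmitted = 0
    data_segments = 0
    for flow, seq, length in pkts:
        if length == 0:
            continue  # skip pure ACKs
        data_segments += 1
        if (flow, seq) in seen:
            retransmitted += 1
        else:
            seen.add((flow, seq))
    return retransmitted / data_segments if data_segments else 0.0

# A rate well above a few percent points at the transport or core
# network rather than the radio layer alone.
print(f"Retransmission rate: {retransmission_rate(packets):.1%}")
```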
Control: What is more important is to let high-value users enjoy more of the network resources. In most networks, users who pay less or subscribe only to the minimum package are often not as active as those who pay more for special network access packages. With this in mind, it is unfair for users who pay more to receive the same priority as general users when accessing network resources. This is where QoS control comes in: the strategy is to cap resources, and accept lower service quality, for those who pay less, and to guarantee more resources and higher service quality for those who pay more. There are three types of QoS strategy:
Wireless QoS: This strategy controls the guaranteed bit rate (GBR), maximum bit rate (MBR) and access priorities according to the class of user.
IP QoS: This strategy sets different priorities for different service applications when accessing IP transport resources.
Core QoS (Policy Control): This strategy is more intelligent: using an integrated policy control system, we can configure which types of services each user can access. With this strategy, service providers can tailor different charging packages for end users, such as a minimum package costing under USD5 a month that allows the user to visit only Facebook, a student package limited to web surfing and email, or a business package covering email, Internet, P2P and more (a minimal sketch of this idea follows the list).
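Purely as an illustration of the package-based idea, the sketch below hard-codes a hypothetical rule table mapping packages to allowed services and bit-rate caps; in a real network such rules would be provisioned and enforced by the policy control system, and every package name, service name and rate here is invented.

```python
# Hypothetical package-to-policy mapping; real rules live in the
# operator's policy control system, not in application code.
POLICY_RULES = {
    "minimum":  {"allowed": {"facebook"},                   "mbr_kbps": 256},
    "student":  {"allowed": {"web", "email"},               "mbr_kbps": 1024},
    "business": {"allowed": {"web", "email", "p2p", "vpn"}, "mbr_kbps": 4096},
}

def authorize(package: str, service: str) -> dict:
    """Return a gate-and-shape decision for one service request."""
    rule = POLICY_RULES.get(package)
    if rule is None or service not in rule["allowed"]:
        return {"permit": False, "mbr_kbps": 0}
    return {"permit": True, "mbr_kbps": rule["mbr_kbps"]}

print(authorize("minimum", "facebook"))  # permitted, capped at 256 kbps
print(authorize("student", "p2p"))       # blocked by the student package
```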
Systems enabling E2E service quality management
There are various tools applicable to the wireless, transport and core networks, and each falls into one of two categories: active test systems or passive monitoring systems.
Active test tools include drive test tools, mobile call simulators and IP traffic simulators. These tools generate calls as a sampling benchmark of network quality.
Passive monitoring systems can be categorized into three types:
Portable probes with a DPI engine, usually the size of a small PC, can capture user traffic protocols such as HTTP, WAP, SMTP and FTP at the IuPS, Gn, Gi or Gw interface, and can process throughput of up to 500Mbps. They are comparatively low in cost and well suited to troubleshooting and root cause analysis of service quality. Due to their limited storage capacity, portable probes are usually deployed for a limited period for troubleshooting purposes.
Server probes are a higher-end probing configuration built on high-performance servers. With processing capability of up to 4Gbps and large storage, they are suitable for real-time, 24/7 probing, which allows history records to be played back for better complaint analysis. Their cost is accordingly much higher than that of portable probes.
A service director or quality manager is the management system that integrates all passive probes, active probes and the OSS into a centralized monitoring system. With service quality dashboard and SLA monitoring functions, it can be integrated into the standard operation and maintenance process for more comprehensive monitoring of the end-user experience. Undeniably, such functionality requires higher investment.
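As an illustration of the SLA monitoring function such a system provides, the sketch below checks a set of KQIs against assumed thresholds, much as a dashboard would when deciding whether to flag a breach; the KQI names and limits are hypothetical.

```python
# Hypothetical SLA thresholds: ("min", x) means the KQI must stay at or
# above x, ("max", x) means it must stay at or below x.
SLA_THRESHOLDS = {
    "http_success_rate":     ("min", 0.98),
    "mean_page_response_s":  ("max", 4.0),
    "streaming_buffering_s": ("max", 2.0),
}

def sla_status(kqis: dict) -> dict:
    """Mark each reported KQI as OK or BREACH against its SLA threshold."""
    status = {}
    for name, value in kqis.items():
        direction, limit = SLA_THRESHOLDS.get(name, (None, None))
        if direction is None:
            status[name] = "NO_SLA"
        elif direction == "min":
            status[name] = "OK" if value >= limit else "BREACH"
        else:
            status[name] = "OK" if value <= limit else "BREACH"
    return status

print(sla_status({"http_success_rate": 0.97, "mean_page_response_s": 3.1}))
```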
The growth of mobile broadband is here, it is now, and it is unavoidable. With changes in user culture and growing demand for higher speeds and new services, the current optimization process must become more focused on the user and application levels to remain competitive. The tailor-made suit is here to stay.