
Best Practices for Optimizing
Blackboard Learn
Steve Feldman,
sfeldman@blackboard.com
What We’ll Cover
• A deployment approach for the ages.
• How to make use of the new sizing guide.
• Optimizing the platform components.
Flexible and Scalable Application
Deployment
Flexible and Scalable Application
Deployment
• An ideal deployment will contain…
– Availability at every edge of the application environment
• Strategy: Physical distribution of load-balanced systems
• Strategy: Minimal DB recovery time, not necessarily zero downtime
– Consumption of every possible machine resource
• Strategy: Virtualization provisioning
– Techniques for improving user experience
• Strategy: Techniques and tools for achieving page-level SLAs
– Large addressable memory spaces
• Strategy: 64-bit and large OS process space allocations
Flexible and Scalable Application
Deployment
• An ideal deployment will contain…
– Minimum Storage Recovery Time
• Strategy: Enterprise storage with Snapshot capabilities
– Advanced monitoring for operations and planning
• Strategy: Measurement tools and analytics
– Automation…Automation…Automation
• Strategy: Investment in repeatable, reliable automated
processes.
Deployment: Availability
• VLEs are different beasts today than in the past.
– Communities are bigger
– Sessions last longer
– Content is richer
– Key point: Adoption is greater and users expect their sites up 24 x 7 x 365
• Architecture is designed for many parallel instances of the
product scaled in a horizontal fashion.
– Distributed physical deployments
– Virtualization is a key element
• Database failover more important than horizontal
database scalability.
– Emphasis on vertical database scalability
Deployment: Resource Utilization
• Moore’s law is in full effect
– CPUs are getting faster with more cores
– Memory is in abundance and cheap
– Storage is grossly abundant
• Massive systems can be obtained at low cost, but
cannot be saturated in stand-alone configurations.
• Virtualization offers the opportunity…
– Deploy with availability in mind
– Saturate system resources
Deployment: Improving Page
Responsiveness
• Gzip…Gzip…Gzip…
– All of our supported browsers handle gzip.
– Reduces payload
• Especially helps lower-bandwidth connections like Cable, DSL and dial-up
– Minor overhead on the application layer (~2% to ~5%)
• You have the option to perform compression at the load-balancer layer instead
– Most Bb deployments do not enable Gzip at all
• Even when enabled, some proxies and software packages mess up the
Accept-Encoding header (the sketch below is a quick way to check what clients actually receive)
• Optimize your images
– Page size really does matter
– Reduce the size without reducing the quality
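A quick way to confirm compression is reaching end users is to request a page with an Accept-Encoding: gzip header and inspect the response headers. A minimal Java sketch; the URL is a placeholder, not something from this deck:

    // GzipCheck.java -- hedged sketch: point it at any HTML page on your own server.
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class GzipCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; pass your own front-end address as the first argument.
            URL url = new URL(args.length > 0 ? args[0] : "https://learn.example.edu/");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // Advertise gzip support the same way a browser would.
            conn.setRequestProperty("Accept-Encoding", "gzip");
            conn.connect();
            // If compression is enabled end to end, Content-Encoding should be "gzip".
            System.out.println("HTTP status:      " + conn.getResponseCode());
            System.out.println("Content-Encoding: " + conn.getHeaderField("Content-Encoding"));
            System.out.println("Content-Length:   " + conn.getHeaderField("Content-Length"));
            conn.disconnect();
        }
    }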
Deployment: Large Address
Space
• As of Blackboard Learn™ Release 9.1 all
supported/certified configurations include a 64-bit
option.
• We have been pushing more processing to the client and the DB over the
last few releases, but the major memory management technique is to use
more application caches.
– Memory stays persistent longer
– Less wasteful from a creation/destruction perspective, but
puts greater demands on larger spaces.
• Most of our application testing focused on 4GB and
8GB JVM deployments on 6GB and 10GB OS
spaces.
– Limited testing at 16GB and 32GB
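A quick sanity check that the JVM you launched really is 64-bit and received the heap you asked for (for example -Xmx4g or -Xmx8g); a minimal sketch, assuming a HotSpot JVM where the sun.arch.data.model property is available:

    // HeapCheck.java -- hedged sketch, not part of the deck.
    public class HeapCheck {
        public static void main(String[] args) {
            // "64" on a 64-bit HotSpot JVM; the property may be absent on other JVMs.
            System.out.println("Data model:  " + System.getProperty("sun.arch.data.model", "unknown"));
            System.out.println("OS arch:     " + System.getProperty("os.arch"));
            // Approximate maximum heap the JVM will attempt to use, in MB.
            System.out.println("Max heap MB: " + Runtime.getRuntime().maxMemory() / (1024 * 1024));
        }
    }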
Deployment: Storage MTTR
• The reference architecture pushes for “diskless” boots in which an
iSCSI or NFS partition resides on an enterprise storage system.
• Both OS/VM partition and data partition served up
from remote storage deployment designed for
performance and scalability.
– Make your hardware work from a CPU, Memory and
Network perspective…save the Disk for the experts.
• Consider scenarios for reducing “Mean Time to
Recovery or Repair”
– Snapshot technology offering minutes for recovery
Deployment: Advanced Monitoring
• Measurement is the secret sauce for successful
deployments.
– Most reliable and scalable deployments measure beyond
the server infrastructure
• Different types of measurements
– System/Environmental measurements
– Business measurements
– Synthetic measurements
• Collecting is only part of the prize
– You need to analyze the data to drive business decisions from it.
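As an illustration of a synthetic measurement, the sketch below times a single page fetch so the latency can be trended alongside system and business metrics; the URL and path are placeholders, not from the deck:

    // SyntheticProbe.java -- hedged sketch of a page-level synthetic probe.
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SyntheticProbe {
        public static void main(String[] args) throws Exception {
            // Placeholder: any page you want to hold to a page-level SLA.
            URL url = new URL(args.length > 0 ? args[0] : "https://learn.example.edu/");
            long start = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            int status = conn.getResponseCode();   // forces the full request/response round trip
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            conn.disconnect();
            // Feed this into whatever analytics/trending tool you already use.
            System.out.println("status=" + status + " latencyMs=" + elapsedMs);
        }
    }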
Deployment: Automation
• Goal of moving to 100% unattended and fully automated
deployment.
• Reduce MTTR and prevent disasters
• Automation requires intimacy…intimacy requires
knowledge
• Use automation for
– Configuration Management and Deployment
– Maintenance
– Repeatable tasks
– Adaptive tuning
– Minimize possibility of human error
• http://dev2ops.org/storage/downloads/FullyAutomatedProvisioning_Whitepaper.pdf
Sizing the Application: To HyperThread or Not
• Applies to Intel deployments only
– “…delivers thread-level parallelism on each processor resulting in
more efficient use of processor resources—higher processing
throughput—and improved performance on multi-threaded
software.” –Intel Corporation
• Greatly improved in the 5500-series and later processors
• Provides double worker thread capacity
• If it’s not turned on, stop what you are doing and enable
it ASAP!
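One quick way to confirm Hyper-Threading is actually enabled is to compare the logical processor count the OS (and JVM) reports against the known physical core count; a minimal Java sketch:

    // CpuCheck.java -- hedged sketch: with Hyper-Threading on, the JVM sees the logical
    // processor count, which is double the physical core count on HT-capable Intel parts.
    public class CpuCheck {
        public static void main(String[] args) {
            System.out.println("Logical processors visible to the JVM: "
                    + Runtime.getRuntime().availableProcessors());
        }
    }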
Moving Away from Clusters
• Tomcat clusters were introduced back in Blackboard Learn 7.x, before
the rise of server virtualization.
– Only supported 32-bit configurations at the time, but systems were
being shipped with 8GB, 16GB and 32+GB of RAM.
– Needed a way to take advantage of memory, but were limited to a
1.7GB address space.
– Recommending “distributed” deployment approaches as well.
• Still applies, but can be achieved differently.
• Clustering has its advantages, but also has its penalties.
– Failover not as ideal as one would desire.
• Best approach is to scale up with 64-bit spaces and
distributed JVMs across both virtual and physical configs
Sizing Using P.A.Rs
• PAR = Performance Archetype Ratios
– Methodology for sizing based on units of work that can be
applied to “unit of configuration”
• PARs assume a world of linear units
– Add units of configuration to meet growing demands of unit
of work.
• PARs are based on four key resources: CPU, memory, disk I/O and
application interfaces (threads and connections).
• Used for making capacity decisions when sizing both virtual and
physical components; a simple worked example of the linear arithmetic follows below.
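To make the linear PAR arithmetic concrete, here is an illustrative sketch only; the sessions-per-node ratio is invented for the example and is not a published Blackboard PAR:

    // ParSizing.java -- hedged sketch of "unit of work per unit of configuration" math.
    public class ParSizing {
        public static void main(String[] args) {
            int peakConcurrentSessions = 6000;   // hypothetical unit of work (peak load)
            int sessionsPerAppNode = 750;        // hypothetical PAR: sessions one app node handles
            // Linear model: add units of configuration until aggregate capacity covers peak demand.
            int nodesNeeded = (int) Math.ceil((double) peakConcurrentSessions / sessionsPerAppNode);
            System.out.println("Application nodes needed: " + nodesNeeded);
        }
    }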
Optimizing the Web Server
• The web server in the Blackboard Learn
configuration is nothing more than a gateway to the
application container.
– When clusters were more relevant, the web server acted
as a pseudo load-balancer.
• Not many opportunities for optimization other than
– KeepAlives
– Interfaces
– Compression
• It can become a bottleneck if not properly optimized
– Better to have high ceilings from an interface perspective
Optimizing the JVM
• Java HotSpot offers standard -X and non-standard -XX
options for performance and behavior.
– -X options are always guaranteed across releases and patches of
Java.
– -XX options must be used with caution as they are subject to
change with any release of Java.
• -XX options should be tested and measured using the
production-safe arguments.
• Read the release notes of Java for “performance” updates
– http://java.sun.com/javase/6/webnotes/ReleaseNotes.html
Optimizing the JVM
• Cross-platform recommendation for using Concurrent
Mark Sweep Collector
– Best optimized for 64-bit address
– Combine -XX:+UseConcMarkSweepGC with -XX:+UseParNewGC
• Manually size New Space using the -XX:NewSize and
-XX:MaxNewSize options (1/4 to 1/3 of total heap).
– Consider Survivor Space ratios 4 or lower.
• Be careful about sharing -XX non-standard options across
customers.
– If you don’t understand what an option does and it’s not
recommended by Blackboard, the best choice is not to use it.
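After setting collector flags (for example -XX:+UseConcMarkSweepGC -XX:+UseParNewGC), it is worth verifying which collectors the running JVM actually selected; a minimal sketch using the standard GC MXBeans:

    // GcCheck.java -- hedged sketch: with CMS + ParNew, HotSpot reports collectors
    // named "ParNew" and "ConcurrentMarkSweep".
    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcCheck {
        public static void main(String[] args) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                // Name, collection count and accumulated collection time for each collector.
                System.out.println(gc.getName() + "  collections=" + gc.getCollectionCount()
                        + "  timeMs=" + gc.getCollectionTime());
            }
        }
    }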
Optimizing the Database: SQL
Server
• The number of data files makes no difference on SQL Server for the
data and transaction log files
• Allow the data/transaction files to grow as big as they
want within reason.
– What’s reasonable: 64GB
– http://msdn.microsoft.com/en-us/library/ms143432(sql.90).aspx
• TempDB is a completely different story
– # of files = # of DB threads
– Set the first X files to a uniform size; set the last file to the same size with
auto-extension ON
– Determine the size needed over time
• Separate volume for paging file
Optimizing the Database: SQL
Server
• Be aware of MAXDOP: Max Degree of Parallelism
– Setting it to unlimited can unintentionally have a negative effect on
query performance.
• AWE can and does work on 64-bit systems
• Configure READ_COMMITTED_SNAPSHOT
• Two nuggets of information:
– Learn How to Use SQL DMVs
– Study SQL Server Wait Events and Tuning
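As a starting point for the DMV and wait-event homework above, the sketch below pulls the top waits from sys.dm_os_wait_stats over JDBC; the connection string, credentials and database name are placeholders, and the Microsoft JDBC driver is assumed to be on the classpath:

    // SqlServerWaits.java -- hedged sketch of reading a DMV from Java.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SqlServerWaits {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- replace with your own.
            String url = "jdbc:sqlserver://dbhost:1433;databaseName=BBLEARN;user=bbadmin;password=secret";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 // Top waits by accumulated wait time: a starting point for wait-event tuning.
                 ResultSet rs = stmt.executeQuery(
                         "SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count "
                         + "FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "  " + rs.getLong(2) + "ms  "
                            + rs.getLong(3) + " waits");
                }
            }
        }
    }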
Optimizing the Database: Oracle
• Balance I/Os across multiple data files (~2 to 8GB per
file).
• REDO is critical to performance at a session/query level.
– Be aware of how much REDO is being generated over time.
– NOLOGGING will disable it, but we rarely use NOLOGGING
• TEMP is very complex and used for managing transient
data.
– One TEMP file is adequate
– If latency exists on TEMP, consider introducing TEMP file groups
• SGA is important, but PGA can be your best friend or your
worst enemy with high concurrency.
Optimizing the Database: Oracle
• The Oracle CBO can be your friend
– Must understand optimizer behavior
– Willingness to read Cost Execution Plans
• Using Wait Events and Cost Execution Plans for tuning
initiatives
– Wait events are at a system, session and query level
• Importance of Statistics and Histograms
– CBO is just guessing without properly set statistics and
histograms.
– CBO is dependent on your data.
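In the same spirit, a minimal sketch for pulling system-level wait events from V$SYSTEM_EVENT over JDBC; connection details are placeholders, the Oracle JDBC driver is assumed to be on the classpath, and the account needs access to the V$ views:

    // OracleWaits.java -- hedged sketch of reading system-level wait events.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class OracleWaits {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- replace with your own host, SID and credentials.
            String url = "jdbc:oracle:thin:@dbhost:1521:BBLEARN";
            try (Connection conn = DriverManager.getConnection(url, "bbadmin", "secret");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT event, total_waits, time_waited FROM v$system_event "
                         + "ORDER BY time_waited DESC")) {
                int shown = 0;
                // TIME_WAITED is reported in centiseconds.
                while (rs.next() && shown++ < 10) {
                    System.out.println(rs.getString(1) + "  waits=" + rs.getLong(2)
                            + "  timeWaited(cs)=" + rs.getLong(3));
                }
            }
        }
    }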
Please provide feedback for this session by emailing
BbWorldFeedback@blackboard.com.
The subject of the email should be the title of this
session:
Best Practices for Optimizing Blackboard Learn