DAC-2003 Interoperability Workshop
University Needs and Directions
Andrew B. Kahng
UC San Diego CSE & ECE Depts.
abk@ucsd.edu
http://vlsicad.ucsd.edu/
Academic Experiences
• Loose interoperability: MARCO GSRC Bookshelf
http://gigascale.org/bookshelf/
– Repository for CAD-IP Reuse
• Scalable design effort: auto-installation, all-pairs benchmarking (sketched at the end of this slide), design flow optimization, …
• Scalable learning curves: “autograders”, open-source
– Common denominator for industry-academia interactions
• Open-source interoperability: PDTools
– Many benefits: research quality and speed, student training, recognition
– Requires commitment, non-trivial software expertise, non-trivial effort (platforms, etc.)
– Slow to permeate academic community
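
To make “all-pairs benchmarking” concrete: every tool occupying a given flow slot is run on every benchmark, and the results are tabulated side by side. A minimal Python sketch follows; it is not the actual Bookshelf infrastructure, and the tool binaries, benchmark files, and “COST” output convention are all hypothetical.

    # All-pairs benchmarking driver (sketch; hypothetical tools/files,
    # not the GSRC Bookshelf's actual infrastructure).
    import itertools
    import subprocess

    TOOLS = ["placer_a", "placer_b"]          # hypothetical tool binaries
    BENCHMARKS = ["ibm01.aux", "ibm02.aux"]   # hypothetical benchmark files

    def run_one(tool, bench):
        """Run one tool on one benchmark; return its reported cost, or None."""
        try:
            out = subprocess.run([tool, bench], capture_output=True,
                                 text=True, timeout=3600, check=True)
        except subprocess.SubprocessError:
            return None
        # Assumed convention: each tool prints a line "COST <value>".
        for line in out.stdout.splitlines():
            parts = line.split()
            if len(parts) == 2 and parts[0] == "COST":
                return float(parts[1])
        return None

    # Every tool on every benchmark, tabulated for side-by-side comparison.
    for tool, bench in itertools.product(TOOLS, BENCHMARKS):
        print(f"{tool:>10} on {bench:<12} -> {run_one(tool, bench)}")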
Academic Personalities
• (+) Historically, have set good examples
– Frameworks, open-source, …
• (+) Careful, methodical, less prone to biases
• (-) Attitudes
– Interoperability, reusability, reproducibility, benchmarking: “not what a Ph.D. is about”
– Competitiveness (→ avoid comparisons, avoid exposing IP)
• (-) Risk vs. reward paradox
– Incremental research (needs interoperability) is high-risk
– Novel research (incomparable; no interop needed) is low-risk
• (-) Inertia
– “What’s in it for me?”
– “Can’t afford to spend student time on Purify, Solaris…”
– “If everyone else does it, then I’ll think about it”
Interoperability For Benchmarking
• Benchmarking is an academic imperative → interoperability
• Lack of interoperable analysis backplane slows CAD-IP reuse
– → Common evaluators of delay / power / etc.
– → Common incremental STA backplane (interface sketched at the end of this slide)
• Need relevant, stable metrics
– E.g., relevant to physical synthesis and RTL-to-GDSII methodologies
• Need benchmarks with sufficient information (+ datamodel)
– HDL, flat gate-level netlists, libraries, timing constraints
• Despite relevant initiatives such as OA, still:
– no good public analysis tools (RCX, DC, STA, …)
– no good metrics, no benchmarks, no libraries
• If “OpenAccess” is the path to interoperability, why doesn’t academia see compelling reasons to absorb OA overheads?
– OA-based utilities for extraction, timing, layout, … ?
– Native datasets and libraries; native solution evaluators?
– Wide adoption, with OAC companies basing their own flows on OA?
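
To make the common-evaluator and incremental-STA points concrete, below is a minimal Python sketch of what a shared analysis backplane interface might look like. All names are hypothetical (this is not OA’s or any existing tool’s API); the point is that reported delay/power numbers become comparable only when every tool scores itself through the same evaluator, and that an incremental hook lets optimizers re-query timing after local changes without full re-analysis.

    # Common analysis backplane (sketch; hypothetical interface names).
    from abc import ABC, abstractmethod

    class AnalysisBackplane(ABC):
        """Shared evaluator that every tool reports results through."""

        @abstractmethod
        def worst_slack(self) -> float:
            """Full STA over the current design state (ns)."""

        @abstractmethod
        def total_power(self) -> float:
            """Power estimate under the benchmark's activity data (mW)."""

        @abstractmethod
        def update(self, changed_cells: list) -> None:
            """Incremental hook: after a local change, re-time only the
            affected fanin/fanout cones instead of redoing full STA."""

    def report(tool_name: str, backplane: AnalysisBackplane) -> None:
        # Identical reporting path for every tool -> comparable numbers.
        print(f"{tool_name}: slack={backplane.worst_slack():.3f} ns, "
              f"power={backplane.total_power():.2f} mW")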
A List of Catalysts
• Understand barriers to adoption (“activation energies”) and key gaps in assessing interoperability in academia
• “Free, modular tool-chain” = reference flow
– Consistent with perspective of CAD-IP reuse
– Clear incentive for researchers and EDA vendors to use OA
– Reference flow into which innovations can be plugged (sketched below)
• Libraries and vertical benchmarks
– How much does it cost to run Cadabra/Prolific and Silicon Metrics, using, say, STARC open design rules?
– Many failed startups and design projects → lots of test cases?
• Analysis backplane
– Signoff-quality RCX/DC/STA/synth/… engines → instant adoption
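
To make the reference-flow catalyst concrete: a free, modular tool-chain is essentially a scripted pipeline with replaceable stages. The Python sketch below assumes hypothetical stage names and tool commands (it is not an actual OA-based flow); a researcher plugs an innovation into one slot and inherits the rest of the RTL-to-GDSII pipeline unchanged.

    # Modular reference flow (sketch; hypothetical stages and commands).
    import subprocess

    # Default stage -> command table; any stage can be overridden.
    DEFAULT_FLOW = {
        "synth":   ["ref_synth", "design.v"],
        "place":   ["ref_place", "design.def"],
        "route":   ["ref_route", "design.def"],
        "signoff": ["ref_sta",   "design.def"],
    }

    def run_flow(overrides=None):
        """Run all stages, substituting any user-supplied stage commands."""
        flow = {**DEFAULT_FLOW, **(overrides or {})}
        for stage, cmd in flow.items():
            print(f"[{stage}] {' '.join(cmd)}")
            subprocess.run(cmd, check=True)  # halt the flow on any failure

    # Benchmarking a new placer inside the otherwise-fixed flow:
    # run_flow(overrides={"place": ["my_new_placer", "design.def"]})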