ISO/IEC JTC 1/SC 29 N 15907

ISO/IEC JTC 1/SC 29
N 15907
ISO/IEC JTC 1/SC 29
Coding of audio, picture, multimedia and hypermedia information
Secretariat: JISC (Japan)
Document type: Meeting Report
Title: Meeting Report, the 114th SC 29/WG 11 Meeting, 2016-02-22/26, San Diego, USA [SC 29/WG 11 N 15906]
Status: Draft meeting report [Requested action: For SC 29's information]
Date of document: 2016-06-03
Source: Convener, ISO/IEC JTC 1/SC 29/WG 11
Expected action: INFO
Email of secretary: sc29-sec@itscj.ipsj.or.jp
Committee URL: http://isotc.iso.org/livelink/livelink/open/jtc1sc29
INTERNATIONAL ORGANISATION FOR STANDARDISATION
ORGANISATION INTERNATIONALE DE NORMALISATION
ISO/IEC JTC 1/SC 29/WG 11
CODING OF MOVING PICTURES AND AUDIO
N15906
ISO/IEC JTC 1/SC 29/WG 11
San Diego, CA, US – February 2016
Source: Leonardo Chiariglione
Title: Report of 114th meeting
Status: Report of 114th meeting
Annex A – Attendance list
Annex B – Agenda
Annex C – Input contributions
Annex D – Output documents
Annex E – Requirements report
Annex F – Systems report
Annex G – Video report
Annex H – JCT-VC report
Annex I – JCT-3V report
Annex J – Audio report
Annex K – 3DG report
1
Opening
The 114th MPEG meeting was held in San Diego, CA, USA, from 09:00 on 2016-02-22 to 20:00 on 2016-02-26
2
Roll call of participants
This is given in Annex A
3
Approval of agenda
The approved agenda is given in Annex B
4
Allocation of contributions
This is given in Annex C
5
Communications from Convenor
There was no communication
6
Report of previous meetings
The following document was approved
15623
Report of 113th meeting
7
Workplan management
7.1 Media coding
7.1.2 Profiles, Levels and Downmixing Method for 22.2 Channel Programs
The following document was approved
15999
ISO/IEC 14496-3:2009/DAM 6, Profiles, Levels and Downmixing Method for 22.2
Channel Programs
7.1.3 Additional Levels and Supplemental Enhancement Information
The following documents were approved
16030
16031
Disposition of Comments on ISO/IEC 14496-10:2014/DAM2
Text of ISO/IEC 14496-10:2014/FDAM2 Additional Levels and Supplemental
Enhancement Information
7.1.4 Progressive High 10 Profile
The following documents were approved
16032
16033
Request for ISO/IEC 14496-10:2014/Amd.4
Text of ISO/IEC 14496-10:2014/PDAM4 Progressive High 10 Profile, additional VUI
code points and SEI messages
7.1.5 Printing material and 3D graphics coding for browsers
The following document was approved
16037
Core Experiments Description for 3DG
7.1.6 Updated text layout features and implementations
The following documents were approved
15930
15931
Request for ISO/IEC 14496-22:2015 AMD 2 Updated text layout features and
implementations
Text of ISO/IEC 14496-22:2015 PDAM 2 Updated text layout features and
implementations
7.1.7 Video Coding for Browsers
The Convenor recalled the ITTF letter giving instructions on the handling of Type 3 declarations, which he had distributed on the Livelink email system.
This most welcome clarification mandated that VCB activities be stopped.
He also recalled that the Secretary had, surprisingly, omitted to send the Convenor the Type 3 declaration that ITTF had sent to the Secretariat.
The Convenor also responded orally at the meeting and then in writing (m38221) to the FINB document (m38044).
7.1.8 Internet Video Coding
The following documents were approved
16034
16035
16036
16038
Study Text of ISO/IEC DIS 14496-33 Internet Video Coding
Internet Video Coding Test Model (ITM) v 14.1
Description of IVC Exploration Experiments
Draft verification test plan for Internet Video Coding
7.1.9 MVCO Extensions on Time Segments and Multi-Track Audio
The following document was approved
15936
WD of ISO/IEC 21000-19:2010 AMD 1 Extensions on Time Segments and Multi-Track
Audio
7.1.10 Contract Expression Language
The following documents were approved
15937
15994
DoC on ISO/IEC DIS 21000-20 2nd edition Contract Expression Language
Text of ISO/IEC IS 21000-20 2nd edition Contract Expression Language
7.1.11 Media Contract Ontology
The following documents were approved
15939
15940
DoC on ISO/IEC DIS 21000-21 2nd edition Media Contract Ontology
Text of ISO/IEC IS 21000-21 2nd edition Media Contract Ontology
7.1.12 User Description
The following documents were approved
15941
15942
DoC on ISO/IEC DIS 21000-22 User Description
Text of ISO/IEC IS 21000-22 User Description
7.1.13 FU and FN extensions for Parser Instantiation
The following documents were approved
16041
16042
Disposition of Comments on ISO/IEC 23002-4:2014/PDAM3
Text of ISO/IEC 23002-4:2014/DAM3 FU and FN descriptions for parser instantiation
from BSD
7.1.14 Spatial Audio Object Coding
The following document was approved
16079
Draft of ISO/IEC 23003-2:2010 SAOC, Second Edition
7.1.15 MPEG Surround Extensions for 3D Audio
The following document was approved
15934
Study on ISO/IEC 23003-1:2007/DAM 3 MPEG Surround Extensions for 3D Audio
7.1.16 Support for MPEG-D DRC
The following documents were approved
16082
16083
DoC on ISO/IEC 23003-3:2012/DAM 3 Support of MPEG-D DRC, Audio Pre-Roll and
IPF
Text of ISO/IEC 23003-3:2012/FDAM 3 Support of MPEG-D DRC, Audio Pre-Roll and
IPF
7.1.17 Parametric DRC, Gain Mapping and Equalization Tools
The following documents were approved
16085
16086
DoC on ISO/IEC 23003-4:2015 PDAM 1, Parametric DRC, Gain Mapping and
Equalization Tools
Text of ISO/IEC 23003-4:2015 DAM 1, Parametric DRC, Gain Mapping and
Equalization Tools
7.1.18 Media Context and Control – Control Information
The following documents were approved
16109
16110
Request for ISO/IEC 23005-2 4th Edition Control Information
Text of ISO/IEC CD 23005-2 4th Edition Control Information
7.1.19 Media Context and Control – Sensory Information
The following documents were approved
16111
16112
Request for ISO/IEC 23005-3 4th Edition Sensory Information
Text of ISO/IEC CD 23005-3 4th Edition Sensory Information
7.1.20 Media Context and Control – Virtual World Object Characteristics
The following documents were approved
16113
16114
Request for ISO/IEC 23005-4 4th Edition Virtual world object characteristics
Text of ISO/IEC CD 23005-4 4th Edition Virtual world object characteristics
7.1.21 Media Context and Control – Data Formats for Interaction Devices
The following documents were approved
16115
16116
Request for ISO/IEC 23005-5 4th Edition Data Formats for Interaction Devices
Text of ISO/IEC CD 23005-5 4th Edition Data Formats for Interaction Devices
7.1.22 Media Context and Control – Common Types and Tools
The following documents were approved
16117
16118
Request for ISO/IEC 23005-6 4th Edition Common types and tools
Text of ISO/IEC CD 23005-6 4th Edition Common types and tools
7.1.23 HEVC
The following documents were approved
16045
16046
16048
16049
16050
16051
16052
Disposition of Comments on DIS ISO/IEC 23008-2:201x
Text of ISO/IEC FDIS 23008-2:201x High Efficiency Video Coding [3rd ed.]
High Efficiency Video Coding (HEVC) Test Model 16 (HM16) Improved Encoder
Description Update 5
HEVC Screen Content Coding Test Model 7 (SCM 7)
MV-HEVC verification test report
SHVC verification test report
Verification test plan for HDR/WCG coding using HEVC Main 10 Profile
7.1.24 Conversion and coding practices for HDR/WCG video
The following document was approved
16063
WD of ISO/IEC TR 23008-14 Conversion and coding practices for HDR/WCG video
7.1.25 Additional colour description indicators
The following document was approved
16047
WD of ISO/IEC 23008-2:201x/Amd.1 Additional colour description indicators
7.1.26 MPEG-H 3D Audio File Format Support
The following documents were approved
16090
16091
DoC on ISO/IEC 23008-3:2015/DAM 2, MPEG-H 3D Audio File Format Support
Text of ISO/IEC 23008-3:2015/FDAM 2, MPEG-H 3D Audio File Format Support
7.1.27 MPEG-H 3D Audio Phase 2
The following document was approved
16092
Study on ISO/IEC 23008-3:2015/DAM 3, MPEG-H 3D Audio Phase 2
7.1.28 Carriage of Systems Metadata in MPEG-H 3D Audio
The following document was approved
16093
Study on ISO/IEC 23008-3:2015/DAM 4, Carriage of Systems Metadata
7.1.29 3D Audio Profiles
The following document was approved
16094
WD on New Profiles for 3D Audio
7.1.30 Point Cloud Compression
The following documents were approved
16121
16122
Current status on Point Cloud Compression (PCC)
Description of PCC Software implementation
7.1.31 Free Viewpoint Television
The following documents were approved
16128
16129
16130
Results of the Call for Evidence on Free-Viewpoint Television: Super-Multiview and
Free Navigation
Description of 360 3D video application Exploration Experiments on Divergent multiview video
Use Cases and Requirements on Free-viewpoint Television (FTV) v.3
7.1.32 Future Video Coding
The following documents were approved
16138
16125
16066
16067
16068
Requirements for a Future Video Coding Standard v2
Presentations of the Workshop on 5G and beyond UHD Media
Algorithm description of Joint Exploration Test Model 2 (JEM2)
Description of Exploration Experiments on coding tools
Call for test materials for future video coding standardization
7.1.33 Digital representations of light fields
The following documents were approved
16149
16150
Summary notes of technologies relevant to digital representations of light fields
Draft report of the Joint Ad hoc Group on digital representations of light/sound fields
for immersive media applications
7.1.34 Genome Compression
The following documents were approved
16134
16135
16136
16137
16146
16145
16147
Requirements on Genome Compression and Storage
Lossy compression framework for Genome Data
Draft Call for Proposal for Genomic Information Compression and Storage
Presentations of the MPEG Seminar on Genome Compression Standardization
Evaluation Procedure for the Draft Call for Proposals on Genomic Information
Representation and Compression
Database for Evaluation of Genomic Information Compression and Storage
Results of the Evaluation of the CfE on Genomic Information Compression and Storage
7.2 Composition coding
7.2.1 MPEG-H Composition Information
The following document was approved
15970
Technology under Consideration for ISO/IEC 23008-11 AMD 1
7.3 Description coding
7.3.1 Compact Descriptors for Video Analysis
The following documents were approved
15938
16064
16065
Results of the Call for Proposals on CDVA
CDVA Experimentation Model (CXM) 0.1
Description of Core Experiments in CDVA
7.4 Systems support
7.4.1 Coding-independent codepoints
The following documents were approved
16039
16040
Request for ISO/IEC 23001-8:201x/Amd.1
Text of ISO/IEC 23001-8:201x/PDAM1 Additional code points for colour description
7.4.2 Media orchestration
The following documents were approved
16131
16132
16133
Context and Objectives for Media Orchestration v.3
Requirements for Media Orchestration v.3
Call for Proposals on Media Orchestration Technologies
7.5 Transport and File formats
7.5.1 Carriage of MPEG-H 3D audio over MPEG-2 Systems
The following documents were approved
15912
15913
DoC on ISO/IEC 13818-1:2015 DAM 5 Carriage of MPEG-H 3D audio over MPEG-2
Systems
Text of ISO/IEC 13818-1:2015 FDAM 5 Carriage of MPEG-H 3D audio over MPEG-2
Systems
7.5.2 Carriage of Quality Metadata in MPEG-2 Systems
The following document was approved
15914
Text of ISO/IEC 13818-1:2015 AMD 6 Carriage of Quality Metadata in MPEG-2
Systems
7.5.3 Virtual segments
The following documents were approved
15915
15916
DoC on ISO/IEC 13818-1:2015 PDAM 7 Virtual Segment
Text of ISO/IEC 13818-1:2015 DAM 7 Virtual Segment
7.5.4 Signaling of carriage of HDR/WCG video in MPEG-2 Systems
The following documents were approved
15917
15918
Request for ISO/IEC 13818-1:2015 AMD 8 Signaling of carriage of HDR/WCG video in
MPEG-2 Systems
Text of ISO/IEC 13818-1:2015 PDAM 8 Signaling of carriage of HDR/WCG video in
MPEG-2 Systems
7.5.5 DRC file format extensions
The following documents were approved
15922
15923
DoC on ISO/IEC 14496-12:2015 PDAM 1 DRC Extensions
Text of ISO/IEC 14496-12:2015 DAM 1 DRC Extensions
7.5.6 Enhancements to the ISO Base Media File Format
The following document was approved
15925
WD of ISO/IEC 14496-12:2015 AMD 2
7.5.7 Carriage of NAL unit structured video in the ISO Base Media File Format
The following documents were approved
15927
15928
Draft DoC on ISO/IEC DIS 14496-15 4th edition
Draft text of ISO/IEC FDIS 14496-15 4th edition
7.5.8 MMT
The following documents were approved
15963
15964
Description of Core Experiments on MPEG Media Transport
Revised text of ISO/IEC 23008-1:201x 2nd edition MMT
7.5.9 Use of MMT Data in MPEG-H 3D Audio
The following documents were approved
15958
15959
DoC on ISO/IEC 23008-1:201x PDAM 1 Use of MMT Data in MPEG-H 3D Audio
Text of ISO/IEC 23008-1:201x DAM 1 Use of MMT Data in MPEG-H 3D Audio
7.5.10 MMT Enhancements for Mobile Environments
The following documents were approved
15995
15960
Request for ISO/IEC 23008-1:201x AMD 2 Enhancements for Mobile Environments
Text of ISO/IEC 23008-1:201x PDAM 2 Enhancements for Mobile Environments
7.5.11
The following document was approved
15961
WD of ISO/IEC 23008-1:201x AMD 3
7.5.12 Carriage of Green Metadata in an HEVC SEI Message
The following documents were approved
15948
15949
DoC on ISO/IEC 23001-11:201x/DAM 1 Carriage of Green Metadata in an HEVC SEI
Message
Text of ISO/IEC 23001-11:201x/FDAM 1 Carriage of Green Metadata in an HEVC SEI
Message
7.5.13 Support for AVC, JPEG and layered coding of images
The following documents were approved
15972
15973
Request for ISO/IEC 23008-12 AMD 1
Text of ISO/IEC 23008-12 PDAM 1 Support for AVC, JPEG and layered coding of
images
7.5.14 MPEG Media Transport Implementation Guidelines
The following documents were approved
15976
15977
15978
DoC for ISO/IEC PDTR 23008-13 2nd edition MPEG Media Transport Implementation
Guidelines
Text of ISO/IEC DTR 23008-13 2nd edition MPEG Media Transport Implementation
Guidelines
WD of ISO/IEC 23008-13 3rd edition MPEG Media Transport Implementation
Guidelines
7.5.15 DASH
The following documents were approved
15985
15986
15987
Technologies under Consideration for DASH
Descriptions of Core Experiments on DASH amendment
Draft Text of ISO/IEC 23009-1 3rd edition
7.5.16 Authentication, Access Control and Multiple MPDs
The following documents were approved
15979
15980
DoC on ISO/IEC 23009-1:2014 DAM 3 Authentication, Access Control and multiple
MPDs
Text of ISO/IEC 23009-1:2014 FDAM 3 Authentication, Access Control and multiple
MPDs
7.5.17 Segment Independent SAP Signalling, MPD chaining and other extensions
The following documents were approved
DoC on ISO/IEC 23009-1:2014 PDAM 4 Segment Independent SAP Signalling (SISSI), MPD
chaining, MPD reset and other extensions
Text of ISO/IEC 23009-1:2014 DAM 4 Segment Independent SAP Signalling (SISSI), MPD
chaining, MPD reset and other extensions
7.5.18 MPEG-DASH Implementation Guidelines
The following document was approved
15990
WD of ISO/IEC 23009-3 2nd edition AMD 1 DASH Implementation Guidelines
7.5.19 Server and Network Assisted DASH
The following document was approved
15991
Study of ISO/IEC DIS 23009-5 Server and Network Assisted DASH
7.5.20 DASH with Server Push and WebSockets
The following documents were approved
15992
15993
Request for subdivision of ISO/IEC 23009-6 DASH with Server Push and WebSockets
Text of ISO/IEC CD 23009-6 DASH with Server Push and WebSockets
7.6 Multimedia architecture
7.6.1 MPEG-M Architecture
The following documents were approved
15952
15953
DoC on ISO/IEC CD 23006-1 3rd edition Architecture
Text of ISO/IEC DIS 23006-1 3rd edition Architecture
7.6.2 MPEG extensible middleware (MXM) API
The following document was approved
15954
Text of ISO/IEC IS 23006-2 3rd edition MXM API
7.6.3 MPEG-V Architecture
The following documents were approved
16106
16107
16108
Technology under consideration
Request for ISO/IEC 23005-1 4th Edition Architecture
Text of ISO/IEC CD 23005-1 4th Edition Architecture
7.6.4 Mixed and Augmented Reality Reference Model
The following document was approved
16119
Text of ISO/IEC 2nd CD 18039 Mixed and Augmented Reality Reference Model
7.6.5 Media-centric Internet of Things and Wearables
The following documents were approved
16140
16141
16139
16120
Overview, context and objectives of Media-centric IoTs and Wearables
Conceptual model, architecture and use cases for Media-centric IoTs and Wearables
Draft Requirements for Media-centric IoTs and Wearables
State of discussions related to MIoTW technologies
7.6.6 Big Media
The following document was approved
16151
Thoughts and Use Cases on Big Media
7.7 Application formats
7.7.1 Augmented Reality Application Format
The following document was approved
16103
Text of ISO/IEC FDIS 23000-13 2nd Edition Augmented Reality Application Format
7.7.2 Publish/Subscribe Application Format
The following documents were approved
15944
15945
DoC on ISO/IEC DIS 23000-16 Publish/Subscribe Application Format
Text of ISO/IEC FDIS 23000-16 Publish/Subscribe Application Format
7.7.3 Multisensorial Effects Application Format
The following documents were approved
16104
16105
DoC for ISO/IEC 23000-17:201x CD Multiple Sensorial Media Application Format
Text of ISO/IEC 23000-17:201x DIS Multiple Sensorial Media Application Format
7.7.4 Omnidirectional Media Application Format
The following documents were approved
16143
15946
Requirements for OMAF
Technologies under Considerations for Omnidirectional Media Application Format
7.7.5 Common Media Application Format
The following documents were approved
16144
15947
Requirements for the Common Media Application Format
WD v.1 of Common Media Application Format
7.7.6 Visual Identity Management Application Format
The following document was approved
16142
Use cases and requirements on Visual Identity Management AF
7.8 Reference implementation
7.8.1 Reference Software for Internet Video Coding
The following documents were approved
16026
16027
Disposition of Comments on ISO/IEC 14496-5:2001/PDAM41
Text of ISO/IEC 14496-5:2001/DAM41 Reference Software for Internet Video Coding
7.8.2 Reference Software for Alternative Depth Information SEI message
The following documents were approved
16028
16029
Disposition of Comments on ISO/IEC 14496-5:2001/PDAM42
Text of ISO/IEC 14496-5:2001/DAM42 Reference Software for Alternative Depth
Information SEI message
7.8.3 Reference Software for File Format
The following document was approved
15935
WD of ISO/IEC 14496-32 Reference Software and Conformance for File Format
7.8.4 Reference Software and Implementation Guidelines of User Description
The following documents were approved
15921
15943
Request for ISO/IEC 21000-22 AMD 1 Reference Software and Implementation
Guidelines of User Description
Text of ISO/IEC 21000-22 PDAM 1 Reference Software and Implementation Guidelines
of User Description
7.8.5 Reference Software for Green Metadata
The following documents were approved
15950
15951
Request for ISO/IEC 23001-11 AMD 2 Conformance and Reference Software
Text of ISO/IEC 23001-11 PDAM 2 Conformance and Reference Software
7.8.6 Reference Software for Media Tool Library
The following documents were approved
16043
16044
Disposition of Comments on ISO/IEC 23002-5:2013/PDAM3
Text of ISO/IEC 23002-5:2013/DAM3 Reference software for parser instantiation from
BSD
7.8.7 SAOC and SAOC Dialogue Reference Software
The following document was approved
16078
Study on ISO/IEC 23003-2:2010/DAM 5, SAOC Reference Software
7.8.8 Reference Software for DRC
The following documents were approved
16087
16088
Request for Amendment ISO/IEC 23003-4:2015/AMD 2, DRC Reference Software
ISO/IEC 23003-4:2015/PDAM 2, DRC Reference Software
7.8.9 Reference Software for MPEG-M
The following document was approved
15955
Text of ISO/IEC IS 23006-3 3rd edition MXM Reference Software and Conformance
7.8.10 MMT Reference Software
The following documents were approved
15965
15966
15967
Text of ISO/IEC IS 23008-4 MMT Reference Software
Request for ISO/IEC 23008-4 AMD 1 MMT Reference Software with Network
Capabilities
Text of ISO/IEC 23008-4 PDAM 1 MMT Reference Software with Network Capabilities
7.8.11 HEVC Reference Software
The following documents were approved
16053
16054
16055
16056
Disposition of Comments on ISO/IEC 23008-5:2015/DAM3
Disposition of Comments on ISO/IEC 23008-5:2015/DAM4
Text of ISO/IEC FDIS 23008-5:201x Reference Software for High Efficiency Video
Coding [2nd ed.]
Request for ISO/IEC 23008-5:201x/Amd.1
16057
Text of ISO/IEC 23008-5:201x/PDAM1 Reference Software for Screen Content Coding
Profiles
7.8.12 3D Audio Reference Software
The following documents were approved
16095
16096
3D Audio Reference Software RM6
Workplan on 3D Audio Reference Software RM7
7.8.13 MPEG-DASH Reference Software
The following document was approved
15989
Work plan for development of DASH Conformance and reference software and sample
clients
7.9 Conformance
7.9.1 Internet Video Coding Conformance
The following documents were approved
16024
16025
Disposition of Comments on ISO/IEC 14496-4:2004/PDAM46
Text of ISO/IEC 14496-4:2004/DAM46 Conformance Testing for Internet Video Coding
7.9.2 Conformance of File Format
The following document was approved
15935
WD of ISO/IEC 14496-32 Reference Software and Conformance for File Format
7.9.3 Media Tool Library Conformance
The following documents were approved
16043
16044
Disposition of Comments on ISO/IEC 23002-5:2013/PDAM3
Text of ISO/IEC 23002-5:2013/DAM3 Reference software for parser instantiation from
BSD
7.9.4 Conformance for Green Metadata
The following documents were approved
15950
15951
Request for ISO/IEC 23001-11 AMD 2 Conformance and Reference Software
Text of ISO/IEC 23001-11 PDAM 2 Conformance and Reference Software
7.9.5 SAOC and SAOC Dialogue Enhancement Conformance
The following document was approved
16077
Study on ISO/IEC 23003-2:2010/DAM 4, SAOC Conformance
7.9.6 MPEG-V – Conformance
7.9.7 Conformance of MPEG-M
The following document was approved
15955
Text of ISO/IEC IS 23006-3 3rd edition MXM Reference Software and Conformance
7.9.8 MMT Conformance
The following document was approved
15968
Workplan of MMT Conformance
7.9.9 HEVC Conformance
The following documents were approved
16058
16059
16060
16061
16062
Disposition of Comments on ISO/IEC 23008-8:2015/DAM1
Disposition of Comments on ISO/IEC 23008-8:2015/DAM2
Disposition of Comments on ISO/IEC 23008-8:2015/DAM3
Text of ISO/IEC FDIS 23008-8:201x HEVC Conformance Testing [2nd edition]
Working Draft of ISO/IEC 23008-8:201x/Amd.1 Conformance Testing for Screen
Content Coding Profiles
7.10 Maintenance
7.10.1 Systems coding standards
The following documents were approved
15910
15911
15919
15920
15926
15929
15932
15933
15956
15957
15962
15969
DoC on ISO/IEC 13818-1:2015 AMD 1/DCOR 2
Text of ISO/IEC 13818-1:2015 AMD 1/COR 2
Text of ISO/IEC 13818-1:2015 DCOR 2
Defect Report of ISO/IEC 14496-12
Text of ISO/IEC 14496-14 COR 2
Defect Report of ISO/IEC 14496-15
DoC on ISO/IEC 14496-30:2014 DCOR 2
Text of ISO/IEC 14496-30:2014 COR 2
DoC on ISO/IEC 23008-1:2014 DCOR 2
Text of ISO/IEC 23008-1:2014 COR 2
Defects under investigation in ISO/IEC 23008-1
Text of ISO/IEC 23008-11 DCOR 1
15974
15975
15983
15984
DoC on ISO/IEC 23008-12 DCOR 1
Text of ISO/IEC 23008-12 COR 1
Text of ISO/IEC 23009-1:2014 DCOR 3
Defects under investigation in ISO/IEC 23009-1
7.10.2 Audio coding standards
The following documents were approved
15998
15924
16076
16080
16081
16084
16089
Text of ISO/IEC 14496-3:2009/AMD 3:2012/COR 1, Downscaled (E)LD
Text of ISO/IEC 14496-5:2001/AMD 24:2009/DCOR 3, Downscaled (E)LD
ISO/IEC 23003-2:2010/DCOR 3 SAOC, SAOC Corrections
DoC on ISO/IEC 23003-3:2012/Amd.1:2014/DCOR 2
Text of ISO/IEC 23003-3:2012/Amd.1:2014/COR 2
Defect Report on MPEG-D DRC
ISO/IEC 23008-3:2015/DCOR 1 Multiple corrections
8
Organisation of this meeting
8.1 Tasks for subgroups
The following tasks were assigned
Group
Requirements
Systems
Std
7
A
V
Exp
Exp
Exp
Exp
Exp
2
4
21
A
Pt E/A
1
1
1
1
15
A4
A5
A6
A7
E4
A2
22
32
22
15
16
17
18
E3
E1
E1
E1
E1
E1
E1
Title
Compact Descriptors for Video Analysis
CMAF
MioT – MPEG wearable
FTV
Future video coding
Media orchestration
Genome compression
Everything 3D
Carriage of additional Audio P&L
Carriage of 3D Audio
Carriage of quality metadata
Virtual segments
Carriage of layered HEVC
Carriage of AVC-based 3D excluding
MVC
Open Font Format
FF RS (including SVC FF RS)
User description
MPAF
PSAF
Multisensorial AF
MLAF
B
DA
H
M
Video
Exp
4
7
B
VC
C
Exp
H
3V
4
H
Audio
4
D
19
20
21
22
8
10
1
E1
E1
E1
E1
A1
A1
A3
Screen Sharing AF
Omnidirectional Media AF
CMAF
Identity management AF
CICP
ROI coordinates
Authentication access control and multiple
MPD
2 E2
DASH C&RS
3 E3
Implementation guidelines
5 E1
SAND
6 E1
FDH
1 A1 MMT data carriage in 3D Audio
A2 Mobile MMT
4 E1
MMT Reference Software
7 E1
MMT Conformance
12 E1
Storage of image sequences in ISOBMFF
13 E2
MMT implementation guidelines
1 E3
Architecture
2 E3
MXM Engines and API
3 E3
Reference software
Media orchestration
10 A2 Additional levels and SEI
A4 Progressive High 10 Profile
33 E1
Internet Video Coding
6 E2
Reference software
14 E1
CDVS RS&C
15 E1
CDVA
4 E4
Parser instantiation from BSD
8 A1 CICP
4 A2 VTL extensions (HEVC)
Future Video Coding
2 E3
SCC
5 A1 RExt Reference Software
A3 SHVC Reference Software
8 A2 RExt Conformance
A3 SHVC conformance
14 E1
HDR best practices
4 A45 MFC+depth conformance
5 A39 MFC+depth RS
10 A3 Additional SEI
5 A2 MV-HEVC RS
A4 3D RS
8 A1 MV + 3D Conformance
3 A6 Levels for 22 channel programs
4 A1 Decoder generated DRC sets
A2 DRC C
A3 DRC RS
H
3DG
3 E2
A2
6 E1
9 E1
5 A40
11 E2
25 E3
27 A4
13 E2
17 E1
2 E3
E1
1 E3
2 E3
3 E3
4 E3
5 E3
6 E3
7 E3
4
A
M
MAR
V
Communication White
Paper
Video
Introduction
Column
3D Audio phase 2
FF support for 3D Audio
3D Audio RS
3D Audio C
Web3D coding RS
Protos for ARAF in BIFS
Web 3DG coding
Web3D coding C
ARAF
Multisensory MAF
MPEG-V engines
MAR RM
Architecture
Control Information
Sensory information
Virtual world object characteristics
Data representation for interaction devices
Common types and tools
Conformance and reference software
Animation Framework eXtension (AFX)
Compact descriptors for visual search
Image File Format
CEL & MCO
ARAF
3D Audio
Make a good introduction for ARAF, UD
Open font format
Dynamic Range Control
HEVC/SCC
3D Audio
CDVS
User Description
ARAF
PSAF
8.2 Joint meetings
The following joint meetings were held
Groups            What               Day   Time1   Time2   Where
R, V              HDR                Mon   15:30   16:30   V
R, V, VC, VCEG    HDR, SCC           Mon   16:30   18:00   VC
R, V, VCEG        AVC                Mon   18:00   18:30   VC
S, A              System carriage    Tue   09:00   10:00   A
R, V              CDVA               Tue   09:00   10:00   R
R, V, 3           Everything 3D      Tue   10:00   11:00   R
S, R              CMF, OMAF          Tue   12:00   13:00   R
All               MPEG Vision        Wed   11:30   13:00   A
R, V, VCEG, VC    HDR                Wed   11:00   12:00   Salon D-E
JPEG-MPEG         Joint standards    Thu   09:00   10:00   JPEG
A, 3              3D Audio in VR/AR  Thu   10:00   11:00   A
R, A              3D Audio profiles  Thu   11:00   12:00   A
R, 3              MIoTW              Thu   14:00   15:00   3
All               Communication      Thu   16:00   17:00   Mission Beach
All               MPEG Vision        Thu   16:00   18:00
9
WG management
9.1 Terms of reference
The following document was approved
16000
Terms of Reference
9.2 Liaisons
9.2.1 Input liaisons
The following input liaisons were received
#
Title
37576 Liaison statement
37577 Liaison statement
37578 ITU-T SG
37579 ITU-T SG 16
37580 ITU-R SG 6
37581 JTC 1/WG 10
37582 JTC 1/WG 10 on
37583 Liaison statement
37584 Liaison statement
37585 Liaison statement
37653 Liaison statement
37654
37655
37656
37787
HDR
MPEG-H Audio
Liaison statement
Liaison statement
Source
JTC 1/
WG 10
SC 37
ITU-T
SG 16
ITU-T
SG 16
ITU-R
SG 6
JTC 1/
WG 10
JTC 1/
WG 10
ITU-T
SG 20
JTC 1/
WG 10
IETF
ITU-T
SG 9
DVB
DVB
3GPP
DVB
Disposition
IoT
3
Biometric presentation attack detection
16 on video coding collaboration
R
V
on discontinued IPTV multimedia
application frameworks to include the
definition of IPTV in N13952
on ITU-R BS.1196-5 Audio coding for digital broadcasting
Invitation to the 4th JTC 1/WG 10
Meeting, 18th-22nd Jan. 2016
Logistical information for the 4th JTC
1/WG 10 Meeting
a new Study Group, IoT and its
applications including smart cities and
communities (SC&C)
collection of Data related to the Internet of
Things
URI Signing
Specification of IP-VOD DRM for cable
television multiscreen system in MultiDRM environment
S
DASH
TEMI
A
3
3
3
3
S
S
V
A
S
S
37788 Liaison statement
ITU-T
SG 16
37899 MPEG-DASH (N15699) DASH
IF
38016 Liaison statement
ETSI
ISG
CCM
38040 Liaison statement
HbbTV
38041 Revised WD of ISO/IEC 30141
JTC 1/
WG 10
38042
JTC 1/
WG 10
38082 HDR
SMPTE
38095 Liaison statement
VSF
37658 IEC CD 60728-13-1 Ed.
2.0
37589 IEC CD 61937-13 and
61937-14
37590 IEC CD 61937-2 Ed.
2.0/Amd.2
37686 IEC CD 62608-2 Ed.1.0
37591 IEC CD 63029 [IEC
100/2627/CD]
IEC TC
100
IEC TC
100
IEC TC
100
IEC TC
100
IEC TC
100
37592 IEC CD 63029 [IEC
100/2627A/CD]
IEC TC
100
37650 IEC CDV 62394/Ed.3
IEC TC
100
IEC TC
100
IEC TC
100
37595 IEC CDV 62680-1-3,
62680-3-1 and 62943
37586 IEC CDV 62766-4-1,
62766-4-2, 62766-5-1,
62766-5-2, 62766-6,
62766-7 and 62766-8
37657 IEC CDV 62827-3 Ed.
1.0
37659 IEC CDV 62943/Ed. 1.0
37598 IEC CDV 63002 Ed. 1.0
37660 IEC CDV 63002 Ed.1.0
IEC TC
100
IEC TC
100
IEC TC
100
IEC TC
100
HDR
V
S
HDR
V
TEMI
Internet of Things Reference Architecture
(IoT RA)
Request contributions on IoT use cases
S
3
higher resolution JPEG 2000 video in
MPEG-2 TS
Cable networks for television signals,
sound signals and interactive services
Digital audio - Interface for non-linear
PCM encoded audio bitstreams
Digital audio - Interface for non-linear
PCM encoded audio bitstreams
Multimedia home network configuration - Basic reference model
Multimedia e-publishing and e-book technologies - Raster-graphics image-based e-books
Multimedia e-publishing and e-book technologies - Raster-graphics image-based e-books (Title): Introductory note
Service diagnostic interface for consumer
electronics products and networks
Universal Serial Bus Type-C™ Cable and Connector Specification, Revision 1.1
Open IPTV Forum (OIPF) consumer
terminal function and network interfaces
for access to IPTV and open Internet
multimedia services
Wireless Power Transfer - Management
Visible light beacon system for
multimedia applications
Identification and communication
interoperability method for external power
supplies used with portable computing
devices
Identification and communication
interoperability method for external power
supplies used with portable computing
devices
3
V
S
37588 IEC CDV 63028 Ed. 1.0
IEC TC
100
Wireless Power Transfer - Magnetic
Resonance Interoperability
37596 IEC CDV
IEC TC
100
37587 IEC DTR 62921 Ed. 2.0
IEC TC
100
37651 IEC
IEC TC
100
IEC TC
100
MIDI (MUSICAL INSTRUMENT DIGITAL INTERFACE) SPECIFICATION 1.0 (ABRIDGED EDITION, 2015)
MIDI (MUSICAL INSTRUMENT DIGITAL INTERFACE) SPECIFICATION 1.0
QUANTIFICATION METHODOLOGY
FOR GREENHOUSE GAS EMISSIONS
FOR COMPUTERS AND MONITORS
NP Microspeakers
37597 IEC
37652 IEC
IEC TC
100
37593 IEC
IEC TC
100
IEC TC
100
IEC TC
100
37594 IEC
37661 IEC TR 62935 Ed1.0
37685 IEC TS 63033-1
IEC TC
100
NP MIDI (MUSICAL INSTRUMENT
DIGITAL INTERFACE)
SPECIFICATION 1.0 (ABRIDGED
EDITION, 2015)
NP Multimedia Vibration Audio Systems - Method of measurement for audio characteristics of audio actuator by pinna conduction
NP: Universal Serial Bus interfaces for
data and power - Part 3-1
NP: Universal Serial Bus interfaces for
data and power - Part 1-3
TECHNICAL REPORT OF
MEASUREMENT METHODS – High
Dynamic Range Video
CAR MULTIMEDIA SYSTEMS AND
EQUIPMENT – DRIVE MONITOR
SYSTEM
9.2.2 Output liaisons
The following output liaisons were sent
16148
16126
16015
16016
16017
16018
16019
16020
16021
16022
16023
16069
16070
Liaison with TC 276
Liaison to JTC 1/WG 9
Liaison Statement to IETF on URI signing
Liaison Statement template on DASH with Server Push and WebSockets
Liaison Statement to 3GPP on DASH
Liaison Statement to DASH-IF on DASH
Liaison Statement to DVB on TEMI
Liaison Statement to HbbTV on TEMI
Liaison Statement template on Common Media Application Format
Liaison Statement template on HEVC sync samples in ISO/IEC 14496-15
Liaison Statement to SCTE on Virtual Segments and DASH
Liaison statement to ITU-T SG 16 on video coding collaboration
Liaison statement to DVB on HDR
16071
16072
16073
16074
16097
16098
16099
16100
16101
16102
16123
16124
Liaison statement to ATSC on HDR/WCG
Liaison statement to ITU-R WP6C on HDR
Liaison statement to ETSI ISG CCM on HDR
Liaison statement to SMPTE on HDR/WCG
Liaison Statement to DVB on MPEG-H 3D Audio
Liaison Statement Template on MPEG-H 3D Audio
Liaison Statement to ITU-R Study Group 6 on BS.1196-5
Liaison Statement to IEC TC 100
Liaison Statement to IEC TC 100 TA 4
Liaison Statement to IEC TC 100 TA 5
Liaison Statement to JTC1 WG10 related to MIoT
Liaison Statement to ITU-T SG20 related to MIoT
9.2.3 Statement of benefits
The following document was approved
16075
Statement of benefit for establishing a liaison with DICOM
9.2.4 Organisations in liaison
The following document was approved
16011
List of organisations in liaison with MPEG
9.3 Ad hoc groups
The following document was approved
15905
List of AHGs Established at the 114th Meeting
9.4 Asset management
The following documents were approved
16005
16006
16007
16008
16009
Schemas
Reference software
Conformance
Content
URIs
9.5 IPR management
The following document was approved
16010
Call for patent statements on standards under development
9.6 Work plan and time line
The following documents were approved
16001
16002
16003
16004
16013
15971
MPEG Standards
Table of unpublished FDIS
MPEG Work plan
MPEG time line
Complete list of all MPEG standards
MPEG Strategic Standardisation Roadmap
10
Administrative matters
10.1 Schedule of future MPEG meetings
The following meeting schedule was approved
#     City      Country   yy   mm-mm   dd-dd
115   Geneva    CH        16   05-06   30-03
116   Chengdu   CN        16   10      17-21
117   Geneva    CH        17   01      16-20
118   Hobart    AU        17   04      03-07
119   Torino    IT        17   07      17-21
120   ??        ??        17   10      23-27
121   ??        ??        18   01      22-26
122   ??        ??        18   04      16-20
10.2 Promotional activities
The following documents were approved
15996
15997
15907
White Paper on MPEG-21 Contract Expression Language (CEL) and MPEG-21 Media Contract Ontology (MCO)
White Paper on the 2nd edition of ARAF
Press Release of the 114th Meeting
11
Resolutions of this meeting
These were approved
12
A.O.B.
There was no other business
13
Closing
The meeting closed at 20:00 on 2016-02-26
Annex A – Attendance list
LastName
Brown
Chon
Chong
El-Maleh
Hach
Harvey
Kempf
Koizumi
Lee
Li
Lo
Nandhakumar
Pazos
Sedlik
Zhong
Zhou
Aaron
Abbas
Agnelli
FirstName
Gregg
Jaehong
In suk
khaled
Faraz
Ian
James
Mariko
Gwo Giun
Zhu
Charles
Nandhu
Marcelo
Jeffrey
Xin
Wensheng
Anne
Adeel
Matteo
Organization
Microsoft
Qualcomm
Qualcomm
Qualcomm
Simon Fraser University
20th Century Fox
Ericsson
Kyoto Seika University
National Cheng Kung University
University of Missouri, Kansas City
Qualcomm Technologies Incorporated
LG Electronics
Qualcomm Inc.
PLUS Coalition
QUALCOMM
SIP UCLA ITA
Netflix
GoPro
Fraunhofer IIS
Agostinelli
Ahn
Alberti
Alpaslan
Alshina
Alvarez
Amon
Andersson
Andriani
Aoki
Massimiliano
Yong-Jo
Claudio
Zahir
Elena
Jose Roberto
Peter
Kenneth
Stefano
Shuichi
Trellis Management Ltd
Kwangwoon University (KWU)
EPFL
Ostendo Technologies, Inc.
Samsung Electronics
Huawei Technologies
Siemens AG
Ericsson
ARRI Cinetechnik GmbH
NHK
Artusi
Azimi
Bae
Bailer
Balestri
Ballocca
Bandoh
Bang
Baroncini
Barroux
Baumgarte
Baylon
Beack
Alessandro
Maryam
HyoChul
Werner
Massimo
Giovanni
Yukihiro
Gun
Vittorio
Guillaume
Frank
David
Seungkwon
Trellis
University of British Columbia
Konkuk University
JOANNEUM RESEARCH
Telecom Italia S.p.A.
Sisvel Technology srl
NTT
ETRI
GBTech
Fujitsu Laboratories Ltd.
Apple Inc.
ARRIS
ETRI
Country
United States
United States
United States
United States
Canada
United States
United States
Japan
China
United States
United States
United States
United States
United States
United States
United States
United States
United States
Germany
United
Kingdom
Korea
Switzerland
United States
Korea
United States
Germany
Sweden
Germany
Japan
United
Kingdom
Canada
Korea
Austria
Italy
Italy
Japan
Korea
Italy
Japan
United States
United States
Korea
Benjelloun
Touimi
bergeron
Bivolarsky
Abdellatif
cyril
Lazar
Huawei Technologies (UK) Co., Ltd.
Thales
Tata Communications
Bober
Boitard
Bouazizi
Boyce
brondijk
Bross
Bruylants
Budagavi
Bugdayci Sansli
Byun
Carballeira
Ceulemans
Chai
Miroslaw
Ronan
Imed
Jill
robert
Benjamin
Tim
Madhukar
Done
Hyung Gi
Pablo
Beerend
Chi
Surrey University
University of British Columbia
Samsung Research America
Intel
Philips
Fraunhofer HHI
Vrije Universiteit Brussel - iMinds
Samsung Research America
Qualcomm
Kangwon National University
Universidad Politecnica de Madrid
iMinds - ETRO - VUB
Real Communications
Chalmers
Champel
Alan
Mary-Luc
University of Warwick
Technicolor
Chandramouli
Chang
CHANG
CHANG
Chen
Chen
Chen
Chen
Chen
Chen
Chen
Chiariglione
Chien
Chinen
Chiu
Cho
Choe
Choi
Choi
Choi
Choi
Choi
Choi
CHOI
Chon
Chono
Chujoh
Krishna
Mike
SUNG JUNE
YAO JEN
Chun-Chi
Hao
Jianle
Jie
Junhua
Peisong
Weizhong
Leonardo
Wei-Jung
Toru
Yi-Jen
Minju
Yoonsik
Byeongdoo
Hae Chul
JangSik
Jangwon
Kiho
Miran
SEUNGCHEOL
Sang Bae
Keiichi
Takeshi
Queen Mary, University of London
Huawei Devices
ETRI
ITRI International
InterDigital
Shanghai Jiao Tong University
Qualcomm Inc.
Peking University
Huawei Technologies Co., Ltd.
Broadcom Corporation
Huawei technologies CO.,LTD
CEDEO
Qualcomm Inc.
Sony Corporation
Intel Corp.
Seoul Women's University
Yonsei University
Samsung Electronics
Hanbat National University
Kangwon national university
LG Electronics
Samsung Electronics
ETRI
Sejong University
Samsung Electronics
NEC Corp.
Toshiba corporation
United
Kingdom
France
United States
United
Kingdom
Canada
United States
United States
Netherlands
Germany
Belgium
United States
Finland
Korea
Spain
Belgium
United States
United
Kingdom
France
United
Kingdom
United States
Korea
China
United States
China
United States
China
China
United States
China
Italy
United States
Japan
United States
Korea
Korea
Korea
Korea
Korea
Korea
Korea
Korea
Korea
Korea
Japan
Japan
Chun
clare
Concolato
DA YOUNG
Dang
Sungmoon
gordon
Cyril
JUNG
Niem
Insignal
orange labs
Telecom ParisTech
Konkuk University
SCTE
Daniels
Daniels
de Bruijn
De Cock
de Haan
DEROUINEAU
Descampe
David
Noah
Werner
Jan
Wiebe
Nicolas
Antonin
Sky
MIT
Philips
Netflix
Philips
VITEC
intoPIX
Dias
Doherty
Dolan
Domanski
doyen
Ducloux
Duenas
Dufourd
Ebrahimi
Ehara
fautier
Feng
Fersch
Filippov
Filos
Foessel
Fogg
Francois
Fröjdh
Fuchs
Gao
Gendron
Gérard
Andre
Richard
Michael
Marek
didier
Xavier
Alberto
Jean-Claude
Touradj
Hiroyuki
thierry
Jie
Christof
Alexey
Jason
Siegfried
Chad
Edouard
Per
Harald
Wen
Patrick
MADEC
BBC
Dolby Labs
TBT
Poznan University of Technology
technicolor
B<>com
NGCodec Inc
Institut Mines Telecom
EPFL
Panasonic Corporation
harmonic
Vixs System Inc
Dolby Laboratories
Huawei Technologies
Qualcomm
Fraunhofer IIS
MovieLabs
Technicolor
Ericsson
Fraunhofer IIS
Harmonic Inc
Thomson Video Networks
B-COM
Gibbs
Gibellino
GISQUET
Grange
Jon
Diego
Christophe
Adrian
Huawei Technologies Co Ltd
Telecom Italia SPA
CANON Research France
Google
Grant
Graziosi
Grois
Grüneberg
Gu
GUEZ VUCHER
Catherine
Danillo
Dan
Karsten
Zhouye
Marc
Nine Tiles
Ostendo Technologies Inc.
Fraunhofer HHI
Fraunhofer HHI
Arris Group Inc.
SCPP
Korea
France
France
Korea
United States
United
Kingdom
United States
Netherlands
United States
Netherlands
France
Belgium
United
Kingdom
United States
United States
Poland
France
France
United States
France
Switzerland
Japan
United States
Canada
Germany
Russia
United States
Germany
United States
France
Sweden
Germany
United States
France
France
United
Kingdom
Italy
France
United States
United
Kingdom
United States
Germany
Germany
United States
France
Hagqvist
Hamidouche
Hannuksela
Hara
HARA
HASHIMOTO
HATTORI
He
He
He
Hearty
Hendry
HEO
Hernaez
Herre
Hinds
Hipple
Hirabayashi
HO
Hoang
Hofmann
Holland
Hong
Hsieh
Huang
Huang
Huang
Huang
Huang
Hughes
Iacoviello
Ichigaya
Ikai
Ikeda
IMANAKA
Ishikawa
Iwamura
JANG
Jang
Jang
Jang
Jang
Januszkiewicz
Jasioski
Jeon
JEONG
Jeong
jiang
Jari
Wassim
Miska
Junichi
KAZUHIRO
Ryoji
SALLY
Dake
Yun
Yuwen
Paul
FNU
JIN
Mikel
Juergen
Arianne
Aaron
Mitsuhiro
YO-SUNG
Dzung
Ingo
James
Seungwook
Ted
Cheng
Qi
Tiejun
Wei
Yu-Wen
Kilroy
Roberto
Atsuro
Tomohiro
Masaru
Hideo
Takaaki
Shunsuke
DALWON
EueeSeon
In-su
Si-Hwan
Wonkap
Łukasz
Marcin
Seungsu
JINHO
Min Hyuk
minqiang
Nokia Technologies
INSA de Rennes / IETR
Nokia
Ricoh company, ltd.
NHK
Renesas Electronics
Twentieth Century Fox
BlackBerry
Tsinghua Univ.
InterDigital Communications
Sony Electronics, Inc.
Qualcomm
LG Electronics
Stanford University
Fraunhofer IIS
Cable Television Laboratories
Netflix
SONY Corp.
GIST
NXP Semiconductor
Fraunhofer IIS
Intel Corporation
Arris
Qualcomm
Nanjing Zhongxingxin Software Co.Ltd
Dolby
Peking University
Shanghai Jiao Tong University
MediaTek Inc.
Microsoft
RAI
NHK
Sharp corporation
Sony Electronics Inc.
NTT
Waseda University
NHK
Korea Electronics Technology Institute
Hanyang University
ETRI
ETRI
Vidyo Inc.
Zylia Sp. z o.o.
Imagination Ltd.
SungKyunKwan Univ.
GreenCloud co.ltd.
Myongji University
santa clara university
Finland
France
Finland
Japan
Japan
Japan
United States
Canada
China
United States
United States
United States
Korea
United States
Germany
United States
United States
Japan
Korea
United States
Germany
United States
United States
United States
China
China
China
China
China
United States
Italy
Japan
Japan
United States
Japan
Japan
Japan
Korea
Korea
Korea
Korea
United States
Poland
Poland
Korea
Korea
Korea
United States
Jin
Jo
Jo
Jongmin
Joo
Joshi
Joveski
JU
JUN
Jung
Jung
Kaneko
Kang
Kang
KANG
Karczewicz
Kawamura
Keinert
Kerofsky
Kervadec
Kim
Kim
Kim
Kim
Kim
Kim
Kim
Kim
Kim
Kim
Kim
Kim
Kim
Kimura
Klenner
Ko
Koenen
KOLAROV
Konstantinides
koo
Kratschmer
Krauss
Guoxin
Bokyun
Bokyun
Lee
Sanghyun
Rajan
Bojan
HYUNHO
DONGSAN
Cheolkon
Joel
Itaru
DongNam
Jewon
JUNG WON
Marta
Kei
Joachim
Louis
Sylvain
DaeYeon
Duckhwan
Hui Yong
Ilseung
Jae Hoon
Jae-Gon
Jungsun
Kyuheon
Moo-Young
Namuk
Sang-Kyun
Seung-Hwan
Youngseop
Takuto
Peter
Hyunsuk
Rob
Krasimir
Konstantinos
bontae
Michael
Kurt
Qualcomm
Kyunghee University
Kyunghee University
SK Telecom
ETRI
Qualcomm
USTARTAPP
Korea Electronics Technology Institute
ETRI
Xidian University
Orange Labs
Tokyo Polytechnic University
Nexstreaming
Ewha W. University
ETRI
Qualcomm
KDDI Corp. (KDDI R&D Labs.)
Fraunhofer IIS
Interdigital
Orange Labs
Chips&Media Inc.
ETRI
ETRI
Hanyang Univ.
Apple
Korea Aerospace University
MediaTek Inc
KyungHee Univ
Qualcomm
Sungkyunkwan Univ.
Myongji University
Sharp Labs of America
Dankook University
NTT
Panasonic
ETRI
TNO
APPLE
Dolby Labs
etri
Fraunhofer IIS
Dolby Laboratories Inc.
Kudumakis
Panos
Kui
Kuntz
Kwak
Fan
Achim
Youngshin
Queen Mary University of London
Peking University, Shenzhen Graduate School
Fraunhofer IIS
Ulsan National Institute of Science and
United States
Korea
Korea
Korea
Korea
United States
France
Korea
Korea
China
France
Japan
Korea
Korea
Korea
United States
Japan
Germany
United States
France
Korea
Korea
Korea
Korea
United States
Korea
United States
Korea
United States
Korea
Korea
United States
Korea
Japan
Germany
Korea
Netherlands
United States
United States
Korea
Germany
Germany
United
Kingdom
China
Germany
Korea
Kwon
Ladd
Lafruit
LAI
Lainema
Oh-Jin
Patrick
Gauthier
PoLin
Jani
Technology
Sejong University
Comcast
Université Libre de Bruxelles
MediaTek USA
Nokia
Laverty
Lavric
Le Léannec
Lee
Lee
Lee
Lee
Lee
Lee
Lee
Lee
LEE
Lee
Lee
Lee
Lee
LEE
Lee
Lee
Lei
Levantovsky
Li
LI
Li
Li
Li
Li
Li
Lim
Lim
Lim
Lim
Lim
Lin
Lin
LIN
lin
LIU
Liu
liu
Lohmar
Edward
Traian
Fabrice
Bumshik
Chang-kyu
Dongkyu
Hae
Jae Yung
Jangwon
Jin Young
Jinho
JUNGHYUN
MyeongChun
Seungwook
Sungwon
Taegyu
Taejin
Yong-Hwan
Yung Lyul
Shawmin
Vladimir
Ge
HAITING
Jisheng
Min
Ming
Xiang
Zhu
ChongSoon
Jaehyun
Sung-Chang
Sungwon
Youngkwon
Ching-Chieh
Chun-Lung
HUI-TING
tao
LI
Shan
wenjun
Thorsten
DTS Inc.
Institut Mines-Télécom
Technicolor
LG Electronics
ETRI
Kwangwoon University
ETRI
Sejong University
LG Electronics
ETRI
ETRI
Hanyang University
Korea Electronics Technology Institute
ETRI
Qualcomm
Yonsei University
ETRI
Far East University
Sejong University
MediaTek, Inc.
Monotype
Peking University, Shenzhen Graduate School
Huawei Technologies Co., Ltd.
Tsinghua University
Qualcomm Inc.
ZTE Corporation
Qualcomm
University of Missouri, Kansas City
Panasonic R&D Center Singapore
LG Electronics
ETRI
Sejong Univ.
Samsung
ITRI
ITRI international
ITRI/National Chiao Tung University
Tongji University
Peking University
MediaTek
huawei USA
Ericsson
Korea
United States
Belgium
United States
Finland
United
Kingdom
France
France
Korea
Korea
Korea
Korea
Korea
Korea
Korea
Korea
Korea
Korea
Korea
United States
Korea
Korea
Korea
Korea
China
United States
China
China
United States
United States
China
United States
United States
Singapore
Korea
Korea
Korea
United States
China
China
China
China
China
United States
United States
Germany
Lu
Lu
Luthra
ma
Macq
Malamal
Vadakital
Mandel
Maness
Margarit
Marpe
Mattavelli
MAZE
Ang
Taoran
Ajay
xiang
Benoit
Zhejiang University
Dolby Laboratories
Arris
Huawei Technologies Co. Ltd
UCL
China
United States
United States
China
Belgium
Vinod Kumar
Bill
Phillip
Dimitrie
Detlev
Marco
Frederic
Nokia
Universal Pictures [Comcast]
DTS, Inc.
Sigma Designs
Fraunhofer HHI
EPFL
Canon Research France
McCann
Minezawa
Minoo
Misra
Mitrea
Morrell
Murtaza
Ken
Akira
Koohyar
Kiran
Mihai
Martin
Adrian
Zetacast/Dolby
Mitsubishi Electric Corporation
Arris Group Inc.
Sharp
Institut Mines Telecom
Qualcomm
Fraunhofer IIS
Naccari
Nakachi
Nakagami
NAM
Narasimhan
Nasiopoulos
Natu
Navali
Neuendorf
Neukam
Nguyen
NICHOLSON
Norkin
Numanagic
Ochoa
Oh
Oh
OH
Oh
Oh
Ohm
Onders
Ostermann
Oyman
Pahalawatta
PALLONE
Pan
Matteo
Takayuki
Ohji
HYEONWOO
Sam
Panos
Ambarish
Prabhu
Max
Christian
Tung
Didier
Andrey
Ibrahim
Idoia
ByungTae
Hyun Mook
HYUN OH
Kwan-Jung
Sejin
Jens-Rainer
Timothy
Joern
Ozgur
Peshala
Gregory
Hao
BBC
NTT
Sony corporation
DONGDUK WOMEN'S UNIVERSITY
Arris Inc
UBC
Australian Government
Ericsson
Fraunhofer IIS
Fraunhofer IIS
Fraunhofer HHI
VITEC
Netflix
Simon Fraser University
Stanford University
Korea Aerospace University
LG Electronics
WILUS Inc.
ETRI
LG Electronics
RWTH Aachen
Dolby Laboratories
Leibniz University Hannover
Intel Corporation
AT&T Entertainment Group
Orange
Apple Inc.
Finland
United States
United States
United States
Germany
Switzerland
France
United
Kingdom
Japan
United States
United States
France
United States
Germany
United
Kingdom
Japan
Japan
Korea
United States
Canada
Australia
United States
Germany
Germany
Germany
France
United States
Canada
United States
Korea
Korea
Korea
Korea
Korea
Germany
United States
Germany
United States
United States
France
United States
PANAHPOUR
TEHRANI
Panusopone
parhy
Paridaens
parikh
Park
Park
Park
PARK
PENG
Pereira
Peters
Philippe
Piron
Poirier
Preda
Pu
Qiu
Quackenbush
Ramasubramonia
n
Rapaka
Reznik
Ridge
ROH
Rosewarne
Rouvroy
Rusanovskyy
Said
Salehin
Samuelsson
Sanchez de la
Fuente
Schierl
Seeger
Segall
Sen
Senoh
Seregin
Setiawan
Shima
Shivappa
Singer
Skupin
So
Sodagar
Sohn
Mehrdad
Krit
manindra
Tom
nidhish
Kyungmo
Min Woo
Sang-hyo
YOUNGO
WEN-HSIAO
Fernando
Nils
Pierrick
Laurent
Tangi
Marius
Fangjun
Wenyi
Schuyler
Adarsh
Krishnan
Krishnakanth
Yuriy
Justin
DONGWOOK
Chris
Gaël
Dmytro
Amir
S M Akramus
Jonatan
NAGOYA UNIVERSITY
ARRIS
Nvidia
Ghent University - iMinds - Multimedia Lab
Nokia Ltd
Samsung Electronics Co., Ltd.
Samsung Electronics
Hanyang University
SAMSUNG
ITRI International/NCTU
Instituto Superior Técnico
Qualcomm
Orange
Nagravision
Technicolor
Institut MINES TELECOM
Dolby Laboratories
Hisilicon
Audio Research Labs
Japan
United States
United States
Belgium
Finland
Korea
Korea
Korea
Korea
United States
Portugal
United States
France
Switzerland
France
France
United States
China
United States
Qualcomm Technologies Inc.
Qualcomm Inc
Brightcove, Inc.
Nokia
LG ELECTRONICS
Canon
Intopix
Qualcomm Inc.
Qualcomm Technologies, Inc.
Qualcomm Technologies Inc
Ericsson
United States
United States
United States
Finland
Korea
Australia
Belgium
United States
United States
United States
Sweden
Yago
Thomas
Chris
Andrew
Deep
Takanori
Vadim
Panji
Masato
Shankar
David
Fraunhofer HHI
Fraunhofer HHI
NBCUniversal/Comcast
Sharp
Qualcomm
NICT
Qualcomm
Huawei Technologies Duesseldorf GmbH
Canon Inc.
Qualcomm Technologies Inc.
Apple
Fraunhofer Heinrich Hertz Institute for
Telecommunication
Samsung Elec.
Microsoft
Seoul Women's University
Germany
Germany
United States
United States
United States
Japan
United States
Germany
Japan
United States
United States
Robert
Youngwan
Iraj
Yejin
Germany
Korea
United States
Korea
Sole
Song
Song
Qualcomm
Samsung
Qualcomm
United States
Korea
United States
Nvidia
Poznan University of Technology
Technicolor
Qualcomm Incorporated
Samsung Electronics
Ericsson
Apple Inc.
Fraunhofer HHI
NHK
LG Electronics
Microsoft
Panasonic R&D Center Singapore
Arris International
Sony Corporation
United States
Poland
United States
Germany
United States
Sweden
United States
Germany
Japan
Korea
United States
Singapore
United States
Japan
Swaminathan
Joel
Jaeyeon
Jeongook
Ranga
Ramanujam
Olgierd
Alan
Thomas
Dale
Jacob
Yeping
Karsten
Takehiro
Jong-Yeul
Gary
Haiwei
wendell
Teruhiko
Viswanathan
(Vishy)
Adobe Systems Inc.
Sychev
Syed
T. Pourazad
Takabayashi
Tanimoto
Teo
Tescher
Thoma
Thomas
Tian
Toma
Topiwala
Tourapis
Tu
Tulvan
Tung
van Deventer
Van Wallendael
Vanam
Vetro
Voges
Wan
Wang
Wang
Wang
Wang
Watanabe
Welch
Maksim
Yasser
Mahsa
Kazuhiko
Masayuki
Han Boon
Andrew
Herbert
Emmanuel
Dong
Tadamasa
Pankaj
Alexandros
Jih-Sheng
Christian
Yi-Shin
Oskar
Glenn
Rahul
Anthony
Jan
Wade
Jian
Ronggang
Xiaquan
Ye-Kui
Shinji
Jim
Huawei Technologies
Comcast
TELUS & UBC
Sony Corporation
Nagoya Industrial Science Research Institute
Panasonic
Microsoft
Fraunhofer IIS
TNO
Mitsubishi Electric Research Labs
Panasonic Corp.
FastVDO LLC
Apple Inc
ITRI international
Mines-Telecom, Telecom Sud-Paris
MStar Semiconductor, Inc.
TNO
Ghent University - iMinds - Multimedia Lab
InterDigital Communications
Mitsubishi Electric
Leibniz Universitaet Hannover
Broadcom Corporation
Polycom
Peking University Shenzhen Graduate School
Hisilicon
Qualcomm Incorporated
ITSCJ
IneoQuest Technologies
United States
Russian
Federation
United States
Canada
Japan
Japan
Singapore
United States
Germany
Netherlands
United States
Japan
United States
United States
China
France
China
Netherlands
Belgium
United States
United States
Germany
United States
Canada
China
China
United States
Japan
United States
Srinivasan
Stankiewicz
Stein
Stockhammer
Stolitzka
Strom
Su
Suehring
Sugimoto
Suh
Sullivan
Sun
sun
Suzuki
Wenger
Willème
Won
Wu
Stephan
Alexandre
Dongjae
Kai
Vidyo
Université Catholique de Louvain (UCL)
Sejong Univ.
Panasonic R&D Center Singapore
Wu
Wuebbolt
XIANGJIAN
Xie
Xiu
Xu
Xu
Xu
XU
Yamamoto
Yang
Yang
Yang
YANG
Ye
Yea
Yin
Yin
Yoon
YU
Yu
Yun
Yun
Yun
Zernicki
Zhang
Zhang
Zhang
ZHANG
ZHAO
Zhao
Zhao
Zhou
Zhou
Zhu
Zoia
Zou
Ping
Oliver
WU
Qingpeng
Xiaoyu
Haiyan
Jizheng
Xiaozhong
YILING
Yuki
Anna
Haitao
Jar-Ferr
JEONG MIN
Yan
Sehoon
Jiaxin
Peng
Kyoung-ro
Lu
Yue
JaeKwan
Jihyeok
Jihyeok
Tomasz
Li
Louis
Wenhao
XINGGONG
WILF
Xin
Zhijie
Jiantong
Minhua
Jianqing
Giorgio
Feng
ZTE UK Ltd
Technicolor
Kwangwoon University
Huawei
InterDigital Communications Inc.
Hanyang University
Microsoft
MediaTek
Shanghai Jiaotong University
Sony Corporation
Korea Aerospace University
Huawei Technologies Co., Ltd.
National Cheng Kung University
Korea Electronics Technology Institute
InterDigital Communications
LG Electronics
Huawei
Dolby
Konkuk University
Zhejiang University
Arris
ETRI
Kyung Hee University
Kyunghee University
Zylia
Qualcomm
AMD
Intel Corp.
Peking University
VIXS
Qualcomm Technologies, Inc.
Huawei Technologies Duesseldorf GmbH
Huawei Technologies Co., LTD.
Broadcom
Fujitsu R&D Center
Independent Consultant
Qualcomm
United States
Belgium
Korea
Singapore
United
Kingdom
Germany
Korea
China
United States
Korea
China
United States
China
Japan
Korea
China
China
Korea
United States
Korea
China
United States
Korea
China
United States
Korea
Korea
Korea
Poland
United States
Canada
China
China
Canada
United States
Germany
China
United States
China
Switzerland
United States
Annex B – Agenda
1 Opening
2 Roll call of participants
3 Approval of agenda
4 Allocation of contributions
5 Communications from Convenor
6 Report of previous meetings
7 Workplan management
7.1 Media coding
7.1.1 Support for Dynamic Range Control
7.1.2 Additional Levels and Supplemental Enhancement Information
7.1.3 Alternative depth information
7.1.4 Printing material and 3D graphics coding for browsers
7.1.5 Metadata for Realistic Material Representation
7.1.6 Composite Font Representation
7.1.7 Video Coding for Browsers
7.1.8 Internet Video Coding
7.1.9 MVCO
7.1.10 Contract Expression Language
7.1.11 Media Contract Ontology
7.1.12 FU and FN extensions for Parser Instantiation
7.1.13 FU and FN extensions for SHVC and Main10 Profiles of SHVC
7.1.14 Tools for Reconfigurable Media Coding Implementations
7.1.15 Support for MPEG-D DRC
7.1.16 Media Context and Control – Control Information
7.1.17 Media Context and Control – Sensory Information
7.1.18 Media Context and Control – Virtual World Object Characteristics
7.1.19 Media Context and Control – Data Formats for Interaction Devices
7.1.20 Media Context and Control – Common Types and Tools
7.1.21 HEVC
7.1.22 3D Audio
7.1.23 3D Audio Profiles
7.1.24 3D Audio File Format Support
7.1.25 Carriage of systems metadata
7.1.26 Free Viewpoint Television
7.1.27 Higher Dynamic Range and Wide Gamut Content Distribution
7.1.28 Future Video Coding
7.1.29 Processing and Sharing of Media under User Control
7.1.30 Genome Compression
7.1.31 Lightfield Formats
7.2 Composition coding
7.3 Description coding
7.3.1 MPEG-7 Visual
7.3.2 Compact Descriptors for Video Analysis
7.3.3 Green Metadata for HEVC SEI message
7.3.4 MPEG-21 User Description
7.4 Systems support
7.4.1 Coding-independent codepoints
7.4.2 Media orchestration
7.5 IPMP
7.6 Digital Item
7.7 Transport and File formats
7.7.1 Carriage of Layered HEVC in MPEG-2 TS
7.7.2 Carriage of Green Metadata
7.7.3 Carriage of 3D Audio
7.7.4 Carriage of Quality Metadata in MPEG-2 Systems
7.7.5 Enhanced carriage of HEVC
7.7.6 Carriage of AVC based 3D video excluding MVC
7.7.7 Carriage of Timed Metadata Metrics of Media in the ISO Base Media File Format
7.7.8 Carriage of ROI information
7.7.9 MPEG Media Transport
7.7.10 Image File Format
7.7.11 Authentication, Access Control and Multiple MPDs
7.7.12 DASH for Full Duplex Protocols
7.7.13 MPEG-DASH Implementation Guidelines
7.7.14 Server and Network Assisted DASH
7.8 Multimedia architecture
7.8.1 MPEG-M Architecture
7.8.2 MPEG-M API
7.8.3 MPEG-V Architecture
7.8.4 Media-centric Internet of Things and Wearables
7.8.5 Big media
7.9 Application formats
7.9.1 Augmented Reality AF
7.9.2 Publish/Subscribe Application Format (PSAF)
7.9.3 Multisensorial Effects Application Format
7.9.4 Media Linking Application Format
7.9.5 Screen Sharing Application Format
7.9.6 Omnidirectional Media Application Format
7.9.7 Media-related privacy management Application Format
7.10 Reference implementation
7.10.1 MVC plus depth extension of AVC Reference Software
7.10.2 3D extension of AVC Reference Software
7.10.3 MPEG-4 Audio Synchronization Reference Software
7.10.4 Pattern based 3D mesh compression Reference Software
7.10.5 Support for Dynamic Range Control, New Levels for ALS Simple Profile, and Audio Synchronization Reference Software
7.10.6 Video Coding for Browsers Reference Software
7.10.7 Video coding for Internet Video Coding
7.10.8 CEL and MCO Reference Software
7.10.9 MPEG-7 Visual Reference Software
7.10.10 CDVS Reference Software
7.10.11 ARAF Reference Software
7.10.12 Media Tool Library Reference Software
7.10.13 SAOC and SAOC Dialogue Reference Software
7.10.14 DRC Reference Software
7.10.15 Reference Software for MPEG-V
7.10.16 Reference Software for MPEG-M
7.10.17 MMT Reference Software
7.10.18 HEVC Reference Software
7.10.19 HEVC RExt Reference Software
7.10.20 MV-HEVC Reference Software
7.10.21 SHVC Reference Software
7.10.22 3D-HEVC Reference Software
7.10.23 3D Audio Reference Software
7.10.24 MPEG-DASH Reference Software
7.11 Conformance
7.11.1 New levels for AAC profiles and uniDRC support
7.11.2 Multi-resolution Frame Compatible Stereo Coding extension of AVC Conformance
7.11.3 3D-AVC Conformance
7.11.4 MFC+Depth Extension of AVC Conformance
7.11.5 Additional Multichannel Conformance Data
7.11.6 Pattern based 3D mesh compression Conformance
7.11.7 Video Coding for Browsers Conformance
7.11.8 Internet Video Coding Conformance
7.11.9 CDVS Conformance
7.11.10 CEL and MCO Conformance
7.11.11 ARAF Conformance
7.11.12 Media Tool Library Conformance
7.11.13 SAOC and SAOC Dialogue Enhancement Conformance
7.11.14 MPEG-V Conformance
7.11.15 Conformance for MPEG-M
7.11.16 MMT Conformance
7.11.17 HEVC Conformance
7.11.18 3D Audio Conformance
7.12 Maintenance
7.12.1 Systems coding standards
7.12.2 Video coding standards
7.12.3 Audio coding standards
7.12.4 3DG coding standards
7.12.5 Systems description coding standards
7.12.6 Visual description coding standards
7.12.7 Audio description coding standards
7.12.8 MPEG-21 standards
7.12.9 MPEG-A standards
8 Organisation of this meeting
8.1 Tasks for subgroups
8.2 Joint meetings
8.3 Room assignment
9 WG management
9.1 Terms of reference
9.2 Officers
9.3 Editors
9.4 Liaisons
9.5 Responses to National Bodies
9.6 Work item assignment
9.7 Ad hoc groups
9.8 Asset management
9.8.1 Reference software
9.8.2 Conformance
9.8.3 Test material
9.8.4 URI
9.9 IPR management
9.10 Facilities
9.11 Work plan and time line
10 Administrative matters
10.1 Schedule of future MPEG meetings
10.2 Promotional activities
11 Resolutions of this meeting
12 A.O.B.
13 Closing
Annex C – Input contributions
MPEG number – Title – Source
m37527 Input on 14496-15 defect report
Ye-Kui Wang, Guido Franceschini, David Singer
m37528 FTV AHG: Results of newer HTM codecs on FTV test sequences
Takanori Senoh, Akio Ishikawa, Makoto Okui, Kenji Yamamoto, Naomi Inoue, Pablo Carballeira López, Francisco Morán Burgos
m37529 Resubmission of Swissaudec's MPEG-H "Phase 2" Core Experiment According to MPEG Plenary's Decision from Oct 23, 2015
Clemens Par
m37531 Study Of 23008-3 DAM4 (w15851) - Carriage Of Systems Metadata
Michael Dolan, Dave Singer, Schuyler Quackenbush
m37532 Functional components for Media Orchestration
M. Oskar van Deventer
m37533 Proposal of additional text and data for MMT Implementation guidelines
Shuichi Aoki, Yuki Kawamura
m37534 Contribution on Media Orchestration
Jean-Claude Dufourd
m37535 Hybrid Log-Gamma HDR
A. Cotton, M. Naccari (BBC)
m37536 Description of the reshaper parameters derivation process in ETM reference software
K. Minoo (Arris), T. Lu, P. Yin (Dolby), L. Kerofsky (InterDigital), D. Rusanovskyy (Qualcomm), E. Francois (Technicolor)
m37537
Report of HDR Core
Experiment 4
P. Topiwala (FastVDO), E. Alshina (Samsung), A. Smolic
(Disney), S. Lee (Qualcomm)
m37538
Report of HDR Core
Experiment 5
P. Topiwala (FastVDO), R. Brondijk (Philips), J. Sole
(Qualcomm), A. Smolic (Disney),
m37539
Encoder optimization for
HDR/WCG coding
Y. He, Y. Ye, L. Kerofsky (InterDigital)
Timed Metadata and
m37540 Orchestration Data for
Media Orchestration
Sejin Oh
m37542 Report on CE6
R.Brondijk(Philips), S.Lasserre(Technicolor), Y.He(Interdigital),
D. Rusanovskyy (Qualcomm)
Some observations on
visual quality of Hybrid
m37543
Log-Gamma (HLG) TF
processed video (CE7)
A. Luthra, D. Baylon, K. Minoo, Y. Yu, Z. Gu
m37544 Summary of Voting on ISO/IEC 23009-1:2014/PDAM 4
SC 29 Secretariat
m37545 Summary of Voting on ISO/IEC 23008-8:201x/DAM 1
SC 29 Secretariat
m37546 Summary of Voting on ISO/IEC 23008-1:2014/DCOR 2
SC 29 Secretariat
m37547 Summary of Voting on ISO/IEC DIS 21000-21 [2nd Edition]
SC 29 Secretariat
m37548 Summary of Voting on ISO/IEC DIS 21000-22
SC 29 Secretariat
m37549 Summary of Voting on ISO/IEC 14496-26:2010/DAM 4
SC 29 Secretariat
m37550 Summary of Voting on ISO/IEC DIS 21000-20 [2nd Edition]
SC 29 Secretariat
m37551 Summary of Voting on ISO/IEC DIS 23008-4
SC 29 Secretariat
m37552 Summary of Voting on ISO/IEC 14496-12:201x/PDAM 1
SC 29 Secretariat
m37553 Summary of Voting on ISO/IEC 14496-3:2009/PDAM 6
SC 29 Secretariat
m37554 Summary of Voting on ISO/IEC 23008-1:201x/PDAM 1
SC 29 Secretariat
m37555 Summary of Voting on ISO/IEC 13818-1:2015/DAM 5
SC 29 Secretariat
m37556 Summary of Voting on ISO/IEC 13818-1:2015/DAM 6
SC 29 Secretariat
m37557 Summary of Voting on ISO/IEC DIS 23000-13 [2nd Edition]
SC 29 Secretariat
m37558 Summary of Voting on ISO/IEC CD 23006-1 [3rd Edition]
SC 29 Secretariat
m37559 Summary of Voting on ISO/IEC 23008-3:201x/DAM 2.2
SC 29 Secretariat
m37560 Summary of Voting on ISO/IEC 23003-3:2012/DAM 3
SC 29 Secretariat
m37561 Summary of Voting on ISO/IEC DIS 23006-2 [3rd Edition]
SC 29 Secretariat
m37562 Summary of Voting on ISO/IEC DIS 23006-3 [3rd Edition]
SC 29 Secretariat
m37563 Summary of Voting on ISO/IEC 23001-11:201x/DAM 1
SC 29 Secretariat
m37564 Summary of Voting on ISO/IEC 13818-1:2015/PDAM 7
SC 29 Secretariat
m37565 Table of Replies on ISO/IEC 14496-5:2001/FDAM 37
ITTF via SC 29 Secretariat
m37566 Table of Replies on ISO/IEC 14496-4:2004/FDAM 43
ITTF via SC 29 Secretariat
m37567 Table of Replies on ISO/IEC FDIS 23001-12
ITTF via SC 29 Secretariat
m37568 Table of Replies on ISO/IEC 14496-27:2009/FDAM 6
ITTF via SC 29 Secretariat
m37569 Table of Replies on ISO/IEC FDIS 23001-8 [2nd Edition]
ITTF via SC 29 Secretariat
m37570 Table of Replies on ISO/IEC 13818-1:2015/FDAM 3
ITTF via SC 29 Secretariat
m37571 Table of Replies on ISO/IEC 13818-1:2015/FDAM 2
ITTF via SC 29 Secretariat
m37572 Table of Replies on ISO/IEC 13818-1:201x/FDAM 4
ITTF via SC 29 Secretariat
m37573 Summary of Voting on ISO/IEC 14496-3:2009/Amd.3:2012/DCOR 1
SC 29 Secretariat
m37574 Table of Replies on ISO/IEC FDIS 23001-7 [3rd Edition]
ITTF via SC 29 Secretariat
m37575 Table of Replies on ISO/IEC FDIS 23008-12
ITTF via SC 29 Secretariat
m37576
Liaison Statement from
JTC 1/WG 10
JTC 1/WG 10 via SC 29 Secretariat
m37577
Liaison Statement from SC
SC 37 via SC 29 Secretariat
37
Liaison Statement from
m37578 ITU-T SG 16 on video
coding collaboration
ITU-T SG 16 via SC 29 Secretariat
Liaison Statement from
ITU-T SG 16 on
m37579 discontinued work items
on IPTV multimedia
application frameworks
ITU-T SG 16 via SC 29 Secretariat
Liaison Statement from
m37580 ITU-R SG 6 on ITU-R
BS.1196-5
ITU-R SG 6 via SC 29 Secretariat
Liaison Statement from
m37581 JTC 1/WG 10 on Invitation JTC 1/WG 10 via SC 29 Secretariat
to the 4th JTC 1/WG 10
Meeting
Liaison Statement from
JTC 1/WG 10 on Logistical
m37582
JTC 1/WG 10 via SC 29 Secretariat
information for the 4th JTC
1/WG 10 Meeting
m37583
Liaison Statement from
ITU-T SG 20
ITU-T SG 20 via SC 29 Secretariat
m37584
Liaison Statement from
JTC 1/WG 10
JTC 1/WG 10 via SC 29 Secretariat
m37585
Liaison Statement from
IETF
IETF via SC 29 Secretariat
IEC CDV 62766-4-1,
62766-4-2, 62766-5-1,
m37586
62766-5-2, 62766-6,
62766-7 and 62766-8
IEC TC 100 via SC 29 Secretariat
m37587 IEC DTR 62921 Ed. 2.0
IEC TC 100 via SC 29 Secretariat
m37588 IEC CDV 63028 Ed. 1.0
IEC TC 100 via SC 29 Secretariat
m37589
IEC CD 61937-13 and
61937-14
IEC TC 100 via SC 29 Secretariat
m37590
IEC CD 61937-2 Ed.
2.0/Amd.2
IEC TC 100 via SC 29 Secretariat
m37591
IEC CD 63029 [IEC
100/2627/CD]
IEC TC 100 via SC 29 Secretariat
m37592
IEC CD 63029 [IEC
100/2627A/CD]
IEC TC 100 via SC 29 Secretariat
IEC NP: Universal Serial
m37593 Bus interfaces for data and IEC TC 100 via SC 29 Secretariat
power - Part 3-1
IEC NP: Universal Serial
m37594 Bus interfaces for data and IEC TC 100 via SC 29 Secretariat
power - Part 1-3
m37595
IEC CDV 62680-1-3,
62680-3-1 and 62943
IEC TC 100 via SC 29 Secretariat
IEC CDV MIDI (MUSICAL
INSTRUMENT DIGITAL
INTERFACE)
m37596
IEC TC 100 via SC 29 Secretariat
SPECIFICATION 1.0
(ABRIDGED EDITION,
2015)
IEC NP MIDI (MUSICAL
INSTRUMENT DIGITAL
INTERFACE)
m37597
SPECIFICATION 1.0
(ABRIDGED EDITION,
2015)
IEC TC 100 via SC 29 Secretariat
m37598 IEC CDV 63002 Ed. 1.0
IEC TC 100 via SC 29 Secretariat
National Body Contribution
on observation regarding
m37599
USNB via SC 29 Secretariat
convergence on media
format
m37600
National Body Contribution
USNB via SC 29 Secretariat
on the quality of ISOs
editing processes
Patent declarations where
m37601 the patent holder is
ITTF via SC 29 Secretariat
unwilling to grant licences
m37602
Report on CE7 xCheck
and Viewing
m37603
Corrections of TEMI text
errors to include in COR2
5G and future media
m37604 consumption [5G/Beyond
UHD]
m37605
Report of HDR Core
Experiment 1
Jean Le Feuvre
Emmanuel Thomas, Toon Norp, José Almodovar
Jacob Strom, Joel Sole, Yuwen He
Proposed Improvements
to Algorithm Description of J. Chen (Qualcomm), E. Alshina (Samsung), G. J. Sullivan
m37606
Joint Exploration Test
(Microsoft), J.-R. Ohm (RWTH Aachen Univ.), J. Boyce (Intel)
Model 1
[MMT-IG] Proposal of
m37607 additional text for chapter
5.7.2
Yejin Sohn, Minju Cho, Jongho Paik
[FTV AHG] Response to
Call for Evidence on Freem37608
Lu Yu(Zhejiang University), Qing Wang, Ang Lu, Yule Sun
Viewpoint Television:
Zhejiang University
m37609
Report of AHG on Media
Orchestration
Rob Koenen for AHG,
m37610
Performance of JEM 1
tools analysis
E. Alshina, A. Alshin, K. Choi, M. Park (Samsung)
CE1-related: Optimization
of HEVC Main 10 coding
m37611 in HDR videos Based on
C. Jung, S. Yu, Q. Lin(Xidian Univ.), M. Li, P. Wu(ZTE)
Disorderly Concealment
Effect
Adaptive Quantizationm37612 Based HDR video Coding C. Jung, Q. Fu, G. Yang, M. Li, P. Wu
with HEVC Main 10 Profile
Adaptive PQ: Adaptive
Perceptual Quantizer for
m37613
HDR video Coding with
HEVC Main 10 Profile
C. Jung, S. Yu, P. Ke, M. Li, P. Wu
m37614
CE-6 test4.2: Color
enhancement
Y. He, L. Kerofsky, Y. Ye (InterDigital)
m37615
HEVC encoder
optimization
Y. He, Y. Ye, L. Kerofsky (InterDigital)
m37616
[FDH] Comments on
PoLin Lai, Shan Liu, Lulin Chen, Shawmin Lei
DASH-FDH push-template
CE-FDH: Streaming flow
m37617 control of sever push with
a push directive
m37618
Lulin Chen, Shan Liu, Shawmin Lei
CE-FDH- Server push with
Lulin Chen, Shan Liu, PoLin Lai, Shawmin Lei
a fragmented MPD
m37619 Luma delta QP adjustment J. Kim, J. Lee, E. Alshina, Y. Park(Samsung)
based on video statistical
information
m37620
Comments on DASH-FDH
Li Liu, XG Zhang
URL Templates
m37621
An automatic push directive
Peter Klenner, Frank Herrmann
m37622
[CE-FDH] Comments on
push template
Franck Denoual, Frederic Maze, Herve Ruellan
m37623
[CE-FDH] Editorial
comments on WD
Franck Denoual, Frederic Maze,
Adaptive Gamut
m37624 Expansion for Chroma
Components
A Dsouza, Aishwarya, K Pachauri (Samsung)
m37625
HDR-VQM Reference
Code and its Usage
m37626
[FDH] Comments on FDH
Kevin Streeter, Vishy Swaminathan
WebSocket Bindings
m37627
SCC encoder
improvement
Y.-J. Chang, P.-H. Lin, C.-L. Lin, J.-S. Tu, C.-C. Lin (ITRI)
m37628
light field uses cases and
workflows
D. Doyen, A. Schubert, L. Blondé, R. Doré
K Pachauri, S Sahota (Samsung)
[MMT-IG]Use case:
Combination of MMT and
m37629
Minju Cho, Yejin Sohn, Jongho Paik
HTTP streaming for
synchronized presentation
m37630
CE7: Cross-check report
for Experiment 7.2
m37631 HEVC tile tracks brand
m37632
L-HEVC track layout
simplifications
M. Naccari, M. Pindoria (BBC)
Jean Le Feuvre, Cyril Concolato, Franck Denoual, Frédéric
Mazé
Jean Le Feuvre, Cyril Concolato, Franck Denoual, Frédéric
Mazé
Selective Encryption for
m37633 AVC Video Streams
(MRPM)
cyril bergeron, Benoit Boyadjis, Sebastien lecomte
Selective encryption for
m37634 HEVC video streams
(MRPM)
Wassim Hamidouche, Cyril Bergeron, Olivier Déforges
m37635
PKU’s Response to
MPEG CfP for Compact
m37636
Descriptor for Visual
Analysis
m37637
Summary of Voting on
ISO/IEC CD 23000-17
Zhangshuai Huang, Ling-Yu Duan, Jie Chen, Longhui Wei,
Tiejun Huang, Wen Gao,
SC 29 Secretariat
m37638 (removed)
m37639 Summary of Voting on ISO/IEC 14496-14:2003/DCOR 2
SC 29 Secretariat
m37640 Summary of Voting on ISO/IEC 23003-3:2012/Amd.1:2014/DCOR 2
SC 29 Secretariat
m37641 Summary of Voting on ISO/IEC 23008-12:201x/DCOR 1
SC 29 Secretariat
m37642 Summary of Combined Voting on ISO/IEC 14496-5:2001/PDAM 42
SC 29 Secretariat
m37643 Summary of Combined Voting on ISO/IEC 23003-4:2015/PDAM 1
SC 29 Secretariat
m37644 Summary of Voting on ISO/IEC DIS 14496-15 [4th Edition]
SC 29 Secretariat
m37645 Summary of Voting on ISO/IEC 23008-5:2015/DAM 3
SC 29 Secretariat
m37646 Summary of Voting on ISO/IEC 23008-5:201x/DAM 4
SC 29 Secretariat
m37647 (removed)
m37648 Summary of Voting on ISO/IEC DIS 15938-6 [2nd Edition]
SC 29 Secretariat
m37649 Summary of Voting on ISO/IEC DIS 23000-16
SC 29 Secretariat
m37650 IEC CDV 62394/Ed.3
IEC TC 100 via SC 29 Secretariat
m37651 IEC NP Microspeakers
IEC TC 100 via SC 29 Secretariat
m37652 IEC NP Multimedia Vibration Audio Systems - Method of measurement for audio characteristics of audio actuator by pinna-conduction
IEC TC 100 via SC 29 Secretariat
m37653
Liaison Statement from
ITU-T SG 9
ITU-T SG 9 via SC 29 Secretariat
m37654
Liaison Statement from
DVB on HDR
DVB via SC 29 Secretariat
m37655
Liaison Statement from
DVB on MPEG-H Audio
DVB via SC 29 Secretariat
m37656
Liaison Statement from
3GPP
3GPP via SC 29 Secretariat
m37657 IEC CDV 62827-3 Ed. 1.0 IEC TC 100 via SC 29 Secretariat
IEC CD 60728-13-1 Ed.
2.0
IEC TC 100 via SC 29 Secretariat
m37659 IEC CDV 62943/Ed. 1.0
IEC TC 100 via SC 29 Secretariat
m37660 IEC CDV 63002 Ed.1.0
IEC TC 100 via SC 29 Secretariat
m37661 IEC TR 62935 Ed1.0
IEC TC 100 via SC 29 Secretariat
m37658
Centralized Texture-Depth
Jar-Ferr Yang, Hung-Ming Wang, Chiu-Yu Chen, Ting-Ann
m37662 Packing SEI Message for
Chang,
H.264/AVC
m37663
Centralized Texture-Depth Jar-Ferr Yang, Hung-Ming Wang, Chiu-Yu Chen, Ting-Ann
Packing SEI Message for Chang,
HEVC
m37664
BT.HDR and its
implications for VUI
m37665
Commercial requirements
T. Fautier (Harmonic), C. Fogg
for next-generation video
m37666
Hybrid Log Gamma
observations
C. Fogg
m37667
HDR Core Experiments
(CE) observations
C. Fogg
C. Fogg
m37668 ICtCp testing
C. Fogg
m37669 HDR-10 status update
C. Fogg
m37670 Best practices for HDR10
C. Fogg
m37671
Noise reduction for HDR
video
C. Fogg, A. Tourapis
m37672
Traffic engineering for
Multipath MMT
Jihyeok Yun, Kyeongwon Kim, Taeseop Kim, Doug Young Suh,
Jongmin Lee,
Quadtree plus binary tree
m37673 structure integration with
JEM tools
J. An, H. Huang, K. Zhang, Y.-W. Huang, S. Lei (MediaTek)
[FTV AhG] Soccer Light
m37674 Field Interpolation Applied Lode Jorissen, Patrik Goorts, Yan Li, Gauthier Lafruit
on Compressed Data
m37675
High level syntax support M. Naccari, A. Cotton, T. Heritage (BBC), Y. Nishida, A.
for ARIB STD-B67 in AVC Ichigaya (NHK), M. Raulet (ATEME)
m37676
Tutorial: Media Sync in
Oskar van Deventer
DVB-CSS and HbbTV 2.0
m37677
Coordination functionality
for MPEG MORE
Oskar van Deventer
Answer to the Call for
Evidence for Genome
Compression and Storage:
m37678 Reference-free
Jan Voges, Marco Munderloh, Joern Ostermann
Compression of Aligned
Next-Generation
Sequencing Data
CE7-related: Some
comments on system
m37679
gamma setting for HLG
(CE7)
Recommended conversion
process of YCbCr 4:2:0
m37680 10b SDR content from
E. Francois, K. Minoo, R. van de Vleuten, A. Tourapis
BT.709 to BT.2020 color
gamut
Report of HDR Core
Experiment 7: On the
m37681 visual quality of HLG
generated HDR and SDR
video
A. Luthra (Arris), E. Francois (Technicolor), L. van de Kerkhof
(Philips)
Enhanced Filtering and
m37682 Interpolation Methods for
Video Signals
A.M. Tourapis, Y. Su, D. Singer (Apple Inc)
m37683
Enhanced Luma
Adjustment Methods
A.M. Tourapis, Y. Su, D. Singer (Apple Inc)
m37684
HDRTools: Extensions
and Improvements
A.M. Tourapis, Y. Su, D. Singer (Apple Inc), C. Fogg
(Movielabs)
m37685 IEC TS 63033-1
IEC TC 100 via SC 29 Secretariat
m37686 IEC CD 62608-2 Ed.1.0
IEC TC 100 via SC 29 Secretariat
AHG on HDR and WCG:
m37687 Average Luma Controlled
Adaptive dQP
A. Segall, J. Zhao (Sharp), J. Strom, M. Pettersson, K.
Andersson (Ericsson)
m37688
HDR CE5: Report of
Experiment 5.3.2
W. Dai, M. Krishnan, P. Topiwala (FastVDO)
m37689
CE1-related: LUT-based
luma sample adjustment
C. Rosewarne, V. Kolesnikov (Canon)
m37690
Content colour gamut SEI
Hyun Mook Oh, Jangwon Choi, Jong-Yeul Suh
message
HDR CE2: report of
m37691 CE2.b-1 experiment
(reshaper setting 1)
E. Francois, Y. Olivier, C. Chevance (Technicolor)
Video usability information
m37692 signaling for SDR
Hyun Mook Oh, Jong-Yeul Suh
backward compatibility
m37693 HDR CE6: report of CE6-4.6b experiment (ETM using SEI message)
m37694
Re-shaper syntax
extension for HDR CE6
CE7: Results Core
m37695 Experiment 7.1 test a and
b
m37696
Non-normative HM
encoder improvements
m37697 MIoT Use-case
E. Francois, C. Chevance, Y. Olivier (Technicolor)
Hyun Mook Oh, Jong-Yeul Suh
M. Naccari, A. Cotton, M. Pindoria
K. Andersson, P. Wennersten, R. Sjöberg, J. Samuelsson, J.
Ström, P. Hermansson, M. Pettersson (Ericsson)
Jo Bukyun
HDR CE6: Core
Experiments 4.3 and 4.6a:
Description of the Philips Wiebe de Haan, Robert Brondijk, rocco goris, Rene van der
m37698
CE system in 4:2:0 and
Vleuten
with automatic reshaper
parameter derivation
m37699
HDR CE6-related: Cross
Check of CE6.46b
m37700
HDR CE7-related: xcheck
Robert Brondijk
7 1a & 1 b
Robert Brondijk, Wiebe de Haan
CE1-related: Optimization
of HEVC Main 10 coding
m37701 in HDR videos Based on
C. Jung, S. Yu, Q. Lin (Xidian Univ.), M. Li, P. Wu (ZTE)
Disorderly Concealment
Effect
HDR CE7: xCheck 1a 1b
m37702 and Core Experiement 2a
and 2b
m37703
CE2-related: Adaptive
Quantization-Based HDR
Robert Brondijk, Wiebe de Haan, Rene van der Vleuten, Rocco
Goris
C. Jung, Q. Fu, G. Yang(Xidian Univ.), M. Li, P. Wu(ZTE)
video Coding with HEVC
Main 10 Profile
m37704
HDR CE5: Report on Core
Robert Brondijk, Wiebe de Haan, Jeroen Stessen
Experiment CE5.3.1
m37705
CE5-related: xCheck of
CE5.3.2
robert brondijk, rocco goris
CE2-related: Adaptive PQ:
Adaptive Perceptual
m37706 Quantizer for HDR video
C. Jung, S. Yu, P. Ke(Xidian Univ.), M. Li, P. Wu(ZTE)
Coding with HEVC Main
10 Profile
m37707
Highly efficient HDR video Alan Chalmers, Jonathan Hatchett, Tom Bashford Rogers, Kurt
compression
Debattista
m37708
HDR CE6: Cross-check of
W. Dai, M. Krishnan, P. Topiwala (FastVDO)
6.46a
Future video coding
m37709 requirements on virtual
reality
J. Ridge, M.M. Hannuksela (Nokia)
CIECAM02 based
Orthogonal Color Space
m37710
Transformation for
HDR/WCG Video
Y. Kwak, Y. Baek(UNIST), J. Lee, J. W. Kang, H.-Y. Kim(ETRI)
Editorial Clarifications on
Text of ISO/IEC 13818m37711
1:2015 AMD7 Virtual
Segmentation
Yasser Syed, Alex Giladi, Wendell Sun,
Palette encoder
improvements for the 4:2:0
m37712
C. Gisquet, G. Laroche, P. Onno (Canon)
chroma format and
lossless
Comments on alignment of
m37713 SCC text with multi-view
C. Gisquet, G. Laroche, P. Onno (Canon)
and scalable
Bug fix for DPB operations
m37714 when current picture is a
X. Xu, S. Liu, S. Lei (MediaTek)
reference picture
Cross Check Report for
m37715 CE on HREP (Test Site
Fraunhofer IDMT)
Judith Liebetrau, Thomas Sporer, Alexander Stojanow
Bottom-up hash value
m37716 calculation and validity
check for SCC
W. Xiao (Xidian Univ.), B. Li, J. Xu (Microsoft)
HDR CE7-related:
m37717 Additional results for
Experiment 7.2a
M. Naccari, M. Pindoria, A. Cotton (BBC)
m37718
Crosscheck of HDR
CE5.3.1 (JCTVC-W0069)
J. Samuelsson, M. Pettersson, J. Strom (Ericsson)
Crosscheck of Luma delta
QP adjustment based on
m37719 video statistical
J. Samuelsson, J. Strom, P. Hermansson (Ericsson)
information (JCTVCW0039)
m37720 Report on HDR CE3
Vittorio Baroncini, Louis Kerofsky, Dmytro Rusanovskyy
[SAND] A Proposal to
m37721 improve SAND
alternatives
Iraj Sodagar
m37722
SJTU 4k test sequences
evaluation report
m37723
HDR CE2: CE2.a-2,
T. Lu, F. Pu, P. Yin, T. Chen, W. Husak (Dolby), Y. He, L.
CE2.c, CE2.d and CE2.e-3 Kerofsky, Y. Ye (InterDigital)
Namuk Kim, Seungsu Jeon, Huikjae Shim
m37724 HDR CE2 related: Further Improvement of JCTVC-W0084
T. Lu, F. Pu, P. Yin, T. Chen, W. Husak (Dolby), Y. He, L. Kerofsky, Y. Ye (InterDigital)
m37725
Indication of SMPTE 2094R. Yeung, S. Qu, P. Yin, T. Lu, T. Chen, W. Husak (Dolby)
10 metadata in HEVC
Description of Color
Graded SDR content for
m37726
HDR/WCG Test
Sequences
P. J. Warren, S. M. Ruggieri, W. Husak, T. Lu, P. Yin, F. Pu
(Dolby)
SDR backward
L. van de Kerkhof, W. de Haan (Philips), E. Francois
m37727 compatibility requirements
(Technicolor)
for HEVC HDR extension
VCEG Ad hoc group report
m37728 on HDR Video Coding
J. Boyce, E. Alshina, J. Samuelsson (AHG chairs)
(VCEG AHG5)
HDR CE2-related: some
m37729 experiments on reshaping Y. Olivier, E. Francois, C. Chevance
with input SDR
HDR CE3: Results of
subjective evaluations
m37730
conducted with the DSIS
method
Martin Rerabek, Philippe Hanhart, Touradj Ebrahimi
HDR CE3: Benchmarking
of objective metrics for
m37731
HDR video quality
assessment
Philippe Hanhart, Touradj Ebrahimi
Description of the
Exploratory Test Model
m37732
(ETM) for HDR/WCG
extension of HEVC
K. Minoo (Arris), T. Lu, P. Yin (Dolby), L. Kerofsky (InterDigital),
D. Rusanovskyy (Qualcomm), E. Francois (Technicolor), ,
m37733
Proposed text for BT.HDR
C. Fogg (MovieLabs)
in AVC
HDR CE2: Report of
CE2.a-3, CE2.c and CE2.d
m37734
Yue Yu, Zhouye Gu, Koohyar Minoo, David Baylon, Ajay Luthra
experiments (for reshaping
setting 2)
HDR CE2: Report of
CE2.b-2, CE2.c and CE2.d
m37735
Zhouye Gu, Koohyar Minoo, Yue Yu, David Baylon, Ajay Luthra
experiments (for reshaping
setting 2)
m37736
SHVC verification test
results
Proposed editorial
improvements to HEVC
m37737
Screen Content Coding
Draft Text 5
Y. He, Y. Ye (InterDigital), Hendry, Y.-K Wang (Qualcomm), V.
Baroncini (FUB)
R. Joshi, G. Sullivan, J. Xu, Y. Ye, S. Liu, Y.-K. Wang, G. Tech
m37738
Report of HDR Core
Experiment 6
Robert Brondijk, Sebastien Lasserre, Dmytro Rusanovskyy,
yuwen he
m37739
Report of HDR Core
Experiment 2
D.Rusanovskyy(Qualcomm), E.Francois (Technicolor),
L.Kerofsky (InterDigital), T.Lu(Dolby), K. Minoo(Arris),
m37740
HDR CE2: CE2.c-Chroma D.Rusanovskyy, J.Sole, A.K.Ramasubramonian, D. B. Sansli,
QP offset study report
M. Karczewicz (Qualcomm),
m37741
Effective Colour Volume
SEI
m37742
HDR CE5 test 3: Constant J. Sole, D. Rusanovskyy, A. Ramasubramonian, D. Bugdayci,
Luminance results
M. Karczewicz (Qualcomm)
A. Tourapis, Y. Su, D. Singer (Apple Inc.)
HDR CE2-related: Results
J. Sole, A. Ramasubramonian, D. Rusanovskyy, D. Bugdayci,
m37743 for combination of CE1
M. Karczewicz (Qualcomm)
(anchor 3.2) and CE2
m37744
HDR CE2: Report on
CE2.a-1 LCS
A.K. Ramasubramonian, J. Sole, D. Rusanovskyy, D.
Bugdayci, M. Karczewicz
m37745
HDR CE2: CE2.e-1
Enhancement of ETM
Dmytro Rusanovskyy
m37746
HDR CE6: Test 4.1
Reshaper from m37064
D. B. Sansli, A. K. Ramasubramonian, D. Rusanovskyy, J.
Sole, M. Karczewicz (Qualcomm)
Comparison of
Compression Performance
of HEVC Screen Content
m37747
B. Li, J. Xu, G. J. Sullivan (Microsoft)
Coding Extensions Test
Model 6 with AVC High
4:4:4 Predictive profile
Evaluation of Chroma
Subsampling for High
m37748
Dynamic Range Video
Compression
R. Boitard, M.T. Pourazad, P. Nasiopoulos
m37749 Evaluation of Backward-compatible HDR Transmission Pipelines
M. Azimi, R. Boitard, M. T. Pourazad, P. Nasiopoulos
m37750
Evaluation report of SJTU
T. Biatek, X. Ducloux (BCom)
Test Sequences
m37751
Media Orchestration in the
Krishna Chandramouli, Ebroul Izquierdo
Security Domain
m37752
ATSC Liaison on
HDR/WCG
Walt Husak
m37753
Report of HDR Core
Experiment 3
Vittorio Baroncini, Louis Kerofsky, Dmytro Rusanovskyy
[FTV AhG] Software for
m37754 SMV and FN sweeping
subjective tests
Yule Sun, Yan Li, Qing Wang, Ang Lu, Lu Yu, Krzysztof
Wegner, Gauthier Lafruit
Proposed Common Media
David Singer, Krasimir Kolarov, Kilroy Hughes, John Simmons,
m37755 Format for Segmented
Bill May, Jeffrey Goldberg, Alex Giladi, Will Law,
Media
Proposed Requirements
Krasimir Kolarov, John Simmons, David Singer, Iraj Sodagar,
m37756 for Common Media Format Bill May, Jeffrey Goldberg, Will Law, Yasser Syed, Sejin Oh,
for Segmented Media
Per Fröjdh, Kevin Streeter, Diego Gibellino
m37757
m37758
Closed form HDR 4:2:0
chroma subsampling
Andrey Norkin (Netflix)
(HDR CE1 and AHG5
related)
Cross-check report on
m37759 HDR/WCG CE1 new
anchor generation
D. Jun, J. Lee, J.W. Kang (ETRI)
[FTV AHG] View Synthesis
m37760 for Linear and Arc Video
Ji-Hun Mun, Yunseok Song, Yo-Sung Ho
Sequences
[PACO] Generalized
Playback Control: a
m37761
framework for flexible
playback control
Iraj Sodagar
m37762 [PACO] PACO CE Report Iraj Sodagar
Root Elements for Mediam37763 centric IoT and Wearable
(M-IoTW)
Anna Yang, Jae-Gon Kim, Sungmoon Chun, Hyunchul Ko
Common and Gesturem37764 Based Wearable
Description for M-IoTW
Sungmoon Chun, Hyunchul Ko, Anna Yang, Jae-Gon Kim
m37765
CE6-related: xCheck of
CE6 4.1
Answer to the Call for
Evidence on Genome
m37766 Compression: Review of
genomic information
compression tools
Robert Brondijk
Ibrahim Numanagic, Faraz Hach, James Bonfield, Claudio
Alberti, Jan Voges,
Proposal for the update to Claudio Alberti, Marco Mattavelli, Ana A. Hernandez-Lopez,
m37767 the database of genomic Mikel Hernaez, Idoia Ochoa, Rachel G. Goldfeder, Noah
test data
Daniels, Jan Voges, Daniel Greenfield
Revision of N15739
“Evaluation framework Claudio Alberti, Marco Mattavelli, Ana A. Hernandez-Lopez,
m37768 for lossy compression of
Mikel Hernaez, Idoia Ochoa, Rachel G. Goldfeder, Noah
genomic Quality
Daniels, Jan Voges, Daniel Greenfield
Values―
m37769
CDVSec: CDVS for
Fingerprint Matching
Corrections and
m37770 Clarifications on MPEG-H
3D Audio DAM3
Giovanni Ballocca, Attilio Fiandrotti, Massimo Mattelliano,
Alessandra Mosca
Oliver Wuebbolt, Johannes Boehm, Alexander Krueger, Sven
Kordon, Florian Keiler
HDR CE7: Comments on
visual quality and
compression performance
m37771
E. Francois, Y. Olivier, C. Chevance (Technicolor)
of HLG for SDR backward
compatible HDR coding
system
Fragmented MPD for
m37772 DASH streaming
applications
Lulin Chen, Shan Liu, PoLin Lai, Shawmin Lei
Introduction to tiled full
parallax light field display
m37773
and requirements for FTV
discussion
Danillo Graziosi, Zahir Y. Alpaslan, Hussein S. El-Ghoroury
m37774
Proposed amendment
items for ISO/IEC 14496-
Vladimir Levantovsky (on behalf of AHG on Font Format
Representation)
22 "Open Font Format"
m37775
Report of the AHG on Font Vladimir Levantovsky (On behalf of AHG on Font Format
Format Representation
Representation)
[Part 3] MPD chaining for
m37776 DASH ad insertion with
Iraj Sodagar
early termination use case
[FTV AhG] Elemental
images resizing method to Kazuhiro Hara, Masahiro Kawakita, Jun Arai, Tomoyuki
m37777
compress integral 3D
Mishina, Hiroshi Kikuchi
image
Evaluation Report of
m37778 Chimera Test Sequence
for Future Video Coding
H. Ko, S.-C. Lim, J. Kang, D. Jun, J. Lee, H. Y. Kim (ETRI)
m37779
JEM1.0 Encoding Results
S.-C. Lim, H. Ko, J. Kang, H. Y. Kim (ETRI)
of Chimera Test Sequence
m37780
Demonstration of audio
and video for MPEG-UD
Demo for MPEG-UD:
m37781 Face-to-face speech
translation
Hyun Sook Kim, Sungmoon Chun, Hyunchul Ko, Miran Choi
Miran Choi, Minkyu Lee, Young-Gil Kim, Sanghoon Kim
m37782 withdrawn
m37783
Report on the decoding
complexity of IVC
Sang-hyo Park, Seungho Kuk, Haiyan Xu, Yousun Park, Euee
S. Jang
JCT-3V AHG Report:
HEVC Conformance
m37784
testing development
(AHG5)
Y. Chen, T. Ikai, K. Kawamura, S. Shimizu, T. Suzuki
Proposed editorial
improvements to MVm37785
HEVC and 3D-HEVC
Conformance Draft
Y. Chen, T. Ikai, K. Kawamura, S. Shimizu, T. Suzuki
Information on how to
m37786 improve Text of DIS
14496-33 IVC
Sang-hyo Park, Yousun Park, Euee S. Jang
m37787
Liaison Statement from
DVB
DVB via SC 29 Secretariat
m37788
Liaison Statement from
ITU-R WP 6C
ITU-R WP 6C via SC 29 Secretariat
Proposed Draft 2 of
Reference Software for
m37789 Alternative Depth Info SEI T. Senoh (NICT)
Message Extension of
AVC
Defect report on ISO/IEC
m37790 13818-1 regarding STD
buffer sizes for HEVC
Karsten Grüneberg, Thorsten Selinger
m37791
Report on corrected USAC
Christof Fersch
conformance bitstreams
m37792
HEVC Green Metadata
Proposition
m37793
Suggested modification to
Wendell Sun, Prabhu Navali
structure in Virtual
nicolas.derouineau@vitec.com, nicolas.tizon@vitec.com,
yahia.benmoussa@univ-ubs.fr
Segmentation
JRS Response to Call for
Proposals for
Technologies Compact
m37794
Werner Bailer, Stefanie Wechtitsch
Descriptors for Video
Analysis (CDVA) - Search
and Retrieval
m37795 Cross-check of JCTVC-W0107: Closed form HDR 4:2:0 chroma subsampling
A. Tourapis
SJTU 4K test sequences
m37796 evaluation report from
Sejong University
N. Kim, J. Choi, G.-R. Kim, Y.-L. Lee (Sejong Univ.)
Proposal for the Definition
of a Ultra-Low Latency
m37797
Marco Mattavelli, Wassim Hamidouche,
HEVC Profile for Content
Production Applications
[FTV AhG] View Synthesis
m37798 on Compressed Big Buck Yan Li, Gauthier Lafruit
Bunny
Extension of Prediction
Modes in Chroma Intra
m37799
Coding for Internet Video
Coding
Anna Yang, Jae-Yung Lee, Jong-Ki Han, Jae-Gon Kim
m37800
Wearables and IoT&AAL
use case standardisation
Kate Grant
m37801
Light Fields for Cinematic
Virtual Reality
Tristan Salomé, Gauthier Lafruit
Follow up on application
scenarios for privacy and
security requirements on
m37802
Jaime Delgado, Silvia Llorente
genome usage,
compression, transmission
and storage
m37803 Proposed White Paper on MPEG-21 Contract Expression Language (CEL) and MPEG-21 Media Contract Ontology (MCO)
Jaime Delgado, Silvia Llorente, Laurent Boch, Víctor Rodríguez Doncel
m37804
Proof of concept: NAMF
for Multipath MMT
Jihyeok Yun, Kyeongwon Kim, Taeseop Kim, Dough Young
Suh, Jong Min Lee
m37805 113th MPEG Audio Report Schuyler Quackenbush
m37806 Editor's preparation for ISO/IEC 3rd FDIS 23006-2
Jae-Kwan Yun, Giuseppe Vavalà (CEDEO)
m37807 Editor's preparation for ISO/IEC 3rd FDIS 23006-3
Jae-Kwan Yun, Giuseppe Vavalà (CEDEO), Massimo Balestri (Telecom Italia)
Proposal of metadata
m37808 description for light display Jae-Kwan Yun, Yoonmee Doh, Hyun-woo Oh, Jong-Hyun Jang
in MPEG-V
m37809 Direction-dependent sub-TU scan order on intra prediction
S. Iwamura, A. Ichigaya (NHK)
[FTV AhG] Test materials
m37810 for 360 3D video
application discussion
Gun Bang, Gwhang soon Lee, Nam Ho Hur
m37811 Summary of Voting on ISO/IEC 14496-30:2014/DCOR 2
SC 29 Secretariat
m37812 Summary of Voting on ISO/IEC 13818-1:2015/Amd.1/DCOR 2
SC 29 Secretariat
m37813 Summary of Voting on ISO/IEC 14496-5:2001/PDAM 41
SC 29 Secretariat
m37814 Summary of Voting on ISO/IEC 23002-4:2014/PDAM 3
SC 29 Secretariat
m37815 Summary of Voting on ISO/IEC 23009-1:2014/DAM 3
ITTF via SC 29 Secretariat
Proposed number of core
m37816 channels for LC profile of
MPEG-H 3D Audio
Takehiro Sugimoto, Tomoyasu Komori
Proposal of Additional
m37817 Requirements for Media
Orchestration
Sejin Oh, Jangwon Lee, Jong-Yeul suh
Additional Use Cases and
m37818 Requirements for Media
Jangwon Lee, Sejin Oh, Jong-Yeul Suh
Orchestration
Additional Requirements
m37819 for Omnidirectional Media
Format
Jangwon Lee, Sejin Oh, Jong-Yeul Suh
Evaluation report of Bm37820 Com test sequence
(JCTVC-V0086)
O. Nakagami, T. Suzuki (Sony)
m37821
Comment on test
sequence selection
O. Nakagami, T. Suzuki (Sony)
m37822
Evaluation report of
Huawei test sequence
K. Choi, E. Alshina, A. Alshin, M. Park (Samsung)
m37823
Evaluation report of
Huawei test sequence
Kiho Choi, E. Alshina, A. Alshin, M. Park
m37824
Adaptive Multiple
Transform for Chroma
K. Choi, E. Alshina, A. Alshin, M. Park, M. Park, C. Kim
(Samsung)
m37825
Cross-check of JVETB0023
E. Alshina, K. Choi, A. Alshin, M. Park, M. Park, C. Kim
(Samsung)
m37826
Sony's crosscheck report
on USAC conformance
Yuki Yamamoto, Toru Chinen
Proposed simplification of
m37827 HOA parts in MPEG-H 3D Hiroyuki EHARA, Kai WU, Sua-Hong NEO
Audio Phase 2
Efficient color data
m37828 compression methods for
PCC
Li Cui, Haiyan Xu, Seung-ho Lee, Marius Preda, Christian
Tulvan, Euee S. Jang
m37829 Evaluation Report of
P. Philippe (Orange)
Chimera and Huawei Test
Sequences for Future
Video Coding
m37830
Comments on the defect
report of 14496-15
Teruhiko Suzuki, Kazushiko Takabayashi, Mitsuhiro
Hirabayashi (Sony), Michael Dolan (Fox)
A proposal of instantiation
Hae-Ryong Lee, Jun-Seok Park, Hyung-Gi Byun, Jang-Sik
m37831 to olfactory information for
Choi, Jeong-Do Kim
MPEG-V 4th edition part 1
m37832
MPEG-H Part 3 Profile
Definition
Christof Fersch
ETRI's cross-check report
m37833 for CE on High Resolution Seungkwon Beack, Taejin Lee
Envelope Processing
Maintenance report on
MPEG Surround
m37834
Extension tool for internal
sampling-rate
Seungkwon Beack, Taejin Lee, Jeongil Seo, Adrian Murtaza
RAI's contribution to
m37835 MPEG-UD reference
software
Alberto MESSINA, Maurizio MONTAGNUOLO
Requirements for
m37836 Omnidirectional Media
Application Format
Eric Yip, Byeongdoo Choi, Gunil Lee, Madhukar Budagavi,
Hossein Najaf-Zadeh, Esmaeil Faramarzi, Youngkwon
Lim(Samsung)
Proposed Text for
m37837 Omnidirectional Media
Application Format
Byeongdoo Choi, Eric Yip, Gunil Lee, Madhukar Budagavi,
Hossein Najaf-Zadeh, Esmaeil Faramarzi, Youngkwon
Lim(Samsung)
m37838 Descriptions on MANE
Youngwan So, Kyungmo Park
m37839 MMT service initiation
Youngwan So, Kyoungmo Park
Simplification of the
m37840 common test condition for X. Ma, H. Chen, H. Yang (Huawei)
fast simulation
Performance analysis of
m37841 affine inter prediction in
JEM1.0
H. Zhang, H. Chen, X. Ma, H. Yang (Huawei)
m37842
Harmonization of AFFINE,
H. Chen, S. Lin, H. Yang, X. Ma (Huawei)
OBMC and DBF
m37843
[CE-FDH] Fast start for
DASH FDH
Franck Denoual, Frederic Maze, Herve Ruellan, Youenn
Fablet, Nael Ouedraogo
m37844
New brand for Image File
Format
Franck Denoual, Frederic Maze
m37845
Non-normative JEM
encoder improvements
K. Andersson, P. Wennersten, R. Sjoberg, J. Samuelsson, J.
Strom, P. Hermansson, M. Pettersson (Ericsson)
Reference Software for
m37846 ISO BMFF Green
Metadata parsing
Khaled.Jerbi@thomson-networks.com,
Xavier.Ducloux@thomson-networks.com,
Patrick.Gendron@thomson-networks.com
[CE-FDH] Additional
m37847 parameters for Push
Directives
Kazuhiko Takabayashi, Mitsuhiro Hirabayashi
[PACO] Carriage of
m37848 playback control related
information/data
Kazuhiko Takabayashi, Mitsuhiro Hirabayashi
m37849 Comment on 23009-1 study DAM3
Mitsuhiro Hirabayashi, Kazuhiko Takabayashi, Paul Szucs
TRUFFLE: request of
m37850 virtualized network
function to MANE
m37851
Editorial comment on
14496-12
Yasuaki Yamagishi
Doug Young Suh, Bokyun Jo, Jongmin Lee
Mitsuhiro Hirabayashi, Kazuhiko Takabayashi, Mitsuru
Katsumata
The usage of fragment
m37852 counter for a bigger size of Dongyeon Kim, Youngwan So, Kyungmo Park
MFU
m37853
Clarifications on MPEG-H
DAM3
Sang Bae Chon, Sunmin Kim
Keypoint Trajectory
Coding on Compact
m37854
Descriptor for Video
Analysis (CDVA)
Dong Tian, Huifang Sun, Anthony Vetro
Evaluation Report of
Huawei and B-Com Test
m37855
Sequences for Future
Video Coding
F. Racapé, F. Le Léannec, T. Poirier (Technicolor)
m37856
[SAND] selection of
cached representations
Remi Houdaille
HDR CE2: crosscheck of
CE2.a-2, CE2.c, CE2.d
m37857
and CE2.e-3 (JCTVCW0084)
C. Chevance, Y. Olivier (Technicolor)
HDR CE2: crosscheck of
m37858 CE2.a-1 LCS (JCTVCW0101)
C. Chevance, Y. Olivier (Technicolor)
HDR CE6: crosscheck of
CE6.4.3 (JCTVC-W0063)
C. Chevance, Y. Olivier (Technicolor)
m37859
m37860 Enhancement of Enose CapabilityType for MPEG-V 4th edition part 2
Sungjune Chang, Hae-Ryong Lee, Hyung-Gi Byun, Jang-Sik Choi, Jeong-Do Kim
A modification of syntax
and examples for
Jong-Woo Choi, Hae-Ryong Lee, Hyung-Gi Byun, Jang-Sik
m37861
EnoseSensorType for
Choi, Jeong-Do Kim
MPEG-V 4th edition part 5
m37862
Adaptive reference sample
A. Filippov, V. Rufitskiy (Huawei)
smoothing simplification
m37863
Proposed Study on DAM3 Christian Neukam, Michael Kratschmer, Max Neuendorf,
for MPEG-H 3D Audio
Nikolaus Rettelbach, Toru Chinen
ISO/IEC 14496-15: on
m37864 extractor design for HEVC M. M. Hannuksela, V. K. Malamal Vadakital (Nokia)
files
ISO/IEC 14996-15:
decode time hints for
m37865
implicit reconstruction of
access units in L-HEVC
V. K. Malamal Vadakital, M. M. Hannuksela (Nokia)
m37866
Multi-layer HEVC support
in HEIF
V. K. Malamal Vadakital, M. M. Hannuksela (Nokia)
m37867
Progressive refinement
indication for HEIF
M. M. Hannuksela, E. B. Aksu (Nokia)
m37868 HEIF conformance files
E. B. Aksu (Nokia), L. Heikkilä (Vincit), J. Lainema, M. M.
Hannuksela (Nokia)
m37869
Derived visual tracks for
ISOBMFF and HEIF
V. K. Malamal Vadakital, M. M. Hannuksela (Nokia)
m37870
Relative location offsets
for ISOBMFF
M. M. Hannuksela, V. K. Malamal Vadakital (Nokia)
m37871 Comments on DAM4
Yago Sanchez, Robert Skupin, Karsten Grueneberg, Cornelius
Hellge, Thomas Schierl
Comments on the usecases, requirements and
m37872
specification for Common
Media Format
Thorsten Lohmar, Prabhu Navali, Per Fröjdh, Jonatan
Samuelsson
m37873
HEVC tile subsets in
14496-15
Karsten Grüneberg, Yago Sanchez
m37874
Thoughts on MPEG-H 3D
Audio
Nils Peters, Deep Sen, Jeongook Song, Moo Young Kim
m37875
Proposed Study on DAM4
Adrian Murtaza, Leon Terentiv, Jouni Paulus
for MPEG-D SAOC
m37876
Proposed Study on DAM5
Adrian Murtaza, Leon Terentiv, Jouni Paulus
for MPEG-D SAOC
Proposed Study on DAM3
m37877 for MPEG-D MPEG
Adrian Murtaza, Achim Kuntz
Surround
m37878
Proposed Corrigendum to
Adrian Murtaza, Jouni Paulus, Leon Terentiv
MPEG-D SAOC
m37879
Thoughts on MPEG-D
SAOC Second Edition
BRIDGET Response to
the MPEG CfP for
m37880 Compact Descriptors for
Video Analysis (CDVA) Search and Retrieval
Adrian Murtaza, Jouni Paulus, Leon Terentiv
Massimo Balestri (Telecom Italia), Gianluca Francini (Telecom
Italia), Skjalg Lepsoy (Telecom Italia), Miroslaw Bober
(University of Surrey), Sameed Husain (University of Surrey),
Stavros Paschalakis (Visual Atoms)
m37881
Proposed RM7 of MPEGD DRC software
m37882
Proposed Corrigendum to
Michael Kratschmer, Frank Baumgarte
MPEG-D DRC
m37883
Max Neuendorf, Michael Kratschmer, Manuel Jander, Achim
Complexity Constraints for
Kuntz, Simone Fueg, Christian Neukam, Sascha Dick, Florian
MPEG-H
Schuh
m37884
Parametric Limiter for
MPEG-D DRC
Michael Kratschmer, Frank Baumgarte
Michael Kratschmer, Frank Baumgarte
Proposed DCOR to
Max Neuendorf, Achim Kuntz, Simone Fueg, Andreas Hölzer,
m37885 MPEG-H 3D Audio edition Michael Kratschmer, Christian Neukam, Sascha Dick, Elena
2015
Burdiel, Toru Chinen,
Constraints for USAC in
m37886 adaptive streaming
applications
m37887
CDVS: Matlab code
maintenance
Evaluation Report of Bm37888 COM Test Sequence for
Future Video Coding
Max Neuendorf, Matthias Felix, Bernd Herrmann, Bernd
Czelhan, Michael Härtl
Alessandro Bay (Politecnico di Torino), Massimo Balestri
(Telecom Italia)
H. B. Teo, M. Dong (Panasonic)
(JCTVC-V0086)
3DA CE on High
m37889 Resolution Envelope
Processing (HREP)
Sascha Disch, Florin Ghido, Franz Reutelhuber, Alexander
Adami, Jürgen Herre
m37890
Polyphase subsampled
E. Thomas (TNO)
signal for spatial scalability
m37891
Software for MPEG-H 3D
Audio RM6
Michael Fischer, Achim Kuntz, Sangbae Chon, Łukasz
Januszkiewicz, Sven Kordon, Nils Peters, Yuki Yamamoto
m37892
Review of MPEG-H 3DA
Metadata
Simone Füg, Christian Ertel, Achim Kuntz
[FTV AHG] Technical
Description of Poznan
University of Technology
m37893
proposal for Call for
Evidence on FreeViewpoint Television
Marek Domanski, Adrian Dziembowski, Adam Grzelka,
Łukasz Kowalski, Dawid Mieloch, Jarosław Samelak,
Olgierd Stankiewicz, Jakub Stankowski, Krzysztof Wegner
m37894
Proposed modifications on
Gregory Pallone
MPEG-H 3D Audio
m37895
Study on MPEG-D DRC
23003-4 PDAM1
Frank Baumgarte, Michael Kratschmer,
m37896
Study on Base Media File
Format 14496-12 PDAM1
Frank Baumgarte, David Singer, Michael Kratschmer,
m37897
Review of MPEG-H 3DA
Signaling
Max Neuendorf, Sascha Dick, Nikolaus Rettelbach,
Coding Efficiency /
Complexity Analysis of
m37898 JEM 1.0 coding tools for
the Random Access
Configuration
H. Schwarz, C. Rudat, M. Siekmann, B. Bross, D. Marpe, T.
Wiegand (Fraunhofer HHI)
m37899
Reply Liaison on MPEGDASH (N15699)
Iraj Sodagar on behalf of DASH-IF
m37900
Additional Use Case of
Media Orchestration
Jin Young Lee
Report of AhG on
Responding to Industry
m37901
Needs on Adoption of
MPEG Audio
Schuyler Quackenbush
Report of AhG on 3D
m37902 Audio and Audio
Maintenance
Schuyler Quackenbush
m37903 Use Case of MIoT
Jin Young Lee
m37904
On the Image file format
and enhancements
m37905
Error in the sample groups
David Singer
in 14496-12
David Singer
m37906 Possible DASH Defects
Jean Le Feuvre, Cyril Concolato, , ,
m37907 ISOBMFF Defect
Jean Le Feuvre, Cyril Concolato
Editorial reorganization of
m37908 MIME-related sections in
ISOBMFF
Cyril Concolato, Jean Le Feuvre
m37909
White Paper on ARAF 2nd
Traian Lavric, Marius Preda
Edition
m37910
Big Data for personalized
user experience in AR
m37911
image analysis description
duckhwan kim, bontae koo
for M-IoTW
Traian Lavric, Marius Preda, Veronica Scurtu
Future video coding
m37912 support for guided
transcoding
J. Samuelsson, T. Rusert, K. Anderssson, R. Sjöberg
(Ericsson)
Dependent Random
m37913 Access Point (DRAP)
pictures in DASH
J. Samuelsson, M. Pettersson, R. Sjoberg (Ericsson)
m37914
Support of VR video in
Ye-Kui Wang, Hendry, Marta Karczewicz
ISO base media file format
m37915
Comments on ISO/IEC
14496-15
m37916 Comments on DuI
Hendry, Ye-Kui Wang,
Thomas Stockhammer
m37917
Proposed Updates to
Broadcast TV Profile
Thomas Stockhammer
m37918
Next Generation Audio in
DASH
Thomas Stockhammer
[FDH] Editorial and
m37919 technical comments on
WD3
m37920
Languages, Roles and
Roles schemes
m37921 Labels in DASH
m37922
Selection Priority and
Group
m37923 Adaptation Set Switching
Iraj Sodagar
Thomas Stockhammer
Thomas Stockhammer
Thomas Stockhammer
Thomas Stockhammer
m37924 SAND: Comments on DIS Thomas Stockhammer
m37925
SAND: Addition of Byte
Ranges
Thomas Stockhammer
m37926 SAND: Partial File Delivery Thomas Stockhammer
m37927
Clarification on the format
of xlink remote element
Waqar Zia, Thomas Stockhammer
m37928
On Conformance and
Reference Software
Waqar Zia, Thomas Stockhammer
m37929
Updates on Web
Interactivity Track
Thomas Stockhammer
m37930
Zylia Poznan Listening
Test Site Properties
Jakub Zamojski, Łukasz Januszkiewicz, Tomasz Żernicki
m37931
Updates on Partial File
Handling
Thomas Stockhammer
Corrections and
m37932 clarifications on Tonal
Component Coding
m37933
Low Complexity Tonal
Component Coding
Łukasz Januszkiewicz, Tomasz Żernicki
Tomasz Żernicki, Łukasz Januszkiewicz, Andrzej
Rumiński, Marzena Malczewska
Web3D Coding for large
m37934 objects with attributes and Christian Tulvan, Euee S. Jang, Marius Preda
texture
m37935 Conformance Assets
Christian Tulvan, Marius Preda
Big Data platform for 3D
m37936 reconstruction and
diffusion
Christian Tulvan, Marius Preda
m37937 Software Assets
Christian Tulvan, Marius Preda
m37938 Content Assets
Christian Tulvan, Marius Preda
m37939
MMT Session Setup and
Control
Imed Bouazizi
m37940
MMT Integration with
CDNs
Imed Bouazizi
m37941
Migrating Sessions from
HTTP to MMT
Imed Bouazizi
m37942
MMT as a Transport
Component for Broadcast
Imed Bouazizi
m37943
Support for new Container
Imed Bouazizi
Formats
m37944
Customized Presentations
Imed Bouazizi
for MPEG-CI
m37945 MMT Developer's Day
Imed Bouazizi
m37946
MPEG Workshop on 5G /
Beyond UHD Media
Abdellatif Benjelloun Touimi, Imed Bouazizi, Iraj Sodagar,
Jörn Ostermann, Marius Preda, Per Fröjdh, Shuichi Aoki
m37947
Listening Test Results for
TCC from FhG IIS
Sascha Dick, Christian Neukam
m37948
Updates to the MMT
reference software
Imed Bouazizi
m37949 [MMT] Update of MP table Hyun-Koo Yang, Kyungmo Park, Youngwan So
Software implementation
m37950 of visual information
H. R. Tohidypour, M. Azimi, M. T. Pourazad, P. Nasiopoulos
fidelity (VIF) for HDRTools
FTV AHG: fast DERS
m37951 (Depth Estimation
Software)
Takanori Senoh, Akio Ishikawa, Makoto Okui, Kenji Yamamoto,
Naomi Inoue,
m37952 A Progressive High 10 profile in ISO/IEC 14496-10/MPEG-4 part 10/AVC
A.M. Tourapis, D. Singer, K. Kolarov
m37953
FTV AHG: CfE Response
fDEVS
Takanori Senoh, Akio Ishikawa, Makoto Okui, Kenji Yamamoto,
Naomi Inoue,
Generalized Constant and
Non-Constant Luminance
m37954 Code Points in ISO/IEC
A.M. Tourapis, Y. Su, D. Singer, K. Kolarov, C. Fogg
14496-10 and ISO/IEC
23001-8
FTV AHG: improved
m37955 VSRS (view synthesis
software)
m37956
Takanori Senoh, Akio Ishikawa, Makoto Okui, Kenji Yamamoto,
Naomi Inoue,
Performance evaluation of J. Chen, X. Li, F. Zou, M. Karczewicz, W.-J. Chien, T. Hsieh
JEM 1 tools by Qualcomm (Qualcomm)
Evaluation report of Netflix
m37957 Chimera and SJTU test
F. Zou, J. Chen, X. Li, M. Karczewicz (Qualcomm)
sequences
m37958 Non Square TU Partitioning
K. Rapaka, J. Chen, L. Zhang, W.-J. Chien, M. Karczewicz (Qualcomm)
m37959
Personalized Video
Insertion for TRUFFLE
Jongmin Lee
m37960
NAT Traversal for
TRUFFLE
Jongmin Lee
Universal string matching
m37961 for ultra high quality and
ultra high efficiency SCC
L. Zhao, K. Zhou, J. Guo, S. Wang, T. Lin (Tongji Univ.)
Four new SCC test
sequences for ultra high
m37962
quality and ultra high
efficiency SCC
J. Guo, L. Zhao, Tao Lin (Tongji Univ.)
Performance comparison
of HEVC SCC CTC
m37963
sequences between
HM16.6 and JEM1.0
S. Wang, T. Lin (Tongji Univ.)
m37964
Further improvement of
intra coding tools
S.-H. Kim, A. Segall (Sharp)
Report of evaluating
m37965 Huawei surveillance test
sequences
C.-C. Lin, J.-S. Tu, Y.-J. Chang, C.-L. Lin (ITRI)
Report of evaluating
m37966 Huawei UGC test
sequences
J.-S. Tu, C.-C. Lin, Y.-J. Chang, C.-L. Lin (ITRI)
Cross-check of SCC
m37967 encoder improvement
(JCTVC-W0042)
B. Li, J. Xu (Microsoft)
m37968 Cross-check of Non-normative HM encoder improvements (JCTVC-W0062)
B. Li, J. Xu (Microsoft)
m37969 Recursive CU structure for IVC+
Kui Fan, Ronggang Wang, Zhenyu Wang, Ge Li, Tiejun Huang, Wen Gao
m37970 Speed optimization for IVC decoder
Shenghao Zhang, Kui Fan, Ronggang Wang, Ge Li, Tiejun Huang, Wen Gao
Crosscheck for bottom-up
hash value calculation and
m37971
W. Zhang (Intel)
validity check for SCC
(JCTVC-W0078)
De-quantization and
m37972 scaling for next generation J. Zhao, A. Segall, S.-H. Kim, K. Misra (Sharp)
containers
Netflix Chimera test
m37973 sequences evaluation
report
m37974
MMT signaling for
Presentation Timestamp
m37975 MIoT API instances
M. Sychev, H. Chen (Huawei)
Kyungmo Park
Sang-Kyun Kim, Min Hyuk Jeong, Hyun Young Park
m37976
Introduction to IoTivity
APIs (MIoT)
Sang-Kyun Kim, Min Hyuk Jeong, Hyun Young Park,
m37977
Modification of
3DPrintingLUT
In-Su Jang, Yoon-Seok Cho, Jin-Seo Kim, Min Hyuk Jeong,
Sang-Kyun Kim
m37978
MMT signaling message
for consumption Report
jaehyeon bae, youngwan so, kyungmo park
Some considerations on
hue shifts observed in
m37979
M. Pindoria, M. Naccari, T. Borer, A. Cotton (BBC)
HLG backward compatible
video
Evaluation report of SJTU
m37980 Test Sequences from
T. Ikai (Sharp)
Sharp
3D Printing Color
m37981 Reproduction command
and preference
m37982
Cross-check of JCTVCW0085
In-Su Jang, Yoon-Seok Cho, Jin-Seo Kim, Min Hyuk Jeong,
Sang-Kyun Kim
J. Lee, J. W. Kang, D. Jun, H. Ko (ETRI)
[FTV AHG] Data Format
m37983 for Super Multiview Video
Application
Mehrdad Panahpour Tehrani, Kohei Isechi, Keita Takahashi,
Toshiaki Fujii (Nagoya University)
[FTV AHG] Free
m37984 Navigation Using
Superpixel Segmentation
Mehrdad Panahpour Tehrani, Kazuyoshi Suzuki, Keita
Takahashi, Toshiaki Fujii (Nagoya University), Tomoyuki
Tezuka (KDDI)
[FTV AHG] A New
Mehrdad Panahpour Tehrani, Ryo Suenaga, Tetta Maeda,
m37985 Technology for Free
Kazuyoshi Suzuki, Keita Takahashi, Toshiaki Fujii (Nagoya
Navigation in Sport Events University)
m37986
Bugfix on the software for
MPEG-H 3D Audio
Information on Korean
m37987 standard for terrestrial
UHD broadcast services
m37988
Mpeg-UD Reference S/W
for RRUI Service
m37989 On SAND conformance
m37990
Evaluation of some intracoding tools of JEM1
Proposed changes to
m37991 ISO/IEC CD 23006-1 3rd
edition
m37992
Taegyu Lee, Henney Oh
Henney Oh(WILUS inc. on behalf of NGBF)
Jaewon Moon (KETI), Jongyun Lim(KETI), Tae-Boem
Lim(KETI), Seung Woo Kum(KETI), Min-Uk Kim(Konkuk Univ.),
Hyo-Chul Bae(Konkuk Univ.)
Emmanuel Thomas
A. Filippov, V. Rufitskiy (Huawei)
Giuseppe Vavalà (CEDEO), Kenichi Nakamura (Panasonic)
Demonstration of usage of
Bojan Joveski, Mihai Mitrea, Rama Rao Ganji
MPEG-UD
Towards a reference
architecture for Mediam37993
centric IoT and Wearable
(M-IoTW)
Mihai Mitrea, Bojan Joveski
m37994
Modification of merge
candidate derivation
W. -J. Chien, J. Chen, S. Lee, M. Karczewicz (Qualcomm)
m37995
HDR CE2: cross-check
report of JCTVC-W0033
T. Lu, F. Pu, P. Yin
m37996
HDR CE2: cross-check
report of JCTVC-W0097
T. Lu, F. Pu, P. Yin
[FTV-AHG]ZJU's
m37997 rendering experiment on
SoccerArc of FTV
Ang Lu, Lu Yu, Qing Wang, Yule Sun, ,
m37998
TU-level non-separable
secondary transform
Proposal of metadata
m37999 descriptions for handling
sound effect in MPEG-V
m38000
X. Zhao, A. Said, V. Seregin, M. Karczewicz, J. Chen, R. Joshi
(Qualcomm)
Saim Shin, Jong-Seol James Lee, Dalwon Jang, Sei-Jin Jang,
Hyun-Ho Joo, Kyoungro Yoon
Improvements on adaptive
M. Karczewicz, L. Zhang, W.-J. Chien, X. Li (Qualcomm)
loop filter
Customized Presentation
m38001 Scheme for live
broadcasting
Yiling Xu, Wenjie Zhu, Hao Chen, Wenjun Zhang
m38002
Customized Presentation
Scheme for VoD
m38003
MMT Asset Compensation
Yiling Xu, Hao Chen, Teng Li, Wenjun Zhang
Relationship
Yiling Xu, Wenjie Zhu, Bo Li, Lianghui Zhang, Wenjun Zhang
Evaluation report of SJTU
test sequences for future
m38004
S.-H. Park, H. Xu, E. S. Jang (Hanyang Univ.)
video coding
standardization
m38005
MMT Asset homogeneity
relationship
Yiling Xu, Hao Chen, Teng Li, Wenjun Zhang
m38006
IG of Content Accessible
Time Window
Yiling Xu, Chengzhi Wang, Hao Chen, Wenjun Zhang
An optimized transfer
m38007 mechanism for static slices Yiling Xu, Chengzhi Wang, Hao Chen, Wenjun Zhang
in video stream
HDR CE6: cross-check
report of test4.2: Color
m38008
enhancement (JCTVCW0034)
E.Alshina(Samsung)
m38009 HDR CE2 related: cross-check report of linear luma re-shaper
E. Alshina
Improving protocol
m38010 efficiency for short
messages transmission
m38011
Real-time interaction
message
Resource
m38012 Request/Response
Message
Yiling Xu, Hao Chen, Ning Zhuang, Wenjun Zhang
Yiling Xu, Ning Zhuang, Hao Chen, Chengzhi Wang, Wenjun
Zhang
Yiling Xu, Hao Chen, Ning Zhuang, Wenjun Zhang
Spatial Partitioning Based
m38013 on the Attributes of Media Yiling Xu, Shaowei Xie, Ying Hu, Wenjun Zhang
Content
m38014
The Scheme of adaptive
FEC
Shared experience use
m38015 cases for MPEG Media
Orchestration
Yiling Xu, Wei Huang, Hao Chen, Bo Li, Wenjun Zhang
Oskar van Deventer
m38016
Liaison statement from
ETSI ISG CCM
David Holliday (Dolby)
m38017
[FTV Ahg] Report on test
results with quality
Wenhao Hong, Ang Lu, Lu Yu
assessment index: 3DPQI. Zhejiang University
Impact of 8-bits coding on
Yahia Benmoussa, Nicolas Derouineau, Nicolas Tizon, Eric
m38018 the Green Metadata
Senn,
accuracy
m38019
Crosscheck of JVETB0022(ATMVP)
X. Ma, H. Chen, H. Yang (Huawei)
Proposed editorial
m38028 improvements for MMT
2nd edition
Kyungmo Park
Architecture for shared
m38029 experience - MPEG
MORE
Oskar van Deventer
[MMT-IG] MMT QoS
m38030 management for media
adaptation service
Kyungmo Park, Youngwan So
Contribution to WD of
ISO/IEC 21000-19 AMD 1
m38031 Extensions on TimeThomas Wilmering, Panos Kudumakis
Segments and Multi-Track
Audio
m38032
Moving Regions of Interest
Emmanuel Thomas
signalling in MPD
Enhancement of
"dequantization and
m38033
inverse transform"
metadata (III)
yahia benmoussa, eric senn, Nicolas Tizon, Nicolas
Derouineau
Technical comments on
m38034 Amd 3: authorization and
authentication scheme
Emmanuel Thomas
m38035 Report on SAND-CE
Emmanuel Thomas, Mary-Luc Champel, Ali C. Begen
Shared experience
m38036 requirements for MPEG
Media Orchestration
Oskar van Deventer
[FTV AHG] Further results
on scene reconstruction
Sergio García Lobo, Pablo Carballeira López, Francisco
m38037
with hybrid SPLASH 3D
Morán Burgos
models
m38038
[FDH] CE-FDH
Conference Call Notes
CE6-related: Cross check
m38039 report for Test 4.1
Reshaper from m37064
m38040
Liaison Statement from
HbbTV
Liaison Statement from
m38041 JTC 1/WG 10 on Revised
WD of ISO/IEC 30141
Kevin Streeter
M. Naccari (BBC)
HbbTV via SC 29 Secretariat
JTC 1/WG 10 via SC 29 Secretariat
Liaison Statement from
JTC 1/WG 10 on Request
m38042
JTC 1/WG 10 via SC 29 Secretariat
contributions on IoT use
cases
m38043 Request for editorial
Changkyu Lee, Juyoung Park,
correction of header
compression in ISO/IEC
23008-1 2nd edition
National Body Contribution
on ITTFs contribution on
m38044
Finland NB via SC 29 Secretariat
option 3 declarations (SC
29 N 15497)
National Body Contribution
m38045 on the defect report of
Japan NB via SC 29 Secretariat
ISO/IEC 14496-15
Table of Replies on
m38046 ISO/IEC FDIS 23005-2
[3rd Edition]
ITTF via SC 29 Secretariat
Table of Replies on
m38047 ISO/IEC FDIS 23005-4
[3rd Edition]
ITTF via SC 29 Secretariat
Table of Replies on
m38048 ISO/IEC FDIS 23005-5
[3rd Edition]
ITTF via SC 29 Secretariat
Table of Replies on
m38049 ISO/IEC FDIS 23005-6
[3rd Edition]
ITTF via SC 29 Secretariat
Proposal of error
Si-Hwan Jang, Sanghyun Joo, Kyoung-Ill Kim, Chanho Jung,
m38050 modification of validator in
Jiwon Lee, Hyung-Gi Byun, Jang-Sik Choi
reference software
Demo program of visual
m38051 communication for
reference software
Si-Hwan Jang, Sanghyun Joo, Kyoung-Ill Kim, Chanho Jung,
Jiwon Lee, Hyung-Gi Byun, Jang-Sik Choi
Cross-check of m37799
(Extension of Prediction
m38052 Modes in Chroma Intra
Coding for Internet Video
Coding)
Sang-hyo Park, Won Chang Oh, Euee S. Jang
Proposal of Validation Tool
for MPEG-21 User
Hyo-Chul Bae, Kyoungro Yoon, Seung-Taek Lim, Dalwon
m38053
Description Reference
Jang,
Software
Title Editor’s input for
m38054 the WD of MPEG-V Part 1 Seung Wook Lee, JinSung Choi, Kyoungro Yoon
Version 4
Title Editor’s input for
m38055 the WD of MPEG-V Part 2 Seung Wook Lee, JinSung Choi, Kyoungro Yoon
Version 4
Editor’s input for the
m38056 CD of MPEG-V Part 5
Version 4
Seung Wook Lee, JinSung Choi, Kyoungro Yoon, Hyo-Chul
Bae
MIoT Use-case (Vibration
m38057 subtitle for movie by using Da Young Jung, Hyo-Chul Bae, Kyoungro Yoon,
wearable device)
JCT-VC AHG report:
HEVC HM software
m38058 development and software K. Suehring, K. Sharman
technical evaluation
(AHG3)
JCT-VC AHG report:
m38059 HEVC conformance test
development (AHG4)
T. Suzuki, J. Boyce, R. Joshi, K. Kazui, A. Ramasubramonian,
Y. Ye
JCT-VC AHG report: Test
m38060 sequence material
(AHG10)
T. Suzuki, V. Baroncini, R. Cohen, T. K. Tan, S. Wenger, H. Yu
Preliminary study of
m38061 filtering performance in
HDR10 pipeline
K Pachauri, Aishwarya
[AhG][FTV] New depth
m38062 estimation and view
synthesis
Beerend Ceulemans, Minh Duc Nguyen, Gauthier Lafruit,
Adrian Munteanu
[FTV AhG] SMV/FN
subjective assessment:
m38063
MultiView Perceptual
Disparity Model
Pablo Carballeira, Jesús Gutiérrez, Francisco Morán,
Narciso García
Report of VCEG AHG6 on
m38064 JEM Software
X. Li, K. Suehring (AHG chairs)
Development
m38065 Cross-check of JCTVC-W0075 on palette encoder improvements for lossless mode
V. Seregin (Qualcomm)
JCT-VC AHG report: SCC
H. Yu, R. Cohen, A. Duenas, K. Rapaka, J. Xu, X. Xu (AHG
m38066 coding performance
chairs)
analysis (AHG6)
HDR CE2: Cross-check of
JCTVC-W0084 (combines
m38067 solution for CE2.a-2,
D. Rusanovskyy
CE2.c, CE2.d and CE2.e3)
Recommended practices
for power measurements
m38068
and analysis for
multimedia applications
m38069
SCC Level Limits Based
on Chroma Format
Steve Saunders, Alexis Michael Tourapis, Krasimir Kolarov,
David Singer
S Deshpande, S.-H. Kim
[FDH] Additional
m38070 comments on DASH-FDH PoLin Lai, Shan Liu, Lulin Chen, Shawmin Lei
push-template
JCT-VC AHG report:
HEVC test model editing
m38071
and errata reporting
(AHG2)
B. Bross, C. Rosewarne, M. Naccari, J.-R. Ohm, K. Sharman,
G. Sullivan, Y.-K. Wang
JCT-VC AHG report:
m38072 SHVC verification testing
(AHG5)
V. Baroncini, Y.-K. Wang, Y. Ye
m38073
comment on FEC
signaling in MMT
Lulin Chen, Shan Liu, Shawmin Lei
[AHG 5] Further
m38074 confirmation on MV-HEVC K. Kawamura, S. Naito (KDDI)
conformance streams
m38075 Bug report (#114) on MV-HEVC Reference Software
K. Kawamura, S. Naito (KDDI)
JCT-VC AHG report: SCC
R. Joshi, J. Xu (AHG co-chairs), Y. Ye, S. Liu, G. Sullivan, R.
m38076 extensions text editing
Cohen (AHG vice-chairs)
(AHG7)
m38077
Report of VCEG AHG4 on
T. Suzuki, J. Boyce, A. Norkin (AHG chairs)
Test Sequence Selection
m38078 Cross-check of non-normative JEM encoder improvements (JVET-B0039)
B. Li, J. Xu (Microsoft)
m38079 Cross-check of JVET-B0038: Harmonization of AFFINE, OBMC and DBF
X. Xu, S. Liu (MediaTek)
JCT-VC AHG report: SCC
K. Rapaka, B.Li (AHG co-chairs), R. Cohen, T.-D. Chuang, X.
m38080 extensions software
Xiu, M. Xu (AHG vice-chairs)
development (AHG8)
Crosscheck for further
m38081 improvement of HDR CE2 Y. Wang, W. Zhang, Y. Chiu (Intel)
(JCTVC-W0085)
m38082 SMPTE Liaison on HDR
Walt Husak
m38083 Summary of Voting on ISO/IEC 23008-8:2015/DAM 2
ITTF via SC 29 Secretariat
m38084 Summary of Voting on ISO/IEC 23008-8:2015/DAM 3
ITTF via SC 29 Secretariat
m38085  Performance Testing for ISO/IEC 15938-3:2002/Amd 3:2009: Image signature tools  Karol Wnukowicz, Stavros Paschalakis, Miroslaw Bober
JCT-VC AHG report:
m38086 Project management
(AHG1)
G. J. Sullivan, J.-R. Ohm
JCT-VC AHG report:
m38087 SHVC software
development (AHG12)
V. Seregin, H. Yong, G. Barroux
m38088  JCT-VC AHG report: SHVC test model editing (AHG11)  G. Barroux, J. Boyce, J. Chen, M. Hannuksela, G.-J. Sullivan, Y.-K. Wang, Y. Ye
JCT-VC AHG report:
m38089 Complexity of SCC
extensions (AHG9)
Alberto Duenas (AHG chair), M. Budagavi, R. Joshi, S.-H. Kim,
P. Lai, W. Wang, W. Xiu (co-chairs)
Report of the AHG on
MVCO, Contract
m38090
Jaime Delgado
Expression Language, and
Media Contract Ontology
HDR CE2: cross-check
report of CE2.b-2, CE2.c
m38091
and CE2.d experiments
(JCTVC-W0094)
C. Chevance, Y. Olivier (Technicolor)
Report of VCEG AHG1 on
m38092 Coding Efficiency
M. Karczewicz, M. Budagavi (AHG chairs)
Improvements
m38093  Report of MPEG AHG on HDR and WCG Video Coding  C. Fogg, E. Francois, W. Husak, A. Luthra
m38094
FF-LDGM Codes for CE
Takayuki Nakachi, Masahiko Kitamura, Tatsuya Fujii
on MMT Adaptive AL-FEC
m38095 VSF Liaison
Walt Husak
Coding results of 4K
surveillance and 720p
m38096
portrait sequences for
future video coding
K. Kawamura, S. Naito (KDDI)
m38097  Summary of Voting on ISO/IEC 14496-4:2004/PDAM 46  SC 29 Secretariat
m38098  Summary of Voting on ISO/IEC 23002-5:2013/PDAM 3  SC 29 Secretariat
Summary of Voting on
m38099 ISO/IEC PDTR 23008-13
[2nd Edition]
SC 29 Secretariat
m38100  Cross-check of JVET-B0058: Modification of merge candidate derivation  H. Chen, H. Yang (Huawei)
[FTV AHG] 3D-HEVC
m38101 extensions for free
navigation
Marek Domanski, Jarosław Samelak, Olgierd Stankiewicz,
Jakub Stankowski, Krzysztof Wegner
m38102 MMT Adhoc Group Report Imed Bouazizi
m38103  Cross-check of JVET-B0039: Non-normative JEM encoder improvements  C. Rudat, B. Bross, H. Schwarz (Fraunhofer HHI)
Cross-check of Non
m38104 Square TU Partitioning
(JVET-B0047)
O. Nakagami (Sony)
Crosscheck of the
m38105 improvements on ALF in
JVET-B060
C.-Y. Chen, Y.-W. Huang (MediaTek)
m38106  Cross-check of JVET-B0060  B. Li, J. Xu (Microsoft)
m38107 QEC Effort
Ali C. Begen
m38108  Cross-check of JVET-B0059: TU-level non-separable secondary transform  S.-H. Kim (Sharp)
m38109
SJTU 4k test sequences
evaluation report
S. Jeon, N. Kim, H. Shim, B. Jeon (SKKU)
Requirements on
m38110 Cinematic Virtual Reality
Mapping Schemes
Ziyu Wen, Jisheng Li, Sihan Li, Jiangtao Wen
m38111  Summary of Voting on ISO/IEC 14496-10:2014/DAM 2  ITTF via SC 29 Secretariat
m38112  Summary of Voting on ISO/IEC DIS 23008-2:201x [3rd Edition]  ITTF via SC 29 Secretariat
m38113  Speech-Related Wearable Element Description for M-IoTW  Miran Choi, Hyun-ki Kim
ARAF Expansion for
Transformation System
m38114
JinHo Jeong, HyeonWoo Nam
between the outdoor GPS
and Indoor Navigation
m38115 VCB AHG Report
Mohamad Raad
m38116 IVC AhG report
Ronggang Wang, Euee S. Jang
Simplification of Low Delay
m38117 configurations for JVET
M. Sychev (Huawei)
CTC
Information and
m38118 Comments on Hybrid Log
Gamma
J. Holm
m38119 [FDH] Lists are sufficient
Thomas Stockhammer
m38120
Report of the AHG on
RMC
Marco Mattavelli
AHG on Requirements on
m38121 Genome Compression and Marco Mattavelli
Storage
Report of BoG on parallel
m38122 encoding and removal of
cross-RAP dependencies
m38123
3D dynamic point cloud
data sets
K. Suehring, H. Yang (BoG coordinators)
Lazar Bivolarsky
[FDH] Complementary test
m38124 examples for Segment
Emmanuel Thomas
Lists compression
m38125 Systems' Agenda
Youngkwon Lim
Coverage of, additions and
m38126 correction to the ISOBMFF Cyril Concolato, Jean Le Feuvre
conformance files
m38127  Indication of SMPTE 2094-20 metadata in HEVC  Wiebe de Haan, Leon van de Kerkhof, Rutger Nijland
m38128
Report of the AhG on
Green MPEG
Xavier Ducloux
m38129
Report of AHG on Big
Media
Abdellatif Benjelloun Touimi, Miroslav Bober, Mohammed Raad
Updates on Adaptation Set
m38130 Linking based on NB
Thomas Stockhammer
Comments
m38131  Report of AHG on Media-Related Privacy Management  Youngkwon Lim
m38132 MLAF demonstration
Marius Preda on behalf of BRIDGET consortium
m38133  AHG on Graphics compression  Lazar Bivolarsky
m38134  JCT-3V AHG report: Project management (AHG1)  J.-R. Ohm, G. J. Sullivan
m38135
DASH Ad-hoc Group
Report
Iraj Sodagar
m38136
Point Cloud Codec for
Tele-immersive Video
Rufael Mekuria (CWI), Kees Blom (CWI), Pablo Cesar (CWI)
m38137 MPEG-21 UD AhG Report Bojan Joveski, Sanghyun Joo
m38138  JCT-3V AHG Report: 3D-HEVC Draft and MV-HEVC / 3D-HEVC Test Model editing (AHG2)  G. Tech, J. Boyce, Y. Chen, M. Hannuksela, T. Suzuki, S. Yea, J.-R. Ohm, G. Sullivan
m38139  JCT-3V AHG Report: MV-HEVC and 3D-HEVC Software Integration (AHG3)  G. Tech, H. Liu, Y. Chen
m38140
Report of AhG for
Lightfield formats
Arianne Hinds
m38141  Report of AHG on Future Video Coding Technology Evaluation  Jens-Rainer Ohm, Gary J. Sullivan, Jill Boyce, Jianle Chen, Karsten Suehring, Teruhiko Suzuki
m38142  Media Orchestration - C&O and requirements additions  Oskar van Deventer
m38143 File format AHG Report
David Singer
m38144  AHG Report on Media-centric Internet of Things (MIoT) and Wearable  Mihai Mitrea, Sang-Kyun Kim, Sungmoon Chun
m38145 AHG on AR
Traian Lavric, Marius Preda
m38146
next draft of Media
Orchestration C&O
Jean-Claude Dufourd
m38147  ISO/IEC 14496-15: on extractor design for HEVC files (merging m37864 and m37873)  M. M. Hannuksela, V. K. Malamal Vadakital (Nokia), K. Grüneberg, Y. Sanchez (Fraunhofer HHI)
m38148  Overview of ICtCp  J. Pytlarz (Dolby)
m38149  Report of AHG on Application Formats  Youngkwon Lim
m38150  Comments on Broadcast TV profile (MPEG-DASH 3rd edition)  Mary-Luc Champel
m38151  Scheme for indication of content element roles  Kurt Krauss, Thomas Stockhammer
m38152  Background information related to IoT standardisation activities  Kate Grant
m38153
[CE-FDH] CE report on
FDH
Kevin Streeter, Vishy Swaminathan
m38154
A Proof of Concept for
SAND conformance
Emmanuel Thomas
m38155  Report of VCEG AHG2 on Subjective Distortion Measurement  T. K. Tan (AHG chair)
SHM software
m38156 modifications for multiview support
X. Huang (USTC), M. M. Hannuksela (Nokia)
SHM software
m38157 modifications for multiview support
X. Huang (USTC), M. M. Hannuksela (Nokia)
m38158
AHG on FTV (Freeviewpoint Television)
Some Elementary
m38159 Thoughts on HDR
Backward Compatibility
Masayuki Tanimoto, Krzysztof Wegner, Gauthier Lafruit
P. Topiwala (FastVDO)
Report on BoG on informal
m38160 subjective viewing related K. Andersson, E. Alshina (BoG coordinators)
to JVET-B0039
m38161  Reconstructed 3D point clouds and meshes donated by UPM/BRIDGET  Francisco Morán Burgos, Daniel Berjón Díez, Rafael Pagés Scasso
[FDH] Complementary
m38162 comments on URL
Template
Li Liu
m38163
[FDH] Proposal for
m38164 Simplified Template
Format for FDH
m38165
Misc-Clarifications-on14496-30-2014
m38166 SAND SoDIS 1st draft
Kevin Streeter, Vishy Swaminathan
Michael Dolan
Mary-Luc Champel
m38167  [CE-FDH] Push Directive enabling fast start streaming on MPD request  Frederic Maze, Franck Denoual (Canon), Kazuhiko Takabayashi, Mitsuhiro Hirabayashi (Sony)
m38168
SAND capabilities
messages
Mary-Luc Champel
m38171
Updated DuI and DCOR3
items
Iraj Sodagar
m38172
Report of BoG on
selection of test material
T. Suzuki, J. Chen (BoG coordinators)
m38173
Updates to the WebSocket
Imed Bouazizi
Binding of FDH
m38174 CE8 Report
Walt Husak
m38175 AhG on MPEG Vision
Rob Koenen
Conversion and Coding
m38176 Practices for HDR/WCG
Video, Draft 1
J. Samuelsson (editor)
m38177 Withdrawn
m38178 AhG on Lightfield Formats Arianne Hinds
m38179
Notes for MPEG Vision
Session
Rob Koenen
JCT-3V AHG Report: 3D
m38180 Coding Verification Testing V. Baroncini, K. Müller, S. Shimizu
(AHG4)
m38181 CE8 Report
Description of Exploration
m38182 Experiments on Coding
Tools
E. Alshina, J. Boyce, Y.-W. Huang, S.-H. Kim, L. Zhang (EE
coordinators)
m38183 Requirements for OMAF
Byeongdoo Choi, Youngkwon Lim(Samsung), Jangwon Lee,
Sejin Oh(LG)
m38184  Open GOP resolution change using SHVC  Yago Sanchez, Robert Skupin, Patrick Gendron
m38185  TRUFFLE: Request of virtualized network function to vMANE  Bokyun Jo, Doug Young Suh, Jongmin Lee
TRUFFLE: Virtual Network
m38186 Function message to
Bokyun Jo, Doug Young Suh, Jongmin Lee
vMANE
m38187  [FDH] Update to Text of FDH Working Draft  Kevin Streeter
m38188  MMT signaling message for consumption report (updated M37978)  Jaehyeon Bae, Kyungmo Park, Youngwan So, Hyun-koo Yang
m38189  [CE-FDH] Simplified fast start push directive  Frederic Maze (Canon), Kazuhiko Takabayashi (Sony)
m38190  [SAND] Update of the conformance workplan
m38191  [DuI] Proposal to fix the cyclic definition of the media content component  Frédéric Mazé
m38192
SRD input for text for
23009-1 DAM4
Emmanuel Thomas
Verification Test Plan for
HDR/WCG Video Coding
m38193
Using HEVC Main 10
Profile
R. Sjöberg, V. Baroncini, A. K. Ramasubramonian (editors)
Input on media content
m38194 component definition for
Text of 23009-1 DCOR 3
Frederic Maze
m38195 Actions on TuC
Iraj Sodagar
m38196
URI Signing input text for
WD part 3
Emmanuel Thomas
m38197
SAND update for CE
document
Emmanuel Thomas
m38198 DASH Subgroup Report
Iraj Sodagar
Call for test material for
m38199 future video coding
standardization
A. Norkin, H. Yang, J.-R. Ohm, G. J. Sullivan, T. Suzuki (call
coordinators)
m38200
Meeting Report of 2nd
JVET Meeting
G. J. Sullivan, J.-R. Ohm (JVET coordinators)
Algorithm description of
m38201 Joint Exploration Test
Model 2
J. Chen, E. Alshina, G. J. Sullivan, J.-R. Ohm, J. Boyce (JEM
editors)
JVET common test
m38202 conditions and software
reference configurations
K. Suehring, X. Li (CTC editors)
m38203
Meeting Report of 14th
JCT-3V Meeting
J.-R. Ohm, G. J. Sullivan
m38204
MV-HEVC Verification
Test Report
V. Baroncini, K. Müller, S. Shimizu
Draft 2 of Reference
m38205 Software for Alternative
T. Senoh
Depth Info SEI in 3D-AVC
m38206
MV-HEVC and 3D-HEVC
Conformance Draft 4
T. Ikai, K. Kawamura, T. Suzuki
m38207 3D-HEVC Software Draft 4 G. Tech, H. Liu, Y. W. Chen
m38208  Meeting Report of the 23rd JCT-VC Meeting (19–26 February 2016, San Diego, USA)  G. J. Sullivan, J.-R. Ohm (chairs)
m38209  Reference Software for Screen Content Coding Draft 1  K. Rapaka, B. Li, X. Xiu (editors)
m38210  Draft text for ICtCp support in HEVC (Draft 1)  P. Yin, C. Fogg, G. J. Sullivan (editors)
Report of MPEG AHG on
Future Video Coding
m38211
Technology Evaluation
(copy of m38141)
J.-R. Ohm, G. J. Sullivan, J. Boyce, J. Chen, K. Sühring, T.
Suzuki
m38212  High Efficiency Video Coding (HEVC) Test Model 16 (HM 16) Update 5 of Encoder Description  C. Rosewarne, B. Bross, M. Naccari, K. Sharman, G. J. Sullivan (editors)
m38213  Verification Test Report for Scalable HEVC (SHVC)  Y. Ye, V. Baroncini, Y.-K. Wang (editors)
m38214
HEVC Screen Content
Coding Draft Text 6
R. Joshi, S. Liu, G. J. Sullivan, Y.-K. Wang, J. Xu, Y. Ye
(editors)
m38215
Conformance Testing for
SHVC Draft 5
J. Boyce, A. K. Ramasubramonian (editors)
Reference software for
m38216 Scalable HEVC (SHVC)
Extensions Draft 4
Y. He, V. Seregin (editors)
m38217  Conformance Testing for Improved HEVC Version 1 Testing and Format Range Extensions Profiles Draft 6  T. Suzuki, K. Kazui (editors)
Screen Content Coding
m38218 Test Model 7 Encoder
Description (SCM 7)
R. Joshi, J. Xu, R. Cohen, S. Liu, Y. Ye (editors)
Conformance Testing for
HEVC Screen Content
m38219
Coding (SCC) Extensions
Draft 1
R. Joshi, J. Xu (editors)
Common Test Conditions
m38220 for HDR/WCG Video
Coding Experiments
E. François, J. Sole, J. Ström, P. Yin (editors)
m38221  Convenor's response to FINB's "Regarding ITTF's contribution on 'option 3' declarations"  Leonardo Chiariglione
Annex D – Output documents
MPEG no.  Source  Title
w15904 General/All
Resolutions of the 114th Meeting
w15905 General/All
List of AHGs Established at the 114th Meeting
w15906 General/All
Report of the 114th Meeting
w15907 General/All
Press Release of the 114th Meeting
w15908 General/All
Meeting Notice of the 115th Meeting
w15909 General/All
Meeting Agenda of the 115th Meeting
w15910 Systems
DoC on ISO/IEC 13818-1:2015 AMD 1/DCOR 2
w15911 Systems
Text of ISO/IEC 13818-1:2015 AMD 1/COR 2
w15912 Systems
DoC on ISO/IEC 13818-1:2015 DAM 5 Carriage of MPEG-H 3D audio
over MPEG-2 Systems
w15913 Systems
Text of ISO/IEC 13818-1:2015 FDAM 5 Carriage of MPEG-H 3D audio
over MPEG-2 Systems
w15914 Systems
Text of ISO/IEC 13818-1:2015 AMD 6 Carriage of Quality Metadata in
MPEG-2 Systems
w15915 Systems
DoC on ISO/IEC 13818-1:2015 PDAM 7 Virtual Segment
w15916 Systems
Text of ISO/IEC 13818-1:2015 DAM 7 Virtual Segment
w15917 Systems
Request for ISO/IEC 13818-1:2015 AMD 8 Signaling of carriage of
HDR/WCG video in MPEG-2 Systems
w15918 Systems
Text of ISO/IEC 13818-1:2015 PDAM 8 Signaling of carriage of
HDR/WCG video in MPEG-2 Systems
w15919 Systems
Text of ISO/IEC 13818-1:2015 DCOR 2
w15920 Systems
Defect Report of ISO/IEC 14496-12
w15921 Systems
Request for ISO/IEC 21000-22 AMD 1 Reference Software and
Implementation Guidelines of User Description
w15922 Systems
DoC on ISO/IEC 14496-12:2015 PDAM 1 Enhanced DRC
w15923 Systems
Text of ISO/IEC 14496-12:2015 DAM 1 Enhanced DRC
w15924 Audio
Text of ISO/IEC 14496-5:2001/AMD 24:2009/DCOR 3, Downscaled
(E)LD
w15925 Systems
WD of ISO/IEC 14496-12:2015 AMD 2
w15926 Systems
Text of ISO/IEC 14496-14 COR 2
w15927 Systems
Draft DoC on ISO/IEC DIS 14496-15 4th edition
w15928 Systems
Draft text of ISO/IEC FDIS 14496-15 4th edition
w15929 Systems
Defect Report of ISO/IEC 14496-15
w15930 Systems
Request for ISO/IEC 14496-22:2015 AMD 2 Updated text layout features
and implementations
w15931 Systems
Text of ISO/IEC 14496-22:2015 PDAM 2 Updated text layout features
and implementations
w15932 Systems
DoC on ISO/IEC 14496-30:2014 DCOR 2
w15933 Systems
Text of ISO/IEC 14496-30:2014 COR 2
w15934 Audio
Study on ISO/IEC 23003-1:2007/DAM 3 MPEG Surround Extensions for
3D Audio
w15935 Systems
WD of ISO/IEC 14496-32 Reference Software and Conformance for File
Format
w15936 Systems
WD of ISO/IEC 21000-19:2010 AMD 1 Extensions on Time Segments
and Multi-Track Audio
w15937 Systems
DoC on ISO/IEC DIS 21000-20 2nd edition Contract Expression
Language
w15938 Systems
Results of the Call for Proposals on CDVA
w15939 Systems
DoC on ISO/IEC DIS 21000-21 2nd Media Contract Ontology
w15940 Systems
Text of ISO/IEC IS 21000-21 2nd Media Contract Ontology
w15941 Systems
DoC on ISO/IEC DIS 21000-22 User Description
w15942 Systems
Text of ISO/IEC IS 21000-22 User Description
w15943 Systems
Text of ISO/IEC 21000-22 PDAM 1 Reference Software and
Implementation Guidelines of User Description
w15944 Systems
DoC on ISO/IEC DIS 23000-16 Publish/Subscribe Application Format
w15945 Systems
Text of ISO/IEC FDIS 23000-16 Publish/Subscribe Application Format
w15946 Systems
Technologies under Considerations for Omnidirectional Media
Application Format
w15947 Systems
WD v.1 of Common Media Application Format
w15948 Systems
DoC on ISO/IEC 23001-11:201x/DAM 1 Carriage of Green Metadata in
an HEVC SEI Message
w15949 Systems
Text of ISO/IEC 23001-11:201x/FDAM 1 Carriage of Green Metadata in
an HEVC SEI Message
w15950 Systems
Request for ISO/IEC 23001-11 AMD 2 Conformance and Reference
Software
w15951 Systems
Text of ISO/IEC 23001-11 PDAM 2 Conformance and Reference
Software
w15952 Systems
DoC on ISO/IEC CD 23006-1 3rd edition MXM
w15953 Systems
Text of ISO/IEC DIS 23006-1 3rd edition MXM
w15954 Systems
Text of ISO/IEC IS 23006-2 3rd edition MXM API
w15955 Systems
Text of ISO/IEC IS 23006-3 3rd edition MXM Conformance
w15956 Systems
DoC on ISO/IEC 23008-1:2014 DCOR 2
w15957 Systems
Text of ISO/IEC 23008-1:2014 COR 2
w15958 Systems
DoC on ISO/IEC 23008-1:201x PDAM 1 Use of MMT Data in MPEG-H
3D Audio
w15959 Systems
Text of ISO/IEC 23008-1:201x DAM 1 Use of MMT Data in MPEG-H 3D
Audio
w15960 Systems
Text of ISO/IEC 23008-1:201x PDAM 2 Enhancements for Mobile
Environments
w15961 Systems
WD of ISO/IEC 23008-1:201x AMD 3
w15962 Systems
Defects under investigation in ISO/IEC 23008-1
w15963 Systems
Description of Core Experiments on MPEG Media Transport
w15964 Systems
Revised text of ISO/IEC 23008-1:201x 2nd edition MMT
w15965 Systems
Text of ISO/IEC IS 23008-4 MMT Reference Software
w15966 Systems
Request for ISO/IEC 23008-4 AMD 1 MMT Reference Software with
Network Capabilities
w15967 Systems
Text of ISO/IEC 23008-4 PDAM 1 MMT Reference Software with
Network Capabilities
w15968 Systems
Workplan of MMT Conformance
w15969 Systems
Text of ISO/IEC 23008-11 DCOR 1
w15970 Systems
Technology under Consideration for ISO/IEC 23008-11 AMD 1
w15971 Convener
MPEG Strategic Standardisation Roadmap
w15972 Systems
Request for ISO/IEC 23008-12 AMD 1 Support for AVC, JPEG and
layered coding of images
w15973 Systems
Text of ISO/IEC 23008-12 PDAM 1 Support for AVC, JPEG and layered
coding of images
w15974 Systems
DoC on ISO/IEC 23008-12 DCOR 1
w15975 Systems
Text of ISO/IEC 23008-12 COR 1
w15976 Systems
DoC for ISO/IEC PDTR 23008-13 2nd edition MPEG Media Transport
Implementation Guidelines
w15977 Systems
Text of ISO/IEC DTR 23008-13 2nd edition MPEG Media Transport
Implementation Guidelines
w15978 Systems
WD of ISO/IEC 23008-13 3rd edition MPEG Media Transport
Implementation Guidelines
w15979 Systems
DoC on ISO/IEC 23009-1:2014 DAM 3 Authentication, Access Control
and multiple MPDs
w15980 Systems
Text of ISO/IEC 23009-1:2014 FDAM 3 Authentication, Access Control
and multiple MPDs
w15981 Systems
DoC on ISO/IEC 23009-1:2014 PDAM 4 Segment Independent SAP
Signalling (SISSI), MPD chaining, MPD reset and other extensions
w15982 Systems
Text of ISO/IEC 23009-1:2014 DAM 4 Segment Independent SAP
Signalling (SISSI), MPD chaining, MPD reset and other extensions
w15983 Systems
Text of ISO/IEC 23009-1:2014 DCOR 3
w15984 Systems
Defects under investigation in ISO/IEC 23009-1
w15985 Systems
Technologies under Consideration for DASH
w15986 Systems
Descriptions of Core Experiments on DASH amendment
w15987 Systems
Draft Text of ISO/IEC 23009-1 3rd edition
w15989 Systems
Work plan for development of DASH Conformance and reference
software and sample clients
w15990 Systems
WD of ISO/IEC 23009-3 2nd edition AMD 1 DASH Implementation
Guidelines
w15991 Systems
Study of ISO/IEC DIS 23009-5 Server and Network Assisted DASH
w15992 Systems
Request for subdivision of ISO/IEC 23009-6 DASH with Server Push and
WebSockets
w15993 Systems
Text of ISO/IEC CD 23009-6 DASH with Server Push and WebSockets
w15994 Systems
Text of ISO/IEC IS 21000-20 2nd edition Contract Expression Language
w15995 Systems
Request for ISO/IEC 23008-1:201x AMD 2 Enhancements for Mobile
Environments
w15996 Communications
White Paper on MPEG-21 Contract Expression Language (CEL) and
MPEG-21 Media Contract Ontology (MCO)
w15997 Communications White Paper on the 2nd edition of ARAF
w15998 Audio
Text of ISO/IEC 14496-3:2009/AMD 3:2012/COR 1, Downscaled (E)LD
w15999 Audio
ISO/IEC 14496-3:2009/DAM 6, Profiles, Levels and Downmixing Method
for 22.2 Channel Programs
w16000 General/All
Terms of Reference
w16001 General/All
MPEG Standards
w16002 General/All
Table of unpublished MPEG FDISs
w16003 General/All
MPEG work plan
w16004 General/All
MPEG time line
w16005 General/All
Schema assets
w16006 General/All
Software assets
w16007 General/All
Conformance assets
w16008 General/All
Content assets
w16009 General/All
URI assets
w16010 General/All
Call for patent statements on standards under development
w16011 General/All
List of organisations in liaison with MPEG
w16012 General/All
MPEG Standard Editors
w16013 General/All
Complete list of all MPEG standards
w16014 General/All
Verbal reports from the Requirements, Systems, Video, Audio, 3DG and
Communication subgroups made at this meeting
w16015 Convener
Liaison Statement to IETF on URI signing
w16016 Convener
Liaison Statement template on DASH with Server Push and WebSockets
w16017 Convener
Liaison Statement to 3GPP on DASH
w16018 Convener
Liaison Statement to DASH-IF on DASH
w16019 Convener
Liaison Statement to DVB on TEMI
w16020 Convener
Liaison Statement to HbbTV on TEMI
w16021 Convener
Liaison Statement template on Common Media Application Format
w16022 Convener
Liaison Statement template on HEVC sync samples in ISO/IEC 14496-15
w16023 Convener
Liaison Statement to SCTE on Virtual Segments and DASH
w16024 Video
Disposition of Comments on ISO/IEC 14496-4:2004/PDAM46
w16025 Video
Text of ISO/IEC 14496-4:2004/DAM46 Conformance Testing for Internet
Video Coding
w16026 Video
Disposition of Comments on ISO/IEC 14496-5:2001/PDAM41
w16027 Video
Text of ISO/IEC 14496-5:2001/DAM41 Reference Software for Internet
Video Coding
w16028 Video
Disposition of Comments on ISO/IEC 14496-5:2001/PDAM42
w16029 Video
Text of ISO/IEC 14496-5:2001/DAM42 Reference Software for
Alternative Depth Information SEI message
w16030 Video
Disposition of Comments on ISO/IEC 14496-10:2014/DAM2
w16031 Video
Text of ISO/IEC 14496-10:2014/FDAM2 Additional Levels and
Supplemental Enhancement Information
w16032 Video
Request for ISO/IEC 14496-10:2014/Amd.4
w16033 Video
Text of ISO/IEC 14496-10:2014/PDAM4 Progressive High 10 Profile,
additional VUI code points and SEI messages
w16034 Video
Study Text of ISO/IEC DIS 14496-33 Internet Video Coding
w16035 Video
Internet Video Coding Test Model (ITM) v 14.1
w16036 Video
Description of IVC Exploration Experiments
w16037 3DG
Core Experiments Description for 3DG
w16038 Video
Draft verification test plan for Internet Video Coding
w16039 Video
Request for ISO/IEC 23001-8:201x/Amd.1
w16040 Video
Text of ISO/IEC 23001-8:201x/PDAM1 Additional code points for colour
description
w16041 Video
Disposition of Comments on ISO/IEC 23002-4:2014/PDAM3
w16042 Video
Text of ISO/IEC 23002-4:2014/DAM3 FU and FN descriptions for parser
instantiation from BSD
w16043 Video
Disposition of Comments on ISO/IEC 23002-5:2013/PDAM3
w16044 Video
Text of ISO/IEC 23002-5:2013/DAM3 Reference software for parser
instantiation from BSD
w16045 Video
Disposition of Comments on DIS ISO/IEC 23008-2:201x
w16046 Video
Text of ISO/IEC FDIS 23008-2:201x High Efficiency Video Coding [3rd
ed.]
w16047 Video
WD of ISO/IEC 23008-2:201x/Amd.1 Additional colour description
indicators
w16048 Video
High Efficiency Video Coding (HEVC) Test Model 16 (HM16) Improved
Encoder Description Update 5
w16049 Video
HEVC Screen Content Coding Test Model 7 (SCM 7)
w16050 Video
MV-HEVC verification test report
w16051 Video
SHVC verification test report
w16052 Video
Verification test plan for HDR/WCG coding using HEVC Main 10 Profile
w16053 Video
Disposition of Comments on ISO/IEC 23008-5:2015/DAM3
w16054 Video
Disposition of Comments on ISO/IEC 23008-5:2015/DAM4
w16055 Video
Text of ISO/IEC FDIS 23008-5:201x Reference Software for High
Efficiency Video Coding [2nd ed.]
w16056 Video
Request for ISO/IEC 23008-5:201x/Amd.1
w16057 Video
Text of ISO/IEC 23008-5:201x/PDAM1 Reference Software for Screen
Content Coding Profiles
w16058 Video
Disposition of Comments on ISO/IEC 23008-8:2015/DAM1
w16059 Video
Disposition of Comments on ISO/IEC 23008-8:2015/DAM2
w16060 Video
Disposition of Comments on ISO/IEC 23008-8:2015/DAM3
w16061 Video
Text of ISO/IEC FDIS 23008-8:201x HEVC Conformance Testing [2nd
edition]
w16062 Video
Working Draft of ISO/IEC 23008-8:201x/Amd.1 Conformance Testing for
Screen Content Coding Profiles
w16063 Video
WD of ISO/IEC TR 23008-14 Conversion and coding practices for
HDR/WCG video
w16064 Video
CDVA Experimentation Model (CXM) 0.1
w16065 Video
Description of Core Experiments in CDVA
w16066 Video
Algorithm description of Joint Exploration Test Model 2 (JEM2)
w16067 Video
Description of Exploration Experiments on coding tools
w16068 Video
Call for test materials for future video coding standardization
w16069 Convener
Liaison statement to ITU-T SG 16 on video coding collaboration
w16070 Convener
Liaison statement to DVB on HDR
w16071 Convener
Liaison statement to ATSC on HDR/WCG
w16072 Convener
Liaison statement to ITU-R WP6C on HDR
w16073 Convener
Liaison statement to ETSI ISG CCM on HDR
w16074 Convener
Liaison statement to SMPTE on HDR/WCG
w16075 Convener
Statement of benefit for establishing a liaison with DICOM
w16076 Audio
ISO/IEC 23003-2:2010/DCOR 3 SAOC, SAOC Corrections
w16077 Audio
Study on ISO/IEC 23003-2:2010/DAM 4, SAOC Conformance
w16078 Audio
Study on ISO/IEC 23003-2:2010/DAM 5, SAOC Reference Software
w16079 Audio
Draft of ISO/IEC 23003-2:2010 SAOC, Second Edition
w16080 Audio
DoC on ISO/IEC 23003-3:2012/Amd.1:2014/DCOR 2
w16081 Audio
Text of ISO/IEC 23003-3:2012/Amd.1:2014/COR 2
w16082 Audio
DoC on ISO/IEC 23003-3:2012/DAM 3 Support of MPEG-D DRC, Audio
Pre-Roll and IPF
w16083 Audio
Text of ISO/IEC 23003-3:2012/FDAM 3 Support of MPEG-D DRC, Audio
Pre-Roll and IPF
w16084 Audio
Defect Report on MPEG-D DRC
w16085 Audio
DoC on ISO/IEC 23003-4:2015 PDAM 1, Parametric DRC, Gain Mapping
and Equalization Tools
w16086 Audio
Text of ISO/IEC 23003-4:2015 DAM 1, Parametric DRC, Gain Mapping
and Equalization Tools
w16087 Audio
Request for Amendment ISO/IEC 23003-4:2015/AMD 2, DRC Reference
Software
w16088 Audio
ISO/IEC 23003-4:2015/PDAM 2, DRC Reference Software
w16089 Audio
ISO/IEC 23008-3:2015/DCOR 1 Multiple corrections
w16090 Audio
DoC on ISO/IEC 23008-3:2015/DAM 2, MPEG-H 3D Audio File Format
Support
w16091 Audio
Text of ISO/IEC 23008-3:2015/FDAM 2, MPEG-H 3D Audio File Format
Support
w16092 Audio
Study on ISO/IEC 23008-3:2015/DAM 3, MPEG-H 3D Audio Phase 2
w16093 Audio
Study on ISO/IEC 23008-3:2015/DAM 4, Carriage of Systems Metadata
w16094 Audio
WD on New Profiles for 3D Audio
w16095 Audio
3D Audio Reference Software RM6
w16096 Audio
Workplan on 3D Audio Reference Software RM7
w16097 Convener
Liaison Statement to DVB on MPEG-H 3D Audio
w16098 Convener
Liaison Statement Template on MPEG-H 3D Audio
w16099 Convener
Liaison Statement to ITU-R Study Group 6 on BS.1196-5
w16100 Convener
Liaison Statement to IEC TC 100
w16101 Convener
Liaison Statement to IEC TC 100 TA 4
w16102 Convener
Liaison Statement to IEC TC 100 TA 5
w16103 3DG
Text of ISO/IEC FDIS 23000-13 2nd Edition Augmented Reality
Application Format
w16104 3DG
DoC for ISO/IEC 23000-17:201x CD Multiple Sensorial Media Application
Format
w16105 3DG
Text of ISO/IEC 23000-17:201x DIS Multiple Sensorial Media Application
Format
w16106 3DG
Technology under consideration
w16107 3DG
Request for ISO/IEC 23005-1 4th Edition Architecture
w16108 3DG
Text of ISO/IEC CD 23005-1 4th Edition Architecture
w16109 3DG
Request for ISO/IEC 23005-2 4th Edition Control Information
w16110 3DG
Text of ISO/IEC CD 23005-2 4th Edition Control Information
w16111 3DG
Request for ISO/IEC 23005-3 4th Edition Sensory Information
w16112 3DG
Text of ISO/IEC CD 23005-3 4th Edition Sensory Information
w16113 3DG
Request for ISO/IEC 23005-4 4th Edition Virtual world object
characteristics
w16114 3DG
Text of ISO/IEC CD 23005-4 4th Edition Virtual world object
characteristics
w16115 3DG
Request for ISO/IEC 23005-5 4th Edition Data Formats for Interaction
Devices
w16116 3DG
Text of ISO/IEC CD 23005-5 4th Edition Data Formats for Interaction
Devices
w16117 3DG
Request for ISO/IEC 23005-6 4th Edition Common types and tools
w16118 3DG
Text of ISO/IEC CD 23005-6 4th Edition Common types and tools
w16119 3DG
Text of ISO/IEC 2nd CD 18039 Mixed and Augmented Reality Reference
Model
w16120 3DG
State of discussions related to MIoTW technologies
w16121 3DG
Current status on Point Cloud Compression (PCC)
w16122 3DG
Description of PCC Software implementation
w16123 Convener
Liaison Statement to JTC1 WG10 related to MIoT
w16124 Convener
Liaison Statement to ITU-T SG20 related to MIoT
w16125 Requirements
Presentations of the Workshop on 5G and beyond UHD Media
w16126 Requirements
Liaison to JTC1/WG9
w16127 Requirements
Draft Terms of Reference of the joint Ad hoc Group on Genome
Information Representation between TC 276/WG5 and ISO/IEC
JTC1/SC29/WG11
w16128 Requirements
Results of the Call for Evidence on Free-Viewpoint Television: Super-Multiview and Free Navigation
w16129 Requirements
Description of 360 3D video application Exploration Experiments on
Divergent multi-view video
w16130 Requirements
Use Cases and Requirements on Free-viewpoint Television (FTV) v.3
w16131 Requirements
Context and Objectives for Media Orchestration v.3
w16132 Requirements
Requirements for Media Orchestration v.3
w16133 Requirements
Call for Proposals on Media Orchestration Technologies
w16134 Requirements
Requirements on Genome Compression and Storage
w16135 Requirements
Lossy compression framework for Genome Data
w16136 Requirements
Draft Call for Proposal for Genome Compression and Storage
w16137 Requirements
Presentations of the MPEG Seminar on Prospect of Genome
Compression Standardization
w16138 Requirements
Requirements for a Future Video Coding Standard v2
w16139 Requirements
Draft Requirements for Media-centric IoTs and Wearables
w16140 Requirements
Overview, context and objectives of Media-centric IoTs and Wearable
w16141 Requirements
Conceptual model, architecture and use cases for Media-centric IoTs
and Wearable
w16142 Requirements
Use cases and requirements on Visual Identity Management AF
w16143 Requirements
Requirements for OMAF
w16144 Requirements
Requirements for the Common Media Application Format
w16145 Requirements
Database for Evaluation of Genomic Information Compression and
Storage
w16146 Requirements
Evaluation Procedure for the Draft Call for Proposals on Genomic
Information Representation and Compression
w16147 Requirements
Results of the Evaluation of the CfE on Genomic Information
Compression and Storage
w16148 Requirements
Liaison with TC 276
w16149 Requirements
Summary notes of technologies relevant to digital representations of light
fields
w16150 Requirements
Draft report of the Joint Ad hoc Group on digital representations of
light/sound fields for immersive media applications
w16151 Requirements
Thoughts and Use Cases on Big Media
Annex E – Requirements report
Source: Jörn Ostermann (Leibniz Universität Hannover)
1. Requirements documents approved at this meeting
No.
Title
16125 Presentations of the Workshop on 5G and beyond UHD Media
Draft Terms of Reference of the Joint Ad hoc Group on Genome Information
16127
Representation between TC 276/WG 5 and ISO/IEC JTC 1/SC 29/WG 11
Results of the Call for Evidence on Free-Viewpoint Television: Super-Multiview
16128
and Free Navigation
Description of 360 3D video application Exploration Experiments on Divergent
16129
multi-view video
16130 Use Cases and Requirements on Free-viewpoint Television (FTV) v.3
16131 Context and Objectives for Media Orchestration v.3
16132 Requirements for Media Orchestration v.3
16133 Call for Proposals on Media Orchestration Technologies
16134 Requirements on Genome Compression and Storage
16135 Lossy compression framework for Genome Data
16136 Draft Call for Proposal for Genome Compression and Storage
16137 Presentations of the MPEG Seminar on Genome Compression Standardization
16138 Requirements for a Future Video Coding Standard v2
16139 Draft Requirements for Media-centric IoTs and Wearables
16140 Overview, Context and Objectives of Media-centric IoTs and Wearable
16141 Conceptual model, architecture and use cases for Media-centric IoTs and Wearable
16142 Use cases and requirements on Visual Identity Management AF
16143 Requirements for OMAF
16144 Requirements for the Common Media Application Format
16145 Database for Evaluation of Genomic Information Compression and Storage
Evaluation Procedure for the Draft Call for Proposals on Genomic Information
16146
Representation and Compression
Results of the Evaluation of the CfE on Genomic Information Compression and
16147
Storage
16148 Liaison with TC 276
16149 Summary notes of technologies relevant to digital representations of light fields
Draft report of the Joint Ad hoc Group on digital representations of light/sound
16150
fields for immersive media applications
16151 Thoughts and Use Cases on Big Media
2. MPEG-A
2.1. Part 19 – Omnidirectional Media Application Format
Certain virtual reality devices require omnidirectional video. The purpose of this application format is to develop a standardized representation of this video format together with 3D audio. For video, the mapping of omnidirectional video to rectangular video is important in order to enable the use of HEVC (Figure 1). The requirements are given in N16143 Requirements for OMAF.
Figure 1: Content flow process for omnidirectional media (here: video) – image stitching and equirectangular mapping, video encode, video decode, video rendering on a sphere.
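As an illustration of the equirectangular mapping step in Figure 1, the sketch below maps a viewing direction on the sphere to pixel coordinates of an equirectangular frame. It is a toy illustration only; the function name and geometry conventions are assumptions, not the projection format specified in N16143.

    def direction_to_equirect(yaw_deg, pitch_deg, width, height):
        """Map a viewing direction (yaw/pitch in degrees) on the sphere to pixel
        coordinates in an equirectangular frame of size width x height.
        Illustrative sketch only; the actual OMAF projection formats are
        defined by the standard, not by this code."""
        # Longitude (yaw) covers [-180, 180) and maps to [0, width)
        u = (yaw_deg + 180.0) / 360.0 * width
        # Latitude (pitch) covers [-90, 90] and maps to [height, 0] (top row = +90 degrees)
        v = (90.0 - pitch_deg) / 180.0 * height
        return int(u) % width, min(max(int(v), 0), height - 1)

    # Example: the point straight ahead maps to the centre of a 3840x1920 frame.
    print(direction_to_equirect(0.0, 0.0, 3840, 1920))   # -> (1920, 960)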
2.2. Part 20 – Common Media Application Format
The segmented media format that has been widely adopted for internet delivery using
DASH, Web browsers, commercial services such as Netflix and YouTube, etc. is
derived from the ISO Base Media File Format, using MPEG codecs, Common
Encryption, etc. The same components have already been widely adopted and
specified by many application consortia (ARIB, ATSC, DECE, DASH-IF, DVB,
HbbTV, SCTE, 3GPP, DTG, DLNA, etc.), but the absence of a common media
format, or minor differences in practice, mean that slightly different media files must
often be prepared for the same content. The industry would further benefit from a
common format, embodied in an MPEG standard, to improve interoperability and
distribution efficiency.
The Common Media Application Format defines the media format only. MPEG
technologies such as DASH and MMT may be used for the delivery of the Common
Media Application Format Segment. Multiple delivery protocols may specify how the
same Common Media Application Format Segments are delivered. The description
and delivery of presentations are both considered to be in layers above the layer that
defines the media format and the encoding and decoding of Media Segments, and
therefore they are out of the proposed scope. The Requirements are presented in
N16144 Requirements for the Common Media Application Format. It is foreseen that
this MAF will include profiles defining the allowable media representations.
3. Explorations
3.1. 3D Media Formats
Light fields and sound fields for the representation of true 3D content will gain more relevance in the future. At the 111th and 112th MPEG meetings, mainly video-related aspects were discussed and presented. In the area of images, JPEG started an activity on light fields. N16149 Summary notes of technologies relevant to digital representations of light fields shows the results of a brainstorming session (Figure 2). N16150 Draft report of the Joint Ad hoc Group on digital representations of light/sound fields for immersive media applications contains the table of contents for a document to be produced by the joint MPEG/JPEG ad hoc group (JAhG) on Digital Representations of Light/Sound Fields for Immersive Media Applications.
Figure 2: Components and interfaces of a 3D A/V system
3.2. Compact Descriptors for Video Analysis (CDVA)
The envisioned activity will go beyond the object recognition required, for example, in broadcast applications. Especially for automotive and security applications, object classification is of much more relevance than object recognition (Figure 3, Figure 4). MPEG foresees IP cameras with a CDVA encoder which enable search, detection and classification at low transmission rates. Related technology within MPEG can be found in MPEG-7 and video signatures.
Figure 3: Pipelines for the "Analyze-Then-Compress" (ATC) and "Compress-Then-Analyze" (CTA) paradigms. In ATC, visual features are extracted from raw frames and encoded before transmission, resulting in low-bandwidth communication; in the traditional CTA paradigm, the compressed image/video is transmitted and visual features are extracted close to the visual analysis task.
Figure 4: Usage of CDVA for identification of classes and recognition.
It is foreseen that identification of classes is much more challenging than object recognition. This work might start later than the work on detection and recognition. For detection and recognition, challenges include flexible objects and specular surfaces. Furthermore, low latency is required.
At the previous 112th meeting, the CfP N15339 Call for Proposals for Compact Descriptors for Video Analysis - Search and Retrieval (CDVA) and the related evaluation framework N15338 Evaluation Framework for Compact Descriptors for Video Analysis - Search and Retrieval were issued. Evaluation of three full proposals was carried out at this 114th meeting. Sufficient technology was submitted such that the standardization work can start in the Video subgroup.
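A toy sketch of the bandwidth argument behind the two paradigms in Figure 3 (an illustration under simplifying assumptions, not the CDVA evaluation procedure): in CTA the full frame is compressed and transmitted for later analysis, while in ATC only a compact descriptor extracted at the camera is transmitted. The 16x16 block-mean "descriptor" below is a deliberately crude stand-in for a real visual descriptor.

    import zlib
    import numpy as np

    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(720, 1280), dtype=np.uint8)  # toy luma frame

    # "Compress-Then-Analyze": send the (losslessly) compressed pixels, analyse later.
    cta_payload = zlib.compress(frame.tobytes(), level=6)

    # "Analyze-Then-Compress": extract a compact descriptor at the camera
    # (here a crude 16x16 block-mean map), then send only the compressed descriptor.
    blocks = frame.reshape(720 // 16, 16, 1280 // 16, 16).mean(axis=(1, 3)).astype(np.uint8)
    atc_payload = zlib.compress(blocks.tobytes(), level=6)

    print(f"CTA payload: {len(cta_payload)} bytes, ATC payload: {len(atc_payload)} bytes")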
3.3. Big Media
Big data is of interest to MPEG when media is involved. Thoughts on use cases to which MPEG could contribute are collected in N16151 Thoughts and Use Cases on Big Media.
3.4. Free Viewpoint TV
Free Viewpoint TV was the vision that drove the development of many different 3D video coding extensions. It is now time to take a step back and see where the future of 3D will go. Super-multiview displays and holographic displays are currently under development. They will provide horizontal as well as vertical parallax. Hence, further extensions of current multiview technology, which assumes a linear camera arrangement, are needed in order to accommodate more general camera arrangements for future displays. For interaction and navigation purposes, modern human-computer interfaces need to be developed. The purpose
of the exploration was previously described in N14546 Purpose of FTV Exploration.
In order to decide on a potential start of a standardization activity, a Call N15348 Call for
Evidence on Free-Viewpoint Television: Super-Multiview and Free Navigation was issued at
the 112th meeting. MPEG provides software in N15349 FTV Software Framework that may be
used for the purpose of the call. Evaluation was carried out at this meeting.
It appears that the problem is of very limited interest to industry. MPEG received only two full proposals related to navigation. For FTV based on dense camera arrangements, only one partial proposal was received. All proposals improve on the anchors. However, the anchors and reference sequences were of insufficient quality, so that the subjective evaluation done at the meeting is without merit.
Results are summarized in N16128 Results of the Call for Evidence on Free-Viewpoint
Television: Super-Multiview and Free Navigation. Use cases and requirements were updated
in N16130 Use Cases and Requirements on Free-viewpoint Television (FTV) v.3. The new
requirements also consider N16129 Description of 360 3D video application Exploration
Experiments on Divergent multi-view video, which promotes divergent multi-view applications
such as 360-degree 3D video and wide-area FTV (Figure 5).
Figure 5: The concept of virtual view synthesis on 360-degree 3D video
At this point no timeline for the work of the group could be defined. However, the group is
requested to create a plan for the generation of adequate references. In this context, MPEG
members are encouraged to provide further test sequences for virtual reality applications.
3.5. Future Video Coding
The N15726 Workshop on 5G and beyond UHD Media was held in order to explore the opportunities of new mobile networks and applications for MPEG standardization. Presentations are available as N16125 Presentations of the Workshop on 5G and beyond UHD Media.
Seven speakers from industry discussed services and technologies. Cloud computing and the Internet of Things will have an impact on network usage. The latency of the 5G network is expected to be between 1 and 5 ms. Assuming adequate bandwidth, many services can be implemented in the cloud. This will also enable an abundance of virtual reality and augmented reality applications with remote computing.
The requirements for future video coding were updated in N16138 Requirements for a Future Video Coding Standard v2. Further input from industry is desirable. In particular, the impact of pre- and post-processing, learning, transcoding and the cloud on the codec design should be studied.
3.6. HDR and WCG
MPEG and VCEG concluded that HDR and WCG can very well be coded using current
HEVC Main 10 technology. The potential enhancements demonstrated at this meeting do not
warrant a separate standardization effort. Nevertheless, HDR and WCG will be an intrinsic
part of the development of any future video coding standard.
3.7. Genome Compression
Today, DNA sequencing creates a lot of data. One data set is easily about 800 Mbyte.
Typically, several data sets are made for one person. Given that today's machines are optimized for speed and not for accuracy, an interesting opportunity for compression technology might exist. TC 276 Biotechnology is currently not considering compression. The liaison statement N16148 Liaison with TC 276 was approved. In order to also collaborate within ad hoc groups, N16127 Draft
Terms of Reference of the Joint Ad hoc Group on Genome Information Representation
between TC 276/WG 5 and ISO/IEC JTC 1/SC 29/WG 11 was approved.
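As a hedged illustration of the compression headroom in raw sequence data (a toy sketch, not one of the tools submitted in response to the CfE), the four nucleotides can be packed at 2 bits per base before any entropy coding; real genomic formats additionally carry quality scores, read names and ambiguous bases.

    import zlib

    def pack_2bit(read: str) -> bytes:
        """Pack an A/C/G/T read into 2 bits per base (toy illustration only)."""
        code = {"A": 0, "C": 1, "G": 2, "T": 3}
        out = bytearray()
        acc, nbits = 0, 0
        for base in read:
            acc = (acc << 2) | code[base]
            nbits += 2
            if nbits == 8:          # a full byte is ready
                out.append(acc)
                acc, nbits = 0, 0
        if nbits:                   # flush any remaining bits, left-aligned
            out.append(acc << (8 - nbits))
        return bytes(out)

    read = "ACGT" * 25                       # a 100-base read stored as text
    packed = pack_2bit(read)                 # 4x smaller before entropy coding
    print(len(read), len(packed), len(zlib.compress(packed)))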
In response to the Call for Evidence on genome compression, MPEG received four submissions from research institutes and universities. Eight tools were proposed which improve on the
state of the art as summarized in N16147 Results of the Evaluation of the CfE on Genomic
Information Compression and Storage. MPEG concluded that there is sufficient interest and
technology to issue N16136 Draft Call for Proposal for Genome Compression and Storage
with evaluation taking place at the 116th meeting. This call is issued and will be evaluated
jointly with TC276. Due to the meeting schedule of TC 276, the text of the final call will most
likely be fixed in March 2016. The N16146 Evaluation Procedure for the Draft Call for
Proposals on Genomic Information Representation and Compression needs further update
especially in the area of lossy compression highlighted in N16135 Lossy compression
framework for Genome Data. A new version of the requirements is N16134 Requirements on
Genome Compression and Storage.
MPEG selected a test set for evaluating compression performance. The data set as described
in N16145 Database for Evaluation of Genomic Information Compression and Storage
includes human, plant and animal genomes.
N16137 Presentations of the MPEG Seminar on Genome Compression Standardization includes the slides of some stakeholders as presented at the seminar at this meeting. Manufacturers, service providers and technologists presented their current thinking. It appears
that the temporal evolution of the genome is an important application for genome processing.
The standard will have to balance compression efficiency, computation resources, random
access and search capabilities. Typically, a new sequencer is released every three years. A new generation of sequencers encourages service providers to stop using their old sequencers. Furthermore, a change in sequencing technology, yielding other run lengths and different error patterns, is expected within five years.
3.8. Media-centric Internet of Things
The Requirements subgroup recognizes that MPEG-V provides technology that is applicable in the area of the Internet of Things. Three documents were drafted: N16140 Overview, context and objectives of Media-centric IoTs and Wearable, N16141 Conceptual model, architecture and use cases for Media-centric IoTs and Wearable, and N16139 Draft Requirements for Media-centric IoTs and Wearable.
3.9. Media Orchestration
For applications like augmented broadcast, concert recording or classroom recording with several cameras, as described in N16131 Context and Objectives for Media Orchestration v.3, playback requires spatial and temporal synchronization of the different displays. Requirements were extracted and summarized in N16132 Requirements for Media Orchestration v.3. A call was defined in N16133 Call for Proposals on Media Orchestration Technologies. Evaluation is planned for the 115th meeting.
3.10. Visual Identity Management
The purpose of this activity is to enable processing and sharing of media under user control (Figure 6). N16142 Use cases and requirements on Visual Identity Management AF was created.
[Figure 6 diagram, recoverable labels: capturing device, user descriptions, privacy management system, keys to faces, fingerprints of faces, media file, privacy management metadata, rendering device.]
Figure 6: Faces in the image become visible only if the key for decrypting a particular face is known to the rendering device.
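A minimal sketch of the principle illustrated in Figure 6, assuming the Python cryptography package; the actual key handling, metadata and cryptographic scheme of the application format are not specified here, and face_region is only a stand-in for the pixel data of one detected face.

    from cryptography.fernet import Fernet

    face_region = bytes(range(16))       # stand-in for the pixel data of one detected face

    key = Fernet.generate_key()          # in the AF this would be managed by the privacy management system
    protected = Fernet(key).encrypt(face_region)

    # A renderer holding the key can restore the face; others only see the
    # encrypted blob and would render the region obscured.
    restored = Fernet(key).decrypt(protected)
    assert restored == face_region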
Annex F – Systems report
Source: Young-Kwon Lim, Chair
1. General Input Documents
1.1 AHG reports
All AHG reports have been approved.
1.2 General technical contributions
Number  Session  Title  Source  Dispositions
m37567  Plenary  Table of Replies on ISO/IEC FDIS 23001-12  ITTF via SC 29 Secretariat  Noted
m37569  Plenary  Table of Replies on ISO/IEC FDIS 23001-8 [2nd Edition]  ITTF via SC 29 Secretariat  Noted
m37570  Plenary  Table of Replies on ISO/IEC 13818-1:2015/FDAM 3  ITTF via SC 29 Secretariat  Noted
m37571  Plenary  Table of Replies on ISO/IEC 13818-1:2015/FDAM 2  ITTF via SC 29 Secretariat  Noted
m37572  Plenary  Table of Replies on ISO/IEC 13818-1:201x/FDAM 4  ITTF via SC 29 Secretariat  Noted
m37574  Plenary  Table of Replies on ISO/IEC FDIS 23001-7 [3rd Edition]  ITTF via SC 29 Secretariat  Noted
m37575  Plenary  Table of Replies on ISO/IEC FDIS 23008-12  ITTF via SC 29 Secretariat  Noted
m38046  Plenary  Table of Replies on ISO/IEC FDIS 23005-2 [3rd Edition]  ITTF via SC 29 Secretariat  Noted
m38047  Plenary  Table of Replies on ISO/IEC FDIS 23005-4 [3rd Edition]  ITTF via SC 29 Secretariat  Noted
m38048  Plenary  Table of Replies on ISO/IEC FDIS 23005-5 [3rd Edition]  ITTF via SC 29 Secretariat  Noted
m38049  Plenary  Table of Replies on ISO/IEC FDIS 23005-6 [3rd Edition]  ITTF via SC 29 Secretariat  Noted
m37935  Plenary  Conformance Assets  Christian Tulvan, Marius Preda  Noted
m37937  Plenary  Software Assets  Christian Tulvan, Marius Preda  Noted
m37938  Plenary  Content Assets  Christian Tulvan, Marius Preda  Noted
m37946  Plenary  MPEG Workshop on 5G / Beyond UHD Media  Abdellatif Benjelloun Touimi, Imed Bouazizi, Iraj Sodagar, Jörn Ostermann, Marius Preda, Per Fröjdh, Shuichi Aoki  Noted
1.3 Summary of discussion
-
1.4 Demo
1.5 FAQ
.
1.6 AOB
None.
2. MPEG-2 Systems (13818-1)
2.1 Topics
2.1.1 ISO/IEC 13818-1:2014 AMD 5 Carriage of 3D Audio
This amendment defines stream type, descriptors and buffer model to carry MPEG-H 3D
audio bitstream in MPEG-2 TS. Two stream types will be assigned to distinguish main stream
from auxiliary stream. Descriptors will provide information on user selectable and/or
modifiable audio objects and information on which object contains either supplementary or
main audio. T-STD extension will allow
splitting an encoded audio scene into several elementary streams. One single audio decoder
decodes all elementary streams to one audio presentation. Each of those elementary streams
carries one or more encoded channel signals.
2.1.2 ISO/IEC 13818-1:2014 AMD 6 Carriage of Quality Metadata
This amendment specifies the carriage of Quality Metadata (ISO/IEC 23001-10) in MPEG-2
systems. Quality metadata enhances adaptive streaming in clients during the presentation of
media. The carriage is specified for transport streams only and includes the signalling of static
metadata as well as dynamic metadata.
2.1.3 ISO/IEC 13818-1:2014 AMD 7 Virtual Segment
This amendment provides several enhancements to the current ISO/IEC 13818-1 (specifically to Annex U, TEMI) in order to allow easy distributed generation of time-aligned segments for multiple adaptive streaming systems, such as MPEG DASH, and to aid IPTV services. It also provides tools for verifying stream integrity and authenticity in real time. A 2-month PDAM ballot is requested due to industry and SDO need for the features introduced in this amendment.
2.1.4 ISO/IEC 13818-1:2014 AMD 8 Signaling HDR and WCG video content in
MPEG-2 TS
All Main 10 Profile based receivers will be able to decode the video, but may not be able to do the appropriate processing required to make the video presentable on a non-HDR or non-WCG display. Therefore, in many applications (such as broadcast) that use the MPEG-2 TS it will be beneficial to signal the presence of WCG and HDR video content, as well as additional information, at the program level. This enables HDR and WCG capable receivers to process the information in the video ES and render the decoded content correctly. Receivers that do not have the capability to process WCG and HDR can either ignore the content or make a best effort to render it on non-WCG and non-HDR display devices. This amendment suggests some minimal MPEG-2 TS signaling to indicate the presence of WCG and HDR video in the HEVC elementary stream.
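The descriptor syntax is defined by the amendment itself; the sketch below only illustrates the idea of program-level signalling in a PMT descriptor loop, using a made-up descriptor tag (HDR_WCG_DESCRIPTOR_TAG) and payload that are assumptions for illustration, not the amendment's actual syntax.

    import struct

    HDR_WCG_DESCRIPTOR_TAG = 0xE0  # hypothetical tag for illustration; the real
                                   # tag and syntax are defined by ISO/IEC 13818-1 AMD 8

    def find_hdr_wcg_descriptor(program_descriptors: bytes):
        """Walk a PMT program-level descriptor loop (tag, length, payload) and
        return the payload of the hypothetical HDR/WCG descriptor, if present."""
        i = 0
        while i + 2 <= len(program_descriptors):
            tag, length = struct.unpack_from("BB", program_descriptors, i)
            payload = program_descriptors[i + 2:i + 2 + length]
            if tag == HDR_WCG_DESCRIPTOR_TAG:
                return payload  # e.g. hints about transfer characteristics / colour primaries
            i += 2 + length
        return None             # receiver falls back to best-effort SDR rendering

    # Example loop containing one unrelated descriptor and the hypothetical one.
    loop = bytes([0x05, 0x04, *b"HEVC", HDR_WCG_DESCRIPTOR_TAG, 0x02, 0x10, 0x09])
    print(find_hdr_wcg_descriptor(loop))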
2.2 Contributions
Number  Session  Title  Source  Dispositions
m37812  MPEG-2  Summary of Voting on ISO/IEC 13818-1:2015/Amd.1/DCOR 2  SC 29 Secretariat  Refer N15960
m37603  MPEG-2  Corrections of TEMI text errors to include in COR2  Jean Le Feuvre  Accepted N15919
m37790  MPEG-2  Defect report on ISO/IEC 13818-1 regarding STD buffer sizes for HEVC  Karsten Grüneberg, Thorsten Selinger  Accepted N15919
m37556  MPEG-2  Summary of Voting on ISO/IEC 13818-1:2015/DAM 6  SC 29 Secretariat  Noted
m37564  MPEG-2  Summary of Voting on ISO/IEC 13818-1:2015/PDAM 7  SC 29 Secretariat  Refer N15915
m37793  MPEG-2  Suggested modification to structure in Virtual Segmentation  Wendell Sun, Prabhu Navali  Accepted N15916
m37711  MPEG-2  Editorial Clarifications on Text of ISO/IEC 13818-1:2015 AMD7 Virtual Segmentation  Yasser Syed, Alex Giladi, Wendell Sun  Accepted N15916
2.3 Summary of discussions
2.4 Action Points / Ballots
Title  Source
ITU-T H.222.0 (10/2014) | ISO/IEC 13818-1:2015/FDAM 2 (SC 29 N 15177) AMENDMENT 2: Carriage of layered HEVC  Systems  FDAM (2015-12-02)
ITU-T H.222.0 (10/2014) | ISO/IEC 13818-1:2015/FDAM 3 (SC 29 N 15157) AMENDMENT 3: Carriage of Green Metadata in MPEG-2 Systems  Systems  FDAM (2015-12-02)
ITU-T H.222.0 (10/2014) | ISO/IEC 13818-1:2015/DAM 5 (SC 29 N 14824) AMENDMENT 5: Carriage of MPEG-H 3D audio over MPEG-2 Systems  Systems  DAM (2015-12-25)
ITU-T H.222.0 (10/2014) | ISO/IEC 13818-1:2015/DAM 6 (SC 29 N 15155) AMENDMENT 6: Carriage of Quality Metadata in MPEG-2 Systems  Systems  DAM (2016-01-01)
3. ISO Base Media File Format (14496-12)
3.1 Topics
3.1.1 ISO/IEC 14496-12:201X/AMD 1: Enhanced DRC
This Amendment adds support for DRC configuration extensions in MPEG-D DRC. It
increases the available range of downmix coefficients. The loudness metadata is enhanced to
support the EQ extension of MPEG-D DRC.
.
3.2 Contributions
Number  Session  Title  Source  Dispositions
m37552  File Format  Summary of Voting on ISO/IEC 14496-12:201x/PDAM 1  SC 29 Secretariat  Refer N15922
m37851  File Format  Editorial comment on 14496-12  Mitsuhiro Hirabayashi, Kazuhiko Takabayashi, Mitsuru Katsumata  Accepted N15920
m37869  File Format  Derived visual tracks for ISOBMFF and HEIF  V. K. Malamal Vadakital, M. M. Hannuksela (Nokia)  Noted
m37870  File Format  Relative location offsets for ISOBMFF  M. M. Hannuksela, V. K. Malamal Vadakital (Nokia)  Noted
m37896  File Format  Study on Base Media File Format 14496-12 PDAM1  Frank Baumgarte, David Singer, Michael Kratschmer  Noted
m37905  File Format  Error in the sample groups in 14496-12  David Singer  Noted
m37907  File Format  ISOBMFF Defect  Jean Le Feuvre, Cyril Concolato  Noted
m37908  File Format  Editorial reorganization of MIME-related sections in ISOBMFF  Cyril Concolato, Jean Le Feuvre  Accepted N15920
m37914  File Format  Support of VR video in ISO base media file format  Ye-Kui Wang, Hendry, Marta Karczewicz  Noted
m37929  File Format  Updates on Web Interactivity Track  Thomas Stockhammer  Noted
m37931  File Format  Updates on Partial File Handling  Thomas Stockhammer  Noted
3.3 Summary of discussions
3.3.1 m37552 Summary of Voting on ISO/IEC 14496-12:201x/PDAM 1
Disposition: noted
Covered in Audio.
3.3.2
m37851 Editorial comment on 14496-12
Disposition: accepted
Into the corrigendum text (this is editorial).
3.3.3
m37869 Derived visual tracks for ISOBMFF and HEIF
Disposition: noted
We wonder if we could use the unified number space for track and item IDs in the track
reference box, and then in the samples use the track reference index to select source(s)?
Quicktime Effects tracks were very much like this and we should see what can be learnt from
them. Other effects include alpha composition and so on. We are concerned that we may be
straying into the composition layer, and we need to be clear what's media carriage and what's
composition. At what point do 'non-destructive operations on video' become composition?
We welcome further contributions.
3.3.4
m37870 Relative location offsets for ISOBMFF
Disposition: noted
Alas, fragment processing is defined to be local to the client and fragments (#) are stripped
before HTTP request. Queries are server-based (?) but the syntax is defined by the server, not
the standard. Maybe there could be an HTTP header or value of content-range that asks for a
set of boxes rather than a set of byte-ranges? We note that adding 4 bytes to the beginning of
any mdat is fine today, but maybe we can use an index rather than identifier? Yes, we need
relative addressing to work; an even more complex itemLocation box could be defined ('mdat
relative construction method' that picks out the mdat), but that doesn't solve data in tracks.
We should study the fragment identifiers defined in MPEG-21. The use-cases may be of
interest to JPEG, and indeed JPIP (which allows retrieval of boxes or parts thereof) should be
studied.
Further contributions welcome.
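(Illustration only, with a hypothetical URL, of the point above: the fragment part is client-local and never reaches the server, whereas a byte-range request is carried in an HTTP header.)

    # Minimal sketch, hypothetical URL: the '#' fragment never reaches the server,
    # whereas a byte-range request travels in an HTTP header.
    from urllib.parse import urlsplit

    url = "http://example.com/movie.mp4#box=moov"       # hypothetical fragment syntax
    parts = urlsplit(url)
    print(parts.fragment)                                # 'box=moov' -- stripped before the request
    request_url = parts._replace(fragment="").geturl()   # what actually goes on the wire
    headers = {"Range": "bytes=0-1023"}                   # server-side: first 1024 bytes only
    print(request_url, headers)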
3.3.5
m37896 Study on Base Media File Format 14496-12 PDAM1
Disposition: noted
Progressed in the audio group.
3.3.6
m37905 Error in the sample groups in 14496-12
Disposition: noted
Into the defect report, and we hope into the COR once the DCOR has closed balloting.
3.3.7
m37907 ISOBMFF Defect
Disposition: noted
We feel that making a study for a ballot that closes a week after the meeting is of questionable
utility. We would welcome these as NB comments; the editors will address those that are
editorial and incorporate the rest into a revised Defect Report.
3.3.8
m37908 Editorial reorganization of MIME-related sections in
ISOBMFF
Disposition: accepted
We like the general direction, but we need to do the drafting and see how it works. We would
need one or more new RFCs that obsolete the existing ones and point to the new texts.
Editors and enthusiasts to draft and propose.
3.3.9
m37914 Support of VR video in ISO base media file format
Disposition: noted
Seems like a reasonable approach, and maybe 37869 above might apply for when the VR is
tiled over several tracks?
3.3.10
WD of amendment 2 (output from last meeting)
Disposition: noted
We think that the handling of MIME types and parameters should be removed, given the input
above (37908) and a more comprehensive approach taken.
Revised output WD at this meeting, without the MIME material.
3.3.11
Joint meeting with JPEG
In a joint meeting with JPEG we decided to cease the dual publication of 14496-12 with
15444-12; we will withdraw 15444-12 and mark it "superseded by 14496-12".
We also look for volunteers to help modernize the Registration Authority, making it easier
and more dynamic to browse, better linked to conformance files, and so on. JPEG notes that
there are boxes missing from the RA that are in their specs, and the RA Representative
explained that the RA is never proactive on registration; registration happens on request.
3.3.12
Misc
There is a clash between the file format, which says that 4CCs are simply four printable
characters, and the codecs parameter, which effectively says that they cannot include a ".".
Ouch. How do we address this? Introduce an escape syntax? (Make sure it's in the defect
report.) Be aware that a period character may be in the first four characters and is not a
separator but part of the 4CC? Or escape it? What do we do with spaces?
Some people would like to use MP3-in-4 and know it's MP3, but none of our signaling is that
precise. Can we do better?
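(Illustration only, not an agreed solution: a minimal sketch of why a period inside a 4CC defeats naive parsing of the codecs parameter, plus one hypothetical escape idea that is not in any spec.)

    # Minimal sketch: naive splitting of the RFC 6381 'codecs' value assumes '.' is
    # always a separator, which fails if the leading 4CC itself contains a period.
    def split_codec_elements(codecs_value):
        return codecs_value.split(".")

    print(split_codec_elements("hvc1.1.6.L93.B0"))   # ok: ['hvc1', '1', '6', 'L93', 'B0']
    print(split_codec_elements("ab.c.1.6"))          # wrong: the 4CC 'ab.c' is cut in two

    # Hypothetical escape syntax (illustrative only): percent-encode the period
    # inside the 4CC so '.' remains unambiguous as a separator.
    print("ab%2Ec.1.6".split("."))                   # ['ab%2Ec', '1', '6']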
3.4
Action Points / Ballots
JTC 1.29.13.412/COR1 (14496-12/COR1)
ISO/IEC 14496-12:2015/DCOR 1 (SC 29 N 15490)
Part 12: ISO base media file format, TECHNICAL CORRIGENDUM 1
DCOR (2016-03-07)
4.
MP4 File Format (14496-14)
4.1
Topics
4.2
Contributions
Number Session Title Source Dispositions
m37639 File Format Summary of Voting on ISO/IEC 14496-14:2003/DCOR 2 SC 29 Secretariat Noted
4.3
Summary of discussions
4.3.1
m37639 Summary of Voting on ISO/IEC 14496-14:2003/DCOR 2
Disposition: noted
With no comments, we assume we do not need a DoCR. The Corrigendum incorporates other
editorial fixes (e.g. m37851) as appropriate.
4.4
Action Points / Ballots
5.
NAL File Format (14496-15)
5.1
Topics
5.1.1 ISO/IEC 14496-15:201x 4th edition
This edition adds the storage of video bitstreams consisting of multiple views and the
associated depth, encoded based on Annex I of ISO/IEC 14496-10. The design is based on the
MVC file format, which is specified in Clause 7 of ISO/IEC 14496-15, in a backwards-
compatible manner. In the design, storage of the texture and depth of a particular view in
either separate tracks or the same track is supported. The design also includes the signalling of
various indications, such as the presence of texture and/or depth for each view, as well as
whether the texture or depth component or both of a view is required for the presentation of
another view. The amendment also adds the signaling (using HEVC video descriptor) to
indicate use of HEVC low-delay coding mode in each access unit where the STD buffer
management is performed using the HEVC HRD parameters. In addition, this new edition
provides file format support for AVC-based 3D video excluding MVC, e.g., MVC+D, 3D-AVC, MFC, and MFC+D.
5.2
Contributions
Number Session
m37644 File Format
Title
Summary of Voting on
ISO/IEC DIS 14496-15 [4th
Edition]
Input on 14496-15 defect
report
m37527 File Format
m37631 File Format
HEVC tile tracks brand
m37632 File Format
L-HEVC track layout
simplifications
m37830 File Format
Comments on the defect
report of 14496-15
m37864 File Format
m37865 File Format
m37873 File Format
m37915 File Format
m38147 File Format
ISO/IEC 14496-15: on extractor design for HEVC files
ISO/IEC 14496-15: decode time hints for implicit reconstruction of access units in L-HEVC
HEVC tile subsets in 14496-15
Comments on ISO/IEC 14496-15
ISO/IEC 14496-15: on extractor design for HEVC files (merging
Source
Dispositions
SC 29 Secretariat
Refer
15927
Ye-Kui Wang, Guido
Franceschini, David
Singer
Jean Le Feuvre, Cyril
Concolato, Franck
Denoual, Frédéric
Mazé
Jean Le Feuvre, Cyril
Concolato, Franck
Denoual, Frédéric
Mazé
Teruhiko Suzuki,
Kazushiko Takabayashi,
Mitsuhiro Hirabayashi
(Sony), Michael Dolan
(Fox)
M. M. Hannuksela, V. K.
Malamal Vadakital
(Nokia)
V. K. Malamal Vadakital,
M. M. Hannuksela
(Nokia)
Accepted
15929
Karsten Grüneberg,
Yago Sanchez
Hendry, Ye-Kui Wang,
Refer
m37864
Accepted
15928
Accepted
15928
M. M. Hannuksela, V.
K. Malamal Vadakital
(Nokia), K. Grüneberg,
Accepted
15928
Accepted
15928
noted
Refer
m37864
Noted
m37864 and m37873)
m3804
5
5.3
Plenary
National Body
Contribution on the defect
report of ISO/IEC 14496-15
Y. Sanchez
(Fraunhofer HHI)
Japan NB via SC 29
Secretariat
accepted
Summary of discussions
5.3.1
m37644 Summary of Voting on ISO/IEC DIS 14496-15 [4th Edition]
Disposition: see DoCR
Thank you. We expect to issue the draft FDIS text after (a) a 6-week editing period and (b)
actual FDIS text after review for consistency at the next meeting. There are a few questions
remaining that will require study by experts to help us make an informed and correct
conclusion.
One of them concerns the applicability of 'labelling' in tracks that contain more than just the
base layer – the sample tables and sample groups.
Options; they apply to:
(a) entire bitstream
(b) the bitstream formed by the layers in the track and their direct and indirect reference layers
(c) all the layers in this track
(d) base layer only
(e) any valid subset of the layered bitstream
a – vague, but could be the tracks forming the complete subset; implies editing all tracks when streams are subsetted, or that the information may be incomplete
b – fails to mark useful points in upper layers when the lower layers don't have that (e.g. more frequent sync points)
c –
d – must be used for the tracks marked hev1/hvc1
e – it may depend on sample entry type and whether there is an LHEVC config in the sample entry.
For HEVC-compatible tracks, the rules need to be congruent with what the HEVC reader
expects.
sync sample table
independent and disposable samples
sample groups
sap (documents layers, so not a problem)
sync
tsas
stsa
roll
rap
5.3.2
m37632 L-HEVC track layout simplifications
Disposition: partially accepted
We agree that we should produce one or more brands that define simplified subsets, but we
don't feel we can define these, or remove major tools, in an FDIS when the DIS ballot passed.
But some restrictions we feel able to make generally now (casually phrased here, see the
contribution for details):
- align in-band/out-of-band sample entry types over all tracks (of the same bitstream)
- if you are only HEVC-compatible, don't use the '2' sample entries
We envision a few brands defined in an amendment, such as:
- implicit reconstruction only
- extractor-based simple cases
- tiled simple cases
5.3.3
m37631 HEVC tile tracks brand
Disposition: accepted in principle
While we did not review the details, we agree to insert such a brand ('simple tiling') into the
expected amendment of 'simple brands'. We understand the last to mean "don't spread both
tiling and temporal scalability over multiple tracks". The language will need work for
precision, and we're not sure we agree with all the restrictions or have all the possible
restrictions here.
5.3.4
m37915 Comments on ISO/IEC 14496-15
Disposition: partially accepted
We wonder whether we can simply disallow EOB NAL units? No, they have an effect, so we
need the rule that during implicit reconstruction if there are EOBs, they are merged and a
single one is placed at the end of the AU.
Yes, the word 'slice' after NAL unit is odd; we agree that the definition needs editorial work
and note it is not aligned with the HEVC definition.
Extractors must not be circular at the sample level, and we hope to simplify at the file/brand
level. Also, we expect to ban the use of extractors in simple tile tracks in the upcoming
amendment.
On passing only Tile data to a decoder, we think that what is passed is necessarily dependent
on the decoder design and what it needs; we might have informative text of an example
process.
Tile regions are defined by tile region sample groups.
5.3.5
m37865 ISO/IEC 14496-15: decode time hints for implicit
reconstruction of access units in L-HEVC
Disposition: noted
We think that the number of file-based readers that need decode timestamps for pacing or
need HRD-compliant pacing is very small. We note that at the moment we use decode-time
alignment for implicit reconstruction. Do we need to fix it in FDIS, or can we re-use B.4.1 or
something like it, in an amendment? Contributions welcome.
We should note that when operating at lower levels, the timestamps may not correspond to
ideal pacing or HRD compliance, and that ctts may or will be present even if no re-ordering
occurs.
We agree that we need to make a statement that decoding times must be such that if the tracks
were combined into a single stream ordered by decoding time, the access unit order will be
correct.
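(Illustration only, with made-up sample lists: a minimal sketch of the rule just stated, i.e. combining the tracks into a single stream ordered by decoding time must yield the correct access unit order.)

    # Minimal sketch: merge the samples of several L-HEVC tracks by decoding time,
    # as implicit reconstruction expects. Values are hypothetical.
    import heapq

    # (decode_time, track_id, sample_index)
    track_base  = [(0, "base", 0), (1000, "base", 1), (2000, "base", 2)]
    track_layer = [(0, "layer", 0), (1000, "layer", 1), (2000, "layer", 2)]

    merged = list(heapq.merge(track_base, track_layer))
    # Samples sharing a decode time together form one access unit; the resulting
    # access-unit order must be correct after this merge.
    for dts, track, idx in merged:
        print(dts, track, idx)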
5.3.6
M38147 ISO/IEC 14496-15: on extractor design for HEVC files
(merging m37864 and m37873)
Disposition: accepted
We note that this would add a byte to all extractors and other contributions have already
expressed concern over their bit-rate in some circumstances; we're also worried about
introducing a technical improvement like this in an FDIS text. We assume that the 'rewrite'
could be done between the file-format parser and the HEVC decoder, but this functionality
acts rather like hint tracks: it tells the re-write process what needs to be done, cook-book
style, whereas a general re-writer would need to inspect the stream and work it out for itself.
There may be cases where working it out on the fly is hard, without any ancillary information.
We like at least some of the use-cases, however.
Choices:
1) Reject
2) Accept
3) Define a new ExtractorMultiple in an amendment, and leave the existing extractor
alone
4) Introduce a reserved=0 byte in the existing extractor now, and rules for when it's not 0,
and then amendment introduce the new material
5) Defer extractors now to a future amendment while we think (remove all mention of
extractors from the HEVC section for the moment)
We like it enough not to do (1). Option (5) may not be as simple as it looks and leads to a proliferation
of sample entry variants (requiring extractors or not). We are hesitant to put new technology
in an FDIS text (2). Option (4) has the bit-rate anxiety and doesn't have forward compatibility;
if this byte is non-zero, then parsing must cease. That means that we'd need two sets of
sample entry-types, and we start exploding again. (3) has two extractor types, further
increasing choices and complexity, but actually the choices are there in specification versions
and player versions; we could select which extractor is required under the extractor brand. But
(3) would require more sample entry types, again (which extractors are you required to
support?).
It looks as if (2) is the only way ahead that doesn't introduce even more complexity. Accept
and check.
5.3.7
m37864 ISO/IEC 14496-15: on extractor design for HEVC files
Disposition: noted
See above
5.3.8
m37873 HEVC tile subsets in 14496-15
Disposition: noted
See above
5.3.9
m37527 Input on 14496-15 defect report
Disposition: accepted
We have also had some communication that some readers are unclear whether emulation-prevention
bytes are included, but it's clear that a sample is an Access Unit, the Access Unit is a set of NAL
units, and NAL units have emulation prevention. It might appear that there are two syntaxes for
NAL unit, but it's clear which we select (the one without start codes). Do we need a note?
We publish a revised defect report incorporating the discussion below, and this point.
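(Illustration only, assuming 4-byte length fields: a minimal sketch of the selected NAL unit syntax, i.e. length-prefixed NAL units without start codes, with emulation-prevention bytes left inside each NAL unit.)

    # Minimal sketch: walk the NAL units of one sample. In the file format the NAL
    # units are length-prefixed, not start-code-prefixed, and emulation-prevention
    # bytes remain inside each NAL unit.
    def nal_units(sample, length_size=4):
        offset = 0
        while offset + length_size <= len(sample):
            nal_len = int.from_bytes(sample[offset:offset + length_size], "big")
            offset += length_size
            yield sample[offset:offset + nal_len]
            offset += nal_len

    # Hypothetical two-NAL-unit sample, for illustration only.
    sample = (3).to_bytes(4, "big") + b"\x40\x01\x0c" + (2).to_bytes(4, "big") + b"\x42\x01"
    print([nu.hex() for nu in nal_units(sample)])   # ['40010c', '4201']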
5.3.10
m38045 National Body Contribution on the defect report of ISO/IEC
14496-15
Disposition: accepted
We will ensure that any Corrigendum truly is a correction and not a technical change, and we
will take care to make sure any technical change is communicated to possibly concerned
parties.
If needed we will try to de-correlate the Sync Sample Table and the SAP type so as to respect
existing definitions. We also need to ensure that the common definitions (e.g. of what a Sync
sample promises) are in line with the specific definitions for them in part 15, and indeed for
BLA and CRA we have some contradictions. At the moment, we do seem to have such a
contradiction. We need to send liaisons to a number of organizations outlining the problem
and requesting their view and how to resolve it. (DVB, DASH-IF, DECE, BDA, 3GPP SA4,
ATSC).
5.3.11
m37830 Comments on the defect report of 14496-15
Disposition: noted
(See also M38045 above). Another option is that we maintain the formal definition as-is, but
have a strong recommendation to follow a stricter definition, and some way in the files to
indicate which definition was followed. We think a note will be needed no matter what.
5.3.12
Editor's comments
1. In 7.6.7:
When version 1 of the SampleToGroupBox is used for the random access point sample
grouping, the grouping_type_parameter specifies the tier_id value of the layer(s) or view(s)
that are refreshed in the associated sample. [Ed. (MH): This paragraph specifies the bitstream
subset to which the random access point sample group applies, and hence seems to contradict
with the statements above that the information provided by the random access point sample
group is true for any valid subset in the entire MVC/MVD bitstream.]
We need to differentiate version 0 and version 1 clearly.
2. In 9.6.3:
The semantics of the fields that are common to LHEVCDecoderConfigurationRecord and
HEVCDecoderConfigurationRecord remain unchanged. [Ed. (YK): A check is needed on
whether some of the semantics need to be adjusted to be clearly applicable in the multi-layer
context.]
Editors to check.
3. In 9.7.3.1.1:
[Ed. (MH): The constraints above require the needed parameter sets to be present within the
samples or sample entries that are needed for decoding a picture on a particular layer. The
constraints might have to be even stricter so that they would require the needed parameter sets
to be present within the picture units of the present particular layer or the reference layer and
within the samples or sample entries that are needed for decoding a picture on a particular
layer.]
No strong opinions; isn't this a video coding spec. question? (In what layer are the parameter
sets for that layer?). The rules for the video coding layer must also be followed.
4. In 9.7.3.1.1:
[Ed. (YK): There was a note under m37228 in the FF minutes saying: "It's also clear (but we
probably don't say it and should) that if multiple sample entries are used with in-line
parameter sets, then the parameter sets are reset at a change of sample entry. That also means
that any change of sample entry has to be at an IRAP." The explicit restriction mentioned in
the last sentence should be applicable to all file formats of video codecs using parameter sets.
Should it be separately said in all the FFs of AVC and its extensions, and HEVC and its
extensions, or just be said in a common place? Or, and it seems to make sense, to add to part
12 for all video FFs, to say that sample entry change shall be at a random access point (or
even sync sample).]
We state that decoders are reset on change of sample entry of the base track, and behavior is
undefined if the base track does not change sample entry but a layer track does.
5. In A.5:
[Ed. (YK): What if the set of extracted or aggregated NAL units is empty? Same as for SVC,
each of these fields takes a value conformant with the mapped tier description? But there may
be no such mapped tier for MVC and MVD. Disallow empty cases?]
We delete the question, as we think empty extractors and aggregators are at most useless and
probably not allowed.
5.4
Action Points / Ballots
ISO/IEC DIS 14496-15 4th Edition (SC 29 N 15245)
Part 15: Carriage of NAL unit structured video in the ISO Base Media File Format
DIS (2016-02-05)
6.
Open Font Format (14496-22)
6.1
Topics
6.1.1 ISO/IEC 14496-22 AMD 1 Updates for font collections functionality
The third edition of the "Open Font Format" specification integrated changes introduced by all
previous amendments and corrigenda and added a number of new tools and technologies, such as
extended functionality for font collections, new text layout feature tags and new glyph encoding
formats. Early implementations of the new font collection functions revealed certain aspects of the
specification that were deemed to be incomplete; this amendment is intended to clarify specific details of
font collection functionality and make other necessary changes and updates.
6.1.2 ISO/IEC 14496-22 AMD 2 Updated text layout features and implementations
6.2
Contributions
Number Session Title Source Dispositions
m37774 font Proposed amendment items for ISO/IEC 14496-22 "Open Font Format" Vladimir Levantovsky (on behalf of AHG on Font Format Representation) Accepted 15931
6.3
Summary of discussions
6.4
Action Points
JTC 1.29.13.222.01 (14496-22/AMD1)
ISO/IEC 14496-22:2015/DAM 1 (SC 29 N 15375)
Part 22: Open Font Format, AMENDMENT 1: Updates for font collections functionality
DAM (2016-04-11)
7.
Timed Text (14496-30)
7.1
Topics
7.2
Contributions
Number Session Title Source Dispositions
m37811 File Format Summary of Voting on ISO/IEC 14496-30:2014/DCOR 2 SC 29 Secretariat Refer 15932
m38165 File Format Misc-Clarifications-on-14496-30-2014 Michael Dolan Accepted 15933
7.3
Summary of discussions
7.3.1
M38165 Misc-Clarifications-on-14496-30-2014
Disposition: accepted with revisions
Thank you. We add an informative section, and a clear definition of 'empty document'.
7.4
Action Points
8.
Reference Software and Conformance for File (14496-32)
8.1
Topics
8.2
Contributions
Number Session Title Source Dispositions
m37928 File Format On Conformance and Reference Software Waqar Zia, Thomas Stockhammer Accepted N15935
m38126 File Format Coverage of, additions and correction to the ISOBMFF conformance files Cyril Concolato, Jean Le Feuvre Accepted N15935
8.3
Summary of discussions
8.3.1
m38126 Coverage of, additions and correction to the ISOBMFF
conformance files
Disposition: accepted
We will also look together at updating the RA, at least to point to tools and conformance files,
but maybe to improve it generally, base it in Github, and so on. We will run the tool here over
the files contributed files for Image file format, below. We also need to look at merging the
old spreadsheet and this feature-coverage list. New WD from this meeting.
8.4
Action Points
9.
MVCO (21000-19)
9.1
Topics
9.1.1 ISO/IEC 21000-19 AMD 1 Extensions on Time-Segments and Multi-Track Audio
This amendment constitutes an extension to MVCO and in particular to its functionality related to
description of composite IP entities in the audio domain, whereby the components of a given IP entity
can be located in time and, for the case of multi-track audio, associated with specific tracks.
9.2
Contributions
Number Session Title Source Dispositions
m38031 MPEG21 Contribution to WD of ISO/IEC 21000-19 AMD 1 Extensions on Time-Segments and Multi-Track Audio Thomas Wilmering, Panos Kudumakis Accepted 15936
9.3
Summary of discussions
9.4
Action Points / Ballots
10.
Contract Expression Language (21000-20)
10.1
Topics
10.1.1 ISO/IEC 21000-20 2nd edition
The identified additions will permit the use of MCO and CEL as electronic formats for media
contracts, by covering aspects that are missing in the first edition. For example, real narrative
contracts in most cases indicate the applicable law and the court agreed upon for any dispute
among the parties; in addition, a number of conditions found in real contracts cannot be
modelled with the current version of the two standards, such as the representation of payment
obligations related to contracts.
10.2
Contributions
Number Session Title Source Dispositions
m37550 MPEG21 Summary of Voting on ISO/IEC DIS 21000-20 [2nd Edition] SC 29 Secretariat Refer 15937
10.3
Summary of discussions
10.4
Action Points / Ballots
ISO/IEC DIS 21000-20 2nd Edition (SC 29 N 15144)
Part 20: Contract Expression Language
DIS (2015-12-21)
11.
Media Contract Ontology (21000-21)
11.1
Topics
11.1.1 ISO/IEC 21000-21 2nd edition
The identified additions will permit the use of MCO and CEL as electronic formats for media
contracts, by covering aspects that are missing in the first edition. For example, real narrative
contracts in most cases indicate the applicable law and the court agreed upon for any dispute
among the parties; in addition, a number of conditions found in real contracts cannot be
modelled with the current version of the two standards, such as the representation of payment
obligations related to contracts.
11.2
Contributions
Number Session Title Source Dispositions
m37547 MPEG21 Summary of Voting on ISO/IEC DIS 21000-21 [2nd Edition] SC 29 Secretariat Refer 15939
11.3
Summary of discussions
11.4
Action Points / Ballots
ISO/IEC DIS 21000-21 2nd Edition (SC 29 N 15146)
Part 21: Media contract ontology
DIS (2015-12-21)
12.
User Description (21000-22)
12.1
Topics
12.1.1 User Description
The MPEG User Description (MPEG-UD) aims to provide interoperability among various
personalized applications and services. A user can store all of his or her information in the MPEG-UD.
The MPEG-UD may be safely and securely managed by the users, e.g. by separating
public from private encrypted data. Some data is static while other data is dynamic.
12.2
Contributions
Number Session
m37548 user
m37780
user
m37781
user
m37835
user
m37988
user
m37992
user
m38050
user
Title
Summary of Voting on ISO/IEC
DIS 21000-22
Demonstration of audio and
video for MPEG-UD
Source
SC 29 Secretariat
Dispositions
Refer 15941
Hyun Sook Kim,
Noted
Sungmoon Chun,
Hyunchul Ko,
Miran Choi
Demo for MPEG-UD: Face-to-face Miran Choi, Minkyu Noted
speech translation
Lee, Young-Gil Kim,
Sanghoon Kim
RAI's contribution to MPEG-UD
Alberto MESSINA
Accepted
reference software
15943
Mpeg-UD Reference S/W for RRUI Jaewon
Accepted
Service
Moon(KETI),
15943
Jongyun Lim(KETI),
Tae-Boem
Lim(KETI), Seung
Woo Kum(KETI),
Min-Uk
Kim(Konkuk Univ.),
Hyo-Chul
Bae(Konkuk Univ.)
Demonstration of usage of MPEG- Bojan Joveski,
Noted
UD
Mihai Mitrea,
Rama Rao Ganji
Proposal of error modification of Si-Hwan Jang,
Accepted
validator in reference software
Sanghyun Joo,
15943
Kyoung-Ill Kim,
Chanho Jung, Jiwon
m38051
user
Demo program of visual
communication for reference
software
m38053
user
Proposal of Validation Tool for
MPEG-21 User Description
Reference Software
12.3
Summary of discussions
12.4
Action Points
ISO/IEC DIS 21000-22
(SC 29 N 15149)
13.
MP-AF (23000-15)
13.1
Topics
Lee, Hyung-Gi
Byun, Jang-Sik Choi
Si-Hwan Jang,
Noted
Sanghyun Joo,
Kyoung-Ill Kim,
Chanho Jung, Jiwon
Lee, Hyung-Gi
Byun, Jang-Sik Choi
Hyo-Chul Bae,
Accepted
Kyoungro Yoon,
15943
Seung-Taek Lim,
Dalwon Jang,
Part 22: User Description
DIS (2015-12-21)
13.1.1 ISO/IEC 23000-15 Multimedia Preservation Application Format
The objective of the Multimedia Preservation Description Information (MPDI) framework is
to provide a standardized description of multimedia content to enable users to plan, execute,
and evaluate preservation operations to achieve the objectives of digital preservation.
13.2
Contributions
Number Session
Title
Source
Dispositions
13.3
Summary of discussions
13.4
Action Points
ISO/IEC FDIS 23000-15 (SC 29 N 15432)
Part 15: Multimedia preservation application
FDIS (2016-04-04)
ISO/IEC 23000-15:201x/DAM 1 (SC 29 N 15434)
Part 15: Multimedia preservation application, AMENDMENT 1: Implementation Guideline for MP-AF
DAM (2016-04-11)
14.
Publish/Subscribe AF (23000-16)
14.1
Topics
14.1.1 ISO/IEC 23000-16 Publish/Subscribe Application Format
Publish/Subscribe (PubSub) is an established communication paradigm where senders do not
communicate information directly to intended receivers but rely instead on a service that
mediates the relationship between senders and receivers. While generic PubSub specifications
exist, there are some specific features that are typical of a multimedia application that can be
easily supported by a media-friendly PubSub format based on MPEG technology.
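(A generic sketch of the paradigm described above, unrelated to the actual 23000-16 syntax: a broker mediates between senders and receivers so they never address each other directly.)

    # Minimal, generic publish/subscribe sketch for illustration only.
    from collections import defaultdict

    class Broker:
        def __init__(self):
            self.subscribers = defaultdict(list)
        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)
        def publish(self, topic, message):
            for callback in self.subscribers[topic]:
                callback(message)

    broker = Broker()
    broker.subscribe("media/segment", lambda m: print("receiver got:", m))
    broker.publish("media/segment", "segment-001.mp4")   # sender never sees the receiver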
14.2
Contributions
Number Session Title Source Dispositions
m37649 AF Summary of Voting on ISO/IEC DIS 23000-16 SC 29 Secretariat Accepted 15944
14.3
Summary of discussions
14.4
Action Points
ISO/IEC DIS 23000-16 (SC 29 N 15235)
Part 16: Publish/Subscribe Application Format
DIS (2016-01-29)
15.
Media Linking AF (23000-18)
15.1
Topics
15.1.1 ISO/IEC 23000-18 Media Linking AF
The development of the MAF called "Media Linking Application Format" (MLAF, ISO/IEC
23000-18) is prompted by the many existing examples of services where media transmitted for
consumption on a primary device (source item) give users hints to consume other related
media on a secondary or companion device (destination items). To facilitate interoperability
of such services it is beneficial to define a standard data structure (a "format") that codifies
the relationship between the two information sources.
15.2
Contributions
Number Session Title Source Dispositions
M38132 MLAF demonstration Marius Preda on behalf of BRIDGET consortium Noted
15.3
Summary of discussions
15.4
Action Points
ISO/IEC DIS 23000-18 (SC 29 N 15436)
Part 18: Media Linking Application Format
DIS (2016-05-12)
16.
Adaptive Screen Content Sharing AF (23000-19)
16.1
Topics
16.1.1 ISO/IEC 23000-19 Adaptive Screen Content Sharing Application Format
16.2
Contributions
Number Session
Title
Source
16.3
Summary of discussions
16.4
Action Points
17.
Omnidirectional Media AF (23000-20)
17.1
Topics
Dispositions
17.1.1 ISO/IEC 23000-20 Omnidirectional Media Application Format
17.2
Contributions
Number
m37819
Session
AF
Title
Additional Requirements
for Omnidirectional Media
Format
Source
Jangwon Lee, Sejin Oh,
Jong-Yeul Suh
Dispositions
Accepted
16143
m37836
AF
Requirements for
Omnidirectional Media
Application Format
m37837
AF
Proposed Text for
Omnidirectional Media
Application Format
m37914 AF
Support of VR video in
ISO base media file
format
17.3
Summary of discussions
17.4
Action Points
18.
Common Media AF (23000-21)
18.1
Topics
Eric Yip, Byeongdoo Choi,
Gunil Lee, Madhukar
Budagavi, Hossein NajafZadeh, Esmaeil
Faramarzi, Youngkwon
Lim(Samsung)
Byeongdoo Choi, Eric
Yip, Gunil
Lee,Madhukar
Budagavi, Hossein
Najaf-Zadeh, Esmaeil
Faramarzi, Youngkwon
Lim(Samsung)
Ye-Kui
Wang, Hendry, Marta
Karczewicz
Accepted
16143
Accepted
15946
Accepted
15946
18.1.1 ISO/IEC 23000-21 Common Media Application Format
18.2
Contributions
Number Session
m37755 AF
Title
Proposed Common Media
Format for Segmented
Media
m37756
Proposed Requirements for
Common Media Format for
Segmented Media
AF
Source
David Singer, Kilroy
Hughes, Bill May, Jeffrey
Goldberg, Alex Giladi,
Will Law,
Krasimir Kolarov, John
Simmons, David Singer,
Iraj Sodagar, Bill May,
Jeffrey Goldberg, Will
Law, Yasser Syed, Sejin
Oh, Per Fröjdh, Kevin
Streeter, Diego Gibellino
Dispositions
Accepted
15947
Accepted
16144
m37872
AF
Comments on the use cases, requirements and
specification for Common
Media Format
Thorsten Lohmar,
Prabhu Navali, Per
Fröjdh, Jonatan
Samuelsson
18.3
Summary of discussions
18.4
Action Points
19.
Common encryption format for ISO BMFF (23001-7)
19.1
Topics
Accepted
15947
19.1.1 ISO/IEC 23001-7 3rd Edition
This format defines a way to encrypt media (audio, video, etc.) in files of the ISO base media
file format family. By using a common encryption format, a single media asset can be used by
several services and devices using different digital rights management systems, and the
implementation complexity that would be consequent on having duplicate files and formats
for the same content can be reduced or eliminated.
19.2
Contributions
Number Session Title Source Disposition
19.3
Summary of discussions
19.4
Action Points / Ballots
20.
CICP (23001-8)
20.1
Topics
20.2
Contributions
Number Session Title Source Disposition
20.3
Summary of discussions
20.4
Action Points
ISO/IEC FDIS 23001-8:201x [2nd Edition] (SC 29 N 15233)
Part 8: Coding-independent code-points
FDIS (2015-11-23)
21.
Common encryption format for MPEG-2 TS (23001-9)
21.1
Topics
21.1.1 ISO/IEC 23001-9 AMD 1 Support of Sparse Encryption
The 'cbc2' mode in ISO/IEC 23001-7:201X / AMD1 proposes sparse encryption, where every
16 bytes out of 160 bytes of bitstream are encrypted. Carriage of such a mode in MPEG-2
transport will result in very significant overhead, as two TS packets would be needed to carry
160 bytes of bitstream. This amendment provides a framework that significantly reduces the
overhead of such encryption mode while remaining backwards compliant with the existing
MPEG-2 TS conditional access framework.
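(Illustration only, with a toy stand-in for the cipher: a minimal sketch of the 16-of-160-byte sparse pattern described above, i.e. one 16-byte block encrypted out of every ten.)

    # Minimal sketch: apply a 1-crypt / 9-skip block pattern over a bitstream.
    def apply_sparse_pattern(data, encrypt_block, crypt_blocks=1, skip_blocks=9):
        out = bytearray()
        pos = 0
        while pos < len(data):
            for _ in range(crypt_blocks):
                block = data[pos:pos + 16]
                if len(block) == 16:
                    block = encrypt_block(block)   # partial final block left in the clear
                out += block
                pos += 16
            out += data[pos:pos + 16 * skip_blocks]  # skipped blocks stay unencrypted
            pos += 16 * skip_blocks
        return bytes(out)

    toy_encrypt = lambda b: bytes(x ^ 0xFF for x in b)   # stand-in "cipher" only
    print(len(apply_sparse_pattern(bytes(320), toy_encrypt)))   # 320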
21.2
Contributions
Number Session Title Source Disposition
21.3
Summary of discussions
21.4
Action Points
ISO/IEC FDIS 23001-9:201x [2nd Edition] (SC 29 N 15445)
Part 9: Common Encryption for MPEG-2 Transport Streams
FDIS (2016-03-13)
22.
Timed metadata metrics of Media in ISOBMFF (23001-10)
22.1
Topics
22.1.1 ISO/IEC 23001-10 Timed Metadata Metrics of Media in the ISO Base Media File
Format
Specifies a storage format for commonly used, timed metadata metrics of media, such as
quality related PSNR, SSIM, and others, for carriage in metadata tracks of the ISO Base
Media File Format.
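(For reference, the textbook definition of the PSNR metric named above, not text from the standard; for B-bit samples:)

    \mathrm{PSNR} = 10\,\log_{10}\frac{(2^{B}-1)^{2}}{\mathrm{MSE}},
    \qquad
    \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - \hat{x}_i\right)^2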
22.1.2 ISO/IEC 23001-10 AMD 1 Carriage of ROI
This amendment of ISO/IEC 23001-10 defines a storage format for a new type of timed metadata
metrics which are spatial coordinates. These coordinates relate to the position of a media track
with respect to another media track in the ISO Base Media File Format.
22.2
Contributions
Number Session Title Source Disposition
22.3
Summary of discussions
22.4
Action Points
ISO/IEC 23001-10:2015/DAM 1 (SC 29 N 15499)
Part 10: Carriage of Timed Metadata Metrics of Media in ISO Base Media File Format, AMENDMENT 1: Carriage of ROI coordinates
DAM (2016-05-17)
23.
Green Metadata (23001-11)
23.1
Topics
23.1.1 ISO/IEC 23001-11 Amd.1: Carriage of Green Metadata in an HEVC SEI
Message
This amendment will specify an HEVC SEI message carrying metadata that could be used for
energy-efficient media consumption as specified in ISO/IEC 23001-11:201x. The amendment also
modifies text specifying the carriage of Green Metadata in an AVC SEI message so that the AVC and
HEVC SEI messages are consistent.
23.2
Contributions
Numbe Sessio
r
n
m3756 Green
3
Title
Summary of Voting on
ISO/IEC 23001-11:201x/DAM
1
HEVC Green Metadata
Proposition
m3779
2
Green
m3784
6
Green
Reference Software for ISO
BMFF Green Metadata
parsing
m3801
8
Green
Impact of 8-bits encoding on
the Green Metadata accuracy
m3803
3
Green
m3806
8
Green
Enhancement of
"dequantization and inverse
transform" metadata (III)
Recommended practices for
power measurements and
analysis for multimedia
applications
Source
SC 29 Secretariat
nicolas.derouineau@vitec.
com,
nicolas.tizon@vitec.com,
yahia.benmoussa@univubs.fr
Khaled.Jerbi@thomsonnetworks.com,
Xavier.Ducloux@thomson
-networks.com,
Patrick.Gendron@thomso
n-networks.com
Yahia Benmoussa, Nicolas
Derouineau, Nicolas
Tizon, Eric Senn,
yahia benmoussa, eric
senn, Nicolas Tizon,
Nicolas Derouineau
Steve Saunders, Alexis
Michael Tourapis,
Krasimir Kolarov, David
Singer
Disposition
s
Refer
15948
Noted
Accepted
15951
Noted
Noted
Noted
23.3
Summary of discussions
23.4
Action Points
ISO/IEC 23001-11:2015/DAM 1 (SC 29 N 15168)
Part 11: Green metadata, AMENDMENT 1: Carriage of Green Metadata in an HEVC SEI Message
DAM (2016-01-05)
24.
23001-12 Sample Variants in File Format
24.1
Topics
24.1.1 ISO/IEC 23001-12 Sample Variants in ISOBMFF
This adds support for a general framework for sample "variants" in the ISOBMFF. This
would be used by a forensic "watermarking" system to modify the base sample, but is
independent of the "watermarking" algorithm. Variants are sample data that may be used by a
decoder and DRM system to ultimately output video or audio that is marked in a way that can
be unique to individual decoders or decoder product models. The application of the variants
during the decode process is under control of the DRM system (and ultimately the content
provider).
24.2
Contributions
Number Session Title Source Dispositions
24.3
Summary of discussions
24.4
Action Points
ISO/IEC FDIS 23001-12 (SC 29 N 15115)
Part 12: Sample Variants in the ISO Base Media File Format
FDIS (2015-11-04)
25.
MXM (23006-1)
25.1
Topics
25.1.1 MXM
25.2
Contributions
Numbe
r
m3799
1
Sessio
n
MXM
m3755
8
MXM
Title
Proposed changes to ISO/IEC CD
23006-1 3rd edition
Summary of Voting on ISO/IEC CD
23006-1 [3rd Edition]
25.3
Summary of discussions
25.4
Action Points
26.
MXM API (23006-2)
26.1
Topics
Source
Giuseppe
Vavalà (CEDEO),
Kenichi Nakamura
(Panasonic)
SC 29 Secretariat
Disposition
s
Accepted
15953
Refer
15952
26.1.1 MXM
26.2
Contributions
Numbe
r
m3756
Sessio
n
MXM
Title
Summary of Voting on ISO/IEC DIS
Source
SC 29 Secretariat
Disposition
s
Noted
1
23006-2 [3rd Edition]
m37806
mxm
Editor's preparation for ISO/IEC 3rd
FDIS 23006-2
26.3
Summary of discussions
26.4
Action Points
ISO/IEC DIS 23006-2 [3rd Edition]
(SC 29 N 15127)
Jae-Kwan Yun, JongHyun Jang
Part 2: MPEG extensible middleware
(MXM) API
27.
MXM Conformance (23006-3)
27.1
Topics
Accepted
15954
DIS
(2016-01-05)
27.1.1 MXM
27.2
Contributions
Numbe
r
m3756
2
Sessio
n
MXM
m37807
mxm
Title
Summary of Voting on ISO/IEC DIS
23006-3 [3rd Edition]
SC 29 Secretariat
Disposition
s
Noted
Editor's preparation for ISO/IEC 3rd FDIS
23006-3
Jae-Kwan Yun, JongHyun Jang
Accepted
15955
27.3
Summary of discussions
27.4
Action Points
ISO/IEC DIS 23006-3 [3rd Edition]
Source
Part 3: Conformance and reference
DIS
(SC 29 N 15129)
software
28.
MMT (23008-1)
28.1
Topics
(2016-01-05)
28.1.1 ISO/IEC 23008-1:201x AMD 1 Use of MMT Data in MPEG-H 3D Audio
This amendment defines normative behavior for using the system data carried over MPEG-H
3D Audio.
28.1.2 ISO/IEC 23008-1:201x AMD 2 Mobile MMT
28.2
Contributions
number Sessio
n
m3754 MMT
6
m3755 MMT
4
m3780 MMT
4
m3783
8
m3783
9
m3785
0
m3785
2
m3793
9
m3794
0
m3794
1
Title
Summary of Voting on ISO/IEC
23008-1:2014/DCOR 2
Summary of Voting on ISO/IEC
23008-1:201x/PDAM 1
Proof of concept: NAMF for
Multipath MMT
Source
SC 29 Secretariat
SC 29 Secretariat
MMT
Descriptions on MANE
MMT
MMT Session initiation
MMT
MMT
TRUFFLE: request of virtualized
network function to MANE
The usage of fragment counter for a
bigger size of MFU
MMT Session Setup and Control
Jihyeok Yun, Kyeongwon
Kim, Taeseop Kim, Dough
Young Suh, Jong Min Lee
Youngwan So, Kyungmo
Park
Youngwan So, Kyoungmo
Park
Doug Young Suh, Bokyun
Jo, Jongmin Lee
Dongyeon Kim, Youngwan
So, Kyungmo Park
Imed Bouazizi
MMT
MMT Integration with CDNs
Imed Bouazizi
MMT
Migrating Sessions from HTTP to
MMT
Imed Bouazizi
MMT
Refer
15956
Refer
15958
Noted
Accepted
15960
Accepted 15960
Noted
Accepted 15957
Accepted 15960
Noted
Accepted 15960
m3794
2
m3794
3
m3794
9
m3795
9
m3796
0
m3797
4
m3797
8
m3800
7
m3801
0
m3801
1
MMT
Imed Bouazizi
MMT
MMT as a Transport Component for
Broadcast
Support for new Container Formats
MMT
[MMT] Update of MP table
MMT
Personalized Video Insertion for
TRUFFLE
NAT Traversal for TRUFFLE
Hyun-Koo Yang, Kyungmo
Park, Youngwan So
Jongmin Lee
m3801
2
m3802
8
m3804
3
MMT
MMT
MMT
MMT
MMT
MMT
MMT
MMT
MMT
m3807 MMT
3
M3744
2
28.3
MMT signaling for Presentation
Timestamp
MMT signaling message for
consumption Report
An optimized transfer mechanism for
static slices in video stream
Improving protocol efficiency for
short messages transmission
Real-time interaction message
Resource Request/Response
Message
Proposed editorial improvements for
MMT 2nd edition
Request for editorial correction of
header compression in ISO/IEC
23008-1 2nd edition
comment on FEC signaling in
MMT
Signaling of Content Accessible Time
Window
Imed Bouazizi
Jongmin Lee
Kyungmo Park
jaehyeon bae, youngwan
so, kyungmo park
Yiling Xu, Chengzhi Wang,
Hao Chen, Wenjun Zhang
Yiling Xu, Hao Chen, Ning
Zhuang, Wenjun Zhang
Yiling Xu, Ning Zhuang,
Hao Chen, Chengzhi
Wang, Wenjun Zhang
Yiling Xu, Hao Chen, Ning
Zhuang, Wenjun Zhang
Kyungmo Park
Changkyu Lee, Juyoung
Park,
Accepted 15960
noted
Accepted 15960
Accepted 15960
Accepted 15960
Accepted 15960
Accepted 15960
Noted
Noted
Noted
Accepted 15961
Accepted 15964
Accepted 15957
Lulin Chen, Shan Liu,
Shawmin Lei
Noted
Yiling Xu, Chengzhi Wang,
Wenjie Zhu, Wenjun Zhang
Accepted 15961
Summary of discussions
Refer m38012
28.4
Action Points
ISO/IEC 23008-1:2014/DCOR 2
Part 1: MPEG media transport (MMT),
DCOR
(SC 29 N 15249)
TECHNICAL CORRIGENDUM 2
29.
MMT Reference Software (23008-4)
29.1
Topics
29.2
Contributions
number Sessio
n
m3755 MMT
1
m3794 MMT
8
29.3
Title
Summary of Voting on ISO/IEC DIS 230084
Updates to the MMT reference software
(2015-12-01)
Source
SC 29 Secretariat
Imed Bouazizi
Noted
Accepted 15967
Summary of discussions
Refer m38012
29.4
Action Points
ISO/IEC DIS 23008-4
(SC 29 N 15108)
30.
FEC Codes (23008-10)
30.1
Topics
30.2
Contributions
Part 4: MMT Reference and
Conformance Software
DIS
(2015-12-21)
number Sessio
n
m3801 MMT
4
Performance of adaptive FEC
proposed in m37481
m3809
4
FF-LDGM Codes for CE on
MMT Adaptive AL-FEC
30.3
MMT
Title
Source
Yiling Xu, Wei Huang,
Hao Chen, Bo Li, Wenjun
Zhang
Takayuki Nakachi,
Masahiko Kitamura,
Tatsuya Fujii
Noted
noted
Summary of discussions
Refer m38012
30.4
Action Points
31.
CI (23008-11)
31.1
Topics
31.1.1 ISO/IEC 23008-11 1st edition
MMT defines a composition layer to enable the authoring and delivery of rich media services.
The Composition Information (CI) is authored using HTML5 and thus exhibits all the
capabilities and tools available for HTML5. In addition, MMT CI provides tools to support
dynamic media scenes and their delivery over unicast channels, authoring of content for
secondary screens, as well as separation of media dynamics from scene setup. This is
achieved in a backward compatible manner using a dedicated CI file that is in XML format.
31.2
Contributions
number Sessio
n
m3794 MMT
4
m3800 MMT
1
Title
Customized Presentations for
MPEG-CI
Customized Presentation
Scheme for live broadcasting
m3800 MMT
2
Customized Presentation
Scheme for VoD
m3800 MMT
MMT Asset composition
Source
Imed Bouazizi
Yiling Xu, Wenjie Zhu,
Hao Chen, Wenjun
Zhang
Yiling Xu, Wenjie Zhu,
Bo Li, Lianghui Zhang,
Wenjun Zhang
Yiling Xu, Hao Chen,
Accepted
15970
Accepted
15970
Noted
Noted
3
relationship
m3800 MMT
5
MMT Asset homogeneity
relationship
m3801 MMT
3
Spatial Partitioning Based on
the Attributes of Media
Content
31.3
Teng Li, Wenjun
Zhang
Yiling Xu, Hao Chen,
Teng Li, Wenjun
Zhang
Yiling Xu, Shaowei
Xie, Ying Hu, Wenjun
Zhang
Noted
Noted
Summary of discussions
Refer m38012
31.4
Action Points
32.
MMT Implementation Guide (23008-13)
32.1
Topics
32.1.1 ISO/IEC 23008-13 1st edition
The MMT Implementation Guidelines describe the usage of MMT for different media
delivery scenarios. They describe the different functions that MMT provides and show, using
examples, how they can be deployed separately or together to realize a media delivery service.
32.2
Contributions
number Sessio
n
M3809 MMT
9
m3760 MMT
7
m3762
9
MMT
m3767
2
MMT
m3800
6
MMT
Title
Source
Summary of Voting on ISO/IEC PDTR
23008-13 [2nd Edition]
[MMT-IG] Proposal of additional text
for chapter 5.7.2
SC 29 Secretariat
[MMT-IG]Use case: Combination of
MMT and HTTP streaming for
synchronized presentation
Implementation guidelines for
Multipath MMT: failure moderation
Minju Cho, Yejin Sohn,
Jongho Paik
IG of Content Accessible Time Window
Yejin Sohn, Minju Cho,
Jongho Paik
Jihyeok Yun, Kyeongwon
Kim, Taeseop Kim, Doug
Young Suh, Jong Min
Lee,
Yiling Xu, Chengzhi
Wang, Hao Chen,
Refer
15976
Accepted 15978
Accepted 15978
Noted
Accept
ed
m3803
0
MMT
[MMT-IG] MMT QoS management for
media adaptation service
Wenjun Zhang
Kyungmo Park,
Youngwan So
m3753
3
MMT
Proposal of additional text and data for
MMT Implementation guidelines
Shuichi Aoki, Yuki
Kawamura
32.3
15978
Accepted 15978
Accepted 15978
Summary of discussions
Refer m38012
32.4
Action Points
33.
Image File Format (23008-12)
33.1
Topics
33.1.1 ISO/IEC 23008-12 1st edition
Support for
1) sequences, timed or untimed, with or without audio etc.
2) single still images, the simple case, maybe based on JPX
33.2
Contributions
number Sessio
n
m3764 File
1
Forma
t
m3784 File
4
Forma
t
m3786 File
6
Forma
t
m3786 File
7
Forma
t
Title
Source
Summary of Voting on
ISO/IEC 23008-12:201x/DCOR
1
New brand for Image File
Format
SC 29 Secretariat
Refer
15974
Franck Denoual, Frederic
Maze
Rejected
Multi-layer HEVC support in
HEIF
V. K. Malamal Vadakital, M.
M. Hannuksela (Nokia)
Accepted
15973
Progressive refinement
indication for HEIF
M. M. Hannuksela, E. B.
Aksu (Nokia)
Accepted
15973
m3786
8
File
Forma
t
HEIF conformance files
m3790
4
File
Forma
t
On the Image file format and
enhancements
33.3
E. B. Aksu (Nokia), L.
Heikkilä (Vincit), J.
Lainema, M. M. Hannuksela
(Nokia)
David Singer
Accepted
15935
Accepted
15975
Summary of discussions
33.3.1
m37641 Summary of Voting on ISO/IEC 23008-12:201x/DCOR 1
Disposition: see DoCR
We agree to also incorporate the clarification below under Misc.
33.3.2
m37844 New brand for Image File Format
Disposition: not accepted
We note that a similar effect might be achieved for the user by having a primary item that's a
crop of another item, and one could say that the cropped image is also an alternative to a tile
item (i.e. the tile item isn't primary, but an alternative to a primary). This is more accessible to
more readers (those capable only of cropping, and those capable of decoding tiles).
We note also that files using this new brand would be 'general' HEIF files, not specifically
declared by their MIME type as being HEVC-coded, which might be a disadvantage.
We are concerned about 'brand proliferation' and we feel we will probably need new brands
when we see market adoption.
33.3.3
m37866 Multi-layer HEVC support in HEIF
Disposition: accepted
We note that a layered HEVC coded image will deliver the base layer from the decoder if no
layer selection is made, but this is coding-system dependent. We'd prefer not to copy the
syntax of 'oinf', but that means we'd need to make it a record in the part 15 spec., or perhaps
we can say "has the same fields as the oinf"? Editors to do their best. We should state the
reserved values.
33.3.4
m37867 Progressive refinement indication for HEIF
Disposition: partially accepted
We note that a reader might do progressive display 'opportunistically', noticing that files have
the structures (alternative group, sequence of data) that allows it. We wonder if the brand is
needed to warn/promise.
For now, we'd like an informative Annex on progressive display in the amendment, but hold
on making a brand. There is already a note in 6.2, which should point forward to this new
Annex (which should address the subject of the note as well).
33.3.5
m37904 On the Image file format and enhancements
Disposition: accepted
On the extension problem: we add 's' to the end of the extensions for sequences, in the COR,
and hold off on IANA registration until the COR finishes.
Yes, we switch to avci.
The profile-tier-value is a mistake, should be the 3 bytes etc.
Into the amendment.
33.3.6
m37868 HEIF conformance files
Disposition: accepted
We note that the conformance suite doesn't need to cover all the negative cases, and indeed
there may be some positive ones that don't need checking. We seem to have quite a few not-yet-covered positive cases (filter: blank on the file, positive on the type). Into the WD of
conformance.
33.3.7
Misc
In ItemPropertyContainerBox, it seems possible that a 'free' atom might be present amongst
the properties. We understand that it isn't sensible for it to be associated with any item, but its
presence uses up a slot, correct (i.e. that index within the property container counts but is
unassociated)?
We should clarify this (we think the answer is yes, it uses up a slot).
The definition of 'ispe' claims that its quantity is exactly one, but the text suggests
that it can be more than one. We think the definition is correct and the text should be fixed to
avoid confusion. Could we decide on this e.g. in Thursday's file format session?
Box Type: 'ispe'
Property Type: Descriptive item property
Container: ItemPropertyContainerBox
Mandatory (per an item): Yes
Quantity (per an item): One
The ImageSpatialExtentsProperty documents the width and height of the associated image
item. Every image item must be associated with at least one property of this type, prior to the
association of all transformative properties, that declares the width and height of the image.
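(Illustration only, assuming the FullBox layout with 32-bit image_width and image_height fields: a minimal sketch of serializing the 'ispe' property quoted above.)

    # Minimal sketch: size + type, then version/flags, then width and height.
    import struct

    def ispe_box(width, height, version=0, flags=0):
        payload = struct.pack(">B3s", version, flags.to_bytes(3, "big")) \
                + struct.pack(">II", width, height)
        return struct.pack(">I4s", 8 + len(payload), b"ispe") + payload

    box = ispe_box(1920, 1080)
    print(len(box), box[4:8])   # 20 b'ispe' -- size, type, version/flags, width, height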
33.4
Action Points
34.
Media presentation description and segment formats (23009-1)
34.1
Topics
34.1.1 ISO/IEC 23009-1 AMD 1 Extended profiles and time synchronization
This amendment will add support for UTC timing synchronization and inband event
synchronization.
34.1.2 ISO/IEC 23009-1 AMD 2 Spatial Relationship Description, Generalized URL
parameters and other extensions
This amendment to ISO/IEC 23009-1 adds the ability for MPD authors to express:
- Spatial relationships between representations in the MPD;
- Flexible parameter insertions in URLs used to query media segments;
- Role @values compatible with the kind values used in the W3C HTML5 recommendation;
- Different signaling of client authentication and content authorization methods.
34.1.3 ISO/IEC 23009-1 AMD 3 Authentication, Access Control and multiple MPDs
The following rationales are provided:
- Authentication and authorization of a client are key enablers of establishing trust in a client (e.g., a player). This is needed to ensure only clients that obey a specific set of requirements (e.g. ones imposed by advertisers or MVPDs) have access to content.
- MPD linking allows pointing to media components in other MPDs to provide a relation, such that seamless transition between such MPDs is possible. A specific example is mosaic channels in combination with Spatial Relation Description as defined in ISO/IEC 23009-1:2014 Amd.2.
- The callback event is considered relevant for different use cases, including consumption and ad tracking.
- Period continuity enables breaking content into multiple Periods, while providing a seamless playout across Period boundaries. This is relevant for use cases such as providing robust live services and ad insertion.
34.1.4 ISO/IEC 23009-1 AMD 4 Segment Independent SAP Signalling, MPD chaining
and other extensions
This amendment provides several enhancements to the current ISO/IEC 23009-1, addressing
several use cases of significant importance to the industry:
- MPD chaining is introduced to simplify implementation of pre-roll ads in cases of targeted dynamic advertising in live linear content. This use case ("show a pre-roll, and switch into a live channel or to a VoD asset") is very common and essential for implementing advanced advertising systems.
- Segment Independent SAP Signalling (SISSI) is introduced in order to improve DASH performance in low latency video streaming while using high-efficiency video coding structures. The improved aspects include, among others, reduced channel change time in broadcast and multicast. The solution also allows for a more responsive rate adaptation logic. In this context this amendment defines a new profile which uses the new signaling and is suitable for a linear TV service.
- MPD Reset: this functionality permits signalling that the updated MPD no longer contains previously promised claims, i.e. a reset, typically resulting from an encoder outage.
- Simplified On-Demand Periods: this functionality allows splitting an On-Demand file into multiple virtual Periods.
34.2
Contributions
Numbe
r
m3754
4
m3781
5
m3791
7
m3762
1
m3777
2
Sessio
n
DASH
DASH
Fragmented MPD for DASH streaming
applications
m3784
9
DASH
Comment on 23009-1 study DAM3
m3787
1
DASH
Comments on DAM4
m3790
6
DASH
Possible DASH Defects
m3791
3
DASH
Dependent Random Access Point
(DRAP) pictures in DASH
m3791
6
m3791
8
m3792
0
m3792
1
m3792
2
m3792
3
DASH
Comments on DuI
DASH
Next Generation Audio in DASH
DASH
Roles and Roles schemes
DASH
Labels in DASH
DASH
Selection Priority and Group
DASH
Adaptation Set Switching
DASH
DASH
DASH
Title
Summary of Voting on ISO/IEC 230091:2014/PDAM 4
Summary of Voting on ISO/IEC 230091:2014/DAM 3
Proposed Updates to Broadcast TV
Profile
An automatic push-directive
Source
Dispositions
SC 29 Secretariat Refer 15981
ITTF via SC 29
Secretariat
Thomas
Stockhammer
Peter Klenner,
Frank Herrmann
Lulin Chen, Shan
Liu, PoLin Lai,
Shawmin Lei
Mitsuhiro
HIrabayashi,
Kazuhiko
Takabayashi,
Paul Szucs,
Yasuaki
Yamagishi
Yago Sanchez,
Robert Skupin,
Karsten
Grueneberg,
Cornelius Hellge,
Thomas Schierl
Jean Le Feuvre,
Cyril
Concolato, , ,
J. Samuelsson,
M. Pettersson,
R. Sjoberg
(Ericsson)
Thomas
Stockhammer
Thomas
Stockhammer
Thomas
Stockhammer
Thomas
Stockhammer
Thomas
Stockhammer
Thomas
Stockhammer
Refer 15979
Accepted
15983
noted
Noted
noted
Noted
Accepted
15983
Noted
Accepted
15983
Accepted
15981
Accepted
15983
Accepted
15982
Noted
Accepted
15982
m3792
7
DASH
Clarification on the format of xlink
remote element
m3794
5
m3803
2
m3803
4
DASH
Support for VR in DASH
DASH
m3761
6
DASH
Moving Regions of Interest signalling
in MPD
Technical comments on Amd 3:
authorization and authentication
scheme
[FDH] Comments on DASH-FDH pushtemplate
m3761
7
m3761
8
DASH
m3762
0
m3762
2
DASH
m3762
3
m3762
6
DASH
m3784
3
DASH
m3784
7
DASH
DASH
DASH
DASH
DASH
m3791 DASH
9
m3803 DASH
8
m3807 DASH
0
m3776
1
DASH
CE-FDH: Streaming flow control of
server push with a push directive
CE-FDH- Server push with a
fragmented MPD
Waqar Zia,
Thomas
Stockhammer
Imed Bouazizi
Accepted
15983
Emmanuel
Thomas
Emmanuel
Thomas
Accepted
15982
Noted
PoLin Lai, Shan
Liu, Lulin Chen,
Shawmin Lei
Lulin Chen, Shan
Liu, Shawmin Lei
Lulin Chen, Shan
Liu, PoLin Lai,
Shawmin Lei
Li Liu, XG Zhang
Noted
Noted
Noted
Comments on DASH-FDH URL
Templates
[CE-FDH] Comments on push template Franck Denoual,
Frederic Maze,
Herve Ruellan
[CE-FDH] Editorial comments on WD
Franck Denoual,
Frederic Maze,
[FDH] Comments on FDH WebSocket
Kevin Streeter,
Bindings
Vishy
Swaminathan
[CE-FDH] Fast start for DASH FDH
Franck Denoual,
Frederic Maze,
Herve Ruellan,
Youenn Fablet,
Nael Ouedraogo
[CE-FDH] Additional parameters for
Kazuhiko
Push Directives
Takabayashi,
Mitsuhiro
Hirabayashi
[FDH] Editorial and technical
Iraj Sodagar
comments on WD3
[FDH] CE-FDH Conference Call Notes
Kevin Streeter
Noted
[FDH] Additional comments on
DASH-FDH push-template
Noted
[PACO] Generalized Playback Control:
a framework for flexible playback
PoLin Lai,
Shan Liu, Lulin
Chen,
Shawmin Lei
Iraj Sodagar
Noted
Accepted
15993
Accepted
15993
Accepted
15993
Accepted
15993
Accepted
15993
Noted
noted
m3776
2
m3784
8
DASH
DASH
M3810
7
M3815
0
M3815
1
M3811
9
M3816
4
M3816
2
M3816
7
34.3
control
[PACO] PACO CE Report
Iraj Sodagar
Noted
[PACO] Carriage of playback control
related information/data
Kazuhiko
Takabayashi,
Mitsuhiro
HIrabayashi
Noted
QEC Effort
Ali C. Begen
Comments on Broadcast TV profile
(MPEG-DASH 3rd edition)
Mary-Luc
Champel
Kurt
Krauss, Thoma
s
Stockhammer
Thomas
Stockhammer
Kevin Streeter,
Vishy
Swaminathan
Scheme for indication of content
element roles
[FDH] Lists are sufficient
[FDH] Proposal for Simplified
Template Format for FDH
[FDH] Complementary comments
on URL Template
Li Liu
[CE-FDH] Push Directive enabling
fast start streaming on MPD
request
Frederic Maze,
Franck
Denoual
(Canon), Kazu
hiko
Takabayashi,
Mitsuhiro
Hirabayashi
(Sony)
Summary of discussions
Refer m38918
Noted
Accepted
15983
Noted
Noted
Accepted
15993
Noted
Noted
34.4
Action Points
ISO/IEC 23009-1:2014/DAM 3
(SC 29 N 15183)
Part 1: Media presentation description
and segment formats,
AMENDMENT 3: Authentication, MPD
linking, Callback Event, Period
Continuity and other Extensions
DAM
(2016-02-12)
Draft 3rd edition = 2nd edition + COR 1 + AMD 1 + AMD 2 (Emmanuel) + DAM 3 + DCOR
2
35.
Conformance and Ref. SW. for DASH (23009-2)
35.1
Topics
35.1.1 ISO/IEC 23009-2 1st edition
35.2
Contributions
Number Session
DASH
m37281
DASH
m37226
35.3
Title
On DASH Conformance
Software
DASH: On NB comments on
ISO/IEC CD 23009-2 [2nd
Edition]
Source
Waqar Zia, Thomas
Stockhammer
Christian Timmerer,
Brendan Long, Waqar
Zia,
Dispositions
Noted
noted
Summary of discussions
Refer m38918
35.4
Action Points
ISO/IEC DIS 23009-2:201x [2nd Edition] Part 2: Conformance and reference
(SC 29 N 15641)
software
DIS
(2016-07-28)
36.
Implementation Guidelines (23009-3)
36.1
Topics
36.1.1 ISO/IEC 23009-3 1st edition
36.2
Contributions
Number Session
m37776 DASH
36.3
Title
[Part 3] MPD chaining for DASH ad
insertion with early termination use
case
Source
Iraj Sodagar
Dispositions
Accepted
15990
Summary of discussions
Refer m38918
36.4
Action Points
37.
Segment encryption and authentication (23009-4)
37.1
Topics
37.1.1 ISO/IEC 23009-4 1st edition
37.2
Contributions
Number Session
37.3
Summary of discussions
Title
Source
Dispositions
37.4
Action Points
38.
SAND (23009-5)
38.1
Topics
38.1.1 ISO/IEC 23009-5 1st edition
In order to enhance the delivery of DASH content, Server and network assisted DASH
(SAND) introduces messages that are exchanged between infrastructure components and
DASH clients over underlying network protocols (e.g., HTTP 1.1, 2.0 or Websockets). The
infrastructure components may comprise, but are not limited to, servers, proxies, caches,
CDN and analytics servers.
38.2
Contributions
Number Session
m37721 DASH
m37924 DASH
Title
[SAND] A Proposal to improve SAND
alternatives
[SAND] selection of cached
representations
SAND: Comments on DIS
m37925 DASH
SAND: Addition of Byte Ranges
m37926 DASH
SAND: Partial File Delivery
m37989 DASH
On SAND conformance
m38035 DASH
Report on SAND-CE
M38166
SAND SoDIS 1st draft
M38168
SAND capabilities messages
m37856 DASH
M38154
A Proof of Concept for SAND
conformance
Source
Iraj Sodagar
Remi
Houdaille
Thomas
Stockhammer
Thomas
Stockhammer
Thomas
Stockhammer
Emmanuel
Thomas
Emmanuel
Thomas,
Mary-Luc
Champel, Ali
C. Begen
Mary-Luc
Champel
Mary-Luc
Champel
Emmanuel
Thomas
Dispositions
Accepted
15991
Accepted
15991
Accepted
15991
Accepted
15991
noted
noted
noted
Accepted
15991
Accepted
15991
noted
38.3
Summary of discussions
Refer m38918
38.4
Action Points
ISO/IEC DIS 23009-5
(SC 29 N 15454)
39.
Exploration – Media Orchestration
39.1
Media Orchestration
39.2
Contributions
Number Session
m37532 MORE
m37534
MORE
m37540
MORE
m37609
MORE
m37676
MORE
m37677
MORE
m37751
MORE
m37817
MORE
DIS
(2016-04-20)
Part 5: Server and network assisted
DASH (SAND)
Title
Functional components for
Media Orchestration
Contribution on Media
Orchestration
Timed Metadata and
Orchestration Data for
Media Orchestration
Interim Output AHG on
Media Orchestration
Tutorial: Media Sync in
DVB-CSS and HbbTV 2.0
Coordination functionality
for MPEG MORE
Media Orchestration in the
Security Domain
Proposal of Additional
Requirements for Media
Orchestration
Source
M. Oskar van Deventer
Dispositions
noted
Jean-Claude Dufourd
noted
Sejin Oh
noted
Several
noted
Oskar van Deventer
noted
Oskar van Deventer
noted
Krishna Chandramouli,
Ebroul Izquierdo
Sejin Oh, Jangwon Lee,
Jong-Yeul suh
noted
noted
m37818
MORE
m37900
MORE
m38015
MORE
m38029
MORE
m38036
MORE
Additional Use Cases and
Requirements for Media
Orchestration
Additional Use Case of
Media Orchestration
Shared experience use
cases for MPEG Media
Orchestration
Architecture for shared
experience - MPEG MORE
Shared experience
requirements for MPEG
Media Orchestration
39.3
Summary of discussions
39.4
Action Points
40.
Exploration – Application Formats
40.1
Contributions
Number
m37633
Session
AF
Title
Selective Encryption for
AVC Video Streams
m37634
AF
Selective encryption for
HEVC video streams
Jangwon Lee, Sejin Oh,
Jong-Yeul Suh
noted
Jin Young Lee
noted
Oskar van Deventer
noted
Oskar van Deventer
noted
Oskar van Deventer
noted
Source
cyril bergeron, Benoit
Boyadjis, Sebastien
lecomte
Wassim Hamidouche,
Cyril Bergeron, Olivier
Déforges
Dispositions
Noted
Noted
40.2
Summary of discussions
40.3
Action Points
41.
Liaison
41.1
List of input liaison letters
Numbe Sessio
r
n
m37579 Liaiso
n
m37585 Liaiso
n
m37653 Liaiso
n
m37656 Liaiso
n
m37787 Liaiso
n
m37899 Liaiso
n
m38040 Liaiso
n
m3809
5
m37658 Liaiso
n
m37589 Liaiso
n
Title
Source
Disposition
Liaison Statement from ITU-T
SG 16 on discontinued work
items on IPTV multimedia
application frameworks
Liaison Statement from IETF
ITU-T SG 16 via SC 29
Secretariat
Noted
IETF via SC 29 Secretariat
16015
Liaison Statement from ITU-T
SG 9
Liaison Statement from 3GPP
ITU-T SG 9 via SC 29
Secretariat
3GPP via SC 29 Secretariat
Noted
Liaison Statement from DVB
DVB via SC 29 Secretariat
16019
Reply Liaison on MPEG-DASH
(N15699)
Liaison Statement from HbbTV
Iraj Sodagar on behalf of
DASH-IF
HbbTV via SC 29
Secretariat
Walt Husak
16018
IEC TC 100 via SC 29
Secretariat
IEC TC 100 via SC 29
Secretariat
noted
VSF Liaison
IEC CD 60728-13-1 Ed. 2.0
IEC CD 61937-13 and 61937-14
16017
16020
noted
noted
m37590 IEC CD 61937-2 Ed. 2.0/Amd.2 [IEC TC 100 via SC 29 Secretariat] - noted
m37686 IEC CD 62608-2 Ed.1.0 [IEC TC 100 via SC 29 Secretariat] - noted
m37591 IEC CD 63029 [IEC 100/2627/CD] [IEC TC 100 via SC 29 Secretariat] - noted
m37592 IEC CD 63029 [IEC 100/2627A/CD] [IEC TC 100 via SC 29 Secretariat] - noted
m37650 IEC CDV 62394/Ed.3 [IEC TC 100 via SC 29 Secretariat] - noted
m37595 IEC CDV 62680-1-3, 62680-3-1 and 62943 [IEC TC 100 via SC 29 Secretariat] - noted
m37586 IEC CDV 62766-4-1, 62766-4-2, 62766-5-1, 62766-5-2, 62766-6, 62766-7 and 62766-8 [IEC TC 100 via SC 29 Secretariat] - noted
m37657 IEC CDV 62827-3 Ed. 1.0 [IEC TC 100 via SC 29 Secretariat] - noted
m37659 IEC CDV 62943/Ed. 1.0 [IEC TC 100 via SC 29 Secretariat] - noted
m37598 IEC CDV 63002 Ed. 1.0 [IEC TC 100 via SC 29 Secretariat] - noted
m37660 IEC CDV 63002 Ed.1.0 [IEC TC 100 via SC 29 Secretariat] - noted
m37588 IEC CDV 63028 Ed. 1.0 [IEC TC 100 via SC 29 Secretariat] - noted
m37596 IEC CDV MIDI (MUSICAL INSTRUMENT DIGITAL INTERFACE) SPECIFICATION 1.0 (ABRIDGED EDITION, 2015) [IEC TC 100 via SC 29 Secretariat] - noted
m37587 IEC DTR 62921 Ed. 2.0 [IEC TC 100 via SC 29 Secretariat] - noted
m37651 IEC NP Microspeakers [IEC TC 100 via SC 29 Secretariat] - noted
m37597 IEC NP MIDI (MUSICAL INSTRUMENT DIGITAL INTERFACE) SPECIFICATION 1.0 (ABRIDGED EDITION, 2015) [IEC TC 100 via SC 29 Secretariat] - noted
m37652 IEC NP Multimedia Vibration Audio Systems - Method of measurement for audio characteristics of audio actuator by pinna-conduction [IEC TC 100 via SC 29 Secretariat] - noted
m37593 IEC NP: Universal Serial Bus interfaces for data and power - Part 3-1 [IEC TC 100 via SC 29 Secretariat] - noted
m37594 IEC NP: Universal Serial Bus interfaces for data and power - Part 1-3 [IEC TC 100 via SC 29 Secretariat] - noted
m37661 IEC TR 62935 Ed1.0 [IEC TC 100 via SC 29 Secretariat] - noted
m37685 IEC TS 63033-1 [IEC TC 100 via SC 29 Secretariat] - noted
42. Resolutions of Systems
Refer 15904
43. Action Plan
43.1 Short descriptions
43.2 Check
44.
References
44.1
Standing Documents
Pr
1
1
1
Pt
1
1
1
No.
N7675
N7676
N7677
Meeting
05/07 Nice
05/07 Nice
05/07 Nice
N7678
N7679
N7680
05/07 Nice
05/07 Nice
05/07 Nice
11
1
1
1
1
6
11
12
14
15
13
13
17
18
20
Documents
MPEG-1 White Paper – Multiplex Format
MPEG-1 White Paper – Terminal Architecture
MPEG-1 White Paper – Multiplexing and
Synchronization
MPEG-2 White Paper – Multiplex Format/
MPEG-2 White Paper – Terminal Architecture
MPEG-2 White Paper – Multiplexing and
Synchronization
MPEG-2 White Paper – MPEG-2 IPMP
MPEG-4 White Paper – MPEG-4 Systems
MPEG-4 White Paper – Terminal Architecture
MPEG-4 White Paper – M4MuX
MPEG-4 White Paper – OCI
MPEG-4 White Paper – DMIF
MPEG-4 White Paper – BIFS
MPEG-4 White Paper – ISO File Format
MPEG-4 White Paper – MP4 File Format
MPEG-4 White Paper – AVC FF
White Paper on MPEG-4 IPMP
MPEG IPMP Extensions Overview
White Paper on Streaming Text
White Paper on Font Compression and Streaming
Presentation Material on LASER
2
2
2
1
1
1
2
4
4
4
4
4
4
4
4
4
4
4
4
4
4
N7503
N7504
N7610
N7921
N8148
N8149
N7608
N8150
N7923
N7924
N7505
N6338
N7515
N7508
N6969
4
4
7
7
21
A
A
20
22
1
1
9
X
X
White Paper on LASeR
White Paper on Open Font Format
MPEG-7 White Paper - MPEG-7 Systems
MPEG-7 White Paper – Terminal Architecture
MPEG-21 White Paper – MPEG-21 File Format
MPEG Application Format Overview
MAF Overview Document
N7507
N7519
N7509
N8151
N7925
N9421
N9840
A
X
MAF Overview Presentation
N9841
B
E
X
X
N7922
N6335
E
E
E
E
X
X
X
X
MPEG-B White Paper – BinXML
MPEG Multimedia Middleware Context and
Objectives
1st M3W White Paper
2nd M3W White Paper : Architecture
Tutorial on M3W
M3W White Paper : Multimedia Middleware
05/07 Poznan
05/07 Poznan
05/10 Nice
06/01 Bangkok
06/04 Montreux
06/04 Montreux
05/10 Nice
06/04 Montreux
06/01 Bangkok
06/01 Bangkok
05/07 Poznan
04/03 München
05/07 Poznan
05/07 Poznan
05/01 HongKong
05/07 Poznan
05/07 Poznan
05/07 Poznan
06/04 Montreux
06/01 Bangkok
07/10 Shenzhen
08/04
Archamps
08/04
Archamps
06/01 Bangkok
04/03 München
N7510
N8152
N8153
N8687
05/07 Poznan
06/04 Montreux
06/04 Montreux
06/10 Hangzhou
E
E
E
X
X
X
E
E
E
X
X
X
Architecture
M3W White Paper : Multimedia API
M3W White Paper : Component Model
M3W White Paper : Resource and Quality
Management
M3W White Paper : Component Download
M3W White Paper : Fault Management
M3W White Paper : System Integrity
Management
N8688
N8689
N8690
06/10 Hangzhou
06/10 Hangzhou
06/10 Hangzhou
N8691
N8692
N8693
06/10 Hangzhou
06/10 Hangzhou
06/10 Hangzhou
44.2
Mailing Lists Reminder
Topic
General
Systems List
File Format
Information
Reflector : gen-sys@lists.aau.at
Subscribe: http://lists.aau.at/mailman/listinfo/gen-sys
Archive: http://lists.aau.at/mailman/private/gen-sys/
Reflector : mp4-sys@lists.aau.at
Subscribe: http://lists.aau.at/mailman/listinfo/mp4-sys
Archive: http://lists.aau.at/mailman/private/mp4-sys/
Application
Format
Reflector : maf-sys@lists.aau.at
Subscribe: http://lists.aau.at/mailman/listinfo/maf-sys
Archive: http://lists.aau.at/mailman/private/maf-sys/
CEL, MCO,
PSAF
Reflector: mpeg-m@lists.aau.at
Subscribe: http://lists.aau.at/mailman/listinfo/mpeg-m
Archive: http://lists.aau.at/mailman/private/mpeg-m
MPEG-V
MPEG
Media
Transport
MP-AF
DASH
Green
User
Description
MLAF
Reflector: metaverse@lists.aau.at
Subscribe: http://lists.aau.at/mailman/listinfo/metaverse
Archive: http://lists.aau.at/mailman/private/metaverse/
Reflector: mmt@tnt.uni-hannover.de
Subscribe: http://mailhost.tnt.unihannover.de/mailman/listinfo/mmt
Reflector: preservation-tnt@listserv.uni-hannover.de
Subscribe:https://listserv.uni-hannover.de/cgibin/wa?A0=preservation-tnt
Reflector
dash@lists.aau.at
Subscribe
http://lists.aau.at/mailman/listinfo/dash
Reflector
green-mpeg@googlegroups.com
Subscribe
To subscribe, contact Felix Fernandes
(felix.f@samsung.com)
Reflector
user@lists.aau.at
Subscribe
http://lists.aau.at/mailman/listinfo/user
Reflector
mpeg-maf-dev@lists.aau.at
Subscribe
http://lists.aau.at/mailman/listinfo/mpegmaf-dev
Kindly hosted by Alpen-Adria-Universität Klagenfurt and the University of Hannover.
44.3
Latest References and Publication Status (as of 100th meeting)
Reference on the ISO Web Site :
Pr
Pt
1
1
1
1
Standard
ISO/IEC 13818-1:2000 (MPEG-2 Systems 2nd Edition)
No.
2
2
2
2
ISO/IEC 13818-1:2000/COR1 (FlexMux Descr.)
ISO/IEC 13818-1:2000/COR2 (FlexMuxTiming_ descriptor)
ISO/IEC 13818-1:2000/Amd.1 (Metadata on 2) & COR1 on Amd.1
N3844
N4404
N5867
2
2
1
1
ISO/IEC 13818-1:2000/Amd.2 (Support for IPMP on 2)
ISO/IEC 13818-1:2000/Amd.3 (AVC Carriage on MPEG-2)
N5604
N5771
2
2
1
1
ISO/IEC 13818-1:2000/Amd.4 (Metadata Application CP)
ISO/IEC 13818-1:2000/Amd.5 (New Audio P&L Sig.)
N6847
N6585
2
2
2
1
1
1
ISO/IEC 13818-1:2000/COR3 (Correction for Field Picture)
ISO/IEC 13818-1:2000/COR4 (M4MUX Code Point)
ISO/IEC 13818-1:2000/COR5 (Corrections related to 3rd Ed.)
N6845
N7469
N7895
2
2
1
1
ISO/IEC 13818-1:2007 (MPEG-2 Systems 3rd Edition)
ISO/IEC 13818-1:2007/Amd.1 (Transport of Streaming text)
N8369
2
1
ISO/IEC 13818-1:2007/Amd.2 (Carriage of Auxiliary Video Data)
N8798
2
1
ISO/IEC 13818-1:2007/Cor.1.2 (Reference to AVC Specification)
N9365
2
1
ISO/IEC 13818-1:2007/Cor.3
2
1
ISO/IEC 13818-1:2007/Amd.3 (SVC in MPEG-2 Systems)
2
1
ISO/IEC 13818-1:2007/Amd.3/Cor.1
2
1
ISO/IEC 13818-1:2007/Amd.4 (Transport of Multiview Video)
2
1
ISO/IEC 13818-1:2007/Amd.5 (Transport of JPEG2000)
2
1
ISO/IEC 13818-1:2007/Amd.6 (Extension to AVC descriptor)
2
1
ISO/IEC 13818-1:2007/AMD 7 Signalling of stereoscopic video in
MPEG-2 systems
2
1
ISO/IEC 13818-1 4th edition
2
1
ISO/IEC 13818-1:201X/AMD 1 Extensions for simplified carriage of
MPEG-4 over MPEG-2
2
1
ISO/IEC 13818-1:201X/AMD 2 Signalling of Transport profiles,
signalling MVC stereo view association and MIME type registration
2
1
ISO/IEC 13818-1:2013/AMD 3 Carriage of HEVC
N1093
7
N1005
8
N1093
8
N1074
5
N1170
8
N1171
0
N1246
2
N1263
3
N1284
0
N1325
6
N1365
6
Date
00/12
01/01 Pisa
01/12 Pattaya
03/07
Trondheim
03/03 Pattaya
03/07
Trondheim
04/10 Palma
04/07
Redmond
04/10 Palma
05/07 Poznan
06/01
Bangkok
06/07
Klagenfurt
07/01
Marrakech
07/10
Shenzhen
09/10 Xian
08/07
Hannover
09/10 Xian
09/07 London
11/01 Daegu
11/01 Daegu
12/02 San Jose
12/04 Geneva
12/07
Stockholm
13/01 Geneva
13/07 Vienna
2
1
ISO/IEC 13818-1:2013/AMD 4 Support for event signalling in
Transport Stream in MPEG-2 systems
2
2
1
1
1
2
4
4
4
11
1
1
1
ISO/IEC 13818-1 5th edition
Text of ISO/IEC 13818-1:2015 COR 1
ISO/IEC 13818-1:2015 AMD 4 New Profiles and Levels for MPEG-4
Audio Descriptor
ISO/IEC 13818-1:2003 (IPMP on 2)
ISO/IEC 14496-1 (MPEG-4 Systems 1st Ed.)
ISO/IEC 14496-1/Amd.1 (MP4, MPEG-J)
ISO/IEC 14496-1/Cor.1
4
4
4
4
4
1
1
1
1
1
ISO/IEC 14496-1:2001 (MPEG-4 Systems 2nd Ed.)
ISO/IEC 14496-1:2001/Amd.1 (Flextime)
ISO/IEC 14496-1:2001/Cor.1
ISO/IEC 14496-1:2001/Cor.2
ISO/IEC 14496-1:2001/Cor.3
N3850
4
1
ISO/IEC 14496-1:2001/Amd.2 (Textual Format)
N4698
4
1
ISO/IEC 14496-1:2001/Amd.3 (IPMP Extensions)
N5282
4
4
1
1
ISO/IEC 14496-1:2001/Amd.4 (SL Extension)
ISO/IEC 14496-1:2001/Amd.7 (AVC on 4)
N5471
N5976
4
4
4
4
1
1
1
1
ISO/IEC 14496-1:2001/Amd.8 (ObjectType Code Points)
ISO/IEC 14496-1:200x/Amd.1 (Text Profile Descriptors)
ISO/IEC 14496-1:200x/Cor4 (Node Coding Table)
N6202
N7229
N7473
N5277
4
4
1
1
ISO/IEC 14496-1:200x/Amd.1 (Text Profile Descriptors)
ISO/IEC 14496-1:200x/Cor.1 (Clarif. On audio codec behavior)
N7229
N8117
4
1
ISO/IEC 14496-1:200x/Amd.2 (3D Profile Descriptor Extensions)
N8372
4
1
ISO/IEC 14496-1:200x/Cor.2 (OD Dependencies)
N8646
4
1
ISO/IEC 14496-1:200x/Amd.3 (JPEG 2000 support in Systems)
N8860
4
1
ISO/IEC 14496-1 (MPEG-4 Systems 4th Ed.)
4
1
ISO/IEC 14496-1:2010/Amd.1 (Usage of LASeR in MPEG-4 systems
and Registration Authority for MPEG-4 descriptors)
4
1
ISO/IEC 14496-1:2010 AMD2 Support for raw audiovisual data
4
4
4
4
4
4
ISO/IEC 14496-1 (MPEG-4 Systems 3rd Ed.)
N1365
8
13/07 Vienna
15627
15630
15/10 Geneva
15/10 Geneva
N5607
N2501
N3054
N3278
03/03 Pattaya
98/10 Atl. City
99/12 Hawaii
00/03
Noordwijk.
01/01 Pisa
N4264
N5275
N6587
N1094
3
N1124
8
N1364
7
01/07 Sydney
02/10 Shanghai
04/07
Redmond
02/03 Jeju
Island
02/10
Shanghai
02/12 Awaji
03/10
Brisbane
03/12 Hawaii
05/04 Busan
05/07 Poznan
02/10
Shanghai
05/04 Busan
06/04
Montreux
06/07
Klagenfurt
06/10
Hangzhou
07/01
Marrakech
09/10 Xian
10/04 Dresden
13/04 Incheon
ISO/IEC 14496-4
ISO/IEC 14496-4:200x/Amd.17 (ATG Conformance)
N8861
ISO/IEC 14496-4:200x/Amd.22 (AudioBIFS v3 conformance)
N9295
07/01
Marrakech
07/07
Lausanne
4
4
ISO/IEC 14496-4:200x/Amd.23 (Synthesized Texture conformance)
N9369
4
4
ISO/IEC 14496-4:200x/Amd.24 (File Format Conformance)
N9370
4
4
ISO/IEC 14496-4:200x/Amd.25 (LASeR V1 Conformance)
N9372
4
4
ISO/IEC 14496-4:200x/Amd.26 (Open Font Format Conf.)
N9815
4
4
ISO/IEC 14496-4:200x/Amd.27 (LASeR Amd.1 Conformance)
N9816
4
4
ISO/IEC 14496-4:200x/Amd.37 (Additional File Format
Conformance)
4
4
ISO/IEC 14496-4:200x/Amd.40 (ExtendedCore2D profile
conformance)
N1075
0
N1211
7
4
4
4
4
4
5
5
5
5
5
4
5
4
4
4
6
8
11
4
4
4
4
11
11
11
11
N5480
N6205
N6203
ISO/IEC 14496-11/Cor.3 Valuator/AFX related correction N6594
4
11
ISO/IEC 14496-11/Amd.3 Audio BIFS Extensions
N6591
4
11
ISO/IEC 14496-11/Amd.4 XMT and MPEG-J Extensions
N6959
4
4
11
11
ISO/IEC 14496-11/Cor.3 (Audio BIFS Integrated in 3rd Edition)
ISO/IEC 14496-11/Cor.5 (Misc Corrigendum)
N7230
N8383
4
11
4
4
11
11
ISO/IEC 14496-11/Amd.6 Scene Partitioning
4
11
ISO/IEC 14496-11/Amd.7 ExtendedCore2D Profile
4
12
ISO/IEC 14496-12 (ISO Base Media File Format)
N9021
N1024
7
N1125
1
N5295
4
12
ISO/IEC 14496-12/Amd.1 ISO FF Extension
N6596
07/10
Shenzhen
07/10
Shenzhen
07/10
Shenzhen
08/04
Archamps
08/04
Archamps
09/07 London
11/07 Torino
ISO/IEC 14496-5
ISO/IEC 14496-5:200x/Amd.12 (File Format)
ISO/IEC 14496-5:200x/Amd.16 (SMR Ref. Soft)
ISO/IEC 14496-5:200x/Amd.17 (LASeR Ref. Soft)
ISO/IEC 14496-5:200x/Amd.28 (LASeR Adaptation Ref. Soft)
ISO/IEC 14496-5:2001/Amd. 29 (Reference software for LASeR
presentation and modification of structured information (PMSI))
ISO/IEC 14496-6:2000
ISO/IEC 14496-8 (MPEG-4 on IP Framework)
ISO/IEC 14496-11 (MPEG-4 Scene Description 1st
Edition)
N9020
N9672
N9674
N1156
6
N1211
8
07/04 San Jose
08/01 Antalya
08/01 Antalya
10/10
Guangzhou
11/07 Torino
N4712
N6960
02/03 Jeju
05/01
HongKong
02/12 Awaji
03/12 Hawaii
03/12 Hawaii
04/07
Redmond
04/07
Redmond
05/01
HongKong
05/04 Busan
06/07
Klagenfurt
06/10
Hangzhou
07/04 San Jose
08/10 Busan
ISO/IEC 14496-11/Amd.1 (AFX)
ISO/IEC 14496-11/Amd.2 (Advanced Text and Graphics)
ISO/IEC 14496-11/Cor.1
ISO/IEC 14496-11/Amd.5 Symbolic Music
Representation
ISO/IEC 14496-11/Cor.6 (AudioFx Correction)
N8657
10/04 Dresden
02/10
Shanghai
04/07
Redmond
4
4
12
12
ISO/IEC 14496-12/Cor.1 (Correction on File Type Box)
ISO/IEC 14496-12/Cor.2 (Miscellanea)
N7232
N7901
4
12
N8659
4
4
4
12
12
12
4
12
ISO/IEC 14496-12/Amd.1 (Description of timed
metadata)
ISO/IEC 14496-12/Cor.3 (Miscellanea)
ISO/IEC 14496-12/Amd.2 (Flute Hint Track)
ISO/IEC 14496-12 (ISO Base Media File Format 3rd
edition)
ISO/IEC 14496-12/Cor.1
4
12
ISO/IEC 14496-12/Cor.2
4
12
4
12
ISO/IEC 14496-12/Amd.1 General improvements
including hint tracks, metadata support, and sample
groups
ISO/IEC 14496-12/Cor.3
4
12
ISO/IEC 14496-12/Cor.4
4
12
4
12
ISO/IEC 14496-12:2008/Amd.2 Support for sub-track
selection & switching, post-decoder requirements, and
color information
ISO/IEC 14496-12:2008 COR 5
4
12
ISO/IEC 14496-12 4th edition
4
12
4
12
4
12
ISO/IEC 14496-12:201X AMD1 Various enhancements
including support for large metadata
ISO/IEC 14496-12:2012/AMD 2 carriage of timed text
and other visual overlays
ISO/IEC 14496-12:2012 COR 1
4
13
4
N9024
N9023
N9678
05/04 Busan
06/01
Bangkok
06/10
Hangzhou
07/04 San Jose
07/04 San Jose
08/01 Antalya
N1025
0
N1044
1
N1058
0
08/10 Busan
N1075
3
N1172
3
N1226
8
09/07 London
12/04 Geneva
ISO/IEC 14496-13 (IPMP-X)
N1264
2
N1264
0
N1284
4
N1366
3
N1366
7
N5284
14
ISO/IEC 14496-14:2003 (MP4 File Format)
N5298
4
14
ISO/IEC 14496-14:2003/Cor.1 (Audio P&L Indication)
N7903
4
14
4
15
ISO/IEC 14496-14:2003/Amd.1 Handling of MPEG-4
audio enhancement layers
ISO/IEC 14496-15 (AVC File Format)
N1113
8
N5780
4
4
4
15
15
15
ISO/IEC 14496-15/Amd.1 (Support for FREXT)
ISO/IEC 14496-15/Cor.1
ISO/IEC 14496-15/Cor.2 (NAL Unit Restriction)
N7585
N7575
N8387
4
4
15
15
ISO/IEC 14496-15/Amd.2 (SVC File Format Extension)
ISO/IEC 14496-15 (AVC File Format 2nd edition)
N9682
N1113
09/02
Lausanne
09/04 Maui
11/01 Daegu
11/11 Geneva
12/04 Geneva
12/07
Stockholm
13/07 Vienna
13/07 Vienna
02/10
Shanghai
02/10
Shanghai
06/01
Bangkok
10/01 Kyoto
03/07
Trondheim
05/10 Nice
05/10 Nice
06/07
Klagenfurt
08/01 Antalya
10/01 Kyoto
4
15
4
15
4
15
4
15
4
4
4
17
18
18
4
18
4
4
4
19
20
20
4
4
20
20
9
ISO/IEC 14496-15:2010/Cor.1
N1172
8
ISO/IEC 14496-15:2010/Amd. 1 (Sub-track definitions)
N1212
8
ISO/IEC 14496-15:2010/COR 2
N1264
5
ISO/IEC 14496-15:2010 3rd edition Carriage of NAL unit N1347
structured video in the ISO Base Media File Format
8
ISO/IEC 14496-17 (Streaming Text)
N7479
ISO/IEC 14496-18 (Font Compression and Streaming) N6215
ISO/IEC 14496-18/Cor.1 (Misc. corrigenda and
N8664
clarification)
ISO/IEC 14496-18:2012 COR 1
N1367
1
ISO/IEC 14496-19 (Synthesized Texture Stream)
N6217
ISO/IEC 14496-20 (LASeR & SAF)
N7588
ISO/IEC 14496-20/Cor.1 (Misc. corrigenda and
N8666
clarification)
ISO/IEC 14496-20/Amd.1 (LASeR Extension)
N9029
ISO/IEC 14496-20/Cor.2 (Profile Removal)
N9381
4
20
ISO/IEC 14496-20/Amd.2 (SVGT1.2 Support)
N9384
4
4
4
20
20
20
ISO/IEC 14496-20 (LASeR & SAF 2nd edition)
ISO/IEC 14496-20/Amd.1 SVGT1.2 support
ISO/IEC 14496-20/Amd.2 Adaptation
4
20
ISO/IEC 14496-20/Amd.3 PMSI
4
20
ISO/IEC 14496-20/Cor. 1
4
22
ISO/IEC 14496-22 (Open Font Format)
N
N
N1075
9
N1095
4
N1137
6
N8395
4
4
22
22
4
22
4
28
ISO/IEC 14496-22 (Open Font Format 2nd edition)
ISO/IEC 14496-22/Amd.1 Support for many-to-one range
mappings
ISO/IEC 14496-22:2009/Amd. 2 Additional script and
language tags
ISO/IEC IS 14496-28 Composite Font Representation
4
28
ISO/IEC 14496-28:2012 COR 1
4
30
7
7
7
7
1
1
1
1
ISO/IEC 14496-30 Timed Text and Other Visual
Overlays in ISO Base Media File Format
ISO/IEC 15938-1 (MPEG-7 Systems)
ISO/IEC 15938-1/Amd.1 (MPEG-7 Systems Extensions)
ISO/IEC 15938-1/Cor.1 (MPEG-7 Systems Corrigendum)
ISO/IEC 15938-1/Cor.2 (MPEG-7 Systems Corrigendum)
N
N1095
4
N1247
2
N1247
3
N1367
4
N1367
6
N4285
N6326
N6328
N7490
11/01 Daegu
11/07 Torino
12/04 Geneva
13/04 Incheon
05/07 Poznan
03/12 Hawaii
06/10
Hangzhou
13/07 Vienna
03/12 Hawaii
05/10 Nice
06/10
Hangzhou
07/04 San Jose
07/10
Shenzhen
07/10
Shenzhen
09/07 London
09/10 Xian
10/07 Geneva
06/07
Klagenfurt
09/10 Xian
12/02 San Jose
12/02 San Jose
13/07 Vienna
13/07 Vienna
01/07 Sydney
04/03 Munich
04/03 Munich
05/07 Poznan
7
7
7
7
1
2
5
5
7
7
7
7
7
ISO/IEC 15938-1/Amd.2 (BiM extension)
N7532
N4288
05/10 Nice
01/07 Sydney
N1264
9
12/04 Geneva
ISO/IEC 15938-7/Amd.2 (Fast Access Ext. Conformance)
N8672
12
ISO/IEC 15938-12 MPEG Query Format
N9830
7
12
ISO/IEC 15938-12/Cor.1
7
12
ISO/IEC 15938-12/Cor.2
7
12
ISO/IEC 15938-12/Amd.1 Ref. SW and flat metadata output
7
12
ISO/IEC 15938-12/Amd.2 Semantic enhancement
7
12
ISO/IEC 15938-12 2nd edition MPEG Query Format
N1045
2
N1095
9
N1138
3
N1173
4
N1285
0
06/10
Hangzhou
08/04
Archamps
09/02
Lausanne
09/10 Xian
21
21
2
2
ISO/IEC 21000-2 (DID)
ISO/IEC 21000-2:2005/Amd.1Presentation Element
21
21
3
3
21
21
4
4
21
4
ISO/IEC 21000-3 (DII)
ISO/IEC 21000-3:2003 AMD 2 Digital item semantic
relationships
ISO/IEC 21000-4 (IPMP)
ISO/IEC 21000-4:2005/Amd.1Protection of Presentation
Element
ISO/IEC 21000-4:2006/COR 1
21
21
21
5
8
8
21
9
ISO/IEC 21000-5 (Open Release Content Profile)
ISO/IEC 21000-8 (Reference Software)
ISO/IEC 21000-8:2008/Amd. 2 (Reference software for
Media Value Chain Ontology)
ISO/IEC 21000-9 (MPEG-21 File Format)
21
9
ISO/IEC 21000-9/Amd.1 (MPEG-21 Mime Type)
N9837
21
15
ISO/IEC 21000-15 (Security in Event Reporting)
N9839
21
21
16
19
ISO/IEC 21000-16 (MPEG-21 Binary Format)
ISO/IEC 21000-19 (Media Value Chain Ontology)
21
21
ISO/IEC 21000-21 Media Contract Ontology
A
A
4
4
ISO/IEC 23000-4 (Musical Slide Show MAF)
ISO/IEC 23000-4 (Musical Slide Show MAF 2nd Ed.)
N7247
N1114
6
N1309
9
N9037
N9843
ISO/IEC 15938-2 (MPEG-7 DDL)
ISO/IEC 15938-5 MDS
ISO/IEC 15938-5/Amd. 4 Social Metadata
ISO/IEC 15938-7
10/07 Geneva
11/01 Daegu
12/07
Stockholm
N1173
6
11/01 Daegu
N1304
1
12/10
Shanghai
N1173
8
N1227
8
N9687
11/01 Daegu
N1213
5
N6975
11/07 Torino
11/11 Geneva
08/01 Antalya
05/01
HongKong
08/04
Archamps
08/04
Archamps
05/04 Busan
10/01 Kyoto
12/10
Shanghai
07/04 San Jose
08/04
Archamps
A
4
A
4
A
A
6
6
A
6
A
A
7
7
A
N
8
ISO/IEC 23000-4 Amd.1 Conformance & Reference
Software
ISO/IEC 23000-4 Amd.2 Conformance & Reference
Software for Protected MSS MAF
ISO/IEC 23000-6 (Professional Archival MAF)
ISO/IEC 23000-6 Amd.1 Conformance and Reference
Software
ISO/IEC 23000-6 2nd edition (Professional Archival
MAF)
ISO/IEC 23000-7 (Open Access MAF)
ISO/IEC 23000-7 Amd.1 Conformance and Reference
Software
ISO/IEC 23000-8 (Portable Video AF)
A
9
ISO/IEC 23000-9 (Digital Multi. Broadcasting MAF)
N9397
A
9
N9854
A
9
A
9
ISO/IEC 23000-9/Cor.1 (Digital Multi. Broadcasting
MAF)
ISO/IEC 23000-9/Amd.1 (Conformance & Reference
SW)
ISO/IEC 23000-9/Amd.1/Cor. 1
A
9
ISO/IEC 23000-9:2008/Amd.1:2010/COR.2
A
10
ISO/IEC 23000-10 (Video Surveillance AF)
N1115
1
N1174
2
N1228
3
N9397
A
A
10
10
A
10
A
11
ISO/IEC 23000-11/Amd.1 Conformance & Reference SW
ISO/IEC 23000-10 2nd edition Surveillance
Application Format
ISO/IEC 23000-10:2012/AMD 1 Conformance and
reference software
ISO/IEC 23000-11 (Stereoscopic video MAF)
N
N1304
5
N1396
6
N9397
A
A
11
11
A
11
A
11
A
A
12
12
A
12
A
12
B
B
1
1
ISO/IEC 23000-11/Cor.1
ISO/IEC 23000-11/Amd.1 Conformance and Reference
software
ISO/IEC 23000-11:2009/Amd. 2 Signalling of additional
composition type and profiles
ISO/IEC 23000-11:2009/AMD 3 Support movie fragment
for Stereoscopic Video AF
ISO/IEC 23000-12 (Interactive Music AF)
ISO/IEC 23000-12/Amd. 1 Conformance & Reference
SW.
ISO/IEC 23000-12:2010 Amd. 2 Compact representation
of dynamic volume change and audio equalization
ISO/IEC 23000-12:2010/AMD 3 Conformance and
Reference Software
ISO/IEC 23001-1 (XML Binary Format)
ISO/IEC 23001-1/Cor.1 (Misc. Editorial and technical
N
N1157
4
N1214
3
N1396
9
N
N1174
6
N1248
3
N1327
3
N7597
N8680
N
N
N
N1285
3
N9698
N
12/07
Stockholm
08/01 Antalya
N9853
08/04
Archamps
07/10
Shenzhen
08/04
Archamps
10/01 Kyoto
11/01 Daegu
11/11 Geneva
07/10
Shenzhen
12/10
Shanghai
13/10 Geneva
07/10
Shenzhen
10/10
Guangzhou
11/07 Torino
13/10 Geneva
11/01 Daegu
12/02 San Jose
13/01 Geneva
05/10 Nice
06/10
B
1
B
1
B
1
B
B
B
2
3
8
B
9
E
clar.)
ISO/IEC 23001-1/Cor.2 (Misc. Editorial and technical
clar.)
ISO/IEC 23001-1/Amd.1 (Reference Soft. & Conf.)
ISO/IEC 23001-1/Amd.1 (Exten. On encoding of wild
cards)
ISO/IEC 23001-2 (Fragment Request Unit)
ISO/IEC 23001-3 (IPMP XML Messages)
ISO/IEC 23001-8 coding-independent code-points
N9049
N8886
N9296
1
ISO/IEC 23001-9 Common Encryption for MPEG-2
Transport Streams
ISO/IEC 23008-1 Architecture
N9051
N9416
N1327
8
N1397
3
N8892
E
2
ISO/IEC 23008-2 Multimedia API
N8893
E
3
ISO/IEC 23008-3 Component Model
N8894
E
4
ISO/IEC 23008-4 Resource & Quality Management
N8895
E
E
E
E
M
5
6
7
8
1
ISO/IEC 23008-5 Component Download
ISO/IEC 23008-6 Fault Management
ISO/IEC 23008-7 System Integrity Management
ISO/IEC 23008-7 Reference Software
ISO/IEC 23006-1 Architecture and Technologies
M
1
ISO/IEC 23006-1 2nd edition Architecture
M
1
ISO/IEC 23006-1 2nd edition Architecture
M
2
ISO/IEC 23006-2 MXM API
M
2
ISO/IEC 23006-2 2nd edition MXM API
M
3
ISO/IEC 23006-3 Reference Software
M
3
M
4
ISO/IEC 23006-3 2nd edition Conformance and
Reference Software
ISO/IEC 23006-4 MXM Protocols
M
4
ISO/IEC 23006-4 2nd edition Elementary services
M
5
ISO/IEC 23006-5 Service aggregation
U
1
ISO/IEC 23007-1 Widgets
U
1
ISO/IEC 23007-1:2010/FDAM 1 Widget Extensions
N9053
N9054
N9055
N
N1116
3
N1345
4
N1248
7
N1116
5
N1349
2
N1116
8
N1349
3
N1117
0
N1307
2
N1307
4
N1125
6
N1215
3
Hangzhou
07/04 San Jose
07/01
Marrakech
07/07
Lausanne
07/04 San Jose
07/04 San Jose
13/01 Geneva
13/10 Geneva
07/01
Marrakech
07/01
Marrakech
07/01
Marrakech
07/01
Marrakech
07/04 San Jose
07/04 San Jose
07/04 San Jose
10/01 Kyoto
13/01 Geneva
12/02 San Jose
10/01 Kyoto
13/04 Incheon
10/01 Kyoto
13/04 Incheon
10/01 Kyoto
12/10
Shanghai
12/10
Shanghai
10/04 Dresden
11/07 Torino
U
1
ISO/IEC 23007-1:2010/Amd.1:2012/ COR 2
U
2
ISO/IEC 23007-2 Advanced user interaction interface
U
3
ISO/IEC 23007-3 Conformance and Reference SW
V
1
ISO/IEC 23005-1 Architecture
V
1
ISO/IEC 23005-1 2nd edition Architecture
V
2
ISO/IEC 23005-2 Control Information
V
3
ISO/IEC 23005-3 Sensory Information
V
3
ISO/IEC 23005-3:2013/COR1
V
4
V
5
V
6
ISO/IEC 23005-4 Virtual World Object
Characteristics
ISO/IEC 23005-5 Data Formats for Interaction
Devices
ISO/IEC 23005-6 Common Data Format
V
7
V
7
H
1
DASH
1
DASH
1
ISO/IEC 23009-1 Media Presentation Description and
Segment Formats
ISO/IEC 23009-1:2012 COR. 1
DASH
1
ISO/IEC 23009-1:201x 2nd edition
DASH
2
ISO/IEC 23009-2 DASH Conformance and reference
software
ISO/IEC 23005-7 Conformance and Reference
Software
ISO/IEC 23005-7 2nd edition Conformance and
reference software
ISO/IEC 23008-1 MPEG Media Transport
N1397
4
N1267
0
N1176
7
N1141
9
N1380
3
N1142
2
N1142
5
N1380
7
N1142
7
N1142
9
N1143
2
N1195
2
N1381
2
N1398
2
N1232
9
N1349
5
N1368
7
N1369
1
13/10 Geneva
12/04 Geneva
11/01 Daegu
10/07 Geneva
13/07 Vienna
10/07 Geneva
10/07 Geneva
13/07 Vienna
10/07 Geneva
10/07 Geneva
10/07 Geneva
11/03 Geneva
13/07 Vienna
13/10 Geneva
11/11 Geneva
13/04 Incheon
13/07 Vienna
13/07 Vienna
44.4 Standard development procedure from July 1st, 2012
It is recommended to issue iterative CD ballots until the specification becomes mature enough not to receive any technical comments at DIS and to be able to proceed to the FDIS ballot.
CD:
 2-month ballot for AMDs
 3-month ballot for a combined NP + CD ballot
 4-month ballot if WG 11 decides to have one
DIS: minimum 2 weeks of ballot preparation by ITTF + 2-month translation + 3-month ballot
If the DIS ballot receives a NO vote, an FDIS ballot (preparation + 2-month ballot) follows before publication as IS; otherwise the text proceeds directly to IS.
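As a small worked example of the DIS timeline above, the sketch below computes the earliest ballot-close date from the date a DIS text becomes available, using the 2 weeks of ITTF preparation, 2 months of translation and 3-month ballot listed in this subsection. Approximating a month as 30 days is an assumption for illustration only; the actual lead times applied by the Secretariat may differ.

```python
# Illustrative arithmetic for the DIS timeline described above (assumption:
# one month is approximated as 30 days).
from datetime import date, timedelta

def dis_ballot_close(text_available: date) -> date:
    prep = timedelta(weeks=2)          # ITTF ballot preparation
    translation = timedelta(days=60)   # 2-month translation
    ballot = timedelta(days=90)        # 3-month ballot
    return text_available + prep + translation + ballot

if __name__ == "__main__":
    # e.g. a DIS text made available on 2016-04-20, as in an action point above
    print(dis_ballot_close(date(2016, 4, 20)))
```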
44.5 Recommended due date for ballot texts
Meeting  Starting date  Due date of CD (2-month ballot)  Due date of DIS
113      2015/10/19     2015/08/09                       2015/04/25
114      2016/02/22     2015/12/12                       2015/08/29
115      2016/05/30     2016/03/20                       2015/12/06
116      2016/10/17     2016/08/07                       2016/04/23
Annex G – Video report
Source: Jens Ohm and Gary Sullivan, Chairs
14 Organization of the work
An opening Video subgroup plenary was held Monday Feb 22nd during 14:00-15:30, at which
the status of work was reviewed and activities for the current meeting were planned.
Video plenaries were held as follows:

Mon 14:00-15:30

Wed 11:30-13:00 – Review work and discuss further proceeding
 Fri 9:00-12:00 – Approval of documents, setup AHGs
Breakout (BO) work was performed on the following topics during the week: AVC, (CICP,)
MPEG-7/CDVS/CDVA, IVC, RVC, HDR, and Future Video Coding / JVET.
14.1
Room allocation
Room allocations during the meeting were as follows:
Video plenary: Salon F&G; MPEG-7: Salon H; Test: Pacific Beach
14.2
AHGs and Breakout Topics
No additional review of the following AHG reports was performed in the Video plenary.
Mandates for related BoG activities were discussed as follows.
IVC:
-
Produce study of DIS
-
Clarify versioning/profiling mechanism for IVC and possible future extensions
-
Clarify whether IVC needs mechanisms for systems interconnection, e.g. file format,
transport, DASH, etc.
-
Extend ITM by new tools under consideration
-
Check whether an old version of ITM is good enough for a documentation of encoder
description
-
Promotion: Verification test plan, including the possibility to download the bitstreams
and replay them with the realtime decoder
-
Clarify how to proceed with a potential open SW project for IVC realtime encoding /
licensing, some forms of which could be incompatible with ISO standard rules, such that
it could not be published as standard.
MPEG-7:
-
It was discussed whether action should be taken to remove motion trajectory D, but at
this meeting not enough expertise was available to determine which parts of MPEG-7
would be affected.
-
No specific action on CDVS
-
14.3
CDVA: Review CfP submissions jointly with Requirements, decide next steps thereafter,
probably start technical work in Video.
Ballots
Ballot results were inspected and handled accordingly in preparation of DoC documents, in
coordination with JCT-VC and JCT-3V.
m37545 Summary of Voting on ISO/IEC 23008-8:201x/DAM 1 [SC 29 Secretariat]
m37642 Summary of Voting on Combined on ISO/IEC 14496-5:2001/PDAM 42 [SC 29
Secretariat]
m37645 Summary of Voting on ISO/IEC 23008-5:2015/DAM 3 [SC 29 Secretariat]
m37646 Summary of Voting on ISO/IEC 23008-5:201x/DAM 4 [SC 29 Secretariat]
m37648 Summary of Voting on ISO/IEC DIS 15938-6 [2nd Edition] [SC 29 Secretariat]
(new edition of MPEG-7 Reference software)
m37813 Summary of Voting on ISO/IEC 14496-5:2001/PDAM 41 [SC 29 Secretariat]
(IVC Reference software)
m37814 Summary of Voting on ISO/IEC 23002-4:2014/PDAM 3 [SC 29 Secretariat]
m38084 [Preliminary] Summary of Voting on ISO/IEC 23008-8:2015/DAM 3 [ITTF via SC 29
Secretariat]
m38097 Summary of Voting on ISO/IEC 14496-4:2004/PDAM 46 [SC 29 Secretariat]
(IVC conformance)
m38098 Summary of Voting on ISO/IEC 23002-5:2013/PDAM 3 [SC 29 Secretariat]
m38111 Summary of Voting on ISO/IEC 14496-10:2014/DAM 2 [ITTF via SC 29 Secretariat]
m38112 Summary of Voting on ISO/IEC DIS 23008-2:201x [3rd Edition] [ITTF via SC 29
Secretariat]
m37566 Table of Replies on ISO/IEC 14496-4:2004/FDAM 43 [ITTF via SC 29 Secretariat]
m37569 Table of Replies on ISO/IEC FDIS 23001-8 [2nd Edition] [ITTF via SC 29 Secretariat]
14.4
Liaisons
The following Liaison inputs were reviewed, and dispositions were prepared in coordination
with JCT-VC and JCT-3V and other MPEG subgroups, as applicable.
m37578 Liaison Statement from ITU-T SG 16 on video coding collaboration [ITU-T SG 16 via
SC 29 Secretariat]
(Responded to in N16069)
m37654 Liaison Statement from DVB on HDR [DVB via SC 29 Secretariat]
(Responded to in N16070)
m37752 ATSC Liaison on HDR/WCG [Walt Husak]
(Responded to in N16071)
m37788 Liaison Statement from ITU-R WP 6C [ITU-R WP 6C via SC 29 Secretariat]
(Responded to in N16072)
m38082 SMPTE Liaison on HDR [Walt Husak]
(Responded to in N16074)
m38016 Liaison statement from ETSI ISG CCM [David Holliday]
(Responded to in N16073)
14.5
Joint Meetings
The following joint meetings were held:
14.5.1 Joint Video/Req Mon 15:30-16:30 (Salon F&G)
This discussion considered HDR requirements issues. Current test:

CE1: HEVC without normative / decoder-side changes, with ST2084 transfer function
 CE2: HEVC with additional reshaping for colour mapping (sequence-dependent)
The test was conducted without the intention of viewing the decoder output on an SDR display; it was only optimized for HDR.
Problems to be solved:
-
Improve the coding efficiency of HEVC with the constraint that the Main 10 decoder is not changed (by the next meeting, if not the current one, it was expected to be possible to answer the question whether this requires any changes at all); it is still necessary to define restrictions to be observed in encoder optimization
- Enable legacy (SDR) devices to replay HDR content (backward compatibility)
Is backward compatibility mandatory for all market segments? No – for example OTT streaming
does not require this (as a separate encoding can be streamed for HDR and non-HDR clients).
However, there are important market segments that require it – for example, SDR/HDR
simulcast is undesirable for most broadcasters, although some may like to launch HDR only.
SHVC can be used for backward compatibility, requiring additional bandwidth compared to
single layer. Single layer backward compatibility would be desirable. It was agreed to:
-
Update requirements document for single-layer backward compatibility
-
Work towards establishing methods for evaluating HDR and SDR quality jointly
14.5.2 Joint Video/VCEG/Req/VC Mon 16:30-18:00 (Salon D)
Issues discussed in this session focused on the following topics. Further notes can be found in the
corresponding JCT-VC report.
-
HDR issues: including CE1 vs. CE2 test planning, HDR backward compatibility
requirements, and the SMPTE liaison statement in m38082, "HDR10" verification
testing, and development of "good practices" technical report
-
SCC text finalization aspects related to HDR: Clarification of the colour remapping
information (CRI) SEI message semantics, and the ITU-R liaison statement m37788 (and
issues surrounding the draft new Rec. ITU-R BT.[HDRTV])
-
SCC other text finalization aspects (profile chroma format requirements, combined
wavefronts and tiles support, DVB bug fix with current-picture referencing)
-
Review of other SEI message and VUI contributions under consideration
-
Other topics noted of joint interest, which should proceed as planned and feasible (SHVC
verification testing, RExt conformance, SHVC conformance, SHVC software)
14.5.3 Joint Video/VCEG/Req Mon 18:00-18:30 (Salon D)
Issues discussed in this session focused on the following topics. Further notes can be found in the
corresponding JCT-VC report.
-
Liaison letters between MPEG and SG16/VCEG
-
AVC issues (Progressive High 10 profile m37952, AVC support for draft new Rec. ITU-R BT.[HDRTV], m37675 High level syntax support for ARIB STD-B67, m37954 Generalized constant and non-constant luminance matrix coefficient code points, corrections of colour transfer characteristics, and display light vs. captured light clarification)
-
CICP remains a coordination topic and should be kept aligned with HEVC and AVC (and
has been planned for twin text in ITU-T for quite some time).
-
JCT-3V wrapping up its final work
-
JVET exploration and future video:
o JVET is tasked with exploring potential compression improvement technology,
regardless of whether it differs substantially from HEVC or not; discussions of
converting that work into a formal standardization project, whether that's a new
standard or not, timelines for standardization, profiles, etc., belong in the parent
bodies rather than in JVET
o m37709 on video for virtual reality systems
m37675 High level syntax support for ARIB STD-B67 in AVC [M. Naccari, A. Cotton, T. Heritage
(BBC), Y. Nishida, A. Ichigaya (NHK), M. Raulet (ATEME)]
See joint discussion notes above.
m37733 Proposed text for BT.HDR in AVC [C. Fogg (MovieLabs)]
See joint discussion notes above.
m37952 A Progressive High 10 profile in ISO/IEC 14496-10/MPEG-4 part 10/AVC [A.M.
Tourapis, D. Singer, K. Kolarov]
See joint discussion notes above.
m37954 Generalized Constant and Non-Constant Luminance Code Points in ISO/IEC 14496-10
and ISO/IEC 23001-8 [A.M. Tourapis, Y. Su, D. Singer, K. Kolarov, C. Fogg]
New amendment (Amd4) – editors: Alexis, Gary
This should also include the correction of colour transfer characteristics.
14.5.4 Joint Vid/Req Tue 9:00-10:00 (Requirements room)
-
CDVA CfP
m37636 PKU's Response to MPEG CfP for Compact Descriptor for Visual Analysis [Zhangshuai
Huang, Ling-Yu Duan, Jie Chen, Longhui Wei, Tiejun Huang, Wen Gao]
Encodes a sequence of CDVS descriptors, frame by frame. Intra-frame predictor coding for
global descriptor/local descriptor/coordinate coding.
Local descriptors: multiple reference frames - using previous k-frames (k=3), checking ratio test,
frame by frame, selection of the local descriptor from the closest reference in time, going
backwards. Using CDVS compressed descriptors. Residual quantization and error coding is not
in fact used.
Global descriptor – just copy or use a new one.
Coordinate coding: affine model for the previous frame, residual for all other frames.
Database encoded at 2fps with 4k CDVS.
Matching is using Hough-transform with 5 peak selection and subsequent thresholding at 75%
Results for m37636:
Retrieval:          16k 0.676   64k 0.707   256k 0.683
Pairwise:           16k 0.739   64k 0.757   256k 0.798   16k-256k 0.662
Temp. localization: 16k 0.567   64k 0.596   256k 0.622   16k-256k 0.528
Comments:

Possible problem with the frame-based affine prediction when the assumption of a single
consistent motion is broken.

High complexity in retrieval: 1139s ~ 20 minutes (outside the requirement of 1 min)

Drop in performance – poor interoperability between 256k and 16k, performance below
16k

Patent statement and license declaration missing.

BFLOG – used.
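As background to the comment on frame-based affine prediction, the sketch below illustrates the general idea of coding keypoint coordinates as residuals against an affine model fitted to the previous frame. The least-squares fit and the data layout are simplified assumptions for illustration, not the proposal's actual coordinate-coding scheme.

```python
# Hedged sketch of affine-model coordinate prediction: fit a 2x3 affine matrix
# between matched keypoints of the previous and current frame, then code only
# the (small) residuals. Not the actual m37636 implementation.
import numpy as np

def fit_affine(prev_pts: np.ndarray, cur_pts: np.ndarray) -> np.ndarray:
    """Return a 2x3 affine matrix A minimizing ||[prev, 1] A^T - cur||."""
    ones = np.ones((prev_pts.shape[0], 1))
    X = np.hstack([prev_pts, ones])                     # (N, 3)
    A, *_ = np.linalg.lstsq(X, cur_pts, rcond=None)     # (3, 2)
    return A.T                                          # (2, 3)

def coordinate_residuals(prev_pts: np.ndarray, cur_pts: np.ndarray) -> np.ndarray:
    A = fit_affine(prev_pts, cur_pts)
    ones = np.ones((prev_pts.shape[0], 1))
    predicted = np.hstack([prev_pts, ones]) @ A.T
    return cur_pts - predicted                          # residuals to be coded

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    prev = rng.uniform(0, 640, size=(50, 2))
    true_A = np.array([[1.01, 0.02, 3.0], [-0.02, 0.99, -1.5]])
    cur = np.hstack([prev, np.ones((50, 1))]) @ true_A.T + rng.normal(0, 0.2, (50, 2))
    print(np.abs(coordinate_residuals(prev, cur)).mean())
```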
m37854 Keypoint Trajectory Coding on Compact Descriptor for Video Analysis (CDVA) [Dong
Tian, Huifang Sun, Anthony Vetro]
This is a partial proposal addressing the use of motion trajectories to encode positions of keypoints in the sequence of frames. Assumption: one coherent trajectory per keypoint.
Example uses: improved spatial localization or action recognition.
Note: For action recognition the keypoints locations may or may not be on the meaningful
positions of human body or on the object of interest, if they are determined by the key-point
detector and selector.
Two different coding methods are proposed: interframe and scalable keypoint coding (frame vs
trajectory based coding order). Tabulated results presented bit savings (compared to affine
model) of between 8-25%.
The possibility to consider proposal as an extension of CDVA towards video analysis based on
motion (e.g. for surveillance applications) was discussed. That may require, e.g., dense or more
accurate motion information. More detailed requirements would help (retrieval for similar
motion, classification of motions, action recognition, etc).
Currently no data with suitable motion patterns and localization information is in the current
CDVA dataset.
Recommended to acquire suitable datasets and present at the next meeting.
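To illustrate the trajectory-based coding order discussed for this proposal, here is a minimal sketch of coding one keypoint trajectory as a start position plus per-frame deltas. The quantization and entropy-coding stages of the actual proposal are omitted; only the prediction structure is shown.

```python
# Hedged sketch: one coherent trajectory per keypoint, coded as a start point
# plus small per-frame deltas instead of independent per-frame coordinates.
from typing import List, Tuple

Point = Tuple[int, int]

def encode_trajectory(points: List[Point]) -> Tuple[Point, List[Point]]:
    start = points[0]
    deltas = [(x2 - x1, y2 - y1)
              for (x1, y1), (x2, y2) in zip(points, points[1:])]
    return start, deltas  # deltas are typically small and cheap to entropy-code

def decode_trajectory(start: Point, deltas: List[Point]) -> List[Point]:
    points = [start]
    for dx, dy in deltas:
        x, y = points[-1]
        points.append((x + dx, y + dy))
    return points

if __name__ == "__main__":
    traj = [(100, 50), (102, 51), (105, 53), (109, 56)]
    start, deltas = encode_trajectory(traj)
    assert decode_trajectory(start, deltas) == traj
```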
m37880 BRIDGET Response to the MPEG CfP for Compact Descriptors for Video Analysis
(CDVA) - Search and Retrieval [Massimo Balestri (Telecom Italia), Gianluca Francini (Telecom
Italia), Skjalg Lepsoy (Telecom Italia), Miroslaw Bober (University of Surrey), Sameed Husain
(University of Surrey), Stavros Paschalakis (Visual Atoms)]
 Uses CDVS technology as basis, with some differences:
o RVD-W global descriptor: Size variable, binary representation similar to SCFV, cannot be
directly matched with SCFV.
o Coordinate coding for selected frames (from TI proposal to CDVS)
o Different binary descriptor format (time indexed sequence of keyframe descriptors, segment
link information currently not used)
 Extraction
o Same CDVS extraction parameters (mode 0) for all CDVA modes
o VGA, 1:6 temporal subsampling -> candidate key frame
 Intraframe coding of selected frames
 Frame selection (based on RVD-W)
o Threshold to keep frames in same segment
o Determine if shot is static by matching first and last frame of segment -> middle as
representative, otherwise iteratively split shot in the middle and repeat
o Use 300 most relevant local features
 Matching
o Build DB of reference keyframe descriptors
o Query for all query descriptors
o Score is max of all matches
o Localization: match counted if local score is above threshold
 Retrieval
o DB of reference and distractor keyframe descriptors
o Queries are merged, duplicates removed, sort again and return top 20 results
 Only used on one operating point, arriving at 4-5K descriptor sizes
 No information from other frames in segment used
o No location information for other frames encoded
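A minimal sketch of the frame-selection rule summarized in the bullets above: a shot whose first and last frames match is treated as static and represented by its middle frame; otherwise the shot is split in the middle and the rule is applied recursively. The frame-matching predicate stands in for the RVD-W based comparison and is purely an assumption for illustration.

```python
# Hedged sketch of recursive shot splitting for keyframe selection.
def select_keyframes(frames, frames_match):
    """Return indices of representative frames for the segment `frames`."""
    def recurse(lo, hi):  # inclusive index range
        if hi - lo < 2 or frames_match(frames[lo], frames[hi]):
            return [(lo + hi) // 2]          # static (or tiny) shot: middle frame
        mid = (lo + hi) // 2                 # otherwise split and recurse
        return recurse(lo, mid) + recurse(mid + 1, hi)
    return recurse(0, len(frames) - 1) if frames else []

if __name__ == "__main__":
    fake = list(range(16))  # stand-in "frames"
    print(select_keyframes(fake, lambda a, b: abs(a - b) <= 3))
```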
Results for m37880:
Retrieval:          16k 0.728   64k 0.728   256k 0.728
Pairwise:           16k 0.757   64k 0.757   256k 0.757   16k-256k 0.757
Temp. localization: 16k 0.549   64k 0.549   256k 0.549   16k-256k 0.549
Comments:
Retrieval time 2 min – outside the 1 min requirement.
m37794 JRS Response to Call for Proposals for Technologies Compact Descriptors for Video
Analysis (CDVA) - Search and Retrieval [Werner Bailer, Stefanie Wechtitsch]
Generic method of coding image-level descriptors, containing global/local/localisation
information.
Coding is based on temporal segments, determined by similarities of global descriptors.
Global descriptors are coded with the "median" global descriptor, selected as the medoid of all global descriptors. Subsequently, a number of global descriptors, which differ from the medoid descriptor by more than a threshold, are selected and coded differentially.
Local descriptors are selected from the reference frames (filtered out), matched to those already selected, and coded differentially or referenced (if the difference is small). One large local feature vector is created for the entire segment, and a time index for these descriptors is created.
A binary representation format was presented.
Results for m37794:
Retrieval:          16k 0.491   64k 0.283   256k 0.258
Pairwise:           16k 0.577   64k 0.593   256k 0.594   16k-256k 0.498
Temp. localization: 16k 0.362   64k 0.377   256k 0.376   16k-256k 0.311
Comments:
Surprising drop in performance at higher rate in retrieval.
Does not fulfil the requirement of retrieval within 1 min (~2 min).
The patent statement and licence declaration is missing.
Parameter tuning is still needed.
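The medoid-based global-descriptor coding summarized for this proposal can be sketched as follows; the descriptor layout, the Euclidean distance and the threshold are illustrative assumptions rather than the proposal's actual binary format.

```python
# Hedged sketch: pick the segment's "median" global descriptor as the medoid,
# then flag descriptors that differ from it by more than a threshold for
# differential coding; the rest simply reference the medoid.
import numpy as np

def select_medoid_and_outliers(descs: np.ndarray, threshold: float):
    """descs: (num_frames, dim) global descriptors of one temporal segment."""
    diff = descs[:, None, :] - descs[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))         # pairwise distances
    medoid_idx = int(dist.sum(axis=1).argmin())     # minimizes total distance
    medoid = descs[medoid_idx]
    outliers = [i for i in range(len(descs)) if dist[i, medoid_idx] > threshold]
    residuals = descs[outliers] - medoid            # coded differentially
    return medoid_idx, outliers, residuals

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    segment = rng.normal(size=(12, 128)).astype(np.float32)
    print(select_medoid_and_outliers(segment, threshold=16.0)[:2])
```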
Comparison of proposals
The best performing proposal at each operating point is marked yellow.
Retrieval
16
64
256
16
64
256
Pairwise
16-256
Temp loc
16
64
256
16-256
m37636
0.676
0.707
0.683
0.739
0.757
0.798
0.662
0.567
0.596
0.622
0.528
m37794
0.491
0.283
0.258
0.577
0.593
0.594
0.498
0.362
0.377
0.376
0.311
m37780
0.728
0.728
0.728
0.757
0.757
0.757
0.757
0.549
0.549
0.549
0.549
14.5.5 Vid/Req Tue 10 AR/FTV/Lightfields (Requirements room)
See the Requirements report for a description of the activity in this exploration area.
14.5.6 Joint Video/VCEG/Req/VC Tuesday 14:00–18:00 (Salon D)
This session considered topic areas of HDR that had been deferred so far and were pending
discussion. Notes of this session can be found in the corresponding JCT-VC report.
It was reported that MPEG requirements had concluded on Monday that BC is important (but it
was yet to be clarified which types of BC are needed and how to test it).
14.5.7 Joint Video/VCEG/Req/VC Wed 11:00-12:00 (Salon D&E)
Issues discussed in this session focused on HDR and especially on the testing of CE2 vs. CE1
and its implications. Further notes can be found in the corresponding JCT-VC report.
After review of the results of a visual test of CE2 vs. CE1 results, it was concluded that the
amount of benefit for HDR/WCG quality that was demonstrated by the post-processing
techniques of CE2 (as tested) was not so clear or large. Moreover, it was also noted that the CRI
SEI message can already do similar processing to some of what was tested in the CE2 category.
It was thus agreed not to plan to create a new profile (or any other "normative" or implicitly
required specification that would imply that something new is needed to properly enable HDR
service). Further SEI/VUI work may be desirable, but not a new profile, and we have a clear
indication that nothing (not even some SEI message) is necessary to properly deliver HDR (for
an HDR-specific service, without consideration of backward compatibility).
The following JCT-VC meeting resolution was thus prepared: In consultation with the parent
bodies, the JCT-VC reports the conclusion reached at the 114th meeting of WG 11 that the
creation of a new HEVC profile or other new normative specification is not necessary to
properly enable HDR/WCG video compression using the current edition of HEVC (3rd edition
for ISO/IEC). No technology has been identified that would justify the creation of such a new
specification. Future work on HDR/WCG in JCT-VC will focus primarily on formal verification
testing and guidelines for encoding practices.
14.5.8 MPEG/JPEG: Joint Standards Thu 9:00-10:00 (Room 3G)
Potential areas of mutual interest between MPEG and JPEG were reviewed in this joint meeting.
See the corresponding Requirements report for topics discussed, which included virtual reality,
augmented reality, and point clouds.
14.6
Output Documents planned
A provisional list of output documents was drafted in the opening plenary and further updated
during the Wednesday plenary. See the final list of output documents in section 22.1.
15 MPEG-7 Visual and CDVS
The BoG on these topics was chaired by Miroslaw Bober. It met on all days Monday through
Thursday, with meetings announced per reflectors.
The BoG reported its recommendations to the video plenary, where they were confirmed as
decisions of the Video SG.
15.1
MPEG-7 Visual and XM
m38085 Performance Testing for ISO/IEC 15938-3:2002/Amd 3:2009: Image signature tools
[Karol Wnukowicz, Stavros Paschalakis, Miroslaw Bober]
It had been detected and reported at previous meetings that the new XM software (due to an update of the external libraries used) was no longer able to fulfil the conformance requirements of the image signature descriptor. This contribution shows that formulating the conformance tolerances more loosely is sufficient to solve the problem without losing matching performance. No further action is necessary; the corrigendum on conformance (under ballot) fully resolves the problem.
Plans were again confirmed for issuing a new edition of ISO/IEC 15938-3; this will be studied
by the AHG until the next meeting to determine how large the effort would be and whether this
might cause any problems.
Since no negative votes and no substantial comments were received in the DIS ballot, the new
edition of ISO/IEC 15938-6 can progress directly to publication without issuing an FDIS.
15.2
CDVS
The BoG was initially tasked to generate an initial version of the white paper and present it in the Communications SG on Thursday. However, this unfortunately did not happen.
15.2.1 Conformance testing and reference software
The DIS is currently under ballot; no action was necessary. A minor bug is known in the software, which will be resolved in the FDIS.
15.2.2 TM
No new version of the TM was issued. It is planned for future work to combine the MPEG-7 XM
and CDVS TM for usage in scene classification tasks. The outcome of this could become an
amendment of TR 15938-8.
15.2.3 Other
m37769 CDVSec: CDVS for Fingerprint Matching [Giovanni Ballocca, Attilio Fiandrotti, Massimo
Mattelliano, Alessandra Mosca]
The contribution was reviewed.
m37887 CDVS: Matlab code maintenance [Alessandro Bay (Politecnico di Torino), Massimo
Balestri (Telecom Italia)]
The contribution was reviewed.
15.3
MPEG-7/CDVA
After receiving substantial input to the CfP (see section 14.5), it was decided to start the technical work in the Video SG. All proposals are based on CDVS, extending it along the temporal axis. A common denominator is a CDVS-based description of keyframes, augmented by expressing temporal changes (and eventually coding them efficiently). An initial version of the test model (entitled CXM 0.1) was drafted. This extracts CDVS for keyframes, and only defines a new description if the change is sufficiently large. Therefore, the resulting description has a variable rate (depending on sequence characteristics). The CXM also contains elements for extraction, pairwise matching and retrieval. The bitstream consists of a timestamp and a CDVS descriptor.
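A minimal sketch of this keyframe-selection behaviour follows: a new (timestamp, descriptor) pair is emitted only when the current frame's descriptor differs sufficiently from the last emitted one, which is what makes the description rate variable. The descriptor extractor and the distance measure are placeholders, not the actual CDVS implementation.

```python
# Hedged sketch of CXM-style variable-rate keyframe description.
import numpy as np

def cxm_like_stream(frames, extract_descriptor, change_threshold=0.3):
    """Yield (timestamp, descriptor) pairs for selected keyframes."""
    last = None
    for t, frame in enumerate(frames):
        desc = extract_descriptor(frame)
        if last is None or np.linalg.norm(desc - last) > change_threshold:
            last = desc
            yield t, desc  # only sufficiently different frames are described

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fake_frames = [rng.normal(size=(16,)) for _ in range(30)]
    fake_extractor = lambda f: f / (np.linalg.norm(f) + 1e-9)
    keyframes = list(cxm_like_stream(fake_frames, fake_extractor, 0.8))
    print(len(keyframes), "descriptors emitted for 30 frames")
```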
For potential extension of CXM, two Core Experiments were defined:
-
CE1: Temporal subsampling. Discuss whether it is mandatory to use same CDVS mode
- CE2: Keypoint locations, including temporally consistent keypoint locations
Furthermore, a document on the results of the CfP was drafted as output doc. This will provide
information about the achieved performance per category, without identifying proposals
explicitly.
16 Internet Video Coding
The breakout group on IVC was chaired by Euee S. Jang. It met from Monday through Thursday,
with meetings announced via the reflector.
The BoG reported its recommendations to the video plenary, where they were confirmed as
decisions of the Video SG.
During the opening plenary, the BoG was given the following tasks:
-
Prepare reference software PDAM (new amendment of 14496-5)
-
Study which are the slow parts of the software and set up work plan to develop faster
modules
-
Prepare a conformance PDAM (or WD) as a new amendment of ISO/IEC 14496-4. Set up a list of conformance test streams that will be developed (including responsibilities)
-
Investigate ballot comments, prepare a DoC and report to the video plenary on
Wednesday about any critical issues
-
Prepare new ITM, documentation of prior art and CE documents
16.1
Improvement of 14496-33
m37786 Information on how to improve Text of DIS 14496-33 IVC [Sang-hyo Park, Yousun
Park, Euee S. Jang]
Some editorial comments were identified and reflected in the output study document (N16034).
16.2
New technical proposals
m37799 Extension of Prediction Modes in Chroma Intra Coding for Internet Video Coding [Anna
Yang, Jae-Yung Lee, Jong-Ki Han, Jae-Gon Kim]
Two more prediction modes for chroma blocks are proposed, with minor coding gain. Further
EEs can be conducted till the next meeting on whether it is possible to match the prediction
modes between luma and chroma blocks. The proposed method is to be integrated into the
extension of IVC after checking the prior art information during the MPEG week. This work is
included in the on-going EE (N16036).
m38052 Cross-check of m37799 (Extension of Prediction Modes in Chroma Intra Coding for
Internet Video Coding) [Sang-hyo Park, Won Chang Oh, Euee S. Jang]
m37969 Recursive CU structure for IVC+ [Kui Fan, Ronggang Wang, Zhenyu Wang, Ge Li,
Tiejun Huang, Wen Gao]
A new software platform with a recursive quadtree structure is proposed. Results for the all-intra (AI) case are provided, showing a 1% gain. Results on inter frames are to be generated and studied by
the next MPEG meeting. Some new methods, like block scan order, need more analysis on prior
art. Further EEs will be conducted until the next meeting. Some concern was raised on how to manage new changes on top of the existing ITM. Making the new reference software a superset was suggested, which will also be part of the EE until the next meeting. A new EE description (N16036) was issued on this issue.
16.3
ITM development
m37783 Report on the decoding complexity of IVC [Sang-hyo Park, Seungho Kuk, Haiyan Xu,
Yousun Park, Euee S. Jang]
Throughout the experiment, the interpolation filters were found to be critical for the computing time. The complexity analysis of the IVC reference software is provided as information.
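To illustrate the kind of bottleneck identified here, the sketch below applies a separable half-sample interpolation filter with whole-row (vectorized) arithmetic instead of per-pixel loops, which is the same style of optimization that SIMD rewrites of a decoder target. The 6-tap coefficients are an assumption for illustration, not the filter actually defined in ISO/IEC 14496-33.

```python
# Hedged sketch: vectorized horizontal half-pel interpolation of a block.
import numpy as np

TAPS = np.array([1, -5, 20, 20, -5, 1], dtype=np.int32)  # illustrative 6-tap

def halfpel_horizontal(block: np.ndarray) -> np.ndarray:
    """Horizontal half-sample interpolation of an integer-pel block."""
    padded = np.pad(block.astype(np.int32), ((0, 0), (2, 3)), mode="edge")
    acc = np.zeros_like(block, dtype=np.int32)
    for k, c in enumerate(TAPS):             # six whole-row multiply-accumulates
        acc += c * padded[:, k:k + block.shape[1]]
    return np.clip((acc + 16) >> 5, 0, 255).astype(np.uint8)  # sum of taps = 32

if __name__ == "__main__":
    blk = (np.arange(64, dtype=np.uint8).reshape(8, 8) * 3) % 255
    print(halfpel_horizontal(blk).shape)
```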
m37970 Speed optimization for IVC decoder [Shenghao Zhang, Kui Fan, Ronggang Wang, Ge
Li, Tiejun Huang, Wen Gao]
A SIMD acceleration method on ITM 13.0 is proposed. For the class D (1280x720, 60 fps)
sequences, the decoding frame rate is accelerated to 25-27 frames per second, depending on the test case.
The improvement is reportedly 35.8% on average. Optimization of the decoder as well as
encoder is a desirable objective of IVC. It is foreseen that activities both inside and outside of
MPEG will continue to achieve such goals.
16.4
New EE definition
The following (continuing) Exploration Experiments are defined in N16036, which was edited
by the BoG and approved in the video plenary. The EE experiments are intended to extend the
current IVC standard by including new tools in the future extensions. The following areas are the
primary fields of investigation:
-
EE1: Encoder optimization
-
EE2: Multiple reference frames
-
EE3: B-frame coding
-
EE4: Intra prediction
-
EE5: Large block sizes with recursive quadtree structure
-
EE6: OBMC
16.5
Output docs
The following output documents were created:

N16024 Disposition of Comments on ISO/IEC 14496-4:2004/PDAM46 (R. Wang)

N16025 Text of ISO/IEC 14496-4:2004/DAM46 Conformance Testing for Internet
Video Coding (R. Wang)

N16026 Disposition of Comments on ISO/IEC 14496-5:2001/PDAM41 (R. Wang)

N16027 Text of ISO/IEC 14496-5:2001/DAM41 Reference Software for Internet Video
Coding (R. Wang)

N16034 Study Text of ISO/IEC DIS 14496-33 Internet Video Coding (R. Wang)

N16035 Internet Video Coding Test Model (ITM) v 14.1 (S. Park)

N16036 Description of IVC Exploration Experiments (R. Wang)

N16038 Draft verification test plan for Internet Video Coding (R. Wang)
17 Reconfigurable Media Coding
No specific input documents were received on RVC. It was, however, announced verbally that
further achievements on RVC implementation of HEVC Main 10 and scalable profiles, as well
as tools for VTL/CAL-based design support are expected towards the next meeting.
The following standard parts were progressed regularly:
- 23002-4:2014/DAM3 FU and FN descriptions for parser instantiation from BSD
(N16041/N16042)
- 23002-5:2013/DAM2 Reference Software for HEVC related VTL extensions
(N16043/N16044).
18 AVC
The following amendment actions took place for AVC:
-
ISO/IEC 14496-10:2014/FDAM2 Additional Levels and Supplemental Enhancement
Information (N16030/N16031)
-
Text of ISO/IEC 14496-10:2014/PDAM4 Progressive High 10 Profile, additional VUI
code points and SEI messages (N16032/N16033)
The latter newly defines the Progressive High 10 Profile, as decided by the joint meeting with
VCEG and MPEG Requirements SG (see the notes of the joint discussion above). It furthermore
includes new code points for HLG as defined by ITU-R, BT.HDR, and generalized CL/NCL
transfer, all of which are already included in HEVC. It also contains minor corrections on other
colour and transfer related metadata (see report of 113th meeting which contains relevant
discussion).
A new edition had been planned for June 2016, after finalization of Amd.2 and Amd.3; however, it may need to be further discussed whether this should rather be done together with the finalization of Amd.4.
19 CICP
For 23001-8, ISO/IEC 23001-8:201x/PDAM1 Additional code points for colour description
(N16039/N16040) was issued. This contains the corresponding extensions and corrections of
video-related metadata, as described above for ISO/IEC 14496-10/Amd.4.
20 Other issues
Restructuring of ISO/IEC 14496-4 and ISO/IEC 14496-5 remains planned.
21 Wednesday Video plenary status review
A Video subgroup plenary was held 11:15-12:30 on Wednesday. Reports of BoGs were given as
follows:
CDVA:
-
Presentation of CXM 0.1.
-
Extraction of CDVS, skipped when change is small; therefore variable rate (depending on
sequence characteristics)
-
Elements for extraction, pairwise matching, retrieval, bitstream consisting of timestamp +
CDVS descriptor
-
CE1: Temporal subsampling. Discuss whether it is mandatory to use same CDVS mode
-
CE2: Keypoint locations, including temporally consistent keypoint locations
-
Document on results: Performance per category,
Other Docs: White paper, DoC, 15938-6 FDIS
IVC:
-
Patent statement resolution
-
Systems interfacing is looked into for the next meeting
-
Verification test was already done by Vittorio; issue draft test plan?
-
New ITM doc with improved encoder description, but no new technology as new
software version is expected towards next meeting
-
new technology will be investigated in CE
-
version control is already included
-
Euee presented ideas on configurable extensions; these require further study.
22 Closing plenary topics
A closing plenary of the Video subgroup was held Friday 9:00–11:45. Output docs were
approved, and AHGs were established. No specifically important issues were raised. All issues
regarding breakout activities are basically reflected in the respective subsections about BoG
activities above, or in the section about output docs below.
The Video plenary was closed on Friday 02-26 at 11:45.
22.1
Output docs
During the closing plenary, the following output documents were recommended for approval by
the Video subgroup (including information about publication status and editing periods, as well
as person in charge):
No.
16024
16025
16026
16027
Title
ISO/IEC 14496-4 – Conformance testing
Disposition of Comments on ISO/IEC 144964:2004/PDAM46
Text of ISO/IEC 14496-4:2004/DAM46 Conformance
Testing for Internet Video Coding
ISO/IEC 14496-5 – Reference software
Disposition of Comments on ISO/IEC 144965:2001/PDAM41
Text of ISO/IEC 14496-5:2001/DAM41 Reference
Software for Internet Video Coding
ISO/IEC 14496-5 – Reference software
In charge
TBP Available
R. Wang
N
16/02/26
R. Wang
N
16/04/04
R. Wang
N
16/02/26
R. Wang
N
16/04/04
16028
16029
16030
16031
16032
16033
16034
16035
16036
16038
16039
16040
16041
16042
16043
16044
16045
16046
16047
16048
16049
16050
16051
16052
Disposition of Comments on ISO/IEC 14496T. Senoh
5:2001/PDAM42
Text of ISO/IEC 14496-5:2001/DAM42 Reference
T. Senoh
Software for Alternative Depth Information SEI message
ISO/IEC 14496-10 – Advanced Video Coding
Disposition of Comments on ISO/IEC 14496A. Tourapis
10:2014/DAM2
Text of ISO/IEC 14496-10:2014/FDAM2 Additional
A. Tourapis
Levels and Supplemental Enhancement Information
Request for ISO/IEC 14496-10:2014/Amd.4
A. Tourapis
Text of ISO/IEC 14496-10:2014/PDAM4 Progressive
A. Tourapis
High 10 Profile, additional VUI code points and SEI
messages
ISO/IEC 14496-33 – Internet Video Coding
Study Text of ISO/IEC DIS 14496-33 Internet Video
R. Wang
Coding
Internet Video Coding Test Model (ITM) v 14.1
S. Park
Description of IVC Exploration Experiments
R. Wang
Draft verification test plan for Internet Video Coding
R. Wang
ISO/IEC 23001-8 – Coding-independent code-points
Request for ISO/IEC 23001-8:201x/Amd.1
A. Tourapis
Text of ISO/IEC 23001-8:201x/PDAM1 Additional code
A. Tourapis
points for colour description
ISO/IEC 23002-4 – Video tool library
Disposition of Comments on ISO/IEC 23002H. Kim
4:2014/PDAM3
Text of ISO/IEC 23002-4:2014/DAM3 FU and FN
H. Kim
descriptions for parser instantiation from BSD
ISO/IEC 23002-5 – Reconfigurable media coding
conformance and reference software
Disposition of Comments on ISO/IEC 23002H. Kim
5:2013/PDAM3
Text of ISO/IEC 23002-5:2013/DAM3 Reference software H. Kim
for parser instantiation from BSD
ISO/IEC 23008-2 – High Efficiency Video Coding
Disposition of Comments on DIS ISO/IEC 23008R. Joshi
2:201x
Text of ISO/IEC FDIS 23008-2:201x High Efficiency
R. Joshi
Video Coding [3rd ed.]
WD of ISO/IEC 23008-2:201x/Amd.1 Additional
P. Yin
colour description indicators
High Efficiency Video Coding (HEVC) Test Model 16 C. Rosewarne
(HM16) Improved Encoder Description Update 5
HEVC Screen Content Coding Test Model 7 (SCM 7) R. Joshi
MV-HEVC verification test report
V. Baroncini
SHVC verification test report
Y. Ye
Verification test plan for HDR/WCG coding using
R. Sjöberg
HEVC Main 10 Profile
ISO/IEC 23008-5 – HEVC Reference Software
N
16/02/26
N
16/04/04
N
16/02/26
N
16/04/08
N
Y
16/02/26
16/03/11
N
16/03/31
Y
Y
Y
16/03/31
16/02/26
16/03/31
N
Y
16/02/26
16/03/11
N
16/02/26
N
16/04/04
N
16/02/26
N
16/04/04
N
16/02/26
N
16/04/22
Y
16/03/11
Y
16/05/13
Y
Y
Y
Y
16/05/13
16/04/22
16/04/22
16/03/11
Page: 168
Date Sav
16053
16054
16055
16056
16057
16058
16059
16060
16061
16062
16063
15938
16064
16065
16066
16067
16068
16069
16070
16071
16072
16073
16074
16075
22.2
Disposition of Comments on ISO/IEC 230085:2015/DAM3
Disposition of Comments on ISO/IEC 230085:2015/DAM4
Text of ISO/IEC FDIS 23008-5:201x Reference Software
for High Efficiency Video Coding [2nd ed.]
Request for ISO/IEC 23008-5:201x/Amd.1
Text of ISO/IEC 23008-5:201x/PDAM1 Reference
Software for Screen Content Coding Profiles
ISO/IEC 23008-8 – HEVC Conformance Testing
Disposition of Comments on ISO/IEC 230088:2015/DAM1
Disposition of Comments on ISO/IEC 230088:2015/DAM2
Disposition of Comments on ISO/IEC 230088:2015/DAM3
Text of ISO/IEC FDIS 23008-8:201x HEVC
Conformance Testing [2nd edition]
Working Draft of ISO/IEC 23008-8:201x/Amd.1
Conformance Testing for Screen Content Coding
Profiles
ISO/IEC 23008-14 – Conversion and coding practices for
HDR/WCG video
WD of ISO/IEC TR 23008-14 Conversion and coding
practices for HDR/WCG video
Explorations – Compact descriptors for video analysis
Results of the Call for Proposals on CDVA
CDVA Experimentation Model (CXM) 0.1
Description of Core Experiments in CDVA
Explorations – Future Video Coding
Algorithm description of Joint Exploration Test Model 2
(JEM2)
Description of Exploration Experiments on coding tools
Call for test materials for future video coding
standardization
Liaisons
Liaison statement to ITU-T SG 16 on video coding
collaboration
Liaison statement to DVB on HDR
Liaison statement to ATSC on HDR/WCG
Liaison statement to ITU-R WP6C on HDR
Liaison statement to ETSI ISG CCM on HDR
Liaison statement to SMPTE on HDR/WCG
Statement of benefit for establishing a liaison with
DICOM
T. Suzuki
N
16/02/26
G. Tech
N
16/02/26
K. Suehring
N
16/04/22
G. Sullivan
K. Rapaka
N
Y
16/02/26
16/03/11
K. Kawamura
N
16/02/26
T. Suzuki
N
16/02/26
T. Suzuki
N
16/02/26
V. Seregin
N
16/04/22
R. Joshi
N
16/04/08
J. Samuelsson Y
16/02/26
M. Bober
M. Balestri
M. Bober
Y
Y
N
16/02/26
16/03/25
16/02/26
J. Chen
Y
16/03/25
E. Alshina
A. Norkin
Y
Y
16/03/11
16/03/05
G. Sullivan
N
16/02/26
G. Sullivan
G. Sullivan
G. Sullivan
G. Sullivan
G. Sullivan
G. Barroux
N
N
N
N
N
N
16/02/26
16/02/26
16/02/26
16/02/26
16/02/26
16/02/26
AHGs established
The following AHGs were established by the Video subgroup, as detailed in N15905:
- AHG on MPEG-7 Visual and CDVS
- AHG on Compact Descriptors for Video Analysis
- AHG on Internet Video Coding
- AHG on Reconfigurable Media Coding
Annex H – JCT-VC report
Source: Jens Ohm and Gary Sullivan, Chairs
Summary
The Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T WP3/16 and ISO/IEC
JTC 1/ SC 29/ WG 11 held its twenty-third meeting during 19–26 Feb 2016 at the San Diego
Marriott La Jolla in San Diego, US. The JCT-VC meeting was held under the chairmanship of Dr
Gary Sullivan (Microsoft/USA) and Dr Jens-Rainer Ohm (RWTH Aachen/Germany). For rapid
access to particular topics in this report, a subject categorization is found (with hyperlinks) in
section 1.14 of this document.
The JCT-VC meeting sessions began at approximately 0900 hours on Friday 19 February 2016.
Meeting sessions were held on all days (including weekend days) until the meeting was closed at
approximately 1240 hours on Friday 26 February 2016. Approximately 159 people attended the
JCT-VC meeting, and approximately 125 input documents were discussed. The meeting took
place in a collocated fashion with a meeting of WG 11 – one of the two parent bodies of the JCT-VC. The subject matter of the JCT-VC meeting activities consisted of work on the video coding
standardization project known as High Efficiency Video Coding (HEVC) and its extensions.
One primary goal of the meeting was to review the work that was performed in the interim
period since the twenty-second JCT-VC meeting in producing:

The HEVC test model (HM) 16 improved encoder description (including RExt
modifications) update 4;

For the format range extensions (RExt), the RExt reference software draft 4;

For the scalable extensions (SHVC), the SHVC reference software draft 3, conformance
testing draft 4, SHVC test model 11 (SHM 11), and verification test plan for SHVC;

For the HEVC screen content coding (SCC) extensions, the HEVC screen content coding
test model 6 (SCM 6) and SCC draft text 5.
The other most important goals were to review the results from eight Core Experiments on High
Dynamic Range (HDR) video coding (a topic of work that had been allocated to JCT-VC by the
parent bodies to be effective as of the beginning of the 23rd meeting), and review other technical
input documents. Finalizing the specification of screen content coding tools (integrated in the
preparation for version 4 of the HEVC text specification) and making progress towards HDR
support in HEVC were the most important topics of the meeting. Advancing the work on
development of conformance and reference software for recently finalized HEVC extensions
(RExt and SHVC) was also a significant goal. The results of the SHVC verification tests were
also evaluated, and possible needs for corrections to the prior HEVC specification text were
considered.
The JCT-VC produced 13 particularly important output documents from the meeting:

The HEVC test model (HM) 16 improved encoder description (including RExt
modifications) update 5;

For the format range extensions (RExt), conformance testing draft 6 (including improved
HEVC version 1 testing);

For the scalable extensions (SHVC), the SHVC reference software draft 4, conformance
testing draft 5, and a verification test report for SHVC;

For the HEVC screen content coding (SCC) extensions, a draft text for version 4 of
HEVC including draft text 6 of the text of SCC extensions, the HEVC screen content
coding test model 7 (SCM 7), reference software draft 1, and conformance testing draft 1.

For high dynamic range (HDR) and wide colour gamut (WCG) video coding extensions,
a text amendment for ICTCP colour representation support in HEVC draft 1, a technical
report text of conversion and coding practices for HDR/WCG video draft 1, a verification
test plan for HDR/WCG video coding using the HEVC Main 10 profile, and a description
of common test conditions (CTC) for HDR/WCG video coding experiments.
For the organization and planning of its future work, the JCT-VC established 14 "ad hoc groups"
(AHGs) to progress the work on particular subject areas. The next four JCT-VC meetings are
planned for Thu. 26 May – Wed. 1 June 2016 under ITU-T auspices in Geneva, CH, during Fri.
14 – Fri. 21 Oct. 2016 under WG 11 auspices in Chengdu, CN, during Thu. 12 – Wed. 18 Jan.
2017 under ITU-T auspices in Geneva, CH, and during Fri. 31 Mar. – Fri. 7 Apr. 2017 under
WG 11 auspices in Hobart, AU.
The document distribution site http://phenix.it-sudparis.eu/jct/ was used for distribution of all
documents.
The reflector to be used for discussions by the JCT-VC and all of its AHGs is the JCT-VC
reflector:
jct-vc@lists.rwth-aachen.de hosted at RWTH Aachen University. For subscription to this list, see
https://mailman.rwth-aachen.de/mailman/listinfo/jct-vc.
1 Administrative topics
1.1 Organization
The ITU-T/ISO/IEC Joint Collaborative Team on Video Coding (JCT-VC) is a group of video
coding experts from the ITU-T Study Group 16 Visual Coding Experts Group (VCEG) and the
ISO/IEC JTC 1/ SC 29/ WG 11 Moving Picture Experts Group (MPEG). The parent bodies of
the JCT-VC are ITU-T WP3/16 and ISO/IEC JTC 1/SC 29/WG 11.
The Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T WP3/16 and ISO/IEC
JTC 1/ SC 29/ WG 11 held its twenty-third meeting during 19–26 Feb 2016 at the San Diego
Marriott La Jolla in San Diego, US. The JCT-VC meeting was held under the chairmanship of Dr
Gary Sullivan (Microsoft/USA) and Dr Jens-Rainer Ohm (RWTH Aachen/Germany).
1.2 Meeting logistics
The JCT-VC meeting sessions began at approximately 0900 hours on Friday 19 February 2016.
Meeting sessions were held on all days (including weekend days) until the meeting was closed at
approximately 1240 hours on Friday 26 February 2016. Approximately 159 people attended the
JCT-VC meeting, and approximately 125 input documents were discussed. The meeting took
place in a collocated fashion with a meeting of ISO/IEC JTC 1/ SC 29/ WG 11 (MPEG) – one of
the two parent bodies of the JCT-VC. The subject matter of the JCT-VC meeting activities
consisted of work on the video coding standardization project known as High Efficiency Video
Coding (HEVC) and its extensions.
Some statistics are provided below for historical reference purposes:
- 1st "A" meeting (Dresden, 2010-04): 188 people, 40 input documents
- 2nd "B" meeting (Geneva, 2010-07): 221 people, 120 input documents
- 3rd "C" meeting (Guangzhou, 2010-10): 244 people, 300 input documents
- 4th "D" meeting (Daegu, 2011-01): 248 people, 400 input documents
- 5th "E" meeting (Geneva, 2011-03): 226 people, 500 input documents
- 6th "F" meeting (Turin, 2011-07): 254 people, 700 input documents
- 7th "G" meeting (Geneva, 2011-11): 284 people, 1000 input documents
- 8th "H" meeting (San Jose, 2012-02): 255 people, 700 input documents
- 9th "I" meeting (Geneva, 2012-04/05): 241 people, 550 input documents
- 10th "J" meeting (Stockholm, 2012-07): 214 people, 550 input documents
- 11th "K" meeting (Shanghai, 2012-10): 235 people, 350 input documents
- 12th "L" meeting (Geneva, 2013-01): 262 people, 450 input documents
- 13th "M" meeting (Incheon, 2013-04): 183 people, 450 input documents
- 14th "N" meeting (Vienna, 2013-07/08): 162 people, 350 input documents
- 15th "O" meeting (Geneva, 2013-10/11): 195 people, 350 input documents
- 16th "P" meeting (San José, 2014-01): 152 people, 300 input documents
- 17th "Q" meeting (Valencia, 2014-03/04): 126 people, 250 input documents
- 18th "R" meeting (Sapporo, 2014-06/07): 150 people, 350 input documents
- 19th "S" meeting (Strasbourg, 2014-10): 125 people, 300 input documents
- 20th "T" meeting (Geneva, 2015-02): 120 people, 200 input documents
- 21st "U" meeting (Warsaw, 2015-06): 91 people, 150 input documents
- 22nd "V" meeting (Geneva, 2015-10): 155 people, 75 input documents
- 23rd "W" meeting (San Diego, 2016-02): 159 people, 125 input documents
Information regarding logistics arrangements for the meeting had been provided via the email
reflector jct-vc@lists.rwth-aachen.de and at http://wftp3.itu.int/av-arch/jctvc-site/2016_02_W_SanDiego/.
1.3 Primary goals
One primary goal of the meeting was to review the work that was performed in the interim
period since the twenty-second JCT-VC meeting in producing:

The HEVC test model (HM) 16 improved encoder description (including RExt
modifications) update 4;

For the format range extensions (RExt), the RExt reference software draft 4;

For the scalable extensions (SHVC), the SHVC reference software draft 3, conformance
testing draft 4, SHVC test model 11 (SHM 11), and verification test plan for SHVC;

For the HEVC screen content coding (SCC) extensions, the HEVC screen content coding
test model 6 (SCM 6) and SCC draft text 5.
The other most important goals were to review the results from eight Core Experiments on High
Dynamic Range (HDR) video coding (a topic of work that had been allocated to JCT-VC by the
parent bodies to be effective as of the beginning of the 23rd meeting), and review other technical
input documents. Finalizing the specification of screen content coding tools (integrated in the
preparation for version 4 of the HEVC text specification) and making progress towards HDR
support in HEVC were the most important topics of the meeting. Advancing the work on
development of conformance and reference software for recently finalized HEVC extensions
(RExt and SHVC) was also a significant goal. The results of the SHVC verification tests were
also evaluated, and possible needs for corrections to the prior HEVC specification text were
considered.
1.4 Documents and document handling considerations
1.4.1 General
The documents of the JCT-VC meeting are listed in Annex A of this report. The documents can
be found at http://phenix.it-sudparis.eu/jct/.
Registration timestamps, initial upload timestamps, and final upload timestamps are listed in
Annex A of this report.
The document registration and upload times and dates listed in Annex A and in headings for
documents in this report are in Paris/Geneva time. Dates mentioned for purposes of describing
events at the meeting (other than as contribution registration and upload times) follow the local
time at the meeting facility.
For this meeting, some difficulties were experienced with the operation of the document
handling web site. These included the following:

The initial registration record for JCTVC-W0033 was lost and replaced with a different
document; a version of the prior document was resubmitted as JCTVC-W0061.

The initial registration record for JCTVC-W0034 was lost and replaced with a different
document; a version of the prior document was resubmitted as JCTVC-W0026.
Highlighting of recorded decisions in this report:

Decisions made by the group that affect the normative content of the draft standard are
identified in this report by prefixing the description of the decision with the string
"Decision:".

Decisions that affect the reference software but have no normative effect on the text are
marked by the string "Decision (SW):".

Decisions that fix a "bug" in the specification (an error, oversight, or messiness) are
marked by the string "Decision (BF):".

Decisions regarding things that correct the text to properly reflect the design intent, add
supplemental remarks to the text, or clarify the text are marked by the string "Decision
(Ed.):".

Decisions regarding simplification or improvement of design consistency are marked by
the string "Decision (Simp.):".

Decisions regarding complexity reduction (in terms of processing cycles, memory
capacity, memory bandwidth, line buffers, number of entropy-coding contexts, number of
context-coded bins, etc.) are marked by the string "Decision (Compl.):".
This meeting report is based primarily on notes taken by the chairs and projected for real-time
review by the participants during the meeting discussions. The preliminary notes were also
circulated publicly by ftp and http during the meeting on a daily basis. Considering the high
workload of this meeting and the large number of contributions, it should be understood by the
reader that 1) some notes may appear in abbreviated form, 2) summaries of the content of
contributions are often based on abstracts provided by contributing proponents without an intent
to imply endorsement of the views expressed therein, and 3) the depth of discussion of the
content of the various contributions in this report is not uniform. Generally, the report is written
to include as much information about the contributions and discussions as is feasible (in the
interest of aiding study), although this approach may not result in the most polished output report.
1.4.2 Late and incomplete document considerations
The formal deadline for registering and uploading non-administrative contributions had been
announced as Tuesday, 9 February 2016.
Non-administrative documents uploaded after 2359 hours in Paris/Geneva time Wednesday 10
February 2016 were considered "officially late".
Most documents in the "late" category were CE reports or cross-verification reports, which are
somewhat less problematic than late proposals for new action (and especially for new normative
standardization action).
At this meeting, we again had a substantial amount of late document activity, but in general the
early document deadline gave a significantly better chance for thorough study of documents that
were delivered in a timely fashion. The group strived to be conservative when discussing and
considering the content of late documents, although no objections were raised regarding allowing
some discussion in such cases.
All contribution documents with registration numbers JCTVC-W0108 and higher were registered
after the "officially late" deadline (and therefore were also uploaded late). No break-out activity
reports were generated during this meeting.
In many cases, contributions were also revised after the initial version was uploaded. The
contribution document archive website retains publicly-accessible prior versions in such cases.
The timing of late document availability for contributions is generally noted in the section
discussing each contribution in this report.
One suggestion to assist with the issue of late submissions was to require the submitters of late
contributions and late revisions to describe the characteristics of the late or revised (or missing)
material at the beginning of discussion of the contribution. This was agreed to be a helpful
approach to be followed at the meeting.
The following technical design proposal contributions were registered on time but were uploaded
late:

JCTVC-W0072 (a proposal of an alternative source format transfer function for video
representation) [uploaded 02-11]

JCTVC-W0085 (a proposal of a method for coding HDR video) [uploaded 02-16]
 JCTVC-W0089 (a proposal of a method for coding HDR video) [uploaded 02-19]
The following technical design proposal contributions were both registered late and uploaded
late:

JCTVC-W0129 (a proposal document from Sharp Labs proposing changes to SCC
profile/level limits based on chroma format) [uploaded 02-18]

JCTVC-W0133 (a proposal document from Philips proposing text changes in the HEVC
specification to support metadata of SMPTE 2094-20 standard) [uploaded 02-22]
The following other documents were registered on time but were uploaded late:

JCTVC-W0045 (an information document with comments on the hybrid log-gamma
approach to HDR video) [uploaded 02-15]

JCTVC-W0050 (an information document about ICTCP colour representation) [uploaded
02-22]

JCTVC-W0090 (an information document containing subjective test results for HDR
video) [uploaded 02-15]

JCTVC-W0091 (an information document reporting on objective video quality metrics
for HDR video) [uploaded 02-15]

JCTVC-W0096 (proposed editorial improvements to the HEVC SCC draft text)
[uploaded 02-20]

JCTVC-W0105 (an information document reporting effects of 4:2:0 chroma subsampling
for HDR video) [uploaded 02-19]

JCTVC-W0106 (comments on the performance of two possible scenarios for distributing
HDR content employing a single layer) [uploaded 02-20]

JCTVC-W0134 (providing SHM software modifications for multi-view support)
[uploaded 02-23]
The following cross-verification reports were registered on time but were uploaded late: JCTVC-W0064 [initial placeholder improved on 02-19], JCTVC-W0070 [initial placeholder improved
on 02-18], JCTVC-W0073 [uploaded 02-17], JCTVC-W0080 [uploaded 02-15], and JCTVC-W0081 [uploaded 02-16].
(Documents that were both registered late and uploaded late, other than technical proposal
documents, are not listed in this section, in the interest of brevity.)
The following contribution registrations were later cancelled, withdrawn, never provided, were
cross-checks of a withdrawn contribution, or were registered in error: JCTVC-W0036, JCTVC-W0049, JCTVC-W0065, JCTVC-W0082, JCTVC-W0083, JCTVC-W0102, JCTVC-W0124,
JCTVC-W0136, JCTVC-W0137, JCTVC-W0138.
Ad hoc group interim activity reports, CE summary results reports, break-out activity reports,
and information documents containing the results of experiments requested during the meeting
are not included in the above list, as these are considered administrative report documents to
which the uploading deadline is not applied.
As a general policy, missing documents were not to be presented, and late documents (and
substantial revisions) could only be presented when sufficient time for studying was given after
the upload. Again, an exception is applied for AHG reports, CE summaries, and other such
reports which can only be produced after the availability of other input documents. There were
no objections raised by the group regarding presentation of late contributions, although there was
some expression of annoyance and remarks on the difficulty of dealing with late contributions
and late revisions.
It was remarked that documents that are substantially revised after the initial upload are also a
problem, as this becomes confusing, interferes with study, and puts an extra burden on
synchronization of the discussion. This is especially a problem in cases where the initial upload
is clearly incomplete, and in cases where it is difficult to figure out what parts were changed in a
revision. For document contributions, revision marking is very helpful to indicate what has been
changed. Also, the "comments" field on the web site can be used to indicate what is different in a
revision.
"Placeholder" contribution documents that were basically empty of content, with perhaps only a
brief abstract and some expression of an intent to provide a more complete submission as a
revision, were considered unacceptable and were to be rejected in the document management
system, as has been agreed since the third meeting.
The initial uploads of the following contribution documents (both crosscheck reports) were
rejected as "placeholders" without any significant content and were not corrected until after the
upload deadline: JCTVC-W0064 [improved on 02-19] and JCTVC-W0070 [improved on 02-18].
A few contributions may have had some problems relating to IPR declarations in the initial
uploaded versions (missing declarations, declarations saying they were from the wrong
companies, etc.). Any such issues were corrected by later uploaded versions in a reasonably
timely fashion in all cases (to the extent of the awareness of the chairs).
Some other errors were noticed in other initial document uploads (wrong document numbers in
headers, etc.) which were generally sorted out in a reasonably timely fashion. The document web
site contains an archive of each upload, along with a record of uploading times.
1.4.3 Measures to facilitate the consideration of contributions
It was agreed that, due to the continuingly high workload for this meeting, the group would try to
rely extensively on summary CE reports. For other contributions, it was agreed that generally
presentations should not exceed 5 minutes to achieve a basic understanding of a proposal – with
further review only if requested by the group. For cross-verification contributions, it was agreed
that the group would ordinarily only review cross-checks for proposals that appear promising.
When considering cross-check contributions, it was agreed that, to the extent feasible, the
following data should be collected:

Subject (including document number).

Whether common conditions were followed.

Whether the results are complete.

Whether the results match those reported by the contributor (within reasonable limits,
such as minor compiler/platform differences).

Whether the contributor studied the algorithm and software closely and has demonstrated
adequate knowledge of the technology.

Whether the contributor independently implemented the proposed technology feature, or
at least compiled the software themselves.

Any special comments and observations made by a cross-check contributor.
1.4.4 Outputs of the preceding meeting
The output documents of the previous meeting, particularly including the meeting report
JCTVC-V1000, the improved HEVC Test Model 16 (HM16) JCTVC-V1002, the RExt
Reference Software Draft 4 JCTVC-V1011, the SHVC test model 11 (SHM11) JCTVC-V1007,
the SHVC Conformance Testing Draft 4 JCTVC-V1008, the SHVC Reference Software Draft 3
JCTVC-V1013, the SHVC Verification Test Plan JCTVC-V1004, the Screen Content Coding
(SCC) Draft Text 5 JCTVC-V1005 (integrated into a draft of HEVC version 4), and the SCC test
model 6 JCTVC-V1014, were approved. The HM reference software and its extensions for RExt,
SHVC and SCC were also approved.
The group was initially asked to review the prior meeting report for finalization. The meeting
report was later approved without modification.
All output documents of the previous meeting and the software had been made available in a
reasonably timely fashion.
The chairs asked if there were any issues regarding potential mismatches between perceived
technical content prior to adoption and later integration efforts. It was also asked whether there
was adequate clarity of precise description of the technology in the associated proposal
contributions.
It was remarked, in regard to software development efforts, that for cases where "code cleanup"
is a goal as well as integration of some intentional functional modification, these two efforts
should be conducted in separate integrations, so that it is possible to understand what is
happening and to inspect the intentional functional modifications.
The need for establishing good communication with the software coordinators was also
emphasized.
At some previous meetings, it had been remarked that in some cases the software
implementation of adopted proposals revealed that the description that had been the basis of the
adoption apparently was not precise enough, so that the software unveiled details that were not
known before (except possibly for CE participants who had studied the software). Also, there
should be time to study combinations of different adopted tools with more detail prior to
adoption.
CE descriptions need to be fully precise – this is intended as a method of enabling full study and
testing of a specific technology.
Greater discipline in terms of what can be established as a CE may be an approach to helping
with such issues. CEs should be more focused on testing just a few specific things, and the
description should precisely define what is intended to be tested (available by the end of the
meeting when the CE plan is approved).
It was noted that sometimes there is a problem of needing to look up other referenced
documents, sometimes through multiple levels of linked references, to understand what
technology is being discussed in a contribution – and that this often seems to happen with CE
documents. It was emphasized that we need to have some reasonably understandable basic
description, within a document, of what it is talking about.
Software study can be a useful and important element of adequate study; however, software
availability is not a proper substitute for document clarity.
Software shared for CE purposes needs to be available with adequate time for study. Software of
CEs should be available early, to enable close study by cross-checkers (not just provided shortly
before the document upload deadline).
Issues of combinations between different features (e.g., different adopted features) also tend to
sometimes arise in the work.
1.5 Attendance
The list of participants in the JCT-VC meeting can be found in Annex B of this report.
The meeting was open to those qualified to participate either in ITU-T WP3/16 or ISO/IEC
JTC 1/SC 29/WG 11 (including experts who had been personally invited by the Chairs as
permitted by ITU-T or ISO/IEC policies).
Participants had been reminded of the need to be properly qualified to attend. Those seeking
further information regarding qualifications to attend future meetings may contact the Chairs.
1.6 Agenda
The agenda for the meeting was as follows:

IPR policy reminder and declarations

Contribution document allocation

Reports of ad hoc group activities

Reports of Core Experiment activities

Review of results of previous meeting

Consideration of contributions and communications on project guidance

Consideration of technology proposal contributions

Consideration of information contributions

Coordination activities

Future planning: Determination of next steps, discussion of working methods,
communication practices, establishment of coordinated experiments, establishment of
AHGs, meeting planning, refinement of expected standardization timeline, other planning
issues

Other business as appropriate for consideration
1.7 IPR policy reminder
Participants were reminded of the IPR policy established by the parent organizations of the JCT-VC and were referred to the parent body websites for further information. The IPR policy was
summarized for the participants.
The ITU-T/ITU-R/ISO/IEC common patent policy shall apply. Participants were particularly
reminded that contributions proposing normative technical content shall contain a non-binding
informal notice of whether the submitter may have patent rights that would be necessary for
implementation of the resulting standard. The notice shall indicate the category of anticipated
licensing terms according to the ITU-T/ITU-R/ISO/IEC patent statement and licensing
declaration form.
This obligation is supplemental to, and does not replace, any existing obligations of parties to
submit formal IPR declarations to ITU-T/ITU-R/ISO/IEC.
Participants were also reminded of the need to formally report patent rights to the top-level
parent bodies (using the common reporting form found on the database listed below) and to
make verbal and/or document IPR reports within the JCT-VC as necessary in the event that they
are aware of unreported patents that are essential to implementation of a standard or of a draft
standard under development.
Some relevant links for organizational and IPR policy information are provided below:

http://www.itu.int/ITU-T/ipr/index.html (common patent policy for ITU-T, ITU-R, ISO,
and IEC, and guidelines and forms for formal reporting to the parent bodies)

http://ftp3.itu.int/av-arch/jctvc-site (JCT-VC contribution templates)

http://www.itu.int/ITU-T/studygroups/com16/jct-vc/index.html (JCT-VC general
information and founding charter)

http://www.itu.int/ITU-T/dbase/patent/index.html (ITU-T IPR database)
 http://www.itscj.ipsj.or.jp/sc29/29w7proc.htm (JTC 1/SC 29 Procedures)
It is noted that the ITU TSB director's AHG on IPR had issued a clarification of the IPR
reporting process for ITU-T standards, as follows, per SG 16 TD 327 (GEN/16):
"TSB has reported to the TSB Director's IPR Ad Hoc Group that they are receiving Patent
Statement and Licensing Declaration forms regarding technology submitted in Contributions
that may not yet be incorporated in a draft new or revised Recommendation. The IPR Ad
Hoc Group observes that, while disclosure of patent information is strongly encouraged as
early as possible, the premature submission of Patent Statement and Licensing Declaration
forms is not an appropriate tool for such purpose.
In cases where a contributor wishes to disclose patents related to technology in Contributions,
this can be done in the Contributions themselves, or informed verbally or otherwise in
written form to the technical group (e.g. a Rapporteur's group), disclosure which should then
be duly noted in the meeting report for future reference and record keeping.
It should be noted that the TSB may not be able to meaningfully classify Patent Statement
and Licensing Declaration forms for technology in Contributions, since sometimes there are
no means to identify the exact work item to which the disclosure applies, or there is no way
to ascertain whether the proposal in a Contribution would be adopted into a draft
Recommendation.
Therefore, patent holders should submit the Patent Statement and Licensing Declaration form
at the time the patent holder believes that the patent is essential to the implementation of a
draft or approved Recommendation."
The chairs invited participants to make any necessary verbal reports of previously-unreported
IPR in draft standards under preparation, and opened the floor for such reports: No such verbal
reports were made.
1.8 Software copyright disclaimer header reminder
It was noted that, as had been agreed at the 5th meeting of the JCT-VC and approved by both
parent bodies at their collocated meetings at that time, the HEVC reference software copyright
license header language is the BSD license with preceding sentence declaring that contributor or
third party rights are not granted, as recorded in N10791 of the 89th meeting of ISO/IEC JTC 1/
SC 29/WG 11. Both ITU and ISO/IEC will be identified in the <OWNER> and
<ORGANIZATION> tags in the header. This software is used in the process of designing the
HEVC standard and its extensions, and for evaluating proposals for technology to be included in
the design. After finalization of the draft (current version JCTVC-M1010), the software will be
published by ITU-T and ISO/IEC as an example implementation of the HEVC standard and for
use as the basis of products to promote adoption of the technology.
Different copyright statements shall not be committed to the committee software repository (in
the absence of subsequent review and approval of any such actions). As noted previously, it must
be further understood that any initially-adopted such copyright header statement language could
further change in response to new information and guidance on the subject in the future.
1.9 Communication practices
The documents for the meeting can be found at http://phenix.it-sudparis.eu/jct/. For the first two
JCT-VC meetings, the JCT-VC documents had been made available at http://ftp3.itu.int/av-arch/jctvc-site, and documents for the first two JCT-VC meetings remain archived there as well.
That site was also used for distribution of the contribution document template and circulation of
drafts of this meeting report.
JCT-VC email lists are managed through the site https://mailman.rwth-aachen.de/mailman/options/jct-vc, and to send email to the reflector, the email address is jct-vc@lists.rwth-aachen.de. Only members of the reflector can send email to the list. However,
membership of the reflector is not limited to qualified JCT-VC participants.
It was emphasized that reflector subscriptions and email sent to the reflector must use real names
when subscribing and sending messages, and subscribers must respond adequately to basic
inquiries regarding the nature of their interest in the work.
It was emphasized that usually discussions concerning CEs and AHGs should be performed
using the JCT-VC email reflector. CE internal discussions should primarily be concerned with
organizational issues. Substantial technical issues that are not reflected by the original CE plan
should be openly discussed on the reflector. Any new developments that are result of private
communication cannot be considered to be the result of the CE.
For the headers and registrations of CE documents and AHG reports, email addresses of
participants and contributors may be obscured or absent (and will be on request), although these
will be available (in human readable format – possibly with some "obscurification") for primary
CE coordinators and AHG chairs.
1.10 Terminology
Some terminology used in this report is explained below:

ACT: Adaptive colour transform.

Additional Review: The stage of the ITU-T "alternative approval process" that follows a
Last Call if substantial comments are received in the Last Call, during which a proposed
revised text is available on the ITU web site for consideration as a candidate for final
approval.

AHG: Ad hoc group.

AI: All-intra.

AIF: Adaptive interpolation filtering.

ALF: Adaptive loop filter.

AMP: Asymmetric motion partitioning – a motion prediction partitioning for which the
sub-regions of a region are not equal in size (in HEVC, being N/2x2N and 3N/2x2N or
2NxN/2 and 2Nx3N/2 with 2N equal to 16 or 32 for the luma component).

AMVP: Adaptive motion vector prediction.

APS: Active parameter sets.

ARC: Adaptive resolution conversion (synonymous with DRC, and a form of RPR).

AU: Access unit.

AUD: Access unit delimiter.

AVC: Advanced video coding – the video coding standard formally published as ITU-T
Recommendation H.264 and ISO/IEC 14496-10.

BA: Block adaptive.

BC: May refer either to block copy (see CPR or IBC) or backward compatibility. In the
case of backward compatibility, this often refers to what is more formally called forward
compatibility.

BD: Bjøntegaard-delta – a method for measuring percentage bit rate savings at equal PSNR or decibels of PSNR benefit at equal bit rate (e.g., as described in document VCEG-M33 of April 2001); see the illustrative sketch after this list.

BL: Base layer.

BoG: Break-out group.

BR: Bit rate.

BV: Block vector (MV used for intra BC prediction, not a term used in the standard).

CABAC: Context-adaptive binary arithmetic coding.

CBF: Coded block flag(s).

CC: May refer to context-coded, common (test) conditions, or cross-component.

CCP: Cross-component prediction.

CD: Committee draft – a draft text of an international standard for the first formal ballot
stage of the approval process in ISO/IEC – corresponding to a PDAM for amendment
texts.

CE: Core experiment – a coordinated experiment conducted after the 3rd or subsequent
JCT-VC meeting and approved to be considered a CE by the group (see also SCE and
SCCE).

CGS: Colour gamut scalability (historically, coarse-grained scalability).

CL-RAS: Cross-layer random-access skip.

CPR: Current-picture referencing, also known as IBC – a technique by which sample
values are predicted from other samples in the same picture by means of a displacement
vector sometimes called a block vector, in a manner basically the same as motion-compensated prediction.

Consent: A step taken in the ITU-T to formally move forward a text as a candidate for
final approval (the primary stage of the ITU-T "alternative approval process").

CTC: Common test conditions.

CVS: Coded video sequence.

DAM: Draft amendment – a draft text of an amendment to an international standard for
the second formal ballot stage of the approval process in ISO/IEC – corresponding to a
DIS for complete texts.

DCT: Discrete cosine transform (sometimes used loosely to refer to other transforms
with conceptually similar characteristics).

DCTIF: DCT-derived interpolation filter.

DoC / DoCR: Disposition of comments report – a document approved by the WG 11
parent body in response to comments from NBs.

DIS: Draft international standard – the second formal ballot stage of the approval process
in ISO/IEC – corresponding to a DAM for amendment texts.

DF: Deblocking filter.

DRC: Dynamic resolution conversion (synonymous with ARC, and a form of RPR).

DT: Decoding time.

ECS: Entropy coding synchronization (typically synonymous with WPP).

EOTF: Electro-optical transfer function – a function that converts a representation value
to a quantity of output light (e.g., light emitted by a display).

EPB: Emulation prevention byte (as in the emulation_prevention_byte syntax element of
AVC or HEVC).

EL: Enhancement layer.

ET: Encoding time.

ETM: Experimental test model (design and software used for prior HDR/WCG coding
experiments in MPEG).

FDAM: Final draft amendment – a draft text of an amendment to an international
standard for the third formal ballot stage of the approval process in ISO/IEC –
corresponding to an FDIS for complete texts.

FDIS: Final draft international standard – a draft text of an international standard for the
third formal ballot stage of the approval process in ISO/IEC – corresponding to an
FDAM for amendment texts.

HEVC: High Efficiency Video Coding – the video coding standard developed and
extended by the JCT-VC, formalized by ITU-T as Rec. ITU-T H.265 and by ISO/IEC as
ISO/IEC 23008-2.

HLS: High-level syntax.

HM: HEVC Test Model – a video coding design containing selected coding tools that
constitutes our draft standard design – now also used especially in reference to the (non-normative) encoder algorithms (see WD and TM).

IBC (also Intra BC): Intra block copy, also known as CPR – a technique by which
sample values are predicted from other samples in the same picture by means of a
displacement vector called a block vector, in a manner conceptually similar to motion-compensated prediction.

IBDI: Internal bit-depth increase – a technique by which lower bit-depth (esp. 8 bits per
sample) source video is encoded using higher bit-depth signal processing, ordinarily
including higher bit-depth reference picture storage (esp. 12 bits per sample).

IBF: Intra boundary filtering.

ILP: Inter-layer prediction (in scalable coding).

IPCM: Intra pulse-code modulation (similar in spirit to IPCM in AVC and HEVC).

JM: Joint model – the primary software codebase that has been developed for the AVC
standard.

JSVM: Joint scalable video model – another software codebase that has been developed
for the AVC standard, which includes support for scalable video coding extensions.

Last Call: The stage of the ITU-T "alternative approval process" that follows Consent,
during which a proposed text is available on the ITU web site for consideration as a
candidate for final approval.

LB or LDB: Low-delay B – the variant of the LD conditions that uses B pictures.

LD: Low delay – one of two sets of coding conditions designed to enable interactive real-time communication, with less emphasis on ease of random access (contrast with RA).
Typically refers to LB, although also applies to LP.

LM: Linear model.

LP or LDP: Low-delay P – the variant of the LD conditions that uses P frames.

LUT: Look-up table.

LTRP: Long-term reference pictures.

MANE: Media-aware network elements.

MC: Motion compensation.

MOS: Mean opinion score.

MPEG: Moving picture experts group (WG 11, the parent body working group in
ISO/IEC JTC 1/SC 29, one of the two parent bodies of the JCT-VC).

MV: Motion vector.

NAL: Network abstraction layer (as in AVC and HEVC).

NB: National body (usually used in reference to NBs of the WG 11 parent body).

NSQT: Non-square quadtree.

NUH: NAL unit header.

NUT: NAL unit type (as in AVC and HEVC).

OBMC: Overlapped block motion compensation (e.g., as in H.263 Annex F).

OETF: Opto-electronic transfer function – a function that converts input light (e.g.,
light input to a camera) to a representation value.

OLS: Output layer set.

OOTF: Optical-to-optical transfer function – a function that converts input light (e.g.
light input to a camera) to output light (e.g., light emitted by a display).

PCP: Parallelization of context processing.

PDAM: Proposed draft amendment – a draft text of an amendment to an international
standard for the first formal ballot stage of the ISO/IEC approval process – corresponding
to a CD for complete texts.

POC: Picture order count.

PoR: Plan of record.

PPS: Picture parameter set (as in AVC and HEVC).

QM: Quantization matrix (as in AVC and HEVC).

QP: Quantization parameter (as in AVC and HEVC, sometimes confused with
quantization step size).

QT: Quadtree.

RA: Random access – a set of coding conditions designed to enable relatively-frequent
random access points in the coded video data, with less emphasis on minimization of
delay (contrast with LD).

RADL: Random-access decodable leading.

RASL: Random-access skipped leading.

R-D: Rate-distortion.

RDO: Rate-distortion optimization.

RDOQ: Rate-distortion optimized quantization.

ROT: Rotation operation for low-frequency transform coefficients.

RPLM: Reference picture list modification.

RPR: Reference picture resampling (e.g., as in H.263 Annex P), a special case of which
is also known as ARC or DRC.

RPS: Reference picture set.

RQT: Residual quadtree.

RRU: Reduced-resolution update (e.g. as in H.263 Annex Q).

RVM: Rate variation measure.

SAO: Sample-adaptive offset.

SCC: Screen content coding.

SCE: Scalability core experiment.

SCCE: Screen content core experiment.

SCM: Screen coding model.

SD: Slice data; alternatively, standard-definition.

SEI: Supplemental enhancement information (as in AVC and HEVC).

SH: Slice header.

SHM: Scalable HM.

SHVC: Scalable high efficiency video coding.

SIMD: Single instruction, multiple data.

SPS: Sequence parameter set (as in AVC and HEVC).

TBA/TBD/TBP: To be announced/determined/presented.

TE: Tool Experiment – a coordinated experiment conducted toward HEVC design
between the 1st and 2nd or 2nd and 3rd JCT-VC meetings, or a coordinated experiment
conducted toward SHVC design between the 11th and 12th JCT-VC meetings.

TGM: Text and graphics with motion – a category of content that primarily contains
rendered text and graphics with motion, mixed with a relatively small amount of camera-captured content.

VCEG: Visual coding experts group (ITU-T Q.6/16, the relevant rapporteur group in
ITU-T WP3/16, which is one of the two parent bodies of the JCT-VC).

VPS: Video parameter set – a parameter set that describes the overall characteristics of a
coded video sequence – conceptually sitting above the SPS in the syntax hierarchy.

WD: Working draft – a term for a draft standard, especially one prior to its first ballot in
the ISO/IEC approval process, although the term is sometimes used loosely to refer to a
draft standard at any actual stage of parent-level approval processes.

WG: Working group, a group of technical experts (usually used to refer to WG 11, a.k.a.
MPEG).

WPP: Wavefront parallel processing (usually synonymous with ECS).

Block and unit names:
o CTB: Coding tree block (luma or chroma) – unless the format is monochrome,
there are three CTBs per CTU.
o CTU: Coding tree unit (containing both luma and chroma, synonymous with
LCU), with a size of 16x16, 32x32, or 64x64 for the luma component.
o CB: Coding block (luma or chroma), a luma or chroma block in a CU.
o CU: Coding unit (containing both luma and chroma), the level at which the
prediction mode, such as intra versus inter, is determined in HEVC, with a size of
2Nx2N for 2N equal to 8, 16, 32, or 64 for luma.
o LCU: (formerly LCTU) largest coding unit (name formerly used for CTU before
finalization of HEVC version 1).
o PB: Prediction block (luma or chroma), a luma or chroma block of a PU, the level
at which the prediction information is conveyed or the level at which the
prediction process is performed in HEVC (see footnote 1).
o PU: Prediction unit (containing both luma and chroma), the level of the prediction
control syntax (see footnote 1) within a CU, with eight shape possibilities in HEVC:

2Nx2N: Having the full width and height of the CU.

2NxN (or Nx2N): Having two areas that each have the full width and half
the height of the CU (or having two areas that each have half the width
and the full height of the CU).

NxN: Having four areas that each have half the width and half the height
of the CU, with N equal to 4, 8, 16, or 32 for intra-predicted luma and N
equal to 8, 16, or 32 for inter-predicted luma – a case only used when
2N×2N is the minimum CU size.

N/2x2N paired with 3N/2x2N or 2NxN/2 paired with 2Nx3N/2: Having
two areas that are different in size – cases referred to as AMP, with 2N
equal to 16 or 32 for the luma component.
o TB: Transform block (luma or chroma), a luma or chroma block of a TU, with a
size of 4x4, 8x8, 16x16, or 32x32.
o TU: Transform unit (containing both luma and chroma), the level of the residual
transform (or transform skip or palette coding) segmentation within a CU (which,
when using inter prediction in HEVC, may sometimes span across multiple PU
regions).
Footnote 1: The definitions of PB and PU are tricky for a 64x64 intra luma CB when the prediction control information is sent at the 64x64 level but the prediction operation is performed on 32x32 blocks. The PB, PU, TB and TU definitions are also tricky in relation to chroma for the smallest block sizes with the 4:2:0 and 4:2:2 chroma formats. Double-checking of these definitions is encouraged.
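As a pointer for readers unfamiliar with the BD metric listed above, the following minimal Python sketch illustrates one common way a BD-rate figure is computed: fit a cubic polynomial to (PSNR, log bit rate) points for each codec and average the horizontal gap between the two fitted curves over the overlapping PSNR range. This is an illustrative sketch of the general approach described in VCEG-M33, not the normative JCT-VC procedure; the function name, argument names, and the example numbers in the usage comment are assumptions made here for illustration.

```python
import numpy as np


def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Illustrative BD-rate sketch: average % bit-rate difference at equal PSNR.

    Each argument is a list of rate-distortion points (rates in kbps, PSNR in
    dB), typically four per codec. Negative return values indicate bit-rate
    savings of the test codec relative to the anchor.
    """
    log_r_anchor = np.log(np.asarray(rate_anchor, dtype=float))
    log_r_test = np.log(np.asarray(rate_test, dtype=float))

    # Fit cubic polynomials: log(rate) as a function of PSNR.
    p_anchor = np.polyfit(psnr_anchor, log_r_anchor, 3)
    p_test = np.polyfit(psnr_test, log_r_test, 3)

    # Integrate over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_anchor = np.polyint(p_anchor)
    int_test = np.polyint(p_test)
    avg_anchor = (np.polyval(int_anchor, hi) - np.polyval(int_anchor, lo)) / (hi - lo)
    avg_test = (np.polyval(int_test, hi) - np.polyval(int_test, lo)) / (hi - lo)

    # Average log-rate difference converted back to a percentage.
    return (np.exp(avg_test - avg_anchor) - 1.0) * 100.0


# Example usage with made-up rate/PSNR points (assumed values, illustration only):
# print(bd_rate([1000, 1600, 2500, 4000], [34.0, 36.0, 38.0, 40.0],
#               [ 900, 1450, 2300, 3700], [34.1, 36.1, 38.0, 40.2]))
```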
1.11 Liaison activity
The JCT-VC did not directly send or receive formal liaison communications at this meeting.
However, the following information was conveyed in liaison communication at the parent-body
level:

[ TD 362-GEN ] / m37654 was received from DVB including remarks on BT.2020
colorimetry, both for HDR and for backward compatibility to SDR for "UHD-1 Phase 2"
(also interested in NBC). WG 11 responded in N 16070 to convey the conclusion reached on
not developing a new profile as recorded in section 7.1, to comment on further gap analysis
to be done, to convey information about recent SHVC verification testing and planned
guideline development and testing of single-layer coding using the HEVC Main 10 Profile,
and to list some existing possible approaches to backward compatibility for HDR/WCG
video coding.

M37752 was received by WG 11 from ATSC on HDR. WG 11 responded in N 16071 to
convey the conclusion reached on not developing a new profile as recorded in section 7.1, to
convey news of planned guideline development and testing of single-layer coding using the
HEVC Main 10 Profile, and of planned study for backward compatible use cases.

[ TD 385-GEN ] / M37788 was received from ITU-R WP 6C on BT.[HDR]. WG 11
responded in N 16072 to confirm plans to move forward toward support of BT.[HDR] in the
three most relevant standards: High Efficiency Video Coding (HEVC), Advanced Video
Coding (AVC), and Coding-Independent Code Points (CICP), and the expectation that the
approval process of the associated amendments would be completed by mid-2017.

M38016 was received from ETSI. WG 11 responded in N 16073 to confirm that HDR/WCG
video services can be effectively enabled using the HEVC video coding standard (Main 10
profile) and to describe plans to produce a technical report on conversion and coding
practices for HDR/WCG video, focused primarily on single-layer coding using the HEVC
Main 10 Profile, and related verification testing. Some available approaches for backward
compatible use cases were also surveyed for information.

[ TD 387-GEN ] / m38082 was received from SMPTE on ST 2094 HDR remapping
information, requesting support in HEVC for new supplemental information being developed
in SMPTE. WG 11 responded in N 16074 to convey the conclusion reached on not
developing a new profile as recorded in section 7.1, to suggest that the user-data-registered
SEI message approach be used by SMPTE for its new remapping information, and to convey
the plan to produce a technical report on conversion and coding practices for HDR/WCG
video, focused primarily on single-layer coding using the HEVC Main 10 Profile, along with
related verification testing.
1.12 Opening remarks
Opening remarks included:

Meeting logistics, review of communication practices, attendance recording, and
registration and badge pick-up reminder
Primary topic areas were noted as follows:

Screen content coding
o Input from editors (incl. forgotten/open technical aspects such as combined
wavefronts & tiles)
o Input on WP with current picture referencing
o Input on DPB handling with CPR
o Non-SCC aspects (VUI/SEI, scalable RExt profiles, high-throughput profiles,
errata)

HDR: New work allocated to JCT-VC effective at this meeting
o There has been prior work on parent bodies (e.g., eight CEs of MPEG, AHGs of
MPEG & VCEG, and a January meeting of MPEG AHG)
o A significant development is the report of new non-normative techniques for
improved "anchor" encoding with ordinary HEVC (see esp. CE1 of MPEG)
o That can be part of the development of the planned "best practices" / "good
practices" document development, which is a planned deliverable (see CE1 of
MPEG and AHG report of VCEG)
o The top priority is to confirm whether an enhanced signal processing extension
can provide an adequate improvement over such an anchor to justify a new
extension

How to measure the benefit achieved?

Should organize some viewing during this meeting and discuss metrics.

Non-normative vs. normative for decoding post-processing also to be
considered
o There is new relevant liaison input (e.g. from ITU-R WP6C and SMPTE).
o Backward compatibility is an additional issue to consider.

Multiple dimensions to the issue

The highest priority is to have the best HDR quality (regardless of BC)

There have been liaison inputs requesting backward compatibility

Some ways to achieve BC that are already supported (or soon will be):

Simply coding HDR video (e.g. with HLG or PQ) and letting an
SDR decoder receive it and display it as SDR

Scalable coding

Perhaps using some existing SEI messages (tone mapping or CRI)

BC brings the need to consider relative importance of the SDR and HDR,
and which flavours of SDR and HDR can be supported

Two flavors of BC have been proposed:

Decoders that do not use new extension data, for "bitstream
backward compatibility"

Using new extension data for "display backward compatibility"

Potential for needing multiple profiles / multiple extensions depending on
the use case

We need to properly define requirement aspects and priorities

Prior expressions of requirements were sought: It was suggested to
look for an MPEG output, and for the VCEG output, to check
C.853 and to see if there was also another VCEG output (circa
February 2015 in both cases)

There are approaches that have been studied in CEs to address
improvement over existing BC solutions

Requirement issues need parent-level discussion, clarification, and
joint agreement for BC issues, before spending effort on that in
JCT-VC.

Corrigenda items for version 3 (see, e.g., the AHG2 and AHG7 reports and SCC editor
input JCTVC-W0096)

Verification testing for SHVC

Reference software and conformance testing for RExt & SHVC
o RExt reference software was forwarded for approval at the last meeting, and
conformance is scheduled for FDAM at this meeting
o SHVC conformance is scheduled for FDAM (and also for MV-HEVC and 3D-HEVC from JCT-3V), but needed editorial improvement (a couple of bugs also identified)

Test model texts and software manuals

Common test conditions for coding efficiency experiments (hasn't been under active
development here recently; being considered more elsewhere)
Status of deliverables (all delivered).
Key deliverables initially planned from this meeting:

SCC specification Draft 6 (FDIS)

FDAM for RExt conformance (draft 6)

Verification test report for SHVC

SHVC software Draft 4 (FDAM)

SHVC conformance Draft 5 (FDAM)

SCC Reference software

SCC Conformance

SCC verification testing plan (postponed)

HDR outputs
o Suggested practices draft
o CEs (none later planned)
o Test model (none later issued)
 New HM, SHM (none later issued), SCM
A single meeting track was followed for most meeting discussions.
1.13 Scheduling of discussions
Scheduling: Generally, meeting time was scheduled during 0900–2000 hours, with coffee and
lunch breaks as convenient. The meeting had been announced to start with AHG reports and
continue with parallel review on Screen Content Coding CE work and related contributions
during the first few days. Ongoing scheduling refinements were announced on the group email
reflector as needed.
Some particular scheduling notes are shown below, although not necessarily 100% accurate or
complete:

Fri. 19 Feb., 1st day
o 0900–1400, 1530–2100 Opening remarks, status review, AHG report review, CE1,
CE2, CE3, CE4

Sat. 20 Feb., 2nd day
o Did not meet in the morning (except side activity for visual testing preparation).
o 1400–2000 CE1 further consideration & related contributions, CE5, CE6, CE7

Sun. 21 Feb., 3rd day
o 0900–1300 CE1 & related, CE2 further consideration & related contributions
o 1630 SCC with multiview & scalability, SCC profile/level definitions, SCC editorial,
SCC performance analysis, non-normative, encoder optimization

Mon. 22 Feb., 4th day
o Morning parent-body plenary
o 1630 Joint discussion on JCT-VC topics
o 1800 Joint discussion on other topics

Tue. 23 Feb., 5th day
o CE2 & related further consideration, CE6 & related, CE7 related, CE8 related
o 1400 Joint discussion on HDR

Wed. 24 Feb., 6th day
o Morning parent-body plenary
o Joint discussion on HDR
o 1400 Alternative colour spaces, encoding practices, SEI messages and VUI, CE7
related, CE8 related

Thu. 25 Feb., 7th day
o Remaining topics

Fri. 26 Feb., 8th day
o Remaining topics (SHVC verification test results)
o Meeting closing (1240)
1.14 Contribution topic overview
The approximate subject categories and quantity of contributions per category for the meeting
were summarized and categorized as follows. Some plenary sessions were chaired by both co-chairmen, and others by only one. Chairing of other discussions is noted for particular topics.
 AHG reports (12+2 HDR) (section 2)
 Documents of HDR AHG in Vancouver (6) (for information) (section 3)
 Project development status (6) (section 4)
 HDR CE1: Optimization without HEVC specification change (1) (section 5.1)
 HDR CE2: 4:2:0 YCbCr NCL fixed point (13) (section 5.2)
 HDR CE3: Objective/subjective metrics (3) (section 5.3)
 HDR CE4: Consumer monitor testing (1) (section 5.4)
 HDR CE5: Colour transforms and sampling filters (6) (section 5.5)
 HDR CE6: Non-normative post processing (10) (section 5.6)
 HDR CE7: Hybrid Log Gamma investigation (4) (section 5.7)
 HDR CE8: Viewable SDR testing (1) (section 5.8)
 SCC coding tools (5) (section 6.1)
 HDR coding (34) (section 6.2)
• CE1 related (11)
• CE2 related (7)
• CE3 related (1)
• CE6 related (1)
• CE7 related (6)
• Other (8)
 High-level syntax (0) (section 6.3)
 VUI and SEI messages (5) (section 6.4)
 Non-normative encoder optimization (9) (section 6.5)
 Plenary discussions (section 7)
 Outputs & planning: AHG & CE plans, Conformance, Reference software, Verification testing, Chroma format, CTC (sections 8, 9, and 10)
NOTE – The number of contributions in each category, as shown in parentheses above, may not be 100% precise.
1.15 Topics discussed in final wrap-up at the end of the meeting
Potential remaining issues reviewed near the end of the meeting (Thursday 25 February and
Friday 26 February) were as follows:

AHG report on HEVC conformance W0004

AHG report on test sequence material W0010

SCC open issues (discussed several times Thu & Fri GJS)
• DPB with CPR W0077 (an alternative solution involving a constraint was discussed, which could also involve double bumping sometimes) – Decision: Adopt the JCTVC-W0077 fix (and don't forget to implement it in the software).
• Alignment with multiview & scalable W0076 revisit – Decision: Approach agreed – use the layer id
• Layer-related NB remarks: two technical comments, which were confirmed – invocation of INBLD decoding for additional layer sets with more than one layer, and layers not present issue related to pruning of layers – Decision: OK per NB comments. See section 4.4 for further detail.
• Scalable RExt decoder conformance requirements – editorial, delegated to the editors
• 3V text bugs reported in JCT3V-N0002 (also found in editor's input V0031 and W0096) – OK to resolve as suggested
• When DPB size is equal to 1 and CPR is enabled, the design intent is to allow all-intra with filtering disabled (only) (i.e. with the TwoVersions flag equal to 0). Editors were asked to check the text in this regard and add an expression of the constraint for this use case if it is not already present.

SHVC verification test review W0095 – one sequence was discussed – see notes on that
contribution.

SHM software modifications for multiview W0134 – no concerns were expressed
 HDR
• CE1-related viewing for modified QP handling W0039
• CE2 viewing results for W0097 chroma QP offset
• CE2-related adaptive quantization W0068, adaptive PQ W0071, combination of CE1 & CE2 W0100
• CE3 subjective & objective inputs W0090, W0091
• CE3-related VQM reference code
• CE4 viewing results for consumer monitor testing? – 1000 Thu viewing was done – the JS9500 was used for some viewing of CE4 & CE5 – there were somewhat mixed comments about the adequacy of using the consumer monitor for HDR test viewing – efforts to use HDMI to play the video had not been successful, so re-encoding was needed with playback of bitstreams
• CE5 viewing results for colour transforms & upsampling/downsampling filters? – 1000 Thu viewing was done – see notes above
• CE5 contributions – no need for further discussion was noted
• CE6 post-processing – no need for detailed presentation was noted
• CE7 viewing results for HLG? – some viewing was done to illustrate the phenomenon reported in W0035 (Arris), and it was confirmed that the phenomenon was shown
• CE7 1 – no need for further detailed review (see comment on viewing above)
• CE7-related – no need for further discussion was noted
• Encoding practices – no need for further discussion was noted
• SEI/VUI – no need for further discussion was noted

General non-normative revisit W0062 GOP size and W0038 encoder optimizations - closed

HDR encoding conversion process W0046, HDRTools W0053 – no need for further
discussion was noted

Output preparations (see section 10 for full list of output documents)
• DoCRs were prepared (SCC, RExt conformance – issuing a new edition was confirmed, SHVC conformance – note the impact of issuing a new edition, SHVC software – new edition, to be coordinated with JCT-3V) [4 DoCRs total – one handled by Rajan Joshi and 3 by Teruhiko Suzuki]
• Editorships were recorded for each output document
• Potential press release content (SCC, SHVC VT, new edition of reference software & conformance)
• Request documents to be issued by WG 11 [Reference software for SCC, handled by Teruhiko Suzuki]
• Resolutions for WG 11 parent body
• Draft suggested practices for HDR
• Liaison statements issued by WG 11 parent body
• Immediate deliverables
• Plenary summary
• HDR verification test plan
• SCC software (PDAM) & conformance (not verification test plan – that will be done later)
 Plans
• AHGs
• CEs
• Reflectors (jct-vc) & sites (test sequence location to be listed in CTC document) to be used in future work
• Meeting dates of the next meeting (Thu - Wed)
• Document contribution deadline (Mon 10 days prior to the next meeting)
There were no requests to present any "TBP" contributions in the closing plenary.
2 AHG reports (12+2=14)
The activities of ad hoc groups (AHGs) that had been established at the prior meeting are
discussed in this section.
(Consideration of these reports was chaired by GJS & JRO on Friday 19th, 1145–1400 and 1530–1800, except as noted.)
JCTVC-W0001 JCT-VC AHG report: Project management (AHG1) [G. J. Sullivan, J.-R.
Ohm]
This document reports on the work of the JCT-VC ad hoc group on Project Management,
including an overall status report on the project and the progress made during the interim period
since the preceding meeting.
In the interim period since the 22nd JCT-VC meeting, the following (8) documents had been
produced:

The HEVC test model (HM) 16 improved encoder description (including RExt
modifications) update 4;

For the format range extensions (RExt), the RExt reference software draft 4;

For the scalable extensions (SHVC), the SHVC reference software draft 3, conformance
testing draft 4, SHVC test model 11 (SHM 11), and verification test plan for SHVC;

For the HEVC screen content coding (SCC) extensions, the HEVC screen content coding
test model 6 (SCM 6) and SCC draft text 5.
Advancing the work on further development of conformance and reference software for HEVC
and its extensions was also a significant goal.
The work of the JCT-VC overall had proceeded well and actively in the interim period with a
considerable number of input documents to the current meeting. Active discussion had been
carried out on the group email reflector (which had 1649 subscribers as of 2016-02-18), and the
output documents from the preceding meeting had been produced.
Except as noted below, output documents from the preceding meeting had been made available
at the "Phenix" site (http://phenix.it-sudparis.eu/jct/) or the ITU-based JCT-VC site
(http://wftp3.itu.int/av-arch/jctvc-site/2016_02_W_SanDiego/), particularly including the
following:

The meeting report (JCTVC-V1000) [Posted 2016-02-19]

The HM 16 improved encoder description update 4 (JCTVC-V1002) [Posted 2016-02-12]

Verification test plan for scalable HEVC profiles (JCTVC-V1004) [Posted 2016-01-11]

HEVC screen content coding draft 5 (JCTVC-V1005) [Posted 2016-01-31]

SHVC Test Model 11 (JCTVC-V1007) [Posted 2016-02-15]

SHVC Conformance Testing Draft 4 (JCTVC-V1008) [Posted 2016-01-16]

HEVC Reference Software for Format Range Extensions Profiles Draft 4 (JCTVC-V1011)
[First posted 2015-12-02, last updated 2015-12-04]

Reference Software for HEVC scalable extensions Draft 3 (JCTVC-V1013) [Posted 2016-01-21]

Screen Content Coding Test Model 6 Encoder Description (JCTVC-V1014) [Posted 2016-02-10]
The twelve ad hoc groups had made progress, and various reports from those activities had been
submitted.
The different software modules (HM16.7/8, SHM11.0 and SCM6.0) had been prepared and
released with appropriate updates approximately as scheduled (HM16.8 to be released during the
23rd meeting, SHM11.0 and SCM6.0 are based on HM16.7).
Since the approval of software copyright header language at the March 2011 parent-body
meetings, that topic seems to be resolved.
Released versions of the software are available on the SVN server at the following URL:
https://hevc.hhi.fraunhofer.de/svn/svn_HEVCSoftware/tags/version_number,
where version_number corresponds to one of the versions described below – e.g., HM-16.7.
Intermediate code submissions can be found on a variety of branches available at:
https://hevc.hhi.fraunhofer.de/svn/svn_HEVCSoftware/branches/branch_name,
where branch_name corresponds to a branch (e.g., HM-16.7-dev).
Various problem reports relating to asserted bugs in the software, draft specification text, and
reference encoder description had been submitted to an informal "bug tracking" system
(https://hevc.hhi.fraunhofer.de/trac/hevc). That system is not intended as a replacement of our
ordinary contribution submission process. However, the bug tracking system was considered to
have been helpful to the software coordinators and text editors. The bug tracker reports had been
automatically forwarded to the group email reflector, where the issues were discussed – and this
is reported to have been helpful. It was noted that contributions had generally been submitted
that were relevant to resolving the more difficult cases that might require further review.
The ftp site at ITU-T is used to exchange draft conformance testing bitstreams. The ftp site for
downloading bitstreams is http://wftp3.itu.int/av-arch/jctvc-site/bitstream_exchange/.
A spreadsheet to summarize the status of bitstream exchange, conformance bitstream generation
is available in the same directory. It includes the list of bitstreams, codec features and settings,
and status of verification.
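The verification status tracked in that spreadsheet is generally established by decoding each candidate bitstream and comparing a checksum of the decoded output against the value supplied by the bitstream provider. A minimal sketch of such a check is shown below (Python; the file name and the use of an MD5 digest over the raw decoded YUV output are illustrative assumptions, not a procedure defined in this report):

    import hashlib

    def md5_of_file(path, chunk_size=1 << 20):
        """Compute the MD5 digest of a (possibly large) file in chunks."""
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical example: compare decoded output against the provider's MD5SUM value.
    decoded_yuv = "SAODBLK_A_MainConcept_4.yuv"        # produced by a reference decoder (assumed name)
    expected_md5 = "0123456789abcdef0123456789abcdef"  # placeholder value from the provider
    status = "OK" if md5_of_file(decoded_yuv) == expected_md5 else "MISMATCH"
    print(status)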
Approximately 100 input contributions to the current meeting had been registered. The bulk of
these related to coding of HDR video using HEVC, a work topic newly assigned to JCT-VC by
the parent bodies. A number of late-registered and late-uploaded contributions were noted, even
though most were cross-check documents.
A preliminary basis for the document subject allocation and meeting notes for the 23rd meeting
had been circulated to the participants by being announced in email, and was publicly available
on the ITU-hosted ftp site.
JCTVC-W0002 JCT-VC AHG report: HEVC test model editing and errata reporting
(AHG2) [B. Bross, C. Rosewarne, M. Naccari, J.-R. Ohm, K. Sharman,
G. Sullivan, Y.-K. Wang]
This document reports the work of the JCT-VC ad hoc group on HEVC test model editing and
errata reporting (AHG2) between the 22nd meeting in Geneva, CH (October 2015) and the 23rd
meeting in San Diego, USA (February 2016).
An issue tracker (https://hevc.hhi.fraunhofer.de/trac/hevc) was used in order to facilitate the
reporting of errata with the HEVC documents.
WD tickets in tracker for text specification

#1439 (IRAP constraint for Main 4:4:4 Intra profile)

#1440 (Wavefront+tiles) are noted. Also noted are WD tickets #1435-1437 (editorial in
nature).
Editor action item: These should be addressed in the SCC output.
The 'High Efficiency Video Coding (HEVC) Test Model 16 (HM 16) Update 4 of Encoder Description' was published as JCTVC-V1002. This document represented a refinement of the
previous HM16 Update 2 of the Encoder Description document (JCTVC-U1002). The resultant
document provides a source of general tutorial information on HEVC Edition 1 and Range
Extensions, together with an encoder-side description of the HM-16 software.
The recommendations of the HEVC test model editing and errata reporting AHG are for JCT-VC
to:

Encourage the use of the issue tracker to report issues with the text of both the HEVC
specification and the Encoder Description.

Review the list of bug fixes collected for HEVC Edition 2, and include all confirmed bug
fixes, including the outcome of the above items, if any, into a JCT-VC output document
for the purpose of HEVC Edition 2 defect reporting.
JCTVC-W0003 JCT-VC AHG report: HEVC HM software development and software technical
evaluation (AHG3) [K. Suehring, K. Sharman]
This report summarizes the activities of the AhG on HEVC HM software development and
software technical evaluation that had taken place between the 22nd and 23rd JCT-VC meetings.
Activities focused on integration of software adoptions and software maintenance, i.e. code
tidying and fixing bugs.
The one proposal that was adopted at the last meeting regarding modifications to rate control
(JCTVC-V0078), and one proposal that was adopted at the 21st meeting on the Alternative
Transfer Characteristics SEI (JCTVC-U0033) were added to the HM development code base.
In addition, some minor bug fixes and cleanups were addressed. The distribution of the software
was made available through the SVN server set up at HHI, as announced on the JCT-VC email
reflector, and http://hevc.info has been updated.
There were a number of reported software bugs that should be fixed.
HM16.8 was due to be released during the meeting. It includes:

JCTVC-V0078: modification of the rate control lower bounds. Rate control is not used in
common test conditions.

JCTVC-U0033: add support for Alternative Transfer Characteristics SEI message (with
syntax element naming taken from the current SCC draft text, V1005).

Bug reports were closed, although only one of these related to a bug within the code
(relating to the Mastering Display Colour Volume SEI).
None of the changes affect the coding results of the common test conditions set out in JCTVC-L1100 or JCTVC-P1006, and therefore these have not been re-published.
The following are persistent bug reports where study is encouraged:

High level picture types: IRAP, RASL, RADL, STSA (Tickets #1096, #1101, #1333,
#1334, #1346)

Rate-control and QP selection – numerous problems with multiple slices (Tickets #1314,
#1338, #1339)

Field-coding (Tickets #1145, #1153)

Decoder picture buffer (Tickets #1277, #1286, #1287, #1304)

NoOutputOfPriorPicture processing (Tickets #1335, #1336, #1393)
 Additional decoder checks (Tickets #1367, #1383)
However, a patch has been generated that adds some conformance checks. It is being considered
for potential inclusion in a future release.
As described to the community at the last three JCT-VC meetings, alterations to remove the
unused software hierarchy in the entropy coding sections of the code, and to remove terms such
as CAVLC, are being considered. However, this will now need to also consider the impact on the
JEM branch.
The possibility of merging branches has been considered, in particular merging of the SHM
(Scalable coding HEVC Model) into HM, although further discussions are required. Currently,
within the JCT-VC and JCT-3V, there are the HM, the SCM (for SCC), the SHM (for scalable)
and the HTM (3D and multi-view) software models. There is also the JVET's JEM branch for
exploration of future technologies. The biggest problem to merging these branches is the clash in
the overlapping functionality in SHM and HTM – for example, the handling of multiple layers is
not consistent between the two. By careful consideration of these two codebases, it would be
possible to incrementally modify the HM so that the SHM and HTM branches gradually aligned,
although it is also very important to minimize the impact on the core HM code base as this also
impacts the other branches. The work appears to be significant and would require close
coordination between the software coordinators.
Recommendations:

Continue to develop reference software based on HM version 16.7/8 and improve its
quality.

Test reference software more extensively outside of common test conditions

Add more conformance checks to the decoder to more easily identify non-conforming
bit-streams, especially for profile and level constraints.

Encourage people who are implementing HEVC based products to report all (potential)
bugs that they are finding in that process.

Encourage people to submit bit-streams that trigger bugs in the HM. Such bit-streams
may also be useful for the conformance specification.
 Continue to investigate the merging of branches with the other software coordinators.
In discussion, it was suggested that harmonization of HM with SCM is probably the highest
priority, since SCC features should be included (or at least testable) in combination with future
video work.
JCTVC-W0004 JCT-VC AHG report: HEVC conformance test development (AHG4) [T. Suzuki,
J. Boyce, R. Joshi, K. Kazui, A. Ramasubramonian, Y. Ye]
This was not reviewed in detail at the meeting, as its review was deferred due to AHG chair late arrival, and it was later suggested that detailed review seemed undesirable due to other priorities.
The ftp site at ITU-T is used to exchange bitstreams. The ftp site for downloading bitstreams is
http://wftp3.itu.int/av-arch/jctvc-site/bitstream_exchange/
A spreadsheet stored there is used to summarize the status of bitstream exchange, and
conformance bitstream data is available at this directory. It includes the list of bitstreams, codec
features and settings, and status of verification.
The guideline to generate the conformance bitstreams is summarized in JCTVC-O1010.
JCTVC-S1004 (an output document from Strasbourg meeting of October 2014) summarized the
defects of conformance bitstreams at that time. Since the Strasbourg meeting, all known
problems were resolved. The revised bitstreams were uploaded at the following site, separating
out the bitstreams under ballot.
http://wftp3.itu.int/av-arch/jctvc-site/bitstream_exchange/under_test/
There was an offer to provide one more stressful bitstream for HEVC v.1 conformance. Main
Concept (DivX) provided two stress-testing bitstreams (for in-loop filtering) and those are
included in DAM2 of HEVC conformance. File names were corrected.

SAODBLK_A_MainConcept_4
 SAODBLK_B_MainConcept_4
They also proposed another bitstream to check the corner case of the combination of deblocking
and SAO.
 VPSSPSPPS_A_MainConcept_1
It was suggested to discuss whether to add these new bitstreams for corner cases into the official
conformance test set.
Those bitstreams are available at
http://wftp3.itu.int/av-arch/jctvc-site/bitstream_exchange/under_test_new/
A table of bitstreams that were originally planned to be generated was provided in the report,
with yellow highlighting for bitstreams that were not generated yet.
Problems in the following bitstreams were reported. The revised bitstreams are available at the
ftp site.
http://wftp3.itu.int/av-arch/jctvc-site/bitstream_exchange/under_test_new/
TransformSkip constraint: TransformSkip size setting was over the limit. In the RExt spec, the
restriction was not specified, but the constraint had already been defined in JCTVC-V1005.
 PERSIST_RPARAM_A_RExt_Sony_3
TRAIL NAL: TRAIL NAL is not allowed to use in all-intra profiles, but was used in the
following bitstreams.
 GENERAL_8b_444_RExt_Sony_2.zip
The constraint on TRAIL NAL for Main 444 Intra was not defined in the RExt spec. It was
suggested that this is a bug in the spec. (See Ticket #1439)

GENERAL_10b_444_RExt_Sony_2.zip

GENERAL_12b_444_RExt_Sony_2.zip

GENERAL_16b_444_RExt_Sony_2.zip
 GENERAL_16b_444_highThroughput_RExt_Sony_2.zip
4:4:4 tools in 4:2:2 profile: Flags for 4:4:4 coding tools were turned on in a 4:2:2 profile.
 ADJUST_IPRED_ANGLE_A_RExt_Mitsubishi_2.zip
Tile size: Tile size was under the limit.
 WAVETILES_RExt_Sony_2.zip
The SHVC conformance document was available in JCTVC-V1008, which includes a supplemental notes attachment with instructions for generation of bitstreams. Additional editorial improvement of this document is expected before finalization.
New bitstreams were provided since the previous release. These include two previously
identified bitstreams that were made available: unaligned POC and 8-layer bitstream.
Additionally, six new bitstreams corresponding to the new scalable range extensions profiles
were provided.
Bitstreams had since been provided for all identified categories. However, an update in the SHM
software had exposed minor problems, requiring some of the earliest generated bitstreams to be
replaced. The bitstream contributors of the affected bitstreams had been notified, and several of
those bitstreams had already been replaced. Several bitstreams were still in need of replacement,
and were indicated in a bitstream status list in the report.
The AHG recommended to finalize RExt and SHVC conformance testing and to start to discuss
SCC conformance testing.
JCTVC-W0005 JCT-VC AHG report: SHVC verification testing (AHG5) [V. Baroncini, Y.-K.
Wang, Y. Ye (co-chairs)]
The main activities of this AHG included the following:

Finalized all rate points, encoded all test sequences using SHM10.0 reference software,
and delivered all bitstreams, sequences, and MD5SUM values for subjective evaluations.

Conducted verification test following the finalized SHVC verification test plan.
 Produced a preliminary report of the test for review and finalization in JCTVC-W0095.
The AHG recommended to review the preliminary verification test report JCTVC-W0095.
The test coordinator (V. Baroncini) remarked that the refinements of the selection of test content
proved beneficial, so the test was able to be effective and provide a good discrimination covering
a wide range of coded video quality.
JCTVC-W0006 JCT-VC AHG report: SCC coding performance analysis (AHG6) [H. Yu,
R. Cohen, A. Duenas, K. Rapaka, J. Xu, X. Xu (AHG chairs)]
This report summarized the activities of the JCT-VC ad hoc group on SCC coding performance
analysis (AHG6) between the JCT-VC 22nd meeting in Geneva, Switzerland, and the 23rd
meeting in San Diego, USA.
A kick-off message for AHG 6 was sent out on Dec. 1, 2015.
As decided in the last meeting, the test conditions described in JCTVC-U1015 remained valid
during this meeting cycle. To facilitate new simulations that would be done with the new
reference software SCM-6.0, the new test-results reporting templates with SCM-6.0 anchor data
were uploaded in JCTVC-U1015-v4 on Dec. 2, 2015. These templates were also distributed via
the AHG6 kick-off email.
One relevant contribution was noted: JCTVC-W0104.
It was recommended to continue to evaluate the coding performance of the draft SCC coding
features in comparison with the existing HEVC tools in the Main profile and range extensions.
JCTVC-W0007 JCT-VC AHG report: SCC extensions text editing (AHG7) [R. Joshi, J. Xu (AHG
co-chairs), Y. Ye, S. Liu, G. Sullivan, R. Cohen (AHG vice-chairs)]
This document reported on the work of the JCT-VC ad hoc group on SCC extensions text editing
(AHG7) between the 22nd JCT-VC meeting in Geneva, Switzerland (October 2015) and the 23rd JCT-VC meeting in San Diego, USA (February 2016).
The fifth specification text draft (JCTVC-V1005) for the High Efficiency Video Coding Screen
Content Coding (HEVC SCC) extensions was produced by the editing ad hoc group as an output
document following the decisions taken at the 22nd JCT-VC meeting in Geneva, Switzerland
(October 2015).
The text of JCTVC-V1005 was submitted to ISO/IEC JTC1/SC29 as an output document WG11
N15776 Study Text of ISO/IEC DIS 23008-2:201X 3rd Edition
The following is a list of changes with respect to JCTVC-U1005-v1:

Added constraints on the range of palette escape samples to disallow nonsense values
(JCTVC-V0041)

Allowed zero size for palette initialization in PPS, so palette initialization can be
explicitly disabled at the PPS level (JCTVC-V0042)

Integrated restriction for maximum palette predictor size (JCTVC-V0043)

Modified the formula for computing PaletteMaxRun including consideration of
copy_above_indices_for_final_run_flag (JCTVC-V0065)

Removed special treatments of IBC as different from inter (w.r.t. usage for predicting
intra regions and TMVP disabling) (JCTVC-V0066)

Moved the SPS level IBC mode check aspect to the PPS level; MV & referencing picture
based memory bandwidth constraint rather than AMVR based; if the decoder detects the
prohibited case, the decoding process will convert so that list 0 uniprediction is used (and
the converted motion data is stored) (JCTVC-V0048)

When the merge candidate references the current picture, round it to an integer value
(JCTVC-V0049)

When conversion to uniprediction is performed due to 8x8 biprediction, only do that
conversion if TwoVersionsOfCurrDecPicFlag is equal to 1 (JCTVC-V0056)

About sps_max_num_reorder_pics and sps_max_dec_pic_buffering_minus1 when IBC is
used (JCTVC-V0050)

About sps_max_dec_pic_buffering_minus1 when IBC is used (JCTVC-V0057)

Editorial improvements in palette run coding (JCTVC-V0060)

Decoding process for intra_boundary_filtering_disabled_flag (previously missing)

(HEVC_ERRATA): Added a constraint that the reference layer active SPS (specified by
sps_scaling_list_ref_layer_id or pps_scaling_list_ref_layer_id) shall have
scaling_list_enabled_flag = 1 (JCTVC-V0011)

Generalized Constant and Non-Constant Luminance Code Points (JCTVC-V0035)

Clarification of colour description semantics (especially for transfer_characteristics)
(JCTVC-V0036)

New High Throughput Profiles for HEVC (JCTVC-V0039)

HEVC corrigendum: On parsing of bitstream partition nesting SEI message (JCTVC-V0062)

Fixes to colour remapping information SEI message (JCTVC-V0064)

New HEVC scalable format range extension profiles (JCTVC-V0098)

(HEVC_ERRATA): Replaced undefined variable "Log2MaxTrafoSize" with
"MaxTbLog2SizeY" (Ticket #1363)
The screen content coding test model 6 (SCM 6) (document JCTVC-V1014) was released on 10 February 2016. Its main changes were modification of the restriction on the use of 8×8 bi-prediction with IBC, conversion of 8×8 bi-prediction to uni-prediction when the restriction is in effect, removing special treatments of IBC as different from inter when using constrained intra prediction, rounding merge candidates to integers when they reference the current picture, and allowing zero size for palette predictor initialization in the PPS.
JCTVC-V0096 proposes editorial improvements to address the feedback and comments related
to the SCC draft text 5. It also summarizes known open issues.
The recommendations of the HEVC SCC extension draft text AHG were to:

Approve the documents JCTVC-V1005 and JCTVC-V1014 as JCT-VC outputs (agreed,
although it was noted that some issues remained, such as wavefront-tile combination)

Address the comments and feedback on SCC extensions text specification as appropriate

Compare the HEVC SCC extensions document with the HEVC SCC extensions software
and resolve any discrepancies that may exist, in collaboration with the SCC extension
software development (AHG8)
JCTVC-W0008 JCT-VC AHG report: SCC extensions software development (AHG8)
[K. Rapaka, B. Li (AHG co-chairs), R. Cohen, T.-D. Chuang, X. Xiu, M. Xu (AHG vice-chairs)]
This report summarizes the activities of Ad Hoc Group 8 on screen content extensions software
(SCM) development that have taken place between the JCT-VC 22nd meeting in Geneva,
Switzerland, and the 23rd meeting in San Diego, USA.
Multiple versions of the HM SCM software were produced and announced on the JCT-VC email
reflector. The integration details and performance summary of these revisions are provided in the
next subsections. The performance results of software revisions were observed to be consistent
with the adopted techniques.
HM-16.7_SCM-6.0 was announced on the email reflector on November 23rd 2015. The software
was tagged as https://hevc.hhi.fraunhofer.de/svn/svn_HEVCSoftware/tags/HM-16.7+SCM-6.0/ .
HM-16.7_SCM-6.0 incorporates the following adoptions/bug fixes:

JCTVC-V0034: Palette encoder improvement for 4:2:0.

JCTVC-V0040: Enable by default the method in U0095 on fast encoder for ACT and
intra.

JCTVC-V0041: Constrain the range of escape values.

JCTVC-V0042: Allow zero size palette in PPS.

JCTVC-V0043: Restriction for maximum palette predictor size.

JCTVC-V0048: Relax 8x8 bi-pred restriction based on mv's and temporal referencing.

JCTVC-V0049: Round merge MVs when ref picture is curr pic.

JCTVC-V0056: Relax 8x8 bi-pred restriction based on the value of
TwoVersionsOfCurrDecPicFlag.

JCTVC-V0065: Modified formula for computing PaletteMaxRun including consideration
of copy_above_indices_for_final_run_flag.

JCTVC-V0066: Remove special treatments of IBC as different from inter in the case of
CIP.
 Bug fix for delta QP for palette mode.
JCTVC-V0057 / JCTVC-U0181 (storage of the unfiltered decoded picture in the DPB) are disabled for now in the SCM and are planned to be enabled in future releases after the software issues involved are fixed. (These do not affect the CTC.)
The performance of HM-16.7_SCM-6.0 compared to HM-16.6_SCM-5.2 was described according to the common test conditions in JCTVC-U1015. For the lossy 4:4:4 configuration, it is reported that this version provides BD-rate reductions of 0.0%, 0.2% and 0.3% for the RGB 1080p & 720p text and graphics category in the AI/RA/LB configurations, respectively, and BD-rate reductions of 0.0%, 0.1% and 0.2% for the YUV 1080p & 720p text and graphics category in the AI/RA/LB configurations, respectively.
For the lossy 4:2:0 configuration, it is reported that this version provides BD-rate reduction of
3.2%, 2.5% and 1.6% for YUV 1080p & 720p text and graphics category in AI/RA/LB
configurations, respectively.
The improved 4:2:0 performance was suggested to be due to non-normative palette mode
encoder improvement specifically for 4:2:0 (from document JCTVC-V0034, with better handling
of the fact that some chroma is coded but discarded in the 4:2:0 case).
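The BD-rate figures quoted above follow the usual Bjøntegaard delta-rate methodology: log-rate is fitted as a polynomial of PSNR for the anchor and the test encodings, and the rate difference is averaged over the overlapping PSNR range. A minimal sketch of that calculation is given below (Python/NumPy; the rate/PSNR points are placeholders, not data from this report):

    import numpy as np

    def bd_rate(anchor_rates, anchor_psnrs, test_rates, test_psnrs):
        """Bjontegaard delta rate in percent (negative = bit rate saving of the test)."""
        p_a = np.polyfit(anchor_psnrs, np.log(anchor_rates), 3)
        p_t = np.polyfit(test_psnrs, np.log(test_rates), 3)
        lo = max(min(anchor_psnrs), min(test_psnrs))
        hi = min(max(anchor_psnrs), max(test_psnrs))
        # Integrate both log-rate fits over the overlapping PSNR interval.
        int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
        int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
        return (np.exp((int_t - int_a) / (hi - lo)) - 1.0) * 100.0

    # Placeholder rate (kbit/s) and PSNR (dB) points for four QPs of anchor and test.
    print(bd_rate([1000, 1800, 3200, 6000], [34.0, 36.1, 38.0, 40.2],
                  [950, 1700, 3050, 5800], [34.2, 36.3, 38.2, 40.3]))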
HM-16.6_SCM-5.3 and HM-16.6_SCM-5.4 were tagged on the HHI server on November 2, 2015 and can be downloaded at https://hevc.hhi.fraunhofer.de/svn/svn_HEVCSoftware/tags/
The changes over HM-16.6_SCM-5.2 are:

Following tickets were fixed: #1411, #1417, #1418, #1419, #1420, #1421, #1422.

Macro Removals related to SCM 5.2

Merge to HM 16.7
 Misc. Cleanups/ fixes for memory leaks
It was reported that there was no noticeable change in performance under the common test configuration due to the above integrations.
The JCT-VC issue tracker at https://hevc.hhi.fraunhofer.de/trac/hevc/ has been updated to allow
bug reports to be entered for SCM, currently under milestone HM+SCC-7.0, version SCC-6.0
(HM16.7).
The following tickets were closed during the meeting cycle: #1411, #1417, #1418, #1419, #1420,
#1421, and #1422. As of the time of the report, there were no open tickets.
Recommendations

Continue to develop reference software based on HM16.7_SCM6.0 and improve its
quality.

Remove macros introduced in previous versions before starting integration towards
SCM-6.x/SCM-7.0 such as to make the software more readable.

Continue merging with later HM versions.
JCTVC-W0009 JCT-VC AHG report: Complexity of SCC extensions (AHG9) [A. Duenas,
M. Budagavi, R. Joshi, S.-H. Kim, P. Lai, W. Wang, W. Xiu (co-chairs)]
This report summarized the activities of the JCT-VC ad hoc group on Complexity of SCC extensions (AHG9) between the JCT-VC 22nd meeting in Geneva, Switzerland, and the 23rd meeting in San Diego, USA.
No coordinated AhG activity took place on the JCT-VC reflector between the 22nd JCT-VC meeting in Geneva, Switzerland (October 2015) and the 23rd JCT-VC meeting in San Diego, USA (February 2016).
Related contributions:

Encoder-only contributions
o JCTVC-W0042: SCC encoder improvement.
o JCTVC-W0075: Palette encoder improvements for the 4:2:0 chroma format and
lossless.

Aspects that affect normative requirements
o JCTVC-W0076: Comments on alignment of SCC text with multi-view and
scalable
o JCTVC-W0077: Bug fix for DPB operations when current picture is a reference
picture.
o JCTVC-W0129: SCC Level Limits Based on Chroma Format.
The AhG recommended to review the contributions related to this topic.
JCTVC-W0010 JCT-VC AHG report: Test sequence material (AHG10) [T. Suzuki, V. Baroncini,
R. Cohen, T. K. Tan, S. Wenger, H. Yu]
This was not reviewed in detail at the meeting, as its review was deferred due to AHG chair late arrival, and it was later suggested that detailed review seemed undesirable due to other priorities.
The following related contribution was submitted at this meeting.

JCTVC-W0087 "Description of Color Graded SDR content for HDR/WCG Test
Sequences", P. J. Warren, S. M. Ruggieri, W. Husak, T. Lu, P. Yin, F. Pu (Dolby)
The AHG report contained tabulated records of the available test sequences for various types of
testing.
The AHG recommended

To add HDR test sequences to the JCTVC test sequence list.

To continue to create the list of test sequences available for HEVC development
including licensing statement.

To continue to collect test materials.
JCTVC-W0011 JCT-VC AHG report: SHVC test model editing (AHG11) [G. Barroux, J. Boyce,
J. Chen, M. Hannuksela, G. J. Sullivan, Y.-K. Wang, Y. Ye]
This document reports the work of the JCT-VC ad hoc group on SHVC text editing (AHG11)
between the 22nd JCT-VC meeting in Geneva, Switzerland (15–21 October 2015) and the 23rd JCT-VC meeting in San Diego, USA (19–26 February 2016).
During this period, the editorial team worked on the Scalable HEVC (SHVC) Test Model text to
provide the corresponding text for the newly added profiles. Six related profiles were defined in
the draft SHVC specification:

Scalable Main (previously specified)

Scalable Main10 (previously specified)

Scalable Monochrome (newly added, planned to be final at this meeting)

Scalable Monochrome12 (newly added, planned to be final at this meeting)

Scalable Monochrome16 (newly added, planned to be final at this meeting)

Scalable Main 4:4:4 (newly added, planned to be final at this meeting)
JCTVC-W0012 JCT-VC AHG report: SHVC software development (AHG12) [V. Seregin,
H. Yong, G. Barroux]
This report summarizes activities of the AHG12 on SHVC software development between the 22nd and 23rd JCT-VC meetings.
The latest software version was SHM-11.0.
SHM software can be downloaded at
https://hevc.hhi.fraunhofer.de/svn/svn_SHVCSoftware/tags/
The software issues can be reported using bug tracker https://hevc.hhi.fraunhofer.de/trac/shvc
The latest version was SHM-11.0 and it was released with JCTVC-V1013 for DAM ballot in
ISO/IEC.
SHM-11.0 is based on HM-16.7 and it includes newly adopted profiles for scalable monochrome
and 4:4:4 chroma colour format coding. The following software improvements had been
additionally made:

Fixes for tickets ## 90, 93, 94, 96, 97
 Reduce amount of software changes relative to HM for potential software merging
After adoption of the new profiles, SHM software supports higher than 10 bit-depth input. For
such high bit-depth, macro RExt__HIGH_BIT_DEPTH_SUPPORT should be enabled for better
performance.
Anchor data and templates had been generated based on common test conditions JCTVC-Q1009
and attached to this report.
Development plan and recommendations

Continue to develop reference software based on SHM-11.0 and improve its quality.

Fix open tickets.
JCTVC-W0013 VCEG AHG report: HDR Video Coding (VCEG AHG5) [J. Boyce, E. Alshina,
J. Samuelsson (AHG chairs)]
This document reports on the work of the VCEG ad hoc group on HDR Video Coding. The first
version of the document contained the mandates, a short introduction and the notes from the
teleconference meeting. In the second version of the document, text was added related to email
discussions, contributions and recommendations.
During 2015 there were several input contributions to VCEG and JCT-VC reporting on, discussing and encouraging the experts to investigate coding of HDR video in a configuration that is sometimes called HDR10 (meaning that the representation format is 10 bit, 4:2:0, non-constant luminance, Y′CbCr, SMPTE ST 2084 EOTF, ITU-R BT.2020 colour primaries and, when compressed, is using the Main10 Profile of HEVC). Examples of such contributions are COM-16 C.853, JCTVC-U0045, JCTVC-V0052, JCTVC-V0053 and COM-16 C.1031. In the discussions related to COM-16 C.1031 it was agreed that VCEG would start working towards documenting best practice techniques for using the existing HEVC standard for compression of HDR video.
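For reference, the SMPTE ST 2084 (PQ) transfer function mentioned in the HDR10 description maps absolute luminance in the range 0–10000 cd/m² to a perceptually uniform non-linear signal. A minimal sketch of the encoding direction (inverse EOTF) is given below (Python; the constants are the published ST 2084 values, reproduced here only for illustration):

    # SMPTE ST 2084 (PQ) inverse EOTF: linear luminance (cd/m^2) -> non-linear signal in [0, 1].
    M1 = 2610.0 / 16384.0
    M2 = 2523.0 / 4096.0 * 128.0
    C1 = 3424.0 / 4096.0
    C2 = 2413.0 / 4096.0 * 32.0
    C3 = 2392.0 / 4096.0 * 32.0

    def pq_inverse_eotf(luminance_cd_m2):
        """Encode absolute luminance (0..10000 cd/m^2) with the PQ curve."""
        y = max(0.0, min(1.0, luminance_cd_m2 / 10000.0))  # normalize to [0, 1]
        y_m1 = y ** M1
        return ((C1 + C2 * y_m1) / (1.0 + C3 * y_m1)) ** M2

    # Example: 100 cd/m^2 (a typical SDR reference white) maps to roughly 0.51.
    print(pq_inverse_eotf(100.0))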
A kick-off message was sent on December 9, 2015, listing the mandates of ad hoc group 5 and encouraging emails related to the ad hoc group to include both the VCEG reflector, vceg-experts@yahoogroups.com, and the JCT-VC reflector, jct-vc@lists.rwth-aachen.de.
In relation to software development, it was reported on December 9, 2015 that several
improvements and extensions to HDRTools had been included in the gitlab repository (see also
JCTVC-W0046).
A suggestion for a starting point for documenting best practice techniques was circulated on
December 10, 2015. The discussion on the email reflectors was primarily related to mandate 4,
the study of best practices techniques for pre-processing, encoding, post-processing, and
metadata for handling HDR video content using the existing HEVC standard, and the collection
of such information toward documenting such practices in a technical paper or supplement. In
total approximately 50 email messages were exchanged.
A teleconference was held on February 8, 22.00-23.59 CET with attendance of approximately 30
experts.
The purpose of the teleconference was to continue the work on the request to "Study best practices
techniques for pre-processing, encoding, post-processing, and metadata for handling HDR video
content using the existing HEVC standard, and collect such information toward documenting
such practices in a technical paper or supplement" and to try to resolve some of the open issues
that had been discussed via email, in preparation for the upcoming JCT-VC meeting.
There was a question about whether other operating points besides "HDR10" should be included
in this work. In JCT-VC review, it was agreed to not necessarily limit the scope in that manner.
The agreement that VCEG's and MPEG's work on HDR should be performed jointly in JCT-VC
was noted.
Approximately 80 HDR related input contributions had been submitted to JCT-VC (including
CE reports and crosschecks) as of the time of preparation of the report.
A summary of issues discussed in the teleconference (see the report for details about the
discussion in the teleconference about each of these) is as follows:

Should we aim for one or multiple documents? The AHG conference call agreed to start
focusing on one document but with the different parts as separated as possible.

Is the scope clear and is the concept of a "generalized hypothetical reference viewing
environment" useful? The call agreed to use the generalized hypothetical reference
viewing environment for the time being and to further investigate the effect of the
reference viewing environment.
o In JCT-VC review, it was commented that we need better clarity of what such an
environment is, especially if we recommend certain practices that target the
viewing experience in that environment, and should make sure that a benefit is
provided in that environment with those practices.
o In JCT-VC review, it was noted that the new ITU-R BT.[HDR] document
describes a reference viewing environment for critical viewing

Should SEI messages be included? (Which ones? Only signalling, or also how to apply
them?) The call agreed to put SEI messages (the mastering display colour volume SEI
message and the content light level SEI message) and VUI parameters in an appendix.
o In JCT-VC review, it was suggested not to limit the scope to just two such SEI
messages, and just the ones most supported elsewhere (e.g. by HDMI), and to
survey what other SEI messages might be appropriate for HDR.

At what level of detail should encoding and decoding processes be described? (Only
HEVC, or also AVC?) The call agreed to include recommendations for encoding with
HEVC and also with AVC (but with secondary priority).

Should pre- and/or post-filtering (such as denoising etc) be included/considered? The call
agreed that denoising filters, smoothing filters etc. should not be considered to be in
scope, but that colour transform and chroma resampling etc. should be considered to be in
scope. The type of filters that are in scope and the type of filters that are not in scope
should be explicitly defined in the document.
o In JCT-VC review, it was suggested to at least mention such processing and
describe why further information has not been provided, as a scope clarification,
and perhaps to provide cautionary remarks if some practice that has been used for
SDR needs special consideration because of HDR issues.

How to handle the multitude of different processing platforms and target applications?
The call agreed to consider creating categories and evaluate if the process should be
different for the different categories. For those processes where alternatives are motivated
these should be included and it should be described under what circumstances it might be
beneficial to use them.

Should there be insights and motivations for the processes? The call agreed to include
motivation and insights related to the described technologies.
o In JCT-VC review, it was commented that the proposed draft seems a bit too
much like a specification of a single prescribed thing to do rather than providing
general informative and educational comments, and this should be addressed.

What experiments would we like to perform?
o In JCT-VC review, it was said that this remained an open issue and various things
should be considered; tests should be used to verify that the suggestions in the
document are beneficial.

How to proceed with the document? The call agreed to include the circulated document
as an attachment to the AHG report.
o In JCT-VC review, it was agreed that additional information could and should be
included in the TR beyond what has been drafted.
Attached to this AHG report was the straw-man suggestion for the technical paper that was made
available on the VCEG email reflector and JCT-VC email reflector on December 10, 2015 with
modifications as suggested on December 17, 2015.
The ad hoc group recommended to create a break-out group to continue the work and the discussion related to developing the TR. Rather than using a break-out group, the work at the JCT-VC meeting was done in a "single track" fashion.
Topics identified for this meeting:

Review the draft of the TR

Consider relevant input contributions
JCTVC-W0014 MPEG AHG report: HDR and WCG Video Coding [C. Fogg, E. Francois,
W. Husak, A. Luthra]
This document provides the report of the MPEG AHG on HDR and WCG Video Coding.
An email reflector xyz@lists.uni-klu.ac.at was set up and hosted by Alpen-Adria-Universität Klagenfurt, and industry experts were invited to join via http://lists.uni-klu.ac.at/mailman/listinfo/xyz and participate in this activity. As of February 3, 2016 there were 459 email addresses on this reflector's list. Several email messages were exchanged on various
topics related to the mandates. Eight separate sub-reflectors were also set up associated with
eight categories of the Core Experiments identified in Geneva (N15622).
It was agreed that we should clarify which reflectors will be used by JCT-VC in future
HDR/WCG work, and later clarified that the main JCT-VC reflector should be used.
The AHG had one face-to-face meeting in Vancouver from January 12–14, 2016, with attendance of about 30 people (according to the registration record – there may have been somewhat more who were not recorded). As this activity of HDR/WCG video coding was planned to be made a part of JCT-VC starting from February 19, 2016, the 2nd face-to-face meeting of the AHG, tentatively planned to start on February 19, 2016, was not held.
Documents reviewed at the AHG F2F meeting (JCT-VC number / MPEG number / Source / Title):
 W0032 / m37539 / InterDigital / Encoder optimization for HDR/WCG coding
 W0031 / m37536 / CE2 chairs / Description of the reshaper parameters derivation process in ETM reference software
 W0024 / m37537 / CE4 chairs / Report of HDR CE4
 W0025 / m37538 / CE5 chairs / Report of HDR CE5
 Preliminary draft for W0026 / preliminary draft for m37738 / Philips / Report on CE6 (see notes regarding document registration difficulties)
 W0033 / m37541 / BBC / Partial results for Core Experiment
 W0101 / m37535 / BBC / Hybrid Log-Gamma HDR
 W0035 / m37543 / Arris / Some observations on visual quality of Hybrid Log-Gamma (HLG) TF processed video (CE7)
 Preliminary draft for W0061 / preliminary draft for m37695 / BBC / Partial results for Core Experiment 7 (see notes regarding document registration difficulties)
 Preliminary draft for W0110 / preliminary draft for m37771 / Technicolor / SDR quality evaluation of HLG
 W0036 / m37602 / Philips / Report on CE7 xCheck and Viewing (later withdrawn)
A substantial amount of additional detail about the AHG activities and the AHG meeting was provided in the AHG report.
The CE topics were as follows:

HDR CE1: Optimization without HEVC specification change

HDR CE2: Post-decoding signal proc with 4:2:0 10 b YCbCr NCL fixed point output for
HDR video coding
o Setting 0: Improving HDR coding beyond what CE1 could do (may or may not be
BC), see W0084/W0085
o Setting 1: Backward compatibility by one method for chroma (may also improve
HDR)
o Setting 2: Backward compatibility by another method for chroma (may also improve
HDR)

HDR CE3: Objective/subjective metrics

HDR CE4: Consumer monitor testing

HDR CE5: Alternative colour transforms and 4:4:4 to 4:2:0 resampling filters

HDR CE6: Using other processing not included in CE2 (e.g. 4:4:4 domain processing)

HDR CE7: Hybrid log gamma investigation

HDR CE8: Viewable SDR testing (for single-layer BC)
CE2 experimental test model (ETM) experiments can be categorized as follows:
 Mode 0 – Use case: single-layer HDR; Luma: 8 piecewise polynomial segments; Chroma mode: intra plane, 1 piece-wise linear segment each for Cb, Cr.
 Mode 1 – Use case: single-layer SDR backward compatible; Chroma mode: cross plane, 8 piece-wise linear segments each for Cb, Cr.
 Mode 2 – Use case: single-layer SDR backward compatible; Chroma mode: cross plane, 8 piece-wise linear segments shared by Cb, Cr.
(All modes are now completely automatic. This was not the case before.)
Setting 0 was said to be better performing for HDR quality than "setting 1".
It was agreed to first check "CE1 v3.2" versus one of the automatic "CE2 setting 0" variants.
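The chroma modes above, and the CE2 reshaper generally, are built from piecewise mapping curves. As background, a minimal sketch of applying a piecewise-linear curve defined by its break points is shown below (Python; the 8-segment break points are illustrative and are not taken from the ETM):

    import bisect

    def piecewise_linear(x, xs, ys):
        """Map x through the piecewise-linear curve with break points (xs[i], ys[i])."""
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        i = bisect.bisect_right(xs, x) - 1
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return ys[i] + t * (ys[i + 1] - ys[i])

    # Illustrative 8-segment curve over normalized sample values in [0, 1].
    xs = [i / 8.0 for i in range(9)]
    ys = [0.00, 0.10, 0.22, 0.35, 0.48, 0.62, 0.75, 0.88, 1.00]
    print(piecewise_linear(0.3, xs, ys))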
3 Documents considered at the MPEG AHG meeting in Vancouver (4)
See JCTVC-W0014 for details of the prior AHG discussion of these contributions – their further
review was not considered necessary at the JCT-VC meeting.
Preliminary versions of JCTVC-W0061 and JCTVC-W0110 and at least one of the CE reports
(CE6, see JCTVC-W0026) were also reviewed at the Vancouver meeting.
JCTVC-W0031 Description of the reshaper parameters derivation process in ETM reference
software [K. Minoo (Arris), T. Lu, P. Yin (Dolby), L. Kerofsky (InterDigital), D. Rusanovskyy
(Qualcomm), E. Francois (Technicolor)]
JCTVC-W0032 Encoder optimization for HDR/WCG coding [Y. He, Y. Ye, L. Kerofsky
(InterDigital)]
JCTVC-W0035 Some observations on visual quality of Hybrid Log-Gamma (HLG) TF processed
video (CE7) [A. Luthra, D. Baylon, K. Minoo, Y. Yu, Z. Gu]
JCTVC-W0037 Hybrid Log-Gamma HDR [A. Cotton, M. Naccari (BBC)]
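Several of the documents above (and CE7) concern the Hybrid Log-Gamma (HLG) transfer function. As background, a minimal sketch of the HLG OETF is shown below (Python; the constants are the commonly cited ARIB STD-B67 values, and scene-linear light is normalized to [0, 1] – both stated here as assumptions for illustration):

    import math

    # Hybrid Log-Gamma OETF constants (ARIB STD-B67 style).
    A = 0.17883277
    B = 1.0 - 4.0 * A                 # = 0.28466892
    C = 0.5 - A * math.log(4.0 * A)   # = 0.55991073

    def hlg_oetf(e):
        """Map normalized scene-linear light e in [0, 1] to the HLG non-linear signal."""
        if e <= 1.0 / 12.0:
            return math.sqrt(3.0 * e)
        return A * math.log(12.0 * e - B) + C

    # Example: scene light of 0.1 maps to about 0.54; e = 1.0 maps to 1.0.
    print(hlg_oetf(0.1), hlg_oetf(1.0))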
4 Project development, status, and guidance (6)
4.1 Corrigenda items (0)
See the AHG 2 report.
4.2 Profile/level definitions (1)
JCTVC-W0129 SCC Level Limits Based on Chroma Format [S. Deshpande, S.-H. Kim (Sharp)]
[late]
Initially discussed Sunday 21st 1700 (chaired by GJS).
The current calculation in HEVC SCC Draft 5 for MaxDpbSize does not account for different
chroma subsampling. This proposal describes extending the current design to support using
higher MaxDpbSize for lower chroma format values. In this document MaxDpbSize calculation
which accounts for memory use due to chroma components is proposed for Screen-Extended
Main 10 4:4:4 profile.
A similar concept was previously proposed in V0037.
It was asked how common the use case might be.
It was remarked that the only SCC profiles that support monochrome are 4:4:4 profiles.
This was later considered in joint meeting discussions (see section 7.1).
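As context for the discussion above: the existing MaxDpbSize derivation depends only on the luma picture size relative to the level's MaxLumaPs, so a 4:4:4-capable decoder gets no credit for the smaller memory footprint of monochrome, 4:2:0 or 4:2:2 pictures. A rough sketch of the general idea is given below (Python; the size thresholds only mimic the shape of the HEVC derivation, and the chroma scale factors are simply the raw sample-count ratios relative to 4:4:4 – both are assumptions for illustration, not the values proposed in W0129):

    def max_dpb_size(pic_size_in_samples_y, max_luma_ps, chroma_format_idc,
                     max_dpb_pic_buf=6):
        """Illustrative MaxDpbSize derivation with a chroma-format-dependent scale."""
        # HEVC-style dependence on picture size relative to the level's MaxLumaPs.
        if pic_size_in_samples_y <= (max_luma_ps >> 2):
            size = 4 * max_dpb_pic_buf
        elif pic_size_in_samples_y <= (max_luma_ps >> 1):
            size = 2 * max_dpb_pic_buf
        elif pic_size_in_samples_y <= ((3 * max_luma_ps) >> 2):
            size = (4 * max_dpb_pic_buf) // 3
        else:
            size = max_dpb_pic_buf
        # Assumed scale: total samples per picture relative to 4:4:4
        # (monochrome 1/3, 4:2:0 1/2, 4:2:2 2/3, 4:4:4 1).
        chroma_scale = {0: 3.0, 1: 2.0, 2: 1.5, 3: 1.0}[chroma_format_idc]
        return min(int(size * chroma_scale), 16)

    # Example: 4K (3840x2160) at a level with MaxLumaPs = 8912896, 4:2:0 content
    # -> 12 pictures here, versus 6 for 4:4:4 content of the same size.
    print(max_dpb_size(3840 * 2160, 8912896, chroma_format_idc=1))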
4.3 Conformance test set development (0)
See the AHG report JCTVC-W0004 and outputs JCTVC-W1008 for SHVC, JCTVC-W1012 for
RExt and improved version 1 testing, and JCTVC-W1016 for SCC.
4.4 SCC text development (2)
Note also the E′ [0,1] versus [0, 12] scaling issue and the "full range" scaling issue and JCTVC-W0044 and parent-level ILS from ITU-R ([ TD 385-GEN ]), and also the IPT colour space,
which were considered in the HDR category. See other notes on those topics.
Discussed Sunday 21st 1430 (chaired by GJS).
JCTVC-W0096 Proposed editorial improvements to HEVC Screen Content Coding Draft Text 5
[R. Joshi, G. Sullivan, J. Xu, Y. Ye, S. Liu, Y.-K. Wang, G. Tech] [late]
This document summarizes the proposed editorial improvements to HEVC Screen Content Coding Draft Text 5. It also identifies known issues that should be addressed in version 6 of the draft text. The accompanying documents, JCTVC-V1005-W0096.doc and JCTVC-V0031_fixes_annexes_F_G_v2, contain the proposed changes with revision marks relative to JCTVC-V1005-v1.

Proposed editorial improvements related to adaptive colour transform (editorial – OK)

Wavefronts & tiles should be allowed to be used together in Screen-Extended 4:4:4 profiles – Decision (BF): OK (this was an overlooked item from the previous meeting) – the editors were asked to check the wording of the CPR region constraint

Small editorial items – OK
o Ticket #1440 – editorial clarification
o Ticket #1435: Changed "residual_adaptive_colour_transform_flag" to
"residual_adaptive_colour_transform_enabled_flag" in subclause 8.4.1.
o Ticket #1434 and #1436: Introduced the variable ControlParaAct as an input to
subclause 8.4.4.1: General decoding for intra blocks.
o Ticket #1437 small luma/chroma copy-paste error
o In subclause I.7.4.3.3.7, "Picture parameter set 3D extension semantics", the
variable MaxNumLayersMinus1 is used without a definition.
o In subclauses G.11.2.1 and H.11.2.1, an equation number is missing.
o Ticket #1431: In subclause 7.3.6.3, pred_weight_table( ), which is invoked from
slice_segment_header( ), has dependencies on RefPicList0 and PicOrderCntVal,
variables that (according to the first sentence of subclause 8.3.2) are only
computed "after decoding of a slice header".
o Ticket #1433: Missing CABAC information.
o Ticket #1439: IRAP constraint for Main 4:4:4 Intra profile (forgotten mention of
one profile in a list)

Regarding support for 4:2:2 in the new High Throughput profiles (esp. the Screen-Extended
ones, but this plan was reversed after further discussion in a joint meeting session (see
section 7.1)

Monochrome is not supported in Screen-Extended Main and Screen-Extended Main 10
profiles - no action on that

Decoder conformance for Scalable Range Extensions – Editorial – delegated to the
editors, as the intent is clear.

Should an Annex A SCC decoder be required to have INBLD capability? That doesn't
really seem necessary or appropriate – no action on that.

Should the SCC HT profiles be called HT profiles? Editors have discretion to consider
some terminology restructuring, but no clear approach was identified in the discussion
that seemed better than the current text in that regard.

Multiview & SHVC aspects were noted.
NB comments & disposition
Two technical comments from Finland (bug fixes of prior scalability-related aspects) were
received and agreed.
Item 1: In F.8.1.3, when nuh_layer_id is greater than 0 and the general profile_idc signals a
single-layer profile (specified in Annex A), no decoding process is specified. This can happen
when the target output layer set is an additional output layer set (which, by definition, excludes
the base layer). In this case the layer with SmallestLayerId is an independent non-base layer that
is not decoded. In another example, this happens when an output layer set contains the base layer
and an independent non-base layer. In this case, the independent non-base layer is not decoded if
its profile_idc indicates a single-layer profile.
It was proposed to change the end of the decoding process invocation logic in F.8.1.3 as follows:
"– Otherwise, if general_profile_idc in the profile_tier_level( ) syntax structure
VpsProfileTierLevel[ profile_tier_level_idx[ TargetOlsIdx ][ lIdx ] ] is equal to 8, the
decoding process for the current picture takes as inputs the syntax elements and upper-case
variables from clause I.7 and the decoding process of clause I.8.1.2 is invoked.
– Otherwise (the current picture belongs to an independent non-base layer), the decoding
process for the current picture takes as inputs the syntax elements and upper-case variables
from clause F.7 and the decoding process of clause 8.1.3 is invoked by changing the
references to clauses 8.2, 8.3, 8.3.1, 8.3.2, 8.3.3, 8.3.4, 8.4, 8.5, 8.6, and 8.7 with clauses
F.8.2, F.8.3, F.8.3.1, F.8.3.2, F.8.3.3, F.8.3.4, F.8.4, F.8.5, F.8.6, and F.8.7, respectively."
The decoding process invocation for an independent non-base layer is hence identical to that of
the invocation for the base layer (also in F.8.1.3).
Item 2: F.14.3.3 includes the following constraint: "A layers not present SEI message shall not
be included in an SEI NAL unit with TemporalId greater than 0." This constraint unnecessarily
limits the signalling flexibility of an entity that prunes layers from a bitstream. Layer pruning
decisions may be made at any access unit, not just access units with TemporalId equal to 0.
Presumably the intent of the constraint has been to avoid the removal of layers not present SEI
message(s) as a consequence of removing the highest sub-layer(s) of the bitstream. However, it
is asserted the intent can be achieved by a differently phrased constraint without the loss of
flexibility and access unit wise preciseness of indicating layers not present. It was proposed to
replace the constraint with the following:
"When a layers not present SEI message is associated with coded picture with TemporalId
firstTid that is greater than 0 and that coded picture is followed, in decoding order, by any coded
picture with TemporalId less than firstTid in the same CVS, a layers not present SEI message
shall be present for the next coded picture, in decoding order, with TemporalId less than
firstTid."
Other NB comments seemed to be on issues that have been resolved.
4.5 HEVC coding performance, implementation demonstrations and design analysis (1)
4.5.1 HM performance (0)
See JCTVC-W0104.
4.5.2 RExt performance (0)
See JCTVC-W0104.
4.5.3 SHVC performance/verification test (1)
JCTVC-W0095 SHVC verification test results [Y. He, Y. Ye (InterDigital), Hendry, Y. K. Wang
(Qualcomm), V. Baroncini (FUB)]
Discussed Fri 26th 1145 (GJS)
This document reports the results for the SHVC verification test.
A subjective evaluation was conducted to compare the SHVC Scalable Main and Scalable Main
10 profiles to HEVC simulcast using the HEVC Main and Main 10 profiles, respectively. The
verification tests cover a range of video resolutions from 540p to 4K, and various scalability
cases, including spatial 1.5x, spatial 2x, SNR and color gamut scalabilities (CGS).
Average bitrate savings of SHVC vs. HEVC simulcast was reported as illustrated below:
The results confirmed that the SHVC Scalable Main and Scalable Main 10 profiles can achieve the same subjective
quality as HEVC simulcast while requiring on average approximately 40-60% less bit rate.
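For clarity, the kind of saving quoted here is computed against the sum of the two simulcast streams at matched subjective quality; a minimal sketch with made-up rate numbers is shown below (the function and the numbers are illustrative, not part of the test report).

    # Illustrative computation of a bitrate saving versus HEVC simulcast (made-up numbers).
    def saving_vs_simulcast(shvc_kbps, hevc_base_kbps, hevc_enh_kbps):
        """Percentage rate saving of one SHVC stream over two simulcast HEVC streams."""
        return 100.0 * (1.0 - shvc_kbps / (hevc_base_kbps + hevc_enh_kbps))

    print(f"{saving_vs_simulcast(3000, 2000, 4000):.1f}% saving")  # 50.0% saving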
The EL/BL ratio used was lower than previously planned for CGS since additional bit rate didn't
seem necessary. This change had been agreed in the interim, with email reflector confirmation.
The "Parakeet" sequence was tested and was in the plan for the test, but had been used in the
prior design work for SHVC. It actually had worse test result than other sequences used in that
part of the test, so any concern about that issue would seem unwarranted. Including that test
sequence in the final report thus seemed appropriate and was agreed.
4.5.4 SCC performance, design aspects and test conditions (1)
JCTVC-W0104 Comparison of Compression Performance of HEVC Screen Content Coding
Extensions Test Model 6 with AVC High 4:4:4 Predictive profile [B. Li, J. Xu, G. J. Sullivan
(Microsoft)]
Presented on Sunday 21st at 1730 (Rajan).
This contribution is a study of the relative objective (i.e. PSNR-based) compression performance
of HEVC Screen Content Coding (SCC) Test Model 6 (SCM 6) and AVC High 4:4:4 Predictive
Profile. It builds upon the prior work reported in JCTVC-G399, JCTVC-H0360, JCTVC-I0409,
JCTVC-J0236, JCTVC-K0279, JCTVC-L0322, JCTVC-M0329, JCTVC-O0184, JCTVC-P0213,
JCTVC-R0101, JCTVC-S0084, JCTVC-T0042, JCTVC-U0051, and JCTVC-V0033 – updating
the results by using the latest available reference software (JM-19.0, HM-16.7+SCM-6.0), profile
and test model designs, and SCC common test conditions (CTC) test sequences. The overall
results indicate that for screen content CTC sequences, the HEVC SCC Test Model 6 improves
quite substantially over JM-19.0. For example, for RGB text and graphics with motion (TGM)
1080p & 720p sequences, HEVC SCC Test Model 6 saves 86%, 81%, and 78% bits for AI, RA and
LB lossy coding over JM-19.0, respectively (the corresponding numbers are also 86%, 81% and
78% in JCTVC-V0033, which compares HM-16.6+SCM-5.2 with JM-19.0).
Results were also provided for non-CTC sequences (used during RExt development). For some
simple sequences such as the waveform sequence, the BD-rate savings were over 90%.
4.6 Software development (1)
JCTVC-W0134 SHM software modifications for multi-view support [X. Huang (USTC),
M. M. Hannuksela (Nokia)] [late]
Discussed Sat. 25th p.m. (GJS).
The contribution includes SHM source code that contains multiview support. It is asserted that
such software could be useful e.g. for creating and decoding MV-HEVC bitstreams with an
external base layer and for cross-verifying the HTM encoder (in its MV-HEVC mode) with SHM
decoder or vice versa. The operation of the software was reportedly verified by a simulation with
the JCT-3V test set with an IBP inter-view prediction hierarchy. It was proposed to include the
proposed software changes into the SHM.
In revision 1 of the contribution, a bug pointed out by Tomohiro Ikai was fixed.
Further discussed near the end of the meeting. See section 1.15.
Decision (SW): This was adopted subject to software quality check with the software
coordinator(s).
4.7 Source video test material (0)
No contributions were noted for JCT-VC on source video test sequence material. (See the JVET
report for video test material considered there.)
5 Core experiments in HDR (39)
5.1 HDR CE1: Optimization without HEVC specification change (2)
Initially discussed Fri 19th 1745 (GJS & JRO).
5.1.1 CE1 summary and general discussion (1)
JCTVC-W0021 Report on HDR CE1 [Jacob Ström, Joel Sole, Yuwen He]
This document provides a description of work done in Core Experiment (CE1). The goal of this
CE was to identify and investigate methods for the optimization of the HEVC Main 10 coding of
High Dynamic Range (HDR) and Wide Colour Gamut (WCG) content without any specification
changes. During the course of the CE1, several ways to improve the anchors have been found
and incorporated into the anchor generation process. At the interim meeting in Vancouver, anchor v3.2 was selected as the anchor to be used.
The anchor generation process has gone through a number of versions summarized in Table 1.
Version number | Comment | Final code released | Date of decision to use as current anchor
v1.0 | CfE anchor | – | –
v2.0 | CE1.a + CE1.b + HM16.7 + VUI/SEI parameter changes + chroma position change + integer QP | Oct 30, 2015 | Oct 30, 2015
v2.1 | New sequences, split sequences, new rate points | Nov 10, 2015 | –
v3.0 | Average-luma controlled adaptive QP | Nov 21, 2015 | Dec 1, 2015
v3.1 | Bugfix1 (luma dQP LUT metadata file reading) | Dec 6, 2015 | –
v3.2 | Bugfix2 (luma dQP signaling) + integer QP | Jan 5, 2016 | Jan 12, 2016
On Jan 5, 2016, Interdigital shared two anchor improvement proposals to the reflector based on
m37227. The software was based on v3.0 of the anchors and contained two tests; test1 was
anchor v3.0 with added deblocking optimization and chroma QP offset adjustment, and test 2
was anchor v3.0 with deblocking optimization only.
During the interim AHG meeting, anchors 3.2 were shown on the SIM2 display and compared
against v2.1. It was concluded that the new anchors (v3.2) were a substantial improvement over
v2.1.
A viewing was also held between Interdigital's test1 and anchor v3.2. The group decided to
select anchor v3.2, which became the current anchor.
It was commented that this QP control is deterministic, similar in spirit to fixed QP, and it was
suggested that the group should not get into the territory of full rate control for such experiments.
It was commented that reference software for "best practices", e.g. as a branch to the HM and the
HDRtools software package, should be developed (and probably approved as standard reference
software).
Further discussed Saturday GJS & JRO 1800
For software distribution:
- The CE1 anchor will be made a branch of the HM
- The HDRtools package will be made accessible via a login available to all JCT-VC members
5.1.2 CE1 primary contributions (0)
5.1.3 CE1 cross checks (1)
JCTVC-W0108 Cross-check report on HDR/WCG CE1 new anchor generation [D. Jun, J. Lee,
J. W. Kang] [late]
5.2 HDR CE2: 4:2:0 YCbCr NCL fixed point for HDR video coding (13)
Initially discussed Fri 1800 GJS & JRO
5.2.1 CE2 summary and general discussion (1)
JCTVC-W0022 Report of HDR Core Experiment 2 [D. Rusanovskyy (Qualcomm), E. Francois
(Technicolor), L. Kerofsky (InterDigital), T. Lu (Dolby), K. Minoo (Arris)]
The exploratory experiment 2 (CE2) was established at the 113th MPEG meeting (N15800) to investigate HDR video coding technologies operating in the 4:2:0 YCbCr NCL fixed-point domain. A detailed CE2 description and the planned sub-tests were provided in ISO/IEC JTC1/SC29/WG11 document N15795. There were six cross-checked input contributions reporting the results of 10 sub-tests, and five CE2-related contributions.
As a result of the CE2 activity, all given mandates were reportedly accomplished in time. The Exploration Test Model (ETM) with automated encoder-side algorithms was released on 2015-12-19, and the software description and simulation results were provided on 2016-01-12.
A preliminary CE2 report was presented at the January 2016 F2F meeting of the MPEG AHG on HDR. The CE2 description was updated based on the outcome of the F2F meeting. This document reports the CE2 activity that followed the F2F meeting.
Planned CE2 sub-tests
Planned sub-test | Proponents
CE2.a-1 luma ATF with LCS (m37245) | Qualcomm
CE2.a-2 reshaping (m37267) | Dolby
CE2.a-3 adaptive transfer function (m37091) | Arris
CE2.b-1 Cross-channel modulation (m37088 and m37285) | Technicolor
CE2.b-2 SDR backward compatibility study (m37092) | Arris
CE2.c Chroma QP offset (m37179) | Qualcomm, Arris
CE2.d DeltaQP adjustment for Luma | Arris
CE2.e-1 Enhancement of ETM | Qualcomm
CE2.e-2 harmonization of luma and chroma reshaping (m37332) | Arris
CE2.e-3 automatic selection of ETM parameters | Interdigital
A revised CE description was requested to be uploaded as part of the report.
JCTVC doc | Title | Source | Sub-test
W0022 | HDR CE2 report (this document) | Coordinators | –
W0033 | HDR CE2: report of CE2.b-1 experiment (reshaper setting 1) | Technicolor | CE2.b-1
W0084 | HDR CE2: CE2.a-2, CE2.c, CE2.d and CE2.e-3 | Dolby, InterDigital | CE2.a-2, CE2.c, CE2.d, CE2.e-3
W0093 | HDR CE2: Report of CE2.a-3, CE2.c and CE2.d experiments (for reshaping setting 2) | Arris | CE2.a-3, CE2.c, CE2.d
W0094 | HDR CE2: Report of CE2.b-2, CE2.c and CE2.d experiments (for reshaping setting 2) | Arris | CE2.b-2, CE2.c, CE2.d
W0097 | HDR CE2: CE2.c-Chroma QP offset study report | Qualcomm | CE2.c
W0101 | HDR CE2: Report on CE2.a-1 LCS | Qualcomm | CE2.a
W0068 | CE2-related: Adaptive Quantization-Based HDR video Coding with HEVC Main 10 Profile | ZTE | –
W0071 | CE2-related: Adaptive PQ: Adaptive Perceptual Quantizer for HDR video Coding with HEVC Main 10 Profile | ZTE | –
W0085 | HDR CE2 related: Further Improvement of JCTVC-W0084 | Dolby, InterDigital | –
W0089 | HDR CE2-related: some experiments on reshaping with input SDR | Technicolor | –
W0100 | HDR CE2-related: Results for combination of CE1 (anchor 3.2) and CE2 | Qualcomm | –
W0112 | HDR CE2: cross-check report of JCTVC-W0084 | Technicolor | –
W0113 | HDR CE2: cross-check report of JCTVC-W0101 | Technicolor | –
W0093 | HDR CE2: cross-check report of JCTVC-W0033 | Dolby | –
W0094 | HDR CE2: cross-check report of JCTVC-W0097 | Dolby | –
W0101 | HDR CE2 related: cross-check report of linear luma re-shaper | Samsung | –
W0128 | HDR CE2: Cross-check of JCTVC-W0084 (combines solution for CE2.a-2, CE2.c, CE2.d and CE2.e-3) | Qualcomm | –
W0130 | Crosscheck for further improvement of HDR CE2 (JCTVC-W0085) | Intel | –
It was asked whether there were changes happening in CE2 after the stabilization of CE1 that
could have been applied to CE1, and whether there had been sufficient time to study what had
been happening. However, it was commented that some or all of the changes that took place
were not really the development of new ideas but rather the application of things that had taken
place in the CE1 context.
In principle, CE1 itself could be a moving target (e.g. further improvement of the table mapping
of brightness to QP) but its evolution was stopped by a desire to deliver it on a deadline.
It was agreed that we should do an initial test of CE2 vs. CE1, and that it should consider only
one of the two CE2 variants that had been stabilized earlier (W0084 and W0097).
Some participants had received the Jan 27 software ETM r1 (which was not the W0084 software)
and had studied it and had it ready to use. Samsung had checked this.
Some proponents suggested using the Feb 1 W0084 CE2 software instead, as it was delivered as
the software to be used for the CE. This had been cross-checked by Technicolor (although they
had reported a rate allocation issue - see notes elsewhere).
It was agreed to use the Feb 1 software for the viewing test conducted during the meeting.
Some concern about the bit allocation was expressed – see notes elsewhere. We could investigate
that question afterwards as appropriate.
We have two codecs * 15 test sequences * 4 bit rates = 120 total possible encodings to review.
It was suggested to trim the number of test sequences to avoid ones that are too short - to select
about 6-8 total.
It was agreed to use the three lowest bit rates.
This results in 2*6*3=36 encodings to consider.
A dry run was requested for Sat a.m., with testing Sat 6 p.m. or by lunch Sunday.
See discussion of W0112.
It was noted that it is important to make sure the PC graphics card sync rate is 50 Hz if the
display is 50 Hz.
The selected sequences were:
- Balloon [guy lines]
- Market [walls, canopy, clothing]
- EBU06Starting (play slowed to 50; source is 100) [lines of tracks, problems in dark]
- Garage Exit [power line at the end]
- Showgirl [face area, background dark region blocking & noise]
- Sunrise [dark areas, clouds in sky, trees with blocking]
V. Baroncini was asked to coordinate the testing. See section 7.1 for the results reporting.
Tests previously performed in the CE were as follows.
Reported sub-tests (sub-test | proponents | crosschecker | report doc. | crosscheck report | notes)
CE2.a-1 luma ATF with LCS (m37245) | Qualcomm | Technicolor | W0101 | W0113 | –
CE2.a-2 reshaping (m37267) | Dolby, Interdigital | Qualcomm, Technicolor | W0084 | W0128, W0112 | jointly with CE2.c, CE2.d, CE2.e-3
CE2.e-3 automatic selection of ETM parameters | – | – | – | – | merged with CE2.a-2
CE2.c Chroma QP offset (m37179) | – | – | – | – | merged with CE2.a-2
CE2.d DeltaQP adjustment for Luma | – | – | – | – | merged with CE2.a-2
CE2.b-1 Cross-channel modulation (m37088 and m37285) | Technicolor | Dolby | W0033 | W0093 | –
CE2.a-3 adaptive transfer function (m37091) | Arris | – | W0093 | – | combined with CE2.c and CE2.d
CE2.b-2 SDR backward compatibility study (m37092) | Arris | – | W0094 | – | combined with CE2.c and CE2.d
CE2.c Chroma QP offset (m37179) | Qualcomm, Arris | Dolby | W0097 | W0094 | –
CE2.e-1 Enhancement of ETM | Qualcomm | – | withdrawn | W0130 | –
CE2.e-2 harmonization of luma and chroma reshaping (m37332) | Arris | – | withdrawn | – | –
CE2 sub-tests summary:
W0033: CE2.b-1 experiment (reshaper setting 1)
- Backward compatible configuration of ETM, reshaper setting 1 (cross component YCbCr processing)
- Algorithms of luma and chroma reshapers derivation
- No changes to the syntax or decoding process of the ETM reported
These experiments relate to SDR backward compatible configuration, and focus on reshape
setting 1 that involves cross-plane colour reshaping. The experiments only involve the reshaping
algorithm, and do not require any change in the current ETM syntax and inverse reshaping
process. The modified reshaper tuning algorithm for reshape setting 1 was proposed to be
adopted in the ETM. It was reported that the proposed changes improve the reshaped SDR
content quality (in terms of visual rendering of contrast and colours), while preserving the
compression performance.
W0084: Combined solution for CE2.a-2, CE2.c, CE2.d and CE2.e-3, includes:
- Non backward compatible configuration of ETM, reshaper setting 0 (independent component YCbCr processing)
- Algorithm for luma reshaper derivation and automatic update mechanism
- HM encoder optimization
- DeltaQP adjustment for luma and ChromaQP offset adjustment for chroma for local control
- Change to deblocking filter param derivation
- No changes to the syntax or decoding process of the ETM reported
This proposal reports a combination of CE2 subtests: CE2.a-2 on luma forward reshaping
improvement, CE2.c on chromaQPOffset, CE2.d on DeltaQP adjustment for luma, and CE2.e-3
on automatic selection of ETM parameters. The experiment is to test non-normative luma
reshaping improvement as well as joint optimization of reshaper and encoder. The encoder
optimization methods include chromaQPOffset, luma DeltaQP adjustment, and deblocking filter
parameter selection.
W0093 HDR CE2: Report of CE2.a-3, CE2.c and CE2.d experiments (for reshaping setting 2),
includes:
- Backward compatible configuration of ETM, reshaper setting 2 (cross component YCbCr processing)
- Algorithms for Adaptive Transfer Function (ATF) derivation for luma and chroma
- HM encoder optimization
- DeltaQP adjustment for luma
- No changes to the syntax or decoding process of the ETM reported
This document reports the results of experiments for CE2.a-3 in conjunction with encoding
optimization techniques which have been studied under CE2.c and CE2.d categories. The
combined experiment was conducted to measure the performance of an alternative transfer
function (instead of conventional PQ TF) as well as joint optimization of the reshaper and
encoder, specifically when the reshaping setting is in "setting 2". Informal subjective evaluations
were conducted for both the reconstructed SDR and HDR sequences. It is asserted that compared
to the version 3.2 of anchor, the proposed approach provides similar perceptual quality of the
reconstructed HDR signal while supporting SDR backward compatibility.
W0094 HDR CE2: Report of CE2.b-2, CE2.c and CE2.d experiments (for reshaping setting 2),
includes
- Backward compatible configuration of ETM, reshaper setting 2 (cross component YCbCr processing)
- Algorithms for Adaptive Transfer Function (ATF) derivation for luma and chroma
- HM encoder optimization
- DeltaQP adjustment for luma
- ChromaQP offset adjustment for chroma
- No changes to the syntax or decoding process of the ETM reported
This document reports a combination of CE2 subtests: CE2.b-2 on SDR backward compatibility,
CE2.c on chromaQPOffset, and CE2.d on DeltaQP adjustment for luma. The experiment was
conducted to study bitstream SDR backward compatibility as well as joint optimization of the
reshaper and encoder. Subjective evaluation was conducted on a SIM2. It is asserted that
compared to anchor v3.2, the proposed joint optimization of reshaper and encoder provides
similar perceptual quality while supporting SDR backward compatibility.
W0097 HDR CE2: CE2.c-Chroma QP offset study report, includes:
- Non backward compatible configuration of ETM, reshaper setting 0 (independent component YCbCr processing)
- Algorithm for chroma ATF derivation
- Tested in 2 frameworks:
  - Reference ETM software
  - Software of the W0084 (alternative luma reshaper)
- HM encoder optimization:
  - CE1 anchor 3.2 default settings
- No changes to the syntax or decoding process of the ETM reported
This document reports the results of the CE2.c sub-test, which studies harmonization of the chroma QP offset utilized in the CE1 anchor and the CrCb processing of the ETM. As a result of the study, a modification of the encoder-side algorithm is proposed for the ETM. The solution proposed in this document employs an independent multi-range ATF (reshaper) for each colour component. The proposed solution provides objective gain as well as improvement in visual quality over the reference ETM design as well as over the CE1 anchor. The proposed multi-range ATF was also integrated and tested in the software of W0084.
W0101 HDR CE2: Report on CE2.a-1 Luma based Chroma Scaling, includes:
- Non backward compatible configuration of ETM, reshaper setting 1 (cross-component YCbCr processing)
- Algorithm for chroma ATF derivation
- No changes to the syntax or decoding process of the ETM reported
This document reports the results of the study of CE2.a-1 on luma-based chroma scaling (LCS).
The chroma sample values are scaled by values that are dependent on the co-sited luma sample
values, and the scale values are computed based on a function that increases with luma values.
The document reports objective evaluation on the tested configuration.
CE2 – related summary:
W0068 CE2-related: Adaptive Quantization-Based HDR video Coding with HEVC Main 10
Profile
- Non-ETM framework
The perceptual quantization (PQ) for HDR video coding provides perceptual uniformity across the luminance range with a modest bit depth. However, the dynamic range of HDR video is not fully utilized by PQ, especially for the chrominance channels. Thus, there are wasted dynamic ranges in PQ, which cause detail loss and colour distortions. In this contribution, adaptive quantization-based HDR video coding is proposed. An adaptive mapping of sample values between bit depths of 16 bits and 10 bits is established to perform the conversion based on a cumulative
distribution function (CDF), i.e. adaptive quantization, instead of linear mapping. First, CDF is
derived from the histogram of each picture. Then, adaptive quantization is performed based on
CDF. The metadata for adaptive conversion are coded for performing the inverse conversion at
destination. Experimental results demonstrate that the proposed method achieves average 1.7 dB
gains in tPSNR XYZ and 2 dB gains in PSNR_DE over Anchor. Subjective comparisons show
that the proposed method preserves image details while reducing colour distortion.
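A generic sketch of such CDF-driven 16-bit-to-10-bit forward mapping (histogram-equalisation style) is shown below; it illustrates the idea described in the contribution but is not the proponents' exact algorithm.

    # Generic CDF-based forward mapping sketch (not the W0068 algorithm itself).
    import numpy as np

    def cdf_forward_map(picture_16bit, out_bits=10):
        """Return the 10-bit picture and the 65536-entry LUT (the metadata for inversion)."""
        hist = np.bincount(picture_16bit.ravel(), minlength=1 << 16).astype(np.float64)
        cdf = np.cumsum(hist) / hist.sum()                       # per-picture CDF in [0, 1]
        lut = np.round(cdf * ((1 << out_bits) - 1)).astype(np.uint16)
        return lut[picture_16bit], lut

    pic = np.random.randint(0, 1 << 16, size=(4, 4), dtype=np.uint16)
    mapped, lut = cdf_forward_map(pic)
    print(mapped.max() <= 1023)  # True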
W0071 CE2-related: Adaptive PQ: Adaptive Perceptual Quantizer for HDR video Coding with
HEVC Main 10 Profile
- Non-ETM framework
Proposed is an adaptive transfer function based perceptual quantizer (PQ) for HDR video, called adaptive PQ. Adaptive PQ addresses the problem that PQ is not adaptive to HDR content because it uses a fixed mapping function from luminance to luma. By introducing a ratio factor derived from the luminance information of the HDR content, adaptive PQ maps luminance to luma adaptively according to the content. Experimental results demonstrate that adaptive PQ
achieves average bit-rate reductions of 3.51% in tPSNR XYZ and 5.51% in PSNR_DE over PQ.
W0085 HDR CE2 related: Further Improvement of JCTVC-W0084
This proposal reports a further improvement of JCTVC-W0084. The improvement is solely on
luma DeltaQP adjustment.
W0089 HDR CE2-related: some experiments on reshaping with input SDR
- Backward compatible configuration of ETM, reshaper setting 1 (cross component YCbCr processing)
- Algorithms of luma and chroma reshapers derivation
- No changes to the syntax or decoding process of the ETM reported
This document reports preliminary experiments on ETM with dual SDR/HDR grading. In this
configuration, a graded SDR version as well as the usual graded HDR version are given as inputs
to the reshaper. The reshaping process uses the input SDR as a reference and generates reshaping data to match the reshaped input HDR to the input SDR as closely as possible. In the experiments,
reshape setting 1 configuration is used (cross-plane chroma reshaping). It is reported that these
preliminary experiments tend to show that ETM models are able to properly reproduce an SDR
intent.
W0100 Results for combination of CE1 (anchor 3.2) and CE2
- Non backward compatible configuration of ETM, reshaper setting 0 (independent component YCbCr processing)
- HM encoder optimization
- CE1 default anchor settings
- Additional DeltaQP adjustment for luma
- Additional ChromaQP offset adjustment for chroma
- No changes to the syntax or decoding process of the ETM reported
This document reports results of combining the software of anchor 3.2 that is the outcome of
CE1 and the ETM that is the outcome of CE2. Results for the direct combination of the CE1 and
CE2 software are provided. It is noted that CE1 and CE2 algorithms have some degree of
overlap, so the direct combination could be improved by a proper adjustment of the parameters.
Results obtained by adjusting the parameters are also provided.
5.2.2 CE2 primary contributions (6)
JCTVC-W0033 HDR CE2: report of CE2.b-1 experiment (reshaper setting 1) [E. Francois,
Y. Olivier]
Tues 1415 (GJS & JRO).
Setting 1: This (as configured for the CE2 test) targets backward compatibility using cross-component signal processing.
This document reports the results of experiments HDR CE2.b-1. These experiments relate to
SDR backward compatible configuration, and focus on reshape setting 1 that involves cross-plane color reshaping. The experiments only involve the reshaping algorithm, and do not require
any change in the current ETM syntax and inverse reshaping process. The modified reshaper
tuning algorithm for reshape setting 1 is proposed to be adopted in the ETM. It was reported that
the proposed changes improve the reshaped SDR content quality (in terms of visual rendering of
contrast and colours), while preserving the compression performance.
This is in the category of "bitstream backward compatibility".
It is not in the category of "decoder-side content DR/CG adaptation within an HDR context".
The proponent was asked whether the proposed scheme could be configured to provide enhanced
HDR quality rather than SDR backward compatibility, and said it seemed possible.
The automatic adaptation of the reshaper is considering both SDR (at decoder output) and HDR
(after reshaper). It is claimed that due to the dependency between luma and chroma (hue control)
it is simpler to achieve this. 32 pieces of a piecewise linear transfer curve are used.
It is claimed that the multiplicative dependency of chroma from luma is doing something that is
not possible with CRI.
Basically, all three reshaper settings (as well as CRI) could be configured for a non-backward-compatible as well as a backward-compatible solution. Currently, no hard evidence exists as to what benefit any of these solutions has for one or the other case.
JCTVC-W0084 HDR CE2: CE2.a-2, CE2.c, CE2.d and CE2.e-3 [T. Lu, F. Pu, P. Yin, T. Chen,
W. Husak (Dolby), Y. He, L. Kerofsky, Y. Ye (InterDigital)]
Discussed Sunday 21st 1010–1130 (GJS & JRO).
This proposal reports a combination of CE2 subtests: CE2.a-2 on luma forward reshaping
improvement, CE2.c on chromaQPOffset, CE2.d on DeltaQP adjustment for luma, and CE2.e-3
on automatic selection of ETM parameters. The experiment is to test non-normative luma
reshaping improvement as well as joint optimization of reshaper and encoder. The encoder
optimization methods include chromaQPOffset, luma DeltaQP adjustment, and deblocking filter
parameter selection. Subjective evaluation was conducted on both Pulsar and SIM2. It is asserted
that compared to Anchor v3.2, the proposed joint optimization of reshaper and encoder provides
visible subjective quality improvements for many of the test clips, primarily in the form of more
texture details.
The ST 2084 luminance range is divided into a number of pieces and codewords are allocated
into each piece based on image statistics computed on the fly.
The remapper is determined automatically and without look-ahead. Updating is done when a
change of image content range or content characteristics is detected. It is sent with each IDR,
even when not changed.
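A much-simplified sketch of that general idea (equal pieces over the 10-bit code range, codeword budget shared according to per-picture statistics, piecewise-linear forward LUT) is given below; the statistic used and the construction are placeholders and do not reproduce the W0084 algorithm.

    # Simplified piecewise reshaper construction (illustration only, not the W0084 design).
    import numpy as np

    def build_reshaper_lut(luma10, num_pieces=32):
        """Return a 1024-entry forward reshaping LUT (10-bit in, 10-bit out)."""
        piece_size = 1024 // num_pieces
        hist = np.bincount(luma10.ravel() // piece_size, minlength=num_pieces)
        codewords = (hist / hist.sum()) * 1023.0          # codewords allocated per piece
        boundaries = np.concatenate(([0.0], np.cumsum(codewords)))
        lut = np.empty(1024)
        for p in range(num_pieces):                       # linear segment within each piece
            x = np.arange(piece_size)
            lut[p * piece_size:(p + 1) * piece_size] = boundaries[p] + x * codewords[p] / piece_size
        return np.round(lut).astype(np.uint16)

    luma = np.random.randint(0, 1024, size=(8, 8), dtype=np.uint16)
    print(build_reshaper_lut(luma).shape)  # (1024,)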
For the Feb 1 version (used in the visual comparison), one improvement is that no update is made when it is unnecessary (the first fully automatic version, ETM r1, made an update at least once per second, based on the characteristics of the new IDR picture). This however saves only a little data rate. Other changes are more fundamental. The reshaper function of ETM r1 is a simple power
function, which is changed to become more flexible based on the alpha parameter in 0084. 0084
also uses adaptive chroma QP offset and luma QP adaptation similar to CE1. Furthermore,
deblocking is modified. The encoder selects tc and beta for minimum distortion between original
and reconstructed. This method is equivalent to the method that had been investigated in CE1
(but not used there per decision of the Vancouver meeting).
In the discussion, it was asked what would be done if two different image areas are included in
the same picture (e.g., by an editing operation, or different areas of the same scene having
different characteristics). The scheme applies only one remapper to the entire picture and would
thus need to apply a single remapping to all regions within a picture.
In principle, the same thing could be done to SDR video if desirable. However, the encoding
scheme used here is targeted for ST 2084.
One expert mentioned that similar adjustment as by the suggested reshaping curve could also be
used to further improve the QP adaptation of CE1.
The alpha parameter which adjusts the reshaper characteristics is determined based on
percentage of dark and bright areas. alpha equal 1 would retain the original PQ codeword. The
actual reshaping is done based on the representation of the ATM (piecewise linear etc.)
For the local QP adaptation, the change is less aggressive than in CE1, as it only compensates
locally what the globally optimized reshaper is not able to do. The subsequent contribution 0085
makes this further dependent on reshaper characteristics, whereas W0084 is reshaper agnostic.
Another modification is made by adjusting the QP offset over the hierarchy layers. This explains the shift of bits toward the first frame observed by a cross-checker, but some concern was raised about such a strategy in the visual comparison.
JCTVC-W0093 HDR CE2: Report of CE2.a-3, CE2.c and CE2.d experiments (for reshaping
setting 2) [Y. Yu, Z. Gu, K. Minoo, D. Baylon, A. Luthra (Arris)]
Tues 1500 (GJS & JRO).
Setting 2: This (as configured for the CE2 test) targets backward compatibility.
This document reports the results of experiments for CE2.a-3 in conjunction with encoding
optimization techniques which have been studied under CE2.c and CE2.d categories. The
combined experiment was conducted to measure the performance of an alternative transfer
function (instead of conventional PQ TF) as well as joint optimization of the reshaper and
encoder, specifically when the reshaping setting is in mode 2. Informal subjective evaluations
were conducted for both the reconstructed SDR and HDR sequences. It is asserted that compared
to the version 3.2 of anchor, the proposed approach provides similar perceptual quality of the
reconstructed HDR signal while supporting SDR backward compatibility.
This is in the category of "bitstream backward compatibility".
It is not in the category of "decoder-side content DR/CG adaptation within an HDR context".
Decoder side is "mode 1", which is identical to the decoder used for "setting 1".
JCTVC-W0094 HDR CE2: Report of CE2.b-2, CE2.c and CE2.d experiments (for reshaping
setting 2) [Z. Gu, K. Minoo, Y. Yu, D. Baylon, A. Luthra (Arris)]
Tues pm 1515 (GJS & JRO).
Setting 2: This (as configured for the CE2 test) targets backward compatibility.
This document reports a combination of CE2 subtests: CE2.b-2 on SDR backward compatibility,
CE2.c on chromaQPOffset, and CE2.d on DeltaQP adjustment for luma. The experiment was
conducted to study bitstream SDR backward compatibility as well as joint optimization of the
reshaper and encoder. Subjective evaluation was conducted on a SIM2. It is asserted that
compared to anchor v3.2, the proposed joint optimization of reshaper and encoder provides
similar perceptual quality while supporting SDR backward compatibility.
This is in the category of "bitstream backward compatibility".
It is not in the category of "decoder-side content DR/CG adaptation within an HDR context".
Decoder side is "mode 1", which is identical to the decoder used for "setting 1".
JCTVC-W0097 HDR CE2: CE2.c-Chroma QP offset study report [D. Rusanovskyy, J. Sole,
A. K. Ramasubramonian, D. Bugdayci, M. Karczewicz (Qualcomm)]
Discussed Sunday 21st 1145–1230 (GJS & JRO).
This document reports the results of the CE2.c sub-test, which studies modification of the chroma QP offset proposed in m37179 and used in the CrCb processing of the HDR/WCG ETM. As a result of the study, a modification of the encoder-side algorithm is proposed for the ETM. The proposal in
this document employs an independent multirange "adaptive transfer function" (ATF) (a.k.a.
reshaper, remapper) for each color component. It reportedly provides objective gain as well as
improvement in visual quality over the reference ETM design as well as over the CE1 anchor.
This is in "setting 0" of the ETM – i.e., it does not include cross-component processing and does
not have spatial effects. It uses a multi-segment TF, whereas the CE2 ETM encoder uses just a
single segment (one offset and scaling) for each chroma component. This uses 4 segments. The
ETM decoder software supports 32 segments.
Viewing was suggested to be needed after CE1 vs. CE2 testing.
The presentation deck was missing from the uploaded contribution, and was requested to be
provided in a revision of the contribution.
As of 2016-05-24, the requested new version of the contribution had not yet been provided.
JCTVC-W0101 HDR CE2: Report on CE2.a-1 LCS [A. K. Ramasubramonian, J. Sole,
D. Rusanovskyy, D. Bugdayci, M. Karczewicz (Qualcomm)]
Discussed Sunday 21st 1230–1245 (GJS & JRO).
This document reports the results of the study of CE2.a-1 on luma-based chroma scaling (LCS).
The chroma sample values are scaled by values that are dependent on the co-sited luma sample
values, and the scale values are computed based on a function that increases with luma values.
The document reports objective evaluation on the tested configuration.
As used, when the luma value increases, the chroma gain increases.
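An illustrative sketch of this behaviour is given below; the gain curve is an arbitrary monotonic example and does not reproduce the W0101 design.

    # Illustrative luma-based chroma scaling (LCS): gain increases with the co-sited luma value.
    import numpy as np

    def lcs_forward(chroma10, cosited_luma10, max_gain=2.0):
        """Scale 10-bit chroma (centred at 512) by a gain derived from the co-sited luma."""
        gain = 1.0 + (max_gain - 1.0) * (cosited_luma10.astype(np.float64) / 1023.0)
        scaled = np.round((chroma10.astype(np.float64) - 512.0) * gain) + 512.0
        return np.clip(scaled, 0, 1023).astype(np.uint16)

    cb = np.array([[400, 600]], dtype=np.uint16)
    y = np.array([[100, 900]], dtype=np.uint16)
    print(lcs_forward(cb, y))  # the brighter co-sited luma yields the larger chroma excursion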
The cross-checker reported that, in terms of visual quality, this is equivalent to W0084. This may be explainable by the fact that W0101 uses the luma reshaper of ETM r1, whereas W0084 has an improved method of determining the luma reshaper. It could potentially be combined on top of W0084.
The cross-checker said this did not seem visually better than W0084, although further study for a
future combination may be desirable.
The presentation deck was missing from the uploaded contribution, and was asked to be
provided in a revision.
As of 2016-05-24, the requested new version of the contribution had not yet been provided.
5.2.3 CE2 cross checks (6)
Discussed Fri 19:00 (GJS & JRO).
JCTVC-W0112 HDR CE2: crosscheck of CE2.a-2, CE2.c, CE2.d and CE2.e-3 (JCTVC-W0084)
[C. Chevance, Y. Olivier (Technicolor)] [late]
This document reports on cross-checking of core experiment HDR CE2.a-2, CE2.c, CE2.d and
CE2.e-3, described in document JCTVC-W0084. It is reported that objective results are perfectly
matching. It is also reported that the overall visual quality compared to the (latest) anchors is
improved. Some comments on the differences of bitrate repartition compared to the anchors are
also made.
According to the HDR CE2 plan, subjective viewing was conducted on a SIM2 display. The evaluation was performed primarily in video mode, then in still picture mode to assess more specific details.
Compared to the CE1 - V3.2 anchors, the visual quality is reportedly better on the following
sequences:
- BalloonFestival: sharper on mountain, better on rope
- BikeSparklers (cut1, cut2): sharper everywhere (roof, ground)
- Hurdles: racetrack sharper
- Starting: sharper on grass, red line, roof, cameras, rope and pylon
- Market: wall and tower
Compared to the CE1 - V3.2 anchors, the visual quality is reportedly slightly better on the
following sequences (mostly when viewing in still picture mode, the difference being rather
difficult to catch in video mode):
- Showgirl (face)
- Stem_MagicHour (cut1, cut2, cut3): slightly sharper
- Stem_WramNight (cut1, cut2): slightly sharper
- garageExit: faces and ropes
Finally, an equivalent quality with the anchors is reported on:
- FireEater
- SunRise
In conclusion the visual quality is reported to be globally better than HDR CE1 - v3.2 anchors.
An analysis of the bit rate repartition has also been made to check the differences compared to
the anchors.
In particular, it seems that more bits are spent on intra pictures with the proposal. One possible
explanation is that the proposed reshaping by itself results in distributing more bits in intra
pictures, because of a larger use of codewords range, which is then compensated by a lower cost
spent for inter pictures. The improvement of the intra frames quality generally leads to a quality
improvement on the entire sequence, especially for sequences with low amplitude global motion.
The bit rate repartition actually strongly depends on the automatic reshaping function and on the
way the codewords are redistributed by the reshaping.
More bits were devoted to the I picture than for the anchor. The bit cost difference with anchors,
per frame, is diagrammed below.
There was concern expressed about whether a visual improvement relative to the anchor might
be related to the bit allocation effect illustrated above.
JCTVC-W0113 HDR CE2: Crosscheck of CE2.a-1 LCS (JCTVC-W0101) [C. Chevance,
Y. Olivier (Technicolor)] [late]
JCTVC-W0121 HDR CE2: Cross-check report of JCTVC-W0033 [T. Lu, F. Pu, P. Yin (Dolby)]
[late]
JCTVC-W0122 HDR CE2: Cross-check report of JCTVC-W0097 [T. Lu, F. Pu, P. Yin (Dolby)]
[late]
JCTVC-W0128 HDR CE2: Cross-check of JCTVC-W0084 (combines solution for CE2.a-2,
CE2.c, CE2.d and CE2.e-3) [D. Rusanovskyy (Qualcomm)] [late]
JCTVC-W0131 HDR CE2: Cross-check report of CE2.a-3, CE2.c and CE2.d experiments
(JCTVC-W0093) [C. Chevance, Y. Olivier (Technicolor)]
5.3 HDR CE3: Objective/subjective metrics (3)
Initially discussed Fri 1930 (GJS & JRO).
5.3.1 CE3 summary and general discussion (1)
JCTVC-W0023 Report of HDR Core Experiment 3 [V. Baroncini, L. Kerofsky, D. Rusanovskyy]
The exploratory experiment 3 (CE3) was established at the 113th MPEG meeting (N15800) with
the purpose of understanding various proposed objective video quality metrics. Two categories
of objective metrics were previously identified based on sensitivity to luma or colour distortions.
Software for additional objective metrics was collected. The details of the experiments that were conducted are provided in ISO/IEC JTC1/SC29/WG11 N15457.
Currently in MPEG, an objective metric for HDR/WCG coding which correlates well with
subjective results is absent. Separate classes of metrics corresponding to "luminance only" and
"colour" specific metrics have been proposed. This CE served to examine the performance of
these metrics. The lack of subjective data hinders the process of examining objective metrics. It was realized that the subjective data recorded along with the CfE responses could be leveraged to help understand the objective metrics. The lab in Rome had access to the CfE response source data as well as subjective evaluations. They graciously offered to assist with leveraging this data to understand the objective metrics.
Four luminance-specific and four colour-specific objective metrics were identified as of interest. Software for calculation of these objective metrics was collected. The original plan was for executables only to be provided for the HDR-VDP2 and HDR-VQM metrics. Software for the other metrics was delivered as an extension to the HDRTools package. With the exception of HDR-VDP2, software for these algorithms was distributed following the process developed to share CE software via the MPEG ftp site file exchange.
An update on progress with the generation of objective data and corresponding subjective results
was given at the interim meeting held January 12–14 in Vancouver, Canada. Concern was raised about both processing speed and the exact details of the metrics under examination. A desire for publicly available C code for the metrics was an outcome of this discussion. Several highlights of the discussion are repeated below.
- Samsung has kindly publicly released C++ code for the HDR-VQM metric (for the HDRtools software package).
- The Dolby VDP2 implementation has been verified by Arris. A request was made to Dolby to examine release of their C++ implementation ASAP.
- EPFL has completed a subjective evaluation of the CfE anchors and will have an input contribution for the San Diego meeting. EPFL will release the MOS scores for independent cross correlation to the objective metrics.
- UBC has indicated the VIF C++ package would be released by the San Diego meeting.
- UBC also has MOS scores that they are willing to release.
Based on discussion at this interim meeting, several contributions on objective metrics were
expected at the San Diego meeting. Review of these contributions was expected to be a major
source of discussion. The following contributions to the San Diego meeting were identified as of
interest to the objective metric topic of CE3:
- JCTVC-W0041/m37625 "HDR-VQM Reference Code and its Usage". Source: Samsung. Describes structure and use of the SW provided for the HDR-VQM objective metric.
- JCTVC-W0090 "HDR CE3: Results of subjective evaluations conducted with the DSIS method". Source: EPFL. Describes additional subjective testing done on HDR/WCG CfE responses. Replaces the previous paired comparison method with the DSIS method. Notes the results may be used to evaluate objective metrics.
- JCTVC-W0091 "HDR CE3: Benchmarking of objective metrics for HDR video quality assessment". Source: EPFL. Provides evaluation of a sample set of objective metrics using updated subjective test results. A subset of the metrics listed in CE-3 were considered. Objective luminance metrics examined include: tPSNR-Y, HDR-VDP-2, HDR-VQM, avLumaPSNR. Objective colour metrics examined include: avColourPSNR and PSNR-DE100. The VIF metric was also included in this study.
- JCTVC-W0115 "VIF Code for HDR Tools" (unavailable on the document repository as of 2016-02-18)
The following items were requested to be addressed in San Diego:
- Examine the HDR-VQM metric C++ code kindly provided by Samsung, trying to fix the problems encountered (cfg info for a correct execution; analysis of the speed and of the results).
- Ask the proponents to the CfE to provide the decoded files in .exr format (the HDR-VQM metric works on those). They could provide either the .exr files or the bitstreams and the scripts to decode them (no headache for the CE3 people in decoding stuff, please!!).
- Get bitstreams of the new anchors and run subjective assessment and metrics on those; FUB is volunteering to assess the new anchors; we need volunteers to run the other metrics.
VQM behaviour was reported as follows:
5.3.2 CE3 primary contributions (2)
JCTVC-W0090 HDR CE3: Results of subjective evaluations conducted with the DSIS method
[M. Rerabek, P. Hanhart, T. Ebrahimi (EPFL)] [late]
(Not reviewed in detail.)
This contribution reports the details and results of a second subjective quality evaluation
conducted at EPFL to assess responses to the Call for Evidence (CfE) for HDR and WCG Video
Coding. Unlike the contributors' previous evaluations, the DSIS method was used for these
assessments instead of a paired comparison to an Anchor. The same test material as in our
previous evaluations was used in this subjective assessment. Results show similar trends as in
prior work, but less statistically significant differences are observed because of the lower
discrimination power of DSIS approach when compared to paired comparison, as well as the
color differences induced by the viewing angle dependency of the Sim2 monitor. These results
can also be used to evaluate the performance of objective quality metrics.
JCTVC-W0091 HDR CE3: Benchmarking of objective metrics for HDR video quality assessment
[P. Hanhart, T. Ebrahimi (EPFL)] [late]
(Not reviewed in detail.)
This contribution reports the performance of several objective metrics for HDR quality
assessment. To benchmark the metrics, we used as ground truth the subjective scores collected in
two different evaluations conducted at EPFL. Three different analyses are performed to assess
the performance of the different metrics. Based on the results, the contributor recommended using HDR-VDP-2 or PSNR-DE1000, which reportedly show similar performance, with the latter having lower computational complexity while considering colour differences.
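As a minimal illustration of this kind of benchmarking (made-up numbers, Pearson correlation only; the actual study also fits mappings and reports further indices), a short sketch follows.

    # Sketch of correlating an objective metric against subjective MOS (hypothetical data).
    import numpy as np

    mos = np.array([4.2, 3.8, 3.1, 2.5, 1.9, 4.6])            # subjective scores
    metric = np.array([42.0, 40.5, 37.2, 35.0, 31.8, 44.1])   # objective scores for the same clips

    pearson = np.corrcoef(mos, metric)[0, 1]
    print(f"Pearson correlation: {pearson:.3f}")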
5.3.3 CE3 cross checks (0)
5.4 HDR CE4: Consumer monitor testing (1)
Initially discussed Fri 2000 GJS & JRO
5.4.1 CE4 summary and general discussion (1)
JCTVC-W0024 Report of HDR CE4 [P. Topiwala (FastVDO), E. Alshina (Samsung), A. Smolic
(Disney), S. Lee (Qualcomm)]
HDR CE4 had the mandate to investigate the potential suitability of consumer HDR televisions
for use in our investigations of HDR/WCG video compression. The CE4 Description is included
below as an Appendix for the convenience of the JCT-VC. It is remarked that this CE is not a
traditional contest between proposals, but rather an open and collaborative investigation, more
like an AHG. Moreover, due to the need for device testing, the work of the CE can mainly be performed at meetings.
Legacy receivers have been in the range of 100 nits of peak brightness, supporting 8-bit
video with the BT.709 colour space. HDR can be viewed as video in the 1000+ nits range,
supporting 10+ bits/sample, and potentially supporting the BT.2020 colour space. The work in
the MPEG AhG on HDR/WCG has been exclusively using two monitors for all subjective
assessments: (1) the Dolby Pulsar monitor, which is a private model internal to Dolby and
unavailable; and (2) the Sim2, which at approximately $40K is a reference monitor that is
unlikely to reach consumers soon. Nevertheless, the work of the AhG aimed to provide a coding
technology for consumer services to the home. The only way to assess the value of any design
for suitability for consumer services is therefore to actually test on devices that may be deployed
in the home. OLED reference monitors were included in this category for test purposes.
Volunteer Disney Research, using proprietary algorithmic methods developed at their labs, created 1k nit regraded versions of a couple of test sequences. These sequences have been uploaded to the MPEG Repository, under the \Explorations\HDR\1k Nit grading directory.
Volunteer Samsung produced some test output sequences by using the latest source codes
produced by CE1 and CE2, with a view to compare them, both on professional monitors and
consumer devices.
The interim meeting of the HDR activity in Vancouver afforded some viewing of both the
sequences produced by Samsung, as well as the 1k nit data produced by Disney. The devices
available at the Telus labs included the Samsung JS9500. A number of impressionistic
observations were made in Vancouver in a non-systematic, non-scientific way. It was noted that
the 1k nit data appeared not to be a close match to the 4k nit originals (they appeared more
washed out). In fact, Disney reports that it was part of the intent in the regrading to make the 1k
nit data paler, and somewhat less colourful, but closer to a cinematic look. If there are other
preferences on how the regraded data should appear, it would be meaningful to collect such
opinion if possible. It was also found that the original 4k nit data, as shown on the 1k nit max
JS9500 device, appeared somewhat washed out and "non-HDRish". These observations required
further review of the test setup. Upon investigation, E. Alshina concluded that the way the video
was fed to the device did not indicate how to perform the necessary display adaptation, resulting
in brightness clipping and therefore suboptimal performance. So these experiments would need to be repeated before adequate results could be obtained.
E. Alshina of Samsung brought coded sequences to Vancouver according to CE1 and CE2, using the latest available source codes (e.g., v3.2 for CE1), for comparison on the consumer-class JS9500, and general impressions were gathered in a non-scientific viewing. It seemed to be the general opinion that on the whole both were about equal in quality, while there was some opinion that CE1 was better (opinion the other way seemed less common).
It was anticipated that activities similar to those at the Vancouver meeting would be conducted in
San Diego, potentially as a breakout group. It was also anticipated to propose continuing this
activity beyond this San Diego meeting.
It was agreed to have a side activity (coordinated by P. Topiwala) to try to arrange a similar test
of CE1 and CE2 results using a consumer monitor to help determine the ability to test with a
consumer device. Some viewing results were later reported informally as noted in section 1.15.
It was suggested to try to measure and study the response characteristics of some consumer
monitor(s) and professional monitor(s) and determine what characteristics exist and how this
may affect viewing results.
5.4.2 CE4 primary contributions (0)
5.4.3 CE4 cross checks (0)
5.5 HDR CE5: Colour transforms and sampling filters (6)
Initially discussed Saturday 1400 GJS & JRO.
5.5.1 CE5 summary and general discussion (1)
JCTVC-W0025 Report of HDR CE5 [P. Topiwala (FastVDO), R. Brondijk (Philips), J. Sole
(Qualcomm), A. Smolic (Disney)]
HDR CE5 had the mandate to investigate the impact of alternative colour transforms and
resampling filters as pre- and post-processing steps on the efficiency of HDR/WCG coding. In
addition, there was a mandatory transfer function, as well as an optional one, to incorporate in
the processing chain. Several proposals were submitted under this CE, investigating these
parameters.
A number of proposed tests were set up in this CE, stemming from the work presented at the
Geneva meeting. Three tests were performed – a Y′′u′′v′′ colour space proposed in W0069, a
Y′FbFr lifting-based colour space proposed in m37065, and constant-luminance colour space
from BT.2020 (see W0099) with different ordering of the application of the transformation
matrix and different scaling of the chroma – each tested with some alternative resampling filter.
The Y′FbFr proposal used a significantly different resampling filter than used in the anchor. The
source code for these experiments was released and uploaded to the MPEG repository, under
/Explorations/HDR/CE5/. As of the time of preparation of the CE5 report, the following
documents had been registered.
- JCTVC-W0069, HDR CE5: Report of Core Experiment CE5.3.1, R. Brondijk et al (Philips)
- JCTVC-W0055, HDR CE5: Report of CE5.3.2, W. Dai et al (FastVDO)
- JCTVC-W0070, CE5-related, xCheck of CE5.3.2
- JCTVC-W0080, Crosscheck of HDR CE5.3.1 (JCTVC-W0069)
- JCTVC-W0099, HDR CE5 test 3: Constant Luminance results
The cross-checkers did not assess the proposals for visual improvement – only looked for serious
visual problems.
One point that was raised on the reflector was by Tomohiro Ikai of Sharp. It is stated in the CE5
description that the colour transforms and resampling filters are the only tools under test; no
other parameters were to be varied. That is, to follow CE1 directly in other matters. (One
exception was already envisioned… that the Transfer Function could optionally be allowed to be
different from the ST 2084 TF, as an additional test.) Tomohiro Ikai observed that as we vary the
colour space instead of the usual Y′CbCr, a certain QP offset value may also have to be modified
for optimal performance. So it was suggested to allow that adjustment. A decision among
participants in the CE was reached to allow for that variation, but as an additional test.
The suggestion was that if gain is shown, we might liaise with other organizations to determine
their degree of interest in an alternative colour space.
It was asked whether the upsampling after decoding would need to be designed to match the
downsampling.
It was suggested that we should be cautious about considering the idea of defining new colour
spaces.
No immediate action was expected from the outcome of the informal viewing. Visual testing was
suggested to be done. This was agreed, but to be lower priority than other visual testing for this
meeting.
It was planned that informal viewing should be performed later during the week to demonstrate the claimed benefit versus the new CE1 anchors; it would however not be possible to identify whether the benefit would be due to the filters or to the colour transforms. Sampling filters could also be used with CE1 as optional pre- and post-processing.
5.5.2 CE5 primary contributions (3)
JCTVC-W0055 HDR CE5: Report of Experiment 5.3.2 [W. Dai, M. Krishnan, P. Topiwala
(FastVDO)]
(Not reviewed in detail.)
This proposal presents two approaches to coding HDR video, by modifying certain components
of the anchor HDR video coding processing chain developed in CE1 (essentially an HDR10
compliant system). The following components are modified. The intermediate colour
representation is changed from Y‘CbCr, to Y‘FbFr, a proposed colour space. The resampling
filters used for 4:4:4->4:2:0->4:4:4 conversion in the anchor are replaced by a different proposed
set of fixed resampling filters. In addition to these changes, the two variants of the proposal,
herein called FastVDO_a, and FastVDO_b, arise from the use of two different transfer functions:
(a) ST.2084, and (b) a Philips-developed transfer function. It is asserted that both systems
perform well, with subjective visual quality equal to or superior to the output of either CE1
(anchor) or CE2 (reshaper). Moreover, by testing with two different transfer functions, it is
asserted that the tools proposed meet the criteria of modularity and adaptability, in addition to
providing performance value. The developed system reportedly constitutes a pre- and post-processing system, which acts outside of an application of the core HEVC Main10 video coding
profile. Reported objective results for these systems include: (a) FV_a, Overall results for
DE100, MD100, PSNRL100 were +16.5%, +48.8%, +5.8% ; (b) FV_b, Overall results for
DE100, MD100, PSNRL100 were reportedly +6.1%, -28.3%, +28.6%, respectively. It is asserted
(and there appears to be fairly broad agreement) that such objective metrics are not
very predictive of visual quality in HDR coding studies. It is furthermore asserted that the
proposed coding scheme in fact provides equal or superior visual quality to the output of both
CE1 and CE2.
JCTVC-W0069 HDR CE5: Report on Core Experiment CE5.3.1 [R. Brondijk, W. de Haan,
J. Stessen (Philips)]
(Not reviewed in detail.)
This document describes the core experiment CE5.3.1. In this core experiment, the quality of
Y′′u′′v′′ is investigated when used with the ST.2084 instead of the Philips Transfer Function. The
purpose is to compare the results of this experiment to other pixel representations identified in
CE5.
JCTVC-W0099 HDR CE5 test 3: Constant Luminance results [J. Sole, D. Rusanovskyy,
A. Ramasubramonian, D. Bugdayci, M. Karczewicz (Qualcomm)]
(Not reviewed in detail.)
This document reports results of CE5 test 3 on a Constant Luminance (CL) approach
discussed in the MPEG contributions m36256 and m37067. The standard NCL chroma weights
and another set of weights are tested. There is no sequence or frame level optimization of the
proposed CL approach: identical CL coefficients are applied throughout all the test sequences.
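For reference, the distinction exploited by a constant-luminance approach can be sketched as below, assuming the BT.2020 luma weights; the transfer function is replaced by a simple power law purely for illustration (ST 2084 would be used in the HDR context).

    # Constant-luminance (CL) vs non-constant-luminance (NCL) luma derivation (illustration).
    KR, KG, KB = 0.2627, 0.6780, 0.0593      # BT.2020 luma weights

    def tf(x):                               # placeholder transfer function
        return x ** (1.0 / 2.4)

    def luma_ncl(r, g, b):
        """NCL: the weights are applied after the transfer function."""
        return KR * tf(r) + KG * tf(g) + KB * tf(b)

    def luma_cl(r, g, b):
        """CL: the transfer function is applied to the true luminance."""
        return tf(KR * r + KG * g + KB * b)

    print(round(luma_ncl(0.9, 0.1, 0.1), 4), round(luma_cl(0.9, 0.1, 0.1), 4))  # differ for saturated colours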
5.5.3 CE5 cross checks (2)
JCTVC-W0070 CE5-related: xCheck of CE5.3.2 [R. Brondijk, R. Goris (Philips)] [late]
JCTVC-W0080 Crosscheck of HDR CE5.3.1 (JCTVC-W0069) [J. Samuelsson, M. Pettersson,
J. Strom (Ericsson)] [late]
5.6 HDR CE6: Non-normative post processing (10)
Initially discussed Saturday 1500 GJS & JRO.
5.6.1 CE6 summary and general discussion (1)
JCTVC-W0026 Report of HDR Core Experiment 6 [R. Brondijk, S. Lasserre, D. Rusanovskyy,
Y. He]
This CE was originally planned for processing that is performed, after decoding, in the 4:4:4
space. However, some proposals in the CE used output in the 4:2:0 space.
CE test 4.1 | Contribution: JCTVC-W0103 | Proponent: Qualcomm | Cross-checkers: Philips (JCTVC-W0109), BBC (JCTVC-W0125)
Summary: Technology based on m37064: study of reshaper in 4:4:4. Dynamic range adjustment with signalling of parameters for HDR reconstruction through SEI message. Has concept of decoder-side content DR adaptation for the coding space, re-expanded to destination DR.
CE test 4.2 | Contribution: JCTVC-W0034 | Proponent: InterDigital | Cross-checker: Samsung (JCTVC-W0123)
Summary: Colour enhancement (m37072): modifying decoded chroma (in 4:2:0 domain) based on luma signal. No SDR concept.
CE test 4.3 | Contribution: JCTVC-W0063 | Proponent: Philips | Cross-checkers: Technicolor (version 1, JCTVC-W0114), Qualcomm (version 2)
Summary: Dynamic range adaptation and resaturation (m36266) in 4:2:0. Version 1: HM 16.7 fixed QP. Version 2: using CE1 adaptive QP. Has concept of decoder-side content DR adaptation for the coding space, re-expanded to destination DR.
CE test 4.6a | Contribution: JCTVC-W0063 | Proponent: Philips | Cross-checker: FastVDO (JCTVC-W0073)
Summary: Technologies based upon m37285: m36266, using an SEI message described in m37285. Similar to 4.3, except signalling. Dynamic range adaptation, based on application of 1D-mapping functions to the YCbCr and/or RGB components, and colour correction consisting of the modulation of the CbCr and/or RGB component samples by a 1D scaling function depending on the Y component sample and signalled as presented in m37245. Has concept of decoder-side content DR adaptation for the coding space, re-expanded to destination DR.
CE test 4.6b | Contribution: JCTVC-W0059 | Proponent: Technicolor | Cross-checker: Philips (JCTVC-W0064)
Summary: Technologies based upon m37285: the test model, with usage of an SEI message. Has concept of decoder-side content DR adaptation for the coding space, re-expanded to destination DR.
The "concept of decoder-side content DR adaptation" is designed to be used for adapting the
content for display with a different HDR display DR, different environment light level, user-adjusted light level, or SDR display DR.
The bitstream VUI could indicate an SDR space or some other space for default interpretation in
the absence of adaptation metadata.
It was asked how this capability could be demonstrated – and the coordinator responded that the
adaptation to a different display could be demonstrated using various displays.
SMPTE 2094 (see incoming SMPTE LS, m38082, TD 387-GEN for VCEG) is being developed
to provide some display adaptation metadata. The nominal purpose of SMPTE 2094 is, at least
primarily, to enable adaptation of a video signal to a smaller colour volume (e.g., lower dynamic
range).
W0063 is different.
5.6.2 CE6 primary contributions (4)
JCTVC-W0034 CE-6 test4.2: Colour enhancement [Y. He, L. Kerofsky, Y. Ye (InterDigital)]
(Not reviewed in detail.)
The document reports simulation results of test 4.2 defined in CE6. It investigated the
performance of applying a color enhancement process as post processing. Compared to the
HDR/WCG anchor v3.2, an average BD rate reduction of 6.4% based on the DE100 metric is
reported. BD rate reduction is also reported for other objective metrics that measure color fidelity.
A BD rate increase of 0.5% is reported for metrics that measure luma only.
JCTVC-W0059 HDR CE6: report of CE6-4.6b experiment (ETM using SEI message)
[E. Francois, C. Chevance, Y. Olivier (Technicolor)]
(Not reviewed in detail.)
This document reports results from experiment HDR CE6-4.6b. The experiment aims at
replacing the SPS/PPS signalling used in the ETM to convey the reshaper metadata with an SEI
container. The tests are made in reshape setting mode 1, focused on SDR backward compatibility,
with the same algorithm as in CE2.b-1. It is reported that the same performance is obtained as in CE2.b-1.
JCTVC-W0063 HDR CE6: Core Experiments 4.3 and 4.6a: Description of CE6 system in 4:2:0
and with automatic reshaper parameter derivation [W. de Haan, R. Brondijk, R. Goris, R. van
der Vleuten (Philips)]
Discussed Tues 1520 GJS & JRO
This has a backward-compatible aspect, but is also intended to provide enhanced HDR quality
relative to "HDR10".
This document describes the operation of an adapted version of an earlier system as proposed for
the CfE by Philips (M36266). It now uses automatic reshaper parameter derivation, internal
processing in the 4:4:4 domain, with output in the 4:2:0 domain.
Two variants of the system are described in this contribution (W0063), depending on the transfer
of the reshaper curve. In version a), the curve shape is transferred using parameters (CE6 4.3). In
version b), the curve shape is transferred using a piecewise linear approximation of the curve itself
(CE6 4.6a).
It is reported that the newly adapted system performs similarly to the original CfE system, and it is
shown that both version a) and version b) of the proposal also perform similarly quality-wise, but
that version a) is more efficient from a transmission bandwidth point of view.
This is about "bitstream backward compatibility".
It is also in the category of "decoder-side content DR/CG adaptation within an HDR context"
(but this is not described in the contribution).
The scheme involves decoder-side inverse tone mapping controlled by parameters sent by the
encoder. The output is in the 4:2:0 domain. This tone mapping is somewhat more complex than,
e.g., the schemes tested in CE1.
At the decoder end, the method determines, for each pixel, individual multiplication factors that
convert SDR to HDR in YUV 4:2:0. To determine these factors, the pixel is upconverted to 4:4:4
and RGB, where tone mapping and scaling are performed. This is generally more complex than a plain
mapping operation as in the CE2 methods.
The configuration shown here was intended for backward compatibility and uses automatic determination of
the mapping, but the scheme could also be used for HDR-only purposes.
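A rough structural sketch of this kind of decoder-side processing is given below; it illustrates the description above, not the W0063 algorithm itself, and the reshaper curve, colour conversion and upsampling helpers are assumed placeholders.

```python
import numpy as np

# Structural sketch (illustrative only): a per-pixel gain is derived in the
# RGB 4:4:4 domain and then applied to the decoded SDR samples in YCbCr 4:2:0.
# Assumes even-sized 2-D sample arrays; helper callables are placeholders.

def inverse_tone_map_gain(luminance, curve):
    # Placeholder: look up a gain from a transmitted reshaper curve,
    # here modelled as a 1D LUT indexed by relative luminance.
    idx = np.clip((luminance * (len(curve) - 1)).astype(int), 0, len(curve) - 1)
    return curve[idx]

def sdr_to_hdr_420(y, cb, cr, curve, ycbcr_to_rgb, upsample):
    # 1) Upconvert chroma to 4:4:4 and convert the pixel to RGB.
    cb444, cr444 = upsample(cb), upsample(cr)
    r, g, b = ycbcr_to_rgb(y, cb444, cr444)
    # 2) Derive a per-pixel multiplication factor from the RGB/luminance value.
    lum = 0.2627 * r + 0.6780 * g + 0.0593 * b
    gain = inverse_tone_map_gain(lum, curve)
    # 3) Apply the factor back in the YCbCr 4:2:0 domain
    #    (gains for chroma taken on the 4:2:0 sampling grid).
    return y * gain, cb * gain[::2, ::2], cr * gain[::2, ::2]
```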
JCTVC-W0103 HDR CE6: Test 4.1 Reshaper from m37064 [D. B. Sansli,
A. K. Ramasubramonian, D. Rusanovskyy, J. Sole, M. Karczewicz (Qualcomm)]
Discussed Tues 1600 GJS & JRO
This has a backward-compatible aspect.
This document reports the results of CE6 sub-test 4.1 that studies the Backwards Compatible
configuration of the technology proposed in m37064. The technology is based on the dynamic
range adjustment (DRA) proposed in m37064 which is similar to mode 0 in ETM and provides a
method for a guided, end-user side dynamic range conversion. Parameters of the DRA are
proposed to be conveyed to the decoder through an SEI message. Objective and subjective test
results are provided for an example application of reconstructing an HDR signal from
compressed SDR bitstreams. It is proposed to adopt the technology described in this
contribution, to provide a guided, end-user side dynamic range conversion.
This has some aspect to support "bitstream backward compatibility".
It could be configured for other use cases.
It has a piecewise linear mapping of decoded colour components separately (e.g. like CRI with a
unity matrix).
Asserted to be possible to use for HDR to SDR conversion, SDR to HDR conversion, and other
conversions (guided by the metadata).
It uses two building blocks of dynamic range adaptation: one in YUV 4:2:0 after decoder output
and another after RGB 4:4:4 conversion. A new SEI message is requested. Hypothetically,
this could be implemented using CRI.
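For illustration only (not the proposal's actual parameters or code), a piecewise-linear per-component mapping of the kind characterised above can be sketched as follows; the pivot points, scales and offsets are made-up values.

```python
import numpy as np

# Minimal sketch of a piecewise-linear dynamic range adjustment (DRA) applied
# independently to each colour component ("like CRI with a unity matrix").
def piecewise_linear_dra(x, pivots, scales, offsets):
    """Map sample values x through a piecewise-linear function.

    pivots  : increasing segment boundaries, e.g. [0.0, 0.25, 0.75, 1.0]
    scales  : one multiplicative scale per segment
    offsets : one additive offset per segment
    """
    x = np.asarray(x, dtype=float)
    seg = np.clip(np.searchsorted(pivots, x, side="right") - 1, 0, len(scales) - 1)
    return scales[seg] * (x - np.take(pivots, seg)) + offsets[seg]

# Example usage with illustrative parameters for a single component:
pivots = np.array([0.0, 0.25, 0.75, 1.0])
scales = np.array([1.2, 1.0, 0.8])
offsets = np.array([0.0, 0.3, 0.8])   # chosen so the mapping is continuous
y_out = piecewise_linear_dra([0.1, 0.5, 0.9], pivots, scales, offsets)
```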
5.6.3 CE6 cross checks (5)
JCTVC-W0064 HDR CE6-related: Cross Check of CE6.46b [R. Brondijk, W. de Haan (Philips)]
[late]
JCTVC-W0073 HDR CE6: Cross-check of 6.46a [W. Dai, M. Krishnan, P. Topiwala (FastVDO)]
[late]
JCTVC-W0109 CE6-related: xCheck of CE6 4.1 [Robert Brondijk (Philips)] [late]
JCTVC-W0114 HDR CE6: crosscheck of CE6.4.3 (JCTVC-W0063) [C. Chevance, Y. Olivier
(Technicolor)] [late]
JCTVC-W0123 HDR CE6: Cross-check report of test4.2: Colour enhancement (JCTVC-W0034)
[E.Alshina (Samsung)] [late]
JCTVC-W0125 CE6-related: Cross check report for Test 4.1 Reshaper from m37064
[M. Naccari (BBC)] [late]
5.7 HDR CE7: Hybrid Log Gamma investigation (4)
Initially discussed Saturday 1645 GJS & JRO.
5.7.1 CE7 summary and general discussion (1)
JCTVC-W0027 Report of HDR/WCG CE 7 on investigating the visual quality of HLG generated
HDR and SDR video [A. Luthra (Arris), E. Francois (Technicolor), L. van de Kerkhof (Philips)]
This document provides the report of the Core Experiment 7 investigating the visual quality of
Hybrid Log-Gamma (HLG) generated HDR and SDR video.
The exploratory experiment 7 (CE7) was established at the 113th MPEG meeting (N15800) with
the purpose of understanding various techniques used to generate video using the Hybrid Log-Gamma (HLG) transfer function and investigating the visual quality of the video.
Hybrid Log-Gamma (HLG) Transfer Function (TF) is designed to possibly provide backward
compatibility with the "legacy" SDR systems. At this stage, the "legacy" systems are taken to be
the SDR systems not needing colour gamut conversion (i.e., for example, not needing BT.2020
to BT.709 colour gamut conversion) but needing only the dynamic range conversion. If the
legacy SDR TV cannot process the BT.2020 "container", then it is assumed that the BT.2020
container is removed externally and the video is provided in an appropriate format. The details of
the experiments that were conducted are provided in ISO/IEC JTC1/SC29/WG11 N15800.
The initial version of N15800 lacked specific details. Those details were provided and uploaded
as the second version of N15800. Various issues associated with HLG were discussed on the
email reflector as well as during the HDR/WCG AHG's face-to-face meeting in Vancouver in
Jan 2016.
An overview of the motivation for development of HLG was given to the group by BBC during
the Vancouver meeting and uploaded as ISO/IEC JTC1/SC29/WG11 m37535 (JCTVC-W0037).
There was substantial discussion on the impact and interactions of various elements of the
OOTF. The utility of modifying the system gamma to change the peak brightness was discussed.
BBC reported on a subjective test where viewers modified the system gamma by a factor in order
to adapt the signal from one peak brightness to another. The value is dependent on the source and
target peak brightness values. It was observed that only 2–3% of TV programs go through a
reference display environment and only one of 24 in-house studios surveyed by BBC followed
BT.2035 lighting environment recommended practice.
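For background on the system-gamma discussion (an illustrative sketch following the ARIB STD-B67 style OOTF, not the BBC test procedure): the OOTF applies a power of the scene luminance as a common gain to all three linear components, so changing the system gamma re-grades the picture for a different peak brightness.

```python
import numpy as np

# Illustrative sketch of an HLG-style OOTF; values are normalised and the
# parameters are examples only.
def hlg_style_ootf(rs, gs, bs, peak_nits=1000.0, system_gamma=1.2):
    # Scene luminance from linear scene-referred components (BT.2020 weights).
    ys = 0.2627 * rs + 0.6780 * gs + 0.0593 * bs
    alpha = peak_nits  # nominal display peak luminance in cd/m^2
    gain = alpha * np.power(np.maximum(ys, 1e-6), system_gamma - 1.0)
    # The same gain is applied to all three components.
    return gain * rs, gain * gs, gain * bs
```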
BBC was requested to provide its ITU-R WP6C contribution on system gamma design for
monitors to the MPEG/JCT HDR ad hoc group (see notes below and JCTVC-W0110).
Arris presented a document during the Vancouver meeting on the backward compatibility of
HLG. The document was uploaded to the MPEG and JCT-VC repositories as ISO/IEC
JTC1/SC29/WG11 m37543 and JCTVC-W0035, respectively. Arris presented images that showed
CE7.1b looked noticeably washed out and suggested focusing on CE7.1a. CE7.1a uses gamma
processing in the YUV domain and CE7.1b uses gamma processing in the RGB domain. It was
also reported that there was a change in hue when the HLG processed video is shown on a legacy
SDR TV. The amount of change in hue was observed to be a function of the brightness of the
object. Its impact was that, as an object moved from a less bright area to a brighter area, the
displayed colour changed. This was further investigated. To be able to test it in a more controllable
environment, Arris generated colour bars that showed a similar hue-change behaviour, depending
on the brightness of the object, as seen in other video sequences. The EXR file
consisting of those colour bars was also provided on the CE7 reflector. The results are reported
in ISO/IEC JTC1/SC29/WG11 m37543 (JCTVC-W0035).
A demo for this was requested to be arranged and announced on our email reflector (targeting 1100
Sunday, if feasible).
Technicolor presented an informal document during the Vancouver meeting on the backward
compatibility of HLG – specifically whether the compatibility is content dependent. The
document was later completed and uploaded as document JCTVC-W0110. The test, done using
BT.709 container, was similar to BBC's, where viewers were asked to match SDR and HDR by
adjusting the system gamma. Technicolor showed selected ranges from gamma 1.2 to 1.6.
Technicolor reported that the system gamma value for an optimal SDR rendering is content
dependent, and not only dependent on the peak brightness and viewing environment. Bright
content would require smaller gamma values (close to 1.2) while dark content would need large
gamma values (above 1.4). In the discussions, BBC stated that gamma 1.6 is no longer the
recommended value. The second round of test done using BT.2020 container confirmed the
previous observations.
Philips presented a document (JCTVC-W0036 / m37602, which was later withdrawn) during the
interim meeting describing their crosscheck of CE7.1 and CE7.2. The document confirmed the
work of Technicolor and Arris. For 7.1.b, the SDRs looked washed out and there were noticeable
colour tone changes. The 7.1.a SDRs looked better than those of 7.1.b; however, their colours were
often too saturated (reds too red, greens too green and blues too blue). The compression
results vary: for some sequences 7.1.b has fewer issues, and for others 7.1.a has fewer issues.
At the lower gammas of CE 7.2, the oversaturation is less noticeable, and the errors in luminance
and colour are smaller. Cross-checking of the 7.2 experiments is provided by BBC in JCTVC-W0043.
It was also noted during the Vancouver meeting that the conversion from SDR BT.2020 to
BT.709 could be achieved by HDRTools with a two-step approach: conversion from SDR
BT.2020 to "EXR2020", then conversion of EXR2020 to SDR709. Philips made a fix to enable
direct conversion of SDR2020 to SDR709. There are still ongoing discussions on the reflector to
clarify how the conversion from SDR2020 to SDR709 has to be applied. An input document
(JCTVC-W0046) was provided as one of the suggested starting points, with the target of
providing a recommendation by the end of the JCT-VC meeting, for further visual quality
evaluations of the SDR produced by the various studied solutions (see the CE8 report for in-depth
coverage of the topic of SDR backward compatibility systems evaluation).
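As background for the conversion being discussed (an illustrative sketch, not the HDRTools implementation), a direct SDR BT.2020-to-BT.709 container conversion can be structured as below; the RGB-to-XYZ matrices are the commonly cited ones for the two primary sets, and exact values should be taken from the relevant Recommendations.

```python
import numpy as np

# Sketch of a direct SDR BT.2020 -> BT.709 conversion (illustrative only).
M709_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
M2020_TO_XYZ = np.array([[0.6370, 0.1446, 0.1689],
                         [0.2627, 0.6780, 0.0593],
                         [0.0000, 0.0281, 1.0610]])
M2020_TO_709 = np.linalg.inv(M709_TO_XYZ) @ M2020_TO_XYZ

def bt2020_to_bt709_sdr(rgb2020_nonlinear, gamma=2.4):
    # 1) Undo the SDR transfer function (plain power law as a proxy for the
    #    BT.1886 EOTF), 2) rotate primaries, 3) clip out-of-gamut values,
    #    4) re-apply the transfer function.
    lin = np.power(np.clip(rgb2020_nonlinear, 0.0, 1.0), gamma)
    lin709 = np.clip(lin @ M2020_TO_709.T, 0.0, 1.0)
    return np.power(lin709, 1.0 / gamma)
```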
Contributions:
• Results report
  o JCTVC-W0061/m37695 - CE7: Results Core Experiment 7.1 test a and b, M. Naccari, A. Cotton, M. Pindoria
  o JCTVC-W0079/m37717 - HDR CE7-related: Additional results for Experiment 7.2a, M. Naccari, M. Pindoria, A. Cotton (BBC)
• Comments/Observations
  o JCTVC-W0035/m37543 - Some observations on visual quality of Hybrid Log-Gamma (HLG) TF processed video (CE7), A. Luthra, D. Baylon, K. Minoo, Y. Yu, Z. Gu (Arris)
  o JCTVC-W0110/m37771 - HDR CE7: Comments on visual quality and compression performance of HLG for SDR backward compatible HDR coding system, E. Francois, Y. Olivier, C. Chevance (Technicolor)
  o JCTVC-W0119/m37979 - Some considerations on hue shifts observed in HLG backward compatible video, M. Pindoria, M. Naccari, T. Borer, A. Cotton (BBC)
• Cross-checking
  o JCTVC-W0043/m37630 - CE7: Cross-check report for Experiment 7.2, M. Naccari, M. Pindoria (BBC)
  o JCTVC-W0067/m37702 - HDR CE7: xCheck 1a 1b and Core Experiment 2a and 2b, Robert Brondijk, Wiebe de Haan, Rene van der Vleuten, Rocco Goris
• Others
  o JCTVC-W0046/m37680 - Recommended conversion process of YCbCr 4:2:0 10b SDR content from BT.709 to BT.2020 colour gamut, E. Francois, K. Minoo, R. van der Vleuten, A. Tourapis
It was suggested that full presentation of all related documents was not necessary.
5.7.2 CE7 primary contributions (1)
JCTVC-W0061 CE7: Results Core Experiment 7.1 test a and b [M. Naccari, A. Cotton,
M. Pindoria (BBC)]
(Not discussed in detail.)
This document summarizes results associated with Core Experiment 7 (CE7), which aims to
investigate the generation of Hybrid Log-Gamma (HLG) signals with the WG11 test material.
Two groups of experiments were performed: Experiment 7.1, where HLG signals are generated
according to what is described in m36249, and Experiment 7.2, where HLG signals are generated
using different values for the system gamma parameter. This document reports on the results and
experimental conditions used to generate the data for Experiment 7.1.
5.7.3 CE7 cross checks (2)
JCTVC-W0043 CE7: Cross-check report for Experiment 7.2 [M. Naccari, M. Pindoria (BBC)]
JCTVC-W0067 HDR CE7: xCheck 1a 1b and Core Experiment 2a and 2b [R. Brondijk, W. de
Haan, R. van der Vleuten, R. Goris (Philips)]
5.8 HDR CE8: Viewable SDR testing (1)
5.8.1 CE8 summary and general discussion (1)
JCTVC-W0028 CE8 Report [W. Husak, V. Baroncini]
Discussed Wed 1740 (GJS)
This CE had worked on the following:
• To develop a test methodology to evaluate the quality of SDR content generated from HDR content, with the HDR content as reference;
• To define the goal of the test and what the expected results should be;
• To define a test plan to evaluate the performance of the candidate proposals for an "SDR Viewable" production;
• To conduct a dry run and conclude on the relevance of the proposed test methodologies.
The model illustrated below was used.
[Block diagram: input HDR video → HDR analysis and processing → SDR (plus metadata) → HEVC 10-bit encoding → HEVC decoding → HDR reconstruction (using the metadata) → output HDR video; the decoded SDR video is also available as an output.]
A "single step approach" uses SDR and HDR displays in the same test session, asking viewers to
look for the ability of a tested SDR to approach the (original) HDR in quality.
A "two step approach" uses SDR display only

Selection of SDR examples by experts

Test with System-Under-Test by Naïve viewers

Test Method

Provide high example

Provide low example
 SUT is tested with the two
The two step approach gives the viewers an easier cognitive test.
This is for testing SDR quality. Testing of HDR quality is conducted separately.
SDR grades of content were made – see M37726 / JCTVC-W0087.
Test sequences were selected as tabulated below. All sequences are 1920x1080, YUV 4:2:0, BT.709, 10 bit.
S10: EBU_04_Hurdles_CG_1920x1080p_50_10_709_420.yuv, 50 fps, frames 0–499
S11: EBU_06_Starting_CG_1920x1080p_50_10_709_420.yuv, 50 fps, frames 0–499
S12: SunriseClip4000r1_CG_1920x1080p_24_10_709_420.yuv, 24 fps, frames 0–199
S13: GarageExitClip4000_CG_1920x1080p_24_10_709_420.yuv, 24 fps, frames 0–287
Open issues were reported as follows:
• Selection of low example
• Compressed or uncompressed tests
• Test timeline and logistics
5.8.2 CE8 primary contributions (0)
5.8.3 CE8 cross checks (0)
6 Non-CE technical contributions (54)
6.1 SCC coding tools (2)
JCTVC-W0076 Comments on alignment of SCC text with multi-view and scalable [C. Gisquet,
G. Laroche, P. Onno (Canon)]
Discussed Sunday 21st 1630 GJS.
At the 21st JCT-VC meeting in June 2015, disabling of weighted prediction parameters syntax
for CPR was adopted, by checking whether a reference picture has the same POC as the current
frame. It is asserted that the text specification modifications have consequences outside of the
SCC extension.
The problem was confirmed. A suggested fix in the discussion was to check for equality of the
layer ID.
This was further discussed after offline drafting to check draft text for that. See section 1.15.
JCTVC-W0077 Bug fix for DPB operations when current picture is a reference picture [X. Xu,
S. Liu, S. Lei (MediaTek)]
Initially discussed Sunday 21st 1645 GJS.
In the current HEVC SCC specification, the current decoded picture prior to the loop filtering
operations is used as a reference picture. This reference picture is also put and managed in the
decoded picture buffer (DPB), together with other decoded pictures. An asserted bug fix is
proposed in this contribution for the DPB operation such that before the start of decoding current
picture, one empty picture buffer is created when TwoVersionsOfCurrDecPicFlag is equal to 0;
alternatively, two empty picture buffers are created when TwoVersionsOfCurrDecPicFlag is
equal to 1.
Extensive discussion was conducted regarding whether the current draft text was really broken
and how to fix it. This was further discussed after offline study of the issue. See section 1.15.
6.2 HDR coding (35)
6.2.1 CE1 related (9)
JCTVC-W0039 Luma delta QP adjustment based on video statistical information [J. Kim, J. Lee,
E. Alshina, Y. Park (Samsung)]
Initially discussed Saturday 1800 GJS & JRO.
This contribution targets performance improvement of high dynamic range video
compression using existing HEVC tools. The contribution asserts that the performance can be
improved by adjusting the luma QP based on the video statistical properties. Depending on the
average of the luma components in a picture and the luma block average and variance, the
proposed algorithm adjusts the delta QP for each CTU. The performance of the algorithm was
evaluated with the same method of "SuperAnchor v3.2" and reportedly shows consistently better
compression performance.
Additional criteria included the brightness of the entire picture and the variance of luma at the CTU
level. By this, in some cases the adjustment is disabled either at the frame or CTU level. It was
claimed that some improvement in dark areas is visible relative to the new anchors of CE1.
Several experts expressed the opinion that the idea is interesting.
Another expert also suggested that taking a colour QP offset into account for dark areas
would be another option for improvement. It was asked what scope of techniques should be
considered:
• This looks at whole-frame luma, which seems fine (note that CE2 also includes whole-frame analysis).
• It also looks at local variance, which was previously discouraged. It was remarked that the variance is only used to disable an adjustment that would otherwise be made based on a local average. It was also commented that the spirit is to compensate for what is done by the colour transfer function.
The presentation deck was uploaded in a -v2 version of the contribution.
It was initially agreed that we probably want to include this in a CE or go ahead and adopt it as
the new anchor. Viewing was also requested.
JCTVC-W0081 Crosscheck of Luma delta QP adjustment based on video statistical information
(JCTVC-W0039) [J. Samuelsson, J. Ström, P. Hermansson (Ericsson)] [late]
JCTVC-W0054 AHG on HDR and WCG: Average Luma Controlled Adaptive dQP [A. Segall,
J. Zhao (Sharp), J. Strom, M. Pettersson, K. Andersson (Ericsson)]
Initially discussed Saturday 1830 GJS & JRO.
This document describes the average luma-controlled adaptive dQP method that has been part of the
anchor generation process since anchor v3.0, as outlined in JCTVC-W0021/m37605. The main
idea behind the method has previously been described partly in m37439, but this document
provides implementation details that have previously only been communicated over email and/or
as part of the software packages associated with anchors v3.0, v3.1 and v3.2.
It was also commented that the spirit is to compensate for what is done by the colour transfer
function (in the way that its effect differs from that for SDR).
Some possible modifications discussed:
• It was noted that lambda can also be used to provide some amount of intermediate points between QP values.
• It was noted that chroma is not considered in this technique.
• Different block sizes for the adaptation.
It was agreed that the information about the anchor encoding technique should be included in the
CTC encoding algorithm adjustment, and perhaps in the draft TR.
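A minimal sketch of the general approach is given below; the thresholds and dQP values are illustrative assumptions only, with the actual anchor table being the one specified in JCTVC-W0021/m37605.

```python
import numpy as np

# Minimal sketch of an average-luma-controlled adaptive dQP in the spirit of
# the anchor method described above (illustrative thresholds and values).
LUMA_THRESHOLDS = np.array([301, 400, 500, 600, 700, 800, 900], dtype=float)
DQP_VALUES      = np.array([3, 2, 1, 0, -1, -2, -3, -4])  # one entry per band

def ctu_delta_qp(luma_block_10bit):
    # Brighter CTUs (higher average PQ code value) get a more negative dQP,
    # i.e. finer quantisation, compensating for the PQ transfer characteristic.
    avg = float(np.mean(luma_block_10bit))
    return int(DQP_VALUES[np.searchsorted(LUMA_THRESHOLDS, avg)])
```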
JCTVC-W0056 CE1-related: LUT-based luma sample adjustment [C. Rosewarne, V. Kolesnikov
(Canon)]
Discussed Saturday 1900 GJS & JRO
The 'Common test conditions for HDR/WCG video coding experiments' define an HDR anchor
(at version 3.2) that includes a method known as 'luma sample adjustment'. This method
compensates for a shift in luminance that occurs, e.g., when downsampling from 4:4:4 to 4:2:0
using non-constant luminance and YCbCr. The method performs an iterative search to determine
each luma sample value, which is asserted to be overly complex for real-time implementations.
This contribution shows a method for performing 'luma sample adjustment' that replaces the
iterative search with a LUT-based approximation, whereby a second-order model is applied to
predict the adjusted luma sample, with the coefficients for the model obtained from the LUT.
tPSNR values when no luma sample adjustment is applied are 52.2 dB, 62.6 dB and 43.3 dB in
X, Y and Z channels, respectively. The corresponding values for the v3.2 anchor are 53.8 dB,
69.8 dB and 43.1 dB. With the LUT-based method, the values are 53.3 dB, 64.4 dB and 43.0 dB.
It was agreed to study this in an AHG on luma adjustment for chroma subsampling methods.
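For reference, the iterative search that the LUT approximates can be outlined as below (an illustrative sketch assuming a monotone luma-to-luminance relation; the anchor's actual conversion chain is abstracted into a placeholder function).

```python
# Hedged sketch of a luma-adjustment style search: for each pixel, find the
# luma code value whose reconstructed linear luminance best matches the
# target luminance, given the already subsampled/upsampled chroma.
def adjust_luma(target_luminance, cb, cr, ycbcr_to_linear_luminance,
                bit_depth=10):
    lo, hi = 0, (1 << bit_depth) - 1
    # Linear luminance increases monotonically with the luma code value for
    # fixed Cb/Cr, so a bisection search converges in about bit_depth steps.
    while lo < hi:
        mid = (lo + hi) // 2
        if ycbcr_to_linear_luminance(mid, cb, cr) < target_luminance:
            lo = mid + 1
        else:
            hi = mid
    return lo
```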
JCTVC-W0052 Enhanced Luma Adjustment Methods [A. M. Tourapis, Y. Su, D. Singer (Apple
Inc)]
Discussed Saturday 1915 GJS & JRO
Changes are proposed for the luma adjustment method that is currently used for the creation of
the anchor video material used in the JCT-VC HDR/WCG video standardization development
process. These changes reportedly result in considerable speedups in the conversion process,
with little if any impact in quality. Alternative implementations that try to emphasize other video
quality aspects and not only luminance, are also presented.
The contribution also proposes changes to micrograding for XYZ and RGB accuracy
optimization by multi-component optimization.
It was agreed to replace the anchor generation process with the bounded luma micrograding
method using a LUT (10×10⁶ elements).
It was agreed to study the RGB and XYZ aspects in an AHG.
JCTVC-W0107 Closed form HDR 4:2:0 chroma subsampling (HDR CE1 and AHG5 related)
[Andrey Norkin (Netflix)]
Discussed Saturday 2000 GJS & JRO
Two algorithms are proposed for removal of colour artefacts in saturated colours of HDR video
that appear in non-constant luminance Y′CbCr 4:2:0 colour subsampling. Both proposed
algorithms perform calculations in one step, which reportedly results in lower complexity
compared to the luma micro-grading algorithm that is currently used for HDR anchor generation.
Algorithm 1 is reported to produce higher average PSNR and DE100 numbers than the current
HDR anchor, whereas showing smaller numbers on the tPSNR and L0100 measures. The
performance of algorithm 2 across the studied metrics is claimed to resemble performance of the
current HDR anchor. It is also claimed that both algorithms improve the subjective quality of the
sequences with the described artefacts, whereas algorithm 1 avoids certain colour shifts possible
in the anchor.
Regarding the subjective quality comparison, a participant remarked that viewing the result using
tone mapping may cause artefacts that do not occur with viewing on an HDR monitor.
A participant suggested that, since this method has a performance loss, it should be considered
as a complexity reduction method to document and study, but not as a replacement of the anchor
method.
It was also suggested that this could be used in an iterative method to speed it up.
It was agreed to study this in an AHG.
JCTVC-W0111 Cross-check of JCTVC-W0107: Closed form HDR 4:2:0 chroma subsampling
[A. Tourapis (Apple)] [late]
JCTVC-W0051 Enhanced Filtering and Interpolation Methods for Video Signals [A. M. Tourapis,
Y. Su, D. Singer (Apple Inc)]
Discussed Sunday 21st 0900 GJS & JRO
New adaptive filtering mechanisms for the 4:4:4 to 4:2:0 and 4:2:0 to 4:4:4 conversion processes
are introduced in HDRTools. Two classes of filters were introduced: a parallel multi-filter
scheme that tries to determine the "best" filter, given a criterion for each column/row/segment,
and schemes that adaptively select the length of the filters based on local colour variations/edges.
Currently, the JCT-VC HDR/WCG activity employs relatively simplistic filtering mechanisms
when converting 4:4:4 video signals to 4:2:0 and vice versa. In particular, a 3-tap linear filter
with coefficients [1 6 1]/8 is employed for the down-conversion process, while for up-conversion
the Lanczos2 4-tap filter with coefficients [-1 9 9 -1]/16 is used. Both of these filters, and the
down-conversion and up-conversion processes, were applied on fixed-precision data.
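A minimal sketch of these fixed filters, applied along one dimension with simple edge replication (illustrative only, not the HDRTools implementation):

```python
import numpy as np

# Sketch of the fixed chroma resampling filters named above, in one dimension.
DOWN_TAPS = np.array([1.0, 6.0, 1.0]) / 8.0          # 4:4:4 -> 4:2:0
UP_TAPS   = np.array([-1.0, 9.0, 9.0, -1.0]) / 16.0  # 4:2:0 -> 4:4:4

def downsample_1d(x):
    padded = np.pad(x, 1, mode="edge")
    filtered = np.convolve(padded, DOWN_TAPS, mode="valid")
    return filtered[::2]                               # keep every other sample

def upsample_1d(x):
    out = np.empty(2 * len(x))
    out[0::2] = x                                      # co-sited samples copied
    padded = np.pad(x, 2, mode="edge")
    for i in range(len(x)):                            # interpolated samples
        out[2 * i + 1] = np.dot(UP_TAPS, padded[i + 1:i + 5])
    return out
```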
In this contribution, several adaptive filtering mechanisms are presented that can reportedly
result in better performance, both objectively and subjectively, compared to the filters currently
used by the JCT-VC HDR/WCG activity. The implementation of these filters is also combinable
with other methods such as the luma adjustment method and its refinements.
Two schemes were described (see the sketch after this list for the second scheme's selection logic):
• Filtering optimization using parallel filters: Instead of utilizing a single filter for filtering an image, multiple filters could be used in parallel and the one determined as best, given a particular criterion, could be selected for filtering an area during down-conversion.
• Edge adaptive / colour variation adaptive filtering methods: This is done by performing a neighborhood analysis and, using predefined classification conditions, selecting the most appropriate filter for filtering this sample. HDRTools currently supports three different schemes for neighborhood analysis. In the first scheme, starting from the longest filter and for each filter that is eligible for selection, an analysis of the local neighborhood mean is performed for each sample to be filtered. The local neighborhood is defined based on the length of the filter. If the distance of any sample within this neighborhood from the local neighborhood mean exceeds a certain tolerance value, then this filter is disallowed for this sample, and the next filter is evaluated. At the last stage, and if all other earlier filters were disallowed, a 3-tap filter is considered instead.
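A sketch of the neighbourhood-analysis selection in the second scheme is given below; the tolerance value and candidate filter set are illustrative assumptions, and HDRTools' actual implementation may differ.

```python
import numpy as np

# Illustrative filter selection by local-neighbourhood analysis.
FILTERS = [np.array([-1, 0, 9, 16, 9, 0, -1]) / 32.0,   # assumed longer candidate
           np.array([1, 6, 1]) / 8.0]                    # 3-tap fallback

def select_and_filter(x, center, tolerance=0.1):
    # Assumes `center` is an interior sample index of the 1-D signal x.
    for taps in FILTERS[:-1]:
        half = len(taps) // 2
        neigh = x[center - half:center + half + 1]
        # Disallow this filter if any neighbourhood sample deviates too much
        # from the local mean (an edge / strong colour variation indicator).
        if np.max(np.abs(neigh - np.mean(neigh))) <= tolerance:
            return float(np.dot(taps, neigh))
    taps = FILTERS[-1]
    return float(np.dot(taps, x[center - 1:center + 2]))
```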
A "min/max" filtering scheme is also supported in the HDRtools.
A number of other techniques could certainly be studied in further work.
The degree to which the effect is preserved in visual quality through the encoding-decoding
process was discussed, and is not necessarily clear.
It was agreed to study this in an AHG.
JCTVC-W0066 CE1-related: Optimization of HEVC Main 10 coding in HDR video Based on
Disorderly Concealment Effect [C. Jung, S. Yu, Q. Lin (Xidian Univ.), M. Li, P. Wu (ZTE)]
Discussed Sunday 21st 0945 GJS & JRO
Proposed is an optimization of coding HDR video based on the "disorderly concealment effect"
using an HEVC encoder. Perceptual block merging is implemented on HEVC Main 10 Profile-based
HDR video coding. Firstly, luminance adaptation (LA) for quantized HDR video is
calculated based on the Barten contrast sensitivity function (CSF) model. Then a "free-energy"-based
just-noticeable-difference (FEJND) map is derived to represent the disorderly concealment
effect for each picture. Finally, a perceptual Lagrange multiplier is derived using the average
FEJND values of the coding unit (CU) and picture. Experimental results reportedly demonstrate
that the proposed method achieves average bit-rate reductions of 5.23% in tPSNR XYZ and
8.02% in PSNR_DE over the anchor, while producing block partitioning and merging results correlated
with human visual perception.
The method is compared to HM 16.2.
The technique has also been applied to SDR.
The encoder runtime effect is approximately 2–3x, and thus substantial.
This is viewed as a general encoding technique that could be applied in various contexts, not
necessarily tied to the HDR-specific techniques under study.
The presented results were for LD configuration; study on RA seemed more desirable.
No immediate action was requested on this, and further study was encouraged.
6.2.2 CE2 related (9)
JCTVC-W0040 Adaptive Gamut Expansion for Chroma Components [A. Dsouza, A. Aishwarya,
K. Pachauri (Samsung)]
Discussed Sunday 21st 1245–1300 GJS & JRO.
This proposal describes a method for content adaptive gamut expansion of chroma components
before quantization in order to minimize the noise. The document also proposes the addition of
an additional SEI message to be used for post processing. Objective gains of 14% at a scaling
factor of 1.4 in deltaE BD rate are reported for the HDR anchor sequences.
It was unclear why this is not equivalent to adjusting the chroma QP (since the scaling is linear,
this should be the case), and why additional signalling and scaling steps would be needed.
This could also be seen as a simplified (chroma-only) reshaper.
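As a back-of-envelope check of the asserted equivalence (not from the contribution): since the HEVC quantization step size doubles every 6 QP, linearly scaling the chroma samples by a factor s before quantization corresponds roughly to a chroma QP offset of -6*log2(s).

```python
import math

# Rough equivalence: pre-scaling chroma by s ~ chroma QP offset of -6*log2(s),
# since the quantisation step size doubles every 6 QP in HEVC.
def equivalent_chroma_qp_offset(scale):
    return -6.0 * math.log2(scale)

print(equivalent_chroma_qp_offset(1.4))   # about -2.9 QP for a 1.4 scaling factor
```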
The contribution raises the question of how complicated a reshaping function should be supported
if a reshaper is used (number of segments, nonlinearity, etc.). It was agreed that this should be
studied in any further reshaping study, e.g., in a CE or AHG.
JCTVC-W0068 CE2-related: Adaptive Quantization-Based HDR video Coding with HEVC Main
10 Profile [C. Jung, Q. Fu, G. Yang (Xidian Univ.), M. Li, P. Wu (ZTE)]
(Not discussed in detail.)
The perceptual quantization (PQ) transfer function for HDR video coding reportedly has
perceptual uniformity in the luminance range with a modest bit depth. However, the contribution
asserts that the dynamic range of HDR video is not fully utilized in PQ, especially for chroma
channels. Thus, there is asserted to exist some wasted dynamic range in PQ which causes detail
loss and color distortions. In this contribution, adaptive quantization-based HDR video coding is
proposed. Adaptive mapping of sample values with bit depths of 16 bit and 10 bit is established
to perform conversion based on cumulative distribution function (CDF), i.e. adaptive
quantization, instead of linear mapping. First, CDF is derived from the histogram of each picture.
Then, adaptive quantization is performed based on CDF. The metadata for adaptive conversion
are coded for performing the inverse conversion at destination. Experimental results reportedly
demonstrate that the proposed method achieves average 1.7 dB gains in tPSNR XYZ and 2 dB
gains in PSNR_DE over the anchor. Subjective comparisons reportedly show that the proposed
method preserves image details while reducing color distortion.
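A minimal sketch of a CDF-based 16-bit-to-10-bit mapping in the spirit of this description is given below (a histogram-equalisation style illustration, not the proposal's exact procedure).

```python
import numpy as np

# Illustrative CDF-based adaptive mapping from 16-bit samples to 10-bit codes.
def build_adaptive_mapping(picture_16bit, out_bits=10):
    hist, _ = np.histogram(picture_16bit, bins=65536, range=(0, 65535))
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    # Each 16-bit value maps to a 10-bit code proportional to its CDF value,
    # so densely populated ranges get more code values than a linear mapping.
    lut = np.round(cdf * ((1 << out_bits) - 1)).astype(np.uint16)
    return lut  # forward LUT; metadata for the inverse would be derived from it

def apply_mapping(picture_16bit, lut):
    # Expects an integer-typed picture array (e.g. uint16).
    return lut[picture_16bit]
```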
JCTVC-W0071 CE2-related : Adaptive PQ: Adaptive Perceptual Quantizer for HDR video
Coding with HEVC Main 10 Profile [C. Jung, S. Yu, P. Ke (Xidian Univ.), M. Li, P. Wu (ZTE)]
(Not discussed in detail.)
Proposed is an adaptive transform function based perceptual quantizer (PQ) for HDR video,
called adaptive PQ. Adaptive PQ is said to solve an asserted problem that PQ is not adaptive to
HDR content because of its using a fixed mapping function from luminance to luma. By
introducing a ratio factor derived from the luminance information of HDR content, the proposed
adaptive PQ maps luminance to luma adaptively according to the content. Experimental results
reportedly demonstrate that the proposed adaptive PQ scheme achieves average bit-rate
reductions of 3.51% in tPSNR XYZ and 5.51% in PSNR_DE over PQ.
JCTVC-W0085 HDR CE2 related: Further Improvement of JCTVC-W0084 [T. Lu, F. Pu, P. Yin,
T. Chen, W. Husak (Dolby), Y. He, L. Kerofsky, Y. Ye (InterDigital)] [late]
See summary in section 5.2.1. This was further discussed Sunday 21st 1010 (GJS & JRO). See
notes under W0084. Further study was suggested.
JCTVC-W0120 Cross-check of JCTVC-W0085 [J. Lee, J. W. Kang, D. Jun, H. Ko (ETRI)] [late]
JCTVC-W0130 Crosscheck for further improvement of HDR CE2 (JCTVC-W0085) [Y. Wang,
W. Zhang, Y. Chiu (Intel)] [late]
JCTVC-W0089 HDR CE2-related: some experiments on reshaping with input SDR [Y. Olivier,
E. Francois, C. Chevance (Technicolor)] [late]
Tues pm 1520 (GJS & JRO).
This (as configured for the proposal) targets backward compatibility.
This document reports preliminary experiments on ETM with dual SDR/HDR grading. In this
configuration, a graded SDR version as well as the usual graded HDR version are given as inputs
to the reshaper. The reshaping process uses the input SDR as reference and generates reshaping
data to match as much as possible the reshaped input HDR to the input SDR. In the experiments,
reshape setting 1 configuration is used (cross-plane chroma reshaping). It is reported that these
preliminary experiments tend to show that ETM models are able to properly reproduce an SDR
intent.
This is in the category of "bitstream backward compatibility".
It is not in the category of "decoder-side content DR/CG adaptation within an HDR context".
Decoder side is "mode 1", which is identical to the decoder used for "setting 1" and "setting 2".
JCTVC-W0092 Description of the Exploratory Test Model (ETM) for HDR/WCG extension of
HEVC [K. Minoo (Arris), T. Lu, P. Yin (Dolby), L. Kerofsky (InterDigital), D. Rusanovskyy
(Qualcomm), E. François (Technicolor)]
This was a description of ETMr0, and there was no need for its detailed presentation.
JCTVC-W0100 HDR CE2-related: Results for combination of CE1 (anchor 3.2) and CE2
[J. Sole, A. K. Ramasubramonian, D. Rusanovskyy, D. Bugdayci, M. Karczewicz (Qualcomm)]
(Not presented in detail.)
This document reports results of combining the software of anchor 3.2 that is the outcome of
CE1 and the ETM that is the outcome of CE2. Results for the direct combination of the CE1 and
CE2 software are provided. It is noted that CE1 and CE2 algorithms have some degree of
overlap, so the direct combination could reportedly be improved by a proper adjustment of the
parameters. Results obtained by adjusting the parameters are also provided.
6.2.3 CE3 related (2)
JCTVC-W0041 HDR-VQM Reference Code and its Usage [K. Pachauri, S. Sahota (Samsung)]
Discussed Thu 0830 (GJS).
At the 113th meeting of MPEG in Geneva, Oct. 2015, a metric, HDR-VQM, for HDR quality
estimation was introduced. This contribution provides a description of HDR-VQM software and
its parameters and its usage. Source code is made available to members for use.
This contribution provides the location of the software. The coordinator, A. Tourapis, was asked
to incorporate this into HDRTools, pending timely resolution of any software quality issues, such
as use of a platform-dependent external library.
JCTVC-W0115 VIF Code for HDR Tools [H. Tohidypour, M. Azimi, M. T. Pourazad,
P. Nasiopoulos (UBC)] [late]
Discussed Wed 1725 (GJS).
The "Visual Information Fidelity" (VIF) measure (developed by Bovik et al.) is a full reference
quality assessment method. In this contribution, an implementation of VIF for HDRTools is
presented.
It uses a wavelet decomposition (selectable between two types) and operates only on luma.
Temporal differences are not analyzed.
The luma channel for the metric measurement is converted to the PQ transfer characteristic
domain without rounding to an integer.
It was noted that J.341 is an objective quality metric, but it was remarked that it is not for HDR
video.
The software was initially provided in the contribution. However, this software has not generally
been made publicly available, so a new version was requested to be uploaded without it.
It was agreed to include the new measure into HDRtools for further testing of quality metric
usage.
The presentation deck was requested to be uploaded.
As of 2016-05-24, the requested new version of the contribution had not yet been provided.
6.2.4 CE4 related (0)
6.2.5 CE5 related (0)
6.2.6 CE6 related (1)
JCTVC-W0060 Re-shaper syntax extension for HDR CE6 [H. M. Oh, J.-Y. Suh (LGE)]
Tues 1630 GJS & JRO
This has a backward compatibility aspect.
In the extension of the exploratory test model (ETM) framework, which is comprised of a decoder and
HDR post-processing, the HDR post-processing block could be considered as a video format
conversion process when the ETM is used in SDR backward compatible mode. To support
different input and output picture formats, it is proposed to extend the HDR reshaping syntax
in the ETM syntax structure.
This proposes to create a modified ETM that would include HDR processing with output in non-4:2:0 domains and with different bit depths, and proposes syntax for that.
The amount of syntax proposed was significant. It was remarked that some of the proposed
aspects may be redundant (e.g., with VUI), or undefined (e.g., the meaning of RGB/YCbCr), or
defined in an unnecessarily different way than what is done elsewhere. It was also commented
that the proposal includes things in the PPS that do not appear to need to change on a
picture-to-picture basis.
The presentation deck had not been uploaded and was requested to be provided in a revision.
As of 2016-05-24, the requested new version of the contribution had not yet been provided.
It was remarked that the status of "ETM" work remains to be determined for future work.
For further study.
6.2.7 CE7 related (7)
JCTVC-W0088 SDR backward compatibility requirements for HEVC HDR extension [L. van de
Kerkhof, W. de Haan (Philips), E. Francois (Technicolor)]
This was a requirements contribution. (Not discussed in JCT-VC.)
JCTVC-W0119 Some considerations on hue shifts observed in HLG backward compatible video
[M. Pindoria, M. Naccari, T. Borer, A. Cotton (BBC)] [late]
Tues 1700 (GJS & JRO)
This document reports on an initial analysis conducted by the BBC in response to the hue shifts
reported by Arris in contribution m37543. In their document, Arris reported seeing hue shifts
when viewing the HDR video on a legacy SDR screen. This contribution outlines the BBC tests
to recreate the observations, and also some further analysis of hue shift occurring in a purely
SDR workflow.
This report acknowledges that hue distortions are present when comparing the image obtained
from an HDR image directly against the SDR compatible image. Other participants in the
meeting confirmed that they have also seen this phenomenon. However, the contribution asserts
that this comparison is inappropriate without the context of the legacy SDR image.
The contribution asserts that when the SDR legacy system images are compared to the SDR
compatible images, the level of distortion between the compatible image and that derived from
SDR is comparable. The contribution thus asserts that the colour distortions reported by Arris
primarily illustrate the limitations of SDR TV rather than a lack of compatibility of the HLG
signal.
JCTVC-W0045 Hybrid Log Gamma observations [C. Fogg (Movielabs)] [late]
Information document. Not presented in detail in JCT-VC.
This input contribution examines the impact of conversion between HLG and PQ
signals, and tries to identify the operating sweet-spot range of each. While the HLG coded video
signal is scene referred, the non-linear slope of the non-linear output of HLG by definition does
not uniformly quantize linear input levels, thus implying some perceptual bias towards
darker pixels. One goal of this study is to find the perceptually optimal display range of HLG
graded video signals, or to confirm that the display gamma adjustment renders any perceptual
quantization effect irrelevant within a range of concern, such as 400 to 4000 nits. To test this, the
BT.HDR DNR formulae for mapping scene relative linear light to display linear light are applied.
Although this is expected to be less optimal than the tone mapping provided in ACES and proprietary
tools, proponents of HLG state the Note 5e display gamma to be a reasonable proxy for display-embedded
tone mapping (OOTF). By contrast, PQ signals are display referred, but in practice
displays must also tone map PQ signals if the display characteristics differ significantly from the
mastering monitor characteristics that the PQ signal was originally graded upon, or if the PQ
signal utilizes code level combinations outside those that can be directly displayed. Observations
are noted of these converted video clips, sent to the Samsung JS9500 as HLG and PQ 10-bit
integer signals in the BT.2020 primaries color space (4:4:4 R′G′B′ or 4:2:2 Y′CbCr). Observations
noted from viewing on a Sony BVM-X300 professional OLED studio reference monitor may be
provided later. A reported emerging explanation of the observations is that HLG behaves like a
display-referred transfer system within an operating sweet spot near 1000 nits, and has superior
performance to PQ in the 10 to 40 nits sub-range.
JCTVC-W0079 HDR CE7-related: Additional results for Experiment 7.2a [M. Naccari,
M. Pindoria, A. Cotton (BBC)]
Information document. Not presented in detail in JCT-VC.
This contribution presents additional results for Experiment 7.2a as defined in Core Experiment
7 (CE7, see N15800). In Experiment 7.2a, two values of the so-called system gamma are tested:
1.2 and 1.4, as described in W0067 and the associated cross-check report W0043. In this document,
the system gamma is set to 1.45, which is the suggested value reported in m37535 and discussed
during the ad hoc group interim meeting in Vancouver. AVI files for the SIM2 display have been
generated for SDR and HDR and were offered to be shown in viewing sessions held during the
meeting.
JCTVC-W0110 HDR CE7: Comments on visual quality and compression performance of HLG
for SDR backward compatible HDR coding system [E. Francois, Y. Olivier, C. Chevance
(Technicolor)] [late]
Information document. Not presented in detail in JCT-VC.
This document reports the visual tests performed by Technicolor on the HLG-based HDR coding
system in the context of SDR backward compatibility. The evaluation aims at assessing the
performance of HLG for different settings. The tests are done both on the quality of the SDR
obtained after applying HLG to the input HDR content and on the HDR obtained after
compression and reconstruction. They have been performed on a subset of the CTC HDR test
sequences, with different settings (the system gamma parameter), and using either BT.709 or
BT.2020 as a container. One of the main reported observations is that it seems the system gamma
value has to be adjusted per content, and not only based on the grading monitor peak luminance.
It is reported that for some of the considered test sequences, some color shift is observed
between HDR and SDR, this shift being much more noticeable when using a BT.709 than a
BT.2020 container. The shift mostly appears in saturated red/orange and green colors. It is also
reported that the optimal system gamma for HDR compression performance also seems to be
content dependent.
JCTVC-W0132 Information and Comments on Hybrid Log Gamma [J. Holm] [late]
Information document. Not presented in detail in JCT-VC.
This contribution consisted of a PowerPoint deck commenting on various aspects of the HLG
scheme, and is available for study.
JCTVC-W0135 Some Elementary Thoughts on HDR Backward Compatibility
[P. Topiwala (FastVDO)] [late]
Information document. Not presented in detail in JCT-VC.
One suggested requirement going forward in the HDR activity is to somehow test proposed
technologies for backward compatibility, that is, to study the capability of an HDR coding
approach to also produce a "usable" SDR HEVC Main 10 stream as an intermediate
product. It has been suggested that this is not a well-defined problem, so it is difficult to know
how to approach it. Some remarks are hereby offered to find a way forward.
6.2.8 CE8 related (0)
JCTVC-W0087 Description of Colour Graded SDR content for HDR/WCG Test Sequences
[P. J. Warren, S. M. Ruggieri, W. Husak, T. Lu, P. Yin, F. Pu (Dolby)]
(Not reviewed in detail.)
To satisfy the need for testing SDR backward compatibility, Dolby has made available color
graded SDR content for the corresponding HDR content inside MPEG HDR/WCG AHG
activity. The SDR content is color graded by a professional colorist using DaVinci Resolve and
professional reference displays. An onsite expert viewing session is planned to review the SDR
versions on a PRM-4200 reference display during the La Jolla meeting.
This relates to the testing of backward compatibility (BC) capability.
JCTVC-W0106 Evaluation of Backward-compatible HDR Transmission Pipelines [M. Azimi,
R. Boitard, M. T. Pourazad, P. Nasiopoulos (UBC)] [late]
Tues 1730 (GJS & JRO)
This relates to backward compatibility, both in regard to "bitstream compatibility" and "display
compatibility" – especially the latter, suggesting that the SDR & HDR quality produced by the
latter approach can be high.
This contribution comments on the performance of two possible scenarios for distributing HDR
content employing a single layer. One of them compresses HDR content (HDR10), with color
conversion and tone mapping performed at the decoding stage to generate SDR, while the other
scenario compresses tone-mapped SDR content with inverse tone-mapping and colour
conversion at the decoding side to generate HDR. The contributor performed subjective tests to
evaluate the viewing quality of the SDR video on an SDR display. Some reported aspects:
• It was noted that previous work had reported that the HDR10 approach had resulted in better HDR visual quality.
• The results reportedly showed that chroma subsampling plays a very important role in determining the final visual quality of the SDR content.
• Several tone mapping operators (TMOs) were tested. A non-invertible TMO, which was reported to yield very high quality SDR, was reported to produce very good SDR results for the HDR10 pipeline.
• No metadata was used.
• The luma adjustment scheme for subsampling improvement was not used in this test.
It was commented that testing the tone mapping approaches can be done without compression to
determine how much distortion is coming from compression and how much is coming from other
elements of the processing.
6.2.9 Alternative colour spaces (4)
These contributions were discussed Wed 1400–1700 (GJS & JRO, later GJS)
JCTVC-W0072 Highly efficient HDR video compression [A. Chalmers, J. Hatchett, T. B. Rogers,
K. Debattista (Univ. Warwick)] [late]
The two current standards for HDR video source formats, SMPTE ST 2084 (PQ) and ARIB
STD-B67 (HLG), are said to be computationally complex for applying the transfer function. A
scheme called the power transfer function (PTF) is said to be straightforward and well suited to
implementation on GPUs. This contribution describes PTF4, a transfer function that is said to
offer improved computational performance, even when compared with LUTs, without loss in
quality.
Some improvement in compressed video quality was also suggested in the contribution.
It was remarked that our usual approach for transfer function would be to expect some other
organization (ITU-R or SMPTE) to standardize the transfer function and then just indicate it.
The contribution seemed interesting but did not seem to need action by us at this time.
JCTVC-W0050 Overview of ICtCp [J. Pytlarz (Dolby)]
(A web site problem was noted regarding the document number for this contribution. This
corresponds to m38148.)
This informational contribution presents an overview of the constant intensity ICtCp signal format
of draft Rec. ITU-R BT.[HDR-TV].
The familiar Y′CbCr non-constant luminance format is a colour-opponent based encoding
scheme (in which signals are interpreted based on colour differences in an opposing manner)
intended to separate luma from chroma information for the purposes of chroma subsampling (i.e.,
4:2:2 and 4:2:0). High dynamic range and wide colour gamut content are asserted to show
limitations of existing colour encoding methods. Errors that were previously small with standard
dynamic range can reportedly become magnified. BT.2020 provides an alternative to Y′CbCr,
i.e., the Y′cCbcCrc constant luminance format. This format reportedly resolves the issue of
chroma leakage into the Y′ luma signal, but does not solve the problem of luminance
contamination of the Cbc and Crc components. Draft Recommendation ITU-R BT.[HDR-TV]
provides an alternative method for colour difference encoding called constant intensity, which is
based on the IPT colour space developed by Ebner and Fairchild.
See also W0047 and W0044.
Differences in this colour space are reported to be more perceptually uniform than in traditional
Y′CbCr, roughly 1.5 bits better in visual perceptual colour distortion effects with the PQ transfer
function.
This space is reportedly the same as what was used in Dolby's prior response to the MPEG CfE
on HDR/WCG.
It was asked what the coding efficiency effect of this was. A participant said there was a
significant improvement from it. There had been a previous CE in which this was studied along
with other technical changes. There had been a plan to include this in the prior CE5, but that had
not been followed through due to resourcing issues and the higher prioritization of standardizing
this in ITU-R, which has had progress reported in incoming LSs.
A participant asked whether the comparison had minimized subsampling effects of chroma on
luma, and this had not been done – the subsampling used a simple [1 6 1]/8
filter (in both colour domains).
Another participant questioned whether the details of the scheme were fully stable and were
assured to be approved in ITU-R. Some assurances were made in that regard, saying that it was
technically stable and unlikely to change, although some reservation had been expressed within
ITU-R and further study is being conducted there. It is a candidate for approval by
correspondence in ITU-R as a Draft New Recommendation (and there is a closely related
Technical Report).
Supporting ICtCp would involve adding a new value of matrix_coeffs. A participant commented
that some additional or somewhat different specification method could be used, involving adding
another set of colour primaries to represent the equivalent of the transformation of the RGB
primaries into a space known as LMS.
JCTVC-W0047 ICtCp testing [C. Fogg (Movielabs)]
Example image patches comparing 4:4:4 to 4:2:0 chroma sampling in the Y′CbCr NCL and
ICtCp domain options of BT.HDR are included in this document. From the image patches
provided in this document, the authors report that the ICtCp space appears to exhibit less correlated
error on natural video content, compared to Y′CbCr NCL. It was reported that a test on the
Samsung JS9500 display could be shown at the meeting. Matlab code was also provided to
recreate the test patterns and process the CfE video clips used in HDR/WCG experiments.
Luma adjustment for the 4:2:0 conversion was not applied – just simple down- and up-sampling.
See also W0050 and W0044.
Constant-luminance YCbCr was not tested.
The contributor suggested further study in a CE and liaison communication with ITU-R WP6C
(mentioning our study plans and noting that our study would consider compression effects). This
should also be considered in the work on suggested practices for HDR.
A participant asked about the precision necessary to perform forward and inverse matrix
transformation, and remarked that high precision appeared necessary.
JCTVC-W0074 CIECAM02 based Orthogonal Colour Space Transformation for HDR/WCG
Video [Y. Kwak, Y. Baek (UNIST), J. Lee, J. W. Kang, H.-Y. Kim (ETRI)]
This document proposes a wide color gamut/high dynamic range encoding scheme that uses an
orthogonal color space based on the international color appearance model, CIECAM02, and
separates the orthogonal color encoding part and the high dynamic range signal encoding part.
It was commented that the scheme is quite similar to ICtCp, in terms of its use of LMS prior to
the transfer function.
The contribution seemed interesting but did not seem to need action by us at this time.
6.2.10 Encoding practices (3)
JCTVC-W0048 HDR-10 status update [C. Fogg (Movielabs)]
(Not presented in detail.)
This document lists the industry forums and standards that have included the HDR-10 definition
created over one year ago. (See also the prior contribution JCTVC-V0050.)
JCTVC-W0105 Evaluation of Chroma Subsampling for High Dynamic Range Video
Compression [R. Boitard, M. T. Pourazad, P. Nasiopoulos (UBC)] [late]
Discussed Wed 1700 (GJS)
This contribution evaluates the impact of chroma subsampling with two different downsampling
filters, using an objective metric. Objective results reportedly show that distributing 4:4:4 is more
efficient than its 4:2:0 counterpart for medium to high bit rates. For low bit rates, this increase in
efficiency is reportedly reduced and in some cases even reversed.
Using chroma subsampling reportedly always decreases the accuracy of color reproduction.
Results obtained from the tPSNR-XYZ metric indicated that using chroma subsampling reduces
the compression efficiency at medium to high bit-rates, while some gain can be achieved at low
bit-rates. The results also indicated that using a longer filter does not seem to increase
compression efficiency.
The comparison was done after upconversion back to 4:4:4 domain.
Producing corresponding bit rates for subjective quality evaluation had not been completed.
The downsampling tested used [1 6 1]/8 and Lanczos 3, without luma adjustment.
It was commented that "micrograding" with consideration of the transfer function and chroma
quantization to the bit depth can be used to produce higher quality even when working in the
4:4:4 domain.
It was commented that a distortion metric that operates across the three colour components might
be better to use than the tPSNR-XYZ metric, as it can measure distortion of three-component
colours.
JCTVC-W0126 Preliminary study of filtering performance in HDR10 pipeline [K. Pachauri,
A. Aishwarya (Samsung)] [late]
This contribution presents a study of different filtering effects for HDR10 pipelines. Objective
and subjective results are presented in this contribution. The contribution was not requested to
be presented in detail.
6.3 HL syntax (0)
No contributions on general matters of high-level syntax were specifically noted, although
various contributions include consideration of high-level syntax issues relating to specific
features.
6.4 SEI messages and VUI (6)
JCTVC-W0044 BT.HDR and its implications for VUI [C. Fogg, J. Helman (Movielabs)
G. Reitmeier (NBC Universal), P. Yin, W. Husak, S. Miller (Dolby), T. Borer, M. Naccari (BBC)]
Discussed Wed 1530 (GJS)
At the February 2016 meeting of ITU-R WP6C, the document "Draft new Recommendation
ITU-R BT.[HDR-TV]: Image parameter values for high dynamic range television for use in
production and international programme exchange" was put forward for approval; it combines
color primaries identical to BT.2020 with independent selections in each aspect of: transfer
function (PQ or Hybrid Log-Gamma); color format (Y′CbCr non-constant luminance (NCL),
ICtCp constant intensity, or R′G′B′); integer code level range ("full" or "narrow"); and bit depth
(10 or 12 bits). "BT.HDR" is a temporary name for this Recommendation: it will be assigned a
BT-series number upon final approval. This contribution suggests text changes to HEVC Annex
E (VUI) that provide BT.HDR indicators, as per the WP6C liaison to MPEG and VCEG at this
meeting in San Diego.
Aspects noted: An E′ scaling issue for "full range" (scaling by 1024) and [0,1] versus [0, 12], and
also the IPT colour space.
For SCC FDIS purposes, the following was discussed and agreed:
- Not to mention the ITU-R BT.[HDR] draft new Recommendation, since it is not yet fully approved in ITU-R (unless this can be included in the publication process after approval by ITU-R)
- Not to include ICTCP, since it is not yet fully approved in ITU-R
- The editors may editorially express this with a scaling factor adjustment of 1/12 in the range and formulas and add a note (this is editorial only)
- "Full" scaling by 1024 / 4096 rather than 1023 / 4095 (see the contribution and the illustration after this list). This should only be done for HLG, since SMPTE 2084 was already included in an approved version of HEVC that has the other scaling, and the ITU-R spec that differs is not yet approved. Some future correction may be needed, depending on future events.
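As a purely numerical illustration of the scaling difference discussed in the last bullet above (a sketch assuming a 10-bit full-range signal, not text from the contribution or the draft Recommendation):

    # Two candidate mappings of a normalized value e in [0, 1] to a 10-bit code.
    def full_range_1023(e):        # scaling in the already-approved HEVC text
        return round(e * 1023)
    def full_range_1024(e):        # x1024 scaling discussed for HLG; the clip is an assumption
        return min(round(e * 1024), 1023)

    for e in (0.0, 0.5, 1.0):
        print(e, full_range_1023(e), full_range_1024(e))
    # With x1024, e = 0.5 maps exactly to code 512; with x1023, e = 1.0 maps
    # exactly to code 1023 without clipping.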
It was also agreed to issue a WD for the ICTCP aspect.
The way the EOTF vs. OETF was handled in the prior V1005 was agreed to be adequate (one
subscript needed correction, for the value 16).
Further study was encouraged for whether we should take some action on aspects such as the
OOTF issues.
See also W0050 and W0047.
JCTVC-W0057 Content colour gamut SEI message [H. M. Oh, J. Choi, J.-Y. Suh (LGE)]
(Not discussed in detail.)
In this proposal, an SEI message is proposed to signal content colour gamut which describes
actual colour distribution of a video content. This information is asserted to mostly be useful
when the content colour is partially distributed in the possible colour area within the container
colour gamut. Given the content colour gamut information, the performance of colour gamut
mapping reportedly could be improved in terms of colour saturation.
JCTVC-W0058 Video usability information signalling for SDR backward compatibility [H. M. Oh,
J.-Y. Suh (LGE)]
(Not discussed in detail.)
In the development of the high dynamic range (HDR) extension of HEVC, two different output points
are defined in the standard dynamic range (SDR) backward compatible mode. Although the output
SDR and HDR video have different characteristics in terms of transfer function and/or color
gamut, the characteristics of the reconstructed HDR output are asserted not to be sufficiently
described in the current syntax. This contribution proposes to define additional VUI parameters to
describe the video characteristics of one output point while the other output point uses the
current VUI.
JCTVC-W0086 Indication of SMPTE 2094-10 metadata in HEVC [R. Yeung, S. Qu, P. Yin,
T. Lu, T. Chen, W. Husak (Dolby)]
(Not discussed in detail.)
This contribution proposes text changes in the HEVC specification to support metadata of
SMPTE 2094-10 standard: Dynamic Metadata for Color Volume Transform - Application #1.
Specifically, in Annex D, a new SEI message is proposed for carriage of such metadata.
JCTVC-W0098 Effective Colour Volume SEI message [A. Tourapis, Y. Su, D. Singer (Apple)]
(Not discussed in detail.)
This contribution proposes the "Effective Colour Volume" SEI message that was first proposed
in JCTVC-V0038. This SEI message can reportedly indicate the effective colour volume
occupied by the current layer of a coded video stream (CVS). This SEI message can reportedly
be seen as an extension of the "Content Light Level Information" (CLL) SEI message that is
already supported by these two standards, and can provide additional information to a decoder
already supported by these two standards, and can provide additional information to a decoder
about the characteristics and limitations of the video signal. This information can then be used to
appropriately process the content for display or other types of processing, e.g. to assist in
processes such as colour conversion, clipping, colour gamut tailoring, and tone mapping, among
others.
JCTVC-W0133 Indication of SMPTE 2094-20 metadata in HEVC [W. de Haan, L. van de
Kerkhof, R. Nijland, R. Brondijk (Philips)] [late]
(Not discussed in detail.)
This contribution proposes text changes in the HEVC specification to support metadata of
SMPTE 2094-20 standard: Dynamic Metadata for Color Volume Transform - Application #2.
Specifically, in Annex D, a new SEI message is proposed for carriage of such metadata.
6.5 Non-normative: Encoder optimization, decoder speed improvement and
cleanup, post filtering, loss concealment, rate control, other information (11)
6.5.1 General (3)
JCTVC-W0062 Non-normative HM encoder improvements [K. Andersson, P. Wennersten,
R. Sjöberg, J. Samuelsson, J. Ström, P. Hermansson, M. Pettersson (Ericsson)]
Initially presented on Sunday 21st at 1850 (with Rajan Joshi chairing).
This contribution reports that a fix to an asserted misalignment between QP and lambda
improves the BD rate for luma by 1.8% on average for RA and 1.7% for LD using the common
test conditions. This change, in combination with extension to a hierarchy of length 16 for
random access, is reported to improve the BD rate for luma by 7.1% on average using the
common test conditions. To verify that a longer hierarchy does not decrease the performance for
difficult-to-encode content, four difficult sequences were also tested. An average improvement in
luma BD rate of 4.8% is reported for this additional test set. Results for HDR content are also
presented. It is also reported that subjective quality improvements have been seen by the authors.
The contribution proposes that both the change of the relationship between QP and lambda and
the extension to a hierarchy of 16 pictures be included in the reference software for HEVC and
used in the common test conditions. Software is provided in the contribution.
A subjective viewing using Samsung TV was planned at this meeting. A related document is
JVET B0039.
It is proposed to adjust the alignment between lambda and QP. This would be an encoder-only
change.
Decision (SW): Adopt the QP and lambda alignment change to the HM encoder software.
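For context, the HM encoder derives its Lagrangian multiplier from QP using an exponential relationship of roughly the form sketched below; the weighting factor shown is a placeholder (the actual per-picture weights depend on the configuration, slice type, and hierarchy level), and the adopted change re-aligns this relationship rather than altering its basic form.

    # Rough sketch of an HM-style QP-to-lambda mapping; qp_factor stands in for
    # the configuration-dependent per-picture weight and is not the adopted value.
    def hm_lambda(qp, qp_factor=0.57):
        return qp_factor * 2.0 ** ((qp - 12) / 3.0)

    for qp in (22, 27, 32, 37):
        print(qp, round(hm_lambda(qp), 2))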
The proposed increase in the hierarchy length would require a larger DPB size than HEVC
currently supports if the picture resolution is at the maximum for the level and would
substantially increase the algorithmic delay. It was commented that most encoders might not
actually use the larger hierarchy, so the proposed change might not represent expected real-world conditions.
The proponent mentioned that the GOP size could be extended to 32 to provide even higher
gains.
This was further discussed Thu 0900 (GJS).
The current CTC config files use a period of 8 for temporal level 0. V0075 reported visual
improvement for SDR content for increased GOP size (less intra pulsing and better prediction
relating to occlusions). The intra period would be affected in cases where the GOP size is not a
multiple of 16 or 32 (e.g. for 24 Hz, the intra period would need to increase to 32). Some concern
was expressed about potential quality fluctuation artefacts, esp. on HDR. Appropriate intra
period adjustment would be necessary for both GOP sizes of 16 and 32.
It was noted that we haven't updated the CTC in quite a while (the basic CTC is defined in an
"L" document, with the RExt CTC in a "P" document) and suggested that we should be
conservative about changes to that. The random access functionality would be affected for the 24
Hz sequences. Thus, it was agreed not to change the GOP period at this time.
Further study was encouraged. Config files were provided in a revision of the contribution.
JCTVC-W0117 Cross-check of Non-normative HM encoder improvements (JCTVC-W0062)
[B. Li, J. Xu (Microsoft)] [late]
JCTVC-W0038 HEVC encoder optimization [Y. He, Y. Ye, L. Kerofsky (InterDigital)]
Initially presented on Sunday 21st at 1900 (with Rajan Joshi chairing).
This proposal proposes two non-normative encoder optimization methods for HEVC encoders:
1) deblocking filter parameter selection, and 2) chroma quantization parameter adjustment
support at the slice level (providing config file support for different offsets depending on the
temporal level).
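A minimal sketch of the second aspect, i.e. selecting a slice-level chroma QP offset per temporal level; the offset values and the helper below are hypothetical and are not taken from the proposal or its configuration files.

    # Hypothetical per-temporal-level chroma QP offset selection (values invented).
    TL_CHROMA_QP_OFFSET = {0: 0, 1: 1, 2: 2, 3: 3}   # temporal level -> offset

    def slice_chroma_qp(base_qp, temporal_level):
        return base_qp + TL_CHROMA_QP_OFFSET.get(temporal_level, 0)

    print([slice_chroma_qp(32, tl) for tl in range(4)])  # [32, 33, 34, 35]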
Compared to HEVC HM-16.7 anchor, the average luma BD rate saving of the deblocking filter
parameter selection method is reported to be 0.2% for AI, 0.4% for RA, 0.3% for LDB, and 0.5%
for LDP. The average chroma BD rate saving of the chroma QP offset adjustment method is
reported to be 11.6% for RA, 14.2% for LDB, and 13.9% for LDP (as measured in the usual
manner), with a small loss for the luma component.
These two optimization methods are also applicable to HDR/WCG coding. It is asserted that on a
SIM2 display, compared to the HDR/WCG anchors, quality improvements can be observed for
low to medium bit rates for a range of HDR video content, mainly in the form of more details
and fewer blocking artefacts.
For the 2nd aspect: For setting 1, there were some losses for luma (0.8% – 0.9%) with large
improvements for chroma. A participant commented that for setting 1, you could shift some bits
from luma to chroma to get such results. The proponent commented that setting 2 had no loss for
luma and >3% gains for chroma. Another participant asked whether similar gains would be
possible by redistributing bits by changing the QP hierarchy. This may not be possible at this
granularity.
Another participant asked whether the methods were for subjective quality improvement. The
proponent commented that with aspect 1 they observed some blocking artefact improvement at
low bit-rates.
Informal subjective study was encouraged to verify that there were subjective gains (or no
artefacts).
Further discussed Thu 0900 (GJS & JRO).
This was not suggested to be put into CTC.
It was reported that informal visual viewing did not show additional artefacts.
Decision (SW): It was agreed to include these two capabilities in the HM (with config file
options, disabled by default), pending consideration of any software quality issues with the
software coordinator.
Further study, including consideration of the interaction with Lambda control, was encouraged.
6.5.2 SCC (6)
JCTVC-W0075 Palette encoder improvements for the 4:2:0 chroma format and lossless
[C. Gisquet, G. Laroche, P. Onno (Canon)]
Discussed Sunday 1615 GJS.
At the 22nd JCT-VC meeting in October 2015, the proponents' non-normative modification for
palette lossy coding of non-4:4:4 content was adopted. It was asserted then that the fact that
superfluous chroma samples are coded was not properly taken into account, resulting in
suboptimal coding. This contribution asserted that similar considerations affect lossless encoding.
In addition, it was proposed for the encoder to track whether an entry is only associated with
pixels with superfluous chroma samples. Finally, it is asserted that various palette evaluations
need not be run, the removal of which is asserted to result in encoder runtime improvements. It is
reported that the non-normative changes provide, for 4:2:0 content, up to 1.1%, 0.5% and 0.5%
BDR gains, for runtimes of 90%, 94% and 95%, for the AI, RA and LDB lossless configurations,
respectively.
Substantial speed-up was shown, and some coding efficiency improvement for lossless coding.
In terms of software impact, one main function is modified (perhaps 50 lines modified), and the
proponent said the impact was not so large.
Decision (SW): Adopt.
JCTVC-W0127 Cross-check of JCTVC-W0075 on palette encoder improvements for lossless
mode [V. Seregin (Qualcomm)] [late]
No problem was identified by the cross-checker.
JCTVC-W0078 Bottom-up hash value calculation and validity check for SCC [W. Xiao (Xidian
Univ.), B. Li, J. Xu (Microsoft)]
Presented on Sunday 21st at 1750 (with Rajan Joshi chairing).
This document presents a bottom-up hash value calculation and validity check method in hash
table generation, where large blocks' results can reuse measurements from small blocks. Such a
design can reduce the redundant computation and decrease encoding time in the SCM.
Experimental results show that the encoding time is reduced by 7% and 12% on average for RA
and LB, respectively, while the BD-rate performance remains almost the same as the anchor.
The hash calculation was changed so that the hash value for a larger block size can now be derived from the
hash values of the smaller block sizes, which results in the 2/3 reduction in computations. For validity
testing, no change in the formula was made, but a hierarchical implementation was used.
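The sketch below only illustrates the general bottom-up idea described above, i.e. deriving a larger block's hash from the hashes of its four sub-blocks instead of re-reading all samples; the mixing function is a placeholder and does not reproduce the SCM's actual CRC-based hash.

    # Illustrative bottom-up hashing: an 8x8 block hash is built from four 4x4
    # sub-block hashes. The mixing used here is a placeholder, not the SCM hash.
    import random

    def leaf_hash(block):                     # block: 2-D list of samples
        h = 0
        for row in block:
            for s in row:
                h = (h * 131 + s) & 0xFFFFFFFF
        return h

    def combine(h00, h01, h10, h11):
        h = 0
        for sub in (h00, h01, h10, h11):
            h = (h * 131 + sub) & 0xFFFFFFFF
        return h

    blk = [[random.randrange(256) for _ in range(8)] for _ in range(8)]
    subs = [leaf_hash([r[x:x + 4] for r in blk[y:y + 4]])
            for y in (0, 4) for x in (0, 4)]
    print(combine(*subs))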
It was asked whether the breakdown of the run-time savings between hash calculation and
validity testing was known. That was not tested separately.
Currently, this was applied only to 2Nx2N cases. A participant commented that this could be
extended to asymmetric motion partitions.
A participant asked whether the hash used in current picture referencing could be improved in a
similar manner. The hash functions for current picture referencing and inter prediction are
different. It is not clear whether this could be extended to the current picture reference hash. The
presenter commented that they may investigate this further.
The software patch was released in a revision. The crosschecker was not present to comment in
further detail about their cross-checking activity.
Decision (SW): Adopt.
JCTVC-W0118 Crosscheck for bottom-up hash value calculation and validity check for SCC
(JCTVC-W0078) [W. Zhang (Intel)] [late]
JCTVC-W0042 SCC encoder improvement [Y.-J. Chang, P.-H. Lin, C.-L. Lin, J.-S. Tu, C.-C. Lin
(ITRI)]
Presented on Sunday 21st at 1815 (with Rajan Joshi chairing).
To reduce the SCM encoding time for 4:4:4 chroma format, JCTVC-U0095 proposed to disallow
the TU splitting processes for the intra prediction mode enabling adaptive colour transform when
the CU sizes are 64x64 and 32x32. The proposed method was adopted into SCM anchor in the
last (Geneva) meeting for substantial encoder improvement in 4:4:4 chroma format. However,
the TU splitting processes for the intra prediction mode in non-4:4:4 chroma formats are also a
huge computational cost. This contribution proposes to apply the same method which disallows
TU splitting processes to the intra prediction mode in non-4:4:4 chroma formats. Compared to
the SCM6.0 anchor, it is reported that the proposed encoder can save 3% encoding time on
average with 0.1% BDR gain on the AI-lossy conditions in 4:2:0 chroma format. This
contribution also re-tests the performance of the adopted method in JCTVC-U0095 when it is
applied to different sizes of CUs for the intra prediction mode enabling adaptive colour transform
in 4:4:4 chroma format.
The speedup discussed in the contribution is applied only to non-4:4:4 chroma formats. If the CU
size is 32x32 or higher, checking of RQT is disabled (no TU split). A participant commented that if
this method were adopted, the SCM would behave differently from the HM for 4:2:0 cases when all
the SCC and RExt tools are disabled, which was thought to be undesirable. Another participant
commented that this method would need to be tested on a variety of camera-captured content.
Supplemental results were presented for the method in U0095 for other block sizes (16x16 and
8x8). There were some losses: 0.1% for 16x16 for some classes, 0.2% for 8x8 for some classes.
The encoding time reduction was 1–3% for 16x16 and ~5% for 8x8.
The proponent suggested that a config parameter be added to control the block size at which the
U0095 method is applied.
No action was taken on this.
JCTVC-W0116 Cross-check of SCC encoder improvement (JCTVC-W0042) [B. Li, J. Xu
(Microsoft)] [late]
6.5.3 HDR (2)
JCTVC-W0046 Recommended conversion process of YCbCr 4:2:0 10b SDR content from
BT.709 to BT.2020 colour gamut [E. Francois, K. Minoo, R. van de Vleuten, A. Tourapis]
(Not discussed in detail.)
This document describes recommended processes for the following conversions, in the context
of the JCT-VC experiments on HDR with SDR backward compatibility (a general primary-conversion sketch is given after the list):
- Conversion of SDR BT.709 YCbCr 4:2:0 to SDR BT.709 R′G′B′
- Conversion of SDR BT.709 R′G′B′ to SDR BT.709 YCbCr 4:2:0
- Conversion of SDR BT.709 R′G′B′ to SDR BT.2020 R′G′B′
- Conversion of SDR BT.2020 YCbCr 4:2:0 to SDR BT.2020 R′G′B′
- Conversion of SDR BT.2020 R′G′B′ to SDR BT.2020 YCbCr 4:2:0
- Conversion of SDR BT.2020 R′G′B′ to SDR BT.709 R′G′B′
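The contribution's exact recommended processing chains are not reproduced here; as general background to the gamut-conversion items above, conversion between BT.709 and BT.2020 primaries amounts to a 3x3 matrix applied to linear-light RGB, as in the sketch below (coefficients as commonly rounded, e.g. in ITU-R BT.2087; the transfer-function and 4:2:0 resampling steps of the recommended processes are omitted).

    # Background sketch: linear-light RGB conversion from BT.709 to BT.2020
    # primaries. Transfer-function handling and chroma resampling are omitted.
    M_709_TO_2020 = [
        [0.6274, 0.3293, 0.0433],
        [0.0691, 0.9195, 0.0114],
        [0.0164, 0.0880, 0.8956],
    ]

    def rgb709_to_rgb2020(rgb):
        return [sum(m * c for m, c in zip(row, rgb)) for row in M_709_TO_2020]

    print(rgb709_to_rgb2020([1.0, 0.0, 0.0]))  # BT.709 red expressed in BT.2020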
JCTVC-W0053 HDRTools: Extensions and Improvements [A. M. Tourapis, Y. Su, D. Singer
(Apple Inc), C. Fogg (Movielabs)]
(Not discussed in detail.)
This document provides a short report about the various enhancements and extensions that had
been recently introduced in the HDRTools package.
The following enhancements had been made to the HDRTools package since its last release in
SG16 document C.886, Geneva, SW, Oct. 2015:
- Six (6) new variants of the luma adjustment method, including improved boundary and look-up table considerations. Improvements result in considerable speed improvements as well as in the ability to evaluate different optimization target variants.
- Three (3) new adaptive downsampling filters based on parallel optimization or edge/sample variation analysis.
- Initial implementation for NLMeans video denoising.
- Addition of a modified Wiener filter (Wiener2DDark) that accounts for brightness.
- Support of additional transfer functions (i.e. Hybrid PQ).
- Support for Y4M video files (input only currently; output will be added later).
- RAW and AVI format support for three (3) 4:4:4 10 bit video formats, i.e. V410, R210, and R10K. V210 (a 4:2:2 10 bit format) and R210 formats are currently also supported on Blackmagic cards.
- Improved conversion support from YCbCr raw signals to other formats, including formats that use different color primaries, or require additional format conversion steps. For example, conversion from a YCbCr 4:2:0 BT.709 signal to a YCbCr 4:2:0 or 4:2:2 BT.2020 signal is now fully supported in a single pass.
- Additional filter support for 4:4:4 to 4:2:2 conversion.
7 Plenary discussions, joint meetings, BoG reports, and summary of actions taken
7.1 Project development
Joint meetings are discussed in this section of this report. Additional notes on the same topics
may appear elsewhere in this report. Joint discussions were held on Monday 22 Feb., Tuesday 23
Feb., and Wednesday 24 Feb., as recorded below.
Joint discussion Monday 1630 on JCT-VC related topics (esp. HDR and SCC)
- HDR
  o HDR CE1 vs. CE2
    - Expert testing was suggested to be conducted at the Geneva meeting (fixing intra bit allocation, etc.)
    - It was also planned to define restrictions to be imposed on the encoders
    - Restrictions were to be imposed before the meeting
    - A testing plan was expected to be established during the current meeting (see notes on later joint discussions in which this plan was changed).
  o For HDR backward compatibility: two flavours have been proposed
    - Decoders that do not use new extension data, for "bitstream backward compatibility"
    - Using new extension data for "display backward compatibility"
    - Also there is a concept of "decoder-side content dynamic range adaptation" within the HDR context
    - Note that colour gamut / colour volume as well as dynamic range is a related issue
    - Requirements were asked to be reviewed (it was suggested to check SG16 C.853 and the Feb 2015 report for VCEG, and the MPEG requirements doc for HDR of around the same time)
    - [ TD 362-GEN ] from DVB on BT.2020 colorimetry, both for HDR and for backward compatibility to SDR for "UHD-1 Phase 2" (also interested in NBC)
    - This LS requests a list of approaches
      o For BC: VCEG suggested mentioning HLG (possibly with the alternative transfer characteristics SEI message or the colour remapping information SEI message) or SHVC
      o For NBC or not-necessarily BC: VCEG noted HLG or ST 2084 "PQ"
      o These are not suggestions or tested selections – just lists of possible approaches
    - Info on verification testing was requested
    - There had been previous CableLabs input on BC for BT.709, and ATSC has an interest in BT.709 BC
    - How to test the effectiveness of a BC approach was discussed (e.g. reference / no reference)
    - Further discussion was planned for Tuesday 2–6pm
  o [ TD 387-GEN ] / m38082 SMPTE ST 2094 HDR remapping info (also JCTVC-W0086 for 2094-10, JCTVC-W0133 for 2094-20; more planned) – it was agreed to suggest that the user data registered SEI message approach be used by SMPTE.
  o Issues affecting the SCC text:
    - CRI SEI message semantics (2094-30) clarification – Decision: It was agreed to clarify the text in the other direction than previously planned, so the ID may affect the domain in which the transformation is applied
    - [ TD 385-GEN ] ITU-R WP 6C on BT.[HDR] (equation details, ICtCp) (also JCTVC-W0044)
    - It was remarked that there may be slightly different values or formulas for some equations than found in prior study of the HLG and PQ schemes, and it was asked whether we need two code points in such cases. It was agreed that this does not seem needed.
    - It was asked whether there is an interpretation for out-of-range values that may be representable in colour component values.
    - ICtCp was agreed not to be put into the SCC text currently being finalized at this meeting as an ISO/IEC FDIS, as the BT.[HDR] Recommendation was not yet fully final and some aspects may benefit from further study.
  o HDR10 verification testing was suggested and planned
  o The "good practices" text development was discussed and continued to be of interest
  o Other SEI message and VUI contributions under consideration were highlighted for potential parent-level discussion
    - JCTVC-W0057 Content colour gamut SEI message [H. M. Oh, J. Choi, J.-Y. Suh]
    - JCTVC-W0058 Video usability information signalling for SDR backward compatibility [H. M. Oh, J.-Y. Suh (LGE)]
    - JCTVC-W0098 Effective colour volume SEI [A. Tourapis, Y. Su, D. Singer (Apple Inc.)]
- SCC
  o JCTVC-W0129 Requiring support for more reference pictures in 4:4:4 profiles when using 4:2:0 video (note the MV & mode storage requirements) – no action taken
  o Combined wavefronts & tiles in 4:4:4 SCC (editorial expression) – OK to allow this combination, correcting a prior oversight in text editing
  o It was noted that the SCC High Throughput 4:4:4 profiles, as drafted, require support for 4:2:2. In the discussion, it was noted that 4:2:2 has a memory bandwidth savings and there is a corresponding non-SCC profile with 4:2:2, so it was agreed to keep these profiles as drafted (no action). It was asked whether we should add a 4:2:2 support requirement to other profiles in which the draft does not currently require it, but it was agreed not to do that. (No action.)
  o JCTVC-W0077 Bug fix for DPB operations with CPR (no technical change intended).
  o It was noted that monochrome is not supported in the Screen-Extended Main and Screen-Extended Main 10 profiles, and no action was taken on this observation.
- Other topics noted of joint interest, which should proceed as planned and feasible:
  o SHVC verification testing
  o RExt conformance
  o SHVC conformance
  o SHVC software
Joint discussion Monday 1800 on other topics (primarily outside the scope of JCT-VC – recorded here for information only)
- General: [ TD 349-GEN ] from MPEG, m37578 from VCEG
- AVC
  o VCEG-BA09 Progressive High 10 profile m37952 (significant support at the previous meeting) – OK.
  o [ TD 385-GEN ] ITU-R WP 6C on BT.[HDR] / m37733 (for non-B67 aspects, further study was suggested, and it was also agreed to issue a PDAM for HEVC and a working draft for AVC, to move forward on this for CICP, and to put support into the HDRTools software)
  o VCEG-BA08 / m37675 High level syntax support for ARIB STD-B67
  o VCEG-BA10 / m37954 Generalized constant and non-constant luminance matrix coefficient code points
  o Include corrections of colour transfer characteristics that have been agreed (as necessary, as this may have already been in DAM2) and V0036 changes (i.e., display light versus captured light clarification).
- JCT-3V wrapping up final work
- Future video
  o JVET exploration
    - JVET is tasked with exploring potential compression improvement technology, regardless of whether it differs substantially from HEVC or not; discussions of converting that work into a formal standardization project, whether that is a new standard or not, timelines for standardization, profiles, etc., belong in the parent bodies rather than in JVET
  o VCEG-BA07 / m37709 on video for virtual reality systems
- A plan for an LS reply to ITU-R was noted.
- CICP remains a coordination topic and should be kept aligned with HEVC and AVC (and has been planned for twin text in ITU-T for quite some time)
Tuesday joint session (Salon D, 1400–1800)
This session was considered a joint meeting of MPEG Requirements and Video, VCEG, and
JCT-VC. Some outcomes and discussions of this session may be recorded in integrated form in
other sections of this report.
Below is an overview of the HDR topic areas that had been deferred so far and were considered
desirable to discuss in this session, with counts of JCT-VC contributions in each category, in the
order suggested for discussion (not necessarily exhaustive or sequential):
- CE2 and CE2-related post-decoding signal processing in the 4:2:0 domain that may involve backward compatibility issues (6 primary contributions)
- CE6 post-processing that may reach outside the 4:2:0 domain (further discussion of the summary report, 4 primary contributions, 1 related syntax proposal)
- SEI/VUI (~5 contributions that might need some review, but not all in full depth)
- Future CE planning and CE1 vs. CE2 testing
- Alternative colour spaces and transfer functions (~3)
- CE7 study of HLG (~6 docs that might need review, although they mostly did not seem high priority or to need action)
- Best practices for encoding guidelines (~6, not especially needing joint requirements consideration)
- CE3 video quality metric contribution (1, not especially urgent to review early in the meeting, although it did not seem lengthy to discuss)
Technical BC related review on Tuesday included W0033, W0093, W0094, W0089, W0063,
W0103, W0060, W0119, W0106.
It was discussed whether "CE1" could include use of the CRI SEI message (e.g. with a unity
transformation matrix so it could be applied in the 4:2:0 domain just as a per-component LUT
with up to 33 piece-wise linear segments in the mapping).
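As a reading aid for the per-component LUT mentioned above (the CRI SEI message syntax itself is not reproduced here), the sketch below applies a piecewise-linear look-up table defined by pivot points, of which the CRI SEI message allows up to 33 per component; the pivot values are invented for illustration.

    # Applying a per-component piecewise-linear LUT defined by pivot points.
    # Pivot values below are invented; the CRI SEI message allows up to 33.
    def apply_pwl_lut(x, pivots):
        # pivots: list of (input_value, output_value), sorted by input_value
        if x <= pivots[0][0]:
            return pivots[0][1]
        for (x0, y0), (x1, y1) in zip(pivots, pivots[1:]):
            if x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        return pivots[-1][1]

    lut = [(0, 0), (256, 200), (512, 480), (768, 800), (1023, 1023)]
    print([round(apply_pwl_lut(v, lut)) for v in (0, 100, 400, 900, 1023)])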
Potential CE planning was mentioned (with BC testing).
It was reported that MPEG requirements had concluded on Monday that BC is important (but it
was yet to be clarified which types of BC are needed and how to test it).
Wed joint discussion 1115–1200 (GJS, JRO, JO)
- This session began with a review of the previous status and an update on the review of technical contributions.
- The results of the BoG activity to test CE2 vs. CE1 were reviewed. The test, which was described as a relatively informal "quick and dirty" test, had been conducted as follows (with each test session being 15 minutes long):
  o 36 test points were obtained using "expert viewing" with 1.5H viewing distance and an 11-point MOS scoring (ranging from 0 to 10), with
    - 6 video sequences: Balloon, EBU, Cars, Market, ShowGirl, and Sunrise
    - 3 bit rates, two codecs, eight sessions with two viewers per session
    - Each "basic test cell" was as illustrated below (figure not reproduced here)
  Illustrated preliminary results of the test with rough "error bars" were shown (figure not reproduced here).
- It was noted that satisfactorily reliable results were only obtained for three or four of the test sequences, and particularly that the results for sequences S04 and S06 had undesirable bit rate discrimination between the MOS quality measured at different bit rates, and that therefore those results should be discounted. Further testing would be needed if more reliable results are to be obtained.
- It was concluded that the amount of benefit for HDR/WCG quality that was demonstrated by the post-processing techniques of CE2 (as tested) was not especially clear or large.
- It was also noted that the CRI SEI message can already perform processing similar to some of what was tested:
  o CE2 mode 0 was the use of a per-component LUT
  o The CRI SEI message supports a per-component LUT and a matrix multiply, followed by another per-component LUT (each LUT being piecewise linear with up to 33 segments)
  o There are also other things that can be done for HDR handling using previously designed technology (e.g., the existing tone mapping SEI message and the alternative transfer characteristics SEI message)
- It was thus agreed not to plan to create a new profile (or any other "normative" or implicitly required specification that would imply that something new is needed to properly enable HDR service).
- Further SEI/VUI work may be desirable, but not a new profile, and there is a clear indication that nothing (not even some SEI message) is necessary to properly deliver HDR (for an HDR-specific service).
- Further work will focus more on guidelines and verification testing.
- Study of potential enhancement techniques (e.g. for possible additional SEI messages) is still planned.
- Guidelines could include how to use some SEI message(s).
- Longer-term coding exploration is under way (e.g., in JVET).
- The following meeting resolution was thus prepared for reporting to the parent bodies:
  In consultation with the parent bodies, the JCT-VC reports the conclusion reached at the 114th meeting of WG 11 that the creation of a new HEVC profile or other new normative specification is not necessary to properly enable HDR/WCG video compression using the current edition of HEVC (3rd edition for ISO/IEC). No technology has been identified that would justify the creation of such a new specification. Future work on HDR/WCG in JCT-VC will focus primarily on formal verification testing and guidelines for encoding practices.
7.2 BoGs
The results of the BoG work on CE2 vs. CE1 testing are reported in section 7.1. The results of
HDR viewing using a consumer monitor for some viewing of CE4 & CE5 are reported in section
1.15. Some viewing was also done to illustrate the phenomenon reported in JCTVC-W0035 by
Arris, and it was confirmed that the phenomenon was shown, as noted in section 1.15.
Some plans for additional viewing or other follow-up activities that were recorded in the meeting
notes were not followed up after other decisions were made (e.g., as recorded in section 7.1).
7.3 List of actions taken affecting the draft HEVC specification
The following is a summary, in the form of a brief list, of the actions taken at the meeting that
affect the draft text of the HEVC specification. Both technical and editorial issues are included
(although some relatively minor editorial / bug-fix matters may not be listed). This list is
provided only as a summary – details of specific actions are noted elsewhere in this report and
the list provided here may not be complete and correct. The listing of a document number here
only indicates that the document is related, not that what it proposes was adopted (in whole or in
part).
- A bug fix for DPB operation with CPR was agreed (see JCTVC-W0077).
- Combining wavefronts and tiles in the 4:4:4 SCC profiles was agreed to be allowed (as previously planned but forgotten in the text drafting).
- For the CRI SEI message semantics clarification, it was agreed to clarify the text in the other direction than previously planned, so the ID may affect the domain in which the transformation is applied.
- It was agreed to start work on a new amendment/revision of HEVC (as well as AVC and CICP) to include support for the ICTCP colour representation.
Post-meeting note: two issues arose in post-meeting editing preparation regarding SEI messages
in the SCC text, and an email notification about them was sent to the email reflector by Gary
Sullivan on May 10, 2016, as follows:
- The coded region completion SEI message was intended to be allowed to be a suffix SEI message as well as a prefix SEI message. This is clear from prior proposals and meeting notes, but the drafts had it in the syntax table only as a prefix SEI message. The editors said they would correct this by adding it in the suffix part of the syntax table as well.
- In prior work, some payloadType values have been given numbers above 180. The intent of this was to put SEI messages that are for layered coding extensions (Annexes F, G, H, I) in the higher numbered range and leave a gap below the value 160 where additional new SEI messages could go that are not for layered extensions, so that the numbers could have logical groupings. The highest numbered value defined in 3D work was 181, for an SEI message added in Annex I. When the text was drafted for two new SEI messages, it was apparently not noticed that they should go in that gap, and instead they were just given numbers higher than 181; those numbers shouldn't have been used, since those SEI messages are not related to layered coding extensions. This affects two SEI messages: alternative transfer characteristics and ambient viewing environment. It was reported that these should have been given payloadType values 147 and 148 instead of 182 and 183, and the editors said they would correct that.
8 Project planning
8.1 Text drafting and software quality
The following agreement has been established: the editorial team has the discretion to not
integrate recorded adoptions for which the available text is grossly inadequate (and cannot be
fixed with a reasonable degree of effort), if such a situation hypothetically arises. In such an
event, the text would record the intent expressed by the committee without including a full
integration of the available inadequate text. Similarly, software coordinators have the discretion
to evaluate contributed software for suitability in regard to proper code style, bugginess, etc., and
to not integrate code that is determined inadequate in software quality.
8.2 Plans for improved efficiency and contribution consideration
The group considered it important to have the full design of proposals documented to enable
proper study.
Adoptions need to be based on properly drafted working draft text (on normative elements) and
HM encoder algorithm descriptions – relative to the existing drafts. Proposal contributions
should also provide a software implementation (or at least such software should be made
available for study and testing by other participants at the meeting, and software must be made
available to cross-checkers in CEs).
Suggestions for future meetings included the following generally-supported principles:
- No review of normative contributions without draft specification text
- HM text is strongly encouraged for non-normative contributions
- Early upload deadline to enable substantial study prior to the meeting
- Using a clock timer to ensure efficient proposal presentations (5 min) and discussions
The document upload deadline for the next meeting was planned to be the Monday of the week
preceding the meeting (16 May 2016).
As general guidance, it was suggested to avoid usage of company names in document titles,
software modules etc., and not to describe a technology by using a company name. Also, core
experiment responsibility descriptions should name individuals, not companies. AHG reports and
CE descriptions/summaries are considered to be the contributions of individuals, not companies.
8.3 General issues for CEs and TEs
Group coordinated experiments have been planned in previous work, although none were
established at the current meeting. These may generally fall into one of two categories:
- "Core experiments" (CEs) are the experiments for which there is a draft design and associated test model software that have been established.
- "Tool experiments" (TEs) are the coordinated experiments on coding tools at a more preliminary stage of work than those of "core experiments".
A preliminary description of each experiment is to be approved at the meeting at which the
experiment plan is established.
It is possible to define sub-experiments within particular CEs and TEs, for example designated as
CEX.a, CEX.b, etc., for a CEX, where X is the basic CE number.
As a general rule, it was agreed that each CE should be run under the same testing conditions
using one software codebase, which should be based on the HM software codebase. An
experiment is not to be established as a CE unless there is access given to the participants in (any
part of) the CE to the software used to perform the experiments.
The general agreed common conditions for single-layer coding efficiency experiments were as
described in the prior output document JCTVC-L1100.
The general timeline agreed for CEs was expected to be as follows: 3 weeks to obtain the
software to be used as the basis of experimental feature integration, 1 more week to finalize the
description and participation, 2 more weeks to finalize the software.
A deadline of four weeks after the meeting would be established for organizations to express
their interest in participating in a CE to the CE coordinators and for finalization of the CE
descriptions by the CE coordinator with the assistance and consensus of the CE participants.
Any change in the scope of what technology will be tested in a CE, beyond what is recorded in
the meeting notes, requires discussion on the general JCT-VC reflector.
As a general rule, all CEs are expected to include software available to all participants of the CE,
with software to be provided within two (calendar) weeks after the release of the relevant
software basis (e.g. the SCM). Exceptions must be justified, discussed on the general JCT-VC
reflector, and recorded in the abstract of the summary report.
Final CE descriptions shall clearly describe specific tests to be performed, not describe vague
activities. Activities of a less specific nature are delegated to Ad Hoc Groups rather than
designated as CEs.
Experiment descriptions should be written in a way such that it is understood as a JCT-VC
output document (written from an objective "third party perspective", not a company proponent
perspective – e.g. referring to methods as "improved", "optimized" etc.). The experiment
descriptions should generally not express opinions or suggest conclusions – rather, they should
just describe what technology will be tested, how it will be tested, who will participate, etc.
Responsibilities for contributions to CE work should identify individuals in addition to company
names.
CE descriptions should not contain excessively verbose descriptions of a technology (at least not
unless the technology is not adequately documented elsewhere). Instead, the CE descriptions
should refer to the relevant proposal contributions for any necessary further detail. However, the
complete detail of what technology will be tested must be available – either in the CE description
itself or in referenced documents that are also available in the JCT-VC document archive.
Those who proposed technology in the respective context (by this or the previous meeting) can
propose a CE or CE sub-experiment. Harmonizations of multiple such proposals and minor
refinements of proposed technology may also be considered. Other subjects would not be
designated as CEs.
Any technology must have at least one cross-check partner to establish a CE – a single proponent
is not enough. It is highly desirable to have more than just one proponent and one cross-checker.
It is strongly recommended to plan resources carefully and not waste time on CE work on
technology that may have little or no apparent benefit – it is also within the responsibility of the
CE coordinator to take care of this.
A summary report written by the coordinator (with the assistance of the participants) is expected
to be provided to the subsequent meeting. The review of the status of the work on the CE at the
meeting is expected to rely heavily on the summary report, so it is important for that report to be
well-prepared, thorough, and objective.
A non-final CE plan document would be reviewed and given tentative approval during the
meeting (with guidance expressed to suggest modifications to be made in a subsequent revision).
The CE description for each planned CE would be described in an associated output document
numbered as, for example, JCTVC-W11xx for CExx, where "xx" is the CE number (xx = 01, 02,
etc.). Final CE plans would be recorded as revisions of these documents.
It must be understood that the JCT-VC is not obligated to consider the test methodology or
outcome of a CE as being adequate. Good results from a CE do not impose an obligation on the
group to accept the result (e.g., if the expert judgment of the group is that further data is needed
or that the test methodology was flawed).
Some agreements relating to CE activities have been established as follows:
- Only qualified JCT-VC members can participate in a CE.
- Participation in a CE is possible without a commitment of submitting an input document to the next meeting.
- All software, results, and documents produced in the CE should be announced and made available to all CE participants in a timely manner.
- If combinations of proposals are intended to be tested in a CE, the precise description shall be available with the final CE description; otherwise it cannot be claimed to be part of the CE.
8.4 Alternative procedure for handling complicated feature adoptions
The following alternative procedure had been approved at a preceding meeting as a method to be
applied for more complicated feature adoptions:
1. Run CE + provide software + text, then, if successful,
2. Adopt into HM, including refinements of software and text (both normative & non-normative); then, if successful,
3. Adopt into WD and common conditions.
Of course, we have the freedom (e.g. for simple things) to skip step 2.
8.5 Common test conditions for HEVC Coding Experiments
An output document for common test conditions for HDR/WCG video coding experiments was
issued.
No particular changes were noted w.r.t. the prior CTC for work within the current scope of JCT-VC, particularly for the SCC extensions development, as that work is at such a late stage of
development that such changes did not seem appropriate to consider.
8.6 Software development planning
Software coordinators were asked to work out the scheduling of needed software modifications,
together with the proponents of adopted changes as appropriate (for HDRTools, HM, SCM, and
SHM).
Any adopted proposals where necessary software is not delivered by the scheduled date in a
timely manner may be rejected.
At a previous meeting (Sapporo, July 2014), it was noted that it should be relatively easy to add
MV-HEVC capability to the SHVC software, and it was strongly suggested that this should be
done. This remains desirable. Further study was encouraged to determine the appropriate
approach to future software maintenance, especially in regard to alignment of 3D video software
with the SHM software.
9 Establishment of ad hoc groups
The ad hoc groups established to progress work on particular subject areas until the next meeting
are described below. The discussion list for all of these ad hoc groups was agreed to
be the main JCT-VC reflector (jct-vc@lists.rwth-aachen.de).

JCT-VC project management (AHG1)
(jct-vc@lists.rwth-aachen.de)
Chairs: G. J. Sullivan, J.-R. Ohm (co-chairs); Mtg: N
- Coordinate overall JCT-VC interim efforts.
- Report on project status to JCT-VC reflector.
- Provide a report to next meeting on project coordination status.

HEVC test model editing and errata reporting (AHG2)
(jct-vc@lists.rwth-aachen.de)
Chairs: B. Bross, C. Rosewarne (co-chairs), M. Naccari, J.-R. Ohm, K. Sharman, G. J. Sullivan, Y.-K. Wang (vice-chairs); Mtg: N
- Produce and finalize JCTVC-W1002 HEVC Test Model 16 (HM 16) Update 5 of Encoder Description.
- Collect reports of errata for HEVC.
- Gather and address comments for refinement of these documents.
- Coordinate with AHG3 on software development and HM software technical evaluation to address issues relating to mismatches between software and text.

HEVC HM software development and software technical evaluation (AHG3)
(jct-vc@lists.rwth-aachen.de)
Chairs: K. Sühring (chair), K. Sharman (vice-chair); Mtg: N
- Coordinate development of the HM software and its distribution.
- Produce documentation of software usage for distribution with the software.
- Prepare and deliver HM 16.x software versions and the reference configuration encodings according to JCTVC-L1100 and JCTVC-P1006 common conditions.
- Suggest configuration files for additional testing of tools.
- Investigate how to minimize the number of separate codebases maintained for group reference software.
- Coordinate with AHG2 on HEVC test model editing and errata reporting to identify any mismatches between software and text.

HEVC conformance test development (AHG4)
(jct-vc@lists.rwth-aachen.de)
Chairs: T. Suzuki (chair), J. Boyce, R. Joshi, K. Kazui, A. K. Ramasubramonian, W. Wan, Y. Ye (vice-chairs); Mtg: N
- Study the requirements of HEVC conformance testing to ensure interoperability.
- Prepare and deliver the JCTVC-W1008 SHVC conformance draft 5 specification.
- Develop proposed improvements to the SCC conformance testing draft JCTVC-W1016.
- Discuss work plans and testing methodology to develop and improve HEVC v.1, RExt, SHVC, and SCC conformance testing.
- Establish and coordinate bitstream exchange activities for HEVC.
- Identify needs for HEVC conformance bitstreams with particular characteristics.
- Collect, distribute, and maintain the bitstream exchange database and draft HEVC conformance bitstream test set.

SHVC verification test reporting (AHG5)
(jct-vc@lists.rwth-aachen.de)
Chairs: V. Baroncini, Y.-K. Wang, Y. Ye (co-chairs); Mtg: N
- Edit and produce the final SHVC verification test report.

SCC extensions verification testing (AHG6)
(jct-vc@lists.rwth-aachen.de)
Chairs: H. Yu (chair), R. Cohen, A. Duenas, K. Rapaka, X. Xu, J. Xu (vice-chairs); Mtg: N
- Study test conditions and coding performance analysis methods for verification of SCC coding performance.
- Prepare a proposed draft verification test plan for SCC.

SCC extensions text editing (AHG7)
(jct-vc@lists.rwth-aachen.de)
Chairs: R. Joshi, J. Xu (co-chairs), R. Cohen, S. Liu, G. Sullivan, Y.-K. Wang, Y. Ye (vice-chairs); Mtg: N
- Produce and finalize HEVC screen content coding extensions draft 6 and test model 7 text.
- Gather and address comments for refinement of the test model text.
- Coordinate with AHG8 to address issues relating to mismatches between software and text.

SCC extensions software development (AHG8)
(jct-vc@lists.rwth-aachen.de)
Chairs: B. Li, K. Rapaka (chairs), P. Chuang, R. Cohen, X. Xiu (vice-chairs); Mtg: N
- Coordinate development of the SCM software and its distribution.
- Prepare and deliver the HM 16.x-SCM-7.0 software version and the reference configuration encodings according to JCTVC-U1015.
- Prepare and deliver additional "dot" version software releases and software branches as appropriate.
- Perform analysis and reconfirmation checks of the behaviour of the draft design, and report the results of such analysis.
- Suggest configuration files for additional testing of tools.
- Coordinate with AHG7 to address any identified issues regarding the text and software relationship.

SHVC software development (AHG9)
(jct-vc@lists.rwth-aachen.de)
Chairs: G. Barroux, Y. He, V. Seregin (co-chairs); Mtg: N
- Prepare and deliver the SHM 11.x software (based on HM 16.x) for scalable HEVC extensions draft 4 JCTVC-W1013.
- Generate anchors and templates based on common test conditions.
- Discuss and identify additional issues related to SHVC software.

Test sequence material (AHG10)
(jct-vc@lists.rwth-aachen.de)
Chairs: T. Suzuki, V. Baroncini, R. Cohen (co-chairs), E. Francois, T. K. Tan, P. Topiwala, S. Wenger, H. Yu (vice-chairs); Mtg: N
- Maintain the video sequence test material database for development of HEVC and its RExt, SHVC and SCC extensions.
- Identify, collect, and make available a variety of video sequence test material, especially focusing on new needs for HDR test material and corresponding SDR test material.
- Study coding performance and characteristics in relation to video test materials.
- Identify and recommend appropriate test materials and corresponding test conditions for use in development of HEVC and its extensions.
- Coordinate with the activities in AHG6 regarding screen content coding testing, with AHG11 regarding HDR/WCG visual content testing, and with AHG12 regarding HDR/WCG verification test planning.

HDR/WCG visual testing (AHG11)
(jct-vc@lists.rwth-aachen.de)
Chairs: V. Baroncini, P. Topiwala, E. Alshina (co-chairs); Mtg: N
- Study content characteristics and identify appropriate HDR/WCG test sequences for visual testing.
- Identify and develop test methodologies for HDR/WCG visual testing, including consideration and characterization of test equipment.

HDR/WCG verification test planning (AHG12)
(jct-vc@lists.rwth-aachen.de)
Chairs: A. K. Ramasubramonian, R. Sjöberg (co-chairs); Mtg: N
- Finalize the verification test plan JCTVC-W1018.
- Generate and collect the encoded bitstreams for the HDR/WCG verification test.
- Identify and coordinate arrangements toward the preparation of test sites for subjective testing.
- Perform the subjective testing as described in JCTVC-W1018.
- Analyze the test results and prepare a draft report of the subjective testing.

HDR/WCG coding practices guideline development (AHG13)
(jct-vc@lists.rwth-aachen.de)
Chairs: J. Samuelsson (chair), C. Fogg, A. Norkin, J. Strom, J. Sole, A. Tourapis, P. Yin (vice-chairs); Mtg: Tel.
- Identify and study technical approaches to single-layer HDR coding using the existing HEVC standard (up to the SCC-extended edition) with ST 2084 transfer characteristics, including potential use of SEI messages.
- Study and consider potential guidelines for use of the ICTCP colour representation with the HEVC standard.
- Study and propose improvements of draft guidelines W10xx for HEVC single-layer coding using ST 2084 with YCbCr NCL narrow range.
- Conduct one or more teleconferences to discuss these matters, with appropriate advance notice.

HDR/WCG technology for backward compatibility and display adaptivity (AHG14)
(jct-vc@lists.rwth-aachen.de)
Chairs: E. Francois, W. Husak, D. Rusanovskyy (co-chairs); Mtg: N
- Study the technical characteristics of single-layer coding with HEVC using existing SEI and VUI indicators for backward compatibility and display adaptivity.
- Study the technical characteristics of two-layer coding with SHVC for backward compatibility and display adaptivity.
- Identify and study the technical characteristics of proposed approaches to backward compatibility and display adaptivity with potential additional new SEI messages.
- Study and propose test conditions for associated experiments.
10 Output documents
The following documents were agreed to be produced or endorsed as outputs of the meeting.
Names recorded below indicate the editors responsible for the document production.
JCTVC-W1000 Meeting Report of the 23rd JCT-VC Meeting [G. J. Sullivan, J.-R. Ohm (chairs)]
[2016-05-13] (near next meeting)
Remains valid – not updated: JCTVC-H1001 HEVC software guidelines [K. Sühring, D. Flynn,
F. Bossen (software coordinators)]
JCTVC-W1002 High Efficiency Video Coding (HEVC) Test Model 16 (HM 16) Update 5 of
Encoder Description [C. Rosewarne (primary editor), B. Bross, M. Naccari, K. Sharman,
G. J. Sullivan (co-editors)] (WG 11 N 16048) [2016-05-13] (near next meeting)
JCTVC-W1003 Draft text for ICTCP support in HEVC (Draft 1) [P. Yin, C. Fogg, G. J. Sullivan,
A. Tourapis (editors)] (WG 11 N 16047) [2016-03-11] (2 weeks)
JCTVC-W1004 SHVC Verification Test Report [Y. Ye, V. Baroncini, Y.-K. Wang (editors)]
(WG 11 N 16051) [2016-03-18] (3 weeks)
JCTVC-W1005 HEVC Screen Content Coding Draft Text 6 [R. Joshi, S. Liu, G. J. Sullivan, Y.-K.
Wang, J. Xu, Y. Ye (editors)] (WG 11 N 16046 Text of ISO/IEC FDIS 23008-2:201X 3rd Edition)
[2016-04-22] (8 weeks)
Basic elements (no changes of features at the current meeting):
- IBC
- Adaptive colour transform
- Palette mode
- Adaptive MV resolution
- Intra boundary filtering disabling
Remains valid – not reissued: JCTVC-P1006 Common test conditions and software reference
configurations for HEVC range extensions [D. Flynn, C. Rosewarne, K. Sharman (editors)]
Remains valid – not reissued: JCTVC-V1007 SHVC Test Model 11 (SHM 11) Introduction and
Encoder Description [G. Barroux, J. Boyce, J. Chen, M. M. Hannuksela, Y. Ye (editors)] (WG 11
N 15778)
JCTVC-W1008 Conformance Testing for SHVC Draft 5 [J. Boyce, A. K. Ramasubramonian
(editors)] (Included in WG 11 N 16061 Text of ISO/IEC FDIS 23008-8:201x 2nd Edition) [2016-04-22] (8 weeks)
(Merged into a new edition)
Remains valid – not updated JCTVC-Q1009 Common SHM Test Conditions and Software
Reference Configurations [V. Seregin, Y. He (editors)]
Remains valid – not updated JCTVC-O1010 Guidelines for Conformance Testing Bitstream
Preparation [T. Suzuki, W. Wan (editors)]
JCTVC-W1011 Reference Software for Screen Content Coding Draft 1 (WG 11 N 16057 Text of
ISO/IEC 23008-5:201x/PDAM1) [K. Rapaka, B. Li, X. Xiu (editors)] [2016-03-11] (2.5 months
prior to next meeting)
JCTVC-W1012 Conformance Testing for Improved HEVC Version 1 Testing and Format Range
Extensions Profiles Draft 6 (Included in WG 11 N 16061 Text of ISO/IEC FDIS 23008-8:201x
2nd Edition) [T. Suzuki, K. Kazui (editors)] [2016-04-22] (8 weeks)
(Merged into a new edition)
JCTVC-W1013 Reference software for Scalable HEVC (SHVC) Extensions Draft 4 (Merged into
WG 11 N 16055 Text of ISO/IEC FDIS 23008-5:201x Reference Software for High Efficiency
Video Coding [2nd ed.]) [Y. He, V. Seregin (editors)] [2016-04-22] (8 weeks)
(Merged into a new edition)
JCTVC-V1014 Screen Content Coding Test Model 7 Encoder Description (SCM 7) [R. Joshi,
J. Xu, R. Cohen, S. Liu, Y. Ye (editors)] (WG 11 N 16049) [2016-04-30] (near next meeting)
Remains valid – not updated: JCTVC-U1015 Common Test Conditions for Screen Content
Coding [H. Yu, R. Cohen, K. Rapaka, J. Xu (editors)]
JCTVC-W1016 Conformance Testing for HEVC Screen Content Coding (SCC) Extensions Draft
1 (MPEG WD) [R. Joshi, J. Xu] (6 weeks)
JCTVC-W1017 Conversion and Coding Practices for HDR/WCG Video [J. Samuelsson] (now)
(Using local activity for QP setting as well as luma)
JCTVC-W1018 Verification Test Plan for HDR/WCG Video Coding Using HEVC Main 10 Profile
[R. Sjöberg, V. Baroncini, A. K. Ramasubramonian] [2016-03-09] (2 weeks)
(with the existing HEVC standard)
JCTVC-W1020 CTC for HDR/WCG video coding experiments [E. François, J. Sole, J. Strom,
P. Yin] (now)
Remains valid – not updated: JCTVC-L1100 Common Test Conditions and Software Reference
Configurations for HM [F. Bossen (editor)]
11 Future meeting plans, expressions of thanks, and closing of the meeting
Future meeting plans were established according to the following guidelines:
- Meeting under ITU-T SG 16 auspices when it meets (starting meetings on the Thursday of the first week and closing it on the Tuesday or Wednesday of the second week of the SG 16 meeting – a total of 6–6.5 meeting days), and
- Otherwise meeting under ISO/IEC JTC 1/SC 29/WG 11 auspices when it meets (starting meetings on the Friday prior to such meetings and closing it on the last day of the WG 11 meeting – a total of 7.5 meeting days).
Some specific future meeting plans (to be confirmed) were established as follows:
- Thu. 26 May – Wed. 1 June 2016, 24th meeting under ITU-T auspices in Geneva, CH.
- Fri. 14 – Fri. 21 Oct. 2016, 25th meeting under WG 11 auspices in Chengdu, CN.
- Thu. 12 – Wed. 18 Jan. 2017, 26th meeting under ITU-T auspices in Geneva, CH.
- Fri. 31 Mar. – Fri. 7 Apr. 2017, 27th meeting under WG 11 auspices in Hobart, AU.
The agreed document deadline for the 24th JCT-VC meeting is Monday 16 May 2016. Plans for
scheduling of agenda items within that meeting remained TBA.
The JCT-VC thanked all the organizations and individuals who contributed to the SHVC verification test, including:
- Cable Television Laboratories, Netflix, NTIA (The National Telecommunications and Information Administration), Technicolor, and Vidyo for providing the test sequences.
- InterDigital Communications Inc., Microsoft Corporation, Nokia Corporation, Qualcomm Inc., Technicolor, and Vidyo for providing financial support for this activity.
- InterDigital Communications Inc. and Qualcomm Inc. for providing the resources to prepare the test material.
- Giacomo Baroncini of GBTech for conducting the subjective test.
- Dr. Vittorio Baroncini (MPEG Test Chairman) for his guidance and coordination of the subjective test.
The USNB of WG11 and the companies that financially sponsored the meeting and associated social event – Apple, CableLabs, Google, Huawei, InterDigital, Microsoft, Mitsubishi Electric, Netflix and Qualcomm – were thanked for the excellent hosting of the 23rd meeting of the JCT-VC.
Companies that provided equipment that was used for subjective viewing of video – Dolby, InterDigital, Qualcomm, Samsung and Sony – were also thanked.
The JCT-VC meeting was closed at approximately 1240 hours on Friday, 26 February 2016.
Annex A to JCT-VC report:
List of documents
JCT-VC number
MPEG number
JCTVC-W0001
m38086
JCTVC-W0002
B. Bross, C. Rosewarne,
JCT-VC AHG report:
M. Naccari, J.-R. Ohm,
2016-02-18 2016-02-19 2016-02-19
m38071
HEVC test model editing
K. Sharman,
23:53:05
18:03:47
18:03:47
and errata reporting (AHG2) G. Sullivan, Y.-K.
Wang
JCTVC-W0003
JCT-VC AHG report:
HEVC HM software
2016-02-18 2016-02-19 2016-02-19
m38058
development and software
11:40:11
19:57:30
21:24:56
technical evaluation
(AHG3)
K. Suehring,
K. Sharman
JCTVC-W0004
JCT-VC AHG report:
2016-02-18 2016-02-20 2016-02-20
m38059
HEVC conformance test
12:29:55
21:13:56
21:13:56
development (AHG4)
T. Suzuki, J. Boyce,
R. Joshi, K. Kazui,
A. Ramasubramonian,
Y. Ye
JCTVC-W0005
m38072
JCTVC-W0006
H. Yu, R. Cohen,
JCT-VC AHG report: SCC
2016-02-18 2016-02-18 2016-02-18
A. Duenas, K. Rapaka,
m38066
coding performance analysis
20:15:33
20:16:18
20:16:18
J. Xu, X. Xu (AHG
(AHG6)
chairs)
JCTVC-W0007
JCT-VC AHG report: SCC
2016-02-19 2016-02-19 2016-02-19
m38076
extensions text editing
04:03:19
04:04:39
04:04:39
(AHG7)
R. Joshi, J. Xu (AHG
co-chairs), Y. Ye,
S. Liu, G. Sullivan,
R. Cohen (AHG vicechairs)
JCTVC-W0008
JCT-VC AHG report: SCC
2016-02-19 2016-02-19 2016-02-19
m38080
extensions software
07:48:25
07:52:15
07:52:15
development (AHG8)
K. Rapaka, B. Li (AHG
co-chairs), R. Cohen,
T.-D. Chuang, X. Xiu,
M. Xu (AHG vicechairs)
JCTVC-W0009
JCT-VC AHG report:
2016-02-19 2016-02-19 2016-02-19
m38089
Complexity of SCC
18:11:55
18:25:15
18:32:04
extensions (AHG9)
A. Duenas (AHG chair),
M. Budagavi, R. Joshi,
S.-H. Kim, P. Lai,
W. Wang, W. Xiu (cochairs)
JCTVC-W0010
m38060
JCTVC-W0011
JCT-VC AHG report:
2016-02-19 2016-02-19 2016-02-19
m38088
SHVC test model editing
17:41:43
17:42:35
22:13:21
(AHG11)
JCTVC-W0012
m38087
JCT-VC AHG report:
2016-02-19 2016-02-19 2016-02-19
SHVC software
17:30:13
17:34:03
17:34:03
development (AHG12)
V. Seregin, H. Yong,
G. Barroux
JCTVC-W0013
m37728
VCEG AHG report: HDR
2016-02-09 2016-02-09 2016-02-19
Video Coding (VCEG
21:52:34
21:53:11
18:14:08
AHG5)
J. Boyce, E. Alshina,
J. Samuelsson (AHG
chairs)
JCTVC-W0014
m38093
2016-02-19 2016-02-19 2016-02-19 MPEG AHG report: HDR
23:24:40
23:28:04
23:28:04
and WCG Video Coding
C. Fogg, E. Francois,
W. Husak, A. Luthra
Created
First upload Last upload
Title
JCT-VC AHG report:
2016-02-19 2016-02-19 2016-02-19
Project management
17:15:46
18:07:39
21:58:14
(AHG1)
JCT-VC AHG report:
2016-02-19 2016-02-19 2016-02-19
SHVC verification testing
00:09:08
00:10:17
00:10:17
(AHG5)
Authors
G. J. Sullivan, J.-R.
Ohm
V. Baroncini, Y.-K.
Wang, Y. Ye
T. Suzuki, V. Baroncini,
2016-02-18 2016-02-21 2016-02-21 JCT-VC AHG report: Test
R. Cohen, T. K. Tan,
12:44:38
03:28:34
03:28:34
sequence material (AHG10)
S. Wenger, H. Yu
G. Barroux, J. Boyce,
J. Chen, M. Hannuksela,
G. J. Sullivan, Y.-K.
Wang, Y. Ye
JCTVC-W0021
m37605
2016-01-27 2016-01-27 2016-01-27 Report of HDR Core
15:20:27
15:22:15
15:22:15
Experiment 1
J. Strom, J. Sole, Y. He
JCTVC-W0022
m37739
2016-02-10 2016-02-10 2016-02-20 Report of HDR Core
00:29:58
07:33:50
08:23:30
Experiment 2
D. Rusanovskyy(Qualco
mm), E. Francois
(Technicolor),
L. Kerofsky
(InterDigital),
T. Lu( Dolby),
K. Minoo(Arris)
JCTVC-W0023
m37753
2016-02-10 2016-02-10 2016-02-19 Report of HDR Core
17:49:05
17:52:40
22:22:57
Experiment 3
V. Baroncini,
L. Kerofsky,
D. Rusanovskyy
JCTVC-W0024
2016-01-10 2016-01-10 2016-02-11 Report of HDR Core
m37537
21:01:49
21:13:08
02:12:52
Experiment 4
P. Topiwala (FastVDO),
E. Alshina (Samsung),
A. Smolic (Disney),
S. Lee (Qualcomm)
JCTVC-W0025
2016-01-10 2016-01-10 2016-02-11 Report of HDR Core
m37538
21:05:36
21:12:29
02:14:47
Experiment 5
P. Topiwala (FastVDO),
R. Brondijk (Philips),
J. Sole (Qualcomm),
A. Smolic (Disney),
JCTVC-W0026
2016-02-10 2016-02-10 2016-02-21 Report of HDR Core
m37738
00:03:57
00:04:26
00:34:58
Experiment 6
Robert Brondijk,
Sebastien Lasserre,
Dmytro Rusanovskyy,
yuwen he
JCTVC-W0027
Report of HDR Core
2016-02-08 2016-02-10 2016-02-20 Experiment 7: On the visual
m37681
21:09:52
23:01:03
20:10:22
quality of HLG generated
HDR and SDR video
A. Luthra (Arris),
E. Francois
(Technicolor), L. van de
Kerkhof (Philips)
JCTVC-W0028
m38181
2016-02-24 2016-02-24 2016-02-24 Report of HDR Core
23:36:18
23:36:38
23:42:31
Experiment 8
K. Minoo (Arris), T. Lu,
P. Yin (Dolby),
L. Kerofsky
(InterDigital),
D. Rusanovskyy
(Qualcomm),
E. Francois
(Technicolor)
JCTVC-W0031
Description of the reshaper
2016-01-10 2016-01-11 2016-01-11 parameters derivation
m37536
20:36:57
05:49:15
05:49:15
process in ETM reference
software
JCTVC-W0032
m37539
2016-01-11 2016-01-11 2016-01-12 Encoder optimization for
06:23:48
06:27:23
08:36:13
HDR/WCG coding
JCTVC-W0033
m37691
HDR CE2: report of CE2.b- E. Francois, Y. Olivier,
2016-02-09 2016-02-09 2016-02-21
1 experiment (reshaper
C. Chevance
08:58:26
23:00:12
20:48:34
setting 1)
(Technicolor)
JCTVC-W0034
m37614
2016-02-03 2016-02-09 2016-02-09 CE-6 test4.2: Color
20:34:39
21:50:15
21:50:15
enhancement
JCTVC-W0035
Some observations on visual
2016-01-12 2016-01-12 2016-02-15 quality of Hybrid LogA. Luthra, D. Baylon,
m37543
01:35:54
17:28:26
18:42:09
Gamma (HLG) TF
K. Minoo, Y. Yu, Z. Gu
processed video (CE7)
JCTVC-W0036
m37602
2016-01-14 2016-01-14 2016-02-09
Withdrawn
19:29:53
20:43:35
23:21:15
JCTVC-W0037
m37535
2016-01-08 2016-01-08 2016-01-14
Hybrid Log-Gamma HDR
17:01:05
17:05:06
22:15:30
JCTVC-W0038
m37615
Y. He, Y. Ye,
2016-02-03 2016-02-09 2016-02-22
HEVC encoder optimization L. Kerofsky
20:41:01
21:50:50
02:44:36
(InterDigital)
JCTVC-W0039
m37619
Luma delta QP adjustment
2016-02-04 2016-02-04 2016-02-21
based on video statistical
07:32:39
07:34:57
04:32:53
information
JCTVC-W0040
m37624 2016-02-04 2016-02-08 2016-02-10 Adaptive Gamut Expansion A. Dsouza, Aishwarya,
Y. He, Y. Ye,
L. Kerofsky
(InterDigital)
Y. He, L. Kerofsky,
Y. Ye (InterDigital)
A. Cotton, M. Naccari
(BBC)
J. Kim, J. Lee,
E. Alshina,
Y. Park(Samsung)
11:28:39
07:23:05
05:52:06
for Chroma Components
K. Pachauri (Samsung)
JCTVC-W0041
m37625
2016-02-04 2016-02-08 2016-02-08 HDR-VQM Reference Code K. Pachauri, S. Sahota
11:32:18
07:24:08
07:24:08
and its Usage
(Samsung)
JCTVC-W0042
m37627
2016-02-05 2016-02-07 2016-02-22
SCC encoder improvement
04:05:38
03:28:45
02:16:56
JCTVC-W0043
m37630
2016-02-05 2016-02-05 2016-02-05 CE7: Cross-check report for M. Naccari, M. Pindoria
09:46:17
18:38:29
18:38:29
Experiment 7.2
(BBC)
JCTVC-W0044
m37664
2016-02-08 2016-02-09 2016-02-22 BT.HDR and its
04:17:39
23:27:23
18:08:02
implications for VUI
C. Fogg
JCTVC-W0045
m37666
2016-02-08 2016-02-15 2016-02-15 Hybrid Log Gamma
04:22:05
08:52:10
08:52:10
observations
C. Fogg
JCTVC-W0046
m37680
Recommended conversion
E. Francois, K. Minoo,
2016-02-08 2016-02-09 2016-02-09 process of YCbCr 4:2:0 10b
R. van de Vleuten,
18:42:33
20:04:05
20:04:05
SDR content from BT.709
A. Tourapis
to BT.2020 color gamut
JCTVC-W0047
m37668
2016-02-08 2016-02-10 2016-02-10
ICtCp testing
04:30:41
18:29:07
18:29:07
C. Fogg
JCTVC-W0048
m37669
2016-02-08 2016-02-10 2016-02-10
HDR-10 status update
04:33:09
19:56:22
19:56:22
C. Fogg
JCTVC-W0049
m37670
2016-02-08
04:34:37
JCTVC-W0050
m38148
2016-02-22 2016-02-22 2016-02-25
Overview of ICtCp
19:29:37
22:39:26
02:53:15
J. Pytlarz (Dolby)
JCTVC-W0051
m37682
Enhanced Filtering and
2016-02-08 2016-02-10 2016-02-21
Interpolation Methods for
21:49:54
02:35:57
16:43:32
Video Signals
A. M. Tourapis, Y. Su,
D. Singer (Apple Inc)
JCTVC-W0052
m37683
2016-02-08 2016-02-09 2016-02-15 Enhanced Luma Adjustment A. M. Tourapis, Y. Su,
21:51:56
23:17:08
19:12:40
Methods
D. Singer (Apple Inc)
JCTVC-W0053
m37684
2016-02-08 2016-02-10 2016-02-10 HDRTools: Extensions and
21:54:05
03:25:22
03:25:22
Improvements
JCTVC-W0054
AHG on HDR and WCG:
2016-02-09 2016-02-09 2016-02-09
m37687
Average Luma Controlled
03:02:20
18:35:38
18:35:38
Adaptive dQP
JCTVC-W0055
m37688
2016-02-09 2016-02-10 2016-02-10 HDR CE5: Report of
04:46:46
00:02:31
00:02:31
Experiment 5.3.2
W. Dai, M. Krishnan,
P. Topiwala (FastVDO)
JCTVC-W0056
m37689
2016-02-09 2016-02-09 2016-02-17 CE1-related: LUT-based
05:51:41
05:54:41
06:44:06
luma sample adjustment
C. Rosewarne,
V. Kolesnikov (Canon)
JCTVC-W0057
m37690
2016-02-09 2016-02-09 2016-02-09 Content colour gamut SEI
08:29:08
10:21:23
10:21:23
message
H. Mook Oh, J. Choi, J.Y. Suh
JCTVC-W0058
m37692
Video usability information
2016-02-09 2016-02-09 2016-02-09
signaling for SDR backward H. M. Oh, J.-Y. Suh
09:07:07
10:23:20
10:23:20
compatibility
JCTVC-W0059
m37693
HDR CE6: report of CE62016-02-09 2016-02-10 2016-02-16
4.6b experiment (ETM
09:13:38
18:57:43
22:55:39
using SEI message)
E. Francois,
C. Chevance, Y. Olivier
(Technicolor)
JCTVC-W0060
m37694
2016-02-09 2016-02-09 2016-02-09 Re-shaper syntax extension
09:28:36
10:25:19
10:25:19
for HDR CE6
Hyun Mook Oh, JongYeul Suh
JCTVC-W0061
m37695
2016-02-09 2016-02-09 2016-02-09 CE7: Results Core
09:55:25
10:01:27
10:01:27
Experiment 7.1 test a and b
M. Naccari, A. Cotton,
M. Pindoria
JCTVC-W0062
K. Andersson,
P. Wennersten,
2016-02-09 2016-02-09 2016-02-25 Non-normative HM encoder R. Sjöberg,
m37696
10:55:18
22:57:16
20:13:22
improvements
J. Samuelsson, J. Ström,
P. Hermansson,
M. Pettersson (Ericsson)
JCTVC-W0063
m37698 2016-02-09 2016-02-09 2016-02-20 HDR CE6: Core
Y.-J. Chang, P.-H. Lin,
C.-L. Lin, J.-S. Tu, C.C. Lin (ITRI)
Withdrawn
A. M. Tourapis, Y. Su,
D. Singer (Apple Inc),
C. Fogg (Movielabs)
A. Segall, J. Zhao
(Sharp), J. Strom,
M. Pettersson,
K. Andersson (Ericsson)
Wiebe de Haan, Robert
12:42:40
22:04:01
04:58:32
Experiments 4.3 and 4.6a:
Brondijk, Rocco Goris,
Description of the Philips
Rene van der Vleuten
CE system in 4:2:0 and with
automatic reshaper
parameter derivation
JCTVC-W0064
m37699
2016-02-09 2016-02-09 2016-02-19 HDR CE6-related: Cross
12:51:14
14:58:17
18:29:15
Check of CE6.46b
Robert Brondijk, Wiebe
de Haan
JCTVC-W0065
m37700
2016-02-09
12:57:24
JCTVC-W0066
CE1-related: Optimization
of HEVC Main 10 coding in C. Jung, S. Yu, Q. Lin
2016-02-09 2016-02-09 2016-02-09
m37701
HDR videos Based on
(Xidian Univ.), M. Li,
13:01:26
13:07:10
13:07:10
Disorderly Concealment
P. Wu (ZTE)
Effect
JCTVC-W0067
m37702
HDR CE7: xCheck 1a 1b
2016-02-09 2016-02-10 2016-02-20
and Core Experiement 2a
13:04:24
00:10:46
03:45:29
and 2b
Robert Brondijk, Wiebe
de Haan, Rene van der
Vleuten, Rocco Goris
JCTVC-W0068
m37703
CE2-related: Adaptive
2016-02-09 2016-02-09 2016-02-09 Quantization-Based HDR
13:09:44
13:14:47
13:14:47
video Coding with HEVC
Main 10 Profile
C. Jung, Q. Fu, G. Yang
(Xidian Univ.), M. Li,
P. Wu (ZTE)
JCTVC-W0069
m37704
2016-02-09 2016-02-09 2016-02-21 HDR CE5: Report on Core
13:12:16
14:55:35
02:37:38
Experiment CE5.3.1
Robert Brondijk, Wiebe
de Haan, Jeroen Stessen
JCTVC-W0070
m37705
2016-02-09 2016-02-09 2016-02-20 CE5-related: xCheck of
13:15:09
14:56:42
14:13:10
CE5.3.2
Robert Brondijk, Rocco
Goris
JCTVC-W0071
CE2-related: Adaptive PQ:
Adaptive Perceptual
C. Jung, S. Yu, P. Ke
2016-02-09 2016-02-09 2016-02-09
m37706
Quantizer for HDR video
(Xidian Univ.), M. Li,
13:16:50
13:21:14
13:21:14
Coding with HEVC Main 10 P. Wu (ZTE)
Profile
JCTVC-W0072
Alan Chalmers,
2016-02-09 2016-02-11 2016-02-11 Highly efficient HDR video Jonathan Hatchett, Tom
m37707
13:46:12
16:21:08
16:21:08
compression
Bashford Rogers, Kurt
Debattista
JCTVC-W0073
m37708
JCTVC-W0074
CIECAM02 based
2016-02-09 2016-02-09 2016-02-24 Orthogonal Color Space
m37710
15:07:13
15:30:36
01:09:20
Transformation for
HDR/WCG Video
JCTVC-W0075
m37712
Palette encoder
2016-02-09 2016-02-09 2016-02-21
improvements for the 4:2:0
16:01:37
16:17:23
21:32:25
chroma format and lossless
C. Gisquet, G. Laroche,
P. Onno (Canon)
JCTVC-W0076
m37713
Comments on alignment of
2016-02-09 2016-02-09 2016-02-09
SCC text with multi-view
16:01:50
16:18:35
16:18:35
and scalable
C. Gisquet, G. Laroche,
P. Onno (Canon)
JCTVC-W0077
m37714
Bug fix for DPB operations
2016-02-09 2016-02-09 2016-02-25
X. Xu, S. Liu, S. Lei
when current picture is a
16:24:24
22:20:28
16:30:23
(MediaTek)
reference picture
JCTVC-W0078
m37716
Bottom-up hash value
2016-02-09 2016-02-09 2016-02-21
calculation and validity
17:35:32
17:46:31
19:23:26
check for SCC
W. Xiao (Xidian Univ.),
B. Li, J. Xu (Microsoft)
JCTVC-W0079
m37717
HDR CE7-related:
2016-02-09 2016-02-09 2016-02-09
Additional results for
18:23:13
18:26:07
18:26:07
Experiment 7.2a
M. Naccari,
M. Pindoria, A. Cotton
(BBC)
JCTVC-W0080
m37718
J. Samuelsson,
2016-02-09 2016-02-15 2016-02-15 Crosscheck of HDR CE5.3.1
M. Pettersson, J. Strom
18:33:14
15:15:59
15:15:59
(JCTVC-W0069)
(Ericsson)
JCTVC-W0081
m37719
Crosscheck of Luma delta
J. Samuelsson, J. Ström,
2016-02-09 2016-02-16 2016-02-16
QP adjustment based on
P. Hermansson
18:39:58
11:25:29
11:25:29
video statistical information (Ericsson)
Withdrawn
2016-02-09 2016-02-17 2016-02-19 HDR CE6: Cross-check of
14:22:29
18:21:13
18:30:15
6.46a
W. Dai, M. Krishnan,
P. Topiwala (FastVDO)
Y. Kwak,
Y. Baek(UNIST),
J. Lee, J. W. Kang, H.Y. Kim(ETRI)
(JCTVC-W0039)
JCTVC-W0082
m37720
2016-02-09 2016-02-09 2016-02-09
Withdrawn
19:33:24
19:39:31
19:39:31
JCTVC-W0083
m37722
2016-02-09 2016-02-20 2016-02-20
Withdrawn
20:36:32
14:35:38
14:35:38
JCTVC-W0084
T. Lu, F. Pu, P. Yin,
T. Chen, W. Husak
2016-02-09 2016-02-09 2016-02-20 HDR CE2: CE2.a-2, CE2.c,
m37723
(Dolby), Y. He,
21:05:09
21:57:17
08:58:01
CE2.d and CE2.e-3
L. Kerofsky, Y. Ye
(InterDigital)
JCTVC-W0085
HDR CE2 related: Further
2016-02-09 2016-02-16 2016-02-19
m37724
Improvement of JCTVC21:08:38
21:35:50
07:47:20
W0084
JCTVC-W0086
m37725
JCTVC-W0087
P. J. Warren,
Description of Color Graded
2016-02-09 2016-02-09 2016-02-09
S. M. Ruggieri,
m37726
SDR content for HDR/WCG
21:13:29
21:59:16
21:59:16
W. Husak, T. Lu,
Test Sequences
P. Yin, F. Pu (Dolby)
JCTVC-W0088
SDR backward
2016-02-09 2016-02-09 2016-02-09
m37727
compatibility requirements
21:46:55
21:51:42
21:51:42
for HEVC HDR extension
JCTVC-W0089
m37729
JCTVC-W0090
HDR CE3: Results of
2016-02-09 2016-02-15 2016-02-15 subjective evaluations
m37730
22:22:53
14:58:41
14:58:41
conducted with the DSIS
method
Martin Rerabek,
Philippe Hanhart,
Touradj Ebrahimi
JCTVC-W0091
HDR CE3: Benchmarking
2016-02-09 2016-02-15 2016-02-15 of objective metrics for
m37731
22:29:10
14:58:53
14:58:53
HDR video quality
assessment
Philippe Hanhart,
Touradj Ebrahimi
JCTVC-W0092
Description of the
2016-02-09 2016-02-09 2016-02-10 Exploratory Test Model
m37732
23:05:53
23:12:09
15:39:18
(ETM) for HDR/WCG
extension of HEVC
K. Minoo (Arris), T. Lu,
P. Yin (Dolby),
L. Kerofsky
(InterDigital),
D. Rusanovskyy
(Qualcomm),
E. Francois
(Technicolor)
JCTVC-W0093
m37734
JCTVC-W0094
HDR CE2: Report of CE2.bZhouye Gu, Koohyar
2016-02-09 2016-02-10 2016-02-21 2, CE2.c and CE2.d
m37735
Minoo, Yue Yu, David
23:30:23
09:35:55
07:55:13
experiments (for reshaping
Baylon, Ajay Luthra
setting 2)
JCTVC-W0095
m37736
Y. He, Y. Ye
(InterDigital), Hendry,
Y.-K Wang
(Qualcomm),
V. Baroncini (FUB)
JCTVC-W0096
Proposed editorial
2016-02-09 2016-02-20 2016-02-20 improvements to HEVC
m37737
23:58:28
19:06:01
19:06:01
Screen Content Coding
Draft Text 5
T. Lu, F. Pu, P. Yin,
T. Chen, W. Husak
(Dolby), Y. He,
L. Kerofsky, Y. Ye
(InterDigital)
R. Yeung, S. Qu, P. Yin,
2016-02-09 2016-02-09 2016-02-25 Indication of SMPTE 2094T. Lu, T. Chen,
21:10:04
21:58:24
16:32:33
10 metadata in HEVC
W. Husak (Dolby)
HDR CE2-related: some
2016-02-09 2016-02-15 2016-02-19
experiments on reshaping
22:02:39
17:37:18
19:19:40
with input SDR
L. van de Kerkhof,
W. de Haan (Philips),
E. Francois
(Technicolor)
Y. Olivier, E. Francois,
C. Chevance
HDR CE2: Report of CE2.aYue Yu, Zhouye Gu
2016-02-09 2016-02-10 2016-02-21 3, CE2.c and CE2.d
Koohyar Mino, David
23:24:49
09:34:25
07:54:13
experiments (for reshaping
Baylon, Ajay Luthra
setting 2)
2016-02-09 2016-02-10 2016-02-27 SHVC verification test
23:38:58
05:16:13
00:57:10
results
R. Joshi, G. Sullivan,
J. Xu, Y. Ye, S. Liu, Y.K. Wang, G. Tech
JCTVC-W0097
m37740
2016-02-10 2016-02-10 2016-02-21 HDR CE2: CE2.c-Chroma
00:41:24
07:49:18
00:15:50
QP offset study report
D.Rusanovskyy, J.Sole,
A.K.Ramasubramonian,
D. B. Sansli,
M. Karczewicz
(Qualcomm)
JCTVC-W0098
m37741
2016-02-10 2016-02-10 2016-02-10 Effective Colour Volume
03:33:39
03:44:15
03:44:15
SEI
A. Tourapis, Y. Su,
D. Singer (Apple Inc.)
2016-02-10 2016-02-10 2016-02-21 HDR CE5 test 3: Constant
04:19:52
07:36:09
19:23:20
Luminance results
J. Sole,
D. Rusanovskyy,
A. Ramasubramonian,
D. Bugdayci,
M. Karczewicz
(Qualcomm)
HDR CE2-related: Results
2016-02-10 2016-02-10 2016-02-21
for combination of CE1
04:21:59
09:48:12
21:05:21
(anchor 3.2) and CE2
J. Sole,
A. Ramasubramonian,
D. Rusanovskyy,
D. Bugdayci,
M. Karczewicz
(Qualcomm)
A. K. Ramasubramonian
, J. Sole,
D. Rusanovskyy,
D. Bugdayci,
M. Karczewicz
JCTVC-W0099
JCTVC-W0100
m37742
m37743
JCTVC-W0101
m37744
2016-02-10 2016-02-10 2016-02-21 HDR CE2: Report on
07:08:41
10:48:46
22:00:00
CE2.a-1 LCS
JCTVC-W0102
m37745
2016-02-10
08:06:18
Withdrawn
2016-02-10 2016-02-10 2016-02-23 HDR CE6: Test 4.1
08:22:29
10:11:34
19:53:45
Reshaper from m37064
D. B. Sansli,
A. K. Ramasubramonian
, D. Rusanovskyy,
J. Sole, M. Karczewicz
(Qualcomm)
JCTVC-W0103
m37746
JCTVC-W0104
Comparison of Compression
Performance of HEVC
B. Li, J. Xu,
2016-02-10 2016-02-10 2016-02-10 Screen Content Coding
m37747
G. J. Sullivan
08:55:26
08:58:13
08:58:13
Extensions Test Model 6
(Microsoft)
with AVC High 4:4:4
Predictive profile
JCTVC-W0105
Evaluation of Chroma
2016-02-10 2016-02-19 2016-02-19 Subsampling for High
m37748
09:32:56
15:04:38
15:04:38
Dynamic Range Video
Compression
JCTVC-W0106
m37749
Evaluation of Backward2016-02-10 2016-02-20 2016-02-20
compatible HDR
09:36:36
16:35:32
16:35:32
Transmission Pipelines
M. Azimi, R. Boitard,
M. T. Pourazad,
P. Nasiopoulos
JCTVC-W0107
m37758
Closed form HDR 4:2:0
2016-02-10 2016-02-10 2016-02-18
chroma subsampling (HDR
23:54:29
23:57:42
08:05:29
CE1 and AHG5 related)
Andrey Norkin (Netflix)
JCTVC-W0108
m37759
Cross-check report on
2016-02-11 2016-02-11 2016-02-11
HDR/WCG CE1 new
02:18:23
14:53:04
14:53:04
anchor generation
D. Jun, J. Lee,
J. W. Kang (ETRI)
JCTVC-W0109
m37765
2016-02-11 2016-02-19 2016-02-19 CE6-related: xCheck of CE6
Robert Brondijk
09:12:58
18:29:56
18:29:56
4.1
JCTVC-W0110
HDR CE7: Comments on
visual quality and
E. Francois, Y. Olivier,
2016-02-11 2016-02-11 2016-02-11 compression performance of
m37771
C. Chevance
17:05:32
17:23:41
18:45:31
HLG for SDR backward
(Technicolor)
compatible HDR coding
system
JCTVC-W0111
m37795
JCTVC-W0112
m37857 2016-02-15 2016-02-19 2016-02-19 HDR CE2: crosscheck of
Cross-check of JCTVC2016-02-13 2016-02-17 2016-02-17
W0107: Closed form HDR
00:50:28
00:23:06
00:23:06
4:2:0 chroma subsampling
R. Boitard,
M. T. Pourazad,
P. Nasiopoulos
A. Tourapis
C. Chevance, Y. Olivier
14:59:17
11:22:36
11:22:36
CE2.a-2, CE2.c, CE2.d and
CE2.e-3 (JCTVC-W0084)
(Technicolor)
JCTVC-W0113
m37858
HDR CE2: crosscheck of
2016-02-15 2016-02-19 2016-02-19
CE2.a-1 LCS (JCTVC14:59:43
18:20:55
18:20:55
W0101)
C. Chevance, Y. Olivier
(Technicolor)
JCTVC-W0114
m37859
2016-02-15 2016-02-19 2016-02-19 HDR CE6: crosscheck of
14:59:56
15:47:00
15:47:00
CE6.4.3 (JCTVC-W0063)
C. Chevance, Y. Olivier
(Technicolor)
JCTVC-W0115
H. R. Tohidypour,
Software implementation of
2016-02-16 2016-02-19 2016-02-20
M. Azimi,
m37950
visual information fidelity
00:52:50
15:30:20
16:45:28
M. T. Pourazad,
(VIF) for HDRTools
P. Nasiopoulos
JCTVC-W0116
m37967
JCTVC-W0117
Cross-check of Non2016-02-16 2016-02-17 2016-02-19 normative HM encoder
m37968
07:57:56
04:17:14
07:21:34
improvements (JCTVCW0062)
B. Li, J. Xu (Microsoft)
JCTVC-W0118
Crosscheck for bottom-up
2016-02-16 2016-02-19 2016-02-19 hash value calculation and
m37971
08:39:06
10:45:11
10:45:11
validity check for SCC
(JCTVC-W0078)
W. Zhang (Intel)
JCTVC-W0119
m37979
Some considerations on hue M. Pindoria,
2016-02-16 2016-02-16 2016-02-19
shifts observed in HLG
M. Naccari, T. Borer,
10:45:26
10:55:36
07:40:40
backward compatible video A. Cotton (BBC)
JCTVC-W0120
m37982
2016-02-16 2016-02-17 2016-02-17 Cross-check of JCTVC11:19:39
06:50:37
06:50:37
W0085
J. Lee, J. W. Kang,
D. Jun, H. Ko (ETRI)
JCTVC-W0121
m37995
2016-02-17 2016-02-19 2016-02-19 HDR CE2: cross-check
02:01:27
07:22:24
07:22:24
report of JCTVC-W0033
T. Lu, F. Pu, P. Yin
JCTVC-W0122
m37996
2016-02-17 2016-02-20 2016-02-20 HDR CE2: cross-check
02:09:23
01:24:50
01:24:50
report of JCTVC-W0097
T. Lu, F. Pu, P. Yin
JCTVC-W0123
HDR CE6: cross-check
2016-02-17 2016-02-17 2016-02-17 report of test4.2: Color
m38008
06:58:56
10:50:39
10:50:39
enhancement (JCTVCW0034)
JCTVC-W0124
m38009
2016-02-17
06:58:58
JCTVC-W0125
m38039
CE6-related: Cross check
2016-02-17 2016-02-17 2016-02-21
report for Test 4.1 Reshaper M. Naccari (BBC)
19:56:41
19:59:11
21:14:35
from m37064
JCTVC-W0126
m38061
Preliminary study of
2016-02-18 2016-02-18 2016-02-19
filtering performance in
13:20:40
13:22:16
05:25:30
HDR10 pipeline
JCTVC-W0127
Cross-check of JCTVC2016-02-18 2016-02-19 2016-02-19 W0075 on palette encoder
m38065
18:26:04
17:37:39
17:37:39
improvements for lossless
mode
JCTVC-W0128
HDR CE2: Cross-check of
2016-02-18 2016-02-19 2016-02-21 JCTVC-W0084 (combines
m38067
D. Rusanovskyy
20:30:49
20:23:08
09:51:35
solution for CE2.a-2, CE2.c,
CE2.d and CE2.e-3)
JCTVC-W0129
m38069
2016-02-18 2016-02-18 2016-02-19 SCC Level Limits Based on S. Deshpande, S.-H.
22:55:36
22:58:20
18:35:23
Chroma Format
Kim
JCTVC-W0130
m38081
Crosscheck for further
2016-02-19 2016-02-20 2016-02-20
improvement of HDR CE2
09:23:25
00:45:24
00:45:24
(JCTVC-W0085)
JCTVC-W0131
HDR CE2: cross-check
2016-02-19 2016-02-19 2016-02-19 report of CE2.b-2, CE2.c
m38091
18:33:32
18:38:04
21:29:28
and CE2.d experiments
(JCTVC-W0094)
Cross-check of SCC encoder
2016-02-16 2016-02-16 2016-02-16
improvement (JCTVCB. Li, J. Xu (Microsoft)
07:57:02
08:14:41
08:14:41
W0042)
E. Alshina(Samsung)
Withdrawn
K. Pachauri, Aishwarya
V. Seregin (Qualcomm)
Y. Wang, W. Zhang,
Y. Chiu (Intel)
C. Chevance, Y. Olivier
(Technicolor)
JCTVC-W0132
m38118
2016-02-21 2016-02-21 2016-02-21 Information and Comments
J. Holm
21:44:08
21:46:38
21:46:38
on Hybrid Log Gamma
JCTVC-W0133
m38127
Wiebe de Haan, Leon
2016-02-22 2016-02-22 2016-02-26 Indication of SMPTE 2094van de Kerkhof, Rutger
02:27:58
02:32:32
16:47:23
20 metadata in HEVC
Nijland
JCTVC-W0134
m38157
SHM software
X. Huang (USTC),
2016-02-23 2016-02-23 2016-02-24
modifications for multi-view M. M. Hannuksela
04:08:33
04:22:30
04:46:38
support
(Nokia)
JCTVC-W0135
m38159
Some Elementary Thoughts
2016-02-23 2016-02-23 2016-02-23
on HDR Backward
P. Topiwala (FastVDO)
06:34:55
18:57:49
18:57:49
Compatibility
JCTVC-W0136
m37671
2016-02-08
04:36:30
JCTVC-W0138
m38177
2016-02-24 2016-02-24 2016-02-24
Withdrawn
20:24:58
20:26:31
20:26:31
JCTVC-W1000
2016-03-04
m38208
21:22:49
JCTVC-W1002
High Efficiency Video
2016-03-04 2016-05-18 2016-05-18 Coding (HEVC) Test Model
m38212
22:21:45
15:27:23
15:27:23
16 (HM 16) Update 5 of
Encoder Description
JCTVC-W1003
m38210
P. Yin, C. Fogg,
2016-03-04 2016-03-19 2016-03-19 Draft text for ICtCp support
G. J. Sullivan,
21:36:52
06:06:22
06:06:22
in HEVC (Draft 1)
A. Tourapis (editors)
JCTVC-W1004
m38213
2016-03-04 2016-03-19 2016-03-19 Verification Test Report for Y. Ye, V. Baroncini, Y.22:25:26
02:02:35
02:02:35
Scalable HEVC (SHVC)
K. Wang (editors)
JCTVC-W1005
2016-03-04 2016-03-24 2016-05-14 HEVC Screen Content
m38214
22:30:44
06:50:38
04:00:14
Coding Draft Text 6
JCTVC-W1008
m38215
2016-03-04 2016-05-21 2016-05-21 Conformance Testing for
22:34:16
10:06:24
10:06:24
SHVC Draft 5
JCTVC-W1011
m38209
2016-03-04
21:33:14
JCTVC-W1012
2016-03-04
m38217
22:42:50
JCTVC-W1013
m38216
Reference software for
2016-03-04 2016-05-03 2016-05-03
Scalable HEVC (SHVC)
22:38:40
20:43:32
20:43:32
Extensions Draft 4
JCTVC-W1014
m38218
Screen Content Coding Test R. Joshi, J. Xu,
2016-03-04 2016-05-23 2016-05-23
Model 7 Encoder
R. Cohen, S. Liu, Y. Ye
22:45:07
07:12:07
07:12:07
Description (SCM 7)
(editors)
JCTVC-W1016
Conformance Testing for
2016-03-04 2016-05-24 2016-05-24 HEVC Screen Content
m38219
22:46:53
08:09:20
08:09:20
Coding (SCC) Extensions
Draft 1
JCTVC-W1017
m38176
JCTVC-W1018
Verification Test Plan for
2016-02-26 2016-02-26 2016-03-11 HDR/WCG Video Coding
m38193
03:49:05
19:38:33
15:39:16
Using HEVC Main 10
Profile
R. Sjöberg,
V. Baroncini,
A. K. Ramasubramonian
(editors)
JCTVC-W1020
m38220 2016-03-04 2016-03-05 2016-03-05 Common Test Conditions
E. Francois, J. Sole,
Withdrawn
Meeting Report of the 23rd
JCT-VC Meeting (19–26
February 2016, San Diego,
USA)
Reference Software for
Screen Content Coding
Draft 1
G. J. Sullivan, J.-R.
Ohm (chairs)
C. Rosewarne, B. Bross,
M. Naccari,
K. Sharman,
G. J. Sullivan (editors)
R. Joshi, S. Liu,
G. J. Sullivan, Y.-K.
Wang, J. Xu, Y. Ye
(editors)
J. Boyce,
A. K. Ramasubramonian
(editors)
K. Rapaka, B. Li,
X. Xiu (editors)
Conformance Testing for
Improved HEVC Version 1 T. Suzuki, K. Kazui
Testing and Format Range (editors)
Extensions Profiles Draft 6
Conversion and Coding
2016-02-24 2016-02-24 2016-02-27
Practices for HDR/WCG
20:02:57
20:04:25
03:43:32
Video, Draft 1
Y. He, V. Seregin
(editors)
R. Joshi, J. Xu (editors)
J. Samuelsson (editor)
22:57:00
02:36:50
02:36:50
for HDR/WCG Video
Coding Experiments
J. Ström, P. Yin
(editors)
Annex B to JCT-VC report:
List of meeting participants
The participants of the twenty-third meeting of the JCT-VC, according to a sign-in sheet
circulated during the meeting sessions (approximately 159 people in total), were as follows:
1. Anne Aaron (Netflix)
2. Massimiliano Agostinelli (Trellis Management)
3. Yong-Jo Ahn (Kwangwoon Univ. (KWU))
4. Elena Alshina (Samsung Electronics)
5. Peter Amon (Siemens AG)
6. Alessandro Artusi (Trellis)
7. Maryam Azimi (Univ. British Columbia)
8. Giovanni Ballocca (Sisvel Tech)
9. Yukihiro Bandoh (NTT)
10. Vittorio Baroncini (Fondazione Ugo Bordoni)
11. Guillaume Barroux (Fujitsu Labs)
12. David Baylon (Arris)
13. Ronan Boitard (Univ. British Columbia)
14. Jill Boyce (Intel)
15. Robert Brondijk (Philips)
16. Madhukar Budagavi (Samsung Research)
17. Done Bugdayci Sansli (Qualcomm Tech.)
18. Eric (Chi W.) Chai (Real Commun.)
19. Alan Chalmers (Univ. Warwick)
20. Yao-Jen Chang (ITRI Intl.)
21. Chun-Chi Chen (NCTU/ITRI)
22. Jianle Chen (Qualcomm)
23. Peisong Chen (Broadcom)
24. Weizhong Chen (Huawei)
25. Yi-Jen Chiu (Intel)
26. Hae Chul Choi (Hanbat Nat. Univ.)
27. Jangwan Choi (LG Electronics)
28. Kiho Choi (Samsung Electronics)
29. Takeshi Chujoh (Toshiba)
30. Gordon Clare (Orange Labs FT)
31. David Daniels (Sky)
32. Wiebe De Haan (Philips)
33. Andre Dias (BBC)
34. Alberto Duenas (NGCodec)
35. Alexey Filippov (Huawei)
36. Chad Fogg (MovieLabs)
37. Edouard François (Technicolor)
38. Wen Gao (Harmonic)
39. Christophe Gisquet (Canon Research France)
40. Dan Grois (Fraunhofer HHI)
41. Zhouye Gu (Arris)
42. Miska Hannuksela (Nokia)
43. Ryoji Hashimoto (Renesas)
44. Shinobu Hattori (Fox)
45. Yuwen He (InterDigital Commun.)
46. Fnu Hendry (Qualcomm)
47. Dzung Hoang (Freescale Semiconductor)
48. Seungwook Hong (Arris)
49. Yu-Wen Huang (MediaTek)
50. Walt Husak (Dolby Labs)
51. Roberto Iacoviello (RAI)
52. Atsuro Ichigaya (NHK (Japan Broadcasting Corp.))
53. Tomohiro Ikai (Sharp)
54. Masaru Ikeda (Sony)
55. Shunsuke Iwamura (NHK (Japan Broadcasting Corp.))
56. Wonkap Jang (Vidyo)
57. Rajan Joshi (Qualcomm)
58. Dongsan Jun (ETRI)
59. Cheolkon Jung (Xi'dian Univ.)
60. Jung Won Kang (Electronics and Telecom Research Institute (ETRI))
61. Kei Kawamura (KDDI)
62. Louis Kerofsky (Interdigital)
63. Dae Yeon Kim (Chips & Media)
64. Ilseung Kim (Hanyang Univ.)
65. Jungsun Kim (MediaTek USA)
66. Namuk Kim (Sungkyunkwan Univ. (SKKU))
67. Seung-Hwan Kim (Sharp)
68. Hyunsuk Ko (Electronics and Telecom Research Institute (ETRI))
69. Konstantinos Konstantinides (Dolby Labs)
70. Patrick Ladd (Comcast Cable)
71. PoLin (Wang) Lai (Mediatek USA)
72. Jani Lainema (Nokia)
73. Fabrice Le Léannec (Technicolor)
74. Dongkyu Lee (Kwangwoon Univ.)
75. Jinho Lee (Electronics and Telecom Research Institute (ETRI))
76. Junghyun Lee (Hanyang Univ.)
77. Yung-Lyul Lee (Sejong Univ.)
78. Shawmin Lei (MediaTek)
79. Min Li (Qualcomm)
80. Ming Li (ZTE)
81. Chongsoon Lim (Panasonic)
82. Sung-Chang Lim (Electronics and Telecom Research Institute (ETRI))
83. Ching-Chieh Lin (ITRI Intl.)
84. Shan Liu (MediaTek)
85. Ang Lu (Zhejiang Univ.)
86. Taoran Lu (Dolby)
87. Ajay Luthra (Arris)
88. Xiang Ma (Huawei)
89. Dimitrie Margarit (Sigma Designs)
90. Detlev Marpe (Fraunhofer HHI)
91. Akira Minezawa (Mitsubishi Electric)
92. Koohyar Minoo (Arris)
93. Matteo Naccari (BBC R&D)
94. Ohji Nakagami (Sony)
95. Tung Nguyen (Fraunhofer HHI)
96. Andrey Norkin (Netflix)
97. Hyun Mook Oh (LG Electronics)
98. Jens-Rainer Ohm (RWTH Aachen Univ.)
99. Peshela Pahelawatta (DirecTV)
100. Hao Pan (Apple)
101. Manindra Parhy (Nvidia)
102. Tom Paridaens (Ghent Univ. - iMinds)
103. Nidhish Parikh (Nokia)
104. Min Woo Park (Samsung Electronics)
105. Youngo Park (Samsung Electronics)
106. Pierrick Philippe (Orange Labs FT)
107. Tangi Poirier (Technicolor)
108. Mahsa T. Pourazad (Telus Commun. & UBC)
109. Fangjun Pu (Dolby Labs)
110. Adarsh Krishnan Ramasubramonian (Qualcomm Tech.)
111. Krishnakanth Rapaka (Qualcomm Tech.)
112. Justin Ridge (Nokia)
113. Christopher Rosewarne (CiSRA / Canon)
114. Dmytro Rusanovskyy (Qualcomm)
115. Jonatan Samuelsson (LM Ericsson)
116. Chris Seeger (NBC Universal)
117. Andrew Segall (Sharp)
118. Vadim Seregin (Qualcomm)
119. Masato Shima (Canon)
120. Robert Skupin (Fraunhofer HHI)
121. Joel Sole Rojals (Qualcomm)
122. Ranga Ramanujam Srinivasan (Texas Inst.)
123. Alan Stein (Technicolor)
124. Jacob Ström (Ericsson)
125. Yeping Su (Apple)
126. Karsten Sühring (Fraunhofer HHI)
127. Gary Sullivan (Microsoft)
128. Haiwei Sun (Panasonic)
129. Teruhiko Suzuki (Sony)
130. Maxim Sychev (Huawei Tech.)
131. Yasser Syed (Comcast Cable)
132. Han Boon Teo (Panasonic)
133. Herbert Thoma (Fraunhofer IIS)
134. Emmanuel Thomas (TNO)
135. Pankaj Topiwala (FastVDO)
136. Alexandros Tourapis (Apple)
137. Jih-Sheng Tu (ITRI international)
138. Yi-Shin Tung (ITRI USA / MStar Semi.)
139. Rahul Vanam (InterDigital Commun.)
140. Wade Wan (Broadcom)
141. Ye-Kui Wang (Qualcomm)
142. James Welch (Ineoquest)
143. Stephan Wenger (Vidyo)
144. Ping Wu (ZTE UK)
145. Xiangjian Wu (Kwangwoon Univ.)
146. Xiaoyu Xiu (InterDigital Commun.)
147. Jizheng Xu (Microsoft)
148. Xiaozhong Xu (MediaTek)
149. Haitao Yang (Huawei Tech.)
150. Yan Ye (InterDigital Commun.)
151. Peng Yin (Dolby Labs)
152. Lu Yu (Zhejiang Univ.)
153. Yue Yu (Arris)
154. Wenhao Zhang (Intel)
155. Xin Zhao (Qualcomm)
156. Zhijie Zhao (Huawei Tech.)
157. Jiantong Zhou (Huawei Tech.)
158. Minhua Zhou (Broadcom)
159. Feng Zou (Qualcomm Tech.)
Annex I – JCT-3V report
Source: Jens Ohm and Gary Sullivan, Chairs
Summary
The Joint Collaborative Team on 3D Video Coding Extension Development (JCT-3V) of ITU-T
WP3/16 and ISO/IEC JTC 1/ SC 29/ WG 11 held its fourteenth meeting during 22–26 Feb 2016
at the San Diego Marriott La Jolla in San Diego, US. The JCT-3V meeting was held under the
chairmanship of Dr Jens-Rainer Ohm (RWTH Aachen/Germany) and Dr Gary Sullivan
(Microsoft/USA). For rapid access to particular topics in this report, a subject categorization is
found (with hyperlinks) in section 1.14 of this document.
The meeting was mainly held in a "single track" fashion, with few breakout activities (as
documented in this report) held in parallel. All relevant decisions were made in plenaries when
one of the chairs was present.
The JCT-3V meeting sessions began at approximately 1900 hours on Monday 22 Feb 2016.
Meeting sessions were held as documented in the sections below until the meeting was closed at
approximately 1758 hours on Thursday 25 Feb 2016. Approximately 9 people attended the JCT-3V meeting, and approximately 6 input documents and 5 AHG reports were discussed (for one
document, JCT3V-N0027, no presenter was available). The meeting took place in a collocated
fashion with a meeting of WG11 – one of the two parent bodies of the JCT-3V. The subject
matter of the JCT-3V meeting activities consisted of work on 3D extensions of the Advanced
Video Coding (AVC) and the High Efficiency Video Coding (HEVC) standards.
The primary goals of the meeting were to review the work that was performed in the interim
period since the thirteenth JCT-3V meeting in producing

Draft 4 of Alternative Depth Info SEI Message (ISO/IEC 14496-10:2014/DAM3, for
ballot)

Software of Alternative Depth Info SEI (ISO/IEC 14496-5:2001/PDAM42, for ballot)

MV-HEVC Software Draft 5 (ISO/IEC 23008-5:2015 FDAM4, for ballot)

3D-HEVC Software Draft 3

3D-HEVC Verification Test Report
 MV-HEVC Verification Test Plan
Furthermore, the JCT-3V reviewed technical input documents. The JCT-3V produced 4
particularly important output documents from the meeting:

Draft 2 of Software of Alternative Depth Info SEI (ISO/IEC 14496-5:2001/DAM42, for
ballot)

MV-HEVC and 3D-HEVC Conformance Draft 4 (to be integrated in ISO/IEC FDIS
23008-8:201x 2nd ed., for ballot)

3D-HEVC Software Draft 4 (to be integrated in ISO/IEC FDIS 23008-5:201x 2nd ed., for
ballot)

MV-HEVC Verification Test Report
For the organization and planning of its future work, the JCT-3V established 3 "Ad Hoc Groups"
(AHGs) to progress the work on particular subject areas. The next JCT-3V meeting is planned
during 27–31 May 2016 under ITU-T auspices in Geneva, CH.
The document distribution site http://phenix.it-sudparis.eu/jct3v/ was used for distribution of all
documents.
The reflector to be used for discussions by the JCT-3V and all of its AHGs is jct-3v@lists.rwth-aachen.de.
1 Administrative topics
1.1 Organization
The ITU-T/ISO/IEC Joint Collaborative Team on 3D Video Coding Extension Development
(JCT-3V) is a group of video coding experts from the ITU-T Study Group 16 Visual Coding
Experts Group (VCEG) and the ISO/IEC JTC 1/ SC 29/ WG 11 Moving Picture Experts Group
(MPEG). The parent bodies of the JCT-3V are ITU-T WP3/16 and ISO/IEC
JTC 1/SC 29/WG 11.
1.2 Meeting logistics
The JCT-3V meeting sessions began at approximately 1900 hours on Monday 22 Feb 2016.
Meeting sessions were held as documented in the sections below until the meeting was closed at
approximately 1758 hours on Thursday 25 Feb 2016. Approximately 9 people attended the JCT-3V meeting, and approximately 6 input documents and 5 AHG reports were discussed (for one
document, JCT3V-N0027, no presenter was available). The meeting took place in a collocated
fashion with a meeting of WG11 – one of the two parent bodies of the JCT-3V. The subject
matter of the JCT-3V meeting activities consisted of work on 3D extensions of the Advanced
Video Coding (AVC) and the High Efficiency Video Coding (HEVC) standards.
Information regarding preparation and logistics arrangements for the meeting had been provided
via the email reflector jct-3v@lists.rwth-aachen.de and at http://wftp3.itu.int/av-arch/jct3v-site/2016_02_N_SanDiego/.
1.3 Documents and document handling considerations
1.3.1 General
The documents of the JCT-3V meeting are listed in Annex A of this report. The documents can
be found at http://phenix.it-sudparis.eu/jct3v/.
Registration timestamps, initial upload timestamps, and final upload timestamps are listed in
Annex A of this report (as of the time of preparation of this report).
Document registration and upload times and dates listed in Annex A and in headings for
documents in this report are in Paris/Geneva time. Dates mentioned for purposes of describing
events at the meeting (rather than as contribution registration and upload times) follow the local
time at the meeting facility.
Highlighting of recorded decisions in this report:

Decisions made by the group that affect the normative content of the draft standard are
identified in this report by prefixing the description of the decision with the string
"Decision:".

Decisions that affect the reference software but have no normative effect on the text are
marked by the string "Decision (SW):".

Decisions that fix a bug in the specification (an error, oversight, or messiness) are marked
by the string "Decision (BF):".

Decisions regarding things that correct the text to properly reflect the design intent, add
supplemental remarks to the text, or clarify the text are marked by the string "Decision
(Ed.):".

Decisions regarding simplification or improvement of design consistency are marked by
the string "Decision (Simp.):".

Decisions regarding complexity reduction (in terms of processing cycles, memory
capacity, memory bandwidth, line buffers, number of contexts, number of context-coded
bins, etc.) … "Decision (Compl.):"
This meeting report is based primarily on notes taken by the chairs and projected (if possible) for
real-time review by the participants during the meeting discussions. The preliminary notes were
also circulated publicly by ftp (http://wftp3.itu.int/av-arch/jct3v-site/) during the meeting on a
daily basis. Considering the high workload of this meeting and the large number of contributions,
it should be understood by the reader that 1) some notes may appear in abbreviated form, 2)
summaries of the content of contributions are often based on abstracts provided by contributing
proponents without an intent to imply endorsement of the views expressed therein, and 3) the
depth of discussion of the content of the various contributions in this report is not uniform.
Generally, the report is written to include as much discussion of the contributions and
discussions as is feasible in the interest of aiding study, although this approach may not result in
the most polished output report.
1.3.2 Late and incomplete document considerations
The formal deadline for registering and uploading non-administrative contributions had been
announced as Monday, 15 Feb 2016. Non-administrative documents uploaded after 2359 hours
in Paris/Geneva time Tuesday 16 Feb 2016 were considered "officially late". One late
contribution (JCT3V-N0027) was noted; however, it was not presented, as no owner was
available when it was called for discussion.
1.3.3 Outputs of the preceding meeting
The report documents of the previous meeting, particularly the meeting report (JCT3V-M1000),
the 3D-HEVC verification test report (JCT3V-M1001), the MV-HEVC verification test plan v4
(JCT3V-M1002), the Draft 4 of Texture/Depth packing SEI (JCT3V-M1006) and associated
software draft 1 (JCT3V-M1005), MV-HEVC Software Draft 5 (JCT3V-M1009), and the 3D-HEVC Software Draft 3 (JCT3V-M1012), which had been produced in the interim period, were
approved. The HTM reference software package produced by AHG3 on software development,
and the software technical evaluations were also approved.
All output documents of the previous meeting and the software had been made available in a
reasonably timely fashion.
The chairs asked if there were any issues regarding potential mismatches between perceived
technical content prior to adoption and later integration efforts. It was also asked whether there
was adequate clarity of precise description of the technology in the associated proposal
contributions.
1.4 Attendance
The list of participants in the JCT-3V meeting can be found in Annex B of this report.
The meeting was open to those qualified to participate either in ITU-T WP3/16 or ISO/IEC
JTC 1/ SC 29/ WG 11 (including experts who had been personally invited by the Chairs as
permitted by ITU-T or ISO/IEC policies).
Participants had been reminded of the need to be properly qualified to attend. Those seeking
further information regarding qualifications to attend future meetings may contact the Chairs.
1.5 Agenda
The agenda for the meeting was as follows:

IPR policy reminder and declarations

Opening remarks

Contribution document allocation

Reports of ad hoc group activities

Review of results of previous meeting

Consideration of contributions and communications on 3D video coding project guidance

Consideration of 3D video coding technology proposal contributions

Consideration of information contributions

Coordination activities

Future planning: Determination of next steps, discussion of working methods,
communication practices, establishment of coordinated experiments, establishment of
AHGs, meeting planning, refinement of expected standardization timeline, other planning
issues

Other business as appropriate for consideration
1.6 IPR policy reminder
Participants were reminded of the IPR policy established by the parent organizations of the JCT-3V and were referred to the parent body websites for further information. The IPR policy was
summarized for the participants.
The ITU-T/ITU-R/ISO/IEC common patent policy shall apply. Participants were particularly
reminded that contributions proposing normative technical content shall contain a non-binding
informal notice of whether the submitter may have patent rights that would be necessary for
implementation of the resulting standard. The notice shall indicate the category of anticipated
licensing terms according to the ITU-T/ITU-R/ISO/IEC patent statement and licensing
declaration form.
This obligation is supplemental to, and does not replace, any existing obligations of parties to
submit formal IPR declarations to ITU-T/ITU-R/ISO/IEC.
Participants were also reminded of the need to formally report patent rights to the top-level
parent bodies (using the common reporting form found on the database listed below) and to
make verbal and/or document IPR reports within the JCT-3V as necessary in the event that they
are aware of unreported patents that are essential to implementation of a standard or of a draft
standard under development.
Some relevant links for organizational and IPR policy information are provided below:

http://www.itu.int/ITU-T/ipr/index.html (common patent policy for ITU-T, ITU-R, ISO,
and IEC, and guidelines and forms for formal reporting to the parent bodies)

http://ftp3.itu.int/av-arch/jct3v-site (JCT-3V contribution templates)

http://www.itu.int/ITU-T/studygroups/com16/jct-3v/index.html (JCT-3V general information and founding charter)

http://www.itu.int/ITU-T/dbase/patent/index.html (ITU-T IPR database)
 http://www.itscj.ipsj.or.jp/sc29/29w7proc.htm (JTC 1/ SC 29 Procedures)
It is noted that the ITU TSB director's AHG on IPR had issued a clarification of the IPR
reporting process for ITU-T standards, as follows, per SG 16 TD 327 (GEN/16):
"TSB has reported to the TSB Director‘s IPR Ad Hoc Group that they are receiving Patent
Statement and Licensing Declaration forms regarding technology submitted in Contributions
that may not yet be incorporated in a draft new or revised Recommendation. The IPR Ad
Hoc Group observes that, while disclosure of patent information is strongly encouraged as
early as possible, the premature submission of Patent Statement and Licensing Declaration
forms is not an appropriate tool for such purpose.
In cases where a contributor wishes to disclose patents related to technology in Contributions,
this can be done in the Contributions themselves, or informed verbally or otherwise in
written form to the technical group (e.g. a Rapporteur's group), disclosure which should then
be duly noted in the meeting report for future reference and record keeping.
It should be noted that the TSB may not be able to meaningfully classify Patent Statement
and Licensing Declaration forms for technology in Contributions, since sometimes there are
no means to identify the exact work item to which the disclosure applies, or there is no way
to ascertain whether the proposal in a Contribution would be adopted into a draft
Recommendation.
Therefore, patent holders should submit the Patent Statement and Licensing Declaration form
at the time the patent holder believes that the patent is essential to the implementation of a
draft or approved Recommendation."
The chairs invited participants to make any necessary verbal reports of previously-unreported
IPR in draft standards under preparation, and opened the floor for such reports: No such verbal
reports were made.
1.7 Software copyright disclaimer header reminder
It was noted that it is our understanding according to the practices of the parent bodies to make
reference software available under copyright license header language which is the BSD license
with preceding sentence declaring that contributor or third party rights are not granted, as e.g.
recorded in N10791 of the 89th meeting of ISO/IEC JTC 1/ SC 29/ WG 11. Both ITU and
ISO/IEC will be identified in the <OWNER> and <ORGANIZATION> tags in the header. This
software header is currently used in the process of designing the new HEVC standard and for
evaluating proposals for technology to be included in this design. Additionally, after
development of the coding technology, the software will be published by ITU-T and ISO/IEC as
an example implementation of the 3D video standard(s) and for use as the basis of products to
promote adoption of the technology. This is likely to require further communication with and
between the parent organizations.
The ATM, HTM and MFC software packages that are used in JCT-3V follow these principles.
The view synthesis software used for non-normative post processing is included in the HTM
package and also has the BSD header.
1.8 Communication practices
The documents for the meeting can be found at http://phenix.it-sudparis.eu/jct3v/. Furthermore,
the site http://ftp3.itu.int/av-arch/jct3v-site was used for distribution of the contribution
document template and circulation of drafts of this meeting report.
Communication of JCT-3V is performed via the list jct-3v@lists.rwth-aachen.de (to subscribe or
unsubscribe, go to https://mailman.rwth-aachen.de/mailman/listinfo/jct-3v).
It was emphasized that those subscribing to the reflector and sending email to it must use their
real names when subscribing and sending messages and must respond to inquiries regarding their
type of interest in the work.
It was emphasized that usually discussions concerning CEs and AHGs should be performed
using the reflector. CE internal discussions should primarily be concerned with organizational
issues. Substantial technical issues that are not reflected by the original CE plan should be openly
discussed on the reflector. Any new developments that are the result of private communication
cannot be considered to be the result of the CE.
For the case of CE documents and AHG reports, email addresses of participants and contributors
may be obscured or absent (and will be on request), although these will be available (in human
readable format – possibly with some "obscurification") for primary CE coordinators and AHG
chairs.
1.9 Terminology
Note: Acronyms should be used consistently. For example, "IV" is sometimes used for "inter-view" and sometimes for "intra-view".
Some terminology used in this report is explained below:

AHG: Ad hoc group.

AMVP: Advanced motion vector prediction.

ARP: Advanced residual prediction.

ATM: AVC based 3D test model

AU: Access unit.

AUD: Access unit delimiter.

AVC: Advanced video coding – the video coding standard formally published as ITU-T
Recommendation H.264 and ISO/IEC 14496-10.

BD: Bjøntegaard-delta – a method for measuring percentage bit rate savings at equal
PSNR or decibels of PSNR benefit at equal bit rate (e.g., as described in document
VCEG-M33 of April 2001); an illustrative computation sketch is given after this list.

BoG: Break-out group.

BR: Bit rate.

B-VSP: Backward view synthesis prediction.

CABAC: Context-adaptive binary arithmetic coding.

CD: Committee draft – the first formal ballot stage of the approval process in ISO/IEC.

CE: Core experiment – a coordinated experiment conducted between two subsequent
JCT-3V meetings and approved to be considered a CE by the group.

Consent: A step taken in ITU-T to formally consider a text as a candidate for final
approval (the primary stage of the ITU-T "alternative approval process").

CPB: Coded picture buffer.

CTC: Common test conditions.

DBBP: Depth based block partitioning.

DC: Disparity compensation

DDD: Disparity derived depth (which uses the motion disparity vector to reconstruct a
certain block (PU) of the depth map)

DIS: Draft international standard – the second formal ballot stage of the approval process
in ISO/IEC.

DF: Deblocking filter.

DLT: Depth lookup table.

DMC: Depth based motion competition.

DMM: Depth modeling mode.

DPB: Decoded picture buffer.

DRPS: Depth-range parameter set.

DRWP: Depth-range based weighted prediction.

DT: Decoding time.

DV: Disparity vector

ET: Encoding time.

HEVC: High Efficiency Video Coding – the video coding standardization initiative
under way in the JCT-VC.

HLS: High-level syntax.

HM: HEVC Test Model – a video coding design containing selected coding tools that
constitutes our draft standard design – now also used especially in reference to the (non-normative) encoder algorithms (see WD and TM).

HRD: Hypothetical reference decoder.

HTM: HEVC based 3D test model

IC: Illumination compensation

IDV: Implicit disparity vector

IVMP: Inside-view motion prediction (which means motion for depth component is
inherited from texture component motion)

IVRC: Inter-view residual prediction.

MC: Motion compensation.

MPEG: Moving picture experts group (WG 11, the parent body working group in
ISO/IEC JTC 1/ SC 29, one of the two parent bodies of the JCT-3V).

MPI: Motion parameter inheritance.

MV: Motion vector.

NAL: Network abstraction layer (HEVC/AVC).

NBDV: Neighboured block disparity vector (used to derive unavailable depth data from
reference view's depth map) and DoNBDV = depth oriented NBDV

NB: National body (usually used in reference to NBs of the WG 11 parent body).

NUT: NAL unit type (HEVC/AVC).

PDM: Predicted Depth Map

POC: Picture order count.

PPS: Picture parameter set (HEVC/AVC).

QP: Quantization parameter (as in AVC, sometimes confused with quantization step
size).

QT: Quadtree.

RA: Random access – a set of coding conditions designed to enable relatively-frequent
random access points in the coded video data, with less emphasis on minimization of
delay (contrast with LD). Often loosely associated with HE.

RAP: Random access picture.

R-D: Rate-distortion.

RDO: Rate-distortion optimization.

RDOQ: Rate-distortion optimized quantization.

REXT: Range extensions (of HEVC).

RPS: Reference picture set.

RQT: Residual quadtree.

SAO: Sample-adaptive offset.

SEI: Supplemental enhancement information (as in AVC).

SD: Slice data; alternatively, standard-definition.

SDC: Segment-wise DC coding.

SH: Slice header.

SHVC: Scalable HEVC.

SPS: Sequence parameter set (HEVC/AVC).

TSA: Temporal sublayer access.

Unit types:
o CTB: Coding tree block (luma or chroma) – unless the format is monochrome,
there are three CTBs per CTU.
o CTU: Coding tree unit (containing both luma and chroma, previously called
LCU)
o CB: Coding block (luma or chroma).
o CU: Coding unit (containing both luma and chroma).
o LCU: (formerly LCTU) largest coding unit (name formerly used for CTU before
finalization of HEVC version 1).
o PB: Prediction block (luma or chroma)
o PU: Prediction unit (containing both luma and chroma), with eight shape
possibilities.

2Nx2N: Having the full width and height of the CU.

2NxN (or Nx2N): Having two areas that each have the full width and half
the height of the CU (or having two areas that each have half the width
and the full height of the CU).

NxN: Having four areas that each have half the width and half the height
of the CU.

N/2x2N paired with 3N/2x2N or 2NxN/2 paired with 2Nx3N/2: Having
two areas that are different in size – cases referred to as AMP.
o TB: Transform block (luma or chroma).
o TU: Transform unit (containing both luma and chroma).

VCEG: Visual coding experts group (ITU-T Q.6/16, the relevant rapporteur group in
ITU-T WP3/16, which is one of the two parent bodies of the JCT-3V).

VPS: Video parameter set.

VS: View synthesis.

VSO: View synthesis optimization (RDO tool for depth maps).

VSP: View synthesis prediction.

WD: Working draft – the draft HEVC standard corresponding to the HM.

WG: Working group (usually used in reference to WG 11, a.k.a. MPEG).
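For readers unfamiliar with the BD metric referenced in the "BD" entry above, the following is a minimal illustrative sketch of a VCEG-M33-style BD-rate computation (a cubic fit of log bit rate as a function of PSNR, integrated over the overlapping PSNR range). It is not part of any JCT-3V deliverable; the function name, the use of the numpy library, and the assumption of (typically four) rate/PSNR points per curve are illustrative only.

    # Minimal sketch of the Bjøntegaard-delta bit-rate (BD-rate) metric in the
    # style of VCEG-M33: a negative result means the test codec saves bit rate
    # relative to the anchor at equal PSNR.
    import numpy as np

    def bd_rate(anchor_rates, anchor_psnrs, test_rates, test_psnrs):
        # Fit third-order polynomials of log(rate) as a function of PSNR.
        p_anchor = np.polyfit(anchor_psnrs, np.log(anchor_rates), 3)
        p_test = np.polyfit(test_psnrs, np.log(test_rates), 3)

        # Integrate both fits over the overlapping PSNR interval.
        lo = max(min(anchor_psnrs), min(test_psnrs))
        hi = min(max(anchor_psnrs), max(test_psnrs))
        int_anchor = np.polyint(p_anchor)
        int_test = np.polyint(p_test)
        avg_log_diff = ((np.polyval(int_test, hi) - np.polyval(int_test, lo))
                        - (np.polyval(int_anchor, hi) - np.polyval(int_anchor, lo))) / (hi - lo)

        # Convert the average log-rate difference to a percentage.
        return (np.exp(avg_log_diff) - 1.0) * 100.0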
1.10 Liaison activity
The JCT-3V did not send or receive formal liaison communications at this meeting.
1.11 Opening remarks

Common code base for MV/3D software (currently based on HM 16.5) / coordination
with RExt / SHVC which may establish a single codebase after the February 2016
meeting?

3D-HEVC Software/Conformance timelines

Status of conformance and reference software in the different amending activities: 3D-HEVC Reference Software under DAM ballot; 3D/MV-HEVC conformance: DAM
closed 11-25.

MV-HEVC verification testing – contrary to plan, this was not performed prior to the next
meeting; it was later decided to perform it by the 14th meeting.

Expected output docs:
o Draft 5 of Alternative Depth Info SEI Message (Study ISO/IEC 14496-10:2014/DAM3, not for ballot)
o 3D-HEVC Reference Software FDAM
o MV-/3D-HEVC Conformance FDAM
o Draft 2 of Software for ADI SEI (DAM in ISO/IEC)
o Dispositions of comments on the above
o MV-HEVC verification test report
o (no update of HTM text, HTM11 stays valid) –
1.12 Contribution topic overview
The approximate subject categories and quantity of contributions per category for the meeting
were summarized as follows.

AHG reports (section 2) (7)

Project development and status (section 3) (0)

SEI messages (section 4) (2)
 Non-normative Contributions (section 5) (0)
NOTE – The number of contributions noted in each category, as shown in parentheses above,
may not be 100% precise.
1.13 Scheduling
Generally meeting time was planned to be scheduled during 0900–2000, with coffee and lunch
breaks as convenient. Ongoing refinements were announced on the group email reflector as
needed.
Some particular scheduling notes are shown below, although not necessarily 100% accurate:

Monday, first day
o 1900–2030: Opening and AHG report review
o 2030–2130: Review SEI messages

Thursday, second day
o 0830 Viewing for centralized depth SEI message
o 1600–1800 Closing plenary
2 AHG reports (5)
(Review of these AHG reports was chaired by JRO & GJS, Monday 02/22 1930–2030.)
The activities of ad hoc groups that had been established at the prior meeting are discussed in
this section.
JCT3V-N0001 JCT-3V AHG report: Project management (AHG1) [J.-R. Ohm, G. J. Sullivan]
All documents from the preceding meeting had been made available at the document site
(http://phenix.it-sudparis.eu/jct3v/) or the ITU-based JCT-3V site (http://wftp3.itu.int/av-arch/jct3v-site/2015_10_M_Geneva/), particularly including the following:

Draft 4 of Alternative Depth Info SEI Message (ISO/IEC 14496-10:2014/DAM3, for
ballot)

Software of Alternative Depth Info SEI (ISO/IEC 14496-5:2001/PDAM42, for ballot)

MV-HEVC Software Draft 5 (ISO/IEC 23008-5:2015 FDAM4, for ballot)

3D-HEVC Software Draft 3

3D-HEVC Verification Test Report
 MV-HEVC Verification Test Plan
The 5 ad hoc groups had made some progress, and reports from their activities had been
submitted.
With regard to the software, coordination with JCT-VC should be conducted to avoid divergence
in cases of future bug fixing. Ideally, one common software code base should be established for
all parts of HEVC.
"Bug tracking" systems for software and text specifications had been installed previously. The
sites are https://hevc.hhi.fraunhofer.de/trac/3d-hevc and https://hevc.hhi.fraunhofer.de/trac/3d-avc/. The bug tracker reports were automatically forwarded to the group email reflector, where
the issues could be further discussed.
MV-HEVC verification tests had been conducted in the interim period, such that the test report is
expected to be issued from the current meeting.
Approximately 7 input contributions to the current meeting had been registered. One late-registered or late-uploaded contribution was observed.
The meeting announcement had been made available from the aforementioned document site and
http://wftp3.itu.int/av-arch/jct3v-site/2016_02_N_SanDiego/JCT3V-N_Logistics.doc.
A preliminary basis for the document subject allocation and meeting notes had been circulated to
the participants as http://wftp3.itu.int/av-arch/jct3v-site/2016_02_N_SanDiego/JCT3V-N_Notes_d0.doc.
JCT3V-N0002 JCT-3V AHG Report: 3D-HEVC Draft and MV-HEVC / 3D-HEVC Test Model
editing (AHG2) [G. Tech, J. Boyce, Y. Chen, M. Hannuksela, T. Suzuki, S. Yea, J.-R. Ohm,
G. Sullivan]
Several editorial defects, in particular in Annex F of the HEVC specification, had already been
reported at the last meeting in Geneva. A summary including fixes can be found in document
JCT3V-M1002_defects. The defects have also been reported (among others) to JCT-VC in
document JCTVC-V0031. However, a review and confirmation of the suggested fixes is still
pending and required before integration into the current HEVC draft text.
Further clarification was obtained through Gerhard Tech by email communication.
Whereas, in the first version of the report, the mentioned document of defects was not included,
a new version was later provided which fixed that deficiency.
JCT3V-N0003 JCT-3V AHG Report: MV-HEVC and 3D-HEVC Software Integration (AHG3)
[G. Tech, H. Liu, Y. Chen]
Development of the software was coordinated with the parties needing to integrate changes.
The distribution of the software was announced on the JCT-3V e-mail reflector and the software
was made available through the SVN server:
https://hevc.hhi.fraunhofer.de/svn/svn_3DVCSoftware/tags/
Anchor bitstreams have been created and uploaded to:
ftp.hhi.fraunhofer.de; path: /HTM-Anchors/
Note that the username and password have changed. To obtain them, please send a mail to
gerhard.tech@hhi.fraunhofer.de.
One update version of the HTM software was produced and announced on the JCT-3V email
reflector. A second version will be released during the meeting. The following sections give a
brief summary of the integrations.
HTM-16.0 was developed by integrating changes between HM-16.6 and HM-16.7 to HTM-15.2.
Moreover, minor bug fixes and clean-ups have been integrated. The coding performance
compared to HTM-15.2 changes insignificantly.
Version 16.1 was planned to be released during the meeting. It contains further fixes for non-CTC conditions and clean-ups. The coding performance under CTC conditions is identical to HTM-16.0.
The MV-HEVC software draft 5 (JCT3V-M1009) had been released. The software had been
generated by removing 3D-HEVC related source code and configuration files from HTM-16.0.
The software can also be accessed using the svn:
https://hevc.hhi.fraunhofer.de/svn/svn_3DVCSoftware/branches/HTM-16.0-MV-draft-5
The related document has been submitted as FDAM text.
The 3D-HEVC software draft 3 (JCT3V-M1012) had been released. The software corresponds to
HTM-16.0. The software can also be accessed using the svn:
https://hevc.hhi.fraunhofer.de/svn/svn_3DVCSoftware/tags/HTM-16.0
The related document had been submitted as DAM study text. Note that, although it was missing
in the MPEG document system until January, the AHG provided this document in time.
Gerhard Tech provided the disposition of NB comments.
JCT3V-N0004 JCT-3V AHG Report: 3D Coding Verification Testing (AHG4) [V. Baroncini,
K. Müller, S. Shimizu]
Presented Thursday 1600–1800.
Several e-mails were exchanged among co-chairs; the status on the test plan of the MV-HEVC
verification and availability of the test materials were discussed.
Mandate 1:
Finalize the report of 3D-HEVC verification testing
The report of 3D-HEVC verification testing was finalized and delivered on 18 November 2015 as JCT3V-M1001 (version 2).
Mandate 2:
Implement the MV-HEVC verification test plan JCT3V-M1002 and prepare a test
report
The MV-HEVC verification test was successfully conducted. Some work is still necessary to organize the test results, but the results appear fine based on a quick check. The report will be prepared after more careful checking, which will take additional work.
Mandate 3:
Prepare viewing logistics for 14th JCT-3V meeting.
No viewing was necessary during the meeting, since the subjective viewing for the MV-HEVC verification testing had already been performed. The viewing logistics, however, will be made available and shared with the MPEG FTV AHG.
JCT3V-N0005 JCT-3V AHG Report: HEVC Conformance testing development (AHG5)
[Y. Chen, T. Ikai, K. Kawamura, S. Shimizu, T. Suzuki]
The draft conformance text was revised as input contribution JCT3V-N0023 with the following improvements:
 Added information on resolution and number of frames in the bitstream description
 Added information on the output layer set in the MV-HEVC bitstream description
 Fixed level information
 Fixed some editorial issues (e.g., typos, missing definitions, …)
Note: the revision also provided a response to the NB comments from the ballot on ISO/IEC 23008-8:201x/DAM 1, which were reviewed by the editors.
At the last meeting, the following timeline was established:
 Generate all revision 3 bitstreams (target date 2015/11/27)
 Confirm the generated bitstreams with the latest HTM (HTM-16.0) two weeks after the release of HTM-16.0
In the confirmation process, the following issues were found:
 [3DHC_TD_C] The tested tool (QTL) was disabled (QTL should be enabled)
 [MVHEVCS-I] The inbld_flag was set equal to 0 in the PTL (profile tier level) indications for the independent layer (inbld_flag should be equal to 1)
 [MVHEVCS-J] The bitstream (hybrid scalability) was missing.
The former two bitstreams have been regenerated and the issues have been resolved. However, MVHEVCS-J is still missing: Nokia did not provide the bitstream for hybrid multiview coding, and the specification of that bitstream was removed from the conformance spec. Although the functionality is basically identical to hybrid scalability in SHVC, it is worthwhile to investigate whether the functionality of hybrid multiview coding is broken and might need to be removed from the standard.
The ftp site at ITU-T is used to exchange bitstreams. The ftp site for downloading bitstreams is,
http://wftp3.itu.int/av-arch/jct3v-site/bitstream_exchange/
Two folders are created in this ftp site for MV-HEVC bitstreams and 3D-HEVC bitstreams
respectively:
http://wftp3.itu.int/av-arch/jct3v-site/bitstream_exchange/under_test/MV-HEVC/
http://wftp3.itu.int/av-arch/jct3v-site/bitstream_exchange/under_test/3D-HEVC/
The following dropbox folder is used for uploading bitstreams that are pending to be moved to one of the above folders. It is suggested that the provider of a bitstream not use it for sharing the bitstream with those who will perform the verification.
http://wftp3.itu.int/av-arch/jct3v-site/dropbox/
It was expected that it would not be necessary to re-generate conformance bitstreams upon the availability of the HTM 16.1 software, since this version only adds SEI messages related to layered coding which are not used in the regular decoding process.
T. Ikai and K. Kawamura were to perform a check after availability of HTM 16.1 software to
confirm that there is no problem with the set of conformance streams before they are submitted
as FDAM (submission for ITU consent by the next meeting in Geneva). If problems are found,
the responsible parties need to be tasked to re-generate the streams.
3 Standards development, bug fixing (5)
JCT3V-N0023 Proposed editorial improvements to MV-HEVC and 3D-HEVC Conformance
Draft [Y. Chen, T. Ikai, K. Kawamura, S. Shimizu, T. Suzuki]
This was reviewed Monday evening.
The editors should align the editorial style with that of the SHVC conformance editors. For example, one issue is the capitalization of some words, which is not compliant with ITU-T and ISO/IEC convention.
The disposition of NB comments report was prepared by K. Kawamura and T. Ikai.
JCT3V-N0024 Draft 2 of Reference Software for Alternative Depth Info SEI Message Extension
of AVC [T. Senoh (NICT)]
This was reviewed Monday evening.
The disposition of NB comments report was prepared by T. Senoh.
JCT3V-N0025 [AHG 5] Further confirmation on MV-HEVC conformance streams [K. Kawamura,
S. Naito (KDDI)]
This contribution reports cross-check results for the MV-HEVC conformance test bitstreams using an alternative decoder. The decoder was implemented independently of the reference software. It was reported that all conformance streams could be decoded correctly, and that the hash values produced by the decoder matched those in the conformance streams.
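For illustration only (this sketch is not part of the contribution), a whole-file checksum comparison of the kind such a cross-check might use is shown below in Python. The actual conformance verification relies on the decoded picture hash SEI message, computed per picture and per colour plane, so this is a simplification; the file name and reference value are placeholders.

    import hashlib

    def yuv_md5(path, chunk_size=1 << 20):
        # Compute the MD5 checksum of a raw decoded YUV file in chunks
        md5 = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                md5.update(chunk)
        return md5.hexdigest()

    # Hypothetical file name and reference value, for illustration only:
    decoded_hash = yuv_md5("decoder_output.yuv")
    reference_hash = "0123456789abcdef0123456789abcdef"
    print("match" if decoded_hash == reference_hash else "mismatch")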
JCT3V-N0026 Bug report (#114) on MV-HEVC Reference Software [K. Kawamura, S. Naito
(KDDI)]
This contribution reports a mismatch between the text and the reference software regarding the DPB size syntax. The issue is tracked as ticket #114 and had already been fixed in HTM-16.0-dev1.
This refers to a ballot comment, which will be resolved when HTM 16.0 or 16.1 is promoted to FDAM.
JCT3V-N0027 SHM software modifications for multi-view support [X. Huang (USTC),
M. M. Hannuksela (Nokia)] [late]
Contribution noted (for information only). No presenter was available on Thursday afternoon when it was on the agenda to be discussed. No action was taken by JCT-3V on this. (See also the JCT-VC report.)
It was unclear whether the availability of this software would finally allow hybrid scalability to be enabled.
4 SEI messages (2)
Discussed Monday 1900–2100 and Thursday 16:00–17:00.
JCT3V-N0021 Centralized Texture-Depth Packing SEI Message for H.264/AVC [J.-F. Yang, H.M. Wang, C.-Y. Chen, T.-A. Chang]
This contribution modifies the prior contribution, JCT3V-M0021 "Centralized Texture-Depth Packing (CTDP) SEI Message for H.264/AVC", by removing imprecise syntax terms and polishing the descriptions of the syntax structures. In the syntax of the H.264/AVC SEI message, quarter_reduced_depth_line_number, which can precisely specify the downsampling factors of depth and texture, replaces all of the imprecise syntax terms. Selections of quarter_reduced_depth_line_number for HD (1920x1080) and UHD (3840x2160) formats are also listed as examples. Although the prior study and experiments were conducted on an HEVC kernel, the scheme is still applicable to H.264/AVC. The contributor said that, before 3D broadcasting systems become available, the proposed CTDP formats with the SEI message could help to deliver 3D video over current 2D broadcasting systems simply and efficiently.
JCT3V-N0022 Centralized Texture-Depth Packing SEI Message for HEVC [J.-F. Yang, H.-M.
Wang, C.-Y. Chen, T.-A. Chang]
This contribution modifies the prior contribution, JCT3V-M0022 "Centralized Texture-Depth Packing (CTDP) SEI Message for HEVC", by removing imprecise syntax terms, polishing the descriptions of the syntax structures, and conducting simulations for CG content, which has higher-quality depth maps. In the syntax of the CTDP SEI message, quarter_reduced_depth_line_number, which can precisely specify the downsampling factors of depth and texture, replaces all of the imprecise syntax terms. Selections of quarter_reduced_depth_line_number for HD (1920x1080) and UHD (3840x2160) formats are also listed as examples. In addition to YUV calibration, the contributor further suggested the use of depth enhancement and a rolling guidance filter before the DIBR process to improve the performance of virtual view synthesis. Besides VSRS, the adaptive temporal view synthesis (ATVS) system was also used in the simulations. To test the down- and upsampling filters, Lanczos4 filtering was used for texture resizing in the CTDP packing and depacking processes. The results reportedly show that Lanczos4 improves the CTDP performance by about 5–6% in BD bit rate.
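As an illustration of the resizing filter mentioned above (not taken from the contribution), the following Python sketch round-trips a synthetic 8-bit plane through 2x down- and upsampling with OpenCV's Lanczos4 kernel. The plane content, dimensions and scaling factor are assumptions made for the example.

    import cv2
    import numpy as np

    # Synthetic 8-bit plane, standing in for one plane of the packed frame
    h, w = 64, 64
    rng = np.random.default_rng(0)
    plane = rng.integers(0, 256, size=(h, w), dtype=np.uint8)

    # CTDP-style packing/depacking resize, using the Lanczos4 kernel
    down = cv2.resize(plane, (w // 2, h // 2), interpolation=cv2.INTER_LANCZOS4)
    up = cv2.resize(down, (w, h), interpolation=cv2.INTER_LANCZOS4)

    # Round-trip distortion of the plane
    mse = np.mean((plane.astype(np.float64) - up.astype(np.float64)) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    print(f"round-trip PSNR after 2x down/up-sampling: {psnr:.2f} dB")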
The new proposal includes numerous elements of non-normative processing for further
optimization.
A depth enhancement process based on a bilateral filter is included to compensate for artefacts in the depth map that are caused by the RGB/YUV conversion in depth map packing.
Compared to 3D HEVC, the bit rate increase is 5-10% for 1view+1depth, 175% for
2view/2depth.
The proponent reports that problems may occur at depth edges due to the RGB444->YUV420
conversion.
Viewing was performed on Thursday 0830–0900. The proponent and three experts participated in the viewing. Examples were demonstrated for Shark and Undo Dancer; one-video-plus-depth and two-video-plus-depth cases were shown for 3D-HEVC and the packing scheme with equal QP and approximately similar data rates (usually slightly lower for the packing scheme in the one-view cases, and slightly higher in the two-view cases). The following observations were made:
 The texture in the packing scheme was more blurred, naturally caused by the subsampling.
 The packing scheme had significantly more visible artefacts at object boundaries, which is likely due to distortion of the depth maps.
The experts suggested that this could be explained by the fact that the packing in RGB 4:4:4 and the downconversion to YUV 4:2:0 cause some uncontrollable defects in the reconstructed depth map (the artefacts were clearly visible also in the QP27 case, where the compression itself would usually not introduce large errors). The proponents were asked to consider packing the depth data only into the Y component, to prevent the packing scheme itself from distorting the depth map.
The results demonstrated already used texture-dependent bilateral depth filtering, but artefacts were still visible. The SEI message should be usable without complicated post-processing and still deliver sufficient depth map quality. (Note: from previous experiments conducted in JCT-3V in the context of 3D-AVC, the observation was that downsampling of depth maps is not critical with regard to the subjective quality of synthesis.)
It was also mentioned that due to the RGB444 to YUV420 conversion, high spatial frequencies
may appear in the UV components, which unnecessarily increases the data rate.
Further study was recommended.
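The chroma-subsampling effect discussed above can be illustrated with the following Python sketch. It does not reproduce the exact CTDP packing arrangement: it simply places a sharp depth edge in one colour channel of an RGB 4:4:4 frame, converts it with an assumed full-range BT.601 matrix, subsamples the chroma to 4:2:0 with plain 2x2 averaging and nearest-neighbour upsampling, and inverts the transform. This is enough to show that any depth information mapped into the chroma planes is distorted at edges; real subsampling filters reduce, but do not eliminate, this error.

    import numpy as np

    def subsample_420(c):
        # Average each 2x2 block (a simple 4:2:0 chroma subsampling filter)
        return c.reshape(c.shape[0] // 2, 2, c.shape[1] // 2, 2).mean(axis=(1, 3))

    def upsample_nn(c):
        # Nearest-neighbour chroma upsampling back to 4:4:4
        return np.repeat(np.repeat(c, 2, axis=0), 2, axis=1)

    def rgb444_to_yuv420_roundtrip(rgb):
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b          # full-range BT.601 (assumed)
        cb = -0.169 * r - 0.331 * g + 0.5 * b + 128
        cr = 0.5 * r - 0.419 * g - 0.081 * b + 128
        cb_u = upsample_nn(subsample_420(cb)) - 128
        cr_u = upsample_nn(subsample_420(cr)) - 128
        return np.stack([y + 1.402 * cr_u,
                         y - 0.344 * cb_u - 0.714 * cr_u,
                         y + 1.772 * cb_u], axis=-1)

    depth = np.array([[0.0, 255.0, 255.0, 255.0]] * 4)     # sharp depth edge
    texture = np.full_like(depth, 128.0)                   # stand-in texture value
    packed = np.stack([texture, texture, depth], axis=-1)  # depth in the B channel
    recon_depth = rgb444_to_yuv420_roundtrip(packed)[..., 2]
    print(np.abs(recon_depth - depth).max())  # large error at the depth edge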
5 Non-normative contributions (0)
No contributions noted.
6 Project planning
6.1 Software and Conformance development
HTM software: Version 16.1 was requested to be released within one week after the meeting.
Conformance development:
- After availability of the HTM 16.1 software, the conformance streams are to be finally checked and provided for integration in the new edition.
6.2 Software repositories
HTM software for 3D-HEVC can be checked in at
https://hevc.hhi.fraunhofer.de/svn/svn_3DVCSoftware/
For each integrator, a separate software branch will be created by the software coordinator containing the current anchor version or the version of the previous integrator:
e.g., branches/0.7-mycompany
The branch of the last integrator will become the new release candidate tag.
e.g., tags/0.8rc1
This tag can be cross-checked by the group for correctness. If no problems occur the release
candidate will become the new tag after 7 days:
e.g., tags/0.8
If reasonable, intermediate release candidate tags can be created by the software coordinator.
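For illustration, the branch/tag convention described above can be expressed as Subversion server-side copies, as in the following Python sketch. The version numbers, company name, log messages and the use of a subprocess wrapper are assumptions made for the example, not part of the agreed procedure.

    import subprocess

    REPO = "https://hevc.hhi.fraunhofer.de/svn/svn_3DVCSoftware"

    def svn_copy(src, dst, msg):
        # Create a branch or tag by a cheap server-side copy in Subversion
        subprocess.run(["svn", "copy", f"{REPO}/{src}", f"{REPO}/{dst}",
                        "-m", msg], check=True)

    # 1) Integration branch for one integrator, from the current anchor version
    svn_copy("tags/0.7", "branches/0.7-mycompany", "Create integration branch")
    # 2) The last integrator's branch becomes the release candidate tag
    svn_copy("branches/0.7-mycompany", "tags/0.8rc1", "Create release candidate")
    # 3) After about 7 days of cross-checking without problems, promote to the tag
    svn_copy("tags/0.8rc1", "tags/0.8", "Promote 0.8rc1 to release 0.8")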
7 Establishment of ad hoc groups
The ad hoc groups established to progress work on particular subject areas until the next meeting
are described in the table below. The discussion list for all of these ad hoc groups will be the
main JCT-3V reflector (jct-3v@lists.rwth-aachen.de).
JCT-3V project management (AHG1)
(jct-3v@lists.rwth-aachen.de)
Chairs: G. J. Sullivan, J.-R. Ohm (co-chairs); Mtg: N
 Coordinate overall JCT-3V interim efforts.
 Report on project status to JCT-3V reflector.
 Provide report to next meeting on project coordination status.

MV-HEVC / 3D-HEVC Software Integration (AHG2)
(jct-3v@lists.rwth-aachen.de)
Chairs: G. Tech, H. Liu, Y. W. Chen (co-chairs); Mtg: N
 Coordinate development of the HTM software and its distribution to JCT-3V members.
 Produce documentation of software usage for distribution with the software.
 Prepare and deliver the HTM-16.1 software version.
 Prepare and deliver Draft 4 of 3D-HEVC software JCT3V-N1012.
 Coordinate with AHG3 to identify any possible bugs in software and conformance.
 Identify any mismatches between software and text.

HEVC Conformance testing development (AHG3)
(jct-3v@lists.rwth-aachen.de)
Chairs: T. Ikai, K. Kawamura, T. Suzuki (co-chairs); Mtg: N
 Further improve the conformance draft JCT3V-N1008.
 Coordinate with AHG2 to identify any possible bugs in software and conformance.
8 Output documents
The following documents were agreed to be produced or endorsed as outputs of the meeting.
Names recorded below indicate those responsible for document production.
JCT3V-N1000 Meeting Report of 14th JCT-3V Meeting [J.-R. Ohm, G. J. Sullivan] (before next
meeting)
(Note: Initial versions of the subsequent draft documents were requested to be uploaded by the end of the meeting, with continual updating to be performed until the final version was released.)
JCT3V-N1001 MV-HEVC Verification Test Report [V. Baroncini, K. Müller, S. Shimizu] (WG11
N16050) [2016-03-31]
JCT3V-K1003 Test Model 11 of 3D-HEVC and MV-HEVC [Y. Chen, G. Tech, K. Wegner,
S. Yea]
Remains valid (although from a prior meeting).
JCT3V-N1005 Draft 2 of Reference Software for Alternative Depth Info SEI in 3D-AVC
[T. Senoh] (ISO/IEC 14496-5:2001 DAM42, WG11 N16029) [2016-04-15]
The disposition of NB comments for ISO/IEC JTC 1/SC 29/WG 11 (N16028) was also reviewed.
JCT3V-M1006 Draft 4 of Alternative Depth Info SEI Message [T. Senoh, Y. Chen, J.-R. Ohm,
G. J. Sullivan] (ISO/IEC 14496-10:2014 DAM3, WG11 N15755) [2015-11-30]
Remains valid (although from a prior meeting).
JCT3V-N1008 MV-HEVC and 3D-HEVC Conformance Draft 4 [T. Ikai, K. Kawamura, T. Suzuki]
(to be integrated in ISO/IEC 23008-8:201X, WG11 N16061) [2016-03-18]
The disposition of NB comments for ISO/IEC JTC 1/SC 29/WG 11 (N16058) was reviewed, and
a meeting resolution was issued by WG 11 to progress into final spec.
JCT3V-M1009 MV-HEVC Software Draft 5 [G. Tech] (ISO/IEC 23008-5:201X FDAM2, WG11 N15786) [2015-11-30]
Remains valid (although from a prior meeting).
JCT3V-N1012 3D-HEVC Software Draft 4 [G. Tech, H. Liu, Y. W. Chen] (to be integrated into
ISO/IEC 23008-5:201X, WG11 N16055) [2016-03-18]
The disposition of NB comments for ISO/IEC JTC 1/SC 29/WG 11 (N16054) was reviewed, and
a meeting resolution was issued by WG 11 to progress into final spec.
JCT3V-G1100 Common Test Conditions of 3DV Core Experiments
Remains valid (although from a prior meeting).
9 Future meeting plans, expressions of thanks, and closing of the meeting
The document upload deadline for the 15th meeting of the JCT-3V will be May 23, 2016, 2359
MET (Geneva/Paris time zone). Future meeting plans were established according to the
following guidelines:

Meeting under ITU-T SG 16 auspices when it meets (starting meetings on the Saturday of
the first week and closing it on the Tuesday or Wednesday of the second week of the
SG 16 meeting);

Otherwise meeting under ISO/IEC JTC 1/SC 29/WG 11 auspices when it meets (starting
meetings on the first day and closing it on the last day of the WG 11 meeting);

In cases where JCT-3V meets during the first week of the SG16 meeting under ITU-T
auspices, and co-located with an MPEG meeting at a nearby meeting place, the meeting
dates could also be approximately aligned with the MPEG meeting.
Some specific future meeting plans were established as follows:
 During 27–31 May 2016 under ITU-T auspices in Geneva, CH.
It was suggested to discontinue the activities of JCT-3V after its 15th meeting.
Prof. Naeem Ramzan was thanked for conducting the MV-HEVC verification test at his laboratory at the Western Scotland University.
The USNB of WG 11 and the companies that financially sponsored the meeting and associated social event – Apple, CableLabs, Google, Huawei, InterDigital, Microsoft, Mitsubishi Electric, Netflix and Qualcomm – were thanked for the excellent hosting of the 14th meeting of the JCT-3V. InterDigital and Sony were thanked for providing the 3D viewing equipment that was used during the meeting.
Participants were also reminded that final output documents (if also registered under a WG 11 output document number) have to be uploaded separately with a WG 11 header. Doing this in a timely fashion is particularly important for standards submitted for ballot, which should be sent to the chairs by the due date.
The JCT-3V meeting was closed at approximately 1758 hours on Thursday 25 Feb 2016.
Annex A to JCT-3V report:
List of documents
(For each document: JCT-3V number, MPEG number, title [authors]; created / first upload / last upload timestamps.)

JCT3V-N0001 (m38134) JCT-3V AHG report: Project management (AHG1) [J.-R. Ohm, G. J. Sullivan]
  created 2016-02-22 06:53:41, first upload 2016-02-22 20:51:20, last upload 2016-02-22 20:51:20

JCT3V-N0002 (m38138) JCT-3V AHG Report: 3D-HEVC Draft and MV-HEVC / 3D-HEVC Test Model editing (AHG2) [G. Tech, J. Boyce, Y. Chen, M. Hannuksela, T. Suzuki, S. Yea, J.-R. Ohm, G. Sullivan]
  created 2016-02-22 15:31:47, first upload 2016-02-22 15:54:49, last upload 2016-02-27 18:43:45

JCT3V-N0003 (m38139) JCT-3V AHG Report: MV-HEVC and 3D-HEVC Software Integration (AHG3) [G. Tech, H. Liu, Y. Chen]
  created 2016-02-22 15:33:49, first upload 2016-02-22 15:51:22, last upload 2016-02-22 15:51:22

JCT3V-N0004 (m38180) JCT-3V AHG Report: 3D Coding Verification Testing (AHG4) [V. Baroncini, K. Müller, S. Shimizu]
  created 2016-02-24 22:25:25, first upload 2016-02-24 22:25:47, last upload 2016-02-24 22:25:47

JCT3V-N0005 (m37784) JCT-3V AHG Report: HEVC Conformance testing development (AHG5) [Y. Chen, T. Ikai, K. Kawamura, S. Shimizu, T. Suzuki]
  created 2016-02-12 09:04:08, first upload 2016-02-22 23:12:22, last upload 2016-02-22 23:12:22

JCT3V-N0021 (m37662) Centralized Texture-Depth Packing SEI Message for H.264/AVC [J.-F. Yang, H.-M. Wang, C.-Y. Chen, T.-A. Chang]
  created 2016-02-08 02:53:24, first upload 2016-02-09 04:03:57, last upload 2016-02-09 04:03:57

JCT3V-N0022 (m37663) Centralized Texture-Depth Packing SEI Message for HEVC [J.-F. Yang, H.-M. Wang, C.-Y. Chen, T.-A. Chang]
  created 2016-02-08 03:04:48, first upload 2016-02-09 04:04:37, last upload 2016-02-22 01:21:57

JCT3V-N0023 (m37785) Proposed editorial improvements to MV-HEVC and 3D-HEVC Conformance Draft [Y. Chen, T. Ikai, K. Kawamura, S. Shimizu, T. Suzuki]
  created 2016-02-12 09:09:41, first upload 2016-02-16 01:22:16, last upload 2016-02-22 23:12:56

JCT3V-N0024 (m37789) Proposed Draft 2 of Reference Software for Alternative Depth Info SEI Message Extension of AVC [T. Senoh (NICT)]
  created 2016-02-12 12:04:15, first upload 2016-02-12 12:10:11, last upload 2016-02-23 18:19:45

JCT3V-N0025 (m38074) [AHG 5] Further confirmation on MV-HEVC conformance streams [K. Kawamura, S. Naito (KDDI)]
  created 2016-02-19 02:46:16, first upload 2016-02-19 02:50:06, last upload 2016-02-19 02:50:06

JCT3V-N0026 (m38075) Bug report (#114) on MV-HEVC Reference Software [K. Kawamura, S. Naito (KDDI)]
  created 2016-02-19 02:51:06, first upload 2016-02-19 02:52:42, last upload 2016-02-19 02:52:42

JCT3V-N0027 (m38156) SHM software modifications for multi-view support [X. Huang (USTC), M. M. Hannuksela (Nokia)]
  created 2016-02-23 04:07:37, first upload 2016-02-23 04:23:35, last upload 2016-02-24 04:46:09

JCT3V-N0028 (m38163) Withdrawn
  created 2016-02-23 18:30:10, first upload 2016-02-23 18:32:25, last upload 2016-02-23 18:32:25

JCT3V-N1000 (m38203) Meeting Report of 14th JCT-3V Meeting [J.-R. Ohm, G. J. Sullivan]
  created 2016-02-27 22:01:59

JCT3V-N1001 (m38204) MV-HEVC Verification Test Report [V. Baroncini, K. Müller, S. Shimizu]
  created 2016-02-27 22:03:30, first upload 2016-05-17 08:14:11, last upload 2016-05-17 08:33:34

JCT3V-N1005 (m38205) Draft 2 of Reference Software for Alternative Depth Info SEI in 3D-AVC [T. Senoh]
  created 2016-02-27 22:04:35, first upload 2016-03-03 01:39:22, last upload 2016-03-03 01:39:22

JCT3V-N1008 (m38206) MV-HEVC and 3D-HEVC Conformance Draft 4 [T. Ikai, K. Kawamura, T. Suzuki]
  created 2016-02-27 22:06:22, first upload 2016-03-18 09:01:03, last upload 2016-03-18 09:01:03

JCT3V-N1012 (m38207) 3D-HEVC Software Draft 4 [G. Tech, H. Liu, Y. W. Chen]
  created 2016-02-27 22:09:58, first upload 2016-03-18 10:37:15, last upload 2016-03-18 10:37:15
Annex B to JCT-3V report:
List of meeting participants
The participants of the fourteenth meeting of the JCT-3V, according to an attendance sheet
circulated during sessions (approximately 9 in total), were as follows:
1. Vittorio Baroncini (FUB, IT)
2. Marek Domanski (Poznan U, PL)
3. Tomohiro Ikai (Sharp, JP)
4. Kei Kawamura (KDDI, JP)
5. Jens-Rainer Ohm (RWTH, DE)
6. Justin Ridge (Nokia, FI)
7. Takanori Senoh (NICT, JP)
8. Gary Sullivan (Microsoft, US)
9. Jar-Ferr Yang (NCKU, TW)
– JVET report
Source: Jens Ohm and Gary Sullivan, Chairs
Summary
The Joint Video Exploration Team (JVET) of ITU-T WP3/16 and ISO/IEC JTC 1/ SC 29/
WG 11 held its second meeting during 20–26 February 2016 at the San Diego Marriott La Jolla
in San Diego, US. The JVET meeting was held under the leadership of Dr Gary Sullivan
(Microsoft/USA) and Dr Jens-Rainer Ohm (RWTH Aachen/Germany) as responsible
coordinators of the two organizations, assisted substantially by Dr Jill Boyce (Intel/USA). For
rapid access to particular topics in this report, a subject categorization is found (with hyperlinks)
in section 1.14 of this document.
The JVET meeting sessions began at approximately 0900 hours on Saturday 20 February 2016.
Meeting sessions were held on all days (including weekend days) until the meeting was closed at
approximately 1555 hours on Thursday 25 February 2016. Approximately 161 people attended
the JVET meeting, and approximately 50 input documents were discussed. The meeting took
place in a collocated fashion with a meeting of ISO/IEC JTC 1/SC 29/WG 11 – one of the two
parent bodies of the JVET. The subject matter of the JVET meeting activities consisted of
studying future video coding technology with a potential compression capability that
significantly exceeds that of the current HEVC standard and evaluating compression technology
designs proposed in this area.
One primary goal of the meeting was to review the work that was performed in the interim
period since the first JVET meeting (in October 2015) in producing the Joint Exploration Test
Model 1 (JEM1). Another important goal was to review the work that had been conducted for
investigating the characteristics of new test material in the assessment of video compression
technology. Furthermore, technical input documents were reviewed, and proposed modifications
towards development of an improved test model as JEM2 were considered.
The JVET produced 4 output documents from the meeting:
 Algorithm description of Joint Exploration Test Model 2 (JEM2)
 Call for test materials for future video coding standardization
 JVET common test conditions and software reference configurations
 Description of Exploration Experiments on coding tools
For the organization and planning of its future work, the JVET established 5 "ad hoc groups" (AHGs) to progress the work on particular subject areas. Seven Exploration Experiments (EEs) were defined on particular subject areas of coding tool testing. The next four JVET meetings are
planned to be held during Thu. 26 May – Wed. 1 June 2016 under ITU-T auspices in Geneva,
CH, during Sat. 15 – Fri. 21 Oct. 2016 under WG 11 auspices in Chengdu, CN, during Thu. 12 –
Wed. 18 Jan. 2017 under ITU-T auspices in Geneva, CH, and during Sat. 1 – Fri. 7 Apr. 2017
under WG 11 auspices in Hobart, AU.
The document distribution site http://phenix.it-sudparis.eu/jvet/ was used for distribution of all
documents.
The reflector to be used for discussions by the JVET and all its AHGs is the JVET reflector:
jvet@lists.rwth-aachen.de hosted at RWTH Aachen University. For subscription to this list, see
https://mailman.rwth-aachen.de/mailman/listinfo/jvet.
1 Administrative topics
1.1 Organization
The ITU-T/ISO/IEC Joint Video Exploration Team (JVET) is a collaboration group of video
coding experts from the ITU-T Study Group 16 Visual Coding Experts Group (VCEG) and the
ISO/IEC JTC 1/ SC 29/ WG 11 Moving Picture Experts Group (MPEG). The parent bodies of
the JVET are ITU-T WP3/16 and ISO/IEC JTC 1/SC 29/WG 11.
The Joint Video Exploration Team (JVET) of ITU-T WP3/16 and ISO/IEC JTC 1/ SC 29/
WG 11 held its second meeting during 20–26 February 2016 at the San Diego Marriott La Jolla
in San Diego, US. The JVET meeting was held under the leadership of Dr Gary Sullivan
(Microsoft/USA) and Dr Jens-Rainer Ohm (RWTH Aachen/Germany) as responsible
coordinators of the two organizations. Dr Jill Boyce (Intel/USA) assisted substantially with the
chairing of this meeting.
1.2 Meeting logistics
The JVET meeting sessions began at approximately 0900 hours on Saturday 20 Feb 2016. Meeting sessions were held on all days (including weekend days) until the meeting was closed at approximately 1555 hours on Thursday 25 Feb 2016. Approximately 161 people attended the
JVET meeting, and approximately 50 input documents were discussed. The meeting took place
in a collocated fashion with a meeting of ISO/IEC JTC 1/SC 29/WG 11 – one of the two parent
bodies of the JVET. The subject matter of the JVET meeting activities consisted of studying
future video coding technology with a potential compression capability that significantly exceeds
that of the current HEVC standard and evaluating compression technology designs proposed in
this area.
Information regarding logistics arrangements for the meeting had been provided via the email reflector jvet@lists.rwth-aachen.de and at http://wftp3.itu.int/av-arch/jvet-site/2016_02_B_SanDiego/.
1.3 Primary goals
One primary goal of the meeting was to review the work that was performed in the interim
period since the first JVET meeting (in October 2015) in producing the Joint Exploration Test
Model 1 (JEM1). Another important goal was to review the work that had been conducted for
investigating the characteristics of new test material in the assessment of video compression
technology. Furthermore, technical input documents were reviewed, and proposed modifications
towards development of an improved test model as JEM2 were considered.
1.4 Documents and document handling considerations
1.4.1 General
The documents of the JVET meeting are listed in Annex A of this report. The documents can be
found at http://phenix.it-sudparis.eu/jvet/.
Registration timestamps, initial upload timestamps, and final upload timestamps are listed in
Annex A of this report.
The document registration and upload times and dates listed in Annex A and in headings for
documents in this report are in Paris/Geneva time. Dates mentioned for purposes of describing
events at the meeting (other than as contribution registration and upload times) follow the local
time at the meeting facility.
Highlighting of recorded decisions in this report:

Decisions made by the group that might affect the normative content of a future standard
are identified in this report by prefixing the description of the decision with the string
"Decision:".

Decisions that affect the JEM software but have no normative effect are marked by the
string "Decision (SW):".

Decisions that fix a "bug" in the JEM description (an error, oversight, or messiness) or in
the software are marked by the string "Decision (BF):".
This meeting report is based primarily on notes taken by the responsible leaders. The preliminary
notes were also circulated publicly by ftp during the meeting on a daily basis. It should be
understood by the reader that 1) some notes may appear in abbreviated form, 2) summaries of the
content of contributions are often based on abstracts provided by contributing proponents
without an intent to imply endorsement of the views expressed therein, and 3) the depth of
discussion of the content of the various contributions in this report is not uniform. Generally, the
report is written to include as much information about the contributions and discussions as is
feasible (in the interest of aiding study), although this approach may not result in the most
polished output report.
1.4.2 Late and incomplete document considerations
The formal deadline for registering and uploading non-administrative contributions had been
announced as Monday, 15 February 2016. Any documents uploaded after 2359 hours
Paris/Geneva time on that day were considered "officially late".
All contribution documents with registration numbers JVET-B0045 and higher were registered
after the "officially late" deadline (and therefore were also uploaded late). However, some
documents in the "B0045+" range include break-out activity reports that were generated during
the meeting, and are therefore better considered as report documents rather than as late
contributions.
In many cases, contributions were also revised after the initial version was uploaded. The
contribution document archive website retains publicly-accessible prior versions in such cases.
The timing of late document availability for contributions is generally noted in the section
discussing each contribution in this report.
One suggestion to assist with the issue of late submissions was to require the submitters of late
contributions and late revisions to describe the characteristics of the late or revised (or missing)
material at the beginning of discussion of the contribution. This was agreed to be a helpful
approach to be followed at the meeting.
The following technical design proposal contributions were registered on time but were uploaded
late:

JVET-B0033 (a proposal on adaptive multiple transform for chroma),
 JVET-B0041 (a proposal on adaptive reference sample smoothing simplification).
The following technical design proposal contributions were both registered late and uploaded
late:

JVET-B0047 (a proposal on non square TU partitioning),

JVET-B0048 (a proposal on universal string matching for screen content coding),

JVET-B0051 (a proposal on improvement of intra coding tools),

JVET-B0054 (a proposal on de-quantization and scaling for next generation containers),

JVET-B0058 (a proposal on modification of merge candidate derivation),

JVET-B0059 (a proposal on TU-level non-separable secondary transform),
 JVET-B0060 (a proposal on improvements for adaptive loop filter).
The following other documents not proposing normative technical content were registered on
time but were uploaded late:

JVET-B0021 (proposed improvements to algorithm description of joint exploration test
model 1),

JVET-B0031 (An evaluation report of test sequences),
 JVET-B0034 (a cross-check of JVET-B0023),
 JVET-B0035 (An evaluation report of test sequences for future video coding).
The following other documents not proposing normative technical content, other than cross-check documents, were both registered late and uploaded late:

JVET-B0049 (a contribution offering test sequences for video coding testing)

JVET-B0073 (a contribution suggesting modification of the low-delay common test
conditions for video coding experiments)
The following performance evaluation and cross-check documents were both registered late and uploaded late: JVET-B0045, JVET-B0046, JVET-B0050, JVET-B0052, JVET-B0053, JVET-B0055, JVET-B0056, JVET-B0057, JVET-B0061, JVET-B0062, JVET-B0063, JVET-B0065, JVET-B0066, JVET-B0067, JVET-B0068, JVET-B0069, JVET-B0070, and JVET-B0072.
The following contribution registrations were later cancelled, withdrawn, never provided, were cross-checks of a withdrawn contribution, or were registered in error: JVET-B0032, JVET-B0064, JVET-B0071.
As a general policy, missing documents were not to be presented, and late documents (and
substantial revisions) could only be presented when sufficient time for studying was given after
the upload. Again, an exception is applied for AHG reports, CE/EE summaries, and other such
reports which can only be produced after the availability of other input documents. There were
no objections raised by the group regarding presentation of late contributions, although there was
some expression of annoyance and remarks on the difficulty of dealing with late contributions
and late revisions.
It was remarked that documents that are substantially revised after the initial upload are also a
problem, as this becomes confusing, interferes with study, and puts an extra burden on
synchronization of the discussion. This is especially a problem in cases where the initial upload
is clearly incomplete, and in cases where it is difficult to figure out what parts were changed in a
revision. For document contributions, revision marking is very helpful to indicate what has been
changed. Also, the "comments" field on the web site can be used to indicate what is different in a
revision.
A few contributions may have had some problems relating to IPR declarations in the initial uploaded versions (missing declarations, declarations saying they were from the wrong companies, etc.). In such cases, these issues were corrected by later uploaded versions in a reasonably timely fashion (to the extent of the awareness of the responsible coordinators).
Some other errors were noticed in other initial document uploads (wrong document numbers in
headers, etc.) which were generally sorted out in a reasonably timely fashion. The document web
site contains an archive of each upload.
1.4.3 Outputs of the preceding meeting
The output documents of the previous meeting, particularly the JEM1 JVET-A1001, and the
work plan for test sequence investigation JVET-A1002, were approved. The JEM1 software
implementation was also approved. Documents of the prior meeting had been handled separately
using the document archive site of the two parent bodies and some minor differences in the
documents had resulted from this (e.g. due to differences in delivery deadlines), whereas a new
document repository was set up to be used for the current and future meetings.
For the first JVET meeting, the meeting report information had been included in the reports of
the parent body meetings (a section of video subgroup of the WG 11 113th meeting report, and a
report of Q6/16 activities in the meeting of ITU-T SG 16). No complaints were made about the
correctness of the content of these reports.
All output documents of the previous meeting and the software had been made available in a
reasonably timely fashion.
1.5 Attendance
The list of participants in the JVET meeting can be found in Annex B of this report.
The meeting was open to those qualified to participate either in ITU-T WP3/16 or ISO/IEC
JTC 1/SC 29/WG 11 (including experts who had been personally invited as permitted by ITU-T
or ISO/IEC policies).
Participants had been reminded of the need to be properly qualified to attend. Those seeking
further information regarding qualifications to attend future meetings may contact the
responsible coordinators.
1.6 Agenda
The agenda for the meeting was as follows:

IPR policy reminder and declarations

Contribution document allocation

Review of results of previous meeting

Consideration of contributions and communications on project guidance

Consideration of technology proposal contributions

Consideration of information contributions

Coordination activities

Future planning: Determination of next steps, discussion of working methods,
communication practices, establishment of coordinated experiments, establishment of
AHGs, meeting planning, refinement of expected standardization timeline, other planning
issues

Other business as appropriate for consideration
1.7 IPR policy reminder
Participants were reminded of the IPR policy established by the parent organizations of the JVET
and were referred to the parent body websites for further information. The IPR policy was
summarized for the participants.
The ITU-T/ITU-R/ISO/IEC common patent policy shall apply. Participants were particularly
reminded that contributions proposing normative technical content shall contain a non-binding
informal notice of whether the submitter may have patent rights that would be necessary for
implementation of the resulting standard. The notice shall indicate the category of anticipated
licensing terms according to the ITU-T/ITU-R/ISO/IEC patent statement and licensing
declaration form.
This obligation is supplemental to, and does not replace, any existing obligations of parties to
submit formal IPR declarations to ITU-T/ITU-R/ISO/IEC.
Participants were also reminded of the need to formally report patent rights to the top-level
parent bodies (using the common reporting form found on the database listed below) and to
make verbal and/or document IPR reports within the JVET necessary in the event that they are
aware of unreported patents that are essential to implementation of a standard or of a draft
standard under development.
Some relevant links for organizational and IPR policy information are provided below:

http://www.itu.int/ITU-T/ipr/index.html (common patent policy for ITU-T, ITU-R, ISO,
and IEC, and guidelines and forms for formal reporting to the parent bodies)

http://wftp3.itu.int/av-arch/jvet-site (JVET contribution templates)

http://www.itu.int/ITU-T/dbase/patent/index.html (ITU-T IPR database)
 http://www.itscj.ipsj.or.jp/sc29/29w7proc.htm (JTC 1/SC 29 Procedures)
It is noted that the ITU TSB director's AHG on IPR had issued a clarification of the IPR
reporting process for ITU-T standards, as follows, per SG 16 TD 327 (GEN/16):
"TSB has reported to the TSB Director's IPR Ad Hoc Group that they are receiving Patent
Statement and Licensing Declaration forms regarding technology submitted in Contributions
that may not yet be incorporated in a draft new or revised Recommendation. The IPR Ad
Hoc Group observes that, while disclosure of patent information is strongly encouraged as
early as possible, the premature submission of Patent Statement and Licensing Declaration
forms is not an appropriate tool for such purpose.
In cases where a contributor wishes to disclose patents related to technology in Contributions,
this can be done in the Contributions themselves, or informed verbally or otherwise in
written form to the technical group (e.g. a Rapporteur's group), disclosure which should then
be duly noted in the meeting report for future reference and record keeping.
It should be noted that the TSB may not be able to meaningfully classify Patent Statement
and Licensing Declaration forms for technology in Contributions, since sometimes there are
no means to identify the exact work item to which the disclosure applies, or there is no way
to ascertain whether the proposal in a Contribution would be adopted into a draft
Recommendation.
Therefore, patent holders should submit the Patent Statement and Licensing Declaration form
at the time the patent holder believes that the patent is essential to the implementation of a
draft or approved Recommendation."
The responsible coordinators invited participants to make any necessary verbal reports of
previously-unreported IPR in draft standards under preparation, and opened the floor for such
reports: No such verbal reports were made.
1.8 Software copyright disclaimer header reminder
It was noted that, as had been agreed at the 5th meeting of the JCT-VC and approved by both
parent bodies at their collocated meetings at that time, the JEM software uses the HEVC
reference software copyright license header language, which is the BSD license with preceding
sentence declaring that contributor or third party rights are not granted, as recorded in N10791 of
the 89th meeting of ISO/IEC JTC 1/SC 29/WG 11. Both ITU and ISO/IEC will be identified in
the <OWNER> and <ORGANIZATION> tags in the header. This software is used in the process
of designing the JEM software, and for evaluating proposals for technology to be included in the
design. This software or parts thereof might be published by ITU-T and ISO/IEC as an example
implementation of a future video coding standard and for use as the basis of products to promote
adoption of such technology.
Different copyright statements shall not be committed to the committee software repository (in
the absence of subsequent review and approval of any such actions). As noted previously, it must
be further understood that any initially-adopted such copyright header statement language could
further change in response to new information and guidance on the subject in the future.
1.9 Communication practices
The documents for the meeting can be found at http://phenix.it-sudparis.eu/jvet/.
JVET email lists are managed through the site https://mailman.rwth-aachen.de/mailman/options/jvet, and to send email to the reflector, the email address is
jvet@lists.rwth-aachen.de. Only members of the reflector can send email to the list. However,
membership of the reflector is not limited to qualified JVET participants.
It was emphasized that reflector subscriptions and email sent to the reflector must use real names
when subscribing and sending messages and subscribers must respond to inquiries regarding the
nature of their interest in the work.
For distribution of test sequences, a password protected ftp site had been set up at RWTH
Aachen University, with a mirror site at FhG-HHI. Qualified JVET participants may contact the
responsible coordinators of JVET to obtain access to these sites.
1.10 Terminology
Some terminology used in this report is explained below:

ACT: Adaptive colour transform.

AI: All-intra.

AIF: Adaptive interpolation filtering.

ALF: Adaptive loop filter.

AMP: Asymmetric motion partitioning – a motion prediction partitioning for which the
sub-regions of a region are not equal in size (in HEVC, being N/2x2N and 3N/2x2N or
2NxN/2 and 2Nx3N/2 with 2N equal to 16 or 32 for the luma component).

AMVP: Adaptive motion vector prediction.

AMT: Adaptive multi-core transform.

AMVR: (Locally) adaptive motion vector resolution.

APS: Active parameter sets.

ARC: Adaptive resolution conversion (synonymous with DRC, and a form of RPR).

ARSS: Adaptive reference sample smoothing.

ATMVP: Advanced temporal motion vector prediction.

AU: Access unit.

AUD: Access unit delimiter.

AVC: Advanced video coding – the video coding standard formally published as ITU-T
Recommendation H.264 and ISO/IEC 14496-10.

BA: Block adaptive.

BC: See CPR or IBC.

BD: Bjøntegaard-delta – a method for measuring percentage bit rate savings at equal
PSNR or decibels of PSNR benefit at equal bit rate (e.g., as described in document
VCEG-M33 of April 2001).

BIO: Bi-directional optical flow.

BL: Base layer.

BoG: Break-out group.

BR: Bit rate.

BV: Block vector (MV used for intra BC prediction, not a term used in the standard).

CABAC: Context-adaptive binary arithmetic coding.

CBF: Coded block flag(s).

CC: May refer to context-coded, common (test) conditions, or cross-component.

CCLM: Cross-component linear model.

CCP: Cross-component prediction.

CG: Coefficient group.

CGS: Colour gamut scalability (historically, coarse-grained scalability).

CL-RAS: Cross-layer random-access skip.

CPMVP: Control-point motion vector prediction (used in affine motion model).

CPR: Current-picture referencing, also known as IBC – a technique by which sample values are predicted from other samples in the same picture by means of a displacement vector sometimes called a block vector, in a manner basically the same as motion-compensated prediction.

CTC: Common test conditions.

CVS: Coded video sequence.

DCT: Discrete cosine transform (sometimes used loosely to refer to other transforms
with conceptually similar characteristics).

DCTIF: DCT-derived interpolation filter.

DF: Deblocking filter.

DRC: Dynamic resolution conversion (synonymous with ARC, and a form of RPR).

DT: Decoding time.

ECS: Entropy coding synchronization (typically synonymous with WPP).

EE: Exploration Experiment – a coordinated experiment conducted toward assessment of
coding technology.

EOTF: Electro-optical transfer function – a function that converts a representation value to a quantity of output light (e.g., light emitted by a display).

EPB: Emulation prevention byte (as in the emulation_prevention_byte syntax element of
AVC or HEVC).

EL: Enhancement layer.

ET: Encoding time.

FRUC: Frame rate up conversion.

HEVC: High Efficiency Video Coding – the video coding standard developed and
extended by the JCT-VC, formalized by ITU-T as Rec. ITU-T H.265 and by ISO/IEC as
ISO/IEC 23008-2.

HLS: High-level syntax.

HM: HEVC Test Model – a video coding design containing selected coding tools that constitutes our draft standard design – now also used especially in reference to the (non-normative) encoder algorithms (see WD and TM).

IBC (also Intra BC): Intra block copy, also known as CPR – a technique by which sample values are predicted from other samples in the same picture by means of a displacement vector called a block vector, in a manner conceptually similar to motion-compensated prediction.

IBDI: Internal bit-depth increase – a technique by which lower bit-depth (esp. 8 bits per
sample) source video is encoded using higher bit-depth signal processing, ordinarily
including higher bit-depth reference picture storage (esp. 12 bits per sample).

IBF: Intra boundary filtering.

ILP: Inter-layer prediction (in scalable coding).

IPCM: Intra pulse-code modulation (similar in spirit to IPCM in AVC and HEVC).

JEM: Joint exploration model – the software codebase for future video coding
exploration.

JM: Joint model – the primary software codebase that has been developed for the AVC
standard.

JSVM: Joint scalable video model – another software codebase that has been developed
for the AVC standard, which includes support for scalable video coding extensions.

KLT: Karhunen-Loève transform.

LB or LDB: Low-delay B – the variant of the LD conditions that uses B pictures.

LD: Low delay – one of two sets of coding conditions designed to enable interactive real-time communication, with less emphasis on ease of random access (contrast with RA). Typically refers to LB, although also applies to LP.

LIC: Local illumination compensation.

LM: Linear model.

LP or LDP: Low-delay P – the variant of the LD conditions that uses P frames.

LUT: Look-up table.

LTRP: Long-term reference pictures.

MANE: Media-aware network elements.

MC: Motion compensation.

MDNSST: Mode dependent non-separable secondary transform.

MOS: Mean opinion score.

MPEG: Moving picture experts group (WG 11, the parent body working group in
ISO/IEC JTC 1/SC 29, one of the two parent bodies of the JVET).

MV: Motion vector.

NAL: Network abstraction layer (as in AVC and HEVC).

NSQT: Non-square quadtree.

NSST: Non-separable secondary transform.

NUH: NAL unit header.

NUT: NAL unit type (as in AVC and HEVC).

OBMC: Overlapped block motion compensation (e.g., as in H.263 Annex F).

OETF: Opto-electronic transfer function – a function that converts input light (e.g., light input to a camera) to a representation value.

OOTF: Optical-to-optical transfer function – a function that converts input light (e.g., light input to a camera) to output light (e.g., light emitted by a display).

PDPC: Position dependent (intra) prediction combination.

POC: Picture order count.

PoR: Plan of record.

PPS: Picture parameter set (as in AVC and HEVC).

QM: Quantization matrix (as in AVC and HEVC).

QP: Quantization parameter (as in AVC and HEVC, sometimes confused with
quantization step size).

QT: Quadtree.

QTBT: Quadtree plus binary tree.

RA: Random access – a set of coding conditions designed to enable relatively-frequent
random access points in the coded video data, with less emphasis on minimization of
delay (contrast with LD).

RADL: Random-access decodable leading.

RASL: Random-access skipped leading.

R-D: Rate-distortion.

RDO: Rate-distortion optimization.

RDOQ: Rate-distortion optimized quantization.

ROT: Rotation operation for low-frequency transform coefficients.

RPLM: Reference picture list modification.

RPR: Reference picture resampling (e.g., as in H.263 Annex P), a special case of which
is also known as ARC or DRC.

RPS: Reference picture set.

RQT: Residual quadtree.

RRU: Reduced-resolution update (e.g. as in H.263 Annex Q).

RVM: Rate variation measure.

SAO: Sample-adaptive offset.

SD: Slice data; alternatively, standard-definition.

SDT: Signal dependent transform.

SEI: Supplemental enhancement information (as in AVC and HEVC).

SH: Slice header.

SHM: Scalable HM.

SHVC: Scalable high efficiency video coding.

SIMD: Single instruction, multiple data.

SPS: Sequence parameter set (as in AVC and HEVC).

STMVP: Spatio-temporal motion vector prediction.

SW: Software.

TBA/TBD/TBP: To be announced/determined/presented.

TGM: Text and graphics with motion – a category of content that primarily contains rendered text and graphics with motion, mixed with a relatively small amount of camera-captured content.

VCEG: Visual coding experts group (ITU-T Q.6/16, the relevant rapporteur group in
ITU-T WP3/16, which is one of the two parent bodies of the JVET).

VPS: Video parameter set – a parameter set that describes the overall characteristics of a
coded video sequence – conceptually sitting above the SPS in the syntax hierarchy.

WG: Working group, a group of technical experts (usually used to refer to WG 11, a.k.a.
MPEG).

WPP: Wavefront parallel processing (usually synonymous with ECS).

Block and unit names:
o CTB: Coding tree block (luma or chroma) – unless the format is monochrome,
there are three CTBs per CTU.
o CTU: Coding tree unit (containing both luma and chroma, synonymous with
LCU), with a size of 16x16, 32x32, or 64x64 for the luma component.
o CB: Coding block (luma or chroma), a luma or chroma block in a CU.
o CU: Coding unit (containing both luma and chroma), the level at which the
prediction mode, such as intra versus inter, is determined in HEVC, with a size of
2Nx2N for 2N equal to 8, 16, 32, or 64 for luma.
o PB: Prediction block (luma or chroma), a luma or chroma block of a PU, the level
at which the prediction information is conveyed or the level at which the
prediction process is performed2 in HEVC.
2 The definitions of PB and PU are tricky for a 64x64 intra luma CB when the prediction control information is sent at the 64x64 level but the prediction operation is performed on 32x32 blocks. The PB, PU, TB and TU definitions are also tricky in relation to chroma for the smallest block sizes with the 4:2:0 and 4:2:2 chroma formats. Double-checking of these definitions is encouraged.
o PU: Prediction unit (containing both luma and chroma), the level of the prediction
control syntax2 within a CU, with eight shape possibilities in HEVC:

2Nx2N: Having the full width and height of the CU.

2NxN (or Nx2N): Having two areas that each have the full width and half
the height of the CU (or having two areas that each have half the width
and the full height of the CU).

NxN: Having four areas that each have half the width and half the height
of the CU, with N equal to 4, 8, 16, or 32 for intra-predicted luma and N
equal to 8, 16, or 32 for inter-predicted luma – a case only used when
2N×2N is the minimum CU size.

N/2x2N paired with 3N/2x2N or 2NxN/2 paired with 2Nx3N/2: Having
two areas that are different in size – cases referred to as AMP, with 2N
equal to 16 or 32 for the luma component.
o TB: Transform block (luma or chroma), a luma or chroma block of a TU, with a
size of 4x4, 8x8, 16x16, or 32x32.
o TU: Transform unit (containing both luma and chroma), the level of the residual
transform (or transform skip or palette coding) segmentation within a CU (which,
when using inter prediction in HEVC, may sometimes span across multiple PU
regions).
1.11 Opening remarks
 Reviewed logistics, agenda, working practices
 Results of previous meeting: JEM, web site, software, etc.
1.12 Scheduling of discussions
Scheduling: Generally meeting time was scheduled during 0800–2000 hours, with coffee and
lunch breaks as convenient. Ongoing scheduling refinements were announced on the group email
reflector as needed. Some particular scheduling notes are shown below, although not necessarily
100% accurate or complete:
(Meeting sessions were chaired by J.-R. Ohm, G. J. Sullivan, and/or J. Boyce)

Sat. 20 Feb., 1st day
o 0900–1300 AHG reports and JEM analysis (chaired by JRO and GJS)
o 1430–1900 JEM analysis, technology proposals (chaired by JB)

Sun. 21 Feb., 2nd day
o 0900–1300 Technology proposals (chaired by JB)
o 1400–1600 BoG on independent segment coding (K. Sühring)
o 1600–1800 Technology proposals (chaired by JRO and JB)
o 1800– BoG on test material selection (T. Suzuki)

Mon. 22 Feb., 3rd day
o 1600– BoG on test material selection (T. Suzuki)
o 1800– BoG on B0039 viewing (K. Andersson)

Tue. 23 Feb., 4th day
o 1000–1330 Technology proposals, EE principles (chaired by JRO and JB)
o 1530–1800 Viewing of test material (T. Suzuki)
o 1600–1800 BoG on parallel coding and cross-RAP dependencies (K. Sühring)

Wed. 24 Feb., 5th day
o Breakout work on test material
o 1400–1800 common test conditions, EE review, revisits, setup AHGs (chaired by JB
and later JRO)

Thu. 25 Feb., 6th day
o 0900–1200 BoG on call for test materials (A. Norkin)
o 1430–1600 closing plenary, approval of documents (chaired by JRO and GJS)
1.13 Contribution topic overview
The approximate subject categories and quantity of contributions per category for the meeting
were summarized
 Status and guidance (3) (section 3)
 Analysis and improvement of JEM (11) (section 0)
 Test material investigation (18) (section 4)
 Technology proposals (16) (section 5)
2 Status and guidance by parent bodies (3)
JVET-B0001 Report of VCEG AHG1 on Coding Efficiency Improvements [M. Karczewicz,
M. Budagavi]
The following summarizes the Coding Efficiency Improvements AHG activities between the Q.6/16 VCEG meeting in Geneva, Switzerland (October 2015) and the current meeting in San Diego, USA.
The first version of the Joint Exploration Test Model software (HM-16.6-JEM-1.0) was released on 17 December 2015. The software can be downloaded at:
https://vceg.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/tags/HM-16.6-JEM-1.0
The following tools were included on top of HM-14.0-KTA-2.0 software:

Position dependent intra prediction combination (PDPC).

Harmonization and improvements for Bi-Directional Optical Flow (BIO).

Non-separable secondary transform.

Affine motion prediction.
 Adaptive reference sample smoothing.
Signal dependent transform (SDT) is still being integrated. In addition, configuration files were updated to reflect modifications to the chroma QP offset.
The table below shows the BD-rate reduction of JEM1 compared to HM-14.0 under the JCT-VC common test conditions for the random access configuration.
Random Access Main 10
              Y          U          V
  Class A   −20.81%    −29.93%    −23.77%
  Class B   −21.27%    −13.17%     −9.24%
  Class C   −20.65%    −14.78%    −18.20%
  Class D   −20.55%     −9.84%    −12.06%
  Class E      –          –          –
  Overall   −20.84%    −16.71%    −15.43%
These results do not yet include the signal dependent transform, which had not been integrated in time.
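For reference, the BD-rate percentages quoted throughout this report are Bjøntegaard delta-rate figures. A minimal sketch of that computation (assuming the standard cubic-fit formulation; the rate/PSNR points below are made up for illustration) is:

    # Illustrative sketch of the Bjontegaard delta-rate metric: fit log(rate) as a
    # cubic in PSNR for anchor and test, integrate over the common PSNR range, and
    # convert the average log-rate difference into a percentage.
    import numpy as np

    def bd_rate(rates_anchor, psnr_anchor, rates_test, psnr_test):
        pa_fit = np.polyfit(psnr_anchor, np.log(rates_anchor), 3)   # anchor curve
        pt_fit = np.polyfit(psnr_test, np.log(rates_test), 3)       # test curve
        lo = max(min(psnr_anchor), min(psnr_test))                  # overlapping PSNR interval
        hi = min(max(psnr_anchor), max(psnr_test))
        ia = np.polyval(np.polyint(pa_fit), hi) - np.polyval(np.polyint(pa_fit), lo)
        it = np.polyval(np.polyint(pt_fit), hi) - np.polyval(np.polyint(pt_fit), lo)
        avg_diff = (it - ia) / (hi - lo)                             # average log-rate difference
        return (np.exp(avg_diff) - 1.0) * 100.0                      # negative = bitrate saving

    # Example with made-up rate/PSNR points for four QPs:
    print(bd_rate([1000, 1800, 3200, 6000], [34.0, 36.0, 38.0, 40.0],
                  [850, 1500, 2700, 5100], [34.1, 36.2, 38.1, 40.2]))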
The following contributions proposing new tools or modifications of existing tools had been registered for this meeting:

Generic
o JVET-B0023: Quadtree plus binary tree structure integration with JEM tools
o JVET-B0028: Direction-dependent sub-TU scan order on intra prediction
o JVET-B0033: Adaptive Multiple Transform for Chroma
o JVET-B0038: Harmonization of AFFINE, OBMC and DBF
o JVET-B0047: Non Square TU Partitioning
o JVET-B0051: Further improvement of intra coding tools
o JVET-B0058: Modification of merge candidate derivation
o JVET-B0059: TU-level non-separable secondary transform
o JVET-B0060: Improvements on adaptive loop filter

Screen content
o JVET-B0048: Universal string matching for ultra high quality and ultra high
efficiency SCC

HDR
o JVET-B0054: De-quantization and scaling for next generation containers

Scalability
o JVET-B0043: Polyphase subsampled signal for spatial scalability
In addition, there were two contributions proposing encoder-only modifications to JEM:

JVET-B0039: Non-normative JEM encoder improvements.
 JVET-B0041: Adaptive reference sample smoothing simplification.
There were also a number of contributions analyzing performance of tools already included in
JEM1:

JVET-B0022: Performance of JEM 1 tools analysis by Samsung.

JVET-B0037: Performance analysis of affine inter prediction in JEM1.0.

JVET-B0044: Coding Efficiency / Complexity Analysis of JEM 1.0 coding tools for the
Random Access Configuration.

JVET-B0045: Performance evaluation of JEM 1 tools by Qualcomm.
 JVET-B0057: Evaluation of some intra-coding tools of JEM1.
Three of the contributions provide analysis of all the tools currently included in JEM software
(JVET-B0022, JVET-B0044 and JVET-B0045). Documents JVET-B0022 and JVET-B0045
include both tool-on and tool-off results, while the document JVET-B0044 only includes tool-on
results. Tool-on results in documents JVET-B0022 and JVET-B0045 are the same for all the
tools except for cross-component linear model (COM16_C806_LMCHROMA) prediction,
secondary transform (COM16_C1044_NSST) and adaptive reference sample smoothing
(COM16_C983_RSAF). For cross-component linear model prediction and secondary transform
the mismatch is limited to a single Class A sequence. The tool-off results in JVET-B0022 and
JVET-B0045 differ for a larger number of tools: sub-PU level motion vector prediction
(COM16_C806_VCEG_AZ10_SUB_PU_TMVP), pattern matched motion vector derivation
(VCEG_AZ07_FRUC_MERGE), position dependent intra prediction combination
(COM16_C1044_PDPC), cross-component linear model prediction
(COM16_C806_LMCHROMA), adaptive reference sample smoothing (COM16_C983_RSAF)
and adaptive loop filter (ALF_HM3_REFACTOR).
The AHG recommended
- To review all the related contributions
- To discuss the possibility of an extension/annex to HEVC based on the tool analysis submissions
JVET-B0002 VCEG AHG report on Subjective Distortion Measurement (AHG2 of VCEG)
[T. K. Tan]
A report of this AHG was submitted, although there was no substantial activity reported. The
report recommended to discontinue the AHG; however, several experts expressed their view that
the topic is important and merits further AHG study.
JVET-B0004 Report of VCEG AHG on test sequences selection (AHG4 of VCEG) [T. Suzuki,
J. Boyce, A. Norkin]
Reviewed Sunday.
The available UGC sequences have too much hand-held camera shake. It would be preferable to have sequences without such shake. Stabilizing pre-processing can be applied, and the stabilized version could be the new source.
Sequences chosen for evaluation at the last meeting are available at the Aachen University ftp site.
There have been a number of input contributions studying the selection of sequences. Four new SCC test sequences have been proposed. Parts of the Chimera and El Fuente test sequences chosen at the last meeting have been made available by Netflix under the "Creative Commons Attribution-NonCommercial-NoDerivatives 4.0" license.
The sequences offered by the following parties had been uploaded and were available from the Aachen University ftp site ftp.ient.rwth-aachen.de:

Bcom

Huawei

Netflix (chosen parts of Chimera and ElFuente)

University of Bristol.
Full ElFuente and Chimera sequences were available at www.cdvl.org under the CDVL license.
Four new screen content coding sequences (1920x1080, 30 fps, 300 frames) had been proposed by Tongji University; both RGB and YCbCr versions were available.
A BoG (coordinated by T. Suzuki and J. Chen) was formed for selection of test material with the
following requested activities

Review input contributions, sort out sequences which are assessed to be inappropriate for
codec testing (as per assessment of the testers)

Establish categories per application domain/characteristics, e.g. movie streaming,
surveillance, …

Identify categories (including resolutions besides 4K) where we would like to have
content, e.g. user generated content available so far seems inappropriate; also consider
whether subsampling could be performed on some of the sequences to generate lower
resolution material

Pre-select around 8+ per category for viewing at compressed results, clarify how many
QP points to look at (looking at more rate points should be better to identify the
characteristics of the sequences with regard to codec testing, because different degrees of
quality should be visible at different rates)

Also consider the RD behaviour that is reported for selecting the sequences and the QP
points to look at

In the end, we should select around 4 per category (towards a subjective viewing effort as
would be performed in context of a CfP)

As a target, at the end of the meeting, new sequences could be identified to replace class
A.

Discuss whether it would be appropriate for objective testing to use a larger number of shorter sequences (for RA/LD) or temporally subsampled sequences (for AI)

Noisy sequences should also be included if they are typical of content captured by cameras
JVET-B0006 Report of VCEG AHG on JEM software development [X. Li, K. Suehring]
This report summarizes the activities of the AhG on JEM software development that had taken
place between the 1st and 2nd JVET meetings.
The mandates given to the AhG were as follows:

Coordinate the development and availability of the "Joint Exploration Model" (JEM)
software for future video coding investigation

Update the software as necessary for bug fixes and new adoptions

Coordinate with AHG1 on analysis of the behaviour and capabilities of the new proposed
features that are integrated into this software
A brief summary of activities is given below.
Software development was continued based on the HM-16.6-KTA-2.0 version. A branch was
created in the software repository to implement the JEM 1 tools based on the decisions noted in
document TD-WP3-0215. All integrated tools were included in macros to highlight the changes
in the software related to that specific tool.
HM-16.6-JEM-1.0 was released on Dec. 17th, 2015.
A few minor fixes were added to the trunk after the release of HM-16.6-JEM-1.0. Those fixes
will be included in the next release of JEM.
A branch was created based on HM-16.6-JEM-1.0 for the integration of VCEG-AZ08, which
was decided to be reimplemented in HM-16.6-JEM-1.1.
The integration of VCEG-AZ08 was not finalized before the start of the 2nd JVET meeting, so no performance report is available. The contributors reported that they are still working on bug fixes. Because of its high complexity on top of the already complex tools, running full anchors takes more than a week, even with fully parallel coding, which makes verification complicated. As encoding time has become a significant burden, fast algorithms that do not degrade quality should be encouraged.
As decided at the last meeting, another branch was created for COM16-C966, which was based
on HM-13.0.
Software repository and versions are summarized below:

The JEM software is developed using a Subversion repository located at:
o https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/

The implementation of JEM 1 tools has been performed on the branch
o https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/branches/HM-16.6-KTA2.0-dev

The reimplementation of VCEG-AZ08 is performed on the branch
o https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/branches/HM-16.6-JEM1.0-dev

The branch for COM16-C966
o https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/branches/HM-13.0QTBT

Released versions of HM-16.6-JEM-1.0 can be found at
o https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/tags/HM-16.6-JEM-1.0
As decided at the last meeting and on the JVET email reflector, the Main 10 settings of HM test
conditions with chroma QP adjustment for AI and RA configurations were used for the tests.
The performance of HM-16.6-JEM-1.0 over HM-16.6-KTA-2.0 and HM-16.6 is summarized as
follows.
(Tables: per-class (A–E, Overall, and optional Class F) Y/U/V BD-rate and encoder/decoder run-time ratios of HM-16.6-JEM-1.0 for the All Intra, Random Access, Low delay B and Low delay P Main10 configurations, measured both over HM-16.6-KTA-2.0 and over HM-16.6 (gcc 5.2, no SIMD). The overall luma BD-rate over HM-16.6-KTA-2.0 is −3.69% for All Intra; the overall luma BD-rates over HM-16.6 are approximately −14.2% (AI), −20.8% (RA), −16.7% (LDB) and −19.9% (LDP).)
The JEM bug tracker is located at
https://hevc.hhi.fraunhofer.de/trac/jem
It uses the same accounts as the HM software bug tracker. For spam-fighting reasons, account registration is only possible at the HM software bug tracker at
https://hevc.hhi.fraunhofer.de/trac/hevc
Contributors were asked to file all issues related to JEM in the bug tracker and to provide all the details necessary to reproduce each issue. Patches for solving issues and improving the software are always appreciated.
The AHG recommended

to continue software development on the HM-16.6 based version

to formalize test conditions with an output document

to provide software decisions in written form during or shortly after the meeting, e.g. as a
BoG report or draft meeting report
It was commented that having an extreme runtime means that integration testing of new tools
becomes difficult since they must be tested with other tools.
Decoding time is influenced by the advanced motion techniques (BIO and FRUC), and it was noted that the percentage increases for decoding are affected by the relatively fast baseline decoding speed. The times also depend on the degree of compiler optimization. Encoding for intra is influenced by
several things. The effect of the increased number of intra modes is mitigated by fast search
techniques.
It was remarked that joint decisions could be used in the anchor to obtain some of the gain of
things like increased block size.
The luma/chroma imbalance and the use of chroma delta QP to adjust for that was discussed. It
was noted that the offset fixed the imbalance that was previously evident in the overall results.
It was commented that the nonlinear relationship between luma QP and chroma QP means that
the offset has a different amount of effect for large QP and for smaller QP (and can be a problem
for encoders trying to adjust the QP in a customized way).
It might be valuable to use a QP offset for chroma that is specific to the QP value for luma
(which depends on the hierarchy position). However, this is not currently supported in the
software and doing the tests necessary to determine the appropriate value would take a
substantial amount of time and effort.
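The nonlinear effect mentioned above comes from the luma-to-chroma QP mapping. The following sketch (using the HEVC 4:2:0 mapping table, with simplified clipping for 8-bit content) shows how the same signalled chroma QP offset shifts the chroma QP by different amounts at low and high luma QP:

    # Illustrative sketch of why a fixed chroma QP offset acts nonlinearly: HEVC maps
    # luma QP plus the signalled offset to the chroma QP through a table, so the same
    # offset changes the effective chroma quantization differently at low and high QP.
    QPI_TO_QPC = [29, 30, 31, 32, 33, 33, 34, 34, 35, 35, 36, 36, 37, 37]  # qPi = 30..43

    def chroma_qp(luma_qp, chroma_offset):
        qpi = max(0, min(57, luma_qp + chroma_offset))   # simplified clipping (8-bit case)
        if qpi < 30:
            return qpi
        if qpi <= 43:
            return QPI_TO_QPC[qpi - 30]
        return qpi - 6

    for luma_qp in (22, 27, 32, 37):
        print(luma_qp, chroma_qp(luma_qp, 0), chroma_qp(luma_qp, -1))

At QP 22 an offset of −1 lowers the chroma QP by a full step, while at QP 37 the same offset leaves the mapped chroma QP unchanged.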
Integration of the JEM with the SCM (SCC) software and updating to a more recent version are
desirable. The runtime and memory usage increase associated with SCC tools was noted as a
concern.
It was remarked that the Locomotive and Nebuta test sequences have unusual characteristics and
that using only picture resolution as the categorization of test sequences for CTC may not be
sufficient.
At least as a supplemental test, we should run the SCM on the JVET CTC and see how much of
an effect may be missing from our tests because of the lack of SCC feature capability in the JEM
software.
3
Analysis and improvement of JEM (11)
JVET-B0021 An improved description of Joint Exploration Test Model 1 (JEM1) [J. Chen,
E. Alshina, G. J. Sullivan, J.-R. Ohm, J. Boyce] [late]
Discussed Sat 11:00 GJS & JRO
This document provides and summarizes proposed improvements to the Algorithm Description of Joint Exploration Test Model 1 (w15790 and T13-SG16-151012-TD-WP3-0213). The main changes are the addition of a description of the encoding strategies used in experiments for the study of the new technology in JEM, as well as improvements to the algorithm description.
JVET-B0022 Performance of JEM 1 tools analysis by Samsung [E. Alshina, A. Alshin, K. Choi,
M. Park (Samsung)]
This contribution presents performance tests for each tool in JEM 1.0, both in the absence and in the presence of the other tools. The goal of this testing was to give a better understanding of the efficiency and complexity of each individual tool, identify pain points, and suggest rules to follow during further JEM development. It can also be considered a cross-check of all tools previously added to the JEM.
It was reported that almost every tool in JEM has variations and supplementary modifications. Sometimes those modifications were not mentioned in the original contribution and so are not properly described in the JEM algorithm description document.
In total, the JEM description includes 22 tools. Two of them were not integrated into the main
S/W branch by the start time of this testing (and so were tested separately).
Below is a summary of each of the JEM tool's performance in the absence of other tools, as
reported in the contribution:
Part 1: all-intra and random access.
(Table: for each JEM1.0 tool – larger CTB and larger TU; quadtree plus binary tree structure; 67 intra prediction modes; four-tap intra interpolation filter; boundary prediction filters; cross-component prediction; position dependent intra combination; adaptive reference sample smoothing; sub-PU based motion vector prediction; adaptive motion vector resolution; overlapped block motion compensation; local illumination compensation; affine motion compensation prediction; pattern matched motion vector derivation; bi-directional optical flow; adaptive multiple core transform; secondary transforms; signal dependent transform (SDT); adaptive loop filter; context models for transform coefficients; multi-hypothesis probability estimation; and initialization for context models – the Y/U/V BD-rate and encoder/decoder run-time ratios in the All Intra and Random Access configurations, together with a "hypothetical max gain" row, the JEM1.0 totals (AI: −14.2/−12.6/−12.6; RA: −20.8/−17.7/−15.4) and efficiency factors of 0.82 (AI) and 0.78 (RA).)
Part 2: Low delay B and low delay P.
(Table: the same per-tool Y/U/V BD-rate and encoder/decoder run-time analysis for the Low delay B and Low delay P configurations, with JEM1.0 totals of −16.7/−21.7/−22.3 (LDB) and −19.9/−24.1/−24.3 (LDP) and efficiency factors of 0.96 (LDB) and 0.84 (LDP).)
General comments in the contribution, based on this tool-by-tool analysis, were:
- Tools have been added to JEM without proper cross-checking and study
- Some tools include modifications that are not directly related to the proposed tool
- Proposals include only a very broad description of the algorithm (important details were not mentioned in the JEM description)
- There is some overlap between tools; the "efficiency coefficient" is: AI: 82%, RA: 78%; LD: 96%; LDP: 84%
- The additional memory for parameter storage is huge
- The additional precision of the new transform coefficients and interpolation filters is questionable
Tool-by-tool analysis and commentary for each tool in JEM1.0 was provided in substantial detail. A few of the many observations reported in the document are:
- For large block sizes, CU sizes larger than 64 are almost never used for encoding, even for the highest resolution test sequences (class A). However, enlarging the CTB size decreases the SAO overhead cost, so SAO is applied more actively, especially for chroma. In the contributors' opinion, the main source of gain from enlarging the CTB size is more efficient SAO usage. The performance impact of high-precision 64x64 transforms was said to be negligible.
- The performance improvement of the 4-tap intra interpolation filter is about twice as high for classes C and D as for high-resolution video.
- Some combination of recent changes to MPI handling did not appear helpful.
- Some strange behaviour: disabling context model selection for transform coefficients provides 0.3% (LDB) and 0.2% (LDP) gain; disabling window adaptation in the probability estimation for CABAC results in a 0.00% BD-rate change.
- The deblocking filter operation is changed when ALF is enabled.
Based on the presented JEM analysis, the contributor suggested the following:
- Do not make "blind tool additions" to JEM;
- Establish exploration experiments (EEs):
o Group tools by categories;
o A proposal should be studied in an EE for at least one meeting cycle before the JEM is modified;
o List all alternatives (including tools in HM-KTA that were blindly modified in JEM);
o "Hidden modifications" should be tested separately;
o Identify tools with duplicated functionality and overlapping performance in EEs;
- Simplifications (run time, memory usage) are desired;
- The JEM tool descriptions need to be updated based on the knowledge gained;
- Repeat tool-on and tool-off tests for the new test sequences (after the test set is modified).
Comments from the group discussion:

Don't forget to consider subjective quality and alternative quality measures

Compute and study the number of texture bits for luma and chroma separately
 It may help if the software architecture can be improved
Group agreements:

Have an EE before an addition to JEM

Try to identify some things to remove (only very cautiously)

Try to identify some inappropriate side-effects to remove

Try to identify some agreed subset(s)
o May need to consider multiple complexity levels
o Consider this in the CTC development
JVET-B0062 Crosscheck of JVET-B0022 (ATMVP) [X. Ma, H. Chen, H. Yang (Huawei)] [late]
JVET-B0036 Simplification of the common test condition for fast simulation [X. Ma, H. Chen,
H. Yang (Huawei)]
Chaired by J. Boyce
A simplified test condition is proposed for RA and AI configurations to reduce simulation runtime. For RA configuration, each RAS (Random Access Segment, approximately 1s duration) of
the full-length sequence can be used for simulation independently of the other RASs, and therefore the simulation of the full-length sequence can be split into a set of parallel jobs. For AI configuration,
RAP pictures of the full-length sequence are chosen as a snapshot of the original for simulation.
It is claimed that the compression performance when using the original test condition can be
reflected faithfully by using the proposed new test condition, while the encoding run-time is
significantly reduced.
Encoding of Nebuta for QP22 RA takes about 10 days. The contribution proposes parallel
encoding for RA Segments.
A small mismatch is seen when parallel encoding is done, because of some cross-RAP encoder dependencies. Sources of the mismatches were identified in the contribution. It was suggested that the ALF-related difference is due to a bug in decoder dependency across random access points, which has been reported in a software bug report.
It was proposed to encode only some of the intra frames, or to use the parallel method.
If RA is changed in this way, LD will become the new bottleneck.
Software was not yet available in the contribution. Significant interest was expressed in having
this software made available.
We want to restrict our encoder to not use cross-RAP dependencies, so that parallel encoding
would have no impact on the results.
It was agreed to create a BoG (coordinated by K. Suehring and H. Yang) to remove cross-RAP
dependencies in the encoder software/configurations. It was agreed that if this could be done
during the meeting, the common test conditions defined at this meeting would include this
removal of the dependencies. (see notes under B0074).
Decision (SW): Adopt to JEM SW, once the SW is available and confirmed to have identical
encoding results, with cross-RAP dependencies removed. Also add to common test conditions.
Decoding time reporting is typically done in ratios. The decoding time calculation can be based either on adding the parallel decoding times or on non-parallel decoding, but the same method should be used for both the anchor and the test.
It is proposed for AI to just use the I frames from the RA config, in order to reduce the AI
encode time.
This was further discussed Tuesday as part of the common test conditions consideration.
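A minimal sketch of the proposed parallel-RAS encoding workflow is shown below (hypothetical file and configuration names; the encoder options follow the usual HM command line and may differ in detail). As discussed above, it only gives bit-exact results once the cross-RAP encoder dependencies are removed:

    # Illustrative sketch: split a sequence into ~1 s random access segments and encode
    # each segment as an independent, concurrently launched job.
    import subprocess

    def encode_parallel(yuv, total_frames, fps, cfg, out_prefix):
        seg_len = fps                        # one RAS per intra period of ~1 second
        jobs = []
        for seg, start in enumerate(range(0, total_frames, seg_len)):
            frames = min(seg_len, total_frames - start)
            cmd = ["TAppEncoder", "-c", cfg, "-i", yuv,
                   "-fs", str(start), "-f", str(frames),
                   "-b", f"{out_prefix}_seg{seg}.bin"]
            jobs.append(subprocess.Popen(cmd))   # launch segments in parallel
        for p in jobs:
            p.wait()

    # Hypothetical call for a 300-frame, 60 fps class A sequence:
    encode_parallel("Nebuta_2560x1600_60.yuv", 300, 60,
                    "encoder_randomaccess_jem.cfg", "nebuta_qp22")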
JVET-B0037 Performance analysis of affine inter prediction in JEM1.0 [H. Zhang, H. Chen,
X. Ma, H. Yang (Huawei)]
Chaired by J. Boyce.
An inter prediction method based on an affine motion model was proposed in the previous
meeting and was adopted into JEM (Joint Exploration Model). This contribution presents the
coding performance of the affine coding tool integrated in JEM 1.0. Results show that affine
inter prediction can bring 0.50%, 1.32%, 1.35% coding gains beyond JEM 1.0 in RA main 10,
LDB main 10 and LDP main 10 configurations, respectively. In addition, comments regarding
this coding tool collected at the previous meeting are addressed.
In the affine motion model tool, 1/64-pixel MV resolution is used only for those PUs that select the affine model.
The affine motion model tool is already included in the JEM. No changes are proposed; this contribution just provides some additional information about the tool.
JVET-B0039 Non-normative JEM encoder improvements [K. Andersson, P. Wennersten,
R. Sjoberg, J. Samuelsson, J. Strom, P. Hermansson, M. Pettersson (Ericsson)]
Chaired by J. Boyce.
This contribution reports that a change to the alignment between QP and lambda improves the
BD rate for luma by 1.65% on average for RA, 1.53% for LD B and 1.57% for LD P using the
common test conditions. The change, in combination with extension to a GOP hierarchy of
length 16 for random access, is reported to improve the BD rate for luma by 7.0% on average
using the common test conditions. To verify that a longer GOP hierarchy does not decrease the
performance for difficult-to-encode content, four difficult sequences were also tested. An
average improvement in luma BD rate of 4.7% was reported for this additional test set. Further
extending the GOP hierarchy to length 32 is reported for HM to improve the BD rate by 9.7% for
random access common conditions and 5.4% for the additional test set. It was also reported that
the PSNR of the topmost layer is improved and that subjective quality improvements with
respect to both static and moving areas have been seen by the authors especially when both the
change to the alignment between QP and lambda and a longer GOP hierarchy are used. The
contribution proposes that both the change to the alignment between QP and lambda and the
extension to a GOP hierarchy of 16 or 32 pictures be included in the reference software for JEM
and used in the common test conditions. Software is provided in the contribution.
The contribution proposed to adjust the alignment between lambda and QP. This would be an
encoder only change. Decision (SW): Adopt the QP and lambda alignment change to the JEM
encoder SW. Communicate to JCT-VC to consider making the same change to the HM. Also add
to common test conditions.
The contribution proposed an increase in the hierarchy depth that would require a larger DPB
size than HEVC currently supports if the resolution was at the maximum for the level. This will
add a very long delay and would make it difficult to compare performance to the HM unless a
corresponding change is also made to that. Encoders might not actually use the larger hierarchy,
so this might not represent expected real world conditions.
It was suggested to revisit the consideration of the common test conditions to include GOP
hierarchy of 16 or 32 after offline subjective viewing. The intra period will also need to be
considered. Memory analysis is also requested. A BoG (coordinated by K. Andersson, E. Alshina)
was created to conduct informal subjective viewing.
This was discussed again Tuesday AM, see further notes under the JVET-B0075 BoG report.
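To illustrate the kind of relationship being adjusted, the sketch below uses the HM-style rule lambda ≈ weight · 2^((QP−12)/3) together with a hypothetical GOP-16 temporal-layer and QP-offset assignment; the actual weights and offsets proposed in JVET-B0039 are not reproduced here:

    # Minimal sketch of a QP-to-lambda rule and a hierarchical GOP-16 QP assignment.
    # The per-layer offsets and weights are placeholders for illustration only.
    def rd_lambda(qp, weight=1.0):
        return weight * 2.0 ** ((qp - 12) / 3.0)

    gop16_layer_of_poc = {1: 4, 2: 3, 3: 4, 4: 2, 5: 4, 6: 3, 7: 4, 8: 1,
                          9: 4, 10: 3, 11: 4, 12: 2, 13: 4, 14: 3, 15: 4, 16: 0}
    qp_offset_per_layer = [1, 2, 3, 4, 5]    # placeholder offsets relative to the base QP

    base_qp = 32
    for poc in range(1, 17):
        layer = gop16_layer_of_poc[poc]
        qp = base_qp + qp_offset_per_layer[layer]
        print(poc, layer, qp, round(rd_lambda(qp), 2))

Deeper temporal layers receive larger QP offsets and hence larger lambda values, so fewer bits are spent on the least-referenced pictures.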
JVET-B0063 Cross-check of non-normative JEM encoder improvements (JVET-B0039) [B. Li,
J. Xu (Microsoft)] [late]
JVET-B0067 Cross-check of JVET-B0039: Non-normative JEM encoder improvements
[C. Rudat, B. Bross, H. Schwarz (Fraunhofer HHI)] [late]
JVET-B0044 Coding Efficiency / Complexity Analysis of JEM 1.0 coding tools for the Random
Access Configuration [H. Schwarz, C. Rudat, M. Siekmann, B. Bross, D. Marpe, T. Wiegand
(Fraunhofer HHI)]
Chaired by J. Boyce.
This contribution provides a coding efficiency / complexity analysis of JEM 1.0 coding tools for
the random access Main 10 configuration. The primary goal of the investigation was to identify
sets of coding tools that represent operation points on the concave hull of the coding efficiency –
complexity points for all possible combinations of coding tools. Since an analysis of all
combinations of coding tools is virtually impossible (for the 22 integrated coding tools, there are
2^22 = 4,194,304 combinations), the authors used a two-step analysis: First, all coding tools were
evaluated separately and ordered according to the measured coding efficiency – complexity
slopes. In the second step, the coding tools were successively enabled in the determined order.
The analysis started with the tool having the highest (greedy) "bang for the buck" value (coding gain vs. complexity, as measured by a weighted combination of encode and decode run times, with decode weighted 5x more than encode), and iteratively added the tool with the next highest value in each step.
LMCHROMA showed a loss with the new common test conditions with chroma QP offsets.
The contribution only tested RA Main 10 classes B-D.
There was a slight difference in configuration relative to JVET-B0022 (TC offset −2 vs. TC offset 0).
Different compilers give different decoding times (decoder runtime figures of about 800 with GCC 4.6.3 vs. about 900 with GCC 5.2).
It was suggested that it would be useful if memory bandwidth and usage could be considered. It
would also be useful if a spreadsheet with raw data could be provided so that parameters can be
changed, such as relative weight between encoder and decoder complexity. It would be useful to
provide a similar graph containing only decoder complexity.
Encoder runtime is also important, at least since it impacts our ability to run simulations.
Two tools have very large increases in as-measured complexity – BIO and FRUC_MERGE.
It was remarked that the BAC_ADAPT_WDOW results may be incorrect because of a software
bug.
It was commented that this measurement of complexity is not necessarily the best measure. It
was suggested that proponents of tools that show high complexity with this measurement provide
some information about the complexity using other implementations. For example, knowledge
that a technique is SIMD friendly, or parallelizable, would be useful.
Tools with high encoder complexity could provide two different encoder algorithms with
different levels of encoder complexity, e.g. a best performing and a faster method.
This was further discussed on Tuesday. The contribution had been updated to provide summary
sheets and enable adjustment of the weighting factor. All raw data had also been provided.
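A minimal sketch of the greedy ordering described above (placeholder tool names and numbers; decode run time weighted 5x more than encode) is:

    # Illustrative sketch: rank tools by coding gain per unit of weighted run-time
    # increase, then enable them cumulatively in that order.
    def weighted_complexity(enc_ratio, dec_ratio, w_dec=5.0, w_enc=1.0):
        # run-time ratios are relative to the anchor, e.g. 1.2 means +20%
        return w_enc * (enc_ratio - 1.0) + w_dec * (dec_ratio - 1.0)

    tools = {  # tool: (luma BD-rate gain in %, enc ratio, dec ratio) -- placeholder values
        "toolA": (1.2, 1.5, 1.02),
        "toolB": (0.6, 1.1, 1.05),
        "toolC": (2.0, 1.6, 2.0),
    }

    def slope(item):
        gain, enc, dec = item[1]
        return gain / max(weighted_complexity(enc, dec), 1e-9)

    ordered = sorted(tools.items(), key=slope, reverse=True)
    enabled = []
    for name, _ in ordered:          # second step: successively enable tools in that order
        enabled.append(name)
        print("enable", name, "-> set:", enabled)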
JVET-B0045 Performance evaluation of JEM 1 tools by Qualcomm [J. Chen, X. Li, F. Zou,
M. Karczewicz, W.-J. Chien (Qualcomm)] [late]
Chaired by J. Boyce.
This contribution evaluates the performance of the coding tools in the JEM1. The coding gain,
encoder and decoder running time of each individual tool in JEM reference software are
provided.
The contribution used HEVC common test conditions, All Intra Class A-E, RA Class A-D, for
LDP, LDB Class B-E. Individual tool on and tool off tests were performed.
The contribution proposed a grouping of tools into 4 categories. The first group was considered
the most suitable for an extension to HEVC.
The proponent requested to have a discussion about the potential for this exploratory work to be
included in a new extension HEVC. (This would be a parent-body matter.)
JVET-B0050 Performance comparison of HEVC SCC CTC sequences between HM16.6 and
JEM1.0 [S. Wang, T. Lin (Tongji Univ.)] [late]
The contributor was not available to present this contribution.
This contribution presents an SCC performance comparison between HM16.6 (anchor) and
JEM1.0 (tested). Seven TGM and three MC sequences from the HEVC SCC CTC were used. HEVC
SCC CTC AI and LB configurations were tested using 50 frames and 150 frames, respectively.
AI YUV BD-rate −3.6%, −4.0%, −3.6% and −4.6%, −4.0%, −3.7% are reported for TGM and
MC sequences, respectively.
LB YUV BD-rate −13.2%, −12.5%, −11.8% and −11.3%, −10.3%, −10.0% are reported for
TGM and MC sequences, respectively.
JVET-B0057 Evaluation of some intra-coding tools of JEM1 [A. Filippov, V. Rufitskiy (Huawei
Technologies)] [late]
Chaired by J. Boyce.
This contribution presents an evaluation of some of JEM1.0 intra-coding tools, specifically: 4-tap
interpolation filter for intra prediction, position dependent intra prediction combination, adaptive
reference sample smoothing and MPI. Simulations include "off-tests" as provided in JVET-B0022 as well as a brief tool efficiency analysis. Tool efficiency is estimated by calculating the ratio of coding gain increase to encoder complexity.
The contribution calculated the "slope" of tools, comparing coding gain with a weighted complexity measure, similar to that used in JVET-B0044, but with a relative weight of 3 for decode vs. encode. This was applied to intra tools in the AI configuration.
Experimental results were similar to those in JVET-B0022.
General Discussion
Comments from the general discussion of the contributions in this area included:

Encourage proponents to provide range of complexity, both highest quality and simpler
encoding algorithm for faster encoding.

In contributions, proponents should disclose any configuration changes that could also be
changed separate from their tool proposal.

The tools in the JEM have not been cross-checked.

Suggest to do some type of cross checking of tools already in the JEM, perhaps through
exploration experiments.
 At this meeting will want to define common test conditions with new sequences.
Further discussion was held and a conclusion reached on Sunday:
Decision (SW): Create an experimental branch of the JEM SW. Candidate tools can be made available for further study within this experimental branch without being adopted into the JEM model. The software coordinators will not maintain this branch, and it will not use bug tracking; it will instead be maintained by the proponents.
JVET-B0073 Simplification of Low Delay configurations for JVET CTC [M. Sychev (Huawei)]
Chaired by J. Boyce.
A simplified test condition is proposed for LDB and LDP configurations to reduce simulation
run-time. Each RAS (Random Access Segment, approximately 1s duration) of the full-length
sequence is proposed to be used for simulation, independent of other RAS, and it was asserted
that the simulation of the full-length sequence can therefore be split into a set of parallel jobs. By using the proposed new test condition, the encoding run-time can be reduced by the parallelization factor.
The contribution provided some experimental results with varying LD encoding configurations,
but not identical to what was being proposed.
It was remarked that with the parallelization of RA encoding and subsampling of the AI intra
frames, the LD cases become the longest sequences to encode.
4
Test material investigation (17)
All contributions in this category were reviewed in a BoG on selection of test material, reported
in JVET-B0076.
JVET-B0024 Evaluation report of SJTU Test Sequences [T. Biatek, X. Ducloux]
JVET-B0025 Evaluation Report of Chimera Test Sequence for Future Video Coding [H. Ko, S.C. Lim, J. Kang, D. Jun, J. Lee]
JVET-B0026 JEM1.0 Encoding Results of Chimera Test Sequence [S.-C. Lim, H. Ko, J. Kang]
JVET-B0027 SJTU 4K test sequences evaluation report from Sejong University [N. U. Kim,
J. W. Choi, G.-R. Kim]
JVET-B0029 Evaluation report of B-Com test sequence (JCTVC-V0086) [O. Nakagami]
JVET-B0030 Comment on test sequence selection [O. Nakagami, T. Suzuki (Sony)]
JVET-B0031 Evaluation report of Huawei test sequence [K. Choi, E. Alshina, A. Alshin, M. Park]
[late]
JVET-B0035 Evaluation Report of Chimera and Huawei Test Sequences for Future Video
Coding [P. Philippe (Orange)]
JVET-B0040 Evaluation Report of Huawei and B-Com Test Sequences for Future Video Coding
[F. Racapé, F. Le Léannec, T. Poirier]
JVET-B0042 Evaluation Report of B-COM Test Sequence for Future Video Coding (JCTVC-V0086) [H. B. Teo, M. Dong]
JVET-B0046 Evaluation report of Netflix Chimera and SJTU test sequences [F. Zou, J. Chen,
X. Li, M. Karczewicz (Qualcomm)] [late]
JVET-B0049 Four new SCC test sequences for ultra high quality and ultra high efficiency SCC
[J. Guo, L. Zhao, T. Lin (Tongji Univ.)] [late]
JVET-B0052 Report of evaluating Huawei surveillance test sequences [C.-C. Lin, J.-S. Tu, Y.-J.
Chang, C.-L. Lin (ITRI)] [late]
JVET-B0053 Report of evaluating Huawei UGC test sequences [J.-S. Tu, C.-C. Lin, Y.-J.
Chang, C.-L. Lin (ITRI)] [late]
JVET-B0055 Netflix Chimera test sequences evaluation report [M. Sychev, H. Chen (Huawei)]
[late]
JVET-B0056 Evaluation report of SJTU Test Sequences from Sharp [T. Ikai (Sharp)] [late]
JVET-B0061 Evaluation report of SJTU test sequences for future video coding standardization
[S.-H. Park, H. Xu, E. S. Jang] [late]
JVET-B0065 Coding results of 4K surveillance and 720p portrait sequences for future video
coding [K. Kawamura, S. Naito (KDDI)] [late]
5
Technology proposals (16)
JVET-B0023 Quadtree plus binary tree structure integration with JEM tools [J. An, H. Huang,
K. Zhang, Y.-W. Huang, S. Lei (MediaTek)]
Chaired by J. Boyce
This contribution reports the integration of the new coding tools in JEM on top of the quadtree
plus binary tree (QTBT) structure. It is reported that around 5% BD-rate saving can be achieved
by the QTBT structure.
At the previous meeting, it was decided to put QTBT in a separate branch, because of the
interaction with other tools. Integration with all but two of the other adopted tools – NSST and
RSAF – has since been done.
JEM 1.0 is based on HM16.6.
The plan is for proponents to integrate QTBT into JEM 1.0. An intermediate step would be to
add the remaining two tools in the separate branch, and then upgrade to HM 16.6. At the next
meeting we will decide if QTBT should be merged into the main branch.
This was discussed again on Sunday. The proponents provided information that the QTBT
software is now based on HM16.6, rather than an earlier version of the HM as had been
discussed on Saturday.
It was agreed to include QTBT in the EE document for testing.
JVET-B0034 Cross-check of JVET-B0023 [E. Alshina, K. Choi, A. Alshin, M. Park, M. Park,
C. Kim] [late]
Simulation results matched, but the cross-checkers did not look at the source code.
JVET-B0028 Direction-dependent sub-TU scan order on intra prediction [S. Iwamura,
A. Ichigaya]
Chaired by J. Boyce
This contribution proposes a change of intra prediction by modifying the sub-TU scan order depending on the intra prediction direction. The proposed method modifies the z-scan order of sub-TUs when the intra prediction direction runs from top-right to bottom-left or from bottom-left to top-right, corresponding to intra prediction modes 2 to 9 or 27 to 34 of the HEVC specification. By this modification of the scan order, the lower/right neighbouring samples can be utilized as reference samples, so that the accuracy of intra prediction is improved. The proposed method is integrated on top of HM16.7, and the experimental result shows −0.2% (Y), −0.4% (U) and −0.3% (V) BD-rate impact for the All Intra configuration with a very slight increase of encoding time.
Experimental results were vs. HM16.7 rather than JEM 1.0. The proponent was encouraged to
provide results vs JEM 1.0.
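The following hypothetical sketch illustrates the general idea of a direction-dependent sub-TU scan; the exact reordering used in JVET-B0028 is not specified in these notes, so the orders below are placeholders:

    # Hypothetical sketch: choose a sub-TU processing order depending on the intra
    # direction so that already-decoded neighbours on the relevant side become available.
    Z_SCAN = [(0, 0), (1, 0), (0, 1), (1, 1)]            # (x, y) of the four sub-TUs

    def sub_tu_scan(intra_mode):
        if 2 <= intra_mode <= 9:                         # predictions referencing the lower-left
            return [(0, 1), (0, 0), (1, 1), (1, 0)]      # placeholder: bottom-left column first
        if 27 <= intra_mode <= 34:                       # predictions referencing the upper-right
            return [(1, 0), (0, 0), (1, 1), (0, 1)]      # placeholder: top-right column first
        return Z_SCAN                                    # all other modes keep z-scan order

    print(sub_tu_scan(5), sub_tu_scan(30), sub_tu_scan(20))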
JVET-B0033 Adaptive Multiple Transform for Chroma [K. Choi, E. Alshina, A. Alshin, M. Park,
M. Park, C. Kim] [late]
Chaired by J. Boyce.
This contribution provides information on the use of Adaptive Multiple Transforms (AMT) for the chroma components. In JEM1.0, the adaptive multiple transform is used for the luma component. It shows good coding performance for the luma component, but some coding loss appears in the chroma components. It is suggested that this is due to the fact that the transform kernel used for the chroma components is different from that of the luma component. This contribution proposes to enable the use of AMT for chroma by aligning with the transform kernel used for the luma component. The result reportedly shows a 1% increase in chroma gain without luma coding loss or an increase in encoding/decoding time.
Further study was encouraged.
JVET-B0038 Harmonization of AFFINE, OBMC and DBF [H. Chen, S. Lin, H. Yang]
Chaired by J. Boyce.
In this contribution, a change of AFFINE (affine transform prediction), OBMC (overlapped
block motion compensation) and DBF (de-blocking filter) is proposed. Simulations reportedly
show that 0.15%, 0.15% and 0.14% luma BD-rate reduction can be achieved for RA, LDB and
LDP configurations, respectively. More than 1% coding gain can reportedly be obtained for
some affine-featured sequences.
The decoder time increases, to 118% on average, for sequences with rotation. An increase in worst-case complexity is not expected. The sequences with the biggest coding gains show the biggest increase in decoding time.
Decision: Adopt to JEM the proposal to harmonize AFFINE with OBMC and deblocking filter.
Also add to common test conditions.
JVET-B0041 Adaptive reference sample smoothing simplification [A. Filippov, V. Rufitskiy
(Huawei Research)] [late]
Chaired by J. Boyce.
This contribution presents a simplification of the RDO procedure used by the adaptive reference
sample smoothing (ARSS) filter, i.e. a non-normative modification of ARSS is considered. The
results of different tests are presented and analyzed. The proposed scheme provides a better ratio
of the coding gain to encoder-side computational complexity.
The presentation was later uploaded in a revision of the contribution.
Two non-normative encoder changes are proposed, which provide simplification.
Future study was encouraged for interaction with other tools.
Decision (SW): Adopt to the JEM SW encoder the proposed simplifications, #1a and #2, to the
RDO decision of the ARSS. Also add to common test conditions.
JVET-B0043 Polyphase subsampled signal for spatial scalability [E. Thomas (TNO)]
Chaired by J. Boyce.
This contribution presents a technique to enable spatial scalability of a video bitstream. The
video signal (luma and chroma components) is subsampled to a lower resolution, possibly using
polyphase subsampling. Multiple lower-resolution versions of the signal, called resolution components in this contribution, are encoded and transported in the same video bitstream. On the decoder side, the more resolution components are decoded, the higher the output resolution is.
The contribution is related to multiple-description coding. A high-resolution frame is decomposed into 4 lower-resolution frames. Two options were presented: temporal multiplexing with inter-layer prediction, or spatial multiplexing with frame packing, which could use tiles.
No experimental results were available yet.
Further study was encouraged. It would be interesting to compare this scheme to SHVC, both for
compression efficiency and for use case and complexity.
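A minimal sketch of the underlying polyphase decomposition (illustrative only, operating on a numpy array standing in for a luma plane) is:

    # Illustrative sketch: split a frame into four quarter-resolution components by 2x2
    # phase; interleaving all decoded components restores the full-resolution grid.
    import numpy as np

    def polyphase_split(frame):
        return [frame[0::2, 0::2], frame[0::2, 1::2],
                frame[1::2, 0::2], frame[1::2, 1::2]]

    def polyphase_merge(components, height, width):
        out = np.zeros((height, width), dtype=components[0].dtype)
        out[0::2, 0::2], out[0::2, 1::2] = components[0], components[1]
        out[1::2, 0::2], out[1::2, 1::2] = components[2], components[3]
        return out

    frame = np.arange(16, dtype=np.uint8).reshape(4, 4)
    parts = polyphase_split(frame)
    assert np.array_equal(polyphase_merge(parts, 4, 4), frame)   # lossless when all 4 are decoded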
JVET-B0047 Non Square TU Partitioning [K. Rapaka, J. Chen, L. Zhang, W.-J. Chien,
M. Karczewicz (Qualcomm)] [late]
Chaired by J. Boyce.
This contribution proposes non-square TU partitioning for intra and inter prediction modes. Two
partition types (2NxN and Nx2N) are added for intra mode. For non-square partitions, a binary
split is allowed at the root level (level 0) for intra and inter prediction modes. Further TU
splitting processing follows the HEVC mechanism. It is reported that the proposed method
provides 1.5%, 1.0%, 0.7%, 0.8% BD-rate saving for AI, RA, LDB and LDP configurations
respectively over HM 16.6.
Results are compared with HM 16.6 rather than JEM 1.0. It was not tested vs QTBT.
The minimum TU size is 4x8.
Experimental results vs. the JEM are requested.
Decision (SW): Make SW available in experimental SVN branch of JEM software. Include in
the EE document for testing.
JVET-B0068 Cross-check of Non Square TU Partitioning (JVET-B0047) [O. Nakagami (Sony)]
[late]
Not all simulations were finished, but those that were finished showed a match. The cross-checker did not study the source code.
JVET-B0048 Universal string matching for ultra high quality and ultra high efficiency SCC
[L. Zhao, K. Zhou, J. Guo, S. Wang, T. Lin (Tongji Univ.)] [late]
Chaired by J. Boyce.
"Universal string matching" (USM) is proposed for screen content coding. USM is integrated
with HM16.6-JEM1.0 to get HM16.6-JEM1.0USM. Using HM16.7-SCM6.0 as anchor with
HEVC-SCC CTC sequences and coding configurations (full frame IBC version), it is reported
that HM16.6-JEM1.0USM with all JEM specific tools turned off (i.e. equivalent to
HM16.6USM) has −4.8%, −4.4%, −4.2% BD-rate for YUV TGM AI lossy coding. Encoding
time ratio is 86%, i.e. 14% decrease from SCM5.2. Moreover, using four newly proposed screen
content test sequences and the same HEVC-SCC CTC coding configurations, Y BD-rate is
reported as −33.8%, −24.4%, −25.7%, and −23.3% for the four sequences ClearTypeSpreadsheet,
BitstreamAnalyzer, EnglishDocumentEditing, and ChineseDocumentEditing, respectively
(proposed in JVET-B0049), resulting in an average of −26.8% Y BD-rate. Furthermore,
replacing qp=(22, 27, 32, 37) with qp=(7, 12, 17, 22) and keeping other HEVC-SCC CTC coding
configurations unchanged, it is reported that the BD-rate becomes −6.4%, −6.2%, −6.0% for
YUV TGM sequences in HEVC-SCC CTC and −48.0%, −29.8%, −28.2%, and −26.6% for Y of
the four sequences, resulting in an average of −33.1% Y BD-rate.
The contribution proposes new SCC test sequences for JVET.
The "universal string matching" tool is proposed. Compared to what had been studied when tool
was proposed in prior SCC work, additional complexity constraints are imposed.
Experimental results were based on comparing Tested: HM16.6+JEM1.0+USM (JEM macros
off) to Anchor: HM16.7+SCM6.0 using full frame IBC. More gains were reported at higher bit
rates. Much higher gains were reported on proposed SCC sequences vs. the SCC CTC sequences.
It was initially planned to discuss this further once the test sequences were available, and to consider including it in the experimental SVN branch if the new CTC contained screen content sequences. However, as no such action was taken (see BoG report B0076), this plan became obsolete.
JVET-B0051 Further improvement of intra coding tools [S.-H. Kim, A. Segall (Sharp)] [late]
Chaired by J. Boyce.
This contribution proposes changes to the intra-coding process in JEM 1.0, with specific
emphasis on (i) extended intra prediction directions (ii) non-separable secondary transform
(NSST). The changes are as follows: First, when extended intra prediction directions are enabled,
the contribution proposes to signal the non-MPM modes by first sub-dividing the non-MPM
modes into two mode sets, and then signalling these modes sets with different binarizations.
Second, the contribution proposes an alternative method for signalling the NSST index
(NSST_idx). Specifically, instead of using two binarization methods based on intra prediction
mode and partition size, the proposal codes NSST_idx with a unified binarization method and
adjusts the context model to reflect the statistics of the index based on the intra prediction mode
and partition size. Finally, the contribution proposes to allow for NSST and PDPC to be enabled
in the same PU. Using the above mentioned changes, it is reported that an improvement of
0.35% and 0.19% luma BD-rate savings is observed for AI and RA configurations, respectively.
NSST and PDPC are not allowed to be combined in the current JEM. It would be possible to
allow the decoder to combine them without enabling the combination in the default encoder
configuration.
There was a significant increase in encoding time when the features are combined.
Decision: Adopt non-MPM mode coding in two mode sets. Also add to common test conditions.
Further study of other aspects was encouraged, and it was suggested to consider the interaction
with B0059.
Decision (SW): Add unified binarization for NSST index and independent coding between
PDPC and NSST index to the experimental SVN branch. Include in the EE list.
Proponents are asked to make experimental results available for each aspect individually.
JVET-B0054 De-quantization and scaling for next generation containers [J. Zhao, A. Segall, S.H. Kim, K. Misra (Sharp)] [late]
Chaired by J. Boyce.
This contribution proposes a change in the de-quantization and scaling process in JEM 1.0. For
background, the change was said to be motivated by recent work in MPEG, where it has been
shown that next generation "containers", such as ST-2084, re-shape quantization noise as a
function of brightness. For current standards, this requires encoders to compensate by spatially
varying the QP in an inverse manner. Here, it is proposed that a next-generation standards
decoder could infer the needed compensation without significant QP signalling. This is
accomplished by adapting the scaling of AC coefficients based on the DC coefficient and
reconstructed prediction mean. Experiments performed using sequences under study in MPEG
(and now JCT-VC) reportedly show a gain of 2.0% for both AI and RA configuration when the
QP granularity is 16x16.
As proposed, the decoder infers a QP adjustment based on average luma values. Delta QP can
also be signalled.
A LUT is signalled on a per-sequence basis. It was suggested that we would want to give
encoder the ability to disable the feature.
Only AC coefficients are affected, and not the DC coefficients.
The proponent suggested that SDR content may also use the ST-2084 container, especially for
services with a mix of SDR and HDR content.
Decision (SW): Make SW available in experimental SVN branch of JEM software. Include in
the EE document for testing. Also include the luma QP adjustment encoder functionality.
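A hypothetical sketch of the decoder-side inference described above is given below; the actual LUT contents, binning and scaling rule of JVET-B0054 are not given in these notes, so the values here are placeholders:

    # Hypothetical sketch: estimate block brightness from the DC coefficient and the
    # reconstructed prediction mean, look up a signalled per-sequence table, and scale
    # the de-quantized AC coefficients accordingly (DC left untouched), so no spatially
    # varying delta QP has to be signalled for ST-2084 content.
    def scale_ac_coefficients(dequant_coeffs, prediction_mean, brightness_lut):
        dc = dequant_coeffs[0]
        brightness = prediction_mean + dc                  # rough estimate of the block's mean level
        idx = min(len(brightness_lut) - 1, max(0, int(brightness) >> 6))   # placeholder binning
        scale = brightness_lut[idx]
        return [dc] + [c * scale for c in dequant_coeffs[1:]]

    # Placeholder LUT: stronger AC scaling in dark regions, weaker in bright regions.
    lut = [1.4, 1.2, 1.1, 1.0, 1.0, 1.0, 1.0, 1.0, 0.9, 0.9, 0.9, 0.9, 0.85, 0.85, 0.85, 0.85]
    print(scale_ac_coefficients([100, 8, -4, 2], prediction_mean=120, brightness_lut=lut))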
JVET-B0058 Modification of merge candidate derivation [W.-J. Chien, J. Chen, S. Lee,
M. Karczewicz (Qualcomm)] [late]
Chaired by J. Boyce.
In this contribution, modifications to the merge candidate derivation are proposed, including
higher motion resolution, POC-based merge candidate pruning, simplification of the advanced
temporal motion vector predictor, and derivation of the spatio-temporal motion vector predictor.
With the proposed modifications, an additional 0.6%, 0.6%, and 2.3% BD rate reduction over
HM16.6 is reportedly achieved for random access, low delay B, and low delay P configurations,
respectively.
Four different modifications are proposed, but only combined performance results are available,
not individual tool performance numbers.
It was proposed to move from 1/8 pel to 1/16 pel storage and merge candidate accuracy. No
additional storage or memory bandwidth vs. HEVC is used, but the allowable range is decreased
to ~4k.
This was further discussed Tuesday AM (chaired by JRO and JB). It was reported that the higher precision of MV storage increases the gain by 0.0/0.3/0.4% for the RA/LDB/LDP cases.
It was asked if the gain will be reduced once the harmonization of the affine with OBMC is
included in the JEM. The gain of this aspect is greater than the affine OBMC harmonization.
Decision: Adopt the 1/16 pel motion vector storage accuracy to the JEM and the common test
conditions.
It was suggested that individual "tool on" experiments only test this with ATMVP turned on, because the two would show more benefit together.
Decision (SW): Add to the experimental branch the pruning and ATMVP simplification. Include
in the EE document for testing. Separate experimental results for each proposed aspect should be
provided.
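The range implication of the higher-precision storage can be illustrated with a small calculation (assuming 16-bit signed storage per MV component):

    # Illustrative arithmetic: with the same 16-bit signed storage per MV component,
    # moving from 1/8-pel to 1/16-pel units halves the representable displacement range,
    # consistent with the ~4K limitation noted above.
    STORAGE_BITS = 16

    def mv_range_in_luma_samples(frac_bits):
        max_units = 2 ** (STORAGE_BITS - 1) - 1          # largest signed stored value
        return max_units / float(1 << frac_bits)         # convert MV units to luma samples

    print(mv_range_in_luma_samples(3))   # 1/8-pel  -> about +/-4095 luma samples
    print(mv_range_in_luma_samples(4))   # 1/16-pel -> about +/-2047 luma samples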
JVET-B0066 Cross-check of JVET-B0058: Modification of merge candidate derivation [H. Chen,
H. Yang (Huawei)] [late]
JVET-B0059 TU-level non-separable secondary transform [X. Zhao, A. Said, V. Seregin,
M. Karczewicz, J. Chen, R. Joshi (Qualcomm)] [late]
Chaired by J. Boyce.
In this contribution, a TU-level non-separable secondary transform (NSST) is proposed.
Compared to the current CU-level NSST design in JEM-1.0 software, the proposed method
speeds up the encoder by reducing the number of rate-distortion checks and using transform-domain distortion estimation. With the proposed TU-level NSST, an average 44% overall encoder run-time reduction is reportedly achieved over JEM-1.0 for all intra (AI), and an average
BD-rate improvement of 0.1% is reportedly achieved for the luminance component for both AI
and random access (RA) configurations.
Replacing the CU-level NSST with the TU-level NSST provides an encoder speedup of 44%, plus a small coding gain in luma and a coding loss in chroma.
A new Hypercube-Givens Transform (HyGT), which has a butterfly-like structure, is also proposed and used in the computation of the secondary transform. Separate experimental results are
not given for the two aspects, which would be preferable.
A small increase in decoder runtime was seen, but it is unclear why.
Further study was encouraged, and it was requested to consider the interaction with B0051.
This was further discussed on Wednesday.
Decision (SW): Make SW available in experimental SVN branch of JEM software. Include in
the EE document for testing. Proponents were asked to provide separate experimental results for
each aspect individually.
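A minimal sketch of a hypercube-Givens (butterfly-like) transform of the kind described is shown below; the rotation angles are arbitrary placeholders, not the trained NSST parameters:

    # Illustrative sketch: in each round, coefficients whose indices differ in one bit are
    # paired and rotated by a 2x2 Givens rotation, giving an FFT-like butterfly structure.
    import math

    def hygt(vec, angles_per_round):
        n = len(vec)                          # power of two, e.g. 16 for a 4x4 sub-block
        out = list(vec)
        for r, angles in enumerate(angles_per_round):
            step = 1 << r
            pair = 0
            for i in range(n):
                if i & step:
                    continue                  # handle each butterfly pair (i, i + step) once
                j = i + step
                c, s = math.cos(angles[pair]), math.sin(angles[pair])
                a, b = out[i], out[j]
                out[i], out[j] = c * a - s * b, s * a + c * b
                pair += 1
        return out

    # Example: a 4-point transform with two rounds of two rotations each (placeholder angles).
    print(hygt([1.0, 2.0, 3.0, 4.0], [[0.3, 0.7], [1.1, 0.2]]))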
JVET-B0060 Improvements on adaptive loop filter [M. Karczewicz, L. Zhang, W.-J. Chien
(Qualcomm)] [late]
Chaired by J. Boyce.
In this contribution, several changes to the adaptive loop filter (ALF) in HM16.6 JEM-1.0 are
proposed. Three main introduced modifications are: classification with the diagonal gradients
taken into consideration, geometric transformations of filter coefficients, and prediction from
fixed filters. In addition, some cleanups of the software are also included. With the proposed
methods, the coding performance of HM16.6 JEM-1.0 is reportedly improved by 1.1% and 1.1%
on average for all intra (AI) and random access (RA) configurations, respectively, when all the
tools are enabled and by 1.5% and 1.5% on average for AI and RA configurations, respectively,
when only ALF is enabled. The overall performance improvement of ALF, compared to HM16.6,
reportedly reaches 4.0% and 6.0%, on average for AI and RA, respectively.
The HM 3.0 version of the ALF was put into the JEM at an earlier stage. Changes are proposed
with respect to the HM 3.0 version of ALF. It was commented that later versions of the HM (HM
8) had made changes to ALF vs. the HM 3.0 version.
The contribution proposes 25 classes rather than the 15 classes in the older design.
The proposal avoids use of temporal prediction in I frames.
Software cleanups were also proposed.
Simulation results were only provided for the combination of changes. Some of the changes are
inherently grouped together, but some could be separated. It was specifically requested to
provide test results with the fixed filters on and off. The impact of aligning the chroma filter shape with the luma filter shape was also requested.
Decision (SW): Make SW available in experimental SVN branch of JEM software. Include in
the EE document for testing.
JVET-B0069 Crosscheck of the improvements on ALF in JVET-B060 [C.-Y. Chen, Y.-W. Huang
(MediaTek)] [late]
Contribution noted.
JVET-B0070 Cross-check of JVET-B0060 [B. Li, J. Xu (Microsoft)] [late]
Contribution noted.
6 Joint Meetings, BoG Reports, and Summary of Actions Taken
6.1 General
The setup of Exploration Experiments was discussed, and an initial draft of the EE document
was reviewed in the plenary (chaired by JRO). This included the list of all tools that are intended
to be investigated in EEs during the subsequent meeting cycle:
• EE1: Quad-tree plus binary-tree (QTBT)
• EE2: Non Square TU Partitioning
• EE3: NSST and PDPC index coding
• EE4: De-quantization and scaling for next generation containers
• EE5: Improvements on adaptive loop filter
• EE6: Modification of Merge candidate derivation
• EE7: TU-level non-separable secondary transform
It was agreed to give the editors the discretion to finalize the document during the two weeks
after the meeting, and circulate/discuss it on the reflector appropriately.
6.2 Joint meetings
A joint meeting of the parent bodies was held on Monday February 23 at 1800 in which future
video coding was discussed. In that discussion, it was noted for clarification that JVET is tasked
with exploring potential compression improvement technology, regardless of whether it differs
substantially from HEVC or not. Discussions of converting that work into a formal
standardization project, determining whether that would be a new standard or not, timelines for
standardization, profiles, etc., belong in the parent bodies rather than in JVET.
A requirements input VCEG-BA07 / m37709 from Nokia was noted on video for virtual reality
systems. Other requirements issues and contributions relating to future video coding
requirements are also under consideration in the parent bodies.
See the corresponding reports of the parent-body meetings and the JCT-VC meeting for additional discussions of potential relevance.
6.3 BoGs
JVET-B0074 Report of BoG on parallel encoding and removal of cross-RAP dependencies
[K. Suehring, H. Yang (BoG coordinators)]
In contribution JVET-B0036, independent coding of test sequence segments was proposed. It
was discovered that the JEM software encoder uses information from the coded part of the
sequence to make decisions in and after the next IRAP picture.
To enable independent encoding, these dependencies need to be removed. Contribution JVET-B0036 lists the following macros that cause dependencies across random access points at the encoder:
• CABAC_INIT_PRESENT_FLAG
• VCEG_AZ07_INIT_PREVFRAME
• COM16_C806_ALF_TEMPPRED_NUM
There also exists a dependency related to SAO encoding.
Some of the issues had already been resolved.
The BoG planned to meet again.
If we decide to keep the AI conditions to include all frames in the sequence, we should apply
parallel encoding to the AI case.
It was asked if we should sub-sample the intra frames in the AI case for the common test
conditions. We may want to have different common test conditions for objective results than for
subjective testing. It would be possible to reuse the intra frames from the RA case. It was
suggested that there may be overtraining of intra proposals to particular frames.
Decision (SW): Use a factor of 8 for subsampling of intra frames in common test conditions.
This was further discussed on Wednesday.
Patches for all issues have been provided. No problems were seen in the experiments with the
common test conditions that had been run so far.
Several questions regarding common test conditions were raised.
A concern was expressed that, especially for small-resolution sequences, segment-wise decoding may lead to accumulated inaccuracies in the measured decoding time when segments are decoded in parallel.
Concatenating the segments while removing the additionally coded frame is not directly possible, because the POC values would become incorrect, so the concatenated stream will have to include the extra frame. It was suggested that fixing the POC values would be possible by rewriting the slice segment headers.
No action was taken at this time on that, but further study was encouraged.
The decoding time can be corrected by subtracting the reported decoding time of the duplicated
frames. It was agreed to correct decoding times in this way.
Tests for JEM showed that for Classes A-D there was not much impact. It was agreed that it is
OK to use parallelization for decoding. It was also agreed that for run time calculations, either
parallel or not can be used, but the same method should be used consistently for the anchor and
tested cases.
For using temporally subsampled sequences for AI as decided by JVET this morning, either the
software needs to be modified to allow subsampling or subsampled sequences need to be
provided. It was suggested that the software coordinators can take care of patching the software
to provide this feature. It was agreed that for AI configuration with frame subsampling, software
coordinators will create a configuration option to enable this type of encoding using the original
sequences.
The question was raised of how to calculate the bit rate for the subsampled sequences (i.e. using the original frame rate or the lower frame rate of the actually coded frames). It was agreed that the bit rate reporting should reflect the lower frame rate of the actually coded frames.
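A minimal sketch of that agreed bit-rate computation (variable names are illustrative, not taken from the common-conditions reporting spreadsheet):

# Bit rate reported for a temporally subsampled all-intra encoding,
# using the reduced frame rate of the frames that were actually coded.
def reported_bitrate_kbps(total_bits, coded_frames, source_fps, subsample_factor=8):
    coded_fps = source_fps / subsample_factor      # e.g. 60 fps -> 7.5 fps
    duration_s = coded_frames / coded_fps          # time spanned by the coded frames
    return total_bits / duration_s / 1000.0

# Example: 38 coded frames (300 source frames subsampled by 8) of a 60 fps sequence
print(reported_bitrate_kbps(total_bits=2_400_000, coded_frames=38, source_fps=60.0))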
It was suggested to use IRAP pictures in the AI test conditions to enable splitting. IDR use would
not require any duplicate encoding, but the frames won't have different POC values. An IDR
period may solve the issue. No action was taken on this point at this meeting.
Initial verification results were provided with the BoG report, as well as the software patches.
JVET-B0075 Report on BoG on informal subjective viewing related to JVET-B0039
[K. Andersson, E. Alshina (BoG coordinators)]
This document provides a report on informal subjective viewing of the proposed extension of the
hierarchy for random access coding to 32 together with the change of QP alignment with lambda
for common conditions and software as proposed in JVET-B0039. The outcome was that the
proposal applied to HM-16.6 looked similar to or better than HM-16.6 at a lower bit rate (on average 10% lower).
About 9 people attended the informal subjective viewing. No concerns were raised about visual
pumping during the subjective viewing. Some viewers suggested that intra pumping may be
reduced with the larger GOP size, with smoother temporal characteristics.
Memory consumption with increased GOP size was studied. Approximately a factor of 2
memory usage increase is needed for GOP 32 vs GOP 8. The usage is up to 5.7 GB for GOP size
32, 3.3 GB for GOP size 16, 3 GB for GOP size 8 for Class A. 4k sequences would require more
memory. It was remarked that SCC has a memory increase as well.
Configuration files should be provided for GOP 16 and GOP 32, regardless of whether this is
included in the common test conditions.
With a GOP hierarchy length of 16, what should we do with the intra period, especially for 24
fps sequences? It was suggested that we should use 32 for the intra period for 24 fps content.
Decision (SW): Use GOP 16 in the common test conditions. For anchor comparison with HEVC,
we should also use GOP 16 in the HM. For 24 fps sequences, use an intra period of 32.
JVET-B0076 Report of BoG on selection of test material [T. Suzuki, J. Chen (BoG coordinators)]
The BoG selected some sequences for viewing.
We would like to select about 4 sequences per category, after viewing. A goal was to replace
Class A at this meeting.
Selection focused on the application domain category. Categories are: Moving vehicle,
surveillance, sports, TV/movie, people, high frame rate, and texture.
25 sequences were identified for viewing. It was agreed to put a priority on QP37, because of
hard drive capacity limit in the viewing room equipment. Only 8 bit viewing was available with
this equipment.
Due to contention for viewing room availability, it was agreed to look into moving the
equipment to another room for the viewing session Tuesday afternoon.
New sequences were proposed for screen content coding. The JEM doesn't include the SCC
tools, so it was agreed to defer consideration of such sequences until the next meeting, since the
priority for this meeting is replacing the Class A sequences.
Further discussed on Wednesday.
Two new class A categories with different characteristics were proposed:
• A1: People: Tango, Drums (100), CampfireParty, ToddlerFountain
• A2: Others: CatRobot, TrafficFlow, DaylightRoad, RollerCoaster
It was asked whether the number of frames should be reduced, and agreed to use 300 frames to
encode, regardless of frame rate.
The suggested sequences are mostly 50 or 60 fps, but Drums is 100 fps, and CampfireParty and
TrafficFlow are 30 fps.
Class A isn't tested with LD in the current common test conditions.
All discussed candidate sequences are 4:2:0, 10 bit.
It was remarked that the Drums sequence is 100 fps and with the parallel encoding, it can only be
split into 3 parallel segments.
For RollerCoaster, it was agreed to start 600 frames into the sequence. A new sequence will be
created starting at the offset position and given a slightly different name.
Agreed on Wednesday was to replace the prior Class A sequences with 8 new sequences in 2
categories as listed above, 300 frames each.
Participants are encouraged to make recommendations at the next meeting to replace sequences
in other categories, possibly by downsampling higher-resolution sequences.
BoG on Call for Test Material [A. Norkin (BoG coordinator)]
A BoG on a Call for Test Material (coordinated by A. Norkin) was held Thursday morning. The
result of this BoG activity, after its review and refinement at the meeting, is the output document
JVET-B1002. The following types of content were initially identified as currently missing: HDR,
Sports, Gaming, High/complex Motion, UGC, panoramic, VR, Nature, and 8K.
6.4 List of actions taken affecting the JEM2
The following is a summary, in the form of a brief list, of the actions taken at the meeting that
affect the text of the JEM2 description. Both technical and editorial issues are included. This list
is provided only as a summary – details of specific actions are noted elsewhere in this report and
the list provided here may not be complete and correct. The listing of a document number only
indicates that the document is related, not that it was adopted in whole or in part.
• Encoder only
  o JVET-B0036: IRAP-level parallel encoding; AI subsampling
  o JVET-B0039: Cfg: GOP16; QP lambda change
  o JVET-B0041: Simplification #1a and #2
• Normative change (i.e., affecting the bitstream format or output of the decoding process)
  o JVET-B0038: Harmonization of AFFINE, OBMC and DBF
  o JVET-B0051: non-MPM mode coding
  o JVET-B0058: 1/16 pel motion vector storage accuracy

7 Project planning
7.1 JEM description drafting and software
The following agreement has been established: the editorial team has the discretion to not
integrate recorded adoptions for which the available text is grossly inadequate (and cannot be
fixed with a reasonable degree of effort), if such a situation hypothetically arises. In such an
event, the text would record the intent expressed by the committee without including a full
integration of the available inadequate text.
7.2 Plans for improved efficiency and contribution consideration
The group considered it important to have the full design of proposals documented to enable
proper study.
Adoptions need to be based on properly drafted working draft text (on normative elements) and
HM encoder algorithm descriptions – relative to the existing drafts. Proposal contributions
should also provide a software implementation (or at least such software should be made
available for study and testing by other participants at the meeting, and software must be made
available to cross-checkers in CEs).
Suggestions for future meetings included the following generally supported principles:
• No review of normative contributions without draft specification text
• JEM text is strongly encouraged for non-normative contributions
• Early upload deadline to enable substantial study prior to the meeting
• Using a clock timer to ensure efficient proposal presentations (5 min) and discussions
The document upload deadline for the next meeting was planned to be Monday 16 May 2016.
As general guidance, it was suggested to avoid usage of company names in document titles,
software modules etc., and not to describe a technology by using a company name.
7.3 General issues for Experiments
Group coordinated experiments have been planned. These may generally fall into one category:
• "Exploration experiments" (EEs) are coordinated experiments on coding tools which are deemed to be interesting but require more investigation and could potentially become part of the main branch of the JEM by the next meeting.
• A description of each experiment is to be approved at the meeting at which the experiment plan is established. This should include the issues that were raised by other experts when the tool was presented, e.g. interference with other tools, contribution of different elements that are part of a package, etc. (E. Alshina will edit the document based on input from the proponents; review is performed in the plenary.)
• Software for tools investigated in an EE is provided in a separate branch of the software repository.
• During the experiment, further improvements can be made.
• By the next meeting it is expected that at least one independent party will report a detailed analysis of the tool, confirm that the implementation is correct, and give reasons to include the tool in the JEM.
• As part of the experiment description, it should be captured whether performance relative to the JEM as well as to the HM (with all other tools of the JEM disabled) should be reported by the next meeting.
It is possible to define sub-experiments within particular EEs, for example designated as EEX.a,
EEX.b, etc., where X is the basic EE number.
As a general rule, it was agreed that each EE should be run under the same testing conditions
using one software codebase, which should be based on the JEM software codebase. An
experiment is not to be established as an EE unless the participants in (any part of) the EE are given access to the software used to perform the experiments.
The general agreed common conditions for single-layer coding efficiency experiments are
described in the output document JVET-B1010.
Experiment descriptions should be written in such a way that they read as a JVET output document (written from an objective "third party" perspective, not a company proponent perspective, e.g. not referring to methods as "improved", "optimized", etc.). The experiment
descriptions should generally not express opinions or suggest conclusions – rather, they should
just describe what technology will be tested, how it will be tested, who will participate, etc.
Responsibilities for contributions to EE work should identify individuals in addition to company
names.
EE descriptions should not contain excessively verbose descriptions of a technology (at least not unless the technology is not adequately documented elsewhere). Instead, the EE descriptions should refer to the relevant proposal contributions for any necessary further detail. However, the complete detail of what technology will be tested must be available, either in the EE description itself or in referenced documents that are also available in the JVET document archive.
Any technology must have at least one cross-check partner to establish an EE; a single proponent is not enough. It is highly desirable to have more than just one proponent and one cross-checker.
Some agreements relating to EE activities were established as follows:
• Only qualified JVET members can participate in an EE.
• Participation in an EE is possible without a commitment of submitting an input document to the next meeting.
• All software, results, and documents produced in the EE should be announced and made available to all EE participants in a timely manner.
This was further discussed Tuesday AM, chaired by JRO and J. Boyce.
A separate branch under the experimental section will be created for each new tool included in the
EE. The proponent of that tool is the gatekeeper for that separate software branch. (This differs
from the main branch of the JEM, which is maintained by the software coordinators.)
New branches may be created which combine two or more tools included in the EE document or
the JEM. Requests for new branches should be made to the software coordinators.
We don't need to formally name cross-checkers in the EE document. To promote a tool to the JEM at the next meeting, we would like to see comprehensive cross-checking done, with analysis confirming that the description matches the software, and a recommendation on the value of the tool given its trade-offs.
Timeline:
T1 = JEM2.0 SW release + 4 weeks: Integration of all tools into the separate EE branch of the JEM is completed and announced on the JVET reflector. Initial study by cross-checkers can begin. Proponents may continue to modify the software in this branch until T2. Third parties are encouraged to study the tools and make contributions to the next meeting with proposed changes.
T2 = JVET-C meeting start – 3 weeks: Any changes to the exploration branch software must be frozen, so that the cross-checkers know exactly what they are cross-checking. An SVN tag should be created at this time and announced on the JVET reflector.
This procedure was agreed on Tuesday.
Common test conditions:
• Intra-frame sub-sampling by a factor of 8
• Parallel encoding of RA
• Replacing the Class A sequences (see the notes for BoG report JVET-B0076); other sequences are maintained for this meeting cycle
• Tools not currently included in the main branch are QTBT and signal dependent transforms; a tool can be in the main branch without being enabled in the common test conditions
• QTBT should be included as an EE, but is not included in the common test conditions defined at this meeting
• Signal dependent transform usage is not enabled in the common test conditions defined at this meeting
The above common test condition characteristics were agreed on Tuesday.
7.4 Software development
Software coordinators will work out the detailed schedule with the proponents of adopted
changes.
Any adopted proposals where software is not delivered by the scheduled date will be rejected.
The planned timeline for software releases was established as follows:
• JEM2 will be released within 2 weeks (2016-03-11)
• The results about coding performance will be reported by 2016-03-18

8 Output documents and AHGs
The following documents were agreed to be produced or endorsed as outputs of the meeting.
Names recorded below indicate the editors responsible for the document production.
JVET-B1000 Meeting Report of the 2nd JVET Meeting [G. J. Sullivan, J.-R. Ohm] [2016-05-01]
(near next meeting)
JVET-B1001 Algorithm description of Joint Exploration Test Model 2 (JEM2) [J. Chen,
E. Alshina, G. J. Sullivan, J.-R. Ohm, J. Boyce] [2016-03-25]
Based on JVET-B0021 plus the new adoptions:
• Encoder only
  o JVET-B0036: IRAP-level parallel encoding; AI subsampling
  o JVET-B0039: Cfg: GOP hierarchy length 16; QP lambda change
  o JVET-B0041: Simplification #1a and #2
• Normative change
  o JVET-B0038: Harmonization of AFFINE, OBMC and DBF
  o JVET-B0051: non-MPM mode coding
  o JVET-B0058: 1/16 pel motion vector storage accuracy
JVET-B1002 Call for test materials for future video coding standardization [A. Norkin, H. Yang,
J.-R. Ohm, G. J. Sullivan, T. Suzuki] [2016-03-05]
JVET-B1010 JVET common test conditions and software reference configurations [K. Suehring,
X. Li] [2016-03-11]
JVET-B1011 Description of Exploration Experiments on coding tools [E. Alshina, J. Boyce, S.-H.
Kim, L. Zhang, Y.-W. Huang] [2016-03-11]
Tool evaluation (AHG1) (jvet@lists.rwth-aachen.de) | Chairs: E. Alshina, M. Karczewicz (co-chairs) | Mtg: N
• Coordinate the exploration experiments.
• Investigate interaction of tools in JEM and exploration experiment branches.
• Discuss and evaluate methodologies and criteria to assess the benefit of tools.
• Study and summarize new technology proposals.

JEM algorithm description editing (AHG2) (jvet@lists.rwth-aachen.de) | Chairs: J. Chen (chair), E. Alshina, J. Boyce (vice chairs) | Mtg: N
• Produce and finalize JVET-B1001 Algorithm Description of Joint Exploration Test Model 2.
• Gather and address comments for refinement of the document.
• Coordinate with the JEM software development AHG to address issues relating to mismatches between software and text.

JEM software development (AHG3) (jvet@lists.rwth-aachen.de) | Chairs: X. Li, K. Suehring (co-chairs) | Mtg: N
• Coordinate development of the JEM2 software and its distribution.
• Produce documentation of software usage for distribution with the software.
• Prepare and deliver the JEM2 software version and the reference configuration encodings according to the JVET-B1010 common conditions.
• Suggest configuration files for additional testing of tools.
• Coordinate with the AHG on JEM model editing and errata reporting to identify any mismatches between software and text.
• Investigate parallelization for speedup of simulations.

Test material (AHG4) (jvet@lists.rwth-aachen.de) | Chairs: T. Suzuki (chair), J. Chen, J. Boyce, A. Norkin (vice chairs) | Mtg: N
• Maintain the video sequence test material database for development of future video coding standards.
• Identify and recommend appropriate test materials and corresponding test conditions for use in the development of future video coding standards.
• Identify missing types of video material, solicit contributions, collect, and make available a variety of video sequence test material.
• Study coding performance and characteristics in relation to video test materials.

Objective quality metrics (AHG5) (jvet@lists.rwth-aachen.de) | Chairs: P. Nasiopoulos, M. Pourazad (co-chairs) | Mtg: N
• Study metrics which are useful to evaluate the quality of video compression algorithms with a closer match to human perception.
• Collect and make available software implementing the computation of such metrics.

9 Future meeting plans, expressions of thanks, and closing of the meeting
Future meeting plans were established according to the following guidelines:
• Meeting under ITU-T SG 16 auspices when it meets (starting meetings on the Thursday of the first week and closing them on the Tuesday or Wednesday of the second week of the SG 16 meeting – a total of 6–6.5 meeting days), and
• Otherwise meeting under ISO/IEC JTC 1/SC 29/WG 11 auspices when it meets (starting meetings on the Saturday prior to such meetings and closing them on the last day of the WG 11 meeting – a total of 6.5 meeting days).
Some specific future meeting plans (to be confirmed) were established as follows:
• Thu. 26 May – Wed. 1 June 2016, 3rd meeting under ITU-T auspices in Geneva, CH
• Sat. 15 – Fri. 21 Oct. 2016, 4th meeting under WG 11 auspices in Chengdu, CN
• Thu. 12 – Wed. 18 Jan. 2017, 5th meeting under ITU-T auspices in Geneva, CH
• Sat. 1 – Fri. 7 Apr. 2017, 6th meeting under WG 11 auspices in Hobart, AU
The agreed document deadline for the 3rd JVET meeting is Monday 16 May 2016. Plans for
scheduling of agenda items within that meeting remain TBA.
The USNB of WG11 and the companies that financially sponsored the meeting and associated
social event – Apple, CableLabs, Google, Huawei, InterDigital, Microsoft, Mitsubishi Electric,
Netflix and Qualcomm – were thanked for the excellent hosting of the 2nd meeting of the JVET.
InterDigital, Qualcomm, Samsung and Sony were thanked for providing viewing equipment.
The 2nd JVET meeting was closed at approximately 1555 hours on Thursday 25 February 2016.
Annex A to JVET report:
List of documents
JVET number
MPEG
number
JVET-B0001
m38092
2016-02-19 2016-02-20 2016-02-20 Report of VCEG AHG1 on Coding Efficiency
23:07:31
16:04:33
16:04:33
Improvements
M. Karczewicz, M.
Budagavi
JVET-B0002
m38155
2016-02-23 2016-02-23 2016-02-23 VCEG AHG report on Subjective Distortion
03:59:29
04:01:30
04:01:30
Measurement (AHG2 of VCEG)
TK Tan
JVET-B0004
m38077
2016-02-19 2016-02-20 2016-02-21 VCEG AHG report on test sequences selection
06:37:30
21:07:58
23:03:29
(AHG4 of VCEG)
T. Suzuki, J. Boyce, A.
Norkin
JVET-B0006
m38064
2016-02-18 2016-02-20 2016-02-20
Report of AHG on JEM software development
17:56:47
07:37:18
07:37:18
X. Li, K. Suehring
JVET-B0021
m37606
2016-01-29 2016-02-20 2016-02-20 Proposed Improvements to Algorithm
02:32:08
18:09:50
18:09:50
Description of Joint Exploration Test Model 1
J. Chen, E. Alshina, G.J. Sullivan, J.-R. Ohm,
J. Boyce
JVET-B0022
m37610
2016-02-02 2016-02-12 2016-02-17 Performance of JEM 1 tools analysis by
07:28:11
18:05:42
13:57:41
Samsung
E. Alshina, A. Alshin,
K. Choi, M. Park
JVET-B0023
m37673
2016-02-08 2016-02-08 2016-02-20 Quadtree plus binary tree structure integration
09:57:19
10:03:12
17:37:03
with JEM tools
J. An, H. Huang, K.
Zhang, Y.-W. Huang, S.
Lei (MediaTek)
JVET-B0024
m37750
2016-02-10 2016-02-15 2016-02-17
Evaluation report of SJTU Test Sequences
10:02:09
15:35:39
15:20:11
Thibaud Biatek, Xavier
Ducloux
JVET-B0025
m37778
2016-02-12 2016-02-12 2016-02-15 Evaluation Report of Chimera Test Sequence
06:09:39
06:38:55
05:47:05
for Future Video Coding
H. Ko, S.-C. Lim, J.
Kang, D. Jun, J. Lee, H.
Y. Kim (ETRI)
JVET-B0026
m37779
2016-02-12 2016-02-12 2016-02-15 JEM1.0 Encoding Results of Chimera Test
06:10:40
06:39:27
05:47:37
Sequence
S.-C. Lim, H. Ko, J.
Kang, H. Y. Kim
(ETRI)
JVET-B0027
m37796
2016-02-13 2016-02-13 2016-02-13 SJTU 4K test sequences evaluation report from
03:07:31
03:10:27
03:10:27
Sejong University
Nam Uk Kim, JunWoo
Choi, Ga-Ram Kim,
Yung-Lyul Lee
JVET-B0028
m37809
2016-02-15 2016-02-15 2016-02-20 Direction-dependent sub-TU scan order on intra Shunsuke Iwamura,
02:17:32
15:55:10
17:30:17
prediction
Atsuro Ichigaya
JVET-B0029
m37820
2016-02-15 2016-02-15 2016-02-15 Evaluation report of B-Com test sequence
04:31:35
04:47:27
04:47:27
(JCTVC-V0086)
O. Nakagami, T. Suzuki
(Sony)
JVET-B0030
m37821
2016-02-15 2016-02-15 2016-02-15
Comment on test sequence selection
04:43:04
05:11:26
05:11:26
O. Nakagami, T. Suzuki
(Sony)
JVET-B0031
m37822
2016-02-15 2016-02-16 2016-02-16
Evaluation report of Huawei test sequence
05:14:56
05:08:49
05:08:49
Kiho Choi, E. Alshina,
A. Alshin, M. Park
JVET-B0032
m37823
2016-02-15
05:14:58
JVET-B0033
m37824
2016-02-15 2016-02-16 2016-02-21
Adaptive Multiple Transform for Chroma
05:21:46
05:09:44
04:32:07
Kiho Choi, E. Alshina,
A. Alshin, M. Park, M.
Park, C. Kim
JVET-B0034
m37825
2016-02-15 2016-02-16 2016-02-16
Cross-check of JVET-B0023
05:26:09
05:10:38
05:10:38
E. Alshina, Kiho Choi,
A. Alshin, M. Park, M.
Park, C. Kim
JVET-B0035
m37829
2016-02-15 2016-02-16 2016-02-16 Evaluation Report of Chimera and Huawei Test Pierrick Philippe
08:58:14
12:11:43
12:11:43
Sequences for Future Video Coding
(Orange)
JVET-B0036
m37840
2016-02-15 2016-02-15 2016-02-21 Simplification of the common test condition for X. Ma, H. Chen, H.
11:59:37
14:43:11
23:10:45
fast simulation
Yang (Huawei)
JVET-B0037
m37841
2016-02-15 2016-02-15 2016-02-19 Performance analysis of affine inter prediction
12:04:45
14:43:51
15:10:34
in JEM1.0
Created
First upload Last upload
Title
Authors
Withdrawn
H. Zhang, H. Chen, X.
Ma, H. Yang (Huawei)
JVET-B0038
m37842
2016-02-15 2016-02-15 2016-02-19
Harmonization of AFFINE, OBMC and DBF
12:07:39
14:44:23
15:09:18
H. Chen, S. Lin, H.
Yang, X. Ma (Huawei)
JVET-B0039
m37845
2016-02-15 2016-02-15 2016-02-24
Non-normative JEM encoder improvements
12:40:11
23:18:30
18:34:20
Kenneth Andersson, Per
Wennersten, Rickard
Sjoberg, Jonatan
Samuelsson, Jacob
Strom, Per Hermansson,
Martin Pettersson
JVET-B0040
m37855
2016-02-15 2016-02-15 2016-02-20 Evaluation Report of Huawei and B-Com Test
14:32:47
17:39:50
21:26:34
Sequences for Future Video Coding
Fabien Racape, Fabrice
Le Leannec, Tangi
Poirier
JVET-B0041
m37862
2016-02-15 2016-02-18 2016-02-21 Adaptive reference sample smoothing
15:20:43
03:12:59
10:06:43
simplification
Alexey Filippov, Vasily
Rufitskiy (Huawei
Technologies)
JVET-B0042
m37888
2016-02-15 2016-02-15 2016-02-20 Evaluation Report of B-COM Test Sequence for Han Boon Teo, Meng
17:34:11
18:03:23
18:27:21
Future Video Coding (JCTVC-V0086)
Dong
JVET-B0043
m37890
2016-02-15 2016-02-15 2016-02-20 Polyphase subsampled signal for spatial
18:06:12
23:23:21
18:03:10
scalability
Emmanuel Thomas
JVET-B0044
m37898
Coding Efficiency / Complexity Analysis of
2016-02-15 2016-02-15 2016-02-23
JEM 1.0 coding tools for the Random Access
18:40:36
23:54:59
20:29:56
Configuration
H. Schwarz, C. Rudat,
M. Siekmann, B. Bross,
D. Marpe, T. Wiegand
(Fraunhofer HHI)
JVET-B0045
m37956
2016-02-16 2016-02-16 2016-02-20 Performance evaluation of JEM 1 tools by
01:58:25
07:35:03
21:27:44
Qualcomm
J. Chen, X. Li, F. Zou,
M. Karczewicz, W.-J.
Chien, T. Hsieh
JVET-B0046
m37957
F. Zou, J. Chen, X. Li,
2016-02-16 2016-02-16 2016-02-21 Evaluation report of Netflix Chimera and SJTU
M. Karczewicz
02:17:57
02:25:03
23:16:32
test sequences
(Qualcomm)
JVET-B0047
m37958
2016-02-16 2016-02-16 2016-02-21
Non Square TU Partitioning
02:18:53
04:07:53
03:33:27
K. Rapaka, J. Chen, L.
Zhang, W. -J. Chien, M.
Karczewicz
JVET-B0048
m37961
2016-02-16 2016-02-16 2016-02-21 Universal string matching for ultra high quality
04:31:59
15:59:49
17:54:10
and ultra high efficiency SCC
Liping Zhao, Kailun
Zhou, Jing Guo, Shuhui
Wang, Tao Lin (Tongji
Univ.)
JVET-B0049
m37962
2016-02-16 2016-02-16 2016-02-23 Four new SCC test sequences for ultra high
04:34:23
16:02:14
08:35:43
quality and ultra high efficiency SCC
Jing Guo, Liping Zhao,
Tao Lin (Tongji Univ.)
JVET-B0050
m37963
2016-02-16 2016-02-16 2016-02-21 Performance comparison of HEVC SCC CTC
04:36:58
16:54:00
15:52:23
sequences between HM16.6 and JEM1.0
Shuhui Wang, Tao Lin
(Tongji Univ.)
JVET-B0051
m37964
2016-02-16 2016-02-16 2016-02-20
Further improvement of intra coding tools
05:10:48
05:26:13
02:47:22
S.-H. Kim, A. Segall
(Sharp)
m37965
2016-02-16 2016-02-16 2016-02-21 Report of evaluating Huawei surveillance test
07:46:08
08:02:52
21:55:29
sequences
Ching-Chieh Lin, JihSheng Tu, Yao-Jen
Chang, Chun-Lung Lin
(ITRI)
JVET-B0053
m37966
2016-02-16 2016-02-16 2016-02-20 Report of evaluating Huawei UGC test
07:49:02
08:04:05
02:32:14
sequences
Jih-Sheng Tu, ChingChieh Lin, Yao-Jen
Chang, Chun-Lung Lin
(ITRI)
JVET-B0054
m37972
2016-02-16 2016-02-16 2016-02-21 De-quantization and scaling for next generation J. Zhao, A. Segall, S.-H.
09:25:00
09:44:13
19:04:12
containers
Kim, K. Misra (Sharp)
JVET-B0055
m37973
2016-02-16 2016-02-21 2016-02-21 Netflix Chimera test sequences evaluation
09:46:19
17:00:07
17:00:07
report
JVET-B0056
m37980
2016-02-16 2016-02-16 2016-02-16 Evaluation report of SJTU Test Sequences from
T. Ikai (Sharp)
10:57:10
10:59:30
10:59:30
Sharp
JVET-B0057
m37990
2016-02-16 2016-02-18 2016-02-21
Evaluation of some intra-coding tools of JEM1
17:23:15
14:15:40
10:10:21
JVET-B0052
Maxim Sychev,
Huanbang
Chen(Huawei)
Alexey Filippov, Vasily
Rufitskiy (Huawei
Technologies)
m37994
2016-02-16 2016-02-16 2016-02-23
Modification of merge candidate derivation
23:18:03
23:35:44
21:54:07
W. -J. Chien, J. Chen,
S. Lee, M. Karczewicz
(Qualcomm)
JVET-B0059
m37998
2016-02-17 2016-02-17 2016-02-21
TU-level non-separable secondary transform
05:06:31
05:29:50
03:32:20
X. Zhao, A. Said, V.
Seregin, M.
Karczewicz, J. Chen, R.
Joshi (Qualcomm)
JVET-B0060
m38000
2016-02-17 2016-02-17 2016-02-20
Improvements on adaptive loop filter
05:19:30
05:28:11
01:40:08
M. Karczewicz, L.
Zhang, W. -J. Chien, X.
Li (Qualcomm)
JVET-B0061
m38004
2016-02-17 2016-02-17 2016-02-17 Evaluation report of SJTU test sequences for
06:29:45
07:17:22
12:18:47
future video coding standardization
Sang-hyo Park, Haiyan
Xu, Euee S. Jang
JVET-B0062
m38019
2016-02-17 2016-02-19 2016-02-19
Crosscheck of JVET-B0022(ATMVP)
13:12:30
15:06:12
15:06:12
X. Ma, H. Chen, H.
Yang (Huawei)
JVET-B0063
m38078
2016-02-19 2016-02-19 2016-02-19 Cross-check of non-normative JEM encoder
07:08:45
07:20:12
07:20:12
improvements (JVET-B0039)
B. Li, J. Xu (Microsoft)
JVET-B0064
m38079
2016-02-19
07:13:04
JVET-B0065
m38096
2016-02-20 2016-02-20 2016-02-23 Coding results of 4K surveillance and 720p
04:16:23
17:58:16
03:21:38
portrait sequences for future video coding
K. Kawamura, S. Naito
(KDDI)
JVET-B0066
m38100
2016-02-20 2016-02-21 2016-02-21 Cross-check of JVET-B0058: Modification of
09:55:37
04:28:46
04:28:46
merge candidate derivation
H. Chen, H. Yang
(Huawei)
JVET-B0067
m38103
2016-02-20 2016-02-26 2016-02-26 Cross-check of JVET-B0039: Non-normative
18:50:57
20:58:09
20:58:09
JEM encoder improvements
C. Rudat, B. Bross, H.
Schwarz (Fraunhofer
HHI)
JVET-B0068
m38104
2016-02-20 2016-02-22 2016-02-22 Cross-check of Non Square TU Partitioning
19:58:43
00:38:06
00:38:06
(JVET-B0047)
O. Nakagami (Sony)
JVET-B0069
m38105
2016-02-20 2016-02-25 2016-02-25 Crosscheck of the improvements on ALF in
21:06:16
10:41:24
10:41:24
JVET-B060
C.-Y. Chen, Y.-W.
Huang (MediaTek)
JVET-B0070
m38106
2016-02-20 2016-02-25 2016-02-25
Cross-check of JVET-B0060
21:19:55
19:48:36
19:48:36
B. Li, J. Xu (Microsoft)
JVET-B0071
m38108
2016-02-20
23:59:39
JVET-B0072
m38109
2016-02-21 2016-02-21 2016-02-21
SJTU 4k test sequences evaluation report
02:21:38
02:24:37
19:33:56
Seungsu Jeon, Namuk
Kim, HuikJae Shim,
Byeungwoo Jeon
JVET-B0073
m38117
2016-02-21 2016-02-21 2016-02-24 Simplification of Low Delay configurations for
20:17:28
20:31:14
03:10:14
JVET CTC
Maxim Sychev
(Huawei)
JVET-B0074
m38122
2016-02-22 2016-02-23 2016-02-24 Report of BoG on parallel encoding and
00:16:55
03:14:00
23:17:46
removal of cross-RAP dependencies
K. Suehring, H. Yang
JVET-B0075
m38160
2016-02-23 2016-02-23 2016-02-23 Report on BoG on informal subjective viewing
08:26:10
15:37:54
15:37:54
related to JVET-B0039
K. Andersson, E.
Alshina
JVET-B0076
m38172
2016-02-24 2016-02-24 2016-02-24
Report of BoG on selection of test material
18:14:39
20:22:20
20:22:20
T. Suzuki, J. Chen
JVET-B1000
m38200
2016-02-27
21:17:38
G. J. Sullivan, J.-R.
Ohm
JVET-B1001
m38201
J. Chen, E. Alshina, G.
2016-02-27 2016-03-08 2016-03-25 Algorithm description of Joint Exploration Test
J. Sullivan, J.-R. Ohm,
21:20:40
18:04:59
22:30:44
Model 2
J. Boyce
JVET-B1002
m38199
2016-02-26 2016-03-05 2016-03-05 Call for test material for future video coding
23:44:07
23:52:05
23:52:05
standardization
A. Norkin, H. Yang, J.R. Ohm, G. J. Sullivan,
T. Suzuki
JVET-B1010
m38202
2016-02-27 2016-04-01 2016-04-04 JVET common test conditions and software
21:31:20
15:29:21
13:20:09
reference configurations
K. Suehring, X. Li
JVET-B1011
m38182
2016-02-25 2016-02-25 2016-03-14 Description of Exploration Experiments on
02:05:31
16:41:35
07:26:38
Coding Tools
E.Alshina, J. Boyce, Y.W. Huang, S.-H. Kim,
L. Zhang
JVET-B0058
Withdrawn
Withdrawn
Meeting Report of 2nd JVET Meeting
Annex B to JVET report:
List of meeting participants
The participants of the second meeting of the JVET, according to a sign-in sheet circulated
during the meeting sessions (approximately 161 people in total), were as follows:
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.
39.
40.
41.
42.
43.
44.
45.
46.
47.
48.
49.
50.
51.
52.
53.
54.
55.
56.
57.
58.
Anne Aaron (Netflix)
Yong-Jo Ahn (Kwangwoon Univ. (KWU))
Elena Alshina (Samsung Electronics)
Peter Amon (Siemens AG)
Kenneth Andersson (LM Ericsson)
Maryam Azimi (UBC)
Giovanni Ballocca (Sisvel Tech)
Yukihiro Bandoh (NTT)
Gun Bang (ETRI)
Guillaume Barroux (Fujitsu Labs)
David Baylon (Arris)
Ronan Boitard (UBC)
Jill Boyce (Intel)
Benjamin Bross (Fraunhofer HHI)
Madhukar Budagavi (Samsung Research)
Alan Chalmers (Univ. Warwick)
Yao-Jen Chang (ITRI Intl.)
Chun-Chi Chen (NCTU/ITRI)
Jianle Chen (Qualcomm)
Peisong Chen (Broadcom)
Wei-Jung Chien (Qualcomm)
Haechul Choi (Hanbat Nat. Univ.)
Kiho Choi (Samsung Electronics)
Keiichi Chono (NEC)
Takeshi Chujoh (Toshiba)
Gordon Clare (Orange Labs FT)
Audré Dias (BBC)
Xavier Ducloux (Thomson Video Networks)
Alberto Duenas (NGCodec)
Thierry Fautier (Harmonic)
Alexey Filippov (Huawei)
Christophe Gisquet (Canon Research France)
Zhouye Gu (Arris)
Wassim Hamiclouche (IETR/INSA)
Ryoji Hashimoto (Renesas)
Dake He (Blackberry)
Yuwen He (InterDigital Commun.)
Hendry (Qualcomm)
Jin Heo (LG Electronics)
James Holland (Intel)
Seungwook Hong (Arris)
Ted Hsieh (Qualcomm Tech.)
Yu-Wen Huang (MediaTek)
Atsuro Ichigaya (NHK (Japan Broadcasting Corp.))
Tomohiro Ikai (Sharp)
Masaru Ikeda (Sony)
Shunsuke Iwamura (NHK (Japan Broadcasting Corp.))
Euee S. Jang (Hanyang Univ.)
Wonkap Jang (Vidyo)
Seungsu Jeon (Sungkyunkwan Univ. (SKKU))
Minqiang Jiang (Huawei)
Guoxin Jin (Qualcomm Tech.)
Rajan Joshi (Qualcomm)
Dongsan Jun (ETRI)
Joël Jung (Orange Labs)
Jung Won Kang (Electronics and Telecom Research Institute (ETRI))
Marta Karczewicz (Qualcomm Tech.)
Kei Kawamura (KDDI)
59.
60.
61.
62.
63.
64.
65.
66.
67.
68.
69.
70.
71.
72.
73.
74.
75.
76.
77.
78.
79.
80.
81.
82.
83.
84.
85.
86.
87.
88.
89.
90.
91.
92.
93.
94.
95.
96.
97.
98.
99.
100.
101.
102.
103.
104.
105.
106.
107.
108.
109.
110.
111.
112.
113.
114.
115.
116.
Louis Kerofsky (Interdigital)
Dae Yeon Kim (Chips & Media)
Hui Yong Kim (ETRI)
Jae-Gon Kim (Korea Aerosp. Univ.)
Jungsun Kim (MediaTek USA)
Namuk Kim (Sungkyunkwan Univ. (SKKU))
Seung-Hwan Kim (Sharp)
Hyunsuk Ko (Electronics and Telecom Research Institute (ETRI))
Krasimir Kolarov (Apple)
Konstantinos Konstantinides (Dolby Labs)
Patrick Ladd (Comcast Cable)
Jani Lainema (Nokia)
Fabrice Le Léannec (Technicolor)
Dongkyu Lee (Kwangwoon Univ.)
Jae Yung Lee (Sejong Univ.)
Jinho Lee (Electronics and Telecom Research Institute (ETRI))
Sungwon Lee (Qualcomm)
Yung-Lyul Lee (Sejong Univ.)
Shawmin Lei (MediaTek)
Ming Li (ZTE)
Xiang Li (Qualcomm Tech.)
Chongsoon Lim (Panasonic)
Jaehyun Lim (LG Electronics)
Sung-Chang Lim (Electronics and Telecom Research Institute (ETRI))
Sung-Won Lim (Sejong Univ.)
Ching-Chieh Lin (ITRI Intl.)
Tao Lin (Tongji Univ.)
James Wenjun Liu (Huawei)
Shan Liu (MediaTek)
Ang Lu (Zhejiang Univ.)
Taoran Lu (Dolby)
Xiang Ma (Huawei)
Dimitrie Margarit (Sigma Designs)
Detlev Marpe (Fraunhofer HHI)
Akira Minezawa (Mitsubishi Electric)
Koohyar Minoo (Arris)
Kiran Misra (Sharp)
Matteo Naccari (BBC R&D)
Ohji Nakagami (Sony)
Panos Nasiopoulos (Univ. British Columbia)
Tung Nguyen (Fraunhofer HHI)
Didier Nicholson (VITEC)
Andrey Norkin (Netflix)
Byungtae Oh (Korea Aerosp. Univ.)
Jens-Rainer Ohm (RWTH Aachen Univ.)
Hao Pan (Apple)
Krit Panusopone (Motorola Mobility)
Manindra Parhy (Nvidia)
Min Woo Park (Samsung Electronics)
Sang-Hyo Park (Hanyang Univ.)
Youngo Park (Samsung Electronics)
Wen-Hsiao Peng (ITRI Intl./NCTU)
Pierrick Philippe (Orange Labs FT)
Tangi Poirier (Technicolor)
Mahsa Pourazad (TELUS Commun. & UBC)
Fangjun Pu (Dolby Labs)
Krishnakanth Rapaka (Qualcomm Tech.)
Justin Ridge (Nokia)
117.
118.
119.
120.
121.
122.
123.
124.
125.
126.
127.
128.
129.
130.
131.
132.
133.
134.
135.
136.
137.
138.
139.
140.
141.
142.
143.
144.
145.
146.
147.
148.
149.
150.
151.
152.
153.
154.
155.
156.
157.
158.
159.
160.
161.
Dmytro Rusanovskyy (Qualcomm)
Amir Said (Qualcomm Tech.)
Jonatan Samuelsson (LM Ericsson)
Yago Sanchez De La Fuente (Fraunhofer HHI)
Andrew Segall (Sharp)
Vadim Seregin (Qualcomm)
Masato Shima (Canon)
Rickard Sjöberg (Ericsson)
Robert Skupin (Fraunhofer HHI)
Ranga Ramanujam Srinivasan (Nvidia)
Yeping Su (Apple)
Karsten Sühring (Fraunhofer HHI)
Gary Sullivan (Microsoft)
Haiwei Sun (Panasonic)
Teruhiko Suzuki (Sony)
Maxim Sychev (Huawei Tech.)
Yasser Syed (Comcast Cable)
Han Boon Teo (Panasonic)
Emmanuel Thomas (TNO)
Tadamasa Toma (Panasonic)
Pankaj Topiwala (FastVDO)
Alexandros Tourapis (Apple)
Jih-Sheng Tu (ITRI international)
Yi-Shin Tung (ITRI USA / MStar Semi.)
Rahul Vanam (InterDigital Commun.)
Wade Wan (Broadcom)
Jian Wang (Polycom)
Stephan Wenger (Vidyo)
Dongjae Won (Sejong Univ.)
Ping Wu (ZTE UK)
Xiangjian Wu (Kwangwoon Univ.)
Xiaoyu Xiu (InterDigital Commun.)
Jizheng Xu (Microsoft)
Xiaozhong Xu (MediaTek)
Anna Yang (Korea Aerosp. Univ.)
Haitao Yang (Huawei Tech.)
Yan Ye (InterDigital Commun.)
Sehoon Yea (LG Electronics)
Peng Yin (Dolby Labs)
Yue Yu (Arris)
Li Zhang (Qualcomm Tech.)
Wenhao Zhang (Intel)
Xin Zhao (Qualcomm Tech.)
Zhijie Zhao (Huawei Tech.)
Jiantong Zhou (Huawei Tech.)
Annex J – Audio report
Source: Schuyler Quackenbush, Chair
Table of Contents
1 Record of AhG meetings   361
1.1 AhG on 3D Audio Phase II   361
2 Opening Audio Plenary   365
3 Administrative matters   365
3.1 Communications from the Chair   365
3.2 Approval of agenda and allocation of contributions   365
3.3 Creation of Task Groups   365
3.4 Approval of previous meeting report   365
3.5 Review of AHG reports   365
3.6 Ballots to process   365
3.7 Received National Body Comments and Liaison matters   365
3.8 Joint meetings   366
3.9 Plenary Discussions   366
4 Task group activities   366
4.1 Joint Meetings   366
4.1.1 With Systems on 3D Audio PDAM 4   366
4.1.2 With All on MPEG Vision   367
4.1.3 With 3DG on Interactive 360 degree Media, VR and AR   368
4.1.4 With Requirements on New Profile Proposal   368
4.1.5 With All on Joint work between JPEG and MPEG   369
4.2 Task Group discussions   369
4.2.1 3D Audio Phase 1   369
4.2.2 3D Audio AMD 3 (Phase 2)   370
4.2.3 Maintenance   379
4.3 Discussion of Remaining Open Issues   382
5 Closing Audio Plenary and meeting deliverables   386
5.1 Discussion of Open Issues   386
5.2 Recommendations for final plenary   386
5.3 Disposition of Comments on Ballot Items   386
5.4 Responses to Liaison and NB comments   386
5.5 Establishment of Ad-hoc Groups   386
5.6 Approval of output documents   386
5.7 Press statement   386
5.8 Agenda for next meeting   386
5.9 All other business   386
5.10 Closing of the meeting   387
Annex A Participants   388
Annex B Audio Contributions and Schedule   390
Annex C Task Groups   394
Annex D Output Documents   395
Annex E Agenda for the 115th MPEG Audio Meeting   398
1 Record of AhG meetings
1.1 AhG on 3D Audio Phase II
A meeting of the AhG on 3D Audio was held from 1300 to 1800 hrs on the Sunday prior to the 114th meeting of WG11, at the MPEG meeting venue.
3D Audio Phase 2 CEs
Tonal Component Coding (TCC)
Lukasz Januszkiewicz, Zylia, presented
m37930
Zylia Poznan Listening Test Site Properties
Jakub Zamojski, Lukasz
Januszkiewicz, Tomasz Zernicki
The contribution presents details on the Zylia listening room. This room meets BS.1116 requirements in terms of:
• Aspect ratio
• Reverberation time as a function of frequency
Speakers are on the surface of a cylinder with a radius of approximately 3 meters.
Lukasz Januszkiewicz, Zylia, presented
m37932
Corrections and clarifications on Tonal
Component Coding
Lukasz Januszkiewicz, Tomasz
Zernicki
The contribution proposes syntax changes:
• Add one bit and define a 2-bit field tccMode in the config(). This permits one or, alternatively, two TCC "packets" of data per CPE.
Other corrections are proposed that are either purely editorial or make the specification text conform to the reference software.
All experts present in the AhG meeting agreed to recommend to the Audio Subgroup to
incorporate all changes in the contribution into a Study on MPEG-H 3D Audio DAM3
output document.
Christian Neukam, FhG-IIS, presented
m37947
Listening Test Results for TCC from FhG
IIS
Sascha Dick, Christian Neukam
The contribution reports on a cross-check listening test on the proposed Zylia TCC Core Experiment. In the test:
• Sys1: RM
• Sys2: CE technology using ODFT-based synthesis
In absolute MUSHRA scores, no items were different and the mean was not different. In differential MUSHRA scores, 1 item was better for Sys2; the mean was not different, but the mean value was better by 1 MUSHRA point.
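As background on the scoring terms used in these notes, "differential" figures are typically per-listener score differences between the system under test and the reference system; a toy sketch (made-up scores, not the test sites' actual analysis scripts), in Python:

# Toy example of absolute vs. differential MUSHRA scoring for one item.
import statistics

sys1 = [77, 80, 76, 79, 78]   # made-up listener scores for the reference (RM)
sys2 = [78, 82, 75, 80, 79]   # made-up listener scores for the CE system

abs_mean_sys2 = statistics.mean(sys2)                # absolute score for Sys2
diff = [t - r for t, r in zip(sys2, sys1)]           # per-listener differences
diff_mean = statistics.mean(diff)                    # differential score

print(abs_mean_sys2, diff_mean)   # 78.8 and 0.8 (positive favours Sys2)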
Lukasz Januszkiewicz, Zylia, presented
m37933
Low Complexity Tonal Component Coding
Tomasz Zernicki, Lukasz
Januszkiewicz, Andrzej Rumiaski,
Marzena Malczewska
The contribution, along with the previous cross-checks, consists of a complete CE
proposal.
The contribution explores whether a Low Complexity TCC might be appropriate for the Low Complexity Profile. The "RM" baseline was:
• MPEG-H 3DA LC RQE with IGF active
The point of the experiment is that TCC is able to work with the IGF tool. The contribution reports on a listening test on the proposed Zylia TCC Core Experiment. In the test:
• Sys1: RM
• Sys2: CE technology using ODFT-based synthesis
In absolute MUSHRA scores, no items were different and the mean was not different. In differential MUSHRA scores, 3 items were better for Sys2 and the mean was better, with the mean value better by 1.5 MUSHRA points.
The presenter noted that the number of Tonal Components is typically small (e.g. fewer than 3 per core channel), in which case the time-domain tone synthesis technique would fit a Low Complexity use case. He also noted that low complexity could be guaranteed via restrictions.
The Chair clarified that the following issues and questions are on the table:
• TCC works with IGF (as demonstrated in the listening test).
• We might assume that TD synthesis is higher quality than ODFT synthesis.
• We need to investigate the maximum number of Tonal Components per core audio channel.
This will continue to be discussed and brought up later in the week.
High Resolution Envelope Processing (HREP)
Seungkwon Beack, ETRI, presented
m37833
ETRI's cross-check report for CE on High
Resolution Envelope Processing
Seungkwon Beack, Taejin Lee
The contribution presents a cross-check listening test on HREP. Tested were:
• 48 kb/s and 128 kb/s
• 8 items (stereo)
• 11 listeners
• Listening via headphones
• Sys1: HREP
• Sys2: 3D Audio RM
48 kb/s:
• Absolute: 4 items better, overall better
• Differential: 8 items better (i.e. all better), overall better by 9 points (value is ~65)
128 kb/s:
• Absolute: 1 item better, overall better
• Differential: 2 items better, overall better; overall better by 2 points (value is ~92)
The Chair presented, on behalf of FhG-IDMT experts
m37715
Cross Check Report for CE on HREP (Test
Site Fraunhofer IDMT)
Judith Liebetrau, Thomas Sporer,
Alexander Stojanow
The contribution presents a cross-check listening test on HREP.
5.1 channel, 128 kb/s:
• Absolute: no difference
• Differential: 3 items better (only applause items), overall no difference
Stereo, 48 kb/s:
• Absolute: overall better
• Differential: 5 items better, overall better
Stereo, 128 kb/s:
• Absolute: no difference
• Differential: no difference
Christian Neukam, FhG-IIS, presented
m37889
3DA CE on High Resolution Envelope
Processing (HREP)
Sascha Disch, Florin Ghido, Franz
Reutelhuber, Alexander Adami,
Juergen Herre
The contribution, along with the previous cross-checks, consists of a complete CE
proposal.
The presenter motivated the need for the HREP tool – in short that applause-like signals
are common in many scenarios, particularly broadcast of live performances.
The technology is an independent, single-channel pre- and post-processing tool whose
side information rate is 1 to 4 kb/s for stereo (i.e. 2 channels).
Page: 362
Date Sa
363
Complexity is dominated by the 50% overlap real-valued DFT/IDFT, which is approximately 2.7 WMOPS. The worst case is 3.4 WMOPS when other processing is added. For the LC Profile, it is proposed to limit the number of channels/objects using HREP. Further, the presenter noted that if the gain is 1, then the DFT/IDFT does not need to be executed.
For the LC Profile, it is proposed that:
• No more than 8 core audio channels use HREP at any instant.
• No more than 50% of HREP frames (i.e. the 64-frame hop) are active with HREP.
The presenter showed activation plots (not included in the contribution) indicating that HREP is active for almost 100% of the frames for the applause items, and almost none of the frames for one non-applause item.
The presenter noted that, to a large extent, the HREP coding is independent of core
coder bitrate.
FhG-IIS conducted the same listening tests as reported in the previous cross-checks. The test items in these tests were:
• 5.1 channel, 128 kb/s: 7 applause items
• Stereo, 48 kb/s and 128 kb/s: 11 MPEG Surround items, plus 2 additional applause items
Test results were:
• 5.1 channel, 128 kb/s: differential: 4 items better, overall better
• 48 kb/s stereo: 2 items better in absolute score; 7 of 8 better in differential, overall better, mean improvement of more than 10 MUSHRA points
• 128 kb/s stereo: 2 items better, overall better in absolute by 5 MUSHRA points; 6 of 8 better in differential, mean better
The contribution presented in Annex B the test outcomes when all listening test data are pooled:
• 5.1 channel, 128 kb/s: 4 items better, overall better
• 48 kb/s stereo: 8 items (all) better in differential, by as much as 15 MUSHRA points; overall better by 10 MUSHRA points
• 128 kb/s stereo: 6 items better in differential, overall better; overall improvement of 3 MUSHRA points
There was some discussion as to exactly where the HREP post-processor was in the 3D
Audio decoder. The presenter noted that a figure in the contribution makes this very
clear. It was concluded that HREP is applicable to channels and discrete objects (i.e. not
SAOC objects). The contribution does not cover HOA content, and the Chair concluded
that it may be too late in the Phase 2 timeline to contemplate application to HOA content.
In summary, the contribution requests:
• That HREP technology be accepted into the MPEG-H 3D Audio standard
• That HREP be included in the LC Profile with the recommended constraints
All experts present in the AhG meeting agreed to recommend to the Audio Subgroup to
incorporate the HREP technology into a Study on MPEG-H 3D Audio DAM3 output
document and also into the High Profile.
Issues of acceptance into LC profile will continue to be discussed.
Page: 363
Date Sa
364
HOA Issues
Gregory Pallone, Orange, presented
m37894
Proposed modifications on MPEG-H 3D
Audio
Gregory Pallone
The contribution notes that when presenting via loudspeakers and listening in the near
field, the perceived spatial cues, e.g. interaural level, may be incorrect. It gives use cases
in which such near-field listening will occur, e.g. inside a car.
The results of an experiment are reported, in which far-field and near-field presentation
are simulated using binaural rendering and presentation. Near Field Compensation (NFC)
simulates moving the speakers further from the listener.
The contribution proposes to modify the 3D Audio specification in two places as follows:
Preprocessing block NFC processing (should be switched active if UseNfc is true and the speaker-listener distance rmax is smaller than NfcReferenceDistance), pre-processing block DRC-1 for HOA
and as an alternative to rendering to loud-speakers, a computational efficient binaural rendering
directly using the HOA coefficients (H2B, see 13.3.1). Computational more efficient but
mathematically equivalent ways to implement the processing chain may be found in Annex G.
with:
Preprocessing block NFC processing (should be switched active if rmax < NfcReferenceDistance
in case UsesNfc flag is active, or if rmax < 1.5 meters in case UsesNfc flag is inactive, with rmax
the maximum speaker distance of the listening setup), pre-processing block DRC-1 for HOA and
as an alternative to rendering to loud-speakers, a computational efficient binaural rendering
directly using the HOA coefficients (H2B, see 13.3.1). Computational more efficient but
mathematically equivalent ways to implement the processing chain may be found in Annex G.
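A minimal sketch of the activation rule in the proposed replacement text (identifier names follow the quoted text; this is illustrative only, not pseudo-code from the standard):

# Proposed NFC activation condition, as described in the replacement text above.
def nfc_active(r_max_m, uses_nfc, nfc_reference_distance_m):
    """r_max_m is the maximum speaker distance of the listening setup, in metres."""
    if uses_nfc:
        return r_max_m < nfc_reference_distance_m
    return r_max_m < 1.5  # default threshold when the UsesNfc flag is inactive

print(nfc_active(1.2, False, 2.0))   # True: NFC pre-processing would be switched active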
Nils Peters, Qualcomm, noted that the contribution uses a very regular set of
loudspeaker positions, and wondered if the tested quality might be less with e.g. the
MPEG-H 3D Audio 22.2 loudspeaker positions.
Oliver Weubolt, Technicolor, questioned how realistic or prevalent the use case of close-placed loudspeaker listening is.
The Chair asked experts to work together to draft the intent of the Orange proposal in clearer and more concise language. This topic will continue to be discussed.
AhG Report
The Chair presented
m37901
Report of AHG on 3D Audio and Audio
Maintenance
Schuyler Quackenbush
Experts agreed that the report represents the AhG activity. The Chair will upload it as a
contribution document.
The AhG meeting was adjourned at 5:50 pm.
2 Opening Audio Plenary
The MPEG Audio Subgroup meeting was held during the 114th meeting of WG11,
February 22-26, San Diego, USA. The list of participants is given in Annex A.
3 Administrative matters
3.1 Communications from the Chair
The Chair summarised the issues raised at the Sunday evening Chair's meeting, proposed task groups for the week, and proposed agenda items for discussion in the Audio plenary.
3.2 Approval of agenda and allocation of contributions
The agenda and schedule for the meeting were discussed, edited and approved. The agenda shows the documents contributed to this meeting and presented to the Audio Subgroup, either in the task groups or in the Audio plenary. The Chair brought relevant documents from Requirements and Systems to the attention of the group. The agenda was revised in the course of the
week to reflect the progress of the meeting, and the final version is shown in Annex B.
3.3 Creation of Task Groups
Task groups were convened for the duration of the MPEG meeting, as shown in Annex
C. Results of task group activities are reported below.
3.4 Approval of previous meeting report
The Chair asked for approval of the 113th Audio Subgroup meeting report, which was
registered as a contribution. The report was approved.
m37805
113th MPEG Audio Report
Schuyler Quackenbush
3.5
Review of AHG reports
There were no requests to review any of the AHG reports.
m37901 | AHG on 3D Audio and Audio Maintenance | ISO secretariat
m37902 | AHG on Responding to Industry Needs on Adoption of MPEG Audio | ISO secretariat
3.6
Ballots to process
The following table indicates the ballots to be processed at this meeting.
WG11 Doc | Title | Closes | Ballot
15166 | ISO/IEC 13818-1:201x/DAM 5 | 2015-12-25 | 37555
15826 | ISO/IEC 14496-3:2009/Amd.3:2012/DCOR 1 | 2016-02-10 | 37573
15828 | ISO/IEC 14496-3:2009/PDAM 6, Profiles, Levels and Downmixing Method for 22.2 Channel Programs | 2015-12-26 | 37553
15838 | ISO/IEC 23003-3:2012/Amd.1:2014/DCOR 2 | 2016-01-26 | 37640
15392 | ISO/IEC 23003-3:2012/DAM 3 DRC and IPF | - | 37560
15842 | ISO/IEC 23003-4:2015 PDAM 1, Parametric DRC, Gain Mapping and Equalization Tools | - | 37643
15847 | ISO/IEC 23008-3:2015/DAM 2, MPEG-H 3D Audio File Format Support | - | 37559
3.7
Received National Body Comments and Liaison matters
The Liaison statements were presented, discussed, and resources to draft responses were
allocated.
Number | Title | Source
m37580 | Liaison Statement from ITU-R SG 6 on ITU-R BS.1196-5 | ITU-R SG 6 via SC 29 Secretariat
m37655 | Liaison Statement from DVB on MPEG-H Audio | DVB
m37589 | IEC CD 61937-13 and 61937-14 | IEC TC 100 via SC 29 Secretariat
m37590 | IEC CD 61937-2 Ed. 2.0/Amd.2 | IEC TC 100 via SC 29 Secretariat
m37596 | IEC CDV MIDI (MUSICAL INSTRUMENT DIGITAL INTERFACE) SPECIFICATION 1.0 (ABRIDGED EDITION, 2015) | IEC TC 100 via SC 29 Secretariat
m37597 | IEC NP MIDI (MUSICAL INSTRUMENT DIGITAL INTERFACE) SPECIFICATION 1.0 (ABRIDGED EDITION, 2015) | IEC TC 100 via SC 29 Secretariat
m37651 | IEC NP Microspeakers | IEC TC 100 via SC 29 Secretariat
m37652 | IEC NP Multimedia Vibration Audio Systems - Method of measurement for audio characteristics of audio actuator by pinna-conduction | IEC TC 100 via SC 29 Secretariat
m37658 | IEC CD 60728-13-1 Ed. 2.0 | IEC TC 100 via SC 29 Secretariat
3.8
Joint meetings
Groups | What | Where | Day | Time
Audio, Systems | Carriage of MPEG-H Systems data in MPEG-H 3D Audio: m37531 Study Of 23008-3 DAM4 (w15851) - Carriage Of Systems Metadata, Michael Dolan, Dave Singer, Schuyler Quackenbush; m37896 Study on Base Media File Format 14496-12 PDAM1 | Audio | Tue | 0900-1000
All | MPEG Vision | Audio | Wed | 1130-1300
Audio, 3DV | MPEG-H 3D Audio and VR, AR | Audio | Thu | 1000-1100
Audio, Req | Possible future MPEG-H 3D Audio profiles: m37832 MPEG-H Part 3 Profile Definition | Audio | Thu | 1100-1200
All | 360 Media | Audio | Thu | 1530-1600
3.9
Plenary Discussions
Henney Oh, WILUS, presented
m37987
Information on Korean standard for
terrestrial UHD broadcast services
Henney Oh
The presenter reviewed the information from NGBF (m37104) that was submitted to the
113th MPEG meeting. At this meeting, the contribution notes that MPEG-H 3D Audio,
and specifically the Low Complexity Profile, was selected as the audio broadcast technology.
Schedule:
- 2Q trial
- 4Q public roll-out
The NGBF envisions 16-channel input and 12-channel output.
The Chair was delighted to state that the Korean NGBF is the first customer of MPEG-H
3D Audio.
4
Task group activities
In this report consensus positions of the Audio subgroup are highlighted in yellow.
Other text captures the main points of presentations and subsequent discussion.
4.1
Joint Meetings
4.1.1
With Systems on 3D Audio PDAM 4
The Audio Chair presented
m37531
Study Of 23008-3 DAM4 (w15851) Carriage Of Systems Metadata
Michael Dolan, Dave Singer, Schuyler
Quackenbush
The contribution proposes to replace the DAM4 text from the 113th meeting with new
text that specifies a new signalling mechanism using UUID as opposed to a field value
specified in the MPEG-H 3D Audio specification.
It was the consensus of the Audio and Systems experts to issue the contribution text as
Study on ISO/IEC 23008-3 DAM4.
Page: 366
Date Sa
367
m37896
Study on Base Media File Format
14496-12 PDAM1
It was the consensus of the Audio and Systems experts to issue the contribution text as
ISO/IEC 14496-14/DAM 1. The ballot comments were reviewed and an acceptable
response was determined.
4.1.2
With All on MPEG Vision
Rob Koenen, TNO, reviewed N15724, from the 113th meeting, which captured the status
of MPEG Vision. Many experts contributed to the discussion.
The concept of MPEG Vision is:
- A 5 year time horizon
- Stakeholders
  o Industry that uses MPEG specifications
  o Companies and institutions that participate in MPEG
  o The organizations in liaison with MPEG
- Establish dialog with industry, via WG11 output documents, to bring MPEG Vision to their attention for both participation and feedback
- Vision topics are succinctly captured in an Annex of N15724
A next version of the MPEG Vision document could be in the following format:
- Introduction
- MPEG technology has enabled new markets
  o MP3 enabled Internet music delivery
  o Media compression that enabled wireless streaming media to SmartPhones
- Why MPEG is uniquely able to address the emerging needs of industry (high quality technology due to internal competitive phase, rigorous conformance and reference software testing)
- Emerging Trends
  o Increasing supply and demand of media
  o Social media
  o Mobile high-speed networking that is always available
  o Broadband/Broadcast convergence
  o Internet of Things
  o Cloud computing and storage
  o Big Data, machine learning, search and discovery
  o Using the above to improve the customer experience (the “individual” end-user and the business application user)
  o Facilitating richer media authoring and presentation
  o Facilitating richer user-to-user communication
  o Facilitating seamless media processing chains, e.g. via a Common Media Format
- Issues
  o Security
- Specific Topic Keywords (e.g. taken from aspects of Trends), followed by several sentences of exciting text. It may be desirable to also have a figure to communicate more effectively.
- Each specific topic could be associated with 30 seconds of evangelizing video, to be compiled into the “MPEG Vision Video”
- Specific ask to potential customers:
  o What are roadblocks?
  o What is level of interest?
  o When is it needed?
- What will MPEG offer between now and 2020?
What can Audio experts contribute? At least a summary of emerging trends in audio and
more specifically, audio compression.
4.1.3
With 3DG on Interactive 360 degree Media, VR and AR
The Chair showed a figure from the 113th Audio report on the Interactive 360 degree Media
joint meeting and noted that objects in the VR model have both (yaw, pitch, roll) and (x,
y, z) parameters. The 3DV Chair noted that objects can be virtual sound sources, "users"
or even rooms. In this respect, rooms can also have associated reverberation properties.
The Audio Chair noted that if a higher level process (e.g. a visual engine) in an Augmented
Reality use case assesses room properties such as dimension, acoustic reflectivity and
reverberation, then Audio can react to Augmented Reality in the same way as to Virtual
Reality.
This need may trigger Audio to define interfaces to AR or VR engines and also create
acoustic simulation (e.g. reverberation) technology.
4.1.4
With Requirements on New Profile Proposal
Christof Fersch, Dolby, presented
m37832
MPEG-H Part 3 Profile Definition
Christof Fersch
This is a joint input contribution from Dolby, DTS, and Zylia, which proposes a new profile.
The presenter noted that the proposed new profile has two main objectives:
- Efficient support of object-based immersive content
- Efficient integration into products supporting multiple formats
It is anticipated that a renderer would be out of scope for this profile, such that device
manufacturers could add value or differentiate with their own renderer.
Relative to the LC Profile, it proposes the following tool changes:
- Add: eSBR, MPS212, SAOC-3D, IC, TCC
- Remove: Format Converter, Object Renderer, HOA, Binaural, System metadata carriage
Jon Gibbs, Huawei, stated that Huawei supports the proposed profile since they desire to
provide their own renderer.
Marek Domanski, Polish National Body, stated support for a profile that does not include
a renderer.
Max Neuendorf, FhG-IIS, noted that the requested profile fails to address the requirement of
rendering.
Deep Sen, Qualcomm, asserted that the authors of the contribution are also the main
competitors with MPEG-H 3D Audio in the marketplace. He further noted that a
normative behaviour, including rendering, permits broadcasters to fulfil the regulatory
requirements of e.g. the CALM act.
Juergen Herre, IAL/FhG-IIS, noted that SAOC-3D cannot provide high-quality
individual objects as envisioned by the contribution.
Gregory Pallone, Orange, expressed the point of view of Orange as a broadcaster.
Orange does not support non-normative rendering, and is concerned about using only
some other renderer.
Matteo Naccari, BBC, stated that BBC wishes to have complete control of the complete
broadcast chain, and hence does not support this proposal.
Jon Gibbs, Huawei, noted that this is a fourth profile in MPEG-H 3D Audio. Huawei
supports the possibility of providing its own renderer as a means to differentiate their
products in the market. Further, he stated that Huawei, as a mobile handset device
manufacturer, is very familiar with standards where end-to-end quality needs to be
maintained and in those markets the network operators subsidise the handsets in order to
justify this end-to-end control.
David Daniels, Sky Europe, stated support for the proposed profile.
Ken McCann, speaking as DVB liaison officer, noted that the Liaison to MPEG from
DVB in m37655 includes DVB's Commercial Requirements for Next Generation Audio,
which reflect the opinions expressed in the room: there is support for both normative
and informative renderers.
Yasser Syed, Comcast, stated that profiles should be responsive to market needs.
Matteo Naccari, BBC, stated that when broadcast consumers complain, they complain to
BBC.
The Chair proposed that Audio experts draft a WG11 document that captures the
proposal for careful study but on a timeline independent of DAM 3.
In addition, Audio could draft a resolution requesting that experts in the broadcast
industry review the document and comment via contribution documents.
Furthermore, Audio could draft a Liaison template addressed to appropriate application
standards organizations.
4.1.5
With All on Joint work between JPEG and MPEG
The AhG Chair, Arianne Hinds, CableLabs, gave a presentation on the plan for joint
work. She presented a diagram on workflow, which Audio experts can find in the notes
on the Joint Meeting on 360 degree Media as documented in the 113th MPEG Audio
report.
Visual experts have proposed a Light Field representation. The Audio Chair stated that
Audio experts could envision sound field analysis/synthesis as being applicable.
The work addresses the end-to-end workflow, and the goal of the joint meeting was to fill in
the headings for Audio. Candidate headings were:
- Audio Applications of Conceptual Workflow
- Technologies supporting Audio
- Attributes and Commonalities of Audio Technologies
Marius Preda, Univ Mines, noted that there can be sound fields and sound objects, and
that both may be appropriate for this study.
Interested Audio experts will follow this work and progress documents N16150 and
N15971.
4.2
Task Group discussions
4.2.1
3D Audio Phase 1
Phase I Corrections
Max Neuendorf, FhG-IIS, presented
m37885
Proposed DCOR to MPEG-H 3D Audio
edition 2015
Max Neuendorf, Achim Kuntz, Simone
Fueg, Andreas Hoelzer, Michael
Kratschmer, Christian Neukam, Sascha
Dick, Elena Burdiel, Toru Chinen
The contribution proposed numerous corrections and clarifications to the 3D Audio base
text.
The Chair noted that, procedurally, it is proper to address Phase 1 corrections with a
DCOR, and Phase 2 corrections with a Study on text, but at the 115th MPEG meeting it is
expected that both texts will be merged to create the MPEG-H 3D Audio Second Edition.
The presenter reviewed the corrections proposed in the contribution.
Gregory Pallone, Orange, noted that in HOA binaural rendering there can be two BRIR
filter sets: normal and H2B, and that this conflicts with the proposed change: signalling
which BRIR to use. The presenter stated that he would consult with BRIR experts back
at the office and bring back a recommendation for further discussion.
With the exception of one proposed correction discussed in the preceding paragraph, it
was the consensus of the Audio subgroup to incorporate all changes proposed in the
contribution into ISO/IEC 23008-3:2015/DCOR 1 Corrections, to issue from this
meeting.
4.2.2
3D Audio AMD 3 (Phase 2)
Kai WU, Panasonic Singapore, presented
m37827
Proposed simplification of HOA parts in
MPEG-H 3D Audio Phase 2
Hiroyuki EHARA, Sua-Hong NEO, Kai
WU
The contribution raises concerns on Near Field Compensation (NFC) in LC Profile.
The contribution notes the following:
To avoid creating such a confusing and difficult standard, this alternative option
between "lower HOA order with NFC" and "higher HOA order without NFC"
should be removed, and the low complexity should be guaranteed for the HOA
tools. There are four options of countermeasure as follows:
1) Always enable NFC (no order restriction on the NFC usage)
2) Always enable NFC (reduce the HOA order restriction in each level to
the order where NFC can be applied)
3) Always disable NFC (LCP does not support NFC)
4) Disable HOA (LCP does not support HOA)
The contribution discusses how each option could be specified in the LC profile text.
In conclusion, the presenter stated support for option 3), which would require the
following text:
If option 3 is chosen,
a) remove NFC from LCP in Table P1 in [1] and the corresponding "Note 1",
b) change "Note 6" in Table P1 in [1] to "Note 6: In order to meet target complexity
for the profile/levels, implementers shall use the low-complexity HOA decoder and
renderer",
c) remove the NFC processing text after Table P4 in [1], and
d) add the following text in the introduction part of Section 18 in [1]: "This section
specifies the low-complexity HOA decoding and rendering algorithm. In the low-complexity
profile, this section overrides the original sections of 'HOA decoding' and
'HOA rendering' for achieving a reduced complexity; otherwise this section is
informative."
The Chair noted that there are two components to the option 3 proposal:
- Remove NFC from LC Profile
- Mandate use of the low complexity HOA decoding found in Annex G
Max Neuendorf, FhG-IIS, stated that the "option 3" proposal would lead to simpler, lower-complexity implementations.
The Chair recommended that experts continue to discuss the issues raised by the Orange contribution.
Oliver Wuebbolt, Technicolor, presented
m37770
Corrections and Clarifications on
MPEG-H 3D Audio DAM3
Oliver Wuebbolt, Johannes Boehm,
Alexander Krueger, Sven Kordon,
Florian Keiler
The contribution proposed corrections and clarifications to the DAM3 text concerning
HOA processing.
- Clarification of the order of DRC and NFC processing, with both text and a figure
- Restrict application of NFC to only one HOA channel group
- Other corrections and clarifications
There was discussion on NFC and HOA channel groups between Gregory Pallone,
Orange and the presenter. The Chair suggested that this one topic needs further
discussion.
With the exception of NFC and HOA channel groups, it was the consensus of the Audio
subgroup to incorporate all changes proposed in the contribution into the Study on
ISO/IEC 23008-3:2015/DAM 3, to issue from this meeting.
Nils Peters, Qualcomm, presented
m37874
Thoughts on MPEG-H 3D Audio
Nils Peters, Deep Sen, Jeongook Song,
Moo Young Kim
The contribution has two categories of proposals:
- Extension of CICP tables
- HOA issues
Pre-binauralized Stereo
It proposes to signal "stereo that is binauralized" as a CICP index, specifically index 21.
This must be added to a table in MPEG-H and a table in CICP. Further, such signals
should be normatively specified to be processed with only certain outputs.
Achim Kuntz, FhG-IIS, noted that the WIRE output might facilitate processing of pre-binauralized stereo.
Gregory Pallone, Orange, reminded experts of their contribution to the previous MPEG
meeting, in which an ambiance signal is provided as pre-binauralized stereo and an
object is mixed onto it using a personal BRIR.
Coverage of ITU-R Speaker Configurations
The contribution proposes to add ITU-R speaker configurations to CICP and 3D Audio
tables.
Support for Horizontal-Only HOA
Currently, the 3D Audio HOA tools do not have any directivity that lies directly on the
horizontal plane. This can be resolved via an additional 64-value codebook and without
any changes to bitstream syntax.
Screen-related Adaptation for HOA
In the current specification, HOA audio sound stage adaptation can result in peaks in
computational complexity. The presenter noted that an implementation-dependent
solution is to pre-compute the adaptation matrix for each production and decoder screen
size combination.
The contribution proposes an alternative signalling of production screen size.
HOA and Static Objects
If a program consists of an HOA scene and e.g. 3 static dialog objects, the contribution
suggests that the HOA renderer be used for the static dialog objects.
Michael K, FhG-IIS, noted that switch group interactivity (e.g. dialog level) is not
supported in HOA rendering. Gregory Pallone, Orange, noted that it should be possible.
The Chair suggested that both of the preceding issues need more discussion, and will
bring them up later in the week after all other contributions have been presented.
Achim Kuntz, FhG-IIS, presented
m37892
Review of MPEG-H 3DA Metadata
Simone Fueg, Christian Ertel, Achim
Kuntz
The contribution was triggered by feedback from FhG-IIS implementers. The presenter
noted that there are two distinct categories of 3D Audio metadata:
- User interaction
- Decoding and rendering
Currently, most Audio Scene Interaction (ASI) metadata is provided in the MHAS layer,
but this does not contain all ASI metadata.
The contribution proposes to:
- Move rendering metadata from ASI to config and extension elements. In this way the decoder can be set up after parsing the config() element.
- Add other user interaction metadata to ASI. In this way, the decoder user interface can be set up after parsing ASI.
The presenter noted that no metadata is added, and none is removed in the proposed
changes.
It was the consensus of the Audio subgroup to adopt the proposed changes into the
Study on MPEG-H 3D Audio DAM3.
Max Neuendorf, FhG-IIS, presented
m37897
Review of MPEG-H 3DA Signaling
Max Neuendorf, Sascha Dick, Nikolaus
Rettelbach
The contribution proposes several minor changes to signalling methods in MPEG-H 3D
Audio.
It proposes to change:
- Signaling of start and stop indices in IGF such that they better match the acuity of the human auditory system. This would be done by increasing granularity by a factor of 2, which would require adding two bits, one for start and one for stop.
- Signaling of Stereo Filling in the Multichannel Coding Tool (MCT). This would permit stereo filling to be used in MCT. The contribution includes results of a listening test that show the improvement in performance when using Stereo Filling in MCT.
  o With differential scores, 3 items were better, one by nearly 20 MUSHRA points. These are exactly the items with the worst performance when using the current 3D Audio technology.
- Signal crossfade (or not), such that crossfades are not mandated when they don’t make sense, e.g. when transitioning from one program to another. In addition, the contribution removes the need for redundant config() information at the AudioPreRoll() level.
It was the consensus of the Audio subgroup to adopt the proposed changes into the
Study on MPEG-H 3D Audio DAM 3.
Sang Bae Chon, Samsung, presented
m37853
Clarifications on MPEG-H PDAM3
Sang Bae Chon, Sunmin Kim
The contribution notes errors in the Internal Channel portion of the DAM 3 text and also
in the Reference Software. It proposes the following corrections to the text, each of which
was reviewed by the presenter.
No | Change Type | Description
1 | Syntax | Configuration bitstream correction
2 | Editorial | Typo
3 | Editorial | Missing a group and clarifications
4 | Editorial | Clarification on the "output" channel
5 | Editorial | Clarification on the "output" channel
6 | Editorial | Clarification on the figure
7 | Editorial | Typo and clarification
8 | Editorial | Detailed description of signal and parameter flow in Figure
9 | Editorial | Detailed description of signal and parameter flow in Figure
10 | Editorial | Clarification on the equation
11 | Editorial | Typo
In addition, it proposes two bug fixes in the RM6 Reference Software.
It was the consensus of the Audio subgroup to adopt the proposed changes into the
Study on MPEG-H 3D Audio DAM 3.
Max Neuendorf, FhG-IIS, presented
m37863
Proposed Study on DAM3 for MPEG-H
3D Audio
Christian Neukam, Michael Kratschmer,
Max Neuendorf, Nikolaus Rettelbach,
Toru Chinen
The contribution proposes numerous editorial or minor changes to the DAM 3 text from
the 113th MPEG meeting, as shown in the following table.
The table in the contribution lists, for each issue, whether the change affects the specification text, the bitstream configuration, the bitstream payload, or the reference software (in some cases only as a side effect or due to signal processing, e.g. only at high target loudness). The issues addressed are:
- Application of Peak Limiter
- Default Parameters for DRC decoder
- Corrections in Technical Overview
- Clarification on IGF Whitening
- Correction in TCX windowing
- Clarification on post-processing of the synthesis signal (bass post filtering)
- Update of description of Adaptive Low-Frequency De-emphasis
- Correction of wrap-around in MCT
- Clarification on totalSfb
- Correction of inter-frame dependencies
- Alignment of bitstream element names and semantics for screen-related remapping
- Removal of unnecessary semantics
- Correction of remapping formulas
- Replacement of Figure for screen-related remapping
- Correction of non-uniform spread
- Application of spread rendering
- Clarification of the relationship of scene-displacement processing and DRC
- Clarification on the processing of scene-displacement data
- Definition of atan2()
- Correction of the HOA transform matrix
- Gain interaction as part of the metadata preprocessing
- Clarification of behavior after divergence processing
- Gain of original and duplicated objects (divergence)
- Exclusion of LFEs from diffuseness processing
- Gain adjustment of direct and diffuse paths
- Routing of diffuse path
- Correct reference distance in doubling factor formulas
- Fix of behavior for smaller radius values
- Minimum radius for depth spread rendering
- Correction in mae_Data()
- Correction of excluded sector signaling
- Clarifications wrt. receiverDelayCompensation
- Internal Channel signaling correction
- Editorial correction to Section 8 Updates to MHAS
- Typos
It was the consensus of the Audio subgroup to adopt the proposed changes into the
Study on MPEG-H 3D Audio DAM 3.
3D Audio Profiles
Christof Fersch, Dolby, presented
m37832
MPEG-H Part 3 Profile Definition
Christof Fersch
This is a joint input contribution from Dolby, DTS, and Zylia, which proposes a new profile,
Broadcast Baseline. The presenter noted that Dolby and DTS have cinema sound formats
(Atmos and DTS:X) that are composed only of dynamic objects.
The presenter described a common broadcast receiver architecture in which
there might be multiple audio decoders that interface to a separate rendering engine.
In conclusion, the presenter requested that the new profile definition be included in the
Study on DAM 3.
Discussion
Gregory Pallone, Orange, asked why the binaural rendering was removed from the
profile proposal. The presenter responded that the profile does not include any rendering
technology.
Juergen Herre, IAL/FhG-IIS, noted that some content providers wish to guarantee the
user experience via a normative profile. The presenter observed that broadcasters are
also free to use the MPEG-H LC Profile, with a normative renderer.
Jon Gibbs, Huawei, stated that as an implementer Huawei finds flexibility in rendering a
desirable option.
The presenter stated that the proposed profile can decode every LC Profile bitstream.
The Chair noted that the decoder can parse the LC Profile bitstream, but cannot be
conformant since no rendered output is produced.
Henney Oh, WILUS, stated that the Korean broadcast industry desires to have a codec
coupled with a renderer to guarantee a baseline user experience.
There will be further discussion in a joint meeting with Requirements.
Takehiro Sugimoto, NHK, presented
m37816
Proposed number of core channels for
LC profile of MPEG-H 3D Audio
Takehiro Sugimoto, Tomoyasu Komori
The contribution points out an ambiguity and also a limitation in the current LC Profile
specification. The presenter stated that customer or even government mandates will call for:
- 9 languages (beyond Japanese) in a multilingual service
- Audio description in every broadcast service (in both Japanese and English)
This leads to the following needs:
- 10 languages for dialog
- 2 languages for audio description
- 22.2 immersive audio program
The Chair noted that there is a distinction between:
- Channels in a bitstream
- Channels that are decoded by the 3D Audio Core Coder
- Channels that are output by the Format Converter
It was the consensus of the Audio subgroup to support the needs expressed in the
contribution, which will require clarification of the LC and High Profile specification.
Further Discussion
Max Neuendorf, FhG-IIS, presented a modified table for describing the LC Profile, in which
new columns were added:
- Stream chn
- Decoded chn
- Output chn
After some discussion by experts, and small changes to the presented text, it was the
consensus of the Audio subgroup to incorporate the table and text shown into the
LC and High Profile specification.
Max Neuendorf, FhG-IIS, presented
m37883
Complexity Constraints for MPEG-H
Max Neuendorf, Michael Kratschmer,
Manuel Jander, Achim Kuntz, Simone
Fueg, Christian Neukam, Sascha Dick,
Florian Schuh
The contribution proposed to change the LC Profile definition such that the maximum
complexity is reduced. The presenter noted that the proposal for Binaural Rendering has
already been discussed between experts with a lack of full agreement. Hence more
discussion is needed on this one proposal.
- DRC
- Arithmetic Coding (Chair’s note – need more complete description)
- LTPF
- OAM
- M/S, Complex prediction
- IGF
- MCT
- Sampling Rate
- Object Rendering
- Binaural Rendering
- IPF
- Size of Audio Scene Information
Tim Onders, Dolby, asked if all LC profile listening test bitstreams satisfied the
proposed constraints. The presenter believed that all did.
With the exception of the proposal for Binaural Rendering, and NFC in
SignalGroupTypeHOA, it was the consensus of the Audio subgroup to adopt all
proposed changes in the contribution into the Study on 3D Audio DAM 3 text.
The proposals for Binaural Rendering and for NFC in SignalGroupTypeHOA restrictions
need additional discussion.
Further Discussion
Concerning NFC in SignalGroupTypeHOA, it was the consensus of the Audio subgroup
to adopt the proposed change in the contribution into the Study on 3D Audio DAM 3
text.
3D Audio Reference Software
Achim Kuntz, FhG-IIS, presented
m37891
Software for MPEG-H 3D Audio RM6
Michael Fischer, Achim Kuntz, Sangbae
Chon, Łukasz Januszkiewicz, Sven
Kordon, Nils Peters, Yuki Yamamoto
This is a joint contribution from many companies, and the presenter gave credit to all for
their hard work. The code is in the contribution zip archive. The code implements:
- DAM3 from the 113th meeting, if #defines are set correctly
- Additional bugs that were identified and corrected, with the fixes delimited by #defines
It was the consensus of the Audio subgroup to take the contribution and associated
source code and make it RM6, Software for MPEG-H 3D Audio.
Taegyu Lee, Yonsei, presented
m37986
Bugfix on the software for MPEG-H 3D
Audio
Taegyu Lee, Henney Oh
The contribution noted that the current code for the FD binauralization tool supports a
block size of 4096 but not 1024. The contribution also presents a fix for this bug.
It was the consensus of the Audio subgroup to incorporate the proposed fix into RM7,
which will be built according to a workplan from this meeting.
Invariants CE
The Chair asked Audio experts if they felt it was appropriate to review and discuss this
meeting's documents concerning the Swissaudec Invariants CE even though Clemens
Par, Swissaudec, was not present to make the presentation and engage in the discussion.
No Audio experts objected to this proposal.
The Chair made a presentation concerning the Swissaudec Invariants CE. First, he noted
that the following document from the 113th meeting is the CE proposal containing the
CE technical description.
m37271
MPEG-H "Phase 2" Core Experiment –
Technical Description and Evidence of
Merit
Clemens Par
Second, he noted that a new version of the following contribution was uploaded to the
contribution register during the afternoon of Tuesday of the MPEG week.
m37529
Resubmission of Swissaudec's MPEG-H
"Phase 2" Core Experiment
Clemens Par
From the many items in the m37529 zip archive, the Chair presented the following two
documents:
- m37592-2/m37271_Delta_Final_Test_Report/m37271_l100825-01_Swissaudec_SenseLab042-15.pdf, and
- m37529/m37529_Computational_Complexity/m37529_m37271_ANNEX_Computational_complexity_clean.docx
The first is a final listening test report from DELTA SenseLab, the second a comparison
of the complexity of the CE technology and the RM technology. In fact, the Chair
created and presented his own complexity tables based on the Swissaudec contribution
and the appropriate MPEG contribution documents (see text between the "===="
delimiters below). The summary complexity table is shown here:
Comparison of the complexity of the Invariants CE to RM0 or Benchmark

CO_01 - CO_10
bitrate | CO submission | RM0/Benchmark PCU | CE | Ratio
48 kbps | 3DA Phase 1 + MPS | 63+40 = 103 | 80.5 | 0.78
64 kbps | 3DA Phase 1 + MPS | 63+40 = 103 | 80.5 | 0.78
96 kbps | 3DA Phase 1 | 63 | 80.5 | 1.28
128 kbps | 3DA Phase 1 | 63 | 80.5 | 1.28

CO_11 - CO_12
bitrate | CO submission | RM0/Benchmark PCU | CE | Ratio
48 kbps | 3DA Phase 1 (incl. SAOC 3D) | 63 + 35 = 98 | 80.5 | 0.82
64 kbps | 3DA Phase 1 (incl. SAOC 3D) | 63 + 35 = 98 | 80.5 | 0.82
96 kbps | 3DA Phase 1 (incl. SAOC 3D) | 63 + 35 = 98 | 80.5 | 0.82
128 kbps | 3DA Phase 1 (incl. SAOC 3D) | 63 + 35 = 98 | 80.5 | 0.82
Comments from Audio experts
The Chair understands that the CE proponent asserts performance equivalent to RM0
and complexity lower than RM0, however, the Chair notes that for some operating
points the CE is less computationally complex than RM0, and for others it is more
computationally complex.
Max Neuendorf, FhG-IIS, stated that subjective test results based on only three items do
not prove that subjective quality is preserved in all cases. Hence, there would be
considerable risk in adopting the technology. The Chair noted that nine other experts
expressed agreement with the previous statement. When the question was asked, "how
many experts agree to adopt the Swissaudec CE technology into the Study on DAM 3?",
no (zero) experts raised their hands.
Jon Gibbs, Huawei, noted that the Delta SenseLab test is statistically fairly weak in
terms of proving that system quality is truly equivalent.
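To illustrate why a three-item test gives weak statistical evidence of equivalence, one can look at the width of a confidence interval on the per-item score differences; the values below are invented for illustration and are not taken from the DELTA SenseLab report.

    import numpy as np
    from scipy import stats

    # Hypothetical per-item differences (CE minus RM0) in MUSHRA points.
    diffs = np.array([-2.0, 1.5, -4.0])
    mean = diffs.mean()
    sem = diffs.std(ddof=1) / np.sqrt(len(diffs))
    t_crit = stats.t.ppf(0.975, df=len(diffs) - 1)
    low, high = mean - t_crit * sem, mean + t_crit * sem
    # With only 3 items the interval is very wide, so "no significant difference"
    # falls well short of demonstrating equivalence.
    print(f"95% CI for the mean difference: [{low:.1f}, {high:.1f}]")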
The Chair noted that other CEs have more than 3 items in their subjective listening tests,
and that the risk of misjudging the subjective performance based on the submitted test
is a major issue that cannot be overcome by the associated computational complexity
figures.
Chair's presentation document:
m37271, MPEG-H "Phase 2" Core Experiment
113th MPEG meeting
Systems under test:
128 kb/s
  Sys1  Benchmark FhG 128 kb/s
  Sys2  CE 3DAC 128 (RM0+CE technology)
  Sys3  CPE 128 (Coded CE downmix)
  Sys4  CPE PCM (Uncoded (PCM) CE downmix)
  Sys5  High Anchor from CfP, 256 kb/s
96 kb/s
  Sys1  Benchmark FhG 96 kb/s
  Sys2  CE 3DAC 96 (RM0+CE technology)
  Sys3  CPE 96 (Coded CE downmix)
  Sys4  CPE PCM (Uncoded (PCM) CE downmix)
64 kb/s
  Sys1  RM0 64 kb/s
  Sys2  CE 3DAC 64 (RM0+CE technology)
  Sys3  CPE 64 (Coded CE downmix)
  Sys4  CPE PCM (Uncoded (PCM) CE downmix)
48 kb/s
  Sys1  RM0 48 kb/s
  Sys2  CE 3DAC 48 (RM0+CE technology)
  Sys3  CPE 48 (Coded CE downmix)
  Sys4  CPE PCM (Uncoded (PCM) CE downmix)
Signals used in the subjective test:
Signal | Channels
CO_01_Church | 22.2
CO_02_Mensch | 22.2
CO_03_SLNiseko | 22.2
[Chair presented DELTA SenseLab Report]
M34261, Description of the Fraunhofer IIS 3D-Audio Phase 2 Submission
and Benchmark
109th MPEG meeting, July 2014, Sapporo, Japan
Overview
Table 1: Technology used for CO submission incl. number of coded channels (#ch)

CO_01 - CO_10
bitrate | CO submission | #ch
48 kbps | 3DA Phase 1 + MPS | 9.1
64 kbps | 3DA Phase 1 + MPS | 9.1
96 kbps | 3DA Phase 1 | 9.1 / 9.0
128 kbps | 3DA Phase 1 | 9.1 / 9.0

CO_11 - CO_12
bitrate | CO submission | #ch
48 kbps | 3DA Phase 1 (incl. SAOC 3D) | 22.2
64 kbps | 3DA Phase 1 (incl. SAOC 3D) | 22.2
96 kbps | 3DA Phase 1 (incl. SAOC 3D) | 22.2
128 kbps | 3DA Phase 1 (incl. SAOC 3D) | 22.2
Table 2: Technology used for CO benchmark incl. number of coded channels (#ch)
(Blue entries indicate that the waveforms are identical to the corresponding waveforms of the submission)

CO_01 - CO_10
bitrate | CO benchmark | #ch
48 kbps | 3DA Phase 1 + MPS | 5.1
64 kbps | 3DA Phase 1 + MPS | 7.1
96 kbps | 3DA Phase 1 | 9.1 / 9.0
128 kbps | 3DA Phase 1 | 9.1 / 9.0

CO_11 - CO_12
bitrate | CO benchmark | #ch
48 kbps | 3DA Phase 1 (incl. SAOC 3D) | 22.2
64 kbps | 3DA Phase 1 (incl. SAOC 3D) | 22.2
96 kbps | 3DA Phase 1 (incl. SAOC 3D) | 22.2
128 kbps | 3DA Phase 1 (incl. SAOC 3D) | 22.2
PCU Complexity
As usually done within MPEG, the decoder and processing complexity is
specified in terms of PCU. Memory requirements are given in RCU. The
numbers given in Table 3 are worst-case estimates using the highest number out
of the 4 bitrates (48, 64, 96 and 128 kbps).
Table 3: Complexity estimation
Module | PCU
3DA-Core Decoder | 63
SAOC 3D | 35
MPEG Surround | 40
(RCU figures given in the table: 152 (ROM), 50 (RAM), 98 and 23)
Given the above, Phase 2 RM0 and Benchmark complexity is:

CO_01 - CO_10
bitrate | CO submission | #ch | PCU
48 kbps | 3DA Phase 1 + MPS | 9.1 | 63+40 = 103
64 kbps | 3DA Phase 1 + MPS | 9.1 | 63+40 = 103
96 kbps | 3DA Phase 1 | 9.1 / 9.0 | 63
128 kbps | 3DA Phase 1 | 9.1 / 9.0 | 63

CO_11 - CO_12
bitrate | CO submission | #ch | PCU
48 kbps | 3DA Phase 1 (incl. SAOC 3D) | 22.2 | 63 + 35 = 98
64 kbps | 3DA Phase 1 (incl. SAOC 3D) | 22.2 | 63 + 35 = 98
96 kbps | 3DA Phase 1 (incl. SAOC 3D) | 22.2 | 63 + 35 = 98
128 kbps | 3DA Phase 1 (incl. SAOC 3D) | 22.2 | 63 + 35 = 98

From m37271 v3:
ECMA-407 Decoder | 40.7 PCU
CICP-Downmix (9.1) | 0.39 PCU
CICP-Downmix (11.1) | 0.37 PCU
CICP-Downmix (14.0) | 0.27 PCU
Complexity for the RM0 6-channel 3D Audio Core is ~ 3*13.2 = 39.6 PCU
Total PCU complexity for 22.2: 80.3 PCU
Total PCU complexity for 9.1: 80.69 PCU
Total PCU complexity for 11.1: 80.67 PCU
Total PCU complexity for 14.0: 80.57 PCU
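The totals above are simple sums of the component figures quoted from the contributions; a minimal check of that arithmetic (values copied from the lines above):

    # CE complexity totals, in PCU, from the figures above.
    ecma_407_decoder = 40.7
    rm0_core_6ch = 3 * 13.2            # ~39.6 PCU
    downmix = {"22.2": 0.0, "9.1": 0.39, "11.1": 0.37, "14.0": 0.27}
    for config, pcu in downmix.items():
        total = ecma_407_decoder + rm0_core_6ch + pcu
        print(config, round(total, 2))  # 80.3, 80.69, 80.67, 80.57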
4.2.3
Maintenance
Seungkwon Beack, ETRI, presented
m37834
Maintenance report on MPEG Surround
Extension tool for internal sampling-rate
Seungkwon Beack, Taejin Lee, Jeongil
Seo, Adrian Murtaza
MPS
The presenter reminded Audio experts that, at the 113th meeting, the MPEG Surround
specification was revised to permit greater flexibility with respect to MPEG Surround
internal sampling rate and the number of QMF bands. Specifically, support for 64 QMF
bands permits it to interface directly with MPEG-H 3D Audio in the QMF domain. The
contribution presents listening test results that document the performance of this operating point.
Systems under test:
- Sys1: Revised system
- Sys2: Original system
MUSHRA test results showed:
- No difference for average scores
- One item better for differential scores, no difference overall
In conclusion, the test results showed that the current specification works.
Adrian Murtaza, FhG-IIS, presented
m37877
Proposed Study on DAM3 for MPEG-D
MPEG Surround
Adrian Murtaza, Achim Kuntz
MPS
The contribution relates to the internal sampling rate issue addressed by the previous
contribution. Specifically, it mandates that MPEG-H 3D Audio shall only be used with
MPEG Surround when MPEG Surround is in 64-band QMF mode.
It was the consensus of the Audio subgroup to incorporate the contribution into a Study
on DAM3 for MPEG-D MPEG Surround.
Adrian Murtaza, FhG-IIS, presented
m37879
Thoughts on MPEG-D SAOC Second Edition
Adrian Murtaza, Jouni Paulus, Leon Terentiv
SAOC
This contribution is a draft for what will become MPEG-D SAOC Second Edition. This
Draft Second Edition contains:
- COR 1
- COR 2
- DCOR 3 (in the final text of the Second Edition)
- AMD 3 (Dialog Enhancement)
The Chair thanked Audio experts for undertaking the roll-up to make the SAOC Second
Edition.
Adrian Murtaza, FhG-IIS, presented
m37878
Proposed Corrigendum to MPEG-D
SAOC
Adrian Murtaza, Jouni Paulus, Leon
Terentiv
SAOC
The contribution proposes additional corrections to be added to the SAOC Defect Report
from the 106th and 112th meeting to make the text of a DCOR 3 to issue from this
meeting.
Corrections are:
- Better organization of text
- Clarifications of text
- Corrections to text
It was the consensus of the Audio subgroup to incorporate the text from the two defect
reports (from the 106th and 112th meetings) and the corrections in this contribution into a
DCOR 3 to issue from this meeting.
Adrian Murtaza, FhG-IIS, presented
m37875
Proposed Study on DAM4 for MPEG-D
SAOC
Adrian Murtaza, Leon Terentiv, Jouni
Paulus
SAOC
The contribution relates to Conformance, and proposes:
- Corrections to the definition of "correct" decoding (i.e. a maximum PCM output difference of 32 for 16-bit PCM output, and a maximum RMS difference)
- Numerous other corrections to the text of the conformance document
- Definition of 23 new conformance sequences for SAOC-DE. The conformance streams are available now. In total, there are 71 SAOC conformance streams, 48 for SAOC and 23 for SAOC-DE.
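As an illustration of how a criterion of this shape can be applied (the thresholds below are placeholders, not the values mandated by the SAOC conformance text), a decoded output could be compared against the reference waveform as follows:

    import numpy as np

    def meets_conformance(decoded, reference, max_abs_diff=32, max_rms_diff=10.0):
        # decoded/reference: int16 PCM arrays of equal length.
        # Both limits are illustrative placeholders, not normative values.
        diff = decoded.astype(np.int64) - reference.astype(np.int64)
        abs_ok = np.max(np.abs(diff)) <= max_abs_diff
        rms_ok = np.sqrt(np.mean(diff.astype(np.float64) ** 2)) <= max_rms_diff
        return abs_ok and rms_ok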
It was the consensus of the Audio subgroup to make the contribution a Study on DAM4
for MPEG-D SAOC. The conformance streams are already on the MPEG SVN server,
as indicated in the Study On text.
Adrian Murtaza, FhG-IIS, presented
m37876
Proposed Study on DAM5 for MPEG-D
SAOC
Adrian Murtaza, Leon Terentiv, Jouni
Paulus
SAOC
The contribution relates to the Reference Software, and proposes several corrections and
additional functionality for the Reference Software.
It was the consensus of the Audio subgroup to make the contribution a Study on DAM5
for MPEG-D SAOC. The reference software is already on the MPEG SVN server, as
indicated in the Study On text.
Christof Fersch, Dolby, gave a verbal presentation of
m37791
Report on corrected USAC conformance
bitstreams
Christof Fersch
USAC
The one conformance stream previously identified as a problem was re-defined and regenerated so as to resolve all issues.
Yuki Yamamoto, Sony, gave a verbal presentation of
m37826
Sony's crosscheck report on USAC
conformance
Yuki Yamamoto, Toru Chinen
USAC
Sony experts confirmed that the new conformance stream was able to be decoded
successfully.
The Chair noted that the fully correct and cross-checked set of USAC conformance
streams will be made available attached to ISO/IEC 23003-3:2012/AMD 1 COR 2,
Corrections to Conformance. This will be re-visited when the DoC on DCOR 2 is
reviewed.
Max Neuendorf, FhG-IIS, presented
m37886
Constraints for USAC in adaptive
streaming applications
Max Neuendorf, Matthias Felix, Bernd
Herrmann, Bernd Czelhan, Michael
Haertl
USAC
The contribution describes and clarifies numerous situations that arise in streaming use
cases involving USAC. The contribution proposes to add text to:
- The body of the USAC specification where appropriate
- Annex B, Encoder Description
- Annex F, Audio/Systems Interaction
It was the consensus of the Audio subgroup to incorporate the contribution text into
ISO/IEC 23003-3:2012/FDAM 3, Support for MPEG-D DRC and IPF.
Michael Kratschmer, FhG-IIS, presented
m37884
Parametric Limiter for MPEG-D DRC
Michael Kratschmer, Frank Baumgarte
DRC
The presenter gave a brief overview of how MPEG-D DRC works and what it can be
used for. He further noted that peak limiting and clipping prevention are typically out of
scope for most MPEG audio decoders. The contribution specifically addresses clipping
prevention, via the specification of a new parametric DRC type for peak
limiting. This new functionality does not require any new tools or architecture. What is
required is a new payload that contains:
- Limiting threshold
- Smoothing attack and release time constants
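For illustration only (this is not the parametric DRC payload or processing defined by the amendment), a limiter of this general shape derives a gain from a threshold and smooths it with separate attack and release constants; all parameter values below are invented:

    import numpy as np

    def peak_limit(x, threshold=0.89, attack=0.9, release=0.999):
        # x: sequence of float samples in [-1, 1]; parameters are illustrative only.
        gain = 1.0
        y = np.empty_like(np.asarray(x, dtype=float))
        for n, sample in enumerate(x):
            peak = abs(sample)
            target = 1.0 if peak <= threshold else threshold / peak
            coeff = attack if target < gain else release  # fall fast, recover slowly
            gain = coeff * gain + (1.0 - coeff) * target
            y[n] = gain * sample
        return y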
It was the consensus of the Audio subgroup to incorporate the contribution text into
ISO/IEC 23003-4:2015 DAM 1, Parametric DRC, Gain Mapping and Equalization
Tools, to be produced at this meeting.
Michael Kratschmer, FhG-IIS, presented
m37882
Proposed Corrigendum to MPEG-D
DRC
Michael Kratschmer, Frank Baumgarte
DRC
The contribution notes that each DRC configuration requires a separate metadata set. It
proposes to adopt the downmixId mechanism for downmix configurations into a new
drcSetId for DRC configurations. This does not require a syntax change, only a
semantics change.
It was the consensus of the Audio subgroup to incorporate the contribution text into a
Defect Report on MPEG-D DRC.
Frank Baumgarte, Apple, presented
m37895
Study on MPEG-D DRC 23003-4
PDAM1
Frank Baumgarte, Michael Kratschmer
DRC
The presenter reviewed that the PDAM 1 text specifies the following tools:
- Parametric DRC
- Gain mapping
- Loudness EQ
- Generic EQ
Since the 113th meeting, when the PDAM was issued, the technology has been implemented,
which has identified issues in the specification. Many of those issues are captured in the
proposed Study On text. The issues are:
- Complexity management. The Study text defines:
  o How to estimate complexity
  o Conditions under which EQs and other gains can be discarded so as to satisfy maximum decoder complexity limits
- Loudness measurement
  o Update to the latest ITU recommendation
- EQ phase alignment
  o Need for further restrictions on filter phase
- Ordering of boxes in the MP4 File Format
  o To simplify decoder parser complexity
- Numerous other clarifications and editorial changes
It was the consensus of the Audio subgroup to make the contribution text ISO/IEC
23003-4:2015 DAM 1, Parametric DRC, Gain Mapping and Equalization Tools, to be
produced at this meeting.
Michael Kratschmer, FhG-IIS, presented
m37881
Proposed RM7 of MPEG-D DRC
software
Michael Kratschmer, Frank Baumgarte
DRC
The contribution is a proposed RM7 for the MPEG-D DRC Reference Software. It
incorporates all tools in DRC, including:
- ISO/IEC 23003-4:2015, Dynamic Range Control
- 15842, Text of ISO/IEC 23003-4:2015 PDAM 1, Parametric DRC, Gain Mapping and Equalization Tools
- m37895, Study on MPEG-D DRC 23003-4 PDAM1
It was the consensus of the Audio subgroup to add to this the technology in
- m37884, Parametric Limiter for MPEG-D DRC
and issue this as
- ISO/IEC 23003-4:2015/PDAM 2, DRC Reference Software
from this meeting, with a
- 3 week editing period and
- A resolution for a 2 month ballot
4.3
Discussion of Remaining Open Issues
DAM 3 Open Issues
NFC Syntax
Gregory Pallone, Orange, presented syntax and supporting text pertaining to carriage of
NFC information
Experts agreed that the current syntax will work and support the Orange use case if the
NfcReferenceDistance is signalled in the bitstream.
Experts from Technicolor and Samsung stated that any change, even editorial, causes
more work for implementers, and so would prefer that syntax remain as in the DAM 3
text.
It was the consensus of the Audio subgroup to keep syntax as in the DAM 3 text from
the 113th meeting.
NFC Processing
Gregory Pallone, Orange, presented information pertaining to NFC processing.
The Chair asked the question: what are the use cases in LC Profile decoders that require
NFC processing? The presenter noted that when listening in the car (and others) the
speakers might be too close.
Revisions to LC Profile specification
Max Neuendorf, FhG-IIS, presented edited LC Profile specification text concerning:
- Bitstream channels
- 3D Audio core decoded channels
- Speaker output channels
The text is shown between the "====" delimiters, below.
It was the consensus of the Audio subgroup to adopt this text into the Study on MPEG-H 3D Audio DAM 3 text.
Table P2 — Levels and their corresponding restrictions for the Low Complexity Profile

Level | Max. sampling rate | Max. no. of core ch in compressed data stream | Max. no. of decoder processed core ch | Max. no. of loudspeaker output ch | Example of max. loudspeaker configuration | Max. no. of decoded objects | Example of a max. Config C+O | Max. HOA order | Example of max. HOA order + O
1 | 48000 | 10 | 5 | 2 | 2.0 | 5 | 2 ch + 3 static obj | 2 | 2nd order + 3 static obj
2 | 48000 | 18 | 9 | 8 | 7.1 | 9 | 6 ch + 3 static obj | 4 | 4th order + 3 static obj
3 | 48000 | 32 | 16 | 12 | 11.1 | 16 | 12 ch + 4 obj | 6 | 6th order + 4 obj
4 | 48000 | 56 | 28 | 24 | 22.2 | 28 | 24 ch + 4 obj | 6 | 6th order + 4 obj
5 | 96000 | 56 | 28 | 24 | 22.2 | 28 | 24 ch + 4 obj | 6 | 6th order + 4 obj
— The use of switch groups determines the subset of core channels out of the core
channels in the bitstream that shall be decoded.
— If the mae_AudioSceneInfo() contains switch groups (mae_numSwitchGroups>0),
then the elementLengthPresent flag shall be 1
Further discussion on Zylia CE and LC Profile
Tomasz Żernicki and Lukasz Januszkiewicz from Zylia presented pooled listening test
results for TCC at 20 kbps (TCC+eSBR) and 64 kbps (TCC+IGF). There is a
significant improvement for 7 items at 20 kbps and 2 items at 64 kbps. The quality of
no item was degraded.
The new results (64 kbps) relate to the application of TCC at higher bitrates with the IGF tool
activated. These new results demonstrate that TCC works efficiently for high-bitrate use
cases and can still introduce a statistically significant improvement in quality.
Tomasz Żernicki, Zylia, proposed to include the TCC tool in the MPEG-H 3D Audio Low
Complexity Profile. Additionally, relevant changes to the DAM3 text were proposed.
The presenter proposed to restrict the tool for use in LC Profile according to the table
below
Table PX — Restrictions applying to TCC tool according to the Levels of the Low
Complexity Profile
Mpegh3daProfileLevelIndication | Maximum number of TCC channels | MAX_NUM_TRJ
1 | 2 | 4
2 | 4 | 4
3 | 4 | 4
4 | 8 | 4
5 | 0 | 0
Max Neuendorf, FhG-IIS, noted that the TCC processing presented to this meeting used
the ODFT synthesis method.
Juergen Herre, IAL/FhG-IIS, asked why the pitch pipe item, which is a highly tonal signal, did not
show improvement using the TCC tool. The presenter stated that the encoder signal
classifier disables the tool for this class of signal.
Max Neuendorf, FhG-IIS, asked to clarify the description of complexity based on the
synthesis method used. The presenters agreed that a short update regarding the Taylor series
cosine function calculation could be added to the technical description of the tool.
Christof Fersch, Dolby, noted that this is a small change that can be added to DAM3
during present meeting.
Discussion on new tools LC Profile
The Audio Chair observed that the TCC tool and the HREP tool have both been requested for
adoption into the LC Profile and, furthermore, that the Chair has suggested the possibility of
making all NFC processing informative in the LC Profile. Furthermore, experts from Korea,
the first customer of MPEG-H 3D Audio, have expressed a strong request that the set of
tools in the LC Profile should remain unchanged. The Chair suggested that a possible
consensus position that resolves many open issues is to:
- Not admit TCC into the LC Profile
- Not admit HREP into the LC Profile
- Keep NFC signaling syntax and NFC processing in the LC Profile as specified in the DAM 3 from the 113th MPEG meeting.
Henney Oh, WILUS, spoke for the Korean broadcast industry to say that Korea is
implementing MPEG-H 3D Audio codec now, and it would be best if there is no change
in the LC Profile toolbox. Sang Bae Chon, Samsung, speaking as a Consumer
Electronics device manufacturer, stated that implementers are on a similarly aggressive
timeline. A change to the LC Profile toolbox will jeopardize this timeline.
Concerning the set of tools in LC Profile and also NFC processing mandated in LCP
profile, it was the consensus of the Audio subgroup to stay as is specified in the DAM 3
text from the 113th meeting.
HREP
Christian Neukam, FhG-IIS, gave a short presentation on HREP. The presentation asked
to have HREP in all processing paths of the 3D Audio decoder: channels, objects,
SAOC-3D, HOA.
Christof Fersch, Dolby, stated some concern as to whether HREP will actually deliver a
quality increase in the SAOC and HOA signal paths. Christian Neukam, FhG-IIS, stated
that HREP is a core coder tool and as such will work on all signal types, just as the 3D
Audio core provides efficient compression for all signal types. Nils Peters, Qualcomm,
noted that HOA works well with the core coder, and as such it is his expectation that it will
work with HREP. Oliver Wuebbolt, Technicolor, noted that "channel-based" coding
tools work equally well for HOA signals. Adrian Murtaza, FhG-IIS, noted that HREP gives
more bits to the core coder and hence can be expected to make signals sound better.
Christian Neukam, FhG-IIS, noted that HREP is an identity system absent any coding.
Tim Onders, Dolby, noted that HREP "shifts a more objectionable artifact to a less
objectionable artifact," and questioned whether subsequent SAOC and HOA processing
could "color" the artifacts, resulting in a worse result. Oliver Wuebbolt, Technicolor,
reminded experts that this tool supplies more than 15 MUSHRA points of performance
gain for some signals (e.g. selected applause signals). The Chair attempted to sum up,
and noted that a 15 point performance gain is greater than almost any CE, and hence
expects that HREP will deliver good performance when put in the SAOC and HOA
signal paths.
It was the consensus of the Audio subgroup to accept the HREP tool into the SAOC and
HOA signal paths (so that it is now present in all signal paths), and to document this in
Study on MPEG-H 3D Audio DAM 3.
Qualcomm, multiple items
Nils Peters, Qualcomm, gave a presentation that re-iterated the points raised in his
contribution m37874. The resolution for each item is recorded below.
CICP Extensions – Study as AhG mandate.
Screen adaptation and Signaling of screen size – pre-compute information ahead of a
content switch. The Chair noted that there was no need to change the existing syntax, but
only to recommend a small number (e.g. 8 of the 9x9x9 possibilities) of preferred screen sizes
and to describe how the adaptation information could be pre-computed. It was the
consensus of the Audio subgroup to capture the proposal, with the modifications of the
Chair, into an informative Annex.
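A sketch of the pre-computation idea discussed above, with invented helper names (compute_adaptation_matrix is only a stand-in for the actual remapping math, and the screen-size values are placeholders, not the recommended set):

    import numpy as np
    from functools import lru_cache

    PREFERRED_SCREEN_SIZES = (32, 42, 55, 65, 75, 85, 98, 110)  # placeholder sizes

    def compute_adaptation_matrix(production_size, reproduction_size):
        # Stand-in for the screen-related remapping computation: a scaled identity
        # so the sketch runs; the real matrix follows the specification.
        return (reproduction_size / production_size) * np.eye(3)

    @lru_cache(maxsize=None)
    def adaptation_matrix(production_size, reproduction_size):
        return compute_adaptation_matrix(production_size, reproduction_size)

    def prewarm(reproduction_size):
        # Pre-compute ahead of a content switch for a recommended set of sizes,
        # so that switching content does not cause a complexity peak.
        for production_size in PREFERRED_SCREEN_SIZES:
            adaptation_matrix(production_size, reproduction_size)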
Rendering of HOA and static objects –
Deep Sen, Qualcomm, noted that the content provider might be mixing using only an
HOA panner and so it may be advantageous to place dialog objects using the HOA
panner.
Achim Kuntz questioned if HOA panning is desirable, in that some objects would tend
to become more diffuse. The presenter confirmed that an HOA panner would tend to excite
more speakers.
This proposal was not accepted.
Support for horizontal-only HOA content – the proposal is to add one new codebook so
that truly horizontal-only HOA content can be rendered as such.
It was the consensus of the Audio subgroup to adopt the proposal into the Study on MPEG-H 3D Audio DAM 3.
Panasonic presentation
The presenter observed that both the "straightforward" and the "informative Annex G" HOA
renderers will be used in LC Profile decoders.
The presenter recommended adding one sentence. This was discussed, with some
experts supporting the new text and some wishing it to be further edited. The Chair
noted that, for this matter, the DAM text from the 113th meeting is correct and proposed
that there be no changes.
Proposed changes were not accepted.
Further Discussion on Panasonic proposals
Concerning Section 18, "Low Complexity HOA Spatial Decoding and Rendering,"
Panasonic experts presented new text, as shown below within the "#####" delimiters,
where the red text is the proposed new text.
It was the consensus of the Audio subgroup to adopt the newly proposed text into the Study
on MPEG-H 3D Audio DAM 3.
(The following text is extracted from Section 18 of DAM3 for your reference.)
######################################################################
18
Low Complexity HOA Spatial Decoding and Rendering
Replace the following sentence at the beginning of subclause 12.4.2.1 "General Architecture"
The architecture of the spatial HOA decoder is depicted in Figure 40.
With
The Spatial HOA decoding process describes how to reconstruct the HOA coefficient sequences
from the HOA transport channels and the HOA side information (HOA extension payload).
Subsequently the HOA rendering matrix is applied to the HOA coefficient sequences to get the
final loudspeaker signals. The HOA renderer is described in section 12.4.3. Both processing
steps can be very efficiently combined, resulting in an implementation with much lower
computational complexity. Since the HOA synthesis in the decoding process can be expressed
as a synthesizing matrix operation, the rendering matrix can be applied to the synthesizing
matrix for such combination. This realizes “decoding and rendering” in one-step rendering
without having to reconstruct full HOA coefficient sequences when they are not necessary. A
detailed description how to integrate the spatial HOA decoding and rendering can be found in
Annex G. However, for didactical reasons the processing steps are described separately in the
following subclauses.
The architecture of the spatial HOA decoder is depicted in Figure 40.
Replace ANNEX G (informative) "Low Complexity HOA Rendering" with the following text:
Annex G
(informative)
######################################################################
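The one-step rendering described in the quoted replacement text amounts to pre-multiplying the rendering matrix by the synthesizing matrix; the small numerical illustration below uses random stand-in matrices and dimensions, not values from the specification:

    import numpy as np

    rng = np.random.default_rng(0)
    num_coeffs, num_transport, num_speakers, num_samples = 16, 6, 22, 1024
    S = rng.standard_normal((num_coeffs, num_transport))   # stand-in synthesizing matrix
    R = rng.standard_normal((num_speakers, num_coeffs))    # stand-in rendering matrix
    x = rng.standard_normal((num_transport, num_samples))  # transport channel signals

    two_step = R @ (S @ x)   # reconstruct HOA coefficients, then render
    one_step = (R @ S) @ x   # combined matrix applied once per sample block
    assert np.allclose(two_step, one_step)
    # Per-sample multiplies drop from num_coeffs*(num_transport + num_speakers)
    # to num_speakers*num_transport once R @ S is pre-computed.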
Binaural Rendering and selection of impulse responses
Max Neuendorf, FhG-IIS, made a presentation on new text. This text was discussed, and
edited.
It was the consensus of the Audio subgroup to adopt the newly proposed text into the Study
on MPEG-H 3D Audio DAM 3.
5
5.1
Closing Audio Plenary and meeting deliverables
Discussion of Open Issues
Remaining open issues were discussed and resolved. These pertained to 3D Audio and
are documented at the end of the 3D Audio Task Group section.
5.2
Recommendations for final plenary
The Audio recommendations were presented and approved.
5.3
Disposition of Comments on Ballot Items
DoC on Audio-related balloted items were presented and approved.
5.4
Responses to Liaison and NB comments
Liaison statement and NB comment responses generated by Audio were presented and
approved.
5.5
Establishment of Ad-hoc Groups
The ad-hoc groups shown in the following table were established by the Audio subgroup.
Title | Mtg
AHG on 3D Audio and Audio Maintenance | No
AHG on Responding to Industry Needs on Adoption of MPEG Audio | No
5.6
Approval of output documents
All output documents, shown in Annex D, were presented in Audio plenary and were
approved.
5.7
Press statement
There was no Audio contribution to the press statement.
5.8
Agenda for next meeting
The agenda for the next MPEG meeting is shown in Annex F.
5.9
All other business
There was none.
5.10
Closing of the meeting
The 114th Audio Subgroup meeting was adjourned Friday at 13:30 hrs.
Annex A
Participants
There were 43 participants in the Audio subgroup meeting, as shown in the following table.
First | Last | Country | Affiliation | Email
Seungkwon | Beack | KR | ETRI | skbeack@etri.re.kr
Toru | Chinen | JP | Sony | Toru.Chinen@jp.sony.com
Sang | Chon | KR | Samsung Electronics | sb.chon@samsung.com
Jürgen | Herre | DE | Int'l Audio Labs Erlangen / Fraunhofer | hrr@iis.fraunhofer.de
Taejin | Lee | KR | ETRI | tjlee@etri.re.kr
Max | Neuendorf | DE | Fraunhofer IIS | max.neuendorf@iis.fraunhofer.de
Henney | Oh | KR | WILUS | henney.oh@wilusgroup.com
Gregory | Pallone | FR | Orange | gregory.pallone@orange.com
Schuyler | Quackenbush | USA | ARL | srq@audioresearchlabs.com
Takehiro | Sugimoto | Japan | NHK | sugimoto.t-fg@nhk.or.jp
Jeongook | Song | US | Qualcomm | jeongook@qti.qualcomm.com
Oliver | Wuebbolt | DE | Technicolor | oliver.wuebbolt@technicolor.com
Tomasz | Zernicki | PL | Zylia | tomasz.zernicki@zylia.pl
Yuki | Yamamoto | Japan | Sony Corporation | Yuki.Yamamoto@jp.sony.com
Christof | Fersch | Germany | Dolby | christof.fersch@dolby.com
Taegyu | Lee | KR | Yonsei University | tglee@dsp.yonsei.ac.kr
Hiroyuki | Ehara | Japan | Panasonic Corporation | ehara.hiroyuki@jp.panasonic.com
Frank | Baumgarte | USA | Apple Inc. | fb@apple.com
Werner | De Bruijn | NL | Philips | werner.de.bruijn@philips.com
Panji | Setiawan | Germany | Huawei Technologies | panji.setiawan@huawei.com
Nils | Peters | US | Qualcomm | npeters@qti.qualcomm.com
Harald | Fuchs | Germany | Fraunhofer IIS | harald.fuchs@iis.fraunhofer.de
Michael | Kratschmer | Germany | Fraunhofer IIS | michael.kratschmer@iis.fraunhofer.de
Achim | Kuntz | Germany | Fraunhofer IIS | achim.kuntz@iis.fraunhofer.de
Martin | Morrell | USA | Qualcomm | mmorrell@qti.qualcomm.com
Łukasz | Januszkiewicz | Poland | Zylia Sp. z o.o. | lukasz.januszkiewicz@zylia.pl
Adrian | Murtaza | DE | Fraunhofer IIS | adrian.murtaza@iis.fraunhofer.de
Moo Young | Kim | USA | Qualcomm | kimyoung@qti.qualcomm.com
Christian | Neukam | Germany | Fraunhofer | christian.neukam@iis.fraunhofer.de
Kai | Wu | Singapore | Panasonic R&D Center | kai.wu@sg.panasonic.com
Timothy | Onders | US | Dolby Laboratories, Inc. | timothy.onders@dolby.com
Deep | Sen | USA | Qualcomm | dsen@qti.qualcomm.com
Peisong | Chen | USA | Broadcom | peisong.chen@broadcom.com
Haiting | Li | CN | Huawei | lihaiting@huawei.com
Jon | Gibbs | UK | Huawei Technologies Co., Ltd. | jon.gibbs@oblisco.com
Ken | McCann | UK | Zetacast/Dolby | ken@zetacast.com
Jason | Filos | USA | Qualcomm | jfilos@qti.qualcomm.com
S M Akramus | Salehin | USA | QUALCOMM | ssalehin@qti.qualcomm.com
Ingo | Hofmann | Germany | Fraunhofer IIS | ingo.hofmann@iis.fraunhofer.de
Phillip | Maness | USA | DTS, Inc. | phillip.maness@dts.com
Shankar | Shivappa | USA | Qualcomm Technologies Inc | sshivapp@qti.qualcomm.com
Hyunho | Ju | KR | Korea Electronics Technology Institute | hyunho@keti.re.kr
Ted | Laverty | USA | DTS, Inc. | ted.laverty@dts.com
Annex B
Audio Contributions and Schedule
3D Audio Phase 2 AhG Meeting
Sunday
1300-1800
m37930
m37932
m37947
m37933
m37833
m37715
m37889
m37894
Topic
3D Audio Phase 2 CEs
TCC
Zylia Poznan Listening Test Site Properties
Authors
Corrections and clarifications on Tonal
Component Coding
Listening Test Results for TCC from FhG IIS
Low Complexity Tonal Component Coding
HREP
ETRI's cross-check report for CE on High
Resolution Envelope Processing
Cross Check Report for CE on HREP (Test Site
Fraunhofer IDMT)
3DA CE on High Resolution Envelope
Processing (HREP)
HOA Issues
Proposed modifications on MPEG-H 3D Audio
Presented
Jakub Zamojski, Lukasz Januszkiewicz,
Tomasz Zernicki
Lukasz Januszkiewicz, Tomasz Zernicki
x
Sascha Dick, Christian Neukam
Tomasz Zernicki, Lukasz
Januszkiewicz, Andrzej Rumiaski,
Marzena Malczewska
x
x
Seungkwon Beack, Taejin Lee
x
Judith Liebetrau, Thomas Sporer,
Alexander Stojanow
Sascha Disch, Florin Ghido, Franz
Reutelhuber, Alexander Adami, Juergen
Herre
x
Gregory Pallone
x
x
x
Review of AhG Report
114th MPEG Audio subgroup meeting
Monday
0900-1300
1300-1400
1400-1430
m37805
m37902
m37901
m37987
1430-1800
m37834
m37877
m37875
Topic
MPEG Plenary
Lunch
Audio Plenary
Welcome and Remarks
Report on Sunday Chairs meeting
Review main tasks for the week
113th MPEG Audio Report
AHG on 3D Audio and Audio Maintenance
AHG on Responding to Industry Needs on
Adoption of MPEG Audio
Review of Liaison Documents
Information on Korean standard for
terrestrial UHD broadcast services
Maintenance
MPEG-D
Maintenance report on MPEG Surround
Extension tool for internal sampling-rate
Proposed Study on DAM3 for MPEG-D
MPEG Surround
x
x
x
Henney Oh
x
Seungkwon Beack, Taejin Lee, Jeongil
Seo, Adrian Murtaza
Adrian Murtaza, Achim Kuntz
x
x
x
m37879
Thoughts on MPEG-D SAOC Second
Edition
m37791
Report on corrected USAC conformance
Christof Fersch
m37878
Presented
Schuyler Quackenbush
Schuyler Quackenbush
Juliane Borsum
Adrian Murtaza, Leon Terentiv, Jouni
Paulus
Adrian Murtaza, Leon Terentiv, Jouni
Paulus
Adrian Murtaza, Jouni Paulus, Leon
Terentiv
Adrian Murtaza, Jouni Paulus, Leon
Terentiv
m37876
Proposed Study on DAM4 for MPEG-D
SAOC
Proposed Study on DAM5 for MPEG-D
SAOC
Proposed Corrigendum to MPEG-D SAOC
Authors
x
x
x
x
m37826
m37886
m37884
m37895
m37882
m37881
Tuesday
0900-1000
m37531
m37896
m37827
m37885
bitstreams
Sony's crosscheck report on USAC
conformance
Constraints for USAC in adaptive streaming
applications
Parametric Limiter for MPEG-D DRC
Study on MPEG-D DRC 23003-4 PDAM1
Proposed Corrigendum to MPEG-D DRC
Proposed RM7 of MPEG-D DRC software
Joint with Systems
3D Audio DAM 4
Study Of 23008-3 DAM4 (w15851) Carriage Of Systems Metadata
Study on Base Media File Format 14496-12
PDAM1
Final AhG document presentations
Proposed simplification of HOA parts in
MPEG-H 3D Audio Phase 2
Phase I Corrections
Proposed DCOR to MPEG-H 3D Audio
edition 2015
m37770
Phase II Corrections and Clarifications
Corrections and Clarifications on MPEG-H
3D Audio DAM3
m37874
Thoughts on MPEG-H 3D Audio
1300-1400
Lunch
m37892
Phase II Corrections and Clarifications
Review of MPEG-H 3DA Metadata
m37897
Review of MPEG-H 3DA Signaling
m37853
m37863
Clarifications on MPEG-H PDAM3
Proposed Study on DAM3 for MPEG-H 3D
Audio
m37832
m37891
Proposed 3D Audio Profile
MPEG-H Part 3 Profile Definition
3D Audio Reference Software
Software for MPEG-H 3D Audio RM6
m37986
Bugfix on the software for MPEG-H 3D
Audio
1800-
Chairs meeting
Wednesday
0900-1100
MPEG Plenary
1130-1300
Joint with All at Audio
MPEG Vision
Yuki Yamamoto, Toru Chinen
x
Max Neuendorf, Matthias Felix, Bernd
Herrmann, Bernd Czelhan, Michael Haertl
x
Michael Kratschmer, Frank Baumgarte
Frank Baumgarte, Michael Kratschmer
Michael Kratschmer, Frank Baumgarte
Michael Kratschmer, Frank Baumgarte
x
x
x
x
Michael Dolan, Dave Singer, Schuyler
Quackenbush
x
x
Hiroyuki EHARA, Sua-Hong NEO, Kai
WU
x
Max Neuendorf, Achim Kuntz, Simone
Fueg, Andreas Hoelzer, Michael
Kratschmer, Christian Neukam, Sascha
Dick, Elena Burdiel, Toru Chinen
x
Oliver Wuebbolt, Johannes Boehm,
Alexander Krueger, Sven Kordon, Florian
Keiler
Nils Peters, Deep Sen, Jeongook Song,
Moo Young Kim
x
Simone Fueg, Christian Ertel, Achim Kuntz
Max Neuendorf, Sascha Dick, Nikolaus
Rettelbach
Sang Bae Chon, Sunmin Kim
Christian Neukam, Michael Kratschmer,
Max Neuendorf, Nikolaus Rettelbach, Toru
Chinen
x
Christof Fersch
x
Michael Fischer, Achim Kuntz, Sangbae Chon, Łukasz Januszkiewicz, Sven Kordon, Nils Peters, Yuki Yamamoto
Taegyu Lee, Henney Oh
x
x
x
x
x
x
x
1300-1400
Lunch
1400-1500
1500-1600
m37816
Document Preparation
3D Audio LC Profile
Proposed number of core channels for LC
profile of MPEG-H 3D Audio
Complexity Constraints for MPEG-H
Takehiro Sugimoto, Tomoyasu Komori
x
Max Neuendorf, Michael Kratschmer,
Manuel Jander, Achim Kuntz, Simone
Fueg, Christian Neukam, Sascha Dick,
Florian Schuh
x
1600-1700
m37529
Phase II CE
Resubmission of Swissaudec's MPEG-H
"Phase 2" Core Experiment
Clemens Par
x
1700-1730
Open Issues
Zylia presentation
m37883
1730-
Thursday
0900-1000
1000-1100
Social: Buses leave at 6:15pm
1200-1300
Open Issues
Joint with 3DV at Audio
MPEG-H 3D Audio in a Virtual/Augmented Reality Use Case
Joint with Requirements at Audio
Possible future MPEG-H 3D Audio profiles
m37832 MPEG-H Part 3 Profile Definition
Christof Fersch
Possible LCX profile for Cable Industry
Document Preparation
1300-1400
Lunch
1400-1500
1530-1600
1600-1800
DoC Review
3D Everything joint meeting
Open Issues
1800-
Chairs meeting
1100-1200
Friday
0800-0900
0800-1300
1000
1030
x
x
x
x
Discussion of Open Issues
Audio Plenary
Report on Thursday Chairs meeting
Recommendations for final plenary
Establishment Ad-hoc groups and review
AhG Mandates
Get document numbers
Submit AhG Mandates and Resolutions
Approve and send Press Release
Approve Responses to NB comments and
Liaison
Approval of output documents:
Nxxxx in document header
wxxxx (short title).docx -- document
filename
wxxxx.zip – zip archive name
Agenda for next meeting
Review of Audio presentation to MPEG
1300-1400
1400-
plenary
A.O.B.
Closing of the Audio meeting
Lunch
MPEG Plenary
Annex C
Task Groups
1. MPEG-H 3D Audio Phase 1
2. MPEG-H 3D Audio Phase 2
3. Maintenance of MPEG-2, MPEG-4 and MPEG-D
Annex D
Output Documents
Documents approved by Audio and jointly by Audio and other subgroups are shown below. Note
that document numbers shown in RED are listed in another subgroup's resolutions and are shown
here for reference.
No.
Title
ISO/IEC 13818-1– Systems
15912 DoC on ISO/IEC 13818-1:2015/DAM 5
15913 ISO/IEC 13818-1:2015/FDAM 5
No.
15998
15999
No.
15924
No.
15922
Title
ISO/IEC 14496-3 – Audio
Text of ISO/IEC 14496-3:2009/AMD 3:2012/COR 1,
Downscaled (E)LD
ISO/IEC 14496-3:2009/DAM 6, Profiles, Levels and
Downmixing Method for 22.2 Channel Programs
Title
ISO/IEC 14496-5 – Reference software
Text of ISO/IEC 14496-5:2001/AMD 24:2009/DCOR 3,
Downscaled (E)LD
Title
ISO/IEC 14496-12 – ISO base media file format
DoC on ISO/IEC 14496-12:2015 PDAM 1 DRC Extensions
15923 Text of ISO/IEC 14496-12:2015 DAM 1 DRC Extensions
No.
Title
ISO/IEC 23003-1 – MPEG Surround
15934 Study on ISO/IEC 23003-1:2007/DAM 3 MPEG Surround
Extensions for 3D Audio
No.
Title
ISO/IEC 23003-2 – Spatial Audio Object Coding (SAOC)
ISO/IEC 23003-2:2010/DCOR 3 SAOC, SAOC Corrections
16076
Study on ISO/IEC 23003-2:2010/DAM 4, SAOC
Conformance
Study on ISO/IEC 23003-2:2010/DAM 5, SAOC Reference
16078
Software
Draft of ISO/IEC 23003-2:2010 SAOC, Second Edition
16079
16077
No.
16080
Title
ISO/IEC 23003-3 – Unified speech and audio coding
DoC on ISO/IEC 23003-3:2012/Amd.1:2014/DCOR 2
In charge
TBP
Available
Harald
N
16/02/26
Fuchs
Harald
N
16/02/26
Fuchs
In charge TBP Available
Adrian
N
16/02/26
Murtaza
Takehiro N
16/02/26
Sugimoto
In charge TBP Available
Adrian
Murtaza
In charge
N
16/02/26
TBP
Available
David
N
16/02/26
Singer
David
N
16/04/01
Singer
In charge TBP Available
Adrian
N
16/02/26
Murtaza
In charge TBP Available
Adrian
Murtaza
Adrian
Murtaza
Adrian
Murtaza
Adrian
Murtaza
In charge
N
16/02/26
N
16/02/26
N
16/02/26
N
16/02/26
TBP
Available
Michael
N
Kratschmer
16/02/26
16081
Text of ISO/IEC 23003-3:2012/Amd.1:2014/COR 2
DoC on ISO/IEC 23003-3:2012/DAM 3 Support of MPEG-D DRC, Audio Pre-Roll and IPF
Text of ISO/IEC 23003-3:2012/FDAM 3 Support of
16083
MPEG-D DRC, Audio Pre-Roll and IPF
No.
Title
ISO/IEC 23003-4 – Dynamic Range Control
Defect Report on MPEG-D DRC
16084
16082
DoC on ISO/IEC 23003-4:2015 PDAM 1, Parametric DRC,
Gain Mapping and Equalization Tools
Text of ISO/IEC 23003-4:2015 DAM 1, Parametric DRC,
16086
Gain Mapping and Equalization Tools
Request for Amendment ISO/IEC 23003-4:2015/AMD 2,
16087
DRC Reference Software
ISO/IEC 23003-4:2015/PDAM 2, DRC Reference Software
16088
16085
No.
16089
Title
ISO/IEC 23008-3 – 3D Audio
ISO/IEC 23008-3:2015/DCOR 1 Corrections,
DoC on ISO/IEC 23008-3:2015/DAM 2, MPEG-H 3D
Audio File Format Support
Text of ISO/IEC 23008-3:2015/FDAM 2, MPEG-H 3D
16091
Audio File Format Support
Study on ISO/IEC 23008-3:2015/DAM 3, MPEG-H 3D
16092
Audio Phase 2
Study on ISO/IEC 23008-3:2015/DAM 4, Carriage of
16093
Systems Metadata
WD on New Profiles for 3D Audio
16094
16090
No.
16095
16096
No.
16097
16098
16099
16100
Title
ISO/IEC 23008-6 – 3D Audio Reference Software
3D Audio Reference Software RM6
Workplan on 3D Audio Reference Software RM7
Title
Liaisons
Liaison Statement to DVB on MPEG-H 3D Audio
Liaison Statement Template on MPEG-H 3D Audio
Liaison Statement to ITU-R Study Group 6 on BS.11965
Liaison Statement to IEC TC 100
Michael
N
Kratschmer
Michael
N
Kratschmer
Michael
N
Kratschmer
In charge
TBP
16/02/26
Michael
Kratschmer
Michael
Kratschmer
Michael
Kratschmer
Michael
Kratschmer
Michael
Kratschmer
In charge
N
16/02/26
N
16/02/26
N
16/04/01
N
16/02/26
N
16/03/18
Christian
Neukam
Ingo
Hofmann
Ingo
Hofmann
Max
Neuendorf
Schuyler
Quackenbush
Christof
Fersch
In charge
N
16/02/26
N
16/02/26
N
16/02/26
N
16/03/18
N
16/02/26
N
16/02/26
Achim
Kuntz
Achim
Kuntz
In charge
N
16/02/26
N
16/02/26
Ken McCann
Ken McCann
Schuyler
Quackenbush
Schuyler
Quackenbush
N
N
N
16/02/26
16/02/26
16/02/26
N
16/02/26
16/02/26
16/02/26
Available
TBP Available
TBP Available
TBP Available
16101
16102
Liaison Statement to IEC TC 100 TA 4
Liaison Statement to IEC TC 100 TA 5
Schuyler
N
Quackenbush
Schuyler
N
Quackenbush
16/02/26
16/02/26
Annex E
Agenda for the 115th MPEG Audio Meeting
Agenda Item
1. Opening of the meeting
2. Administrative matters
2.1. Communications from the Chair
2.2. Approval of agenda and allocation of contributions
2.3. Review of task groups and mandates
2.4. Approval of previous meeting report
2.5. Review of AhG reports
2.6. Joint meetings
2.7. Received national body comments and liaison matters
3. Plenary issues
4. Task group activities
4.1. MPEG-H 3D Audio
4.2. Maintenance: MPEG-2, MPEG-4, and MPEG-D
4.3. Audio Exploration
5. Discussion of unallocated contributions
6. Meeting deliverables
6.1. Responses to Liaison
6.2. Recommendations for final plenary
6.3. Establishment of new Ad-hoc groups
6.4. Approval of output documents
6.5. Press statement
7. Future activities
8. Agenda for next meeting
9. A.O.B
10. Closing of the meeting
Annex K – 3DG report
Source: Marius Preda, Chair
Summary
3DGC meeting report San Diego, February 2016
1. Opening of the meeting .............................................................................................. 400
1.1 Roll call ..................................................................................................................... 400
1.2 Approval of the agenda ............................................................................................. 400
1.3 Goals for the week .................................................................................................... 400
1.4 Standards from 3DGC............................................................................................... 401
1.5 Room allocation ........................................................................................................ 401
1.6 Joint meetings ........................................................................................................... 401
1.7 Schedule at a glance .................................................................................................. 402
2. AhG reports ................................................................................................................ 403
2.1 MPEG-V ................................................................................................................... 403
2.2 Augmented Reality ................................................................................................... 403
2.3 3D Graphics compression ......................................................................................... 403
2.4 MIoT and Wearables................................................................................................. 404
http://wg11.sc29.org/doc_end_user/documents/114_San%20Diego/ahg_presentations/MIoTMPEGWearables-AHGR-SanDiego114.pptx-1456203700-MIoT-MPEGWearables-AHGRSanDiego114.pptx .............................................................................................................. 404
2.5 3D 3DG Activities are reported in the Wednesday and Friday plenary ................... 404
3. Analysis of contributions ........................................................................................... 404
3.1 Joint meeting with Systems on MSE AF .................................................................. 412
3.2 Joint meeting with Requirements on IoT and Wearable ........................................... 412
3.3 Joint meeting with 3DA ............................................................................................ 412
3.4 Joint Session with JPEG on JPEGAR ....................................................................... 412
3.5 Session 3DG Plenary ................................................................................................ 413
4. General issues ............................................................................................................. 413
4.1 General discussion .................................................................................................... 413
4.1.1 Reference Software ............................................................................................ 413
4.1.2 Web site .............................................................................................................. 413
5. General 3DG related activities ................................................................................... 413
5.1 Promotions ................................................................................................................ 413
5.2 Press Release ............................................................................................................. 413
5.3 Liaison....................................................................................................................... 414
6. Resolutions from 3DG ................................................................................................ 414
6.1 Resolutions related to MPEG-4 ................................................................................ 414
6.1.1 Part 5 – Reference software ............................................................................... 414
6.2 Part 16 – Animation Framework eXtension (AFX) .................................................. 414
6.2.1 The 3DG subgroup recommends approval of the following documents ........... 414
6.3 Part 27 – 3D Graphics conformance ......................................................................... 415
6.3.1 The 3DG subgroup recommends approval of the following documents ........... 415
6.4 Resolutions related to MPEG-A ............................................................................... 415
6.5 Part 13 – Augmented reality application format ....................................................... 415
6.5.1 The 3DG subgroup recommends approval of the following documents ........... 415
6.6 Part 17 – Multisensorial Media Application Format ................................................ 415
6.6.1 The 3DG subgroup recommends approval of the following documents ........... 415
6.6.2 The 3DG subgroup thanks US National Body for their comments on ISO/IEC 23000-17:20xx CD ................................................................................................................. 415
6.7 Resolutions related to MPEG-V (ISO/IEC 23005 – Media context and control)..... 415
6.7.1 General ............................................................................................................... 415
6.7.2 Part 1 – Architecture .......................................................................................... 416
6.7.3 Part 2 – Control Information .............................................................................. 416
6.7.4 Part 3 – Sensory Information ............................................................................. 416
6.7.5 Part 4 – Virtual World Object Characteristics ................................................... 416
6.7.6 Part 5 – Data Formats for Interaction Devices ................................................... 417
6.7.7 Part 6 – Common types and tools....................................................................... 417
6.8 Resolutions related to MAR Reference Model (ISO/IEC 18039 Mixed and Augmented
Reality Reference Model) .................................................................................................. 417
6.8.1 General ............................................................................................................... 417
6.9 Resolutions related to Explorations .......................................................................... 418
6.9.1 Media-centric IoT and Wearable........................................................................ 418
6.9.2 Point Cloud Compression................................................................................... 418
6.10 Management .......................................................................................................... 418
6.10.1 Liaisons ........................................................................................................... 418
7. Establishment of 3DG Ad-Hoc Groups ...................................................................... 418
8. Closing of the Meeting ............................................................................................... 420
1. Opening of the meeting
1.1 Roll call
1.2 Approval of the agenda
The agenda is approved.
1.3 Goals for the week
The goals of this week are:
- Review contributions related to MPEG-V
- Review contributions related to Augmented Reality
- Status of ref soft and conformance for Web3D Coding
- Review contributions related to MIoT and Wearable
- Edit output documents related to Context, objectives and requirements for MIoT/W
- Contribute to Big Media output documents
- Edit DoCs
- Investigate future developments for MPEG 3D Graphics Compression
  - Point cloud compression
- Review and issue liaison statements
  - JPEG on AR
  - MIoT and Wearable with WG10, ITU-T, IEC SG10, SyC AAL
- Review the votes
- Web-site
- MPEG database
1.4 Standards from 3DGC
Std Pt Edit. A/E Description
4
4
4
AR
A
A
A
V
V
V
V
V
V
V
1.5
5
16
27
13
13
17
1
2
3
4
5
6
7
A40
A3
A7
E1
E2
A1
E1
E4
E4
E4
E4
E4
E4
E3
CfP
3DG for browsers RS
Web3DG coding
Web3DC Conformance
ARRM
ARAF
ARAF RS & C
MSMAF
Architecture
Control information
Sensory information
Virtual world object characteristics
Data formats for interaction devices
Common types and tools
RS & C
WD
12/10
15/06
15/10
15/10
15/10
15/10
15/10
15/10
CD
DIS
FDIS Gr.
15/06
15/02
15/06
14/07
14/10
15/10
15/10
16/02
16/02
16/02
16/02
16/02
16/02
14/07
15/10
15/10
15/10
16/02
15/06
16/06
16/02
16/06
16/06
16/06
16/06
16/06
16/06
15/10
16/06
16/06
16/06
16/10
16/02
17/01
16/10
17/01
17/01
17/01
17/01
17/01
17/01
16/06
3
3
3
3
3
3
3
3
3
3
3
3
3
3
Room allocation
3DGC:
1.6
2001
2011
2009
201x
201x
201x
201x
201x
201x
201x
201x
201x
201x
201x
Santa Clara
Joint meetings
During the week, 3DG had several joint meetings with Requirements, Video, Audio and Systems.
Groups
What
Day Time1 Time2
Where
Page:
401
Date Saved: 2016-06-03
402
R, V, 3
All
JPEG-MPEG
A, 3
R, 3
All
All
1.7
Everything 3D
MPEG Vision
Joint standards
3D Audio in VR/AR
MIoTW
Communication
MPEG Vision
Tue
Wed
Thu
Thu
Thu
Thu
Thu
10:00
11:30
09:00
10:00
14:00
16:00
16:00
11:00
13:00
10:00
11:00
15:00
17:00
18:00
R
A
JPEG
A
3
Ocean Beach
Schedule at a glance
Monday
MPEG Plenary
09h00 to 13h00
Lunch
13h00 to 14h00
Agenda/Status/Preparation of the week
14h00 to 14h30
MIoT and Wearable (xxx)
14h30 to 17h00
3DG Plenary, review of 3DG
contributions ( xxx )
17h00 to 18h00
ARAF contributions
09h00 to 10h00
Everything 3D
10h00 to 11h00
MioT and Wearable (review of
contributions)
11h00 to 13h00
Lunch
13h00 to 14h00
MPEG-V (review of contributions)
14h00 to 16h00
MioT and Wearable (review of
contributions)
16h00 to 18h00
MPEG Plenary
09h00 to 11h30
MPEG Vision
11h30 to 13h00
Lunch
13h00 to 14h00
MIoT and Wearable (liaisons)
14h00 to 16h30
ARAF White paper
16h30 to 17h00
JSession MPEG-JPEG
Joint standards
09h00 to 10h00
Session
3D Audio in VR/AR
10h00 to 11h00
Session
Session
Tuesday
JSession
Wed
J Session (with all)
Session
Thu
MIoT and Wearable (review draft
requirements and DCfP)
11h00 to 13h00
Lunch
13h00 to 14h00
Requirements
MIoTW
14h00 to 15h00
Session
MIoT and Wearable output documents
15h00 to 18h00
JSession
Everything 3D
15h30 to 16h30
BO Session
Point Cloud compression planning
16h30 to 18h00
Session
Joint meeting
Communication (only for a ARAF White Paper
BO group)
16h00 to 17h00
Joint meeting with all (only
MPEG Vision
for a BO group)
16h00 to 18h00
Friday
Session
Liaisons review, press release, AR and MPEG-V,
MIoTW output documents
09h00 to 10h00
Session
3DG Plenary, 3DG Output Documents Review,
resolutions
10h00 to 13h00
MPEG Plenary
14h00 to xxh00
2. AhG reports
2.1 MPEG-V
No AhG was established at the last meeting.
2.2 Augmented Reality
http://wg11.sc29.org/doc_end_user/documents/114_San%20Diego/ahg_presentations/ARAF_AhG_
Report.ppt-1456190378-ARAF_AhG_Report.ppt
2.3 3D Graphics compression
http://wg11.sc29.org/doc_end_user/documents/114_San%20Diego/ahg_presentations/AhG_3DG_m
onday_san_diego_february.v.1.pptx-1456171057-AhG_3DG_monday_san_diego_february.v.1.pptx
2.4 MIoT and Wearables
http://wg11.sc29.org/doc_end_user/documents/114_San%20Diego/ahg_presentations/MIoTMPEGWearables-AHGR-SanDiego114.pptx-1456203700-MIoT-MPEGWearables-AHGRSanDiego114.pptx
2.5 3D 3DG Activities are reported in the Wednesday and Friday plenary
Wednesday:
http://wg11.sc29.org/doc_end_user/documents/114_San%20Diego/presentations/MPEG3DGraphics
Mercredi.pptx-1456339383-MPEG3DGraphicsMercredi.pptx
Friday:
http://wg11.sc29.org/doc_end_user/documents/114_San%20Diego/presentations/MPEG3DGraphics
Vendredi.ppt-1456529824-MPEG3DGraphicsVendredi.ppt
3. Analysis of contributions
Wearable & IoT
Root Elements for Media-centric IoT and Wearable (M-IoTW)
m37763
The root element is considered as a collection of several elements: Data,
PUnit, Cmmd, User.
Anna Yang, Jae-Gon Kim,
Sungmoon Chun, Hyunchul
Ko
Common and Gesture-Based Wearable Description for M-IoTW
m37764 A draft and initial schema is proposed for each of the following types: Data,
PUnit, Cmmd. The technical description is at a very early stage but it can be
considered a good starting point.
m37993
Towards a reference architecture for Media-centric IoT and Wearable (MIoTW)
Sungmoon Chun, Hyunchul
Ko, Anna Yang, Jae-Gon
Kim
Mihai Mitrea, Bojan Joveski
Summary : A draft architecture inspired by WSO2.org is proposed
MIoT Use-case (Vibration subtitle for movie by using wearable device)
m38057
Resolution: consider this MIoT as an Individual MIoT for part 3
Speech-Related Wearable Element Description for M-IoTW
Summary:
Add Smartheadphone that has one or several microphones (regularMic and
m38113 InearMic)
AP: check the microphone from MPEG-V and update
AP: investigate considering the generic "audioResource" in PData
instead of "speech"
m37903 Use Case of MIoT
Da Young Jung, Hyo-Chul
Bae, Kyoungro Yoon,
Miran Choi, Hyun-ki Kim
Jin Young Lee
Summary: We need to embed the global positioning in the metadata of the
IoT.
AP: add the requirement in the Req Document
Image analysis description for M-IoTW
Summary: A hybrid architecture including some local processing (on the
wearable) and some remote processing (more complex)
m37911 Various image analyses are proposed (face detection and recognition, object
detection and recognition, …)
Duckhwan Kim, Bontae Koo
Resolution: consider this simple example for identifying MPEG-U and
MPEG-UD types that can be reused
MIoT API instances
m37975
A set of APIs is proposed for MIoT covering basic functionalities. Camera
and Tracker MIoT are also proposed
Sang-Kyun Kim, Min Hyuk
Jeong, Hyun Young Park
Introduction to IoTivity APIs (MIoT)
m37976 Summary: IoTivity is a project proposed by Samsung and Intel for discovery
and communication for IoTs.
AP: use such a framework for Part 2 of MIoTW
m38041 Liaison Statement from JTC 1/WG 10 on Revised WD of ISO/IEC 30141
m38042
Liaison Statement from JTC 1/WG 10 on Request contributions on IoT use
cases
Sang-Kyun Kim, Min Hyuk
Jeong, Hyun Young Park,
JTC 1/WG 10 via SC 29
Secretariat
JTC 1/WG 10 via SC 29
Secretariat
Wearables and IoT&AAL use case standardisation
IEC 62559-2 is specifying a template for use cases.
WG xxx and SyC AAL developed use cases related to IoTs
IEC formed a strategy group on Smart Wearable Devices
AP: put all the documents in a single place to facilitate access for MPEG
members
m37800
m38041
Liaison
Statement from
JTC 1/WG 10 on JTC 1/WG 10 via SC 29 Secretariat
Revised WD of
ISO/IEC 30141
m38042
Liaison
Statement from
JTC 1/WG 10 on
JTC 1/WG 10 via SC 29 Secretariat
Request
contributions on
IoT use cases
Kate Grant
- Send a liaison to IEC SG 10 (strategic group on Wearable Smart Devices)
- CTA standardisation for BCI products (Consumer technology Association)
Details are in the documents below :
2016-02-09
11:42:16
m37697
2016-02-17
18:48:03
MPEG-V
General/All
MIoT Use-case
Summary: this contribution was not presented
M. Jalil Piran
- Part 1. Global architecture, use cases and common requirements
- Part 2. System aspects: discovery, communication
  - to interwork with available platforms, e.g. UPnP or SmartBAN or IoTivity or xxx
  - to set up a NEW light layer of presence and capabilities signaling
  - AP: issue a CfP for Part 2 (Sang)
    (Note: implementation guidelines for MIoT to be used on other platforms)
- Part 3. Individual M-IoTs, data formats and APIs
  - Sensors and actuators
    - Data format defined in MPEG-V (some may need to be extended); the MIoTW
      standard deals only with the APIs for Sensors and actuators
    - See MIoT Camera Example
  - Media processing & storage units
    - Data formats and/or API
    - See MotionTracker Example, Face Recognition example
    - AP: issue a CfP for Media processing unit interfaces (Mihai)
- Part 4. Wearable M-IoTs, data formats and APIs
  - Smart Glasses
    - Camera, gesture recognizer, microphone, display, etc.
  - Smart Watch
  - Fitness tracker, Smart Textile (D-Shirt), Sleep monitoring
- Part 5. M-IoT Aggregates (combining individual MIoTs, eventually with other IoTs)
  - Video Surveillance Systems
  - Car Sensing and Communication System
  - Smart Cities, Smart Factory, Smart Offices
MPEG-V
Proposal of metadata description for light display in MPEG-V
Summary: A lightDisplay type is proposed that can be used to activate
low resolution displays. These displays can have various forms (not only
rectangular)
m37808
AP: add the color matrix and the mask matrix. The second is used to
create an arbitrary form of the LightDevice
Jae-Kwan Yun, Yoonmee Doh,
Hyun-woo Oh, Jong-Hyun Jang
AP: design a VisualDisplayType (and AuralDisplayType) and an
AdvancedLightEffect
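A possible reading of the colour-matrix/mask-matrix action point above, sketched in Python/NumPy (the array shapes, the circular form and all names are illustrative assumptions, not the proposed lightDisplay schema):

import numpy as np

# Assumed 8x8 low-resolution light display: a colour matrix gives an RGB value
# per cell, and a binary mask matrix selects which cells belong to the
# (possibly non-rectangular) form of the LightDevice.
rows, cols = 8, 8
color = np.zeros((rows, cols, 3), dtype=np.uint8)
color[..., 0] = 255  # drive every cell with red in this example

# Example mask: only cells inside a circle are part of the display form.
y, x = np.ogrid[:rows, :cols]
mask = ((y - 3.5) ** 2 + (x - 3.5) ** 2) <= 3.5 ** 2

# Effective drive values: masked-out cells stay dark whatever the colour matrix says.
effective = color * mask[..., None]
print(int(mask.sum()), "of", rows * cols, "cells are driven")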
A proposal of instantiation to olfactory information for MPEG-V 4th
edition part 1
m37831
Summary: use cases for enose and olfactory
AP: accepted for Part 1 of 4th Edition
Enhancement of Enose CapabilityType for MPEG-V 4th edition part 2
m37860
Summary: a full set of enose capabilities is provided.
AP: accepted
A modification of syntax and examples for EnoseSensorType for
MPEG-V 4th edition part 5
m37861
Summary: EnoseSensorType
AP: add also the MonoChemical type (and change the Scent into
MixtureChemical). Both have CSs.
Hae-Ryong Lee, Jun-Seok Park,
Hyung-Gi Byun, Jang-Sik Choi,
Jeong-Do Kim
Sungjune Chang, Hae-Ryong Lee,
Hyung-Gi Byun, Jang-Sik Choi,
Jeong-Do Kim
Jong-Woo Choi, Hae-Ryong Lee,
Hyung-Gi Byun, Jang-Sik Choi,
Jeong-Do Kim
Modification of 3DPrintingLUT
m37977 Summary: add the ThreeDPrintingSpectrumLUT (in addition to
ThreeDPrintingXYZLUT)
AP: accepted
In-Su Jang, Yoon-Seok Cho, JinSeo Kim, Min Hyuk Jeong, SangKyun Kim
3D Printing Color Reproduction command and preference
m37981 Summary: a command for activating or deactivating the color
reproduction.
AP: accepted
In-Su Jang, Yoon-Seok Cho, JinSeo Kim, Min Hyuk Jeong, SangKyun Kim
Proposal of metadata descriptions for handling sound effect in MPEG-V
m37999 Summary: Sound Device
AP: add the user preference at semantic level (a CS including "theatre,
cinema, concert hall, …)
Saim Shin, Jong-Seol James Lee,
Dalwon Jang, Sei-Jin Jang, HyunHo Joo, Kyoungro Yoon
m38054 Editor's input for the WD of MPEG-V Part 1 Version 4
Seung Wook Lee, JinSung Choi, Kyoungro Yoon
m38055 Editor's input for the WD of MPEG-V Part 2 Version 4
Seung Wook Lee, JinSung Choi, Kyoungro Yoon
m38056 Editor's input for the CD of MPEG-V Part 5 Version 4
Seung Wook Lee, JinSung Choi, Kyoungro Yoon, Hyo-Chul Bae
AR
m37909
White Paper on ARAF 2nd Edition
Traian Lavric, Marius
Preda
m37910
Big Data for personalized user experience in AR
Traian Lavric, Marius
Preda, Veronica Scurtu
ARAF Expansion for Transformation System between the outdoor GPS and
Indoor Navigation
m38114
Summary: it is supposed that the indoor location system is able to calculate
the latitude, longitude and altitude corresponding to the user position.
However, the designer would like to specify indoor POIs by using local
coordinates (e.g. 10 meters x 30 meters x 3rd floor) and not the global
coordinates (long, lat, altitude).
JinHo Jeong,
HyeonWoo Nam
There is a need to calibrate the transformation between the indoor and global
positioning systems (this can be communicated to the browser as an
instantiation proto).
Another possibility is to consider the MapIndoor proto as a child of the
MapMarker (this needs a corrigendum for MapMarker because there is no
support for children)
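To make the calibration idea concrete, here is a minimal Python sketch of one way such an indoor-to-global transformation could look (an equirectangular approximation around an anchor point; all constants, names and the 15 degree rotation are assumptions for the example, not part of the contribution):

import math

REF_LAT, REF_LON, REF_ALT = 37.7749, -122.4194, 10.0  # assumed anchor of the indoor map
ROTATION_DEG = 15.0        # assumed rotation of the local x-axis relative to true east
FLOOR_HEIGHT_M = 3.0       # assumed height of one floor in metres
EARTH_RADIUS_M = 6378137.0

def local_to_global(x_east_m, y_north_m, floor):
    """Convert local indoor coordinates (metres, floor) to latitude, longitude, altitude."""
    a = math.radians(ROTATION_DEG)
    east = x_east_m * math.cos(a) - y_north_m * math.sin(a)
    north = x_east_m * math.sin(a) + y_north_m * math.cos(a)
    lat = REF_LAT + math.degrees(north / EARTH_RADIUS_M)
    lon = REF_LON + math.degrees(east / (EARTH_RADIUS_M * math.cos(math.radians(REF_LAT))))
    alt = REF_ALT + floor * FLOOR_HEIGHT_M
    return lat, lon, alt

# A POI specified as "10 m east, 30 m north, 3rd floor" relative to the anchor:
print(local_to_global(10.0, 30.0, 3))

The anchor coordinates and rotation are exactly the kind of calibration data that would have to be communicated to the browser, for example through the instantiation proto mentioned above.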
Liaison:
#
37576
Title
Liaison statement
37583
JTC 1/WG 10 on Invitation to the 4th JTC
1/WG 10 Meeting
JTC 1/WG 10 on Logistical information
for the 4th JTC 1/WG 10 Meeting
Liaison statement
37584
Liaison statement
38041
Revised WD of ISO/IEC 30141
38042
Request contributions on IoT use cases
37581
37582
Source
JTC 1/
WG 10
JTC 1/
WG 10
JTC 1/
WG 10
ITU-T
SG 20
JTC 1/
WG 10
JTC 1/
WG 10
JTC 1/
WG 10
Disposition
IoT
3
Invitation to the 4th JTC 1/WG 10 Meeting, 18th22nd Jan. 2016
3
3
a new Study Group, IoT and its applications
including smart cities and communities (SC&C)
collection of Data related to the Internet of Things
3
Internet of Things Reference Architecture (IoT RA)
3
3
3
3DG
m37828
Efficient color data compression methods for PCC
Li Cui, Haiyan Xu, Seung-ho Lee, Marius Preda,
Christian Tulvan, Euee S. Jang
m38136
Point Cloud Codec for Tele-immersive Video
Rufael Mekuria (CWI), Kees Blom (CWI), Pablo
Cesar (CWI)
[FTV AHG] Further results on scene reconstruction with
hybrid SPLASH 3D models
m37528
Improvements on the SPLASH reconstruction from
image + depth. Point cloud can be the base
representation.
Sergio García Lobo, Pablo Carballeira López,
Francisco Morán Burgos
Web3D Coding for large objects with attributes and
texture
m37934
Christian Tulvan, Euee S. Jang, Marius Preda
Summary: update for the Web3D ref software (support
for high resolution textures, better navigation)
Text copied from the last version of the report in order to guide the work on Wearable

Glasses (see-through and see-closed)
o Gesture recognition (performed by image analysis)
 Add new gestures (update the classification schema)
 MPEG-U could be the starting point
 Consider an intermediate representation format for arbitrary hand(s)
gestures
 MPEG-7 ShapeDescriptor (Contour and Region) can be the
starting point
 Adaptation of the recognition process with respect to the user
 MPEG-UD could be the starting point
o Voice recognition (check with Audio)
 Recognizing "some" voice commands
 Pre-encoded keywords: play, pause, left, right,
 User configured set of "words"
 Able to transmit the captured audio to a processing unit
 Probably encoded
 Probably sound features
 Interpret ambient sounds
o Image analysis (other than gesture)
 Recognizing "some" objects (e.g. Traffic signaling)
 CDVS and probably CDVA
 Face recognition
 MPEG-7 (FaceRecognition Desc, AdvancedFaceRecognition
Desc and InfraredFaceRecognition Desc)
 Text recognition
 Image to string – not too much to standardize however the
in/out API should be defined
 Able to transmit the captured image/video to a processing unit
 Probably encoded : MPEG-4 video and JPEG
 Probably image or video features (CDVS and CDVA)
 Able to convert in real time the captured image into another image (that
will be eventually displayed)
 Input: image, output: image– not too much to standardize
however the in/out API should be defined
o User interface
 Sensors
 MPEG-V could be the starting point
o Gyro, Accelerometer, Camera (color, stereoscopic,
infrared, …), microphone, touch sensitive device, gaze
tracker (?!)
 Display capabilities (Single, double screen, Stereoscopic)
 MPEG-V has the possibility to define actuator capabilities but is
not dealing yet with displays
 Rendering
 Rendering is controlled by the application

The glass should expose rendering capabilities (HW
acceleration, speed, extensions…)
 Consider the user profile for adapting glass features
 MPEG-UD
o Control and management
 Define level of quality for individual components in the glass system
 Example: voice recognition may have level 100 and object
recognition only 30: in case of lack of resources, voice
recognition will have priority
 Exposing the hardware capabilities (CPU, GPU, memory, storage,
battery level, …)
 Define level of quality for individual applications (to be moved outside
the glass section)
o Informative Part
 Typical overall architecture for smart glasses
 Use cases examples
o Communication and interconnection between glasses and external devices
 Messages, commands
 Eg. http, udp
 Media (image, video, audio)
 Eg. DASH
Wearable we want to deal with in MPEG
 Glasses
 Watches (Miran Choi). AP: to do the same exercise that was done for glasses.
 Earphone (Miran Choi). AP: to do the same exercise that was done for glasses.
 Artificial heart (Mihai Mitrea). AP: to do the same exercise that was done for glasses.
 D-Shirt (Mihai Mitrea). AP: to do the same exercise that was done for glasses.
Text copied from the last version of the report in order to guide the work on MIoT:
 MIoT should expose timing information
 MIoT should expose location information (from the use case of fire surveillance
system)
 MIoT should be able to implement specific image processing algorithms and provides
codes with respect to a detected situation
o Defined some classes for image processing
 IoTs that we care about
o video camera
o microphone
o display (visual, auditory, olfactory, haptic)
o media storage
o media analyzers
 MIoT camera
o Sensing




MPEG-V camera sensor (5 types already defined: color, depth,
stereoscopic, thermo, spectrum)
o Some local Processing and support from a processing unit
 Compression (MPEG-4 video)
 Feature extraction and their transmission
 Face feature extraction (MPEG-7)
 Gesture recognition (MPEG-U ++)
 Feature extraction for arbitrary textures (CDVS)
 Color, shape, contour, texture pattern, motion trajectory, …
(MPEG-7)
 Fire
o Full processing
 Send compact commands to outside, depending on the application;
some commands can be standardized
Sensors & Actuators may become Engines (Active components, xxx) (we should add
APIs on top of MPEG-V s&a), therefore, they become IoTs. Some of them are MIoTs.
We can add some processing capabilities for some of them, from low processing
(sampling, compression, ..) to high processing (interpretation, analysis, …). We can
also build aggregates of MIoTs and IoTs and wearable are an example of these
aggregates.
Alternatives
o MIoT as a new part in MPEG-V containing the APIs
 Not so appropriate for MIoTs that don't have sensing or actuating (like
media analyzers or media storage)
o MIoT like MPEG-M engines
 Not so appropriate because the MPEG-M engines are usually complex
o A new project
 One layer contains individual MIoTs
 A second layer aggregates individual MIoTs, eventually with other
IoTs
 A particular class of this layer is the wearable
 Another class is the large scale systems (e.g. multi-cam
surveillance systems)
Output documents:
o Overview, context and objectives of Media-centric IoTs and Wearables (Marius) 3w
o Conceptual model, architecture and use cases for Media-centric IoTs and Wearables
(Mihai and Sang) 3w
o Draft requirements for Media-centric IoTs and Wearables (Sang and Mihai) 3w
 E.g. Category MIoT Camera
 E.g. Category glasses
 …
M-IoTW AhG Mandate
  - Improve and clarify the area of M-IoTW
  - Identify and document additional use cases for M-IoTW
  - Refine requirements for M-IoTW
  - Prepare material for CfP (e.g. scope, evaluation criteria, data set)
  - Investigate IoT Architectures for specific domains and how MIoTW will interwork with them
  - Identify the list of MPEG technologies related to M-IoTW
  - Identify key organizations in IoT and Wearables and the status of their work
  - Analyse the Working Draft of Information technology - Internet of Things Reference Architecture (IoT RA) and submit initial comments for their next meeting
3.1 Joint meeting with Systems on MSE AF
No joint meeting was organized this meeting; MSE AF was handled by 3DG alone.
3.2 Joint meeting with Requirements on IoT and Wearable
Definitions of IoT and Wearables terms:
Entity | any physical or virtual object that is sensed by and/or acted on by Things.
Thing | any thing that can communicate with other Things; in addition it may sense and/or act on Entities.
Media Thing | a Thing with at least one of audio/visual sensing and actuating capabilities.
Wearer | any living organism that is sensed by a Wearable.
Wearable | any thing that senses the Wearer, may have control, communication, storage and actuation capabilities, and may sense the Wearer environment.
M-Wearable | a Wearable having at least one of media communication or storage capabilities.
Three types of Wearables were illustrated by using the MPEG-V Architecture.
3.3 Joint meeting with 3DA
No joint meeting was organized this meeting.
3.4 Joint Session with JPEG on JPEGAR
No joint meeting was organized this meeting.
3.5 Session 3DG Plenary
Review of all the output documents. Details are provided in the documents listed in section 6.
4. General issues
4.1 General discussion
4.1.1 Reference Software
It is recalled that the source code of both decoder AND encoder should be provided as part of the
Reference Software for all technologies to be adopted in MPEG standards. Moreover, not providing
the complete software for a published technology shall lead to the removal of the corresponding
technical specification from the standard.
Currently all the AFX tools published in the third edition are supported by both encoder and decoder
implementations.
4.1.2 Web site
The new web site is available at http://wg11.sc29.org/3dgraphics/. It is a blog-based web site and all
members are allowed to post.
5. General 3DG related activities
5.1 Promotions
5.2 Press Release
Augmented Reality Application Format reaches FDIS status
At the 114th MPEG meeting, MPEG's Application Format for Augmented Reality (ISO/IEC 23000-13)
has reached FDIS status and will soon be published as an International Standard. The MPEG
ARAF enables augmentation of the real world with synthetic media objects by combining multiple
existing MPEG standards within a single specific application format addressing certain industry
needs. In particular, ARAF comprises three components referred to as scene, sensor/actuator, and
media. The scene component is represented using a subset of MPEG-4 Part 11 (BIFS), the
sensor/actuator component is defined within MPEG-V, and the media component may comprise
various types of compressed (multi)media assets using different modalities and codecs. The target
applications include geo-location based services, image based object detection and tracking, audio
recognition and synchronization, mixed and augmented reality games and real-virtual interactive
scenarios.
MPEG issues the 4th Edition of MPEG-V CD for communication between actors in virtual and
physical worlds
At the 114th meeting, MPEG has released the 4th edition of the MPEG-V CD (parts 1 to 6), which
supports new sensors and actuators such as 3DPrinter, ArrayCamera, E-nose, Radar, Microphone,
SoundDisplay and sensory effects such as AdvancedLight. In general, MPEG-V specifies the
architecture and associated representations to enable interaction of digital content and virtual
worlds with the physical world, as well as information exchange between virtual worlds. Features of
MPEG-V enable the specification of multi-sensorial content associated with audio/video data, and
control of multimedia applications via advanced interaction devices.
5.3 Liaison
Answer to ITU-T SG20 liaison on IoT. An output document was produced.
Answer to JTC1 WG10 liaison on IoT. An output document was produced.
6. Resolutions from 3DG
6.1 Resolutions related to MPEG-4
6.1.1 Part 5 – Reference software
6.1.1.1 The 3DG subgroup recommends approval of the following documents
No. | Title | In charge | TBP | Available
ISO/IEC 14496-5 – Reference software
6.2 Part 16 – Animation Framework eXtension (AFX)
6.2.1 The 3DG subgroup recommends approval of the following documents
No. | Title | In charge | TBP | Available
ISO/IEC 14496-16 – Animation Framework eXtension (AFX)
16037 | Core Experiments Description for 3DG | Christian Tulvan | N | 16/02/26
6.3 Part 27 – 3D Graphics conformance
6.3.1 The 3DG subgroup recommends approval of the following documents
No. | Title | In charge | TBP | Available
ISO/IEC 14496-27 – 3D Graphics conformance
6.4 Resolutions related to MPEG-A
6.5 Part 13 – Augmented reality application format
6.5.1 The 3DG subgroup recommends approval of the following documents
No. | Title | In charge | TBP | Available
ISO/IEC 23000-13 – Augmented reality application format
16103 | Text of ISO/IEC FDIS 23000-13 2nd Edition Augmented Reality Application Format | Traian Lavric | N | 16/03/03
6.6 Part 17 – Multisensorial Media Application Format
6.6.1 The 3DG subgroup recommends approval of the following documents
No. | Title | In charge | TBP | Available
ISO/IEC 23000-17 – Multisensorial Media Application Format
16104 | DoC for ISO/IEC 23000-17:20xx CD Multiple Sensorial Media Application Format | Sang Kyun Kim | N | 16/02/26
16105 | Text of ISO/IEC 23000-17:20xx DIS Multiple Sensorial Media Application Format | Sang Kyun Kim | N | 16/02/26
6.6.2 The 3DG subgroup thanks US National Body for their comments on ISO/IEC 23000-17:20xx CD
6.7 Resolutions related to MPEG-V (ISO/IEC 23005 – Media context and control)
6.7.1 General
6.7.1.1 The 3DG subgroup recommends approval of the following documents
No. | Title | In charge | TBP | Available
ISO/IEC 23005 – Media context and control
16106 | Technology under consideration | Seung Wook Lee | N | 16/02/26
6.7.2 Part 1 – Architecture
6.7.2.1 The 3DG subgroup recommends approval of the following documents
No. | Title | In charge | TBP | Available
ISO/IEC 23005-1 – Architecture
16107 | Request for ISO/IEC 23005-1 4th Edition Architecture | Marius Preda | | 16/02/26
16108 | Text of ISO/IEC CD 23005-1 4th Edition Architecture | Marius Preda | | 16/02/26
6.7.2.2 The 3DG recommends to appoint Marius Preda and Seung Wook Lee as editors of ISO/IEC 23005-1 4th Edition Architecture
6.7.3 Part 2 – Control Information
6.7.3.1 The 3DG subgroup recommends approval of the following documents
No. | Title | In charge | TBP | Available
ISO/IEC 23005-2 – Control information
16109 | Request for ISO/IEC 23005-2 4th Edition Control Information | Seung Wook Lee | | 16/02/26
16110 | Text of ISO/IEC CD 23005-2 4th Edition Control Information | Seung Wook Lee | | 16/03/11
6.7.3.2 The 3DG recommends to appoint Seung Wook Lee and Sang Kyun Kim as editors of ISO/IEC DIS 23005-2 4th Edition Control Information
6.7.4 Part 3 – Sensory Information
6.7.4.1 The 3DG subgroup recommends approval of the following documents
No. | Title | In charge | TBP | Available
ISO/IEC 23005-3 – Sensory information
16111 | Request for ISO/IEC 23005-3 4th Edition Sensory Information | Sang Kyun Kim | | 16/02/26
16112 | Text of ISO/IEC CD 23005-3 4th Edition Sensory Information | Sang Kyun Kim | | 16/03/11
6.7.4.2 The 3DG recommends to appoint Sang Kyun Kim as editor of ISO/IEC DIS 23005-3 4th Edition Sensory Information
6.7.5 Part 4 – Virtual World Object Characteristics
6.7.5.1 The 3DG subgroup recommends approval of the following documents
No. | Title | In charge | TBP | Available
ISO/IEC 23005-4 – Virtual world object characteristics
16113 | Request for ISO/IEC 23005-4 4th Edition Virtual world object characteristics | In-Su Jang | | 16/02/26
16114 | Text of ISO/IEC CD 23005-4 4th Edition Virtual world object characteristics | In-Su Jang | | 16/03/11
6.7.5.2 The 3DG recommends to appoint In-Su Jang and Sang Kyun Kim as editors of ISO/IEC
DIS 23005-4 4th Edition Virtual world object characteristics
6.7.6 Part 5 – Data Formats for Interaction Devices
6.7.6.1 The 3DG subgroup recommends approval of the following documents
No. | Title | In charge | TBP | Available
ISO/IEC 23005-5 – Data formats for interaction devices
16115 | Request for ISO/IEC 23005-5 4th Edition Data Formats for Interaction Devices | Seung Woo Kum | | 16/02/26
16116 | Text of ISO/IEC CD 23005-5 4th Edition Data Formats for Interaction Devices | Seung Woo Kum | | 16/03/11
6.7.6.2 The 3DG recommends to appoint Seung Woo Kum, Sang Kyun Kim and Kyoungro Yoon as editors of ISO/IEC DIS 23005-5 4th Edition Data Formats for Interaction Devices
6.7.7 Part 6 – Common types and tools
No. | Title | In charge | TBP | Available
ISO/IEC 23005-6 – Common types and tools
16117 | Request for ISO/IEC 23005-6 4th Edition Common types and tools | Kyoungro Yoon | | 16/02/26
16118 | Text of ISO/IEC CD 23005-6 4th Edition Common types and tools | Kyoungro Yoon | | 16/03/11
6.7.7.1 The 3DG recommends to appoint Kyoungro Yoon as editor of ISO/IEC DIS 23005-6 4th Edition Common types and tools
6.8 Resolutions related to MAR Reference Model (ISO/IEC 18039 Mixed and Augmented Reality Reference Model)
6.8.1 General
6.8.1.1 The 3DG subgroup recommends approval of the following documents
No. | Title | In charge | TBP | Available
ISO/IEC 18039 Mixed and Augmented Reality Reference Model
16119 | Text of ISO/IEC 2nd CD 18039 Mixed and Augmented Reality Reference Model | Marius Preda | Y | 16/02/26
6.8.1.2 The 3DG subgroup recommends, at the request of SC24 WG9 concerning minor editorial
and formatting aspects, to withdraw the ISO/IEC DIS 18039 Mixed and Augmented Reality
Reference Model issued at the 112th MPEG meeting, and transform it into a 2nd CD. In order to
minimize further delays in processing this standard we request a two-month CD ballot to be
implemented by both SCs so that the ballot results become available before the next meetings of
the working groups.
6.9 Resolutions related to Explorations
6.9.1 Media-centric IoT and Wearable
6.9.1.1 The 3DG subgroup recommends approval of the following documents
No. | Title | In charge | TBP | Available
Explorations – Media-centric Internet of Things and Wearable
16120 | State of discussions related to MIoTW technologies | Marius | N | 16/02/26
6.9.2 Point Cloud Compression
6.9.2.1 The 3DG subgroup recommends approval of the following documents
No. | Title | In charge | TBP | Available
Explorations – Point Cloud Compression
16121 | Current status on Point Cloud Compression (PCC) | Lazar Bivolarsky | N | 16/03/11
16122 | Description of PCC Software implementation | Lazar Bivolarsky | N | 16/03/11
6.10 Management
6.10.1 Liaisons
6.10.1.1 3DG recommends approval of the following documents
No. | Title | In charge | TBP | Available
Liaisons
16123 | Liaison Statement to JTC1 WG10 related to MIoT | Marius | N | 16/02/26
16124 | Liaison Statement to ITU-T SG20 related to MIoT | Marius | N | 16/02/26
7. Establishment of 3DG Ad-Hoc Groups
AHG on AR
Mandates
1. Edit MAR RM documents
2. Continue the implementation of ARAF Reference Software
3. Identify additional needs for ARAF
4. Investigate technologies for representing Points of Interests
5. Identify the MPEG technologies that shall be part of ARAF
6. Continue actions to implement the collaborative work-plan for MAR RM with SC24
Chairmen Traian Lavric (Institut Mines Telecom), Marius Preda (Institut Mines Telecom)
Duration Until 115th MPEG Meeting
Reflector mpeg-3dgc AT gti. ssr. upm. es
Subscribe To subscribe, send email to https://mx.gti.ssr.upm.es/mailman/listinfo/mpeg-3dgc
Meeting During weekend prior to MPEG: Room size 20
        Before weekend prior to MPEG: Place/Date/Logistics May 29th, 17h00-18h00
AHG on MPEG-V
Mandates
1. Edit the MPEG-V documents
2. To further develop tools for MPEG-V 4th Edition and discuss related contributions
3. Collect material for reference software and conformance of MPEG-V 3rd Edition
Chairmen Marius Preda (Institut Mines Telecom), Seung Wook Lee (ETRI)
Duration Until 115th MPEG Meeting
Reflector metaverse@lists.aau.at
Subscribe http://lists.aau.at/mailman/listinfo/metaverse
Meeting During weekend prior to MPEG: Room size 20
        Before weekend prior to MPEG: Place/Date/Logistics May 29th, 15h00-17h00
AHG on Graphics compression
Mandates
1. Refine requirements for PCC
2. Solicit and compile additional test data for point clouds, including point clouds with multiple attributes per point
3. Solicit contributions related to evaluation metrics
4. Propose a raw representation for point clouds geometry and appearance
5. Further develop compression technologies for point clouds geometry and appearance
6. Collect contributions and maintain the utility software
7. Logistic preparation for a potential workshop
Chairmen Lazar Bivolarski (LZ), Rufael Mekuria (CWI), Christian Tulvan (Institut Mines Telecom)
Duration Until 115th MPEG Meeting
Reflector mpeg-3dgc AT gti. ssr. upm. es
Subscribe To subscribe, send email to https://mx.gti.ssr.upm.es/mailman/listinfo/mpeg-3dgc
Meeting During weekend prior to MPEG: Room size 20
        Before weekend prior to MPEG: Place/Date/Logistics Sunday May 29th 2016, 09h00-18h00
8. Closing of the Meeting
See you in Geneva.