Article
Performance Evaluation of Artificial Neural Networks (ANN)
Predicting Heat Transfer through Masonry Walls Exposed
to Fire
Iasonas Bakas * and Karolos J. Kontoleon
Laboratory of Building Construction & Building Physics, Department of Civil Engineering, Faculty of
Engineering, Aristotle University of Thessaloniki, GR-54124 Thessaloniki, Greece; kontoleon@civil.auth.gr
* Correspondence: iasompak@civil.auth.gr
Featured Application: The use of Artificial Neural Networks for the prediction of heat transfer
through a variety of masonry wall build-ups exposed to elevated temperatures on one side.
Citation: Bakas, I.; Kontoleon, K.J. Performance Evaluation of Artificial Neural Networks (ANN) Predicting Heat Transfer through Masonry Walls Exposed to Fire. Appl. Sci. 2021, 11, 11435. https://doi.org/10.3390/app112311435
Abstract: The multiple benefits Artificial Neural Networks (ANNs) bring in terms of time expediency
and reduction in required resources establish them as an extremely useful tool for engineering
researchers and field practitioners. However, the blind acceptance of their predicted results needs to
be avoided, and a thorough review and assessment of the output are necessary prior to adopting
them in further research or field operations. This study explores the use of ANNs on a heat transfer
application. It features masonry wall assemblies exposed to elevated temperatures on one side,
as generated by the standard fire curve proposed by Eurocode EN1991-1-2. A juxtaposition with
previously published ANN development processes and protocols is attempted, while the end results
of the developed algorithms are evaluated in terms of accuracy and reliability. The significance of the
careful consideration of the density and quality of input data offered to the model, in conjunction
with an appropriate algorithm architecture, is highlighted. The risk of misleading metric results is
also brought to attention, while useful steps for mitigating such risks are discussed. Finally, proposals
for the further integration of ANNs in heat transfer research and applications are made.
Keywords: Machine Learning; Artificial Neural Networks; ANN performance; fire action; masonry
wall heat transfer; prediction evaluation; input data quality; performance index
Academic Editor: Francesco Salamone
Received: 24 October 2021
Accepted: 27 November 2021
Published: 2 December 2021
Publisher’s Note: MDPI stays neutral
with regard to jurisdictional claims in
published maps and institutional affiliations.
Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1. Introduction
1.1. Purpose and Innovation of This Work
Machine Learning, and Artificial Neural Networks in particular, is increasingly becoming the scientific method of choice for many engineering researchers and practitioners alike. Although the notion was introduced in 1956 [1], its practical adoption was delayed until the early 1990s [2], mainly due to the inadequate computational power available at the time.
Arguably, the benefits Artificial Neural Networks (ANNs) can bring to a scientific
project make them an attractive method for analysing data. Enabling researchers to make
predictions regarding the phenomenon they study without the need for computationally
heavy numerical models or expensive testing facilities, in conjunction with Finite Element
or BIM model analysis [3], significantly reduces the cost of their research. In the same direction, the reduction in resources used for experiments and the ability to draw information
from previous experimental work could put the use of ANNs at the forefront of efforts for
sustainability within the scientific community. In terms of reliability and performance, it
has been shown that, usually, ANNs outperform or are of equivalent accuracy to traditional
linear and nonlinear statistical analysis methods [4]. Despite these apparent advantages,
the adoption of ANNs in certain research fields, such as heat transfer and fire engineering
in the context of civil engineering, is still slow [5].
This study’s contribution is twofold: it aims to fill part of the ANN utilisation gap in
heat transfer research, on one hand, while stimulating a constructive dialogue regarding
the evaluation and significance of input selection for ANNs on the other. The first aim is
achieved through the development of an ANN architecture able to make predictions on
the thermal performance of masonry wall assemblies exposed to elevated temperatures. It
specifically proposes a neural network algorithm capable of predicting the temperature
development on the non-exposed face of the wall assemblies over time, when the standard
fire curve (as described in Eurocode EN1991-1-2) is applied to the other face of the wall.
Although there are a number of studies exploring the use of Machine Learning and Neural
Networks in applications such as structural health monitoring [6] and the residual capacity
of structural members [7], the adoption of these methods is still not as widespread within
the field of fire engineering as in other scientific domains [5]. Equally, there are studies
exploring the optimisation of the internal environment for user comfort through the use
of Artificial Intelligence [8]; the main focus, though, largely deviates from heat transfer
through structural members. The first part of this study’s unique contribution is that
it combines aspects of fire engineering (exposure to fire and the change of phase of the
components of the structural member) with the transient movement of heat through a
masonry wall. This work paves the way for a range of practical applications of heat transfer
through the composite, non-uniform, and perforated elements of varying material thermal
properties with prospects in the field of civil engineering and materials science.
The second aim is approached through a systematic evaluation of the aforementioned algorithm’s performance in terms of accuracy and reliability. Different metrics are used to
measure the credibility of the ANN in this specific application and an intuitive comparison
to the “ground truth” data is also employed to evaluate the final predictions. To ensure a
robust model development and evaluation process, a step-by-step approach is adopted,
closely following previously published ANN development protocols as a guideline [9]. The
developed evaluation methodology constitutes an initial step towards the establishment of
a standardised process of quantifying the input data impact on the ANN’s performance. A
parallel investigation of a wide range of research facets (other ML techniques, optimisation
algorithms, the integration of other scientific fields beyond fire engineering, and heat transfer, to name a few) could provide additional value to the current proposal. Nonetheless,
this initial attempt already contributes towards building confidence around the accuracy
and reliability of ANN results, and it sets an initial foundation for developing a filter and
evaluation process during the selection of input datasets through a quantitative and visual
assessment of ANN predictive performance.
Several scientific efforts have focused on optimising and assessing the impact of
alternative algorithm architectures and the choice of hyperparameters on the performance
of ANNs [10]; on the other hand, this study explores the impact of varying degrees of
input data quantity and quality. The initially developed ANN architecture is maintained
identically throughout the study, while portions of the input data are gradually withheld,
with the intention to observe the degradation of the ANN’s predictive capabilities. This
results in four different regressor models with identical structures but significantly different
ultimate performance. The predictions of each model are assessed and the risk of receiving
misleading results is commented upon. Eventually, conclusions and recommendations on
how to mitigate such risks arising from specific ANN metrics are made.
Arguably, the field of Machine Learning is constantly evolving and expanding to
incorporate new bodies of theoretical study and practical applications. An extensive and
analytical review of such works would undeniably be fruitful and beneficial. Nevertheless,
the nature of such a thorough investigation does not align directly with the objectives of
the current study. Cutting edge research currently attempts to optimise the architecture
and performance of Machine Learning algorithms by exploring solutions ranging from the
use of genetic algorithms [11] and dispersing the computational and data processing load
to peripheral computers (the Edge) [12], to developing state of the art spatial accelerators to
expedite big data processing [13]. Despite the utmost significance of these advancements,
this study aims to establish an evaluation regime for the more conventional form of ANNs,
which are still in the early stages of their implementation within the field of fire engineering
and built-environment heat transfer applications.
1.2. Advantages and Disadvantages of the Proposed Evaluation Method
Artificial Neural Networks are being used ever more extensively both in the context
of scientific research and industrial applications. Despite the multifaceted benefits their use
can bring to the research and industrial processes, careful consideration needs to be given
to achieve a balanced assessment of their advantages and drawbacks. The following list
outlines some of the main advantages of using ANNs within the context of heat transfer
and fire engineering and highlights the benefits of the methodology proposed herein in
terms of enhancing these favourable attributes:
• The use of ANNs can lead to a reduction in the cost and resources associated with further fire and heat transfer testing. This also contributes towards achieving the sustainability targets of research institutions and academic organisations by removing the need for the unnecessary use of fuel, test samples, and the construction of testing facilities. The proposed methodology provides an opportunity for a more efficient and accurate assessment of the ANN’s results.
• Artificial Neural Networks can also help reduce the time required for developing and analysing computationally heavy 3D Finite Element heat transfer models. This further reduces the cost associated with acquiring powerful computers to support the aforementioned models. Despite this powerful feature, the use of ANNs needs to be regulated and evaluated; the methodology developed herein sets the foundation for such an evaluation framework.
• ANNs can increase and simplify the reproducibility of heat transfer and fire performance experiments, introducing efficiency in the exploratory amendment of experiment parameters. Previously recorded figures can feed in and help construct the ANN model, providing the ability to then tweak and replicate the parameters to explore variations of the original experimental arrangements. As this process involves the amendment and adjustment of the input data of ANNs, the workflow introduced in this study can highlight the potential pitfalls arising from this retrospective processing.
• The methodology for input data review proposed herein aims to contribute towards the prevention of misleading or insufficiently precise results generated by the ANN before they are integrated into further research or field applications.
• This body of work also raises awareness regarding the need for a standardised methodology for assessing ANN performance and ANN model development.
On the other hand, some of the main disadvantages and limitations that need to be
addressed before ANNs and the proposed assessment methodology are fully integrated
into the scientific and industrial workflow include:
• The development of ANNs usually requires a large amount of input data, which are not always easy or affordable to obtain. The proposed methodology essentially introduces additional filters and methods for stricter input data quality assessment, which can make the above process even more involved.
• The interpretation of the ANN output is not always straightforward, and its precision can be difficult to judge unless a solid understanding of the expected results has been developed beforehand. The following sections of this study touch on this specific matter and make recommendations.
• To extend the scope of the scientific and industrial application of the proposed model within the context of the heat transfer and fire performance of wall assemblies, further research is needed to incorporate a wider range of material properties, fire loadings, and geometrical wall configurations.
• There is a plethora of proposed model architectures and ML approaches that could potentially enhance the performance of the proposed model. Although the detailed review and integration of such methodologies fall outside the scope of the present study, it would be beneficial for those to be reviewed and compared against the proposed model structure and topology.
1.3. Scope Limitations and Extended Application
The evaluation process presented in the following paragraphs focuses on a specific
application (heat transfer through masonry wall elements exposed to fire on one side) and,
as such, it is optimised for the parameters describing this particular case. The parameters
used as part of this study could be extended to capture additional features of the experimental and/or modelling data. This would provide a wider scope for the application
of the evaluation methodology within the context of heat transfer and fire engineering.
The additional parameters could include, but not be limited to, the moisture content of
the wall assembly materials, the different types and positions of insulation, a range of
different fire loads, ventilation parameters, the different compositions of masonry units
with the associated variations in their thermal properties, and the different qualities of
bedding and render mortar. The inclusion of a wider array of material properties, geometries, and thermal loading could greatly increase the scope of the modelling process and
evaluation method.
Another aspect that could enhance the flexibility of the current proposal would
be the integration of other Machine Learning (ML) techniques. Although an extensive
comparative study is beyond the scope of the present work, a replication of the proposed
process on the output generated by other ML algorithms and models would help build a
more holistic understanding of the method’s transferability.
Although there is wide scope for the further expansion of the proposed methodology,
a conscious effort has been made to primarily focus on its underpinning principles. This
provides an overview of the cornerstones of the evaluation concept and enables a straightforward transfer of its universally applicable values. The aim of the final conclusions and
recommendations is to furnish fellow researchers with a guideline on how to assess their
ANN output regardless of their specific scientific field and practical application.
1.4. Intuition of Artificial Neural Networks
Artificial Neural Networks (ANN) belong to the sphere of supervised Machine Learning (ML), which in turn falls under the wider concept of Artificial Intelligence (AI) [14].
The cornerstones of ANNs are their architecture and training data. In terms of architecture,
each ANN comprises multiple layers of interconnected nodes. These include the input
layer (input nodes), responsible for receiving and offering the input data to the network;
the output layer (output node), which represents the final prediction of the network; and
the hidden layer(s) in between, which capture and factor the various features present in the
available dataset. In the case of Deep Learning, the ANN category that the present study
falls under, the networks involve more than one hidden layer. These are fully connected;
all nodes of one layer are connected to all nodes of the previous and following layers.
The structure of ANNs resembles that of the human brain. Each node, mentioned above, represents a neuron, and the connections between them perform functions similar to brain synapses, as seen in Figure 1. Each neuron represents a feature of the
training data and can be outlined by Equation (1), where f(x) is an appropriately selected
activation function [15,16]:
P = f( ∑_{i=1}^{n} w_i x_i + b )    (1)
where P is the output figure of the neuron, w_i are the weights applied to each feature of the data fed to the neuron, x_i are the input features themselves, and b is the applied bias.
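As a concrete illustration, the sketch below evaluates Equation (1) for a single neuron in plain NumPy. The ReLU activation and all weight, bias, and input values are assumptions for demonstration purposes only; the activation actually used in this study is the one shown in Figure 1.

```python
import numpy as np

def neuron_output(x, w, b):
    """Evaluate Equation (1): P = f(sum_i(w_i * x_i) + b) for one neuron.

    f is assumed here to be the ReLU activation, max(0, z).
    """
    z = np.dot(w, x) + b        # weighted sum of the input features plus the bias
    return max(z, 0.0)          # activation f applied to the weighted sum

# Illustrative (hypothetical) inputs, weights, and bias
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
b = 0.05
print(neuron_output(x, w, b))   # -> 0.0 (negative weighted sum clipped by ReLU)
```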
Figure 1. Graphical representation of the neurons the ANN consists of. The illustrated activation function is indicative only
and represents the one used as part of this study.
Each neuron is initially assigned a random weight, quantifying the significance of the
represented feature to the value of the dependent variable. These weights are constantly
updated and ultimately optimised through an iterative training process (epoch). In each
training loop, the contribution of each neuron to the dependent variable and the overall
accuracy of the ANN algorithm prediction against a known value is evaluated. The process
of comparing the network predictions against known values (the ground truth) is the
founding principle of supervised learning [17]. To enable this internal assessment and
feedback process, the use of a loss function at the end of each epoch is necessary. This
feedforward, backpropagation training process is illustrated schematically in Figure 2.
Figure 2. Graphical representation of the feedforward and backpropagation process of a
typical ANN.
Similar to the architecture of the network, the choice of an appropriate loss function,
along with parameters including, but not limited to, the activation function, number of
epochs, learning rate, and batch size, is dependent on the type, quantity, and quality of the
input data and type of expected predictions [18]. These are called hyperparameters, and
in conjunction with the ANN structure (number of hidden layers and number of neurons
on each hidden layer) turn an ANN algorithm into a trainable and functioning predictive
model. There is no prescriptive method for standardising the process of ANN development
at the moment, in terms of hyperparameters and architecture selection [19]. However,
previous scientific studies summarise useful conclusions that provide some guidance in
this respect.
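By way of illustration, a minimal Keras sketch of such a fully connected regressor follows. The layer sizes, activation, optimiser, loss, and all hyperparameter values are placeholder assumptions, not the configuration adopted in this study, and the training data are random stand-ins.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder training data: 6 independent variables and one target temperature
rng = np.random.default_rng(0)
X_train = rng.random((1000, 6)).astype("float32")
y_train = rng.random((1000, 1)).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(6,)),               # input layer: one node per independent variable
    layers.Dense(64, activation="relu"),   # hidden layer 1
    layers.Dense(64, activation="relu"),   # hidden layer 2 (more than one -> Deep Learning)
    layers.Dense(1),                       # output node: the predicted temperature
])

# The loss function, learning rate, batch size, and number of epochs are the
# hyperparameters discussed above; the figures used here are illustrative only.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=0)
```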
Apart from developing an effective architecture and choosing appropriate hyperparameters, an aspect that requires attention is network overfitting [1]. Although an algorithm
can perform extremely well in making predictions on the given training and test set, there
is the risk of excessive focus on the provided observations, leading to the reduced general
applicability of the model. Overfitting limits the scope of use of an artificial neural network
and can potentially lead to misleading prediction results. The impact of input data quality
and quantity on the overfitting of the ANN is discussed in subsequent sections.
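As one common, generic mitigation (not necessarily the one adopted here), the sketch below, continuing the Keras example above, holds out a validation split and stops training once the validation loss stops improving; the patience value is illustrative.

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop training when the validation loss has not improved for 10 consecutive
# epochs and roll back to the best weights seen so far.
early_stop = EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True)
history = model.fit(X_train, y_train,
                    epochs=500, batch_size=32,
                    validation_split=0.2,        # 20% of the training data held out
                    callbacks=[early_stop], verbose=0)
```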
Fundamental details regarding the architecture of the ANN built and utilised for this
study are presented in the following paragraphs, allowing for a more holistic understanding
of the factors impacting the performance of the model. Similarly, the structure of the
input and ground truth data is thoroughly explained in the relevant chapter of this paper,
provoking thought on the relationship between input and output precision.
1.5. Input Data Reference
The development, implementation, and evaluation of the ANN algorithm constitute
the focal point of this work. The basis of the heat transfer input data, used to train the
ANN algorithm, became available after adjusting and interrogating finite element analysis
models developed as part of previous scientific research. Specifically, Kanellopoulos and
Koutsomarkos [20] set the foundation with their research on heat transfer through clay
brick walls, focusing on the contribution of radiation through the voids of the masonry
units. This was further developed by Kontoleon et al. [20,21] and their work on heat transfer
through insulated and non-insulated masonry wall assemblies. The various amendments
and pre-processing routines imposed on this foundation dataset are described in the
following paragraphs.
1.6. Software and Hardware Utilised for This Research
The analysis of the physical phenomenon of heat transfer was undertaken using
COMSOL Multiphysics® simulation software (v 5.3a). The development, training, and
review of the ANN algorithm were carried out using the programming language Python
and the application programming interface Keras along with its associated libraries. Other
libraries used include Pandas, NumPy, Statistics, Scikit-learn [22], Matplotlib, and Seaborn
(the last two for visualisation purposes). Finally, the ANN predictions were exported to
MS Excel for ease of diagram formatting and presentation.
The finite element analysis and the development and training of the ANN were
performed using a 64-bit operating system, running on an Intel Core i7-920 processor at 2.67 GHz, with 24 GB of RAM installed.
2. Modelling and Methods
2.1. Masonry Wall Assembly Finite Element (FE) Models
2.1.1. Geometrical Features of Models’ Components
The training of the ANN algorithm was based on data extracted from 30 different
wall assembly FE models. Despite having a core model onto which the structure of
the wall samples was founded, a range of variables was incorporated to enable a more
accurate representation of the phenomenon of heat transfer through the wall samples. This
pluralism also enhanced the understanding of the impact of various parameters on the
process, through the use of ANNs in the following stages of the study.
The foundation model involved the use of a single skin of perforated clay bricks
stacked with cement bedding mortar. In the simplest modelling case, either face of this
brick core was covered with a single layer of cement render before it was exposed to fire
on one side (details regarding the boundary conditions and fire load applied to the model
are included in the following paragraphs). Specifically, the masonry core consisted of
250 mm wide × 140 mm deep × 330 mm long clay bricks incorporating an array of 18 full-length holes (12 holes of 26 mm × 34.7 mm and 6 more elongated holes of 26 mm × 56 mm), as
indicated by the following sections in Figure 3. The bedding mortar and cement render
were consistently 10 mm thick.
Figure 3. Sections of typical masonry wall assemblies modelled as part of this study. (a) Wall section insulated externally,
(b) non-insulated wall section, (c) wall section insulated internally.
Two variations of this basic matrix model were then developed and analysed; namely,
a brick wall insulated with EPS internally (exposed to fire) and a brick wall insulated with
EPS externally (non-exposed face of the wall). For each case, two EPS layer thicknesses
were analysed: 50 mm and 100 mm.
2.1.2. Model Material Properties and Combinations
The range of material properties incorporated into the FE models enriched the available analysis cases and consequently provided a wide variety of output data. The properties
and their values considered as part of the FE analysis included:
• Clay brick density (ρ): 2000 kg/m³ and 1000 kg/m³.
• Clay brick thermal conductivity coefficient (λ): 0.8 W/(m·K) and 0.4 W/(m·K).
• Clay brick thermal emissivity coefficient (ε): 0.1, 0.5, 0.9.
• Insulation thickness (d): 0 mm, 50 mm, 100 mm.
The above range of variables and material property values allowed for the formation
of the following combinations. Each combination appearing in Table 1 represents a separate
FE model whose analysis output subsequently offered portions of the overall input data
for training, testing, and evaluating the ANN model.
The development of a complete finite element analysis model required the definition
of some additional parameters (which were not explicitly used for the development of the
ANN). The specific heat capacity (Cp ) of all components needed to be specified, along with
the density (ρ) and thermal conductivity coefficient (λ) of the cement mortar and insulation
panels. Table 2 summarises this additional information.
Table 1. Material property combinations defining the various FE models analysed.

| Insulation Position and Thickness | Brick Density (kg/m³) | Thermal Conductivity Coefficient (W/(m·K)) | Thermal Emissivity Coefficient | Sample Reference ¹ |
|---|---|---|---|---|
| No insulation | 1000 | 0.4 | 0.1 / 0.5 / 0.9 | Smpl1-1 / Smpl1-2 / Smpl1-3 |
| No insulation | 2000 | 0.8 | 0.1 / 0.5 / 0.9 | Smpl2-1 / Smpl2-2 / Smpl2-3 |
| Non-exposed EPS (50 mm) | 1000 | 0.4 | 0.1 / 0.5 / 0.9 | Smpl3-1 / Smpl3-2 / Smpl3-3 |
| Non-exposed EPS (50 mm) | 2000 | 0.8 | 0.1 / 0.5 / 0.9 | Smpl4-1 / Smpl4-2 / Smpl4-3 |
| EPS exposed to fire (50 mm) | 1000 | 0.4 | 0.1 / 0.5 / 0.9 | Smpl5-1 / Smpl5-2 / Smpl5-3 |
| EPS exposed to fire (50 mm) | 2000 | 0.8 | 0.1 / 0.5 / 0.9 | Smpl6-1 / Smpl6-2 / Smpl6-3 |
| Non-exposed EPS (100 mm) | 1000 | 0.4 | 0.1 / 0.5 / 0.9 | Smpl7-1 / Smpl7-2 / Smpl7-3 |
| Non-exposed EPS (100 mm) | 2000 | 0.8 | 0.1 / 0.5 / 0.9 | Smpl8-1 / Smpl8-2 / Smpl8-3 |
| EPS exposed to fire (100 mm) | 1000 | 0.4 | 0.1 / 0.5 / 0.9 | Smpl9-1 / Smpl9-2 / Smpl9-3 |
| EPS exposed to fire (100 mm) | 2000 | 0.8 | 0.1 / 0.5 / 0.9 | Smpl10-1 / Smpl10-2 / Smpl10-3 |

¹ Only used for ease of reference in the following sections of this study and for cross-referencing with model analysis files by the research team.
Table 2. Other material properties used for the development of the FE models.

| Material | Density (kg/m³) | Thermal Conductivity Coefficient (W/(m·K)) | Specific Heat Capacity (J/(kg·K)) |
|---|---|---|---|
| Clay bricks | As above | As above | 1000 |
| Insulation (EPS) | 30 | 0.035 | 1500 |
| Cement mortar | 2000 | 1.400 | 1000 |
It is worth highlighting that although most of the thermophysical and mechanical
properties of the masonry structure fluctuate depending on the temperature of the components [23], for the needs of this study they have all been assumed to remain constant
throughout the development of the phenomenon. It has also been shown that clay brick
spalling affects the thermal performance of the masonry wall when exposed to fire [24,25];
this parameter has not been considered in the finite element models. The dramatic impact
on the insulation’s performance, due to phase change when exposed to high temperatures,
could not be ignored. The modelling convention used to reflect this characteristic behaviour
is explained in the following paragraphs.
2.2. Heat Transfer Analysis, Fire Load, Assumptions, and Conventions
To effectively assess the performance of the ANN and evaluate the accuracy of its
predictions, a basic description and understanding of the physical phenomenon under
consideration are deemed necessary. The founding principles and equations of heat transfer
are presented herein, in conjunction with the various assumptions, simplifications, and
conventions used when constructing the relevant finite element analysis model.
2.2.1. Heat Transfer Fundamentals
Although the aim of this study is not to review and present the fundamental principles
of heat transfer, it is worth briefly presenting the basic mechanisms operating on the finite
element analysis model. The analysis commences with the wall samples in thermal balance,
with an ambient room temperature applied on either side. At time t = 0 sec, an increasing
fire load (as described in the following paragraph) is applied on one side (hereon referred
to as “exposed”), initiating the combined heat transfer mechanism of convection and
radiation. The fundamental Equations (2) and (3) govern the convective and radiation heat
transfers, respectively [26], from the fire front towards the exposed wall surface:
q″_conv = h (T_s − T_∞)    (2)

q″_rad = ε σ (T_s⁴ − T_sur⁴)    (3)

where q″_conv and q″_rad are the resulting heat fluxes due to convection and radiation, respectively; h is the convection heat transfer coefficient (W/(m²·K)); T_s is the surface temperature; T_∞ is the surrounding fluid temperature (for convection); T_sur is the surrounding environment temperature (for radiation); ε is the emissivity coefficient; and σ is the Stefan–Boltzmann constant (σ = 5.67 × 10⁻⁸ W/(m²·K⁴)).
The heat transfer mechanism of conduction is also activated as heat progressively
travels through the solid parts of the wall assembly. Equation (4) is the governing relationship for heat conduction, and it was used to replicate the phenomenon on the finite
element analysis model. The clay brick cavities play an integral part in the transfer of heat
through the sample [27], and thus attention was paid to modelling them accurately. Heat
transfer through radiation between the cavity walls and convection through air movement
within the cavities has been considered. The previously described convection and radiation
formulas apply:
q″_cond,x = −λ ∂T/∂x    (4)

where q″_cond,x is the resulting heat flux due to conduction, λ is the thermal conductivity, and ∂T/∂x is the temperature gradient in the direction of the heat’s transient movement.
The final stage of the transient movement of heat through the wall sample is its release
to the environment through the non-exposed face of the section. The applicable heat
transfer mechanisms are convection, through the surrounding air, and radiation.
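For illustration, the three flux components of Equations (2)–(4) can be evaluated directly, as in the short sketch below; the numerical inputs are arbitrary example values, not figures taken from the FE models.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def q_conv(h, T_s, T_inf):
    """Convective heat flux, Equation (2): q'' = h * (T_s - T_inf)."""
    return h * (T_s - T_inf)

def q_rad(eps, T_s, T_sur):
    """Radiative heat flux, Equation (3): q'' = eps * sigma * (T_s**4 - T_sur**4)."""
    return eps * SIGMA * (T_s**4 - T_sur**4)

def q_cond(lam, dT_dx):
    """Conductive heat flux, Equation (4): q'' = -lambda * dT/dx."""
    return -lam * dT_dx

# Example: a 293.15 K wall surface facing 600 K surroundings (all values illustrative)
print(q_conv(h=25.0, T_s=293.15, T_inf=600.0))   # negative: heat flows towards the surface
print(q_rad(eps=0.9, T_s=293.15, T_sur=600.0))
print(q_cond(lam=0.8, dT_dx=-500.0))             # lambda of the denser clay brick, per Table 1
```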
2.2.2. Boundary Conditions and Fire Load
A definition of the applicable boundary conditions is necessary before the solution of
the differential equations can be attempted. The analysis initiates assuming an ambient
room temperature of 20 ◦ C on both faces of the wall assembly. This condition is adhered to
throughout the analysis for the non-exposed wall face.
At time t = 0 sec, the “exposed” wall face becomes subject to the increasing fire load
represented by the standard fire curve ISO 834 as present in Eurocode EN1991-1-2 [28].
Although scientific research is being carried out to identify and compile new, more accurate
fire curves and loading regimes [29], it was considered that the well-established methodology proposed by the Eurocodes currently fulfils the needs of this study. Equation (5)
mathematically describes the relationship of temperature increase due to the applied fire
load over time, while Figure 4 is the corresponding visual representation:
Q_g = 20 + 345 log₁₀(8t + 1)    (5)

where t is the time in minutes and Q_g is the developed temperature in °C.
Figure 4. Typical fire curve (ISO 834) applied to finite element analysis models as seen in EN1991-1-2.
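A direct transcription of Equation (5) into Python is sketched below; the sampled time points are illustrative only.

```python
import math

def standard_fire_curve(t_min):
    """Gas temperature (deg C) of the ISO 834 standard fire curve, Equation (5),
    with t expressed in minutes."""
    return 20.0 + 345.0 * math.log10(8.0 * t_min + 1.0)

# Temperature development over the 6 h (360 min) analysis window
for t in (1, 5, 30, 60, 180, 360):
    print(f"t = {t:3d} min -> {standard_fire_curve(t):7.1f} deg C")
```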
The boundaries between the various layers of the wall assembly follow the fundamental heat transfer equations, as described in the previous paragraph.
2.2.3. Modeling Heat Transfer Assumptions and Conventions
An element that required special attention was the behaviour of EPS when exposed to
significantly high temperatures. The material is a combustible thermoplastic that remains
stable when exposed to temperatures up to 100 ◦ C. Its thermal properties start to rapidly
degrade shortly after that, while temperatures upwards of 160 ◦ C lead to the complete loss
of the inflated bead structure and the liquefaction of the insulation boards. Approaching
temperatures of 278 °C leads to the gasification of the material [30]. Given that the wall samples were subjected to temperatures significantly higher than 100 °C, the above behaviour had to be simulated.
Conventionally, and accepting that no material can be removed from the finite element
analysis model once the analysis has commenced, a temperature-dependent variable was
utilised to reproduce the reduced performance and ultimate collapse of EPS. The thermal
conductivity coefficient (λEPS ) of the insulating material was artificially increased when
temperatures beyond 150 ◦ C were encountered. That allowed for the unhindered transfer
of heat through the melting EPS boards, resembling the gradual removal of the physical
barrier. The coefficient was linearly increased from a value of λEPS = 0.035 W/(m·K) at
150 ◦ C to λEPS = 20 W/(m·K) at 200 ◦ C (practically no heat transfer resistance). Similarly,
its density (ρ_EPS) and specific heat capacity (Cp_EPS) were reduced to negligible figures.
Figure 5 demonstrates the steep material property changes for the melting EPS layer.
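A minimal sketch of this modelling convention follows, assuming a simple piecewise-linear interpolation between the anchor points stated above.

```python
def eps_conductivity(T_c):
    """Artificial temperature-dependent thermal conductivity of the EPS layer
    (W/(m*K)): 0.035 up to 150 deg C, a linear ramp to 20 at 200 deg C, then
    held constant so the destroyed board offers practically no resistance."""
    if T_c <= 150.0:
        return 0.035
    if T_c >= 200.0:
        return 20.0
    return 0.035 + (20.0 - 0.035) * (T_c - 150.0) / (200.0 - 150.0)

print(eps_conductivity(100.0))   # 0.035 -> intact insulation
print(eps_conductivity(175.0))   # mid-ramp: board melting
print(eps_conductivity(250.0))   # 20.0  -> board effectively destroyed
```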
It is worth highlighting that the failure of the EPS material would naturally result in
the detachment and collapse of the attached render. To ensure the removal of the render’s
beneficial insulating function, a similar approach was adopted, wherein all associated
thermophysical properties were altered to reproduce its collapse. To ensure the change was
introduced in a timely manner, an exploratory analysis was performed to identify the time
for the interface between render and EPS to reach the critical temperature of 150 ◦ C. This
threshold temperature was achieved at t = 240 sec. The thermal conductivity coefficient
(λrender ) was increased from λrender = 1.40 W/(m·K) to λrender = 2000 W/(m·K), while its
density (ρrender ) and specific heat capacity (Cprender ) were reduced to negligible values.
Although the same process would normally apply to both faces of the wall (exposed
to fire and non-exposed), only the properties of the exposed insulation and render were
altered within the scope of the present study. Similarly, no allowance has been made for
the ignition of EPS volatiles.
Figure 5. Artificial steep modification of insulation material properties to emulate its degradation
until complete destruction due to fire exposure.
2.3. ANN Complete Input Dataset (CID)
Since part of the algorithm’s structure (the size of the input layer) depends on the
structure of the input dataset, it was considered important to finalise the data format
prior to commencing the development of the ANN itself. A brief outline of the parameters
used has been given in the description of the FE models; these were further refined and
structured in an appropriate CSV file format ready for reading by the algorithm.
The file incorporated columns representing the independent variables defining the FE
models described in previous paragraphs. A “timestamp” column was also added to allow
for the observation and correlation of the temperature magnitude on the non-exposed
face of the wall over time as the phenomenon developed. The last column of the dataset
included the output of the FE model analysis, the temperature observed on the non-exposed
face of the wall at a 30 sec time step when the standard Eurocode fire curve was applied
to the other face. Table 3 provides a typical section of the dataset file offered to the ANN
algorithm; the example reflects the values used for an internally insulated wall panel.
Table 3. Representative example of the dataset structure and contents.

| Index | Sample Ref | Brick Density | Thermal Conductivity Coef. | Thermal Emissivity Coef. | Insulation Thickness | Insulation Type | Insulation Position | Time | Temperature of Non-Exposed Face |
|---|---|---|---|---|---|---|---|---|---|
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 11,449 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 18,990 | 41.77246 |
| 11,450 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 19,020 | 41.85622 |
| 11,451 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 19,050 | 41.94014 |
| 11,452 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 19,080 | 42.02421 |
| 11,453 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 19,110 | 42.10843 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
It is apparent that not all columns were necessary for the training, testing, and evaluation of the algorithm, thus the input data was further modified and stripped into a
more appropriate format as part of the “preprocessing” phase (see following paragraphs).
Nevertheless, these variables and references are useful for an intuitive understanding of the
input data content and structure. Variables referring to material properties were discussed
in detail in previous paragraphs. Other columns include:
• Index—a variable purely counting the number of unique observations included in the data. Each temperature measurement generated by the FE model analysis (time step of 30 s) is used as a separate observation. The complete dataset includes a total of 21,630 observations. The index was excluded from any training or testing of the algorithm.
• Sample reference—this variable enabled the research team to easily cross-reference between the dataset tables and the analysis files. Similarly, it was excluded from any training or testing of the neural network.
• Insulation type—the data structure and ANN algorithm were developed with the intention of incorporating and analysing various insulation materials. Although the present study only considers EPS, it was considered useful to build some flexibility into the algorithm, enabling the further expansion of the scope of work in the future. It takes the values “EPS” and “NoIns”, representing the insulated and non-insulated wall samples, respectively. This categorical variable was encoded into a numerical one as part of the pre-processing phase.
• Insulation position—this variable represents the position of the insulation. It takes three values: “Int”, “Ext”, and “AbsIns”, representing insulation exposed to fire (internal insulation), insulation not exposed to fire (external insulation), and non-insulated wall samples, respectively. As a categorical variable, this was also encoded as part of the pre-processing phase.
• Time—to enable the close observation of the temperature development on the non-exposed face of the wall over time, it was considered necessary to include a “timestamp” variable. This was obtained directly from the FE model analysis output, where time and temperature are given as two separate columns. The temperature is measured every 30 s for a duration of 6 h, giving a total of 720 observations for each of the 30 models.
• Temperature of the non-exposed face—this is the output of the finite element analysis models in °C, as generated by COMSOL Multiphysics® simulation software. It reflects the temperature developed gradually within a reference area on the non-exposed face of the wall panels. This constitutes the dependent variable of the dataset and is ultimately the figure that the ANN algorithm will be trying to predict in the following steps.
It is apparent that the above set of parameters and values are relevant only to the
specific masonry wall heat transfer application presented in this study. However, the
process of compiling a group of independent variables and organising them into a dataset
structure appropriate for analysis in Python is universal. Depending on the number of the
recorded independent variables, the architecture of the network might need to be adapted
(more or fewer input neurons), different hyperparameters might generate better results, or
slightly different preprocessing methods might be applicable (scaling might/might not be
necessary, encoding of categoric variables might be needed or not, etc). The list of used
variables, and their range of values given in the preceding Tables 1 and 3, should provide
a guide for understanding the form of the dataset and possibly substituting it with other
data available to interested research parties. Similarly, the list of hyperparameters included
in the following sections of the study, along with the values used for this research, should
provide adequate detail for understanding and replicating the structure of the ANN itself,
if desired.
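As a guide, a minimal pre-processing sketch in Python is given below. The file name and column labels mirror Table 3 but are assumptions about the exact CSV layout, and the 80/20 train/test split is illustrative.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("wall_dataset.csv")        # hypothetical file name

# Drop the bookkeeping columns excluded from training and testing
df = df.drop(columns=["Index", "SampleRef"])

# Encode the categorical variables numerically (one-hot encoding shown here)
df = pd.get_dummies(df, columns=["InsulationType", "InsulationPosition"])

X = df.drop(columns=["Temperature"])        # independent variables
y = df["Temperature"]                       # dependent variable (non-exposed face)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scale the features where the chosen architecture requires it
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```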
2.4. Test Cases Examined
Part of this study’s unique contribution is to examine the impact of varying degrees of
input data quantity and quality on the performance of the ANN model. An attempt was
made to isolate and assess the influence of data by keeping the same algorithm architecture
and gradually altering the amount of information provided for training. This provided
App
App
App
App
App
App
App
App
Sc
Sc
Sc
Sc
Sc
Sc
Sc
Sc
was madewas
to isolate
madewas
and
to isolate
assess
madewas
and
to
theisolate
assess
made
influence
and
to
theisolate
assess
of
influence
data
and
the
byassess
of
influence
keeping
datathe
bythe
of
influence
keeping
data
sameby
algorithm
the
of
keeping
data
sameby
algorithm
archithe
keeping
same algorithm
archithe same alg
ar
should
provide
detail
for
understanding
and
replicating
the
structure
the
tecture
and
tecture
gradually
and
tecture
gradually
altering
and
tecture
the
gradually
altering
amount
and
the
gradually
altering
ofand
amount
information
the
altering
of
amount
information
provided
the
of
amount
information
for
provided
training.
of
information
for
provided
This
training.
profor
provided
This
training.
proforThis
train
p
fewer
(more
input
or
fewer
(more
neurons),
input
or
fewer
(more
neurons),
different
input
or
fewer
hyperparameters
neurons),
different
input
hyperparameters
neurons),
different
might
hyperparameters
different
generate
might
hyperparameters
better
generate
might
results,
better
generate
might
results,
better
generate
resu
ANN
itself,
if encoding
desired.
ANN
itself,
if
desired.
vided
aor
level
vided
ground
aadequate
level
for
ground
comparing
for
comparing
the
algorithms,
the
algorithms,
without
introducing
without
introducing
inconsistencies
inconsistencies
or
slightly
different
or
preprocessing
slightly
different
methods
preprocessing
might
be
applicable
methods
might
(scaling
be
might/might
applicable
(scaling
might/might
2.4.
Test
Cases
Examined
due
to
hyperparameter
due
to
hyperparameter
due
and
to
hyperparameter
architecture
due
and
to
hyperparameter
architecture
variations.
architecture
variations.
and
architecture
variations.
variations.
2021
App11 Sc
x FOR
2021PEER
11 xREV
FOR(more
EW
PEER
REV
EW
13of
o
27
13
o
be
necessary,
of
categoric
variables
might
be
needed
or
not,
etc).
The
list
of
used
variables,
and
their
range
variables,
of
values
and
their
given
range
in
the
of
preceding
values
given
Tables
in
the
1on
and
preceding
3,
should
Tables
provide
1not
and
3,
should
prov
adata
guide
for
understanding
acluded
guide
the
for
form
understanding
of
the
dataset
the
and
form
possibly
of
the
dataset
substituting
and
possibly
it
with
other
substituting
it27
with
o
Part
of
this
study’s
unique
Part
of
contribution
this
study’s
unique
is
to
examine
contribution
impact
is
to
examine
of
varying
the
degrees
impact
of
varying
deg
available
to
interested
research
parties.
Similarly,
the
list
of
hyperparameters
inof
input
data
of
input
quantity
data
of
input
and
quantity
quality
data
of
input
and
quantity
on
quality
data
the
performance
and
quantity
on
quality
the
performance
and
on
of
quality
the
the
performance
ANN
of
the
the
model.
performance
ANN
of
An
the
model.
attempt
ANN
of
An
the
model.
attempt
ANN
An
mode
attem
cluded
in
the
following
sections
in
the
of
following
the
study,
sections
along
with
of
the
the
study,
values
along
used
with
for
this
the
research,
values
used
for
this
resea
was
made
was
to
isolate
made
and
to
isolate
assess
and
the
assess
influence
the
of
influence
data
by
of
keeping
data
by
the
keeping
same
algorithm
the
same
algorithm
archiarchishould
provide
adequate
should
detail
provide
for
adequate
understanding
detail
and
for
understanding
replicating
and
structure
replicating
of
the
the
structure
of
tecture
and
tecture
gradually
and
tecture
gradually
altering
and
tecture
the
gradually
altering
amount
and
the
gradually
altering
of
amount
information
the
altering
of
amount
information
provided
the
of
amount
information
for
provided
training.
of
information
for
provided
This
training.
profor
provided
This
training.
profor
This
train
p
(more
or
fewer
(more
input
or
fewer
neurons),
input
neurons),
different
hyperparameters
different
hyperparameters
might
generate
might
better
generate
results,
better
results,
ANN
itself,
if
desired.
vided
a
level
vided
ground
a
level
vided
for
ground
comparing
a
level
vided
for
ground
comparing
a
the
level
algorithms,
for
ground
comparing
the
algorithms,
for
without
comparing
the
algorithms,
introducing
without
the
algorithms,
introducing
without
inconsistencies
introducing
without
inconsistencies
introducing
inconsisten
in
or
slightly
or
different
slightly
or
preprocessing
different
slightly
or
preprocessing
different
slightly
methods
preprocessing
different
might
methods
preprocessing
be
applicable
might
methods
be
applicable
might
(scaling
methods
be
might/might
applicable
might
(scaling
be
might/might
applicable
(scaling
not
might/might
(scaling
not
mi
2.4.
Test
Cases
Examined
2.4.
Test
Cases
Examined
due
to
hyperparameter
due
to
hyperparameter
and
architecture
and
architecture
variations.
variations.
2021
App11 Sc
x FOR
2021
App
PEER
11 Sc
xREV
FOR
2021
EW
App
PEER
11
Sc
xREV
FOR
2021
EW
PEER
11
xREV
FOR
EW
PEER
REV
EW
13
onot,
27
13
othe
27list
13ouo
be
necessary,
encoding
be
necessary,
of
categoric
encoding
variables
of
might
categoric
be needed
variables
or
not,
might
be
The
needed
list
of
or
used
of
Table
4understanding
summarises
Table
4guide
summarises
Table
the
input
4understanding
summarises
Table
data
the
input
used
4insummarises
data
the
for
training
input
used
data
the
for
each
training
input
used
of
data
the
for
each
training
4etc).
used
ANN
of
the
for
models.
each
training
4the
ANN
of
the
The
models.
each
4etc).
ANN
ofThe
The
4 ANN
variables,
and
their
range
ofthe
values
given
the
preceding
Tables
1Similarly,
and
3,
should
provide
adata
guide
for
adata
for
form
of
the
dataset
the
and
form
possibly
of
the
dataset
substituting
and
possibly
it
with
other
substituting
itmodels.
with
Part
of
this
study’s
unique
contribution
is
to
examine
impact
of
varying
degrees
available
to
interested
available
research
to
interested
parties.
Similarly,
research
the
parties.
list
of
hyperparameters
the
list
of
inhyperparameters
of
input
data
quantity
of
input
and
quality
data
quantity
on
the
performance
and
quality
on
of
the
the
performance
ANN
model.
of
An
attempt
ANN
model.
An
attem
cluded
in
the
following
sections
of
the
study,
along
with
the
values
used
for
this
research,
was
made
was
to
isolate
made
was
and
to
isolate
assess
made
was
and
to
the
isolate
assess
made
influence
and
to
the
isolate
assess
of
influence
data
and
the
by
assess
of
influence
keeping
data
the
by
the
of
influence
keeping
data
same
by
algorithm
the
of
keeping
data
same
by
algorithm
archithe
keeping
same
algorithm
archithe
same
alg
ar
should
provide
adequate
should
detail
provide
for
adequate
understanding
detail
and
for
understanding
replicating
the
and
structure
replicating
of
the
the
structure
of
tecture
and
tecture
gradually
and
gradually
altering
the
altering
amount
the
of
amount
information
of
information
provided
for
provided
training.
for
This
training.
proThis
pro(more
or
fewer
(more
input
or
fewer
(more
neurons),
input
or
fewer
(more
neurons),
different
input
or
fewer
hyperparameters
neurons),
different
input
hyperparameters
neurons),
different
might
hyperparameters
different
generate
might
hyperparameters
better
generate
might
results,
better
generate
might
results,
better
generate
resu
ANN
itself,
if
desired.
ANN
itself,
if
desired.
vided
a
level
vided
ground
a
level
vided
for
ground
comparing
a
level
vided
for
ground
comparing
a
the
level
algorithms,
for
ground
comparing
the
algorithms,
for
without
comparing
the
algorithms,
introducing
without
the
algorithms,
introducing
without
inconsistencies
introducing
without
inconsistencies
introducing
inconsisten
in
or
slightly
or
different
slightly
preprocessing
different
preprocessing
methods
might
methods
be
applicable
might
be
applicable
(scaling
might/might
(scaling
might/might
not
not
2.4.
Test
Cases
Examined
2.4.
Test
Cases
Examined
due
to
hyperparameter
due
to
hyperparameter
due
and
to
hyperparameter
architecture
due
and
to
hyperparameter
architecture
variations.
and
architecture
variations.
and
architecture
variations.
variations.
2021
App11 Sc
x FOR
2021
App
PEER
11 Sc
xREV
FOR
2021
EW
App
PEER
11
Sc
x
REV
FOR
2021
EW
PEER
11
x
REV
FOR
EW
PEER
REV
EW
13
o
27
13
o
27
13
o
be
necessary,
be
necessary,
encoding
be
necessary,
of
encoding
categoric
be
necessary,
of
encoding
variables
categoric
of
encoding
might
variables
categoric
be
of
needed
might
variables
categoric
be
or
needed
not,
might
variables
etc).
be
or
The
needed
not,
might
list
etc).
be
of
or
The
used
needed
not,
list
etc).
of
or
The
used
not,
list
etc).
of
T
u
Table
4understanding
summarises
Table
4 summarises
the
input
data
the
input
used
data
for
training
used
for
each
training
of
the
each
4model.
ANN
of
the
4the
ANN
The
models.
The
variables,
and
their
range
variables,
ofthe
values
and
their
given
range
indeveloped
the
of
preceding
values
given
Tables
indeveloped
the
1comprising
and
preceding
3,for
should
Tables
provide
1comprising
and
3,for
should
prov
original
algorithm
original
algorithm
(ANN
original
1)
was
algorithm
(ANN
original
developed
1)
was
algorithm
(ANN
using
1)
was
(ANN
the
developed
full
using
1)the
dataset,
was
the
full
using
dataset,
the
full
using
comprising
dataset,
the
the
entirety
full
dataset,
the
entirety
comprisin
theresea
enti
adata
guide
for
form
of
the
dataset
and
possibly
substituting
itmodels.
with
other
Part
of
this
study’s
unique
Part
of
contribution
this
study’s
unique
is
to
examine
contribution
the
impact
is
to
examine
of
varying
degrees
impact
of
varying
deg
available
to
interested
data
available
research
to
interested
parties.
Similarly,
research
the
parties.
list
of
Similarly,
hyperparameters
the
list
of
inhyperparameters
of
input
data
quantity
and
quality
on
the
performance
of
the
ANN
An
attempt
cluded
in
the
following
cluded
sections
in
the
of
following
the
study,
sections
along
with
of
the
study,
values
along
used
with
this
the
research,
values
used
this
was
made
to
isolate
was
and
assess
made
to
the
isolate
influence
and
assess
of
data
the
by
influence
keeping
the
of
data
same
by
algorithm
keeping
archithe
same
algorithm
arp
provide
detail
for
understanding
and
replicating
the
structure
of
the
tecture
and
tecture
gradually
and
tecture
gradually
altering
and
tecture
the
gradually
altering
amount
and
the
gradually
altering
amount
information
the
altering
of
amount
information
provided
the
of
amount
information
for
provided
training.
of
information
for
provided
This
training.
profor
provided
This
training.
profor
This
train
(more
fewer
(more
input
or
fewer
(more
neurons),
nput
or
fewer
(more
neurons)
different
nput
or
fewer
hyperparameters
neurons)
dof
fferent
nput
hyperparameters
neurons)
d
fferent
might
hyperparameters
d
fferent
generate
m
hyperparameters
ght
better
generate
m
results,
ght
better
generate
m
resu
ght
better
generate
ts
resu
ANN
itself,
if
desired.
ANN
itself,
if
desired.
vided
aor
level
vided
ground
aadequate
level
for
ground
comparing
for
comparing
the
algorithms,
the
algorithms,
without
introducing
without
introducing
inconsistencies
inconsistencies
or
slightly
or
different
slightly
or
preprocessing
different
slightly
or
preprocessing
different
slightly
methods
preprocessing
different
might
methods
preprocessing
be
applicable
might
methods
be
applicable
might
(scaling
methods
be
might/might
applicable
might
(scaling
be
might/might
applicable
(scaling
not
might/might
(scaling
not
mi
2.4.
Test
Cases
Examined
due
to
hyperparameter
due
to
hyperparameter
due
and
to
hyperparameter
architecture
due
and
to
hyperparameter
architecture
variations.
and
architecture
variations.
and
architecture
variations.
variations.
2021
App11 Sc
x FOR
2021PEER
11 xREV
FORshould
EW
PEER
REV
EW
13
o
27
13
o
27
be
necessary,
be
necessary,
encoding
of
encoding
categoric
of
variables
categoric
might
variables
be
needed
might
be
or
needed
not,
etc).
or
The
not,
list
etc).
of
The
used
list
of
used
Table
4
summarises
Table
4
summarises
Table
the
input
4
summarises
Table
data
the
input
used
4
summarises
data
the
for
training
input
used
data
the
for
each
training
input
used
of
data
the
for
each
training
4
used
ANN
of
the
for
models.
each
training
4
ANN
of
the
The
models.
each
4
ANN
of
the
The
models.
4
ANN
variables,
variables,
and
their
range
variables,
and
their
of
values
range
variables,
and
their
given
of
values
range
and
in
the
their
given
of
preceding
values
range
in
the
given
of
preceding
Tables
values
in
the
1
given
and
preceding
Tables
3,
in
should
the
1
and
preceding
Tables
provide
3,
should
1
and
Tables
provide
3,
should
1
and
prov
3,
sh
original
algorithm
original
algorithm
(ANN
1)
was
(ANN
developed
1)
was
developed
using
the
full
using
dataset,
the
full
comprising
dataset,
comprising
the
entirety
the
entirety
acluded
guide
for
understanding
a
guide
the
for
form
understanding
of
the
dataset
the
and
form
possibly
of
the
dataset
substituting
and
possibly
it
with
other
substituting
it
with
o
Part
of
this
study’s
unique
Part
of
contribution
this
study’s
unique
is
to
examine
contribution
impact
is
to
examine
of
varying
the
degrees
impact
of
varying
deg
of
the
data
of
obtained
the
data
of
through
obtained
the
data
of
the
through
obtained
the
FE
data
analysis,
the
through
obtained
FE
as
analysis,
the
described
through
FE
as
analysis,
the
described
in
detail
FE
as
analysis,
described
in
in
previous
detail
as
described
in
in
paraprevious
detail
in
in
paraprevious
detail
in
p
pp
data
available
to
interested
research
parties.
Similarly,
the
list
of
hyperparameters
ininput
data
quantity
input
and
quality
data
quantity
on
the
performance
and
quality
on
of
the
the
performance
ANN
model.
of
An
the
attempt
ANN
model.
An
attem
in
the
following
cluded
sections
in
the
of
following
the
study,
sections
along
with
of
the
the
study,
values
along
used
with
for
this
the
research,
values
used
for
this
resea
was
made
to
isolate
and
assess
the
influence
of
data
by
keeping
the
same
algorithm
archishould
provide
adequate
should
detail
provide
for
adequate
understanding
detail
and
for
understanding
replicating
the
and
structure
replicating
of
the
the
structure
of
tecture
and
gradually
tecture
altering
and
the
gradually
amount
altering
of
information
the
amount
provided
of
information
for
training.
provided
This
profor
training.
This
(more
or
fewer
(more
nput
or
fewer
neurons)
nput
neurons)
d
fferent
hyperparameters
d
fferent
hyperparameters
m
ght
generate
m
ght
better
generate
resu
better
ts
resu
ts
ANN
itself,
if
desired.
vided
aFOR
level
vided
ground
aFOR
level
vided
for
ground
comparing
athe
level
vided
for
ground
comparing
athe
level
algorithms,
for
ground
comparing
the
algorithms,
for
without
comparing
the
algorithms,
introducing
without
the
algorithms,
introducing
without
inconsistencies
introducing
without
inconsistencies
introducing
inconsisten
in
or
slightly
or
different
snecessary,
ght
yrange
or
preprocessing
d
sfferent
ght
y
or
preprocess
d
sfferent
methods
ght
y
preprocess
d
ng
fferent
might
methods
preprocess
be
ng
applicable
m
methods
ght
be
ng
app
m
(scaling
methods
ght
cab
be
might/might
eof
app
m
(sca
ght
cab
ng
be
m
eof
app
(sca
ght/m
not
cab
ng
ght
m
eofother
(sca
ght/m
not
ng
ght
moT
2.4.
Test
Cases
Examined
2.4.
Test
Cases
Examined
due
to
hyperparameter
due
to
hyperparameter
and
architecture
and
architecture
variations.
variations.
2.4. Test Cases Examined

The original algorithm (ANN 1) was developed using the full dataset, comprising the entirety of the data obtained through the FE analysis, as described in detail in previous paragraphs. Each subsequent algorithm was trained with a subset of the original input information. Specifically, the cases examined include (a data-filtering sketch follows the list):

• ANN 1: As mentioned above, this uses the complete dataset for training and testing purposes.
• ANN 2: The second algorithm was developed using only the extreme values of insulation thickness. As such, the wall assemblies considered included the non-insulated ones and those insulated with 100 mm of EPS internally and externally.
• ANN 3: Only the extreme values of the emissivity coefficient were used for the development of the third algorithm. Wall assemblies with ε = 0.5 were disregarded and only those with ε = 0.1 and ε = 0.9 were included in the dataset.
• ANN 4: This was the most input data-deprived algorithm—a combination of the previous two cases. Only the extreme cases of insulation thickness and thermal emissivity coefficient were included in the dataset.
original
algorithm
(ANN
original
1)
was
algorithm
developed
(ANN
using
1)
was
the
developed
full
dataset,
using
the
full
dataset,
the
entirety
comprising
the
enti
•n
ANN
1:
As
mentioned
above,
this
uses
the
complete
dataset
for
training
and
•mation.
ANN
2:
The
second
•mation.
ANN
2:
The
was
second
developed
using
was
only
developed
the
extreme
using
values
only
of
the
insuextreme
values
of
in
Part
of
th
Part
scases.
study
of
th
sthe
un
Part
salgorithm
study
que
of
contr
th
un
Part
sFE
study
que
but
of
on
contr
th
sty
un
salgorithm
son
study
to
que
but
exam
on
contr
sextreme
un
sne
to
que
but
the
exam
on
contr
mpact
scomprising
ne
to
but
the
exam
of
on
vary
mpact
sth
ne
to
ng
the
exam
of
degrees
mpact
ne
ng
the
of
degrees
mpact
ng
of
deg
the
data
obtained
of
through
data
the
obtained
analysis,
through
as
the
described
FE
analysis,
in
detail
as
described
in
previous
in
paradetail
in
previous
p
lation
thickness.
As
such,
the
wall
assemblies
considered
included
the
non-insulated
of
nput
data
of
nput
quant
data
ty
and
quant
qua
ty
ty
and
on
qua
the
performance
the
performance
of
the
ANN
of
the
mode
ANN
An
mode
attempt
An
attempt
graphs.
Each
subsequent
algorithm
was
trained
with
ain
subset
of
the
original
input
inforones
and
those
ones
and
those
100
mm
of
EPS
with
100
mm
and
of
externally.
EPS
internally
and
externally.
cshou
uded
the
cshou
uded
fo
ow
n
the
cANN
ng
fo
sect
ow
n
ons
the
cANN
ng
uded
of
fo
the
ow
n
ons
study
the
ng
of
fo
the
anf
ow
ong
ons
study
ng
w
of
th
the
anf
the
ong
ons
study
va
w
ues
th
the
anf
the
ong
used
study
va
w
for
ues
th
aemissivity
the
ong
used
sand
research
va
w
for
ues
th
th
the
used
sthm
research
va
for
ues
th
used
sthm
resea
for
made
isolate
made
and
to
so
assess
made
ate
to
the
so
assess
made
influence
ate
and
to
the
so
assess
of
ate
data
uence
and
the
by
assess
of
keeping
data
uence
the
by
the
of
keep
data
same
uence
ng
by
algorithm
the
of
keep
data
same
ng
by
avary
archithe
gor
keep
same
ng
avary
arch
the
gor
same
-for
ava
ar
gp
Specifically,
the
cases
Specifically,
examined
include:
the
cases
examined
•vided
ANN
•vided
3:
Only
ANN
the
•due
Only
ANN
the
•due
values
Only
ANN
of
the
values
Only
emissivity
of
the
values
emissivity
coefficient
of
the
values
emissivity
coefficient
were
of
the
used
coefficient
were
for
the
used
decoefficient
were
for
the
used
dewere
the
us
datse
prov
de
d
prov
de
adequate
deta
for
deta
understand
for
understand
ng
rep
ng
and
ng
rep
the
cat
structure
ng
the
of
structure
the
of
the
tecture
and
tecture
and
tecture
gradually
altering
and
tecture
the
gradually
amount
and
the
gradually
of
amount
information
the
of
amount
information
provided
the
of
amount
information
for
provided
training.
information
for
provided
This
training.
profor
provided
This
training.
profor
train
velopment
velopment
of
the
third
of
algorithm.
the
third
algorithm.
Wall
assemblies
Wall
assemblies
with
εinclude:
=of
with
were
εvariations.
=of
disregarded
0.5
were
disregarded
and
and
ANN
ANN
fto
fThis
des
tse
red
finsulated
des
tse
red
fwith
fsextreme
des
tse
red
finsulated
des
red
level
ground
aadequate
level
for
ground
comparing
for
comparing
the
algorithms,
the
algorithms,
without
introducing
without
introducing
inconsistencies
inconsistencies
only
those
only
with
those
ε3:uded
=fextreme
0.1
only
with
and
those
ε3:
εaltering
=developed
=sect
0.1
only
0.9
with
and
were
those
ε3:
εaltering
=fextreme
=sect
0.1
included
0.9
with
and
were
εinternally
εaltering
=and
=sect
0.1
included
0.9
the
and
were
dataset.
εcat
=0.5
in
included
0.9
the
were
dataset.
in
included
the
dataset.
in
the
dataset.
2•was
4the
Test
Cases
2•was
4gradually
Test
Exam
Cases
ned
Exam
ned
due
to
hyperparameter
due
to
hyperparameter
and
to
hyperparameter
architecture
and
to
hyperparameter
architecture
variations.
and
architecture
variations.
and
architecture
variations.
1:
As
mentioned
above,
As
this
mentioned
uses
the
above,
complete
this
dataset
uses
the
for
complete
training
and
dataset
testing
for
training
and
tes
ANN
4:
ANN
was
•was
4:
the
This
ANN
most
was
•was
4:
input
the
This
ANN
most
data-deprived
was
4:
input
the
This
most
data-deprived
was
input
algorithm—a
the
most
data-deprived
input
algorithm—a
combination
data-deprived
algorithm—a
combination
of
the
algorithm—a
precombination
of
the
precombinat
ofThis
the
Table
4two
summarises
Table
the
input
41:
summarises
data
used
the
for
training
input
data
each
used
of
the
for
training
4the
ANN
models.
each
of
the
The
4the
ANN
models.
purposes.
vious
vious
cases.
two
Only
cases.
the
extreme
Only
the
cases
extreme
of
insulation
cases
of
insulation
and
thermal
and
emissivity
thermal
emissivity
coefficoeffioriginal
algorithm
(ANN
1)
was
using
the
full
dataset,
comprising
the
entirety
•shou
ANN
2:
The
second
•
ANN
algorithm
2:
The
was
second
developed
algorithm
using
was
only
developed
the
extreme
using
values
only
of
the
insuextreme
values
of
inp
cient
were
cient
offered
were
to
cient
offered
the
were
algorithm
to
cient
offered
the
were
algorithm
at
the
to
offered
the
training
algorithm
at
the
to
stage,
the
training
algorithm
at
considerably
the
stage,
training
at
considerably
the
stage,
reducing
training
considerably
the
stage,
reducing
considerably
the
reducing
Part
of
th
Part
s
study
of
th
s
un
Part
s
study
que
of
contr
th
s
un
Part
s
study
que
but
of
on
contr
th
s
un
s
s
study
to
que
but
exam
on
contr
s
un
s
ne
to
que
but
the
exam
on
contr
mpact
s
ne
to
but
exam
of
on
vary
mpact
s
ne
to
ng
the
exam
of
degrees
vary
mpact
ne
ng
of
degrees
vary
mpact
ng
of
deg
va
testing
purposes.
data
obtained
through
the
data
the
obtained
FE
analysis,
through
as
the
described
FE
analysis,
in
detail
as
described
in
previous
in
paradetail
in
previous
p
lation
thickness.
As
lation
such,
the
thickness.
wall
assemblies
As
such,
considered
the
wall
assemblies
included
considered
the
non-insulated
included
the
non-insula
of
nput
data
of
nput
quant
data
of
ty
nput
and
quant
qua
data
of
ty
ty
nput
and
quant
on
qua
data
the
ty
ty
performance
and
quant
on
qua
the
ty
ty
performance
and
on
of
qua
the
ty
performance
ANN
on
of
the
mode
performance
ANN
of
An
the
mode
attempt
ANN
of
An
the
mode
attempt
ANN
An
mode
attem
graphs.
Each
subsequent
graphs.
algorithm
Each
subsequent
was
trained
algorithm
with
a
subset
was
trained
of
the
with
original
a
subset
input
of
inforthe
original
input
in
ones
and
those
insulated
with
100
mm
of
EPS
internally
and
externally.
was
made
was
to
so
made
ate
and
to
so
assess
ate
and
the
assess
nf
uence
the
of
nf
data
uence
by
of
keep
data
ng
by
the
keep
same
ng
a
the
gor
same
thm
a
arch
gor
thm
arch
mation.
Specifically,
the
cases
examined
include:
•2•due
ANN
3:
Only
the
•
extreme
ANN
3:
values
Only
of
the
extreme
emissivity
values
coefficient
of
the
emissivity
were
used
coefficient
for
the
dewere
used
for
the
d
prov
shou
de
d
adequate
prov
shou
de
d
adequate
deta
prov
shou
de
for
d
adequate
deta
understand
prov
de
for
adequate
deta
understand
ng
and
for
deta
understand
rep
ng
cat
and
for
ng
understand
rep
the
ng
cat
and
structure
ng
rep
the
ng
cat
and
of
structure
ng
the
rep
the
cat
of
structure
ng
the
the
of
str
tecture
and
tecture
gradually
and
tecture
gradua
altering
and
tecture
y
the
gradua
a
ter
amount
ng
and
y
the
gradua
a
of
ter
amount
information
ng
y
the
a
of
ter
amount
nformat
ng
provided
the
of
amount
on
nformat
for
prov
training.
of
ded
on
nformat
for
prov
This
tra
ded
n
on
prong
for
prov
Th
tra
ded
s
n
prong
for
Th
tra
s
n
p
velopment
velopment
of
the
third
velopment
of
algorithm.
the
third
velopment
of
algorithm.
the
Wall
third
assemblies
of
algorithm.
the
Wall
third
assemblies
with
algorithm.
Wall
ε
=
0.5
assemblies
with
were
Wall
ε
=
disregarded
0.5
assemblies
with
were
ε
=
disregarded
0.5
and
with
were
ε
=
disregarded
0.5
and
were
dis
ANN
tse
ANN
f
f
des
tse
red
f
f
des
red
vided
a
level
vided
ground
a
level
vided
for
ground
comparing
a
level
vided
for
ground
comparing
a
the
level
algorithms,
for
ground
comparing
the
algorithms,
for
without
comparing
the
algorithms,
introducing
without
the
algorithms,
introducing
without
inconsistencies
introducing
without
inconsistencies
introducing
inconsisten
in
only
those
only
with
those
ε
=
0.1
with
and
ε
ε
=
=
0.1
0.9
and
were
ε
=
included
0.9
were
in
included
the
dataset.
in
the
dataset.
4the
Test
Cases
2•due
4th
Test
Exam
Cases
2•original
ned
4Only
Test
Exam
Cases
2•ned
4Only
Test
Exam
Cases
ned
Exam
ned
to
hyperparameter
to
hyperparameter
and
architecture
and
architecture
variations.
variations.
1:
As
mentioned
above,
1:
As
this
mentioned
uses
the
above,
complete
this
dataset
uses
the
for
complete
training
and
dataset
testing
for
and
tes
ANN
4:
This
ANN
was
4:
the
This
ANN
most
was
4:
input
the
This
ANN
most
data-deprived
was
4:
input
the
This
most
data-deprived
was
input
algorithm—a
the
most
data-deprived
input
algorithm—a
combination
data-deprived
algorithm—a
combination
of
the
algorithm—a
precombination
of
the
precombinat
of
the
p
Table
4two
summarises
Table
4two
summarises
Table
the
input
4two
summarises
Table
data
the
input
used
4two
summarises
data
the
for
training
input
used
data
the
for
each
training
input
used
of
data
the
for
each
training
4the
used
ANN
of
the
for
models.
each
training
4the
ANN
of
the
The
models.
each
4training
ANN
of
the
The
models.
4mode
ANN
purposes.
purposes.
vious
vious
cases.
vious
cases.
the
extreme
vious
cases.
the
cases
extreme
Only
cases.
of
the
insulation
cases
extreme
Only
of
the
insulation
and
cases
extreme
thermal
of
insulation
and
cases
emissivity
thermal
of
insulation
and
emissivity
coeffithermal
and
emissivity
coeffithermal
emi
co
original
algorithm
(ANN
1)
was
algorithm
developed
(ANN
using
1)
was
the
developed
full
dataset,
using
comprising
the
full
dataset,
the
entirety
comprising
the
enti
•mation.
ANN
2:
The
second
was
developed
using
only
the
extreme
values
of
insucient
were
cient
offered
were
to
offered
the
algorithm
to
the
algorithm
at
the
training
at
the
stage,
training
considerably
stage,
considerably
reducing
the
reducing
the
Part
of
Part
sthe
study
of
th
sextreme
un
salgorithm
study
que
contr
sdata.
un
que
but
on
contr
swas
to
but
exam
on
son
ne
to
the
exam
mpact
ne
of
vary
mpact
ng
of
degrees
vary
ng
degrees
data
obtained
through
the
FE
analysis,
as
described
in
detail
in
previous
paralation
thickness.
As
lation
such,
the
thickness.
wall
assemblies
As
such,
considered
the
wall
assemblies
included
considered
the
non-insulated
included
the
non-insula
density
of
density
offered
of
density
the
input
offered
of
density
the
input
offered
of
data.
the
input
offered
data.
input
data.
•Each
ANN
2:
The
second
algorithm
developed
using
only
the
extreme
values
of
of
nput
data
of
nput
quant
data
of
ty
nput
and
quant
qua
data
of
ty
ty
nput
and
quant
on
qua
data
the
ty
ty
performance
and
quant
on
qua
the
ty
ty
performance
and
of
qua
the
ty
performance
ANN
on
of
the
mode
performance
ANN
of
An
mode
attempt
ANN
of
An
the
mode
attempt
ANN
An
attem
graphs.
subsequent
graphs.
algorithm
Each
subsequent
was
trained
algorithm
with
ain
subset
was
trained
of
the
with
original
ain
subset
input
of
inforthe
original
input
ing
ones
and
those
insulated
ones
and
with
those
100
mm
insulated
of
EPS
with
internally
100
mm
and
of
externally.
EPS
internally
and
externally.
was
made
was
to
so
made
ate
was
and
to
so
assess
made
ate
was
to
the
so
assess
made
nf
ate
uence
and
to
the
so
assess
of
nf
ate
data
uence
and
the
by
assess
of
keep
nf
data
uence
ng
the
by
the
of
keep
nf
data
same
uence
ng
by
a
the
of
gor
keep
data
same
thm
ng
by
a
arch
the
gor
keep
same
thm
ng
a
arch
the
gor
same
thm
a
ar
Specifically,
mation.
the
cases
Specifically,
examined
include:
the
cases
examined
include:
•2•due
ANN
3:
Only
the
values
of
the
emissivity
coefficient
were
used
for
the
detecture
and
tecture
gradua
and
y
gradua
a
ter
ng
y
the
a
ter
amount
ng
the
of
amount
nformat
of
on
nformat
prov
ded
on
for
prov
tra
ded
n
ng
for
Th
tra
s
n
prong
Th
s
provelopment
of
the
third
velopment
algorithm.
of
the
Wall
third
assemblies
algorithm.
with
Wall
ε
=
0.5
assemblies
were
disregarded
with
ε
=
0.5
and
were
disregarded
ANN
tse
ANN
f
f
des
tse
red
ANN
f
f
des
tse
red
ANN
f
f
des
tse
red
f
f
des
red
vided
a
level
v
ded
ground
a
eve
v
for
ded
ground
comparing
a
eve
v
for
ded
ground
compar
a
the
eve
algorithms,
for
ground
ng
compar
the
a
for
gor
without
ng
compar
thms
the
a
introducing
gor
w
ng
thout
thms
the
a
ntroduc
gor
w
inconsistencies
thout
thms
ng
ntroduc
w
ncons
thout
ng
stenc
ntroduc
ncons
es
ng
sten
n
only
those
only
with
those
ε
=
0.1
only
with
and
those
ε
ε
=
=
0.1
only
0.9
with
and
were
those
ε
ε
=
=
0.1
included
0.9
with
and
were
ε
ε
=
=
0.1
included
0.9
the
and
were
dataset.
ε
=
in
included
0.9
the
were
dataset.
included
the
dataset.
in
the
dataset.
4the
Test
Cases
2
4
Test
Exam
Cases
2
ned
4
Test
Exam
Cases
2
ned
4
Test
Exam
Cases
ned
Exam
ned
to
hyperparameter
due
to
hyperparameter
due
and
to
hyperparameter
architecture
due
and
to
hyperparameter
architecture
variations.
and
architecture
variations.
and
architecture
variations.
variations.
1:
As
mentioned
above,
this
uses
the
complete
dataset
for
training
and
testing
ANN
•
4:
This
ANN
was
4:
the
This
most
was
input
the
most
data-deprived
input
data-deprived
algorithm—a
algorithm—a
combination
combination
of
the
preof
the
preTable
4
summarises
Table
4
summarises
the
input
data
the
input
used
data
for
training
used
for
each
training
of
the
each
4
ANN
of
the
models.
4
ANN
The
models.
The
purposes.
purposes.
vious
two
vious
cases.
two
Only
vious
cases.
the
two
extreme
Only
vious
cases.
the
cases
two
extreme
Only
cases.
of
the
insulation
cases
extreme
Only
of
the
insulation
and
cases
extreme
thermal
of
insulation
and
cases
emissivity
thermal
of
insulation
and
emissivity
coeffithermal
and
emissivity
coeffithermal
emi
co
original
algorithm
original
algorithm
(ANN
original
1)
was
algorithm
(ANN
original
developed
1)
was
algorithm
(ANN
developed
using
1)
was
(ANN
the
developed
full
using
1)
dataset,
was
the
developed
full
using
comprising
dataset,
the
full
using
comprising
dataset,
the
the
entirety
full
comprising
dataset,
the
entirety
comprisin
the
enti
•of
ANN
2:
The
second
•
ANN
algorithm
2:
The
was
second
developed
algorithm
using
was
only
developed
the
extreme
using
values
only
of
the
insuextreme
values
of
in
cient
were
cient
offered
were
to
cient
offered
the
were
algorithm
to
cient
offered
the
were
algorithm
at
the
to
offered
the
training
algorithm
at
the
to
stage,
the
training
algorithm
at
considerably
the
stage,
training
at
considerably
the
stage,
reducing
training
considerably
the
stage,
reducing
considerably
the
reducing
Part
of
th
Part
sthe
study
of
th
sthe
un
Part
sthe
study
que
of
contr
th
sdata.
un
Part
sFE
study
que
but
of
on
contr
th
sdata.
un
sthe
son
study
to
que
but
exam
on
contr
sperformance
un
sne
to
que
but
the
exam
on
contr
mpact
sne
to
but
the
exam
of
on
mpact
sne
to
ng
the
exam
of
degrees
ne
ng
the
of
degrees
ng
of
deg
data
obtained
of
through
data
the
obtained
analysis,
through
as
the
described
FE
analysis,
in
detail
as
described
in
previous
in
paradetail
in
previous
pngp
lation
thickness.
As
such,
the
wall
assemblies
considered
included
the
non-insulated
density
of
density
offered
of
input
offered
input
nput
data
of
nput
quant
data
ty
and
quant
qua
ty
ty
and
on
qua
ty
performance
the
of
the
ANN
of
the
mode
ANN
An
mode
attempt
An
attempt
graphs.
Each
subsequent
algorithm
was
trained
with
subset
of
the
original
input
inforones
and
those
insulated
ones
and
with
those
100
mm
insulated
of
EPS
with
100
mm
and
of
externally.
EPS
internally
and
externally.
insulation
thickness.
As
such,
wall
assemblies
considered
included
nonmade
to
so
made
ate
was
and
so
assess
made
ate
to
the
so
assess
made
nf
ate
uence
and
to
so
assess
of
nf
ate
data
uence
and
the
by
assess
of
keep
nf
data
uence
ng
the
by
the
of
keep
nf
data
same
uence
ng
by
avary
the
of
gor
keep
data
same
thm
ng
by
avary
arch
the
gor
keep
same
-the
thm
ng
avary
arch
the
same
-for
thm
ava
ar
mation.
Specifically,
mation.
the
cases
Specifically,
examined
include:
the
cases
examined
•tecture
ANN
3:
Only
the
•tecture
extreme
ANN
3:
values
Only
of
the
the
extreme
emissivity
values
coefficient
of
the
emissivity
were
used
coefficient
for
the
dewere
used
the
and
tecture
and
yto
gradua
athe
ter
ng
and
tecture
ythe
gradua
ter
amount
ng
and
ythe
gradua
athe
of
ter
amount
nformat
ng
ythe
of
ter
amount
on
nformat
ng
prov
the
of
ded
amount
on
nformat
for
prov
tra
of
ded
n
on
nformat
ng
for
prov
Th
tra
ded
smpact
n
on
prong
for
prov
Th
tra
ded
smpact
ngor
prong
for
Th
tra
sof
velopment
of
the
third
algorithm.
Wall
assemblies
with
εinclude:
=ntroduc
0.5
were
disregarded
and
Each
algorithm
Each
algorithm
was
Each
eventually
algorithm
was
Each
eventually
compared
algorithm
was
eventually
compared
to
the
was
values
eventually
compared
to
the
included
values
compared
to
the
in
included
the
values
full
to
the
in
set
included
the
values
of
full
inin
set
included
the
of
full
inin
set
the
v
ahyperparameter
eve
v
ded
ground
amentioned
eve
for
ground
compar
for
ng
compar
the
a4εthe
gor
ng
thms
the
ainternally
gor
w
thout
thms
w
thout
ng
ntroduc
ncons
ng
stenc
ncons
es
stenc
es
only
those
with
εOnly
=summarises
0.1
only
and
those
εathe
=to
0.9
with
were
=developed
0.1
included
and
εathe
=ato
in
0.9
the
were
dataset.
included
in
the
dataset.
2was
4ded
Test
Cases
2•was
4gradua
Test
Exam
Cases
Exam
ned
due
to
due
to
hyperparameter
due
and
to
hyperparameter
architecture
due
and
to
hyperparameter
arch
variations.
tecture
and
arch
var
tecture
and
at
ons
arch
var
tecture
at
ons
var
at
ons
•original
ANN
1:
As
••ned
ANN
above,
As
this
mentioned
uses
the
above,
complete
this
dataset
uses
the
for
complete
training
and
dataset
testing
for
training
and
tes
4:
This
ANN
was
4:
This
most
was
•was
4:
input
This
ANN
most
data-deprived
was
4:
input
This
most
data-deprived
was
input
algorithm—a
most
data-deprived
input
algorithm—a
combination
data-deprived
algorithm—a
combination
of
the
algorithm—a
precombination
of
the
precombinat
of
the
p
Table
4two
summarises
Table
4two
Table
the
input
41:
summarises
Table
data
the
input
used
summarises
data
the
for
training
input
used
data
the
for
each
training
input
used
of
data
the
for
each
training
4the
used
ANN
of
the
for
models.
each
training
4the
ANN
of
the
The
models.
each
4the
ANN
of
the
The
models.
4mode
ANN
purposes.
vious
vious
cases.
cases.
the
extreme
Only
the
cases
extreme
of
insulation
cases
of
insulation
and
thermal
and
emissivity
thermal
emissivity
coefficoeffialgorithm
original
algorithm
(ANN
1)
was
(ANN
developed
1)
was
using
the
full
using
dataset,
the
full
comprising
dataset,
comprising
the
entirety
the
entirety
•was
ANN
2:
The
second
ANN
algorithm
2:
The
was
second
developed
algorithm
using
was
only
developed
the
extreme
using
values
only
of
the
insuextreme
values
of
in
cient
were
cient
offered
were
to
cient
offered
the
were
algorithm
cient
offered
the
were
algorithm
at
the
to
offered
the
training
algorithm
at
the
stage,
the
training
algorithm
at
considerably
the
stage,
training
at
considerably
the
stage,
reducing
training
considerably
the
stage,
reducing
considerably
the
reducing
Part
of
th
Part
s
study
of
th
s
un
Part
s
study
que
of
contr
th
s
un
Part
s
study
que
but
of
on
contr
th
s
un
s
s
study
to
que
but
exam
on
contr
s
un
s
ne
to
que
but
the
exam
on
contr
mpact
s
ne
to
but
exam
of
on
vary
mpact
s
ne
to
ng
exam
of
degrees
vary
mpact
ne
ng
of
degrees
vary
mpact
ng
of
deg
va
the
data
obtained
the
data
through
obtained
the
data
the
through
obtained
the
FE
data
analysis,
the
through
obtained
FE
as
analysis,
the
described
through
FE
as
analysis,
the
described
in
detail
FE
as
analysis,
described
in
in
previous
detail
as
described
in
in
paraprevious
detail
in
in
paraprevious
detail
in
p
p
lation
thickness.
As
lation
such,
the
thickness.
wall
assemblies
As
such,
considered
the
wall
assemblies
included
considered
the
non-insulated
included
the
non-insula
density
of
density
the
offered
of
density
the
input
offered
of
data.
density
the
input
offered
of
data.
the
input
offered
data.
input
data.
of
nput
data
of
nput
quant
data
of
ty
nput
and
quant
qua
data
of
ty
ty
nput
and
quant
on
qua
data
ty
ty
performance
and
quant
on
qua
the
ty
ty
performance
and
on
of
qua
the
ty
performance
ANN
on
of
the
mode
performance
ANN
of
An
the
mode
attempt
ANN
of
An
the
mode
attempt
ANN
An
attem
graphs.
Each
subsequent
graphs.
algorithm
Each
subsequent
was
trained
algorithm
with
a
subset
was
trained
of
the
with
original
a
subset
input
of
inforthe
original
input
in
ones
and
those
insulated
with
100
mm
of
EPS
internally
and
externally.
made
was
to
so
made
ate
and
to
so
assess
ate
and
the
assess
nf
uence
the
of
nf
data
uence
by
of
keep
data
ng
by
the
keep
same
ng
a
the
gor
same
thm
a
arch
gor
thm
arch
mation.
Specifically,
the
cases
examined
include:
•2•formation,
ANN
3:
Only
the
•
extreme
ANN
3:
values
Only
of
the
the
extreme
emissivity
values
coefficient
of
the
emissivity
were
used
coefficient
for
the
dewere
used
for
the
insulated
ones
and
those
insulated
with
100
mm
of
EPS
internally
and
externally.
tecture
and
tecture
and
tecture
ywas
gradua
athe
ter
ng
and
tecture
ythe
gradua
ter
amount
ng
and
ythe
gradua
athe
of
ter
amount
nformat
ng
ythe
aof
ter
amount
on
nformat
ng
prov
the
of
ded
amount
on
nformat
for
prov
of
ded
n
on
nformat
ng
for
prov
Th
tra
ded
sthermal
n
on
prong
for
prov
Th
tra
ded
sthermal
nncons
prong
for
Th
tra
sby
nn
p
velopment
of
the
third
velopment
algorithm.
of
the
Wall
third
assemblies
algorithm.
with
Wall
εincluded
=ntroduc
0.5
assemblies
were
disregarded
with
εANN
=ntroduc
0.5
and
were
disregarded
Each
algorithm
Each
algorithm
eventually
was
eventually
compared
to
the
values
to
the
values
in
included
the
full
in
set
the
of
full
inset
of
inv
ahyperparameter
eve
v
ded
ground
amentioned
eve
v
for
ded
ground
compar
athe
eve
v
for
ded
ground
ng
compar
acompared
the
eve
a4two
for
gor
ground
ng
compar
thms
the
aat
for
gor
w
ng
compar
thout
thms
the
aextreme
gor
w
ng
thout
thms
the
atra
ng
ntroduc
gor
w
ncons
thout
thms
ng
stenc
w
ncons
thout
es
ng
stenc
ntroduc
es
ng
sten
only
those
with
εwith
=summar
0.1
and
εadeveloped
=to
0.9
were
included
in
the
dataset.
with
the
aim
of
the
identifying
aim
with
of
the
identifying
aim
with
level
of
the
identifying
of
the
aim
inaccuracy
level
of
identifying
of
the
inaccuracy
introduced
level
of
the
inaccuracy
introduced
level
by
withholding
of
inaccuracy
introduced
by
withholding
introduced
by
withhold
4ded
Test
Cases
2formation,
4gradua
Test
Exam
Cases
2•formation,
ned
4Only
Test
Exam
Cases
2formation,
ned
4ses
Test
Exam
Cases
ned
Exam
ned
due
to
due
to
hyperparameter
and
arch
tecture
and
arch
var
tecture
at
ons
var
at
ons
1:
As
above,
1:
As
this
mentioned
uses
the
above,
complete
this
dataset
uses
the
for
complete
training
and
dataset
testing
for
training
and
tes
ANN
4:
This
was
ANN
most
4:
input
This
data-deprived
was
most
input
algorithm—a
data-deprived
combination
algorithm—a
of
the
precombination
of
the
p
Table
4
summarises
Tab
e
4
Tab
the
input
e
4
summar
Tab
data
the
e
nput
used
ses
summar
data
the
for
training
nput
used
ses
data
the
for
each
tra
nput
used
n
of
ng
data
the
for
each
tra
4
used
ANN
n
of
ng
the
for
models.
each
tra
4
n
of
ng
the
The
mode
each
4
ANN
of
s
the
The
mode
4
ANN
s
purposes.
purposes.
vious
two
vious
cases.
two
vious
cases.
the
two
extreme
Only
vious
cases.
the
cases
extreme
Only
cases.
of
the
insulation
cases
extreme
Only
of
the
insulation
and
cases
thermal
of
insulation
and
cases
emissivity
thermal
of
insulation
and
emissivity
coeffiand
emissivity
coeffiemi
co
original
algorithm
original
algorithm
(ANN
original
1)
was
algorithm
(ANN
original
1)
was
algorithm
(ANN
developed
using
1)
was
(ANN
the
developed
full
using
1)
dataset,
was
the
developed
full
using
comprising
dataset,
the
full
using
comprising
dataset,
the
the
entirety
full
comprising
dataset,
the
entirety
comprisin
the
enti
•was
ANN
2:
The
second
algorithm
was
developed
using
only
the
extreme
values
of
insucient
were
cient
offered
were
to
offered
algorithm
the
algorithm
at
the
training
the
stage,
training
considerably
stage,
considerably
reducing
the
reducing
the
Part
of
th
Part
s
study
of
th
s
un
s
study
que
contr
s
un
que
but
on
contr
s
to
but
exam
on
s
ne
to
the
exam
mpact
ne
the
of
vary
mpact
ng
of
degrees
vary
ng
degrees
of
the
data
of
obtained
the
data
through
obtained
the
through
FE
analysis,
the
FE
as
analysis,
described
as
described
in
detail
in
in
previous
detail
in
paraprevious
paralation
thickness.
As
lation
such,
the
thickness.
wall
assemblies
As
such,
considered
the
wall
assemblies
included
considered
the
non-insulated
included
the
non-insula
density
of
density
the
offered
of
density
the
input
offered
of
data.
density
the
input
offered
of
data.
the
input
offered
data.
input
data.
nput
data
nput
quant
data
of
ty
nput
and
quant
qua
data
of
ty
ty
nput
and
quant
on
qua
data
the
ty
ty
performance
and
quant
on
qua
the
ty
ty
performance
and
on
of
qua
the
ty
performance
ANN
on
of
the
mode
performance
ANN
of
An
the
mode
attempt
ANN
of
An
the
mode
attempt
ANN
An
mode
attem
graphs.
Each
graphs.
subsequent
Each
graphs.
subsequent
algorithm
Each
graphs.
subsequent
algorithm
was
Each
trained
subsequent
algorithm
was
with
trained
ain
algorithm
subset
was
with
trained
of
aof
the
subset
was
with
original
trained
of
ain
the
subset
input
with
original
of
inforaused
the
subset
input
original
of
inforthe
input
origina
in
ones
and
those
insulated
ones
and
with
those
100
mm
insulated
of
EPS
with
internally
100
mm
and
externally.
EPS
internally
and
externally.
made
was
to
so
made
ate
was
and
to
so
assess
made
ate
was
to
the
so
assess
made
nf
ate
uence
and
to
the
so
assess
of
nf
ate
data
uence
and
the
by
assess
of
keep
nf
data
uence
ng
the
by
the
of
keep
nf
data
same
uence
ng
by
aw
the
of
gor
keep
data
same
thm
ng
by
aw
arch
the
gor
keep
same
-thm
ng
aarch
the
gor
same
-thm
aar
g
mation.
Specifically,
mation.
the
cases
Specifically,
examined
include:
the
cases
examined
include:
•v
ANN
3:
Only
the
extreme
values
of
the
emissivity
coefficient
were
used
for
the
detecture
and
tecture
gradua
and
y
gradua
a
ter
ng
y
the
a
ter
amount
ng
the
of
amount
nformat
of
on
nformat
prov
ded
on
for
prov
tra
ded
n
ng
for
Th
tra
s
n
prong
Th
s
provelopment
of
the
third
velopment
algorithm.
of
the
Wall
third
assemblies
algorithm.
with
Wall
ε
=
0.5
assemblies
were
disregarded
with
ε
=
0.5
and
were
disregarded
•
ANN
3:
Only
the
extreme
values
of
the
emissivity
coefficient
were
for
the
Each
algorithm
Each
algorithm
was
Each
eventually
algorithm
was
Each
eventually
compared
algorithm
was
eventually
compared
to
the
was
values
eventually
compared
to
the
included
values
compared
to
the
in
included
the
values
full
to
the
in
set
included
the
values
of
full
inin
set
included
the
of
full
inin
set
the
of
ded
a
eve
v
ded
ground
a
eve
v
for
ded
ground
compar
a
eve
v
for
ded
ground
ng
compar
a
the
eve
a
for
gor
ground
ng
compar
thms
the
a
for
gor
w
ng
compar
thout
thms
the
a
ntroduc
gor
w
ng
thout
thms
the
a
ng
ntroduc
gor
ncons
thout
thms
ng
stenc
ntroduc
ncons
thout
es
ng
stenc
ntroduc
ncons
es
ng
sten
n
only
those
with
ε
=
0.1
only
and
those
ε
=
0.9
with
were
ε
=
0.1
included
and
ε
=
0.9
the
were
dataset.
included
the
dataset.
formation,
with
the
aim
with
of
the
identifying
aim
of
identifying
level
of
the
inaccuracy
level
of
inaccuracy
introduced
introduced
by
withholding
by
withholding
due
to
hyperparameter
due
to
hyperparameter
due
and
to
hyperparameter
arch
due
tecture
and
to
hyperparameter
arch
var
tecture
and
at
ons
arch
var
tecture
and
at
ons
arch
var
tecture
at
ons
var
at
ons
1:
As
mentioned
above,
this
uses
the
complete
dataset
for
training
and
testing
•formation,
ANN
4:
This
was
the
most
input
data-deprived
algorithm—a
combination
of
the
prepart
of
the
part
input
of
the
data.
part
input
The
of
the
data.
comparison
part
The
of
the
data.
comparison
was
input
The
carefully
data.
comparison
was
The
made
carefully
comparison
was
against
made
carefully
the
was
against
wall
made
carefully
assemblies
the
against
wall
made
assemblies
the
against
wall
assemb
the
w
Tab
e
4
summar
Tab
e
4
ses
summar
the
nput
ses
data
the
nput
used
data
for
tra
used
n
ng
for
each
tra
n
of
ng
the
each
4
ANN
of
the
mode
4
ANN
s
The
mode
s
The
purposes.
purposes.
vious
two
cases.
Only
vious
two
extreme
cases.
cases
Only
of
the
insulation
extreme
and
cases
thermal
of
insulation
emissivity
and
coeffithermal
emissivity
co
original
algorithm
or
g
na
a(ANN
or
gor
g
thm
na
1)
was
a(ANN
or
gor
developed
g
thm
na
1)
was
aanalysis,
(ANN
gor
deve
using
thm
1)
was
oped
(ANN
the
deve
full
us
1)
ng
dataset,
was
oped
the
deve
fu
us
ng
dataset
oped
the
fu
us
ng
dataset
the
the
entirety
sarch
fu
ng
compr
dataset
the
ent
sused
ng
rety
compr
the
ent
sar
n
•was
ANN
2:
The
second
•was
ANN
2:
The
was
second
developed
using
was
only
developed
the
extreme
using
values
only
of
the
insuextreme
values
of
in
cient
were
cient
offered
were
to
cient
offered
the
were
algorithm
to
cient
offered
the
were
algorithm
at
the
to
offered
the
training
algorithm
at
the
to
stage,
the
training
algorithm
at
considerably
the
stage,
training
at
considerably
the
stage,
reducing
training
considerably
the
stage,
reducing
considerably
the
reducing
Part
of
th
Part
sthe
study
of
th
sthe
un
Part
salgorithm
study
que
of
contr
th
sdata.
un
Part
sFE
study
que
but
of
on
contr
th
sdata.
un
salgorithm
son
study
to
que
but
exam
on
contr
sperformance
un
sne
to
que
but
the
exam
on
contr
mpact
scomprising
ne
to
but
the
exam
of
on
mpact
scompr
ne
to
ng
the
exam
of
degrees
ne
ng
the
of
degrees
mpact
ng
of
deg
of
the
data
of
obtained
the
data
of
through
data
of
the
through
obtained
the
data
the
through
FE
as
analysis,
the
described
through
FE
as
analysis,
the
described
in
detail
FE
as
analysis,
described
in
in
previous
detail
as
described
in
in
paraprevious
detail
in
in
paraprevious
in
p
p
lation
thickness.
As
such,
the
wall
assemblies
considered
included
the
non-insulated
density
of
density
offered
of
the
input
offered
input
nput
data
nput
quant
data
ty
and
quant
qua
ty
ty
and
on
qua
ty
performance
the
of
the
ANN
of
the
mode
ANN
An
mode
attempt
An
attempt
graphs.
Each
graphs.
subsequent
Each
subsequent
algorithm
algorithm
was
trained
was
with
trained
subset
with
of
aof
the
subset
original
of
the
input
original
inforinput
inforones
and
those
insulated
ones
and
with
those
100
mm
insulated
of
EPS
with
100
mm
and
externally.
EPS
internally
and
externally.
made
was
to
so
made
ate
and
so
assess
made
ate
was
to
the
so
assess
made
nf
ate
uence
and
to
so
assess
of
nf
ate
data
uence
and
the
by
assess
of
keep
nf
data
uence
ng
the
by
the
of
keep
nf
data
same
uence
ng
by
avary
the
of
gor
keep
data
same
thm
ng
by
avary
the
gor
keep
same
-thm
ng
avary
arch
the
gor
same
-for
thm
ava
mation.
Specifically,
mation.
Specifically,
mation.
the
cases
Specifically,
mation.
examined
the
cases
Specifically,
examined
include:
the
cases
examined
include:
the
cases
examined
include:
•due
ANN
3:
Only
the
•due
extreme
ANN
3:
values
Only
of
the
the
extreme
emissivity
values
coefficient
of
the
emissivity
were
used
coefficient
for
the
dewere
the
tecture
and
tecture
gradua
and
tecture
yto
gradua
aobtained
ter
ng
and
tecture
ythe
gradua
amount
ng
and
ythe
gradua
aobtained
of
ter
amount
nformat
ng
ythe
of
ter
amount
on
nformat
ng
prov
the
of
ded
amount
on
nformat
for
prov
tra
of
ded
n
on
nformat
ng
for
prov
Th
tra
ded
smpact
n
on
prong
for
prov
Th
tra
ded
ndetail
prong
for
Th
tra
ngp
velopment
of
the
third
algorithm.
Wall
assemblies
with
εinclude:
=ntroduc
0.5
were
disregarded
and
Each
algorithm
Each
algorithm
was
Each
eventually
algorithm
was
Each
eventually
compared
algorithm
was
eventually
compared
to
the
was
values
eventually
compared
to
the
included
values
compared
to
the
in
included
the
values
full
to
the
in
set
included
the
values
of
full
inin
set
included
the
of
full
inin
set
the
of
v
ded
eve
v
ded
ground
amentioned
eve
for
ground
compar
for
ng
compar
the
a4εthe
gor
ng
thms
the
ainternally
gor
w
thout
thms
w
thout
ng
ntroduc
ncons
ng
stenc
ncons
es
stenc
es
development
of
third
algorithm.
Wall
assemblies
with
εalgorithm—a
=
0.5
disregarded
only
with
=summar
0.1
only
and
those
εadeve
=ter
0.9
with
were
=deve
0.1
included
and
εan
=ain
0.9
the
were
dataset.
included
in
the
dataset.
formation,
with
the
aim
of
the
identifying
formation,
aim
with
of
the
identifying
the
aim
with
level
of
the
identifying
of
the
aim
inaccuracy
level
of
identifying
of
the
inaccuracy
introduced
level
of
the
inaccuracy
introduced
level
by
withholding
of
inaccuracy
introduced
by
withholding
introduced
by
withhold
by
to
hyperparameter
due
to
hyperparameter
and
to
hyperparameter
arch
due
tecture
and
to
hyperparameter
arch
var
tecture
and
at
ons
arch
var
tecture
and
at
ons
arch
var
tecture
at
ons
var
at
ons
1:
As
above,
As
this
mentioned
uses
the
above,
complete
this
dataset
uses
the
for
complete
training
and
dataset
testing
for
and
tes
•formation,
ANN
4:
This
was
the
ANN
most
4:
input
This
data-deprived
was
the
most
input
algorithm—a
data-deprived
combination
of
the
precombination
of
the
p
part
ofadata
the
part
input
of
the
data.
input
The
data.
comparison
The
comparison
was
carefully
was
made
carefully
against
made
the
against
wall
assemblies
the
assemblies
Tab
ethose
4two
summar
Tab
e•formation,
4εwith
ses
Tab
the
ethe
nput
41:
ses
summar
Tab
data
the
ethe
nput
used
ses
summar
data
the
for
tra
nput
used
ses
ng
data
the
for
each
tra
nput
used
n
of
ng
data
the
for
each
tra
4the
used
ANN
n
of
ng
the
for
mode
each
tra
4were
ANN
n
of
swall
ng
the
The
mode
each
4training
ANN
of
ssANN
the
The
mode
4ous
ANN
ssin
purposes.
vious
cases.
Only
the
extreme
cases
of
insulation
and
thermal
emissivity
coeffiincorporating
incorporating
the
variable
incorporating
the
values
variable
incorporating
that
the
values
the
variable
algorithms
that
the
values
the
variable
algorithms
were
that
values
the
deprived
algorithms
were
that
of.
the
deprived
Since
algorithms
were
the
of.
deprived
regressor
Since
were
the
of.
deprived
regressor
Since
the
of.
regre
Since
or
g
na
a
or
gor
g
thm
na
a
(ANN
gor
thm
1)
was
(ANN
1)
was
oped
us
ng
oped
the
fu
us
ng
dataset
the
fu
compr
dataset
s
compr
the
ent
s
rety
the
ent
rety
•was
ANN
2:
The
second
•
ANN
algorithm
2:
The
was
second
developed
algorithm
using
was
only
developed
the
extreme
using
values
only
of
the
insuextreme
values
of
in
cient
were
offered
to
cient
the
were
algorithm
offered
at
the
to
the
training
algorithm
stage,
at
considerably
the
training
stage,
reducing
considerably
the
reducing
the
obtained
the
data
through
obta
the
data
ned
the
through
obta
the
FE
data
ned
analysis
the
through
obta
FE
ned
as
ana
the
descr
through
ys
FE
s
bed
as
ana
the
descr
n
ys
deta
FE
s
bed
as
ana
descr
n
n
ys
prev
deta
s
bed
as
ous
descr
n
n
paraprev
deta
bed
ous
n
n
paraprev
deta
n
p
p
lation
thickness.
As
lation
such,
the
thickness.
wall
assemblies
As
such,
considered
the
wall
assemblies
included
considered
the
non-insulated
included
the
non-insula
density
of
density
the
offered
of
density
the
input
offered
of
data.
density
the
input
offered
of
data.
the
input
offered
data.
input
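To make the data-deprivation protocol above concrete, the following is a minimal sketch of how the four training subsets could be derived from the full FE-generated dataset. The DataFrame layout and the column names (insulation_mm, emissivity) are hypothetical stand-ins, not the study's actual feature naming.

```python
import pandas as pd

def build_training_subsets(df: pd.DataFrame) -> dict:
    """Derive the four training subsets described above from the full FE dataset."""
    extreme_insulation = df["insulation_mm"].isin([0, 100])  # non-insulated + 100 mm EPS
    extreme_emissivity = df["emissivity"].isin([0.1, 0.9])   # rows with ε = 0.5 dropped
    return {
        "ANN 1": df,                                           # complete dataset
        "ANN 2": df[extreme_insulation],                       # extreme insulation only
        "ANN 3": df[extreme_emissivity],                       # extreme emissivity only
        "ANN 4": df[extreme_insulation & extreme_emissivity],  # both restrictions combined
    }
```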
Each algorithm was eventually compared to the values included in the full set of information, with the aim of identifying the level of inaccuracy introduced by withholding part of the input data. The comparison was carefully made against the wall assemblies incorporating the variable values that the algorithms were deprived of. Since the regressor models were generally trained using extreme values of insulation (with the exception of ANN 1, which utilised the full dataset), the comparison was made against wall assemblies featuring mid-range values (i.e., 50 mm of insulation or ε = 0.5).
•due to
ANN
3:
Only
the
extreme
values
of
the
emissivity
coefficient
were
used
for
depated
that
pated
ANN
that
1
would
pated
ANN
have
that
1
pated
ANN
an
extremely
have
that
1
would
ANN
an
extremely
have
1
would
predictive
an
extremely
good
have
predictive
an
score
extremely
good
(since
predictive
it
good
was
(since
predictive
already
score
it
was
(since
already
it
was
(since
alre
velopment
of
third
velopment
algorithm.
of
Wall
third
assemblies
algorithm.
with
Wall
ε
=
0.5
assemblies
were
disregarded
with
ε
=
0.5
and
were
disregarded
Each
algorithm
was
Each
eventually
algorithm
compared
was
eventually
to
the
values
compared
included
the
in
the
values
full
set
included
of
in
the
full
of
only
those
with
ε
=
0.1
only
and
those
ε
=
0.9
were
ε
=
0.1
included
and
ε
=
0.9
the
were
dataset.
included
dataset.
with
the
aim
of
identifying
level
of
inaccuracy
introduced
by
withholding
hyperparameter
due
to
hyperparameter
due
and
to
hyperparameter
arch
due
tecture
and
to
hyperparameter
arch
var
tecture
and
at
ons
arch
var
tecture
and
at
ons
arch
var
tecture
at
ons
var
at
ons
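Since the four training sets differ only in which wall assemblies they retain, the selection logic amounts to simple filters over the full set of FE results. The following is a minimal sketch of that logic, not the authors' code: it assumes the FE output is available as a pandas DataFrame with hypothetical columns insulation_mm (0, 50 or 100) and emissivity (0.1, 0.5 or 0.9).

```python
# Minimal sketch (not the authors' code) of the four training subsets
# described above. The column names 'insulation_mm' and 'emissivity' are
# hypothetical stand-ins for however the FE output is actually stored.
import pandas as pd

def build_subsets(full: pd.DataFrame) -> dict:
    extreme_d = full["insulation_mm"].isin([0, 100])   # drop mid-range 50 mm
    extreme_e = full["emissivity"].isin([0.1, 0.9])    # drop mid-range 0.5
    return {
        "ANN 1": full,                         # complete dataset
        "ANN 2": full[extreme_d],              # extreme insulation thickness only
        "ANN 3": full[extreme_e],              # extreme emissivity only
        "ANN 4": full[extreme_d & extreme_e],  # both restrictions combined
    }
```

Each entry keeps the same feature and target columns, so the only quantity that varies from ANN 1 to ANN 4 is the density of the training data.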
Each algorithm was eventually compared to the values included in the full set of input data obtained through the FE analysis, as described in detail in previous paragraphs; this provided a level ground for comparing the algorithms without introducing inconsistencies due to hyperparameter and architecture variations. The comparison was carefully made against the wall assemblies incorporating the variable values that the algorithms were deprived of. Since the regressor models were generally trained using extreme values of insulation (with the exception of ANN 1, which utilised the full dataset), the comparison was made against wall assemblies featuring mid-range values (i.e., 50 mm of insulation or ε = 0.5). Although it was anticipated that ANN 1 would have an extremely good predictive score (since it was already trained with full data), it was included in the resulting graphs for comparative reasons.
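Holding the network architecture and hyperparameters fixed while only the training subset changes is what makes this comparison meaningful. The sketch below illustrates one way such a loop could be run, using scikit-learn's MLPRegressor as a stand-in for the actual network; build_subsets is the hypothetical helper from the previous sketch, and the feature and target column names are likewise assumptions rather than the paper's.

```python
# Sketch of the comparison loop: every model shares one architecture and one
# set of hyperparameters, so any difference in error is attributable to the
# withheld training data alone. MLPRegressor stands in for the actual network;
# all column names are hypothetical.
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.neural_network import MLPRegressor

FEATURES = ["density", "conductivity", "emissivity", "insulation_mm", "time_min"]
TARGET = "unexposed_face_temp"

def compare_models(fe_results: pd.DataFrame) -> dict:
    """Train one identically configured regressor per subset and score it on
    the mid-range assemblies (50 mm of insulation or emissivity 0.5)."""
    mid_range = fe_results[(fe_results["insulation_mm"] == 50)
                           | (fe_results["emissivity"] == 0.5)]
    scores = {}
    for name, train_df in build_subsets(fe_results).items():
        model = MLPRegressor(hidden_layer_sizes=(32, 32),  # fixed for all four
                             max_iter=2000, random_state=0)
        model.fit(train_df[FEATURES], train_df[TARGET])
        scores[name] = mean_absolute_error(mid_range[TARGET],
                                           model.predict(mid_range[FEATURES]))
    return scores
```

Note that for ANN 1 the mid-range rows also appear in its training set, which is exactly why its score was expected to be very good; as stated above, it is kept in the comparison purely as a baseline.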
Table 4 summarises the input data used for training each of the 4 ANN models.

Table 4. List of wall assembly analysis output used for training each algorithm.

Sample Reference | Properties of Wall Sample                                     | ANN 1 | ANN 2 | ANN 3 | ANN 4
Smpl1-1          | ρ = 1000 kg/m³, λ: 0.4 W/(m·K), ε = 0.1                       |   ✓   |   ✓   |   ✓   |   ✓
Smpl1-2          | ρ = 1000 kg/m³, λ: 0.4 W/(m·K), ε = 0.5                       |   ✓   |   ✓   |   –   |   –
Smpl1-3          | ρ = 1000 kg/m³, λ: 0.4 W/(m·K), ε = 0.9                       |   ✓   |   ✓   |   ✓   |   ✓
Smpl2-1          | ρ = 2000 kg/m³, λ: 0.8 W/(m·K), ε = 0.1                       |   ✓   |   ✓   |   ✓   |   ✓
Smpl2-2          | ρ = 2000 kg/m³, λ: 0.8 W/(m·K), ε = 0.5                       |   ✓   |   ✓   |   –   |   –
Smpl2-3          | ρ = 2000 kg/m³, λ: 0.8 W/(m·K), ε = 0.9                       |   ✓   |   ✓   |   ✓   |   ✓
Smpl3-1          | ρ = 1000 kg/m³, λ: 0.4 W/(m·K), ε = 0.1, d = 50 mm, External  |   ✓   |   –   |   ✓   |   –
50
mm,
External
v
ous
v
cases
ous
On
v
cases
y0.8
ous
extreme
On
v
cases
ous
cases
two
extreme
On
cases
y0.8
of
the
nsu
cases
extreme
On
at
of
on
the
nsu
and
cases
extreme
at
therma
on
nsu
and
cases
em
at
therma
ss
v
nsu
and
ty
em
coeff
at
therma
v
-assemblies
and
ty
em
coeff
therma
v
-assemb
ty
em
co
incorporating
the
variable
incorporating
values
that
the
variable
algorithms
values
were
that
the
deprived
algorithms
of.
were
the
deprived
regressor
of.
the
regre
cdens
ent
were
cdens
offered
were
to
cANN
the
were
atwo
cANN
thm
ent
the
were
amm,
at
the
to
thm
the
tra
ainaccuracy
n
at
ng
the
to
stage
thm
the
tra
ainaccuracy
n
at
cons
ng
the
stage
thm
tra
derab
n
at
cons
ng
the
y
stage
reduc
tra
derab
n
cons
ng
ng
y
the
stage
reduc
derab
cons
ng
yprethe
reduc
derab
ng
y
models
were
trained
extreme
of
insulation
(with
the
exception
of
3generally
ty
of
the
offered
ty
the
nput
offered
data
nput
data
ANN
1,
which
utilised
ANN
the
1,
full
dataset),
utilised
the
the
full
dataset),
was
the
made
comparison
against
wall
was
assemblies
made
against
wall
assemb
33,of
3
featuring
mid-range
featuring
values
(i.e.,
mid-range
50
mm
of
values
insulation
(i.e.,
50
or
mm
ε
=
0.5).
of
insulation
Although
or
it
ε
was
=
0.5).
anticiAlthough
it
was
an
Smpl3-2
Sample
Reference
Sample
Reference
Properties
of
Wall
Sample
Properties
of
Wall
Sample
ANN
1
ANN
2
ANN
ANN
3
ANN
1
ANN
4
2
ANN
3
AN
ρ
=
1000
kg/m
,
λ:
0.4
W/(m
·
K),
ε
=
0.5,
d
=
50
mm,
External
3
3
•
ANN
•
3
On
ANN
y
the
•
3
extreme
On
y
the
•
3
va
extreme
On
ues
y
of
the
3
the
va
extreme
On
ues
em
y
of
the
ss
the
va
v
extreme
ty
ues
em
coeff
of
ss
the
va
v
c
ent
ty
ues
em
coeff
were
of
ss
the
v
c
used
ent
ty
em
coeff
were
for
ss
v
c
the
used
ent
ty
decoeff
were
for
c
the
used
ent
dewere
for
the
us




Smpl1-1
ρ
=
1000
kg/m
λ:
0.4
W/(m·K),
ε
=
0.1
pated
that
ANN
1
would
have
an
extremely
good
predictive
score
(since
it
was
already


3
3
3
3
Smpl1-2
Smpl1-2
ρ
=
1000
kg/m
0.5
,
λ:
0.4
W/(m·K),
ε
=
0.5
ve
opment
ve
of
opment
the
th
rd
of
ay
the
th
thm
rd
ay
Wa
thm
assemb
Wa
assemb
w
th
ε0.9
es
w
5on
were
th
εSince
d
0gor
sregarded
5ss
were
d
sregarded
and
and



pre
p
Each
a4two
gor
Each
thm
awas
gor
Each
eventua
thm
awas
gor
y
Each
eventua
thm
amm,
was
gor
ync
eventua
thm
the
was
va
ygraphs
eventua
compared
ues
the
nc
va
uded
ymade
compared
ues
the
n
nc
the
va
uded
fu
ues
to
the
set
n
nc
the
va
of
uded
fu
ues
nset
n
nc
the
of
uded
fu
nset
nthe
the
of
trained
with
data),
trained
it
was
with
included
data),
it
was
for
comparative
graphs
reasons.
comparative
reason
,,2000
λ:
W/(m·K),
0.9
εεand
==included
0.9
,,level
λ:
W/(m·K),
εεved
==the
0.9
Smpl1-3
Smpl1-3
ρ
==εwith
kg/m
ρ
kg/m




32000
3the
Smpl2-1
Smpl2-1
,1000
λ:
W/(m·K),
0.8
εand
=in
0.1
0.1
on
y
those
on
w
yth
=,Th
0of
on
w
1offered
and
those
εgor
=full
=to
0of
on
0y0.4
w
1offered
9data-depr
were
those
εthe
=Th
=training
0of
0resulting
w
1offered
9to
uded
th
were
εn
=es
=gor
0of
n
nc
00.4
1algorithm.
the
9to
uded
were
dataset
=0gor
n
nc
0resulting
the
9to
uded
were
dataset
n
nc
the
uded
dataset
n
the
dataset
TableSmpl1-3
4.
wall
assembly
analysis
output
used
for
each
the
aim
the
identifying
aim
with
the
identifying
the
aim
with
level
the
identifying
of
the
aim
identifying
of
the
introduced
level
of
the
inaccuracy
introduced
by
withholding
of
inaccuracy
introduced
by
withholding
introduced
by
withhold
by

Smpl2-2
Smpl2-2
Smpl2-2
Smpl2-2
0.5
,1000
λ:
0.8
W/(m·K),
0.5
0.8
εand
=in
0.5
0.5




λ:
W/(m·K),
0.9
0.9
λ:
W/(m·K),
0.9
0.9
Smpl2-3
Smpl2-3
Smpl2-3
Smpl2-3
ρρ
=with
kg/m
ρthose
2000
kg/m
ρ=gor
kg/m
ρ=gor
2000
kg/m
•formation,
ANN
•formation,
Th
ANN
sent
was
•formation,
4two
ANN
most
sent
was
•formation,
nput
the
Th
ANN
most
sent
was
4==εat
nput
most
s=compared
data-depr
was
ved
nput
aεgor
the
most
data-depr
ved
thm—a
nput
aε=gor
comb
data-depr
thm—a
nat
a=level
comb
on
ved
thm—a
of
the
nat
afor
gor
precomb
on
thm—a
ofregressor
the
nat
comb
on
of
nat
part
ofof
the
part
of
the
data.
input
The
data.
comparison
The
comparison
was
carefully
was
made
carefully
against
against
wall
assemblies
the
wall



Smpl3-1 Smpl1-3
Smpl3-1
ρ
=List
1000
kg/m
=input
1000
kg/m
,full
λ:
0.4
W/(m·K),
λ:
0.4
εth
W/(m·K),
0.1,
d
=compared
50
εth
0.1,
d
External
50
mm,
External
ous
v
cases
ous
On
cases
y0.8
the
extreme
On
the
cases
extreme
of
nsu
cases
at
of
on
nsu
at
therma
and
em
therma
v
ty
em
coeff
ss
v
-assemblies
ty
coeff
-
incorporating
incorporating
the
variable
incorporating
the
values
variable
incorporating
that
the
values
the
variable
algorithms
that
the
values
the
variable
algorithms
were
that
values
the
deprived
algorithms
were
that
of.
the
deprived
algorithms
were
the
of.
deprived
Since
were
the
deprived
Since
the
of.
Since
cv
ent
were
offered
were
to
cdens
the
were
a4==εANN
cdens
thm
the
were
a0.1
the
to
thm
the
tra
ainaccuracy
at
ng
the
to
stage
thm
the
tra
ainaccuracy
n
at
cons
ng
the
stage
thm
tra
derab
n
at
cons
ng
the
y
stage
reduc
derab
n
cons
ng
ng
the
stage
reduc
derab
cons
ng
y
the
reduc
derab
ng
y
models
were
models
trained
were
using
generally
extreme
trained
using
of
insulation
extreme
values
(with
the
of
insulation
exception
(with
of
the
exceptio
3generally
dens
ty
of
the
offered
ty
the
nput
offered
ty
of
data
the
nput
offered
ty
of
data
the
nput
offered
data
nput
data
ANN
1,
which
utilised
the
full
dataset),
the
comparison
was
made
against
wall
assemblies
33,of
33gor
featuring
mid-range
featuring
values
(i.e.,
mid-range
50
mm
of
values
insulation
(i.e.,
50
or
mm
εto
=and
0.5).
of
insulation
Although
or
it
εtra
was
=reasons.
0.5).
anticiAlthough
it
was
an
Smpl3-3
Sample
Reference
Sample
Reference
Properties
of
Wall
Sample
Properties
of
Wall
Sample
ANN
1uded
ANN
2=ntroduced
ANN
ANN
3regressor
ANN
1y
ANN
4of.
2=ntroduced
ANN
3regre
AN
ρ
=
1000
kg/m
, cdens
λ:
W/(m
·the
K),
εa
=
0.9,
dkg/m
=
50
mm,
External








33an
33an
Smpl1-1
Smpl1-1
ρ
==ANN
kg/m
λ:
0.4
W/(m·K),
ρ
1000
εεand
==in
,,2000
λ:
0.4
W/(m·K),
εεdent
==gor
0.1
pated
that
10.4
would
pated
have
that
extremely
1data-depr
would
good
have
predictive
extremely
score
good
(since
predictive
it
was
already
score
(since
itprewas
alre
3List
Smpl1-2
0.5
ve
opment
ve
of
opment
the
th
ve
rd
of
opment
a
the
gor
th
thm
ve
rd
of
opment
a
the
Wa
gor
th
thm
assemb
rd
of
a
the
Wa
gor
th
es
thm
assemb
rd
w
a
th
Wa
ε
=
es
thm
0
assemb
w
5
were
th
Wa
ε
d
es
0
assemb
sregarded
w
5
were
th
ε
=
d
es
0
sregarded
and
w
5
were
th
ε
d
0
sregarded
and
5
were
d
sp
3m
Each
a
gor
Each
thm
a
was
gor
eventua
thm
was
y
eventua
compared
y
compared
to
the
va
ues
the
nc
va
ues
n
nc
the
uded
fu
set
n
the
of
fu
nset
of
ntrained
with
full
data),
it
was
included
the
resulting
graphs
for
comparative
0.9
0.9
Smpl1-3
Smpl1-3














31000
3the
3the
3the
Smpl2-1
Smpl2-1
Smpl2-1
Smpl2-1
,
λ:
0.8
W/(m·K),
,
λ:
0.8
W/(m·K),
0.1
λ:
0.8
ε
W/(m·K),
=
0.1
,
λ:
0.8
W/(m·K),
0.1
ε
=
0.1
on
y
those
on
w
y
th
those
ε
=
0
w
1
and
th
ε
ε
=
=
0
0
1
9
were
ε
=
nc
0
9
uded
were
n
nc
the
uded
dataset
n
the
dataset
Table
4.
List
of
wall
assembly
Table
4.
analysis
of
wall
output
assembly
used
for
analysis
training
output
each
used
algorithm.
for
training
each
algorithm.
on
format
w
th
the
on
format
w
m
th
of
on
dent
format
w
m
fy
th
of
ng
the
on
dent
a
w
fy
eve
th
of
ng
the
dent
of
a
naccuracy
m
fy
eve
of
ng
of
the
naccuracy
ntroduced
fy
eve
ng
of
the
naccuracy
eve
by
w
of
thho
naccuracy
ntroduced
by
d
ng
w
thho
by
d
ng
w
thho
by
d
Smpl2-2
Smpl2-2
0.5
0.5
0.9
0.9
0.9
0.9
Smpl2-3
Smpl2-3
Smpl2-3
ρ
2000
kg/m
ρ
=
2000
kg/m
ρ
=
2000
kg/m
ρ
=
kg/m
•format
ANN
•
4
Th
ANN
s
was
•
4
Th
ANN
most
s
was
•
4
nput
Th
ANN
most
s
was
4
nput
Th
most
s
data-depr
was
ved
nput
a
the
gor
most
data-depr
ved
thm—a
nput
a
gor
comb
data-depr
ved
thm—a
nat
a
gor
comb
ved
thm—a
of
the
nat
a
gor
precomb
thm—a
of
the
nat
comb
on
of
the
nat
part
of
the
part
input
of
the
data.
part
input
The
of
data.
comparison
part
input
The
of
data.
comparison
was
input
The
carefully
data.
comparison
was
The
made
carefully
comparison
was
against
made
carefully
the
was
against
wall
made
carefully
assemblies
the
against
wall
made
assemblies
the
against
wall
assemb
the
w







Smpl3-1 Smpl2-3
Smpl3-1
Smpl3-1
ρ
=
1000
kg/m
Smpl3-1
ρ
=
1000
kg/m
ρ
=
1000
kg/m
ρ
=
1000
kg/m
,
λ:
0.4
W/(m·K),
,
λ:
0.4
ε
W/(m·K),
=
0.1,
,
λ:
d
0.4
=
50
ε
W/(m·K),
=
mm,
0.1,
,
λ:
d
External
0.4
=
50
ε
W/(m·K),
=
mm,
0.1,
d
External
=
50
ε
=
mm,
0.1,
d
External
=
50
mm,
External
v
ous
two
v
cases
ous
two
On
v
cases
y
ous
the
two
extreme
On
v
cases
y
ous
the
cases
two
extreme
On
cases
y
of
the
nsu
cases
extreme
On
at
y
of
on
the
nsu
and
cases
extreme
at
therma
of
on
nsu
and
cases
em
at
therma
of
ss
on
v
nsu
and
ty
em
coeff
at
therma
ss
on
v
and
ty
em
coeff
therma
ss
v
ty
em
co
incorporating
incorporating
the
variable
the
values
variable
that
values
the
algorithms
that
the
algorithms
were
deprived
were
of.
deprived
Since
the
of.
regressor
Since
the
regressor
cdens
ent
were
cdens
offered
ent
were
to
offered
the
a=ANN
gor
todens
thm
the
a0.1
at
the
thm
tra
n
at
ng
the
stage
tra
n
cons
ng
stage
derab
cons
yitwas
reduc
derab
ng
yagainst
the
reduc
ng
the
models
were
models
were
models
generally
trained
were
models
using
generally
trained
extreme
were
using
generally
trained
values
extreme
using
trained
of
insulation
values
extreme
using
of
insulation
values
(with
extreme
the
of
insulation
values
exception
(with
the
of
insulation
exception
(with
of
the
exceptio
(with
ofassemb
the
3generally
ty
of
the
offered
ty
dens
the
nput
offered
ty
of
data
the
nput
offered
ty
of
data
the
nput
offered
data
nput
data
ANN
1,
which
utilised
ANN
the
1,
full
which
dataset),
utilised
the
the
comparison
full
dataset),
was
the
made
comparison
against
wall
assemblies
made
wall
33,of
33gor
featuring
mid-range
values
(i.e.,
50
mm
of
insulation
or
ε
=
0.5).
Although
it
was
anticiSmpl4-1
Sample
Reference
Properties
of
Wall
Sample
ANN
1
ANN
2
ANN
3
ANN
4
ρ
=
2000
kg/m
,
λ:
0.8
W/(m
·
K),
ε
=
0.1,
d
=
50
mm,
External







Smpl1-1
Smpl1-1
ρ
=
1000
kg/m
λ:
0.4
W/(m·K),
ρ
1000
kg/m
ε
=
,
λ:
0.4
W/(m·K),
ε
=
0.1
pated
that
ANN
1
would
pated
have
that
an
extremely
1
would
good
have
predictive
an
extremely
score
good
(since
predictive
was
already
score
(since
it
was
alre
3
3
3
Smpl1-2
Smpl1-2
0.5
0.5
Each
a
gor
Each
thm
a
was
gor
Each
eventua
thm
a
was
gor
y
Each
eventua
thm
compared
a
was
gor
y
eventua
thm
compared
to
the
was
va
y
eventua
compared
ues
to
the
nc
va
uded
y
compared
ues
to
the
n
nc
the
va
uded
fu
ues
to
the
set
n
nc
the
va
of
uded
fu
ues
nset
n
nc
the
of
uded
fu
nset
n
the
of
trained
with
full
data),
trained
it
was
with
included
full
data),
in
the
it
was
resulting
included
graphs
in
the
for
resulting
comparative
graphs
reasons.
for
comparative
reason
0.9
Smpl1-3








3
3
3
3
Smpl2-1
ρ=ρ
=ε0.1,
kg/m
,2000
λ:
W/(m·K),
εand
=W/(m·K),
0.1
,2000
λ:
0.8
W/(m·K),
εand
==cases
0.1
on
y
those
on
w
ythW/(m·K),
those
=,1000
0of
w
1offered
and
y
th
those
εgor
=,fy
=to
0of
0y
w
1offered
9data-depr
y
th
were
εgor
=,fy
=training
0dnc
0y
w
1offered
9data-depr
th
were
εn
=ng
=gor
0n
nc
0y0.8
1algorithm.
9the
were
dataset
=training
n
nc
0nst
9tra
uded
were
dataset
nmade
nc
the
uded
dataset
nmade
the
dataset
TableSmpl2-1
4.
wall
assembly
Table
4.
List
of
wall
output
assembly
used
for
analysis
output
each
used
for
each
algorithm.
on
th
the
on
m
th
the
dent
m
ng
dent
the
eve
ng
of
the
naccuracy
of
naccuracy
ntroduced
ntroduced
by
w
thho
by
ng
w
thho
dpreng


Smpl2-2
Smpl2-2
Smpl2-2
Smpl2-2
ρ=those
=εat
kg/m
,2000
λ:
0.8
W/(m·K),
0.5
εand
=W/(m·K),
0.5
,were
λ:
W/(m·K),
0.5
εderab
=the
0.5
0.9
0.9
Smpl2-3
ρρ
=w
2000
kg/m
ρρ
=εw
kg/m
•format
ANN
•format
4two
Th
ANN
sent
was
4two
the
Th
most
sent
was
nput
the
most
nput
ved
aεgor
ved
thm—a
aεgor
gor
comb
thm—a
nat
comb
of
the
nat
preof
the
part
ofof
the
part
of
the
data
part
nput
The
of
data
compar
part
The
of
son
data
compar
was
The
carefu
son
data
compar
was
yeve
The
carefu
son
compar
was
aga
ythat
carefu
son
was
aga
y
wa
carefu
nst
assemb
the
aga
y
wa
nst
es
the
aga
wa
nst
es
the
w







Smpl3-1 Smpl2-3
Smpl3-1
Smpl3-1
ρ
=List
1000
kg/m
Smpl3-1
=nput
kg/m
=analysis
kg/m
=using
1000
kg/m
,1000
λ:
0.4
λ:
0.4
εwhich
W/(m·K),
λ:
d
0.4
=the
50
εwhich
0.1,
λ:
External
0.4
=the
50
εuded
=cases
0.1,
dmade
External
=the
50
εuded
0.1,
dmade
External
=the
50
mm,
External
v
ous
v
cases
ous
On
v
cases
y0.8
ous
the
extreme
On
v
cases
ous
the
cases
two
extreme
On
cases
of
the
nsu
extreme
On
at
of
on
nsu
extreme
at
therma
of
on
nsu
and
cases
em
at
therma
of
ss
on
v
nsu
ty
em
coeff
at
therma
ss
on
v
-assemb
and
ty
em
coeff
therma
ss
v
-assemb
ty
em
co
incorporating
incorporating
the
variable
incorporating
the
values
variable
incorporating
that
the
values
the
variable
algorithms
that
the
values
the
variable
algorithms
that
values
the
deprived
algorithms
were
of.
the
deprived
Since
algorithms
were
the
of.
deprived
regressor
Since
were
the
deprived
regressor
Since
the
of.
Since
cdens
ent
were
offered
were
to
con
the
atwo
con
thm
ent
were
amm,
the
to
thm
tra
amm,
at
the
to
stage
thm
the
tra
amm,
n
at
cons
ng
the
stage
thm
n
at
cons
ng
the
y
stage
reduc
tra
derab
n
cons
ng
ng
yd
the
stage
reduc
derab
cons
ng
y
the
reduc
derab
ng
y
models
were
models
were
generally
trained
trained
extreme
using
extreme
of
insulation
of
insulation
(with
the
exception
(with
the
exception
of
of
3generally
ty
of
the
offered
ty
the
nput
offered
data
nput
data
ANN
1,
which
ANN
utilised
1,
which
ANN
the
utilised
1,
full
ANN
dataset),
the
utilised
1,
full
the
dataset),
the
comparison
utilised
full
the
dataset),
the
comparison
was
full
the
made
dataset),
comparison
was
against
the
made
comparison
wall
was
against
assemblies
wall
was
against
assemblies
made
wall
against
assemb
w
33 of
33,mm,
featuring
mid-range
featuring
values
(i.e.,
mid-range
50
mm
of
values
insulation
(i.e.,
50
or
mm
εto
=and
0.5).
of
insulation
Although
or
it
εto
was
=and
0.5).
anticiAlthough
it
was
an
Smpl4-2 Table
Sample
Reference
Sample
Reference
Properties
of
Wall
Sample
Properties
of
Wall
Sample
ANN
1uded
ANN
2ntroduced
ANN
ANN
3made
ANN
1uded
ANN
4of.
2ntroduced
ANN
3regre
AN
ρ
=
2000
kg/m
, cdens
λ:
W/(m
·the
K),
εawere
=
0.5,
d
=the
50
External
Smpl1-1
0.1
pated
that
10.8
would
have
extremely
good
predictive
score
(since
it
was
already




3m
Smpl1-2
ρ
==ANN
kg/m
,,2000
λ:
0.4
W/(m·K),
ρ
==0.1,
1000
kg/m
εεdent
==in
0.5
λ:
0.4
W/(m·K),
εεdent
==in
0.5




3an
3m
Each
ancorporat
gor
Each
thm
was
gor
Each
eventua
thm
was
gor
y
Each
eventua
thm
compared
was
gor
y
eventua
thm
compared
to
the
was
va
y
eventua
compared
ues
the
nc
va
ymade
compared
ues
to
the
n
nc
the
va
uded
fu
ues
the
set
n
nc
the
va
of
fu
ues
nset
n
nc
the
of
uded
fu
nset
nthe
the
of
trained
with
full
data),
trained
it
was
with
included
full
data),
the
it
was
resulting
included
graphs
the
for
resulting
comparative
graphs
reasons.
for
comparative
reason
0.9
0.9
Smpl1-3
Smpl1-3
31000
3the
Smpl2-1
2000
0.8
0.1
4.
List
of
wall
assembly
analysis
output
used
for
training
each
algorithm.
on
format
w
th
the
on
format
a
w
th
of
on
dent
format
w
m
fy
th
of
ng
the
on
a
w
m
fy
eve
th
of
ng
the
dent
of
the
a
naccuracy
fy
eve
of
ng
of
the
naccuracy
ntroduced
fy
eve
ng
of
the
naccuracy
eve
by
w
of
thho
naccuracy
ntroduced
by
d
ng
w
thho
by
d
ng
w
thho
by
dp



Smpl2-2
Smpl2-2
λ:
W/(m·K),
0.5
,
λ:
0.8
W/(m·K),
0.5





,
λ:
0.8
W/(m·K),
0.9
ε
=
0.9
,
λ:
0.8
W/(m·K),
0.9
ε
=
0.9
Smpl2-3
Smpl2-3
Smpl2-3
ρ
kg/m
ρ
=
kg/m
ρ
2000
kg/m
ρ
=
2000
kg/m
•format
ANN
•
4
Th
ANN
s
was
•
4
Th
ANN
most
s
was
•
4
nput
the
Th
ANN
most
s
data-depr
was
4
nput
the
Th
most
s
data-depr
was
ved
nput
a
the
gor
most
data-depr
ved
thm—a
nput
a
gor
comb
data-depr
ved
thm—a
nat
a
gor
comb
on
ved
thm—a
of
the
nat
a
gor
precomb
on
thm—a
of
the
nat
precomb
on
of
nat
part
of
the
part
nput
of
the
data
nput
The
data
compar
The
son
compar
was
carefu
son
was
y
made
carefu
aga
y
nst
the
aga
wa
nst
assemb
the
wa
es
assemb
es



Smpl3-1 Smpl2-3
Smpl3-1 Smpl1-2
ρ
=
1000
kg/m
ρ
=
1000
kg/m
,
λ:
0.4
W/(m·K),
,
λ:
0.4
ε
W/(m·K),
=
d
=
50
ε
=
mm,
0.1,
d
External
=
50
mm,
External
v
ous
two
v
cases
ous
two
On
cases
y
the
extreme
On
y
the
cases
extreme
of
nsu
cases
at
of
on
nsu
and
at
therma
on
and
em
therma
ss
v
ty
em
coeff
ss
v
ty
coeff
ncorporat
ng
the
var
ncorporat
ng
ab
the
e
va
var
ues
ncorporat
ng
ab
that
the
e
va
the
var
ues
ng
a
ab
gor
that
the
e
va
thms
the
var
ues
a
ab
were
gor
that
e
va
thms
the
depr
ues
a
were
gor
that
ved
thms
of
the
depr
S
a
nce
were
gor
ved
the
thms
of
depr
regressor
S
nce
were
ved
the
of
depr
regressor
S
nce
ved
the
of
regre
S
nce
c
ent
were
c
offered
ent
were
to
c
offered
ent
the
were
a
gor
to
c
offered
thm
ent
the
were
a
at
gor
the
to
offered
thm
the
tra
a
n
at
gor
ng
the
to
stage
thm
the
tra
a
n
at
gor
cons
ng
the
stage
thm
tra
derab
n
at
cons
ng
the
y
stage
reduc
tra
derab
n
cons
ng
ng
y
the
stage
reduc
derab
cons
ng
y
the
reduc
derab
ng
y
models
were
models
were
models
generally
trained
were
models
using
generally
trained
extreme
were
using
generally
trained
values
extreme
using
trained
of
insulation
values
extreme
using
of
insulation
values
(with
extreme
the
of
insulation
values
exception
(with
the
of
insulation
exception
(with
of
the
exceptio
(with
of
the
3generally
dens
ty
of
dens
the
offered
ty
of
dens
the
nput
offered
ty
of
data
dens
the
nput
offered
ty
of
data
the
nput
offered
data
nput
data
ANN
1,
which
ANN
utilised
1,
which
the
utilised
full
dataset),
the
full
the
dataset),
comparison
the
comparison
was
made
was
against
made
wall
against
assemblies
wall
assemblies
3
3
featuring
mid-range
featuring
mid-range
featuring
values
(i.e.,
mid-range
featuring
values
50
mm
(i.e.,
mid-range
of
values
50
insulation
mm
(i.e.,
of
values
50
insulation
or
mm
ε
(i.e.,
=
0.5).
of
50
insulation
or
Although
mm
ε
=
0.5).
of
insulation
or
Although
it
ε
was
=
0.5).
anticior
Although
it
ε
was
=
0.5).
anticiAlthough
it
was
an
Smpl4-3
Sample
Reference
Sample
Reference
Properties
of
Wall
Sample
Properties
of
Wall
Sample
ANN
1
ANN
2
ANN
ANN
3
ANN
1
ANN
4
2
ANN
3
AN
ρ
=
2000
kg/m
,
λ:
0.8
W/(m
·
K),
ε
=
0.9,
d
=
50
mm,
External
3
3
Smpl1-1
Smpl1-1
ρ
=
1000
kg/m
0.1
0.1
pated
that
ANN
1
would
pated
have
that
ANN
an
extremely
1
would
good
have
predictive
an
extremely
score
good
(since
predictive
it
was
already
score
(since
it
was
alre


3List
Smpl1-2
ρ
=
1000
kg/m
,
λ:
0.4
W/(m·K),
ε
=
0.5






3
Each
a
gor
Each
thm
was
gor
eventua
thm
was
y
eventua
compared
y
compared
to
the
va
ues
to
the
nc
va
uded
ues
n
nc
the
uded
fu
set
n
the
of
fu
nset
of
ntrained
with
full
data),
it
was
included
in
the
resulting
graphs
for
comparative
reasons.
0.9
,
λ:
0.4
W/(m·K),
ε
=
0.9
Smpl1-3
Smpl1-3
3
3
3
3
Smpl2-1
Smpl2-1
2000
ρ
=
2000
kg/m
0.8
0.1
0.8
0.1
Table
4.
List
of
wall
assembly
Table
4.
analysis
of
wall
output
assembly
used
for
analysis
training
output
each
used
algorithm.
for
training
each
algorithm.
format
on
format
w
th
the
on
format
a
w
m
th
of
the
on
dent
format
a
w
m
fy
th
of
ng
the
on
dent
the
a
w
m
fy
eve
th
of
ng
the
dent
of
the
a
naccuracy
m
fy
eve
of
ng
dent
of
the
naccuracy
ntroduced
fy
eve
ng
of
the
naccuracy
ntroduced
eve
by
w
of
thho
naccuracy
ntroduced
by
d
ng
w
thho
ntroduced
by
d
ng
w
thho
by
d


Smpl2-2
ρ
=
kg/m
,
λ:
W/(m·K),
ε
=
0.5

0.9
,of
λ:
εextreme
=ues
0.9
Smpl2-3
of1,
the
of
the
data
nput
The
of
data
compar
nput
The
of
son
data
compar
was
nput
The
carefu
son
data
compar
was
y
The
carefu
son
compar
was
aga
yat
carefu
nst
son
the
aga
y
wa
made
carefu
nst
assemb
the
aga
y
wa
made
nst
es
the
aga
wa
nst
es
the
w





Smpl3-1 Smpl3-1 Smpl2-3
Smpl3-1
ρpart
= 1000
kg/m
Smpl3-1
ρpart
=nput
kg/m
ρpart
=y
kg/m
=us
kg/m
,1000
λ:
0.4
W/(m·K),
,1000
λ:
0.4
εwhich
W/(m·K),
=ρpart
0.1,
,1000
λ:
d
0.4
=the
50
εwhich
W/(m·K),
=cases
0.1,
,comparison
λ:
d
External
0.4
=va
50
εW/(m·K),
W/(m·K),
=cases
mm,
0.1,
dmade
External
=va
50
εnsu
=made
mm,
0.1,
dmade
External
=va
50
mm,
External
v
ous
two
v
cases
ous
two
On
v
cases
y1,
ous
the
extreme
On
v
cases
y1,
ous
the
two
extreme
On
cases
yof
the
nsu
extreme
On
at
yof
on
the
cases
extreme
therma
of
on
nsu
cases
em
at
therma
of
ss
on
v
nsu
and
ty
em
coeff
at
therma
ss
on
v
-assemb
and
ty
em
coeff
therma
ss
v
-assemb
ty
em
co
ncorporat
ncorporat
ng
the
var
ng
ab
the
eutilised
va
var
ues
ab
that
eutilised
va
the
ues
aus
gor
that
thms
the
aus
were
gor
thms
depr
were
ved
of
depr
Swas
nce
ved
the
of
regressor
S
nce
the
regressor
cdens
ent
were
offered
ent
were
to
offered
the
atwo
gor
to
thm
amm,
at
gor
the
thm
tra
n
at
ng
the
stage
tra
n
cons
ng
stage
derab
cons
yit
reduc
derab
ng
y
the
reduc
ng
the
mode
sthat
were
mode
sthat
were
mode
genera
tra
sthat
ned
were
mode
y
genera
tra
ng
sthat
ned
extreme
were
y
genera
tra
ng
ned
extreme
ues
y
tra
of
ng
nsu
ned
at
us
of
on
ng
nsu
(w
extreme
ues
th
at
the
of
on
va
except
nsu
(w
ues
th
at
the
of
on
except
nsu
(w
of
th
at
the
on
except
(w
of(since
thalre
the
o
3genera
ty
of
the
offered
ty
dens
the
nput
offered
of
data
dens
the
nput
offered
ty
data
the
nput
offered
data
nput
data
ANN
which
ANN
utilised
1,
which
ANN
the
ANN
dataset),
the
full
the
dataset),
the
utilised
full
the
dataset),
the
comparison
was
full
the
dataset),
comparison
was
against
the
made
comparison
wall
was
against
assemblies
wall
was
against
assemblies
made
wall
against
assemb
w
33,of
33an
featuring
mid-range
featuring
mid-range
values
(i.e.,
values
50
mm
(i.e.,
of
50
insulation
mm
of
insulation
or
εto
=and
0.5).
or
Although
εto
=and
0.5).
Although
it
was
anticiit
anticiSmpl5-1 TableSmpl1-1
Sample
Reference
Properties
of
Wall
Sample
ANN
1uded
ANN
2score
ANN
3made
ANN
4was
ρ=
1000
kg/m
,cdens
λ:
0.4
W/(m
·full
K),
εty
=
0.1,
d
=
50
mm,
Internal








Smpl1-1
ρ
=
1000
kg/m
λ:
0.4
W/(m·K),
ρ
=
1000
kg/m
ε
=
0.1
,
λ:
0.4
W/(m·K),
ε
=
0.1
pated
pated
ANN
1
would
pated
ANN
have
1
would
pated
ANN
an
extremely
have
1
would
ANN
extremely
good
have
1
would
predictive
an
extremely
good
have
predictive
an
score
extremely
good
(since
predictive
good
was
(since
predictive
already
score
it
was
(since
already
score
it
was
i
3List
Smpl1-2
Smpl1-2
0.5
0.5
Each
a
gor
Each
thm
was
gor
Each
eventua
thm
was
gor
y
Each
eventua
thm
compared
a
was
gor
y
eventua
thm
compared
to
the
was
va
y
eventua
compared
ues
the
nc
va
y
compared
ues
the
n
nc
the
va
uded
fu
ues
to
the
set
n
nc
the
va
of
uded
fu
ues
nset
n
nc
the
of
uded
fu
nset
n
the
of
trained
with
full
data),
trained
it
was
with
included
full
data),
in
the
it
was
resulting
included
graphs
in
the
for
resulting
comparative
graphs
reasons.
for
comparative
reason
0.9
Smpl1-3


3
3
Smpl2-1
Smpl2-1
ρ
=
2000
kg/m
,
λ:
0.8
W/(m·K),
ε
=
0.1
,
λ:
0.8
W/(m·K),
ε
=
0.1
4.
List
of
wall
assembly
Table
4.
analysis
of
wall
output
assembly
used
for
analysis
training
output
each
used
algorithm.
for
training
each
algorithm.
format
on
format
w
th
the
on
a
w
m
th
of
the
dent
a
m
fy
of
ng
dent
the
fy
eve
ng
of
the
naccuracy
eve
of
naccuracy
ntroduced
ntroduced
by
w
thho
by
d
ng
w
thho
d
ng




Smpl2-2
Smpl2-2
0.5
0.5


0.9
Smpl2-3
ρ
=
2000
kg/m
of1sent
the
part
nput
of
the
data
nput
The
of
the
data
compar
nput
The
of1offered
son
the
data
compar
was
nput
The
carefu
son
data
compar
y
The
carefu
son
compar
was
aga
y
made
carefu
nst
son
the
aga
y
wa
made
carefu
nst
assemb
the
aga
y
wa
made
nst
es
assemb
the
aga
wa
es
assemb
the
w

Smpl3-1
Smpl3-1
ρpart
= 1000
kg/m
ρpart
=y
1000
kg/m
,3genera
λ:
0.4
W/(m·K),
εwh
=part
0.1,
,that
λ:
d
0.4
=the
50
W/(m·K),
External
εsed
=dataset)
0.1,
d
=the
50
External
ncorporat
ncorporat
ng
the
var
ncorporat
ng
ab
the
va
var
ues
ng
ab
the
va
var
ues
ng
aus
ab
gor
that
the
eut
va
thms
the
var
ues
ason
ab
were
gor
that
emade
va
thms
the
depr
ues
ason
were
gor
that
ved
thms
of
the
depr
Swas
ason
nce
were
gor
ved
the
thms
of
depr
regressor
S0.5).
nce
were
ved
the
of
depr
regressor
S0.5).
nce
ved
the
of
regre
S AN
nce
cdens
were
cdens
offered
ent
were
to
ceut
offered
ent
the
were
ancorporat
gor
to
ceut
thm
ent
were
amm,
at
gor
the
to
offered
thm
the
tra
awas
n
at
gor
ng
the
to
stage
thm
tra
amm,
n
at
gor
cons
ng
the
stage
thm
tra
derab
n
at
cons
ng
the
y
stage
reduc
tra
derab
nson
cons
ng
ng
y
the
stage
reduc
derab
cons
ng
ynst
the
reduc
derab
ng
y
mode
were
mode
sthat
were
genera
tra
ned
us
y
tra
ng
ned
extreme
ng
extreme
of
nsu
of
on
nsu
(w
th
the
on
except
(w
th
the
on
except
of
on
of
ty
of
the
offered
ty
of
the
nput
offered
data
nput
data
ANN
wh
ANN
ch
ut
1
wh
sed
ANN
the
1
fu
sed
ANN
dataset)
ch
the
fu
wh
sed
the
dataset)
ch
the
compar
fu
the
the
compar
was
fu
the
made
dataset)
compar
was
aga
the
made
nst
compar
wa
was
aga
assemb
made
nst
wa
was
aga
es
assemb
made
nst
wa
aga
es
assemb
nst
w
33ch
3
featuring
mid-range
featuring
mid-range
featuring
values
(i.e.,
mid-range
featuring
values
50
mm
(i.e.,
mid-range
of
values
50
insulation
mm
(i.e.,
of
values
50
insulation
or
mm
ε
(i.e.,
=
0.5).
of
50
insulation
or
Although
mm
ε
=
0.5).
of
insulation
or
Although
it
ε
was
=
anticior
Although
it
ε
was
=
anticiAlthough
it
was
an
Smpl5-2
Sample
Reference
Sample
Reference
Properties
of
Wall
Sample
Properties
of
Wall
Sample
ANN
1
ANN
2
ANN
ANN
3
ANN
1
ANN
4
2
ANN
3
ρ
=
1000
kg/m
,
λ:
0.4
W/(m
·
K),
ε
=
0.5,
d
=
50
mm,
Internal




Smpl1-1
ρ
=
1000
kg/m
,
λ:
0.4
W/(m·K),
ε
=
0.1
pated
that
pated
ANN
1
would
ANN
have
1
would
an
extremely
have
an
extremely
good
predictive
good
predictive
score
(since
score
it
was
(since
already
it
was
already


3
3
Smpl1-2
ρ
==0.1,
1000
kg/m
0.5
λ:
0.4
W/(m·K),
0.5


Each
ancorporat
gor
Each
thm
was
gor
Each
eventua
thm
was
gor
y
Each
eventua
thm
was
gor
y
eventua
thm
compared
to
the
was
va
y
eventua
compared
ues
to
the
nc
va
uded
ygraphs
compared
ues
to
the
nc
the
va
uded
fu
ues
to
the
set
nc
the
va
of
uded
fu
ues
nset
nc
the
of
uded
fu
nset
the
of
trained
with
trained
data),
with
trained
it
was
data),
with
trained
full
it
data),
with
included
the
it
was
resulting
data),
in
included
the
it
was
resulting
included
the
for
resulting
comparative
in
the
for
graphs
resulting
comparative
reasons.
for
comparative
reasons.
for
reason
0.9
0.9
Smpl1-3
Smpl1-3


32000
Smpl2-1
,full
λ:
0.8
εdent
=in
0.1
TableSmpl1-2
4.
wall
assembly
analysis
output
used
for
training
each
algorithm.
format
on
th
the
on
format
ancorporat
w
m
th
of
the
on
dent
format
ancorporat
w
m
fy
th
of
ng
the
on
the
amm,
w
m
fy
eve
th
of
ng
the
dent
of
the
awas
naccuracy
m
fy
eve
of
ng
of
the
naccuracy
ntroduced
fy
eve
ng
of
the
naccuracy
ntroduced
eve
by
w
of
thho
naccuracy
ntroduced
by
dgraphs
ng
w
thho
ntroduced
by
dcomparat
ng
w
thho
by
d



Smpl2-2
Smpl2-2
0.5
,,full
λ:
0.8
W/(m·K),
εεdent
==in
0.5


0.9
0.9
Smpl2-3
Smpl2-3
ρformat
=w
kg/m
ρues
2000
kg/m
part
ofof
the
part
nput
of
the
data
nput
The
data
compar
The
son
compar
was
carefu
son
y
made
carefu
aga
y
made
nst
the
aga
wa
nst
assemb
the
wa
es
assemb
es


Smpl3-1
ρ
=List
1000
kg/m
,full
λ:
0.4
W/(m·K),
εW/(m·K),
=included
d
=compared
50
ncorporat
ng
the
var
ng
ab
the
eut
va
var
ng
ab
that
the
ewas
va
the
var
ues
ng
aus
ab
gor
that
the
eExternal
va
thms
the
var
ues
ason
ab
were
gor
that
egraphs
va
thms
the
depr
aat
were
gor
that
ved
thms
the
depr
Sn
aat
nce
were
gor
ved
the
thms
depr
Sn
nce
were
ved
the
depr
regressor
Sn
nce
ved
the
of
SnAN
nce
mode
were
mode
sthat
were
mode
y
genera
tra
sthat
ned
were
mode
us
y
genera
tra
ng
sthat
ned
extreme
were
y
genera
tra
ng
ned
extreme
us
y
tra
of
ng
nsu
ned
extreme
us
of
on
ng
va
nsu
(w
extreme
th
the
of
on
va
nsu
(w
ues
th
at
the
of
on
nsu
(w
of
th
the
on
(w
of
th
the
o
3genera
dens
ty
of
the
offered
ty
of
dens
the
nput
offered
of
data
dens
the
nput
offered
ty
of
data
the
nput
offered
data
nput
data
ANN
1sthat
wh
ANN
ch
ut
1thm
wh
sed
the
sed
the
fu
the
compar
the
compar
was
made
son
was
aga
made
nst
wa
aga
assemb
nst
wa
es
assemb
es
33ch
33an
featur
ng
m
featur
d-range
ng
m
featur
va
ues
ng
(W/(m·K),
m
featur
edataset)
va
d-range
50
ues
mm
ng
(=
m
edataset)
of
va
50
ues
nsu
mm
(at
eof
on
va
50
ues
nsu
or
mm
εto
(at
===ues
e0.1
0of
on
5)
50
or
A
mm
εof
though
at
=ues
0of
on
5)
nsu
or
A
texcept
εof
though
was
at
=reasons.
0on
5)
ant
A
tcexcept
ε4of
though
was
-
=reasons.
0at
5)
ant
A
tcexcept
though
was
- thho
an
Smpl5-3 Table
Sample
Reference
Sample
Properties
of
Wall
Sample
Properties
of
Wall
Sample
ANN
1nsu
ANN
2score
ANN
ANN
3regressor
ANN
1or
ANN
2score
ANN
3regre
ρ Reference
=
1000
kg/m
,dens
λ:
0.4
W/(m
·fu
K),
εty
=
0.9,
d
50
mm,
Internal







Smpl1-1
Smpl1-1
ρ
==ANN
kg/m
,,d-range
λ:
0.4
ρ
==ANN
kg/m
εεdent
==in
0.1
,,d-range
λ:
0.4
W/(m·K),
εεdent
pated
pated
1on
would
pated
ANN
have
1on
would
pated
an
extremely
have
1on
would
ANN
extremely
good
have
1to
would
predictive
an
extremely
good
have
predictive
an
score
extremely
good
(since
predictive
it
good
was
(since
predictive
already
score
it
was
(since
already
itnst
was
(since
alre
3List
Smpl1-2
0.5
3m
Each
a
gor
Each
was
gor
eventua
thm
was
y
eventua
compared
y
compared
the
va
ues
the
nc
va
uded
ues
n
nc
the
uded
fu
set
n
the
of
fu
nset
of
ntrained
with
trained
full
data),
with
full
it
was
data),
included
it
was
included
the
resulting
in
the
graphs
resulting
for
graphs
comparative
for
comparative
0.9
0.9
Smpl1-3
Smpl1-3








31000
31000
Smpl2-1
Smpl2-1
λ:
0.8
W/(m·K),
0.1
λ:
0.8
W/(m·K),
0.1
4.
List
of
wall
assembly
Table
4.
analysis
of
wall
output
assembly
used
for
analysis
training
output
each
used
algorithm.
for
training
each
algorithm.
format
on
format
w
th
the
format
a
w
m
th
of
the
dent
format
a
w
m
fy
th
of
ng
the
the
a
w
fy
eve
th
of
ng
the
dent
of
the
a
naccuracy
m
fy
eve
of
ng
of
the
naccuracy
ntroduced
fy
eve
ng
of
the
naccuracy
ntroduced
eve
by
w
of
thho
naccuracy
ntroduced
by
d
ng
w
thho
ntroduced
by
d
ng
w
by
di
Smpl2-2
0.5
0.9
0.9
Smpl2-3
Smpl2-3
ρ
2000
kg/m
ρ
2000
kg/m
part
of
the
part
nput
of
the
data
part
nput
The
of
the
data
compar
part
nput
The
of
son
the
data
compar
was
nput
The
carefu
son
data
compar
was
y
The
made
carefu
son
compar
was
aga
y
made
carefu
nst
son
the
was
aga
y
wa
made
carefu
nst
assemb
the
aga
y
wa
made
nst
es
assemb
the
aga
wa
es
assemb
the
w




Smpl3-1
Smpl3-1
ρ
=
1000
kg/m
ρ
=
1000
kg/m
,
λ:
0.4
W/(m·K),
ε
=
0.1,
,
λ:
d
0.4
=
50
W/(m·K),
mm,
External
ε
=
0.1,
d
=
50
mm,
External
ncorporat
ncorporat
ng
the
var
ng
ab
the
eut1va
var
ues
ab
that
eut1mm
va
ues
aus
gor
that
thms
the
ason
were
gor
thms
depr
were
ved
of
depr
S0at
nce
ved
the
of
regressor
Sat
nce
the
regressor
mode
were
mode
sthat
were
mode
y
genera
tra
sthat
ned
were
mode
us
y
genera
tra
ng
sthat
ned
extreme
were
y
genera
tra
ng
va
ned
extreme
ues
us
yan
tra
of
ng
va
nsu
ned
extreme
ues
at
us
of
on
ng
va
nsu
(w
extreme
ues
th
the
of
on
va
except
nsu
(w
ues
th
the
of
on
except
nsu
(w
of
th
at
the
ontaga
except
(w
of(s
th
the
o
3genera
ANN
1sthat
wh
ANN
ch
ut
1
wh
sed
ANN
the
fu
wh
sed
ANN
dataset)
ch
the
fu
wh
sed
the
dataset)
ch
the
compar
ut
fu
sed
the
dataset)
the
compar
was
fu
the
made
dataset)
son
compar
was
aga
the
made
nst
son
compar
wa
was
aga
assemb
made
nst
son
wa
was
aga
es
assemb
made
nst
wa
es
assemb
nst
w
33ch
33an
featur
ng
m
featur
d-range
ng
m
va
d-range
ues
(
e
va
50
ues
(
e
of
50
nsu
mm
at
of
on
nsu
or
ε
at
=
0
on
5)
or
A
ε
though
=
5)
A
t
though
was
ant
t
c
was
ant
c
Smpl6-1
Sample
Reference
Properties
of
Wall
Sample
ANN
1
ANN
2
ANN
3
ANN
4
ρ
=
2000
kg/m
,
λ:
0.8
W/(m
·
K),
ε
=
0.1,
d
=
50
mm,
Internal







Smpl1-1
Smpl1-1
ρ
=
1000
kg/m
,
λ:
0.4
W/(m·K),
ρ
=
1000
kg/m
ε
=
0.1
,
λ:
0.4
W/(m·K),
ε
=
0.1
pated
pated
ANN
1
wou
pated
ANN
d
have
1
wou
pated
ANN
an
d
extreme
have
1
wou
ANN
y
d
extreme
good
have
1
wou
pred
y
d
extreme
good
have
ct
ve
pred
an
score
y
extreme
good
ct
(s
ve
nce
pred
score
y
t
good
was
ct
(s
ve
nce
pred
a
score
ready
t
was
ct
(s
ve
nce
a
score
ready
was
nce
a
re
3
Smpl1-2
Smpl1-2
0.5
0.5
Each
a
gor
Each
thm
a
was
gor
Each
eventua
thm
a
was
gor
y
Each
eventua
thm
compared
a
was
gor
y
eventua
thm
compared
to
the
was
va
y
eventua
compared
ues
to
the
nc
va
uded
y
compared
ues
to
the
n
nc
the
va
uded
fu
ues
to
the
set
n
nc
the
va
of
uded
fu
ues
nset
n
nc
the
of
uded
fu
nset
n
the
of
trained
with
trained
full
data),
with
trained
full
it
was
data),
with
included
trained
full
it
was
data),
with
in
included
the
full
it
was
resulting
data),
in
included
the
it
graphs
was
resulting
in
included
the
for
graphs
resulting
comparative
in
the
for
graphs
resulting
comparative
reasons.
for
graphs
comparative
reasons.
for
comparat
reason
0.9
Smpl1-3








32000
32000
Smpl2-1
ρues
=0.1,
kg/m
,assembly
λ:
0.8
W/(m·K),
εdent
=W/(m·K),
0.1
,assembly
λ:
0.8
εof
=output
0.1
TableSmpl2-1
4.
Table
wall
List
assembly
Table
wall
4.
List
Table
of
wall
output
analysis
List
assembly
of
used
wall
output
for
training
used
output
for
analysis
each
training
used
algorithm.
for
each
training
used
algorithm.
for
each
training
algorithm.
each
algorithm.
format
on
format
th
the
on
w
m
th
of
the
dent
m
of
ng
the
fy
eve
ng
of
the
naccuracy
eve
naccuracy
ntroduced
ntroduced
by
w
thho
by
dmade
ng
w
thho
dnst
ng
Smpl2-2
Smpl2-2
0.5
0.5
0.9
Smpl2-3
ρ4.
=w
kg/m
part
ofof
the
part
nput
ofof
the
data
part
nput
The
of
the
data
compar
part
nput
The
of
son
the
data
compar
was
nput
The
carefu
son
data
compar
was
y
The
carefu
son
compar
was
aga
y
made
carefu
nst
son
the
aga
y
wa
made
carefu
nst
assemb
the
aga
y
wa
nst
es
assemb
the
wa
es
assemb
the
w




Smpl3-1
Smpl3-1
ρ
=List
1000
kg/m
ρ
=analysis
1000
kg/m
,3genera
λ:
W/(m·K),
εwh
=4.
,fy
λ:
d
=ned
50
mm,
External
εW/(m·K),
=dataset)
0.1,
d
=nsu
50
mm,
External
ncorporat
ncorporat
ng
the
var
ncorporat
ng
ab
the
eut
var
ncorporat
ng
ab
that
the
eut
va
var
ues
ng
aanalysis
ab
gor
that
the
eut
va
thms
the
var
ues
ason
ab
were
gor
that
emade
va
thms
depr
ues
aat
were
gor
that
ved
thms
the
depr
Swas
aat
nce
were
gor
ved
the
thms
depr
regressor
S0on
nce
were
ved
the
depr
regressor
S0aga
nce
ved
the
of
S AN
nce
mode
were
mode
sthat
were
y
genera
tra
ned
us
y
tra
ng
extreme
us
ng
extreme
of
of
on
nsu
(w
th
the
on
(w
th
the
on
of
on
of
ANN
1sthat
wh
ANN
ch
10.4
wh
sed
ANN
the
1va
sed
ANN
dataset)
ch
the
10.4
fu
wh
sed
the
dataset)
ch
the
compar
fu
sed
the
the
compar
was
fu
the
made
son
compar
was
aga
the
made
son
compar
wa
was
aga
assemb
nst
son
wa
was
aga
es
assemb
made
nst
wa
aga
assemb
nst
w
33ch
33an
featur
ng
m
featur
d-range
ng
m
featur
va
d-range
ues
ng
(W/(m·K),
m
featur
va
d-range
50
ues
mm
ng
(=
m
eth
of
va
50
ues
nsu
mm
(at
of
on
va
50
nsu
or
mm
εthe
(at
===ve
0of
on
5)
50
or
A
mm
εof
though
at
=ve
0of
on
5)
nsu
or
A
texcept
εof
though
was
at
=reasons
5)
ant
tcexcept
ε4of
though
was
-
=reasons
5)
ant
A
tces
though
was
-reason
an
Smpl6-2 Table
Sample
Reference
Sample
Properties
Wall
Sample
Wall
Sample
ANN
1nsu
ANN
2score
ANN
ANN
3made
ANN
1or
ANN
ANN
3regre
ρ Reference
=
2000
kg/m
,ut
λ:
0.8
W/(m
·fu
K),
εe
=
0.5,
d
50
mm,
Internal




Smpl1-1
ρ
==ANN
kg/m
,,assembly
λ:
0.4
εεdent
==output
0.1
pated
pated
1on
wou
ANN
d
have
1on
wou
an
d
extreme
have
y
extreme
good
pred
y
good
ct
pred
score
ct
(s
tnst
was
(s
ang
ready
tA
was
a2for
ready

3m
Smpl1-2
Smpl1-2
ρ
==0.1,
1000
kg/m
0.5
,,d-range
λ:
0.4
W/(m·K),
εεdent

tra
ned
w
tra
th
fu
ned
data)
w
tra
th
fu
ned
tof
was
data)
w
tra
th
nc
fu
ned
uded
tProperties
was
data)
w
n
nc
the
fu
uded
tof
was
resu
data)
naenc
twas
the
ng
uded
tues
graphs
was
resu
nedataset)
nc
t0.5
the
ng
for
uded
graphs
resu
comparat
nnst
tSnce
the
ng
for
graphs
resu
comparat
ve
tSnce
for
graphs
comparat
ve
comparat
ve
0.9
0.9
Smpl1-3
Smpl1-3




31000
Smpl2-1
ρ4.
2000
kg/m
λ:
0.8
W/(m·K),
0.1
4.
List
Table
of
wall
List
assembly
of
wall
analysis
output
analysis
used
for
training
used
for
each
training
algorithm.
each
algorithm.
format
on
format
w
th
the
format
a
w
th
of
the
dent
format
a
w
m
fy
th
of
ng
the
on
the
a
w
m
fy
eve
th
of
ng
the
dent
of
the
naccuracy
m
fy
eve
of
ng
of
the
naccuracy
ntroduced
fy
eve
ng
of
the
naccuracy
ntroduced
eve
by
w
of
thho
naccuracy
ntroduced
by
d
ng
w
thho
ntroduced
by
d
ng
w
thho
by
d


Smpl2-2
Smpl2-2
0.5
λ:
0.8
W/(m·K),
0.5


0.9
0.9
Smpl2-3
Smpl2-3
ρ
2000
kg/m
part
of
the
part
nput
of
the
data
nput
The
data
compar
The
son
compar
was
carefu
son
y
made
carefu
aga
y
made
nst
the
aga
wa
assemb
the
wa
es
assemb
es


Smpl3-1
ρ
=
1000
kg/m
,
λ:
0.4
W/(m·K),
ε
=
d
=
50
mm,
External
ncorporat
ncorporat
ng
the
var
ncorporat
ng
ab
the
e
va
var
ues
ncorporat
ng
ab
that
the
e
va
the
var
ues
ng
a
ab
gor
that
the
e
va
thms
the
var
ues
a
ab
were
gor
that
e
va
thms
the
depr
ues
a
were
gor
that
ved
thms
of
the
depr
a
nce
were
gor
ved
the
thms
of
depr
regressor
nce
were
ved
the
of
depr
regressor
S
nce
ved
the
of
regre
S
nce
mode
were
mode
were
mode
y
genera
tra
sthat
ned
were
mode
us
yan
genera
tra
ng
sthat
ned
were
us
y
genera
tra
ng
ned
extreme
us
yan
tra
of
ng
nsu
ned
at
us
of
on
ng
va
nsu
(w
ues
th
at
the
of
on
va
nsu
(w
ues
th
at
the
of
on
nsu
(w
of
the
on
(w
of
the
oA
3genera
ANN
1sthat
wh
ANN
ch
1sthat
wh
sed
the
sed
the
fu
the
compar
the
son
compar
was
made
son
was
aga
made
nst
wa
aga
assemb
wa
es
assemb
es
33ch
33an
featur
ng
m
featur
d-range
ng
m
featur
va
d-range
ues
ng
(W/(m·K),
m
featur
va
d-range
50
ues
mm
ng
(=wou
m
edataset)
of
va
d-range
50
ues
nsu
mm
(at
of
on
va
50
ues
nsu
or
mm
εεεextreme
(at
===output
et0.1
0of
on
5)
50
or
A
mm
εextreme
though
at
=ve
0of
on
5)
nsu
ortgood
A
texcept
εthough
was
at
=reasons
0on
5)
ant
or
A
tcexcept
ε4though
was
-ANN
=reasons
0at
5)
ant
AANN
tcexcept
was
- 3th
an
Smpl6-3Reference
Sample
Reference
Sample
Sample
Sample
Reference
Properties
Properties
of
Wall
Sample
Properties
of
Wall
Sample
Properties
of
Wall
Sample
of
Wall
ANN
Sample
1nsu
ANN
ANN
2score
1ANN
ANN
ANN
3nst
1ANN
ANN
3th
1ANN
4though
ρ Reference
=
2000
kg/m
,ut
λ:
W/(m
·fu
K),
εedataset)
=
0.9,
d
50
mm,








Smpl1-1
Smpl1-1
ρ
==ANN
kg/m
,,assembly
λ:
0.4
ρ
==ANN
kg/m
εεextreme
==output
0.1
λ:
0.4
W/(m·K),
pated
pated
10.8
wou
pated
ANN
dut
have
1kg/m
wou
pated
d
extreme
have
1the
ANN
y
d
extreme
good
have
1Internal
wou
pred
y
d
extreme
good
have
ct
ve
pred
an
score
y
extreme
good
ct
(s
nce
pred
y
was
ct
(s
ve
nce
pred
a2ANN
score
ready
tnst
was
ct
(s
ve
nce
a2ANN
score
ready
tnst
was
(s
nce
a2AN
re
3List
Smpl1-2
0.5
3,,assembly
tra
ned
w
tra
th
fu
ned
data)
w
th
fu
tTable
was
data)
nc
uded
tλ:
was
n
nc
the
uded
resu
netwas
the
ng
graphs
resu
ng
for
graphs
comparat
for
comparat
ve
ve
0.9
0.9
Smpl1-3
Smpl1-3








31000
31000
Smpl2-1
Smpl2-1
ρ4.
2000
kg/m
ρues
2000
kg/m
λ:
0.8
W/(m·K),
0.1
λ:
0.8
W/(m·K),
0.1
Table
4.
List
Table
of
wall
List
assembly
Table
of
wall
4.
analysis
of
wall
output
4.
analysis
List
assembly
of
used
wall
for
analysis
training
used
output
for
analysis
each
training
used
algorithm.
for
each
training
used
algorithm.
for
each
training
algorithm.
each
algorithm.
Smpl2-2
0.5
0.9
0.9
Smpl2-3
Smpl2-3
part
of
the
part
nput
of
the
data
part
nput
The
of
the
data
compar
part
nput
The
of
son
data
compar
was
nput
The
carefu
son
data
compar
y
The
made
carefu
son
compar
was
aga
y
made
carefu
nst
son
the
was
aga
y
wa
made
carefu
nst
assemb
the
aga
y
wa
made
es
assemb
the
aga
wa
es
assemb
the
w




Smpl3-1
Smpl3-1
ρ
=
1000
kg/m
ρ
=
1000
,
λ:
0.4
W/(m·K),
ε
=
0.1,
,
d
0.4
=
50
W/(m·K),
mm,
External
ε
=
0.1,
d
=
50
mm,
External
ncorporat
ncorporat
ng
the
var
ng
ab
the
e
va
var
ab
that
e
va
ues
a
gor
that
thms
the
a
were
gor
thms
depr
were
ved
of
depr
S
nce
ved
the
of
regressor
S
nce
the
regressor
mode
s
were
mode
genera
s
were
mode
y
genera
tra
s
ned
were
mode
us
y
genera
tra
ng
s
ned
extreme
were
us
y
genera
tra
ng
va
ned
extreme
ues
us
y
tra
of
ng
va
nsu
ned
extreme
ues
at
us
of
on
ng
va
nsu
(w
extreme
ues
th
at
the
of
on
va
except
nsu
(w
ues
th
at
the
of
on
except
nsu
(w
of
th
at
the
on
except
(w
of
th
the
o
3
ANN
1ng
wh
ANN
ch
ut1that
wh
sed
ANN
the
1have
fu
wh
sed
ANN
dataset)
ch
the
1mm
fu
wh
sed
the
dataset)
ch
the
compar
ut
fu
sed
the
son
compar
was
fu
the
made
son
compar
was
aga
the
made
son
compar
wa
was
aga
assemb
nst
son
wa
was
es
assemb
nst
wa
es
assemb
nst
w
3an
3an
33ch
33an
featur
d-range
ng
m
va
d-range
ues
eth
va
50
ues
(W/(m·K),
of
50
nsu
mm
at
of
on
nsu
or
εεεgor
at
===ou
0on
5)
A
εεgor
though
==ve
0
5)
A
tn
though
was
ant
tcwas
was
-
ant
cwas
-
Smpl7-1
Sample
Reference
Sample
Reference
Properties
Properties
of
Wall
Sample
of
Wall
Sample
ANN
ANN
2hm
ANN
3made
2hm
ANN
ANN
4gor
3made
4(s
ρ4featur
=L1000
kg/m
,sTab
λ:
0.4
·0.4
εy
=
0.1,
d
=
100
mm,








taga

Smpl1-1
Smpl1-1
ρ
==ANN
kg/m
ρy
=ANN
,,1000
λ:
kg/m
ρ
==ANN
λ:
0.4
kg/m
==ou
ρ
0.1
=ANN
,,1000
λ:
0.4
kg/m
W/(m·K),
=ou
,the
λ:
0.4
W/(m·K),
0.1
pated
pated
1w
pated
dut
that
1kg/m
wou
pated
ddut
extreme
have
that
1εεw
wou
y
d
extreme
good
have
1εsExternal
wou
pred
y
extreme
good
have
ct
ve
pred
an
score
y1or
extreme
good
ct
(s
nce
pred
score
y1ANN
tgood
was
ctgor
(s
ve
pred
ang
score
ready
taga
ct
(s
ve
nce
aANN
score
ready
nce
a re
Smpl1-2
Smpl1-2
0.5
0.5
3L
ned
w
tra
th
fu
ned
data)
tra
th
fu
ned
was
data)
w
nc
ned
uded
was
neth
nc
the
fu
uded
tthe
was
resu
data)
ndataset)
nc
t0.1
the
ng
uded
td
was
resu
ndataset)
nc
tmm,
the
ng
for
uded
graphs
resu
comparat
nnst
tS0.1
the
ng
for
graphs
resu
comparat
reasons
tSnce
for
graphs
comparat
reasons
for
comparat
ve
reason
0.9
Smpl1-3








3,1000
Smpl2-1
Smpl2-1
ρ4m
2000
kg/m
ρues
2000
kg/m
λ:
0.8
W/(m·K),
0.1
λ:
0.8
W/(m·K),
0.1
Tab eSmpl1-1
sTab
osthat
wa
eSmpl1-1
L31000
assemb
o0.4
wa
ewou
4W/(m
assemb
stTab
ys
oK),
swa
e(W/(m·K),
ou
ana
Lpu
assemb
stλ:
ys
oused
s50
wa
y
ana
pu
assemb
ra
ys
used
n
or
ana
each
pu
ra
ys
used
ava
s50
ng
or
each
hm
pu
ra
used
aANN
n
ng
or
each
ra
ave
ng
each
ave
Smpl2-2
Smpl2-2
0.5
0.5
0.9
Smpl2-3




Smpl3-1
Smpl3-1
ρtra
= 1000
kg/m
ρ
=ana
1000
λ:
W/(m·K),
εwh
=4tra
0.1,
,fu
0.4
=data)
W/(m·K),
mm,
External
εng
=y
0.1,
d
=n
External
ncorporat
ncorporat
ng
the
var
ncorporat
ng
ab
the
eut
va
var
ncorporat
ng
ab
that
the
eut
va
the
var
ues
ng
aor
ab
gor
that
eut
va
thms
the
var
ues
ason
ab
were
gor
that
egraphs
thms
the
depr
ues
aat
were
gor
that
ved
thms
of
the
depr
aat
nce
were
gor
ved
the
thms
of
depr
regressor
nce
were
ved
the
of
depr
regressor
S02hm
nce
ved
the
of
regre
S 2AN
nce
mode
were
mode
sthat
were
y
genera
tra
ned
us
y
tra
ng
ned
extreme
us
ng
extreme
of
nsu
of
on
nsu
(w
th
the
on
except
(w
th
the
on
except
of
on
of
3, ,genera
ANN
1
wh
ANN
ch
ut
1
wh
sed
ANN
the
1
fu
sed
ANN
dataset)
ch
the
1
fu
wh
sed
the
dataset)
ch
the
compar
fu
sed
the
dataset)
the
compar
was
fu
the
made
dataset)
son
compar
was
aga
the
made
nst
son
compar
wa
was
aga
assemb
made
nst
son
wa
was
aga
es
assemb
made
nst
wa
aga
es
assemb
nst
w
3ch
3an
3
3
featur
ng
m
featur
d-range
ng
m
featur
va
d-range
ues
ng
(
m
featur
e
va
d-range
50
ues
mm
ng
(
m
e
of
va
d-range
50
ues
nsu
mm
(
at
e
of
on
va
50
ues
nsu
or
mm
ε
(
at
=
e
0
of
on
5)
50
nsu
or
A
mm
ε
though
at
=
0
of
on
5)
nsu
or
A
t
ε
though
was
at
=
0
on
5)
ant
or
A
t
c
ε
though
was
=
5)
ant
A
t
c
though
was
an
Smpl7-2
Sample
Reference
Sample
Reference
Sample
Reference
Sample
Reference
Properties
Properties
of
Wall
Sample
Properties
of
Wall
Sample
Properties
of
Wall
Sample
of
Wall
ANN
Sample
1
ANN
ANN
2
1
ANN
ANN
ANN
3
2
ANN
1
ANN
ANN
4
ANN
3
ANN
1
ANN
ANN
4
3
A
ρ
=
1000
kg/m
λ:
0.4
W/(m
·
K),
ε
=
0.5,
d
=
100
mm,
External








3
Smpl1-1
Smpl1-1
ρ
=
1000
kg/m
ρ
=
,
1000
λ:
0.4
kg/m
W/(m·K),
,
λ:
0.4
ε
W/(m·K),
=
0.1
ε
=
0.1
pated
that
pated
ANN
1
wou
ANN
d
have
1
wou
d
extreme
have
an
y
extreme
good
pred
y
good
ct
ve
pred
score
ct
(s
ve
nce
score
t
was
(s
nce
a
ready
t
was
a
ready




3pu
Smpl1-2
Smpl1-2
Smpl1-2
Smpl1-2
ρ
=
1000
kg/m
ρ
0.5
=
,
1000
λ:
0.4
kg/m
W/(m·K),
0.5
,
λ:
0.4
ε
W/(m·K),
=
0.5
ε
=
0.5


3,assemb
tra
ned
w
tra
th
fu
ned
data)
w
tra
th
fu
ned
t
was
data)
w
tra
th
nc
fu
ned
uded
t
was
data)
w
n
th
nc
the
fu
uded
t
was
resu
data)
n
nc
t
the
ng
uded
t
graphs
was
resu
n
nc
t
the
ng
for
uded
graphs
resu
comparat
n
t
the
ng
for
graphs
resu
comparat
ve
reasons
t
ng
for
graphs
comparat
ve
reasons
for
comparat
ve
reason
0.9
0.9
Smpl1-3
Smpl1-3




3
Smpl2-1
λ:
0.8
W/(m·K),
ε
=
0.1
Tab
e
4
L
s
Tab
o
wa
e
4
L
assemb
s
o
wa
y
ana
ys
s
ou
y
ana
pu
ys
used
s
ou
or
ra
used
n
ng
or
each
ra
a
n
gor
ng
each
hm
a
gor
hm


Smpl2-2
Smpl2-2
0.5
,
λ:
0.8
W/(m·K),
ε
=
0.5


0.9
0.9
Smpl2-3
Smpl2-3
ρ
=
2000
kg/m
ρ
=
2000
kg/m


Smpl3-1
ρANN
= 1000
kg/m
λ:ut10.4
W/(m·K),
εsed
=mode
0.1,
d sfu
=ned
50the
mm,
External
were
mode
were
mode
y
genera
tra
sthat
ned
were
us
y
genera
tra
ng
were
us
y
genera
tra
ng
va
ned
ues
us
y
tra
of
ng
va
nsu
ned
ues
at
us
of
on
ng
va
nsu
(w
ues
th
at
the
of
on
va
nsu
(w
ues
th
at
the
of
on
nsu
(w
of
at
the
on
(w
of
the
oA
3, ,genera
1sng
wh
ANN
ch
wh
sed
the
fu
the
compar
the
son
was
made
son
was
aga
made
nst
wa
aga
assemb
wa
es
assemb
es
33ch
33an
33an
33an
featur
m
featur
d-range
ng
m
featur
va
d-range
ues
ng
featur
edataset)
va
d-range
50
ues
mm
ng
(W/(m·K),
m
edataset)
of
va
d-range
50
ues
nsu
mm
(at
of
on
va
50
ues
nsu
or
εeεεextreme
(at
===ou
et0.1
0of
on
5)
50
or
A
εeεextreme
though
at
==ve
0of
on
5)
nsu
or
A
texcept
εgor
though
was
at
=reasons
0on
5)
ant
or
tcexcept
ε4gor
though
was
-ANN
=reasons
0
5)
ant
tcexcept
was
-3th
an
Smpl7-3
SampSmpl1-1
e Reference
Samp
e Reference
Samp
e=Reference
Samp
eth
Reference
Proper
es
Proper
of
Wa
Samp
es
Proper
of
Wa
Samp
es
of
Wa
Samp
es
Wa
ANN
Samp
1nsu
ANN
ANN
2hm
1ANN
ANN
ANN
3nst
ANN
1ANN
ANN
3th
ANN
1ANN
ANN
4though
ρ4mode
kg/m
λ:osthat
0.4
·0.4
εy
=
0.9,
d
=
mm,









tA


Smpl1-1
ρ
==ANN
kg/m
ρy
=ANN
,,1000
λ:
kg/m
ρ
==ANN
λ:
0.4
kg/m
==ou
ρ100
0.1
=ANN
,,1000
λ:
0.4
W/(m·K),
=ou
, compar
λ:
0.4
W/(m·K),
0.1
pated
1w
pated
dut
have
1kg/m
wou
pated
d
extreme
have
that
1eεεextreme
wou
yProper
d
extreme
good
have
1eεextreme
wou
pred
yof
d
extreme
good
have
ct
ve
pred
an
score
y
extreme
good
ct
(s
nce
pred
score
y
was
ct
(s
ve
nce
pred
a2hm
score
ready
tA
was
ct
(s
ve
nce
a2hm
score
ready
was
(s
nce
a2AN
re
3L
Smpl1-2
0.5
0.5
3pu
ned
w
tra
fu
ned
data)
th
fu
was
data)
nc
uded
was
n
nc
the
uded
resu
net0.1
the
ng
graphs
resu
ng
for
graphs
comparat
for
comparat
0.9
0.9
0.9
0.9
Smpl1-3
Smpl1-3
Smpl1-3
Smpl1-3




3,1000
Smpl2-1
Smpl2-1
λ:
0.8
W/(m·K),
0.1
λ:
0.8
W/(m·K),
0.1
Tab
eSmpl1-1
L1000
sTab
othat
wa
eSmpl1-1
L31000
assemb
sλ:
Tab
wa
ewou
4W/(m
ana
assemb
stTab
ys
oK),
swa
e(W/(m·K),
ou
4m
ana
Lpu
assemb
stλ:
ys
oused
s50
wa
y
or
ana
assemb
ra
ys
used
nkg/m
sExternal
ng
y
or
ana
each
pu
ra
ys
used
amm
n
gor
s50
ng
or
each
hm
pu
ra
used
amm
n
gor
ng
or
each
ratgood
ave
n
ng
each
ave

Smpl2-2
0.5



0.9
0.9
Smpl2-3
Smpl2-3
ρ4pated
2000
kg/m
ρ
2000
kg/m




Smpl3-1 Smpl1-2
Smpl3-1
ρtra
=
1000
kg/m
ρ
=
1000
,
0.4
W/(m·K),
ε
=
0.1,
,
d
0.4
=
W/(m·K),
mm,
External
ε
=
0.1,
d
=
mm,
External
3 ,Proper
ANN
1that
wh
ANN
ch
ut
1
wh
sed
ANN
the
ut
1
fu
wh
sed
ANN
dataset)
ch
the
ut
1
fu
wh
sed
the
dataset)
ch
the
compar
ut
fu
sed
the
dataset)
son
the
compar
was
fu
the
made
dataset)
son
compar
was
aga
the
made
nst
son
compar
wa
was
aga
assemb
made
nst
son
wa
was
aga
es
assemb
made
nst
wa
aga
es
assemb
nst
w
33ch
33an
33an
33an
featur
ng
m
featur
d-range
ng
m
va
d-range
ues
(
e
va
50
ues
mm
(
e
of
50
nsu
mm
at
of
on
nsu
or
ε
at
=
0
on
5)
or
A
ε
though
=
0
5)
A
t
though
was
ant
t
c
was
ant
c
Smpl8-1
SampSmp
e
Reference
Samp
e
Reference
es
Proper
of
Wa
Samp
es
of
Wa
e
Samp
e
ANN
1
ANN
ANN
2
1
ANN
ANN
3
2
ANN
ANN
4
3
ANN
4
ρ
=
2000
kg/m
λ:
0.8
W/(m
·
K),
ε
=
0.1,
d
=
100
mm,
External














1
1
Smp
1
1
Smp
1
1
Smp
ρ
=
1000
1
1
kg
ρ
m
=
1000
λ
0
kg
4
W
ρ
m
=
m
1000
λ
K
0
kg
4
ε
W
=
ρ
m
0
=
m
1
1000
λ
K
0
kg
4
ε
W
=
m
0
m
1
λ
K
0
4
ε
W
=
0
m
1
K
ε
=
0
1
pated
pated
ANN
that
1
wou
pated
ANN
d
have
that
1
wou
pated
ANN
d
extreme
have
that
1
wou
ANN
y
d
extreme
good
have
1
wou
pred
y
d
extreme
good
have
ct
ve
pred
an
score
y
extreme
good
ct
(s
ve
nce
pred
score
y
t
good
was
ct
(s
ve
nce
pred
a
score
ready
t
was
ct
(s
ve
nce
a
score
ready
t
was
(s
nce
a
re
Smpl1-2
Smpl1-2
Smpl1-2
Smpl1-2
kg/m
,
λ:
0.4
kg/m
W/(m·K),
,
λ:
0.4
kg/m
W/(m·K),
0.5
,
λ:
0.4
kg/m
W/(m·K),
0.5
,
λ:
0.4
W/(m·K),
0.5
0.5
3
tra
ned
w
tra
th
fu
ned
data)
w
tra
th
fu
ned
t
was
data)
w
tra
th
nc
fu
ned
uded
t
was
data)
w
n
th
nc
the
fu
uded
t
was
resu
data)
n
nc
t
the
ng
uded
t
graphs
was
resu
n
nc
t
the
ng
for
uded
graphs
resu
comparat
n
t
the
ng
for
graphs
resu
comparat
ve
reasons
t
ng
for
graphs
comparat
ve
reasons
for
comparat
ve
reason
0.9
0.9
Smpl1-3
Smpl1-3






 hm 


32000
Smpl2-1
ρρ
ρ=y
ρymm,
=or
0.8
0.8
εs50
=W/(m·K),
0.1
0.8
=ou
0.8
=ou
0.1
= 0.1
Tab eSmpl2-1
4 =L1000
sTab
o kg/m
wa
eSmpl2-1
assemb
Tab
o0.4
wa
eW/(m·K),
y
4 ==ana
L,2000
assemb
sTab
ys
o kg/m
swa
ou
4 =0.1,
ana
Lpu
assemb
oused
wa
ou
ana
pu
assemb
ys
used
nεsεng
or
ana
each
pura
used
a=nεgor
s50
ng
or
each
hm
pura
used
anεgor
ng
or
each
hmraa
ngor
ng
each
hmagor
Smpl2-2
Smpl2-2
λ:
0.5
,2000
λ:ra
W/(m·K),
0.5
0.9
Smpl2-3
ρ4 =L32000
kg/m

Smpl3-1 Smpl2-1
Smpl3-1
ρ
1000
,3 sλ:
εeW/(m·K),
, sλ:dys
0.4
=kg/m
External
=y0.1
0.1,
dys
mm,
External
featur
ng
m
featur
d-range
ng
m
featur
va
d-range
ues
ng
(
m
featur
e
va
d-range
50
ues
mm
ng
(
m
e
of
va
d-range
50
ues
nsu
mm
(
at
e
of
on
va
50
ues
nsu
or
mm
ε
(
at
=
e
0
of
on
5)
50
nsu
or
A
mm
ε
though
at
=
0
of
on
5)
nsu
or
A
t
ε
though
was
at
=
0
on
5)
ant
or
A
t
c
ε
though
was
=
0
5)
ant
A
t
c
though
was
an
Smp
8
2
SampSmp
e
Reference
Samp
e
Reference
Samp
e
Reference
Samp
e
Reference
Proper
es
Proper
of
Wa
Samp
es
Proper
of
Wa
e
Samp
es
Proper
of
Wa
e
Samp
es
of
Wa
e
ANN
Samp
1
ANN
e
ANN
2
1
ANN
ANN
ANN
3
2
ANN
1
ANN
ANN
4
ANN
3
2
ANN
1
ANN
ANN
4
3
2
AN
A
ρ
=
2000
kg/m
λ
0
8
W/
m
K
ε
=
0
5
d
=
100
mm
Ex
erna
33,1000
33an
33an
33ng








1
1
Smp
1
1
ρ
=
1000
kg
ρ
m
=
λ
0
kg
4
W
m
m
λ
K
0
4
ε
W
=
0
m
1
K
ε
=
0
1
pated
that
pated
ANN
that
1
wou
ANN
d
have
1
wou
d
extreme
have
y
extreme
good
pred
y
good
ct
ve
pred
score
ct
(s
ve
nce
score
t
was
(s
nce
a
ready
t
was
a
ready




2
2
Smp
1
2
Smp
1
2
ρ
=
1000
kg
ρ
m
=
5
1000
λ
0
kg
4
W
m
m
5
λ
K
0
4
ε
W
=
0
m
5
K
ε
=
0
5


tra
ned
w
tra
th
fu
ned
data)
w
tra
th
fu
ned
Table 4. List of wall assembly analysis output used for training each algorithm. The table pairs each wall sample reference (Smpl1-1, Smpl1-2, Smpl1-3, Smpl2-1, and so on) and its properties (brick density ρ = 1000 or 2000 kg/m³; thermal conductivity λ = 0.4 or 0.8 W/(m·K); emissivity ε = 0.1, 0.5 or 0.9; insulation thickness d = 50 or 100 mm, placed externally or internally) with the regressors (ANN 1 to ANN 4) trained on it.

The intention was to identify the level of inaccuracy introduced by withholding information from the training process. Each eventually trained algorithm was compared to the values included in the full set of data, including observations that had not formed
part of the input data. The comparison was carefully made against the wall assemblies
incorporating the variable values that the algorithms were deprived of. Since the regressor
models were generally trained using extreme values of insulation (with the exception of
ANN 1, which utilised the full dataset), the comparison was made against wall assemblies
featuring mid-range values (i.e., 50 mm of insulation or ε = 0.5). Although it was anticipated
that ANN 1 would have an extremely good predictive score (since it was already trained
with full data), it was included in the resulting graphs for comparative reasons.
In addition to assessing each algorithm’s performance, it was important to ensure
that overfitting was avoided. Once the most accurate regressor was identified, it was
considered necessary to evaluate its applicability on wall assemblies that had not been
offered to the algorithm at any stage of its development and were, as such, completely
unknown to the model. The variable values and basic structure of these new models had to
be within the range of the algorithm’s training space, as shown in Table 5; however, they
featured completely new parameter value combinations. To enable the comparison of the
ANN predictions with actual ground truth values, six new FE models were developed and
analysed. Their output was finally compared to the regressor predictions, as shown in the
results section of this study. The six new models included the following:
Table 5. Additional FE model parameters.

Sample Ref      Brick Density (kg/m³)   Thermal Conductivity Coef. (W/(m·K))   Thermal Emissivity Coef.   Insulation Thickness (mm)   Insulation Type   Insulation Position
Test Sample 1   2000                    0.8                                     0.9                        25                          EPS               Ext
Test Sample 2   2000                    0.8                                     0.9                        25                          EPS               Int
Test Sample 3   2000                    0.8                                     0.7                        0                           NoIns             AbsIns
Test Sample 4   2000                    0.8                                     0.3                        0                           NoIns             AbsIns
Test Sample 5   1500                    0.6                                     0.9                        0                           NoIns             AbsIns
Test Sample 6   1500                    0.6                                     0.7                        75                          EPS               Int
2.5. ANN Development Protocol
It is a common realisation of the scientific community (at least, the parts working
with Machine Learning and Artificial Neural Networks, in particular) that no formal
standards, guidance or commonly accepted scientific methods of developing an ANN are
available at the moment [9]. Efforts to establish such methods have indeed been made,
providing the first step towards standardisation and a defined set of criteria for choosing
the algorithm input data, architecture, training parameters, and evaluation metrics. The
following paragraphs are dedicated to a step-by-step description of the ANN development
process, following relevant, recently proposed protocols [9,31].
2.5.1. Input Data Selection and Organisation
As mentioned previously, data selection and organisation are the first steps towards
the development of a functional and effective ANN. The importance of careful input
selection is underlined, with a particular focus on data significance and independence.
Although there are various statistical ways of determining the significance of data, the use
of previous experience and domain knowledge is deemed acceptable and valid [9]. In this
case, the independent variables selected are all known to have a considerable contribution
to the development of the final values of the dependent variable.
Data filtering and clustering can reduce the number of input parameters, enabling the
development of a more efficient and computationally manageable algorithm [32]. Such
techniques ensure that any interdependence present in input variables is identified and
either removed or merged, reducing the dimensionality of the necessary analysis. The
number of independent variables used in the present study is considered small. Although,
at present, some dependency between variables is known to exist (i.e., brick density with
its thermal conductivity coefficient), the data is structured in a way that allows for the
introduction of further test cases able to remove such issues (i.e., varying combinations of
brick density and thermal conductivity that are not proportionally related).
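As a minimal illustration of such an interdependence check (a sketch only; the file name inputs.csv and the column layout are hypothetical), the pairwise correlation matrix of the input variables flags strongly related pairs, such as brick density and its thermal conductivity coefficient, as candidates for merging or removal:

```python
import pandas as pd

# Hypothetical table of input observations, one column per independent variable.
df = pd.read_csv("inputs.csv")

# Pairwise Pearson correlations between the numeric inputs; entries with |r|
# close to 1 indicate interdependent variables that could be merged or removed
# to reduce the dimensionality of the analysis.
print(df.corr(numeric_only=True).round(2))
```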
2.5.2. Input Data Preprocessing
Although data preprocessing does not appear explicitly as a separate step in the
ANN development protocol, it is mentioned here because it formed an integral part of the work method followed for this study. From the input data
presentation, it became apparent that categorical variables have also been included in
the dataset (i.e., insulation type, insulation position, etc.). These variables were encoded
to ensure that, ultimately, the input offered to the ANN was consistently numerical. To
prevent the negative effect of a “dummy trap” [33], some of the resulting encoded numerical
variables were removed.
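A minimal sketch of this encoding step is given below (the column names are hypothetical; the study’s own variable naming is not stated). Dropping the first encoded level of each category is one common way of avoiding the dummy trap, since it removes the linearly dependent column:

```python
import pandas as pd

# Hypothetical observations featuring the categorical variables of this study.
df = pd.DataFrame({
    "insulation_type": ["EPS", "NoIns", "EPS"],
    "insulation_position": ["Ext", "AbsIns", "Int"],
})

# One-hot encode the categorical variables; drop_first=True removes one encoded
# column per category so that the remaining dummy variables are not linearly
# dependent (the "dummy trap").
encoded = pd.get_dummies(
    df, columns=["insulation_type", "insulation_position"], drop_first=True)
print(encoded)
```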
Acknowledging that the range of the variable values was quite wide, taking values
from 1 for the encoded variables to 21,600 sec for the timestamps, a scaling of the data
was considered appropriate [34]. To prevent an implicit bias of the algorithm towards the
higher values of the dataset, standardisation was applied throughout the dataset values.
The default values of the “StandardScaler” function from the library sklearn.preprocessing
were utilised [35]. Equation (6) describes the scaling method followed for both the input
and output data. Once the predictions were made, the same scaling tool was used to
reverse the scaling and allow for the inspection of the actual predicted temperature figures
generated by the algorithm.
z = (x − u)/s    (6)

where z is the standard score of the sample being converted, x is the feature being converted, u is the mean, and s is the standard deviation.
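In code, the scaling and its reversal reduce to a few calls to the tool named above; the arrays below are placeholders standing in for the actual dataset:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.random.rand(1000, 8)           # placeholder input features
y = np.random.rand(1000, 1) * 200.0   # placeholder output temperatures

# Standardise inputs and outputs as per Equation (6): z = (x - u) / s.
x_scaler, y_scaler = StandardScaler(), StandardScaler()
X_std = x_scaler.fit_transform(X)
y_std = y_scaler.fit_transform(y)

# After prediction, the same scaler reverses the transformation so the actual
# predicted temperature figures can be inspected.
y_pred_std = y_std                    # stand-in for the ANN's scaled predictions
y_pred = y_scaler.inverse_transform(y_pred_std)
```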
2.5.3. Data Splitting
Splitting the dataset into training and testing sets is a fundamental part of the ANN
algorithm development process. To avoid introducing figure selection bias, the use
of automatic random selection was opted for: the “train_test_split” function from the
“sklearn.model_selection” library, under the Scikit Learn API [35]. Random selection is
currently the most-used unsupervised splitting technique [9]. Following the example of
several other research studies, a ratio of 80–20% of the available observations was allocated
to training and testing the algorithm, respectively [14,36].
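A sketch of the splitting step, using the function named above on placeholder arrays:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X_std = np.random.rand(1000, 8)   # placeholder standardised inputs
y_std = np.random.rand(1000, 1)   # placeholder standardised outputs

# Automatic random selection with an 80-20 train-test ratio; fixing
# random_state makes the split reproducible between runs.
X_train, X_test, y_train, y_test = train_test_split(
    X_std, y_std, test_size=0.2, random_state=42)
```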
2.5.4. Model Architecture and Structure Selection
The feedforward multilayer perceptron (MLP), the ANN setup that was utilised for
this study, is one of the most popular artificial neural network architectures [31]. The number of input and output nodes was defined by the number of input and output variables,
respectively. The input layer features 8 nodes, reflecting the number of input variables.
This incorporates the dummy variables introduced due to encoding the categorical features, as mentioned previously. The output layer includes a single node representing
the dependent variable, which the ANN has to make predictions against (non-exposed
wall-face temperature through time). To achieve a Deep Learning model, a minimum of
two hidden layers had to be constructed. The number of nodes on each layer was based on
previous experience of ANNs achieving satisfactory prediction results. Figure 6 represents
the structure of the ANN used as part of this study.
Figure 6. Graphical representation of the employed ANN architecture.
Although methods of optimising the architecture and hyperparameters of the model
do exist, achieving the optimum model was beyond the scope of this study, which instead
focused on the impact of the quality and quantity of input data. Given that an identical
model architecture was utilised for all 4 different models, the final comparison was undertaken on a level playing field. Since the activation function has an impact on the performance of
the model [37], it is worth mentioning that the rectified linear unit activation function was
used in both hidden layers of all models. For clarity, the other hyperparameters used for
the development of the ANN are presented in Table 6.
Table 6. Hyperparameters used for the development of the ANN.

Hyper Parameter       Value   Comments
Number of epochs      50      Following ad hoc experimentation with various values.
Learning rate         0.001   Default value of Adam optimiser.
Batch size            10      To allow more efficient processing of the large dataset.
Activation function   ReLU    Used the default values of the rectified linear unit activation function.
Optimiser             Adam    Default values of Adam [38] optimiser were used.
Loss function         MSE     Mean squared error loss function.
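Combining the architecture of Figure 6 with the hyperparameters of Table 6, a model of this shape could be sketched as follows. This is an illustrative sketch only: the paper does not state the deep-learning framework or the hidden-layer widths, so Keras and 16 nodes per hidden layer are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Feedforward MLP: 8 input nodes, two hidden ReLU layers (widths assumed),
# and a single output node for the non-exposed wall-face temperature.
model = keras.Sequential([
    layers.Input(shape=(8,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),
])

# Hyperparameters as per Table 6: Adam optimiser with learning rate 0.001 and
# the mean squared error loss function.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")

# Training would then use the remaining Table 6 values:
# model.fit(X_train, y_train, epochs=50, batch_size=10)
```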
2.5.5. Model Calibration
Through training, the ANN converges to the optimum values of the initial random
weights assigned to its neurons; this enables an accurate prediction of the dependent
variable to eventually be made. One of the most computationally efficient algorithms for
achieving such optimisation is backpropagation [39]. A loss function is used at the end
of every epoch to calculate the difference between the ANN prediction and the ground
truth. This study utilises the Mean Squared Error (MSE) loss function, since the physical
phenomenon under consideration was not linear and the final predictions were not distinct
categories (i.e., aiming for the regression of a range of values). With the level of inaccuracy
now quantified, an optimisation function, in this instance Gradient Descent, calculates the
local minimum (optimum) and feeds back to the network, updating the neuron weights
accordingly to reduce the difference between the prediction and the truth (loss). Although
other methodologies for optimising the performance of the model are available [40], they
are beyond the scope of this study.
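As a toy illustration of the update that backpropagation applies to every weight (shown here for a single linear neuron rather than the full network, purely for exposition):

```python
import numpy as np

def gradient_step(w, b, X, y, lr=0.001):
    """One gradient-descent step minimising the MSE loss of y_hat = X @ w + b."""
    y_hat = X @ w + b
    error = y_hat - y                        # prediction minus ground truth
    grad_w = 2.0 / len(y) * (X.T @ error)    # dL/dw for L = mean((y_hat - y)**2)
    grad_b = 2.0 * error.mean()              # dL/db
    # Move the parameters against the gradient to reduce the loss.
    return w - lr * grad_w, b - lr * grad_b
```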
2.5.6. Model Validation
Once the ANN is constructed, trained, and functional, it is necessary to evaluate its
performance before deploying it in field operations or using it for further scientific research.
The evaluation process of the ANN constitutes the focal point of this study, and although it
will be explained thoroughly in the results section, it is worth briefly mentioning the three
core evaluation aspects, along with the various metrics to be considered. An outline of
the implementation method of the above, within the context of this research effort, will
also be given, with the intention of gradually introducing the structure of the upcoming
results section.
There are three main steps in validating the functionality and reliability of an artificial
neural network [9]. The “replicative validity” is the first thing to check to ensure that the
ANN is functional and captures the underlying mechanisms of the physical phenomenon.
Essentially, the algorithm needs to be able to replicate the known data observations that
were offered as input (including the training and testing sets). This process yields “obvious”
results, but it does also provide a sanity check that the algorithm has captured at least
the minimum relationship between the independent and dependent variables. The use
of fitness metrics or visual observation of comparative graphs between predictions and
provided data can aid in this direction. In this study, both methods, metrics and visual
inspection of the algorithm’s ability to replicate the data, have been employed.
The validation of the predictive capacity of the algorithm is the second stage in
building confidence before its implementation in real applications. In this step, the ability
of the algorithm to make accurate predictions, when provided with unknown (not included
in the original training data set) input observations, is assessed by the use of specific
efficiency indices. The impact of training some models with progressively fewer input data
becomes apparent at this stage. An observation of the diminishing scores of the various
metrics and also the deviation of the graphical representation of the predictions from the
ground truth values elucidates the major contribution that appropriately rich and diverse
input data can have on the development of an effective ANN.
Finally, the “structural validity” of the model needs to be confirmed. As part of this
step, the neural network is checked against “a priori knowledge” of how the physical
system under consideration works [9]. Apart from making correct predictions on specific
values, the ANN needs to prove a holistic understanding of the underlying mechanisms
that define the phenomenon that is being studied. In this study, instead of generating
only individual predictions, the ANN is requested to predict the whole time series of the
phenomenon. Thus, the structural validity of the ANN is evaluated through a comparison
of the predicted physical behaviour against the known development of the heat transfer
process through the various wall samples.
In the previous paragraphs, reference was made to the metrics used to assess the
performance of the ANN in terms of accuracy and reliability. Table 7 presents the main
indices used as part of this work [41].
2.5.7. From Evaluation Methodology to Structured Results
The above outlines the main steps and methods for the performance and validity evaluation of the developed artificial neural network algorithms; it also informs the structure
of the investigation and results of this study. Instead of exploring methods of algorithm optimisation, the research interest, in this case, focuses on the impact of quality and quantity
of the provided input data. As such, a single algorithm was developed, adhering to the
requirements of the “Architecture and Structure” section above. Then, the same algorithm
was trained with varying numbers of input data (as presented in Section 2.4 Test Cases)
and was thereon treated as 4 separate predictive models. Each model was subsequently
evaluated, using the aforementioned indices and metrics, to reach a conclusion regarding
the most effective regressor. As a final step, the dominant model was tested and evaluated again, against completely unknown data combinations. A brief outline of the work
sequence followed so far and leading to the following results would include:
• An initial review of the existing bibliography proposing a protocol for the development of ANNs.
• The development of one ANN algorithm based on the peer-reviewed protocol.
• The training of the ANN algorithm with varying degrees of input data (ending up with 4 regressors of the same architecture but different levels of input data).
• A comparison of the performance of the 4 regressor models (same architecture/different input data) and an evaluation of the impact of the quality of offered input data on each model’s predictive capability.
• The identification of the best-performing ANN and validation against a completely new set of data.
• An outline of observations, conclusions, and recommendations regarding the impact of input data quality and ways of mitigating the problem.
Table 7. Metrics used for the evaluations of the performance of the ANN.

Metric                         Reference   Formula 1                                                                                  Perfect Score
Absolute maximum error         AME         AME = max(|Qi − Q̂i|)                                                                       0.0
Mean absolute error            MAE         MAE = (1/n) Σ(i=1..n) |Qi − Q̂i|                                                            0.0
Relative absolute error        RAE         RAE = Σ(i=1..n) |Qi − Q̂i| / Σ(i=1..n) |Qi − Q̄|                                             0.0
Peak difference                PDIFF       PDIFF = max(Qi) − max(Q̂i)                                                                  0.0
Per cent error in peak         PEP         PEP = [max(Qi) − max(Q̂i)] / max(Q̂i) × 100                                                  0.0
Root mean squared error        RMSE        RMSE = √[(1/n) Σ(i=1..n) (Qi − Q̂i)²]                                                       0.0
Coefficient of determination   R2          R² = [Σ(i=1..n) (Qi − Q̄)(Q̂i − Q̃) / √(Σ(i=1..n) (Qi − Q̄)² · Σ(i=1..n) (Q̂i − Q̃)²)]²          1.0

1 Nomenclature of the above formulas: n = number of data points; Qi = observed value; Q̂i = ANN value prediction; Q̄ = mean of the observed data points; Q̃ = mean of the values predicted by the ANN.
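For reference, the indices of Table 7 translate directly into code; the sketch below is one possible NumPy transcription of the formulas:

```python
import numpy as np

def evaluation_metrics(q_obs, q_pred):
    """Compute the Table 7 indices for observed values and ANN predictions."""
    q_obs, q_pred = np.asarray(q_obs, float), np.asarray(q_pred, float)
    resid = q_obs - q_pred
    metrics = {
        "AME": np.max(np.abs(resid)),
        "MAE": np.mean(np.abs(resid)),
        "RAE": np.sum(np.abs(resid)) / np.sum(np.abs(q_obs - q_obs.mean())),
        "PDIFF": q_obs.max() - q_pred.max(),
        "PEP": (q_obs.max() - q_pred.max()) / q_pred.max() * 100.0,
        "RMSE": np.sqrt(np.mean(resid ** 2)),
    }
    # R2: squared correlation between observations and predictions.
    num = np.sum((q_obs - q_obs.mean()) * (q_pred - q_pred.mean()))
    den = np.sqrt(np.sum((q_obs - q_obs.mean()) ** 2)
                  * np.sum((q_pred - q_pred.mean()) ** 2))
    metrics["R2"] = (num / den) ** 2
    return metrics
```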
3. Results
A rigorous testing procedure, following the development and training of the various
ANN models, enabled the assessment of the input data contribution. The graphs included
below allow for a visual interpretation of the impact that varying degrees of input quality
have on the performance of the ANN, while the accompanying metrics help quantify the
same more accurately. For ease of reference, the results are organised in accordance with
the structure and nomenclature of the test cases presented earlier herein.
3.1. Impact of Data Quality on ANN Performance
An observation of the following graphs (Figure 7) highlights the excellent fit of the
network trained with the full training data (ANN 1). The thick grey line on all graphs
represents the ground truth—that is, the actual temperature development on the non-exposed face of the wall through time. The dotted line representing the predictions made
by ANN 1 coincides almost entirely with the ground truth line. The above observation is
very well recorded and reinforced by the metric results.
The indices used to evaluate the performance of the fully trained network (ANN 1)
when tested against the ground truth are summarised in Table 8. The algorithm manages
to predict the final temperature with a maximum error of 2.55 ◦ C, which translates to 2.2%
(test on Wall Assembly 3) of the scale of the overall developed temperatures throughout the
analysis. The good fit is evident not only on the final temperature prediction but throughout
the process, where the maximum error appears to be 3.03 ◦ C. The overall agreement
between the trained algorithm and the ground truth is demonstrated by the Mean Absolute
Error, which on average is 0.73 ◦ C (or a 2.62% difference between the observed values and
their mean). An average of 0.9992 coefficient of determination provides robust evidence of
the agreement between the observed and predicted data.
Figure 7. Graphical representation of the predictive performance of the 4 ANN models (same algorithm, varying quality of
input data).
The training data offered to ANN 2 was deprived of any middle values of insulation
thickness (removed observations incorporating 50 mm insulation externally or internally).
The graphical representation of the network’s predictions reveals that the algorithm, despite
the reduced data, still captures the “essence” of the physical phenomenon and follows the
route of the ground truth curve. Clearly, a perfect fit is not achieved; the prediction lines
generally run close but parallel to the ground truth curve. Wall Assembly 4 (WA4) is an
exception, where the two lines cross at timestamp 15,300 s; the temperature predictions
decline from there onwards.
Table 8. Metric scores for ANN 1 in the testing phase.

Wall Assembly   AME    MAE    RAE     PDIFF   PEP     RMSE   R2
WA1             1.80   0.56   2.33%   1.80    1.7%    0.75   0.9993
WA2             1.63   0.78   2.78%   1.55    1.4%    0.88   0.9992
WA3             3.03   1.10   4.62%   2.55    2.2%    1.29   0.9981
WA4             2.48   0.83   2.60%   1.39    1.2%    1.05   0.9991
WA5             1.69   0.45   1.42%   −0.65   −0.5%   0.55   0.9998
WA6             1.67   0.64   1.97%   0.70    0.6%    0.78   0.9995
Average         2.05   0.73   2.62%   1.22    1.1%    0.88   0.9992
The above is reflected in the performance indices in Table 9. The predictions of the
final temperature are generally within −0.30 ◦ C and 6.76 ◦ C (−0.24% and 6.27% of the
observed values, respectively) of the actual values. As observed graphically, the final
prediction error for WA4 lies at 11.85 ◦ C (10.37% error compared to the actual temperature
value), slightly beyond the range seen on the other tests. Although the prediction and
ground truth lines are generally parallel, indicating a good capture of the heat transfer
mechanisms, the absolute maximum error of 32.04 ◦ C observed in Wall Assembly 3 is a
reminder of the inferior performance of this ANN. This is not an outlier, as on average there
is a 16.82% error in all observations made by this network. The degraded performance
of this network is also reflected by the lower coefficient of determination and higher root
mean square error, which on average are 0.9540 (compared to ANN 1’s score: 0.9992) and
6.65 (compared to ANN 1’s excellent fit score of 0.88), respectively.
Table 9. Metric scores for ANN 2 in the testing phase.

Wall Assembly   AME     MAE    RAE      PDIFF   PEP      RMSE   R2
WA1             8.45    2.69   11.3%    6.76    6.27%    3.45   0.9844
WA2             11.44   3.85   13.7%    2.83    2.51%    5.07   0.9741
WA3             32.04   5.06   21.3%    2.34    2.03%    9.74   0.8921
WA4             22.88   6.87   21.6%    11.85   10.37%   8.84   0.9363
WA5             13.09   4.86   15.3%    3.90    2.97%    6.25   0.9702
WA6             9.96    5.73   17.7%    −0.30   −0.24%   6.57   0.9668
Average         16.31   4.85   16.82%   4.56    4.0%     6.65   0.9540
A graphical observation of the third network’s (ANN 3) performance reveals similar
trends to ANN 2. The predictive capacity of the model is clearly inferior to that of ANN 1. However, the general mechanism of the heat transfer process has been captured, as indicated
by the fact that the ground truth and prediction curves are approximately parallel. The
distance between the two lines is larger than observed in the case of ANN 2. It is also
worth noting that the curves generated by the ANN 3 predictions appear to be smoother
compared to the ones resulting from plotting the predictions of ANN 2.
In a similar fashion, the indices in Table 10 reflect the reduced performance of ANN 3.
The degradation from the removal of the middle values of the emissivity coefficient appears
to be more severe, with absolute maximum error values in the range of 28.46 ◦ C to 48.84 ◦ C.
Throughout the six tests, algorithm ANN 3 generates a relative absolute error of 45.60%
on average, with a maximum of 57.80% for the values predicted for Wall Assembly 2. The
overall performance reduction is reflected by the low average coefficient of determination
(R2 ) and root mean square error (RMSE) of 0.6723 and 18.37, respectively.
Table 10. Metric scores for ANN 3 in the testing phase.

Wall Assembly   AME     MAE     RAE      PDIFF   PEP      RMSE    R2
WA1             28.46   10.87   45.59%   22.58   20.95%   15.05   0.7023
WA2             45.00   16.21   57.80%   45.00   39.93%   22.55   0.4870
WA3             41.09   9.64    40.57%   27.81   24.18%   16.01   0.7081
WA4             48.84   15.12   47.49%   27.71   24.25%   22.35   0.5922
WA5             28.77   11.37   35.85%   14.56   11.07%   15.06   0.8271
WA6             30.34   14.99   46.29%   24.19   19.55%   19.18   0.7171
Average         37.08   13.03   45.60%   26.98   23.3%    18.37   0.6723
The most data-deprived network, ANN 4, presents an irregular graphic form of
prediction results. In all tests, there is an approximate agreement between predictions and
observed values in the lower range of temperatures. However, when the impact of heat
transfer becomes more evident on the finite element analysis model (i.e., ground truth
values), ANN 4 fails to react accordingly.
The indices describing the performance of ANN 4 are presented in Table 11. The
average coefficient of determination for ANN 4 between the six tests is 0.8321 and the
network’s average maximum error is 26.20 ◦ C. ANN 4 generates ultimate temperature
predictions that have an error of 14.55 ◦ C to 31.65 ◦ C (12.92% and 24.06% deviation from the
actual ultimate temperature). Between the six wall assembly tests, the ANN 4 predictions
constitute a mean absolute error of 10.04 ◦ C (35.44% relative absolute error).
Table 11. Metric scores for ANN 4 in the testing phase.

Wall Assembly   AME     MAE     RAE      PDIFF   PEP      RMSE    R2
WA1             18.13   8.36    35.08%   18.13   16.82%   10.18   0.8639
WA2             22.03   10.46   37.30%   14.55   12.92%   12.39   0.8451
WA3             29.39   9.82    41.35%   28.83   25.06%   13.25   0.8002
WA4             33.29   15.44   48.49%   29.33   25.66%   19.49   0.6898
WA5             31.66   8.25    26.01%   31.65   24.06%   12.56   0.8797
WA6             22.70   7.91    24.42%   22.70   18.34%   10.58   0.9140
Average         26.20   10.04   35.44%   24.20   20.5%    13.07   0.8321
Although the above results give an indication of the performance each ANN achieves,
depending on the quality and completeness of the offered training data, it is worth presenting the final loss values obtained after training each algorithm. The figures in
Table 12 indicate the goodness of fit between the predictions made by the algorithms and the
corresponding observed values on the test set. They all appear to be performing extremely
well; this contrasting behaviour is reflected upon further in the discussion section.
Table 12. Final metric scores for each ANN following the completion of their training with their respective training dataset.

Neural Network   Loss
ANN 1            5.0840 × 10−4
ANN 2            5.0721 × 10−4
ANN 3            2.1836 × 10−4
ANN 4            2.4811 × 10−4
3.2. Performance of the Dominant ANN Model
Following the review of the results presented in the previous paragraphs, the superiority of ANN 1 became apparent. To assess the “dominant” model’s performance
against completely unknown data, 6 more tests were carried out. Figure 8 is the visual
representation of the results of these additional tests. The goodness of fit between the
ground truth (data obtained from finite element model analysis) and the predictions made
by the ANN is easy to observe. This is further supported and reinforced by the metrics
included in Table 13.
Figure 8. Graphical representation of the predictive performance of the dominant ANN model against completely unknown
wall assembly combinations.
Table 13. Metric scores for ANN 1, as the dominant network, on unknown data sets (wall test samples).

Test Sample   AME     MAE    RAE      PDIFF   PEP      RMSE   R2
TS1           3.83    0.90   1.89%    −0.44   −0.28%   1.20   0.9995
TS2           4.26    1.93   3.72%    3.72    2.03%    2.48   0.9981
TS3           6.06    2.08   6.37%    4.42    3.30%    2.73   0.9946
TS4           11.42   3.80   26.50%   11.42   15.04%   5.36   0.8990
TS5           12.43   4.16   10.32%   −4.02   −2.70%   5.81   0.9832
TS6           4.13    1.92   4.64%    4.03    2.66%    2.25   0.9976
Average       7.02    2.47   8.91%    3.19    3.3%     3.31   0.9787
The ANN manages to predict the ultimate temperature developed on the non-exposed
surface of the wall test samples with a relatively low error, 3.3% on average. Although
on one occasion (Test Sample 4) the prediction of peak temperature differs from the
observed value by 11.42 ◦ C (15.04% deviation from the actual temperature), on average,
the predictions lie within 3.19 ◦ C of the ground truth. This high predictive performance is
observed not only on the peak temperature, as an isolated success, but also by the overall
low mean absolute error that ranges from 0.90 ◦ C to 4.16 ◦ C.
Tests TS4 and TS5 appear to have more onerous metric results. The absolute maximum
errors are 11.42 ◦ C (in peak temperature, as explained above) and 12.43 ◦ C, respectively.
These can be observed in the graphic representation of the results as deviations of the
prediction curve from the ground truth curve. Despite these local inconsistencies with the
observed figures, the overall high coefficient of determination (0.9787 on average between
tests) and low root mean squared error (3.31 on average) indicate a high-performing model.
4. Discussion
Although an algorithm capable of predicting the temperature developed on the wall
samples’ non-exposed face was eventually constructed, a few items worth highlighting
and discussing further appeared during the development and evaluation process. These
are listed in the following paragraphs, with the intention to evoke thoughts and discussion
regarding potential pitfalls and respective solutions when the ANNs are employed on heat
transfer through masonry wall applications.
The perfect fit of ANN 1 with the ground truth is not a surprising result. The network
was trained with the full training data, which means that its comparison with the ground
truth merely verifies whether its predictions can replicate already-known patterns and
figures. Despite its obvious results, however, this step is far from unnecessary, as it ensures
the replicative validity of the developed algorithm. It helps in building confidence that the
network is not only functional but that it is also able to identify and capture the underlying
patterns in the provided dataset.
After achieving this first level of validation, the research could proceed by exploring
the limits of the algorithm architecture by varying the quality and quantity of the provided
training data. ANN 2, deprived of mid-range insulation thickness values, and ANN 3,
deprived of mid-range emissivity coefficient values, both had difficulty in achieving equally
high predictive rates as ANN 1, which was trained with the full range of data. It is worth
highlighting that the predictions of ANN 2 lay closer to the ground truth but demonstrated
some local irregularities. On the contrary, the predictions generated by ANN 3 lay further
from the ground truth curve (as reflected by the more onerous performance indices);
however, these were largely offset from the observed figures, following their smooth
curvature. This raises questions as to the impact different variables may have on the
performance of ANNs, depending on their contribution to the physical phenomenon under
consideration. The emissivity coefficient is a constant value throughout the finite element
analysis, while properties relating to the EPS are time-dependent (within the context of this
study). The physical importance of the independent variables needs to be well understood
before they are incorporated into an ANN structure.
The most data-deprived network, ANN 4, presents the most irregular graphic form
of prediction results. The comparison between the performance indices for ANN 4 and
ANN 3 show that the former performed better. However, a visual observation shows
clearly that the prediction line of ANN 4 is more erratic compared to ANN 3, whose line is
just “offset” (i.e., consistently overestimating or underestimating compared to the ground
truth). Although networks ANN 2 and ANN 3 failed to make accurate predictions (at least
to the degree ANN 1 did), they appeared to capture the underlying principles and function
of the physical phenomenon. On the other hand, the lack of resemblance to the ground
truth curve demonstrates that the capture of the underlying mechanisms of the physical
phenomenon by ANN 4 is poor, despite its better metrics.
At the training stage, all algorithms returned high indices of fitness to the test data.
However, their performance on predicting values beyond their training sets varied greatly
depending on the amount and quality of data that was offered at that initial stage. This
underlines the need for the validation of the algorithms and a multifaceted evaluation of
their performance prior to their application in further research or field operations. It also
shows that, despite developing a functioning and potentially effective ANN architecture,
the ultimate performance might be compromised by a lack of representative observations
in the training set.
5. Conclusions and Further Research
This paper contributes towards the further integration of ANNs in the field of heat
transfer through building materials and assemblies. Arguably, there is a wide scope for
further development of this research in directions such as the use of different algorithm
types and structures, an extended range of building assemblies and materials, and different
thermal loadings. Nevertheless, some first conclusions can be drawn from this initial
effort, informing future scientific work aiming to employ Artificial Neural Networks for
the description of heat transfer phenomena. Although good metric results quantifying
the replicative validity of the algorithm offer an indication of a functional ANN, they
do not necessarily constitute evidence of a fully operational and reliable network. As
such, the use of further validation techniques is of paramount importance. A comparison
against unknown data (methodology followed in this study) is an objective way of ensuring
the constructed model behaves and performs as expected. When “external” data is not
available or difficult to obtain, k-fold cross validation could provide a route for building
some confidence in the performance of the model. To mitigate the risk of overfitting,
neuron “dropout” can be introduced as part of the algorithm’s hyperparameters. This
would artificially weaken the fitting of each neuron to the supplied training data.
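A brief sketch of these two mitigation measures (scikit-learn’s KFold for cross validation, and a dropout layer under the same illustrative Keras assumption as earlier; the arrays and dropout rate are placeholders):

```python
import numpy as np
from sklearn.model_selection import KFold
from tensorflow import keras
from tensorflow.keras import layers

X = np.random.rand(1000, 8)   # placeholder inputs
y = np.random.rand(1000, 1)   # placeholder outputs

# k-fold cross validation: each fold trains on k-1 parts of the data and
# validates on the held-out part, building confidence without external data.
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    X_tr, X_val = X[train_idx], X[val_idx]
    y_tr, y_val = y[train_idx], y[val_idx]
    # ...build, train, and score a fresh model on each fold...

# Dropout randomly disables a fraction of neurons during each training step,
# artificially weakening the fitting of individual neurons to the training data.
regularised = keras.Sequential([
    layers.Input(shape=(8,)),
    layers.Dense(16, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(16, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(1),
])
```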
As seen in the previous paragraphs, metrics and indices can sometimes be misleading.
An intuitive understanding of the predictions and their relevance to factual data is necessary to mitigate the impact of the “black box” phenomenon of blindly accepting output
generated by an AI model. Outlining the expectations of the models’ output in advance
could provide a measure against which a preliminary assessment of the ANN’s predictions
can be undertaken. In the same direction, producing visual representations of the output at
every stage can enhance the understanding of the relationship between the ground truth
and the predictions and can immediately highlight subtle inaccuracies or major errors.
Depending on the nature and contribution of each parameter to the physical phenomenon, its loss from the training data can have a more or less severe impact on the predictive capability of the Neural Network. It is worth researching this relationship
by quantifying the contribution of various parameters on the development of a physical
phenomenon and then training ANNs with a gradual deprivation of the variables in question. This could provide an opportunity for quantifying the correlation of various physical
parameters with the performance of the artificial neural model.
The ANN’s development, analysis, and interrogation need to be considered within the
wider context of a specific scientific study or practical application. This contextualisation
can have a significant effect on the specification of required accuracy levels for the chosen
evaluation methodology. Different scientific applications have different required levels
of accuracy in their produced results. As such, it is prudent to avoid trying to establish
ill-defined accuracy thresholds without specific application requirements to hand.
This study presented the underpinning principles of an assessment methodology
for the evaluation of ANN predictions and highlighted potential pitfalls arising from
the use of ANNs within a masonry wall heat transfer context. The focus of the present
work is to identify the impact of varying quantities and qualities of input data on the
accuracy of a specific ANN architecture and to propose a methodology for demonstrating
this relationship. This objective has been accomplished by presenting the inferior results
generated by the models trained with reduced amounts of data. Ultimately, the methodology for the assessment and validation of the ANN’s performance was proposed
not as a panacea for all ML evaluation problems, or as an optimum precision benchmark,
but as a first step towards preventing (or at least enabling the quantification of) accuracy
deficiencies in models developed using the data each research team has available.
It is hoped by the authors that future work can feed into the existing proposed
protocols for the development of ANNs while aligning such documentation to the needs
of heat transfer and building fire research. Expanding the existing understanding of the
factors impacting the performance of ANNs and incorporating elements specific to fire
engineering and heat transfer through building elements could help safeguard future
research in this field from misleading results or discrepancies caused by different ANN
models and parameters.
Author Contributions: Conceptualisation, methodology, code writing, data curating, original draft
preparation, illustrations: I.B.; review, supervision, heat transfer theoretical background, thermal
simulations, foundation input data: K.J.K. All authors have read and agreed to the published version
of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data utilised for the development of the ANN algorithms is not
currently available as it forms part of ongoing research.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Kurfess, F.J. Artificial Intelligence; Elsevier: Amsterdam, The Netherlands, 2003; pp. 609–629.
2. Soni, N.; Sharma, E.K.; Singh, N.; Kapoor, A. Artificial Intelligence in Business: From Research and Innovation to Market Deployment. Procedia Comput. Sci. 2020, 167, 2200–2210. [CrossRef]
3. Demianenko, M.; de Gaetani, C.I. A Procedure for Automating Energy Analyses in the BIM Context Exploiting Artificial Neural Networks and Transfer Learning Technique. Energies 2021, 14, 2956. [CrossRef]
4. Paliwal, M.; Kumar, U.A. Neural networks and statistical techniques: A review of applications. Expert Syst. Appl. 2009, 36, 2–17. [CrossRef]
5. Naser, M.; Kodur, V.; Thai, H.-T.; Hawileh, R.; Abdalla, J.; Degtyarev, V.V. StructuresNet and FireNet: Benchmarking databases and machine learning algorithms in structural and fire engineering domains. J. Build. Eng. 2021, 44, 102977. [CrossRef]
6. Tran-Ngoc, H.; Khatir, S.; de Roeck, G.; Bui-Tien, T.; Wahab, M.A. An efficient artificial neural network for damage detection in bridges and beam-like structures by improving training parameters using cuckoo search algorithm. Eng. Struct. 2019, 199, 109637. [CrossRef]
7. Srikanth, I.; Arockiasamy, M. Deterioration models for prediction of remaining useful life of timber and concrete bridges: A review. J. Traffic Transp. Eng. 2020, 7, 152–173. [CrossRef]
8. Ngarambe, J.; Yun, G.Y.; Santamouris, M. The use of artificial intelligence (AI) methods in the prediction of thermal comfort in buildings: Energy implications of AI-based thermal comfort controls. Energy Build. 2020, 211, 109807. [CrossRef]
9. Wu, W.; Dandy, G.C.; Maier, H. Protocol for developing ANN models and its application to the assessment of the quality of the ANN model development process in drinking water quality modelling. Environ. Model. Softw. 2014, 54, 108–127. [CrossRef]
10. Abdolrasol, M.G.M. Artificial Neural Networks Based Optimization Techniques: A Review. Electronics 2021, 10, 2689. [CrossRef]
11. Naser, M. Properties and material models for modern construction materials at elevated temperatures. Comput. Mater. Sci. 2019, 160, 16–29. [CrossRef]
12. Véstias, M.P.; Duarte, R.P.; de Sousa, J.T.; Neto, H.C. Moving Deep Learning to the Edge. Algorithms 2020, 13, 125. [CrossRef]
13. Moon, G.E.; Kwon, H.; Jeong, G.; Chatarasi, P.; Rajamanickam, S.; Krishna, T. Evaluating Spatial Accelerator Architectures with Tiled Matrix-Matrix Multiplication. IEEE Trans. Parallel Distrib. Syst. 2022, 33, 1002–1014. [CrossRef]
14. Naser, M.Z. Mechanistically Informed Machine Learning and Artificial Intelligence in Fire Engineering and Sciences. Fire Technol. 2021, 57, 2741–2784. [CrossRef]
15. Olawoyin, A.; Chen, Y. Predicting the Future with Artificial Neural Network. Procedia Comput. Sci. 2018, 140, 383–392. [CrossRef]
16. Kim, P. MATLAB Deep Learning; Springer: Berlin/Heidelberg, Germany, 2017.
17. Salehi, H.; Burgueño, R. Emerging artificial intelligence methods in structural engineering. Eng. Struct. 2018, 171, 170–189. [CrossRef]
18. Al-Jabri, K.; Al-Alawi, S.; Al-Saidy, A.; Alnuaimi, A. An artificial neural network model for predicting the behaviour of semi-rigid joints in fire. Adv. Steel Constr. 2009, 5, 452–464. [CrossRef]
19. Tealab, A. Time series forecasting using artificial neural networks methodologies: A systematic review. Futur. Comput. Inform. J. 2018, 3, 334–340. [CrossRef]
20. Kanellopoulos, G.; Koutsomarkos, V.; Kontoleon, K.; Georgiadis-Filikas, K. Numerical Analysis and Modelling of Heat Transfer Processes through Perforated Clay Brick Masonry Walls. Procedia Environ. Sci. 2017, 38, 492–499. [CrossRef]
21. Kontoleon, K.J.; Theodosiou, T.G.; Saba, M.; Georgiadis-Filikas, K.; Bakas, I.; Liapi, E. The effect of elevated temperature exposure on the thermal behaviour of insulated masonry walls. In Proceedings of the 1st International Conference on Environmental Design, Athens, Greece, 24–25 October 2020; pp. 231–238.
22. Pedregosa, F.; Michel, V.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Vanderplas, J.; Cournapeau, D.; Varoquaux, G.; Gramfort, A.; et al. Scikit-Learn: Machine Learning in Python. 2011. Available online: http://scikit-learn.sourceforge.net (accessed on 17 October 2021).
23. Nguyen, T.-D.; Meftah, F.; Chammas, R.; Mebarki, A. The behaviour of masonry walls subjected to fire: Modelling and parametrical studies in the case of hollow burnt-clay bricks. Fire Saf. J. 2009, 44, 629–641. [CrossRef]
24. Nguyen, T.D.; Meftah, F. Behavior of clay hollow-brick masonry walls during fire. Part 1: Experimental analysis. Fire Saf. J. 2012, 52, 55–64. [CrossRef]
25. Nguyen, T.D.; Meftah, F. Behavior of hollow clay brick masonry walls during fire. Part 2: 3D finite element modeling and spalling assessment. Fire Saf. J. 2014, 66, 35–45. [CrossRef]
26. Theodore, L.B.; Adrienne, S.L.; Frank, P.I.; David, P.D. Introduction to Heat Transfer, 6th ed.; John Wiley and Sons: Hoboken, NJ, USA, 2011.
27. Fioretti, R.; Principi, P. Thermal Performance of Hollow Clay Brick with Low Emissivity Treatment in Surface Enclosures. Coatings 2014, 4, 715–731. [CrossRef]
28. British Standards Institution. Eurocode 1: Actions on Structures: Part 1.2 General Actions: Actions on Structures Exposed to Fire; BSI: London, UK, 2002.
29. Du, Y.; Li, G.-Q. A new temperature-time curve for fire-resistance analysis of structures. Fire Saf. J. 2012, 54, 113–120. [CrossRef]
30. Mehta, S.; Biederman, S.; Shivkumar, S. Thermal degradation of foamed polystyrene. J. Mater. Sci. 1995, 30, 2944–2949. [CrossRef]
31. Maier, H.R.; Jain, A.; Dandy, G.C.; Sudheer, K. Methods used for the development of neural networks for the prediction of water resource variables in river systems: Current status and future directions. Environ. Model. Softw. 2010, 25, 891–909. [CrossRef]
32. Lu, Y.; Wang, S.; Li, S.; Zhou, C. Particle swarm optimizer for variable weighting in clustering high-dimensional data. Mach. Learn. 2009, 82, 43–70. [CrossRef]
33. Suits, D.B. Use of Dummy Variables in Regression Equations. J. Am. Stat. Assoc. 1957, 52, 548. [CrossRef]
34. Gardner, M.W.; Dorling, S.R. Artificial neural networks (the multilayer perceptron)—A review of applications in the atmospheric sciences. Atmos. Environ. 1998, 32, 2627–2636. [CrossRef]
35. Buitinck, L.; Louppe, G.; Blondel, M.; Pedregosa, F.; Müller, A.C.; Grisel, O.; Niculae, V.; Prettenhofer, P.; Gramfort, A.; Grobler, J.; et al. API Design for Machine Learning Software: Experiences from the Scikit-Learn Project. 2013. Available online: https://github.com/scikit-learn (accessed on 17 October 2021).
36. Jørgensen, C.; Grastveit, R.; Garzón-Roca, J.; Payá-Zaforteza, I.; Adam, J.M. Bearing capacity of steel-caged RC columns under combined bending and axial loads: Estimation based on Artificial Neural Networks. Eng. Struct. 2013, 56, 1262–1270. [CrossRef]
37. Kulathunga, N.; Ranasinghe, N.; Vrinceanu, D.; Kinsman, Z.; Huang, L.; Wang, Y. Effects of Nonlinearity and Network Architecture on the Performance of Supervised Neural Networks. Algorithms 2021, 14, 51. [CrossRef]
38. Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. Available online: https://arxiv.org/abs/1412.6980 (accessed on 22 December 2014).
39. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [CrossRef]
40. Stanovov, V.; Akhmedova, S.; Semenkin, E. Differential Evolution with Linear Bias Reduction in Parameter Adaptation. Algorithms 2020, 13, 283. [CrossRef]
41. Dawson, C.; Abrahart, R.; See, L. HydroTest: A web-based toolbox of evaluation metrics for the standardised assessment of hydrological forecasts. Environ. Model. Softw. 2007, 22, 1034–1052. [CrossRef]