International Journal of Engineering Trends and Technology (IJETT) – Volume 4 Issue 7 – July 2013
ISSN: 2231-5381
Exploration of INNSTA
Ayush Kumar Yogi
Govt. Engineering College, Ajmer, Rajasthan, India
Abstract— INNSTA (Idea of Neural Network for Software Testing Automation) has already been proposed with its basic model structure, the conceptual fundamentals of the model and a mathematical formulation. Here I explore the model architecture, the transfer function and its updating criteria, along with a way to code the backpropagation algorithm for INNSTA. The perceptron is used to create and analyse a simple network, with its communicating functions, network initialization functions and the simulation of such a defined network. Developers can use the perceptron model to create such a network for software testing, which can then be implemented as a software testing automation tool. In this paper, INNSTA is explored in depth, from its design to its mathematical approach. The neural network approach to software testing automation with INNSTA gives developers a straightforward implementation path, and users of such a tool will get efficient results compared with other automation tools.
Keywords— INNSTA, transfer function, perceptron, mathematical approach.
I. INTRODUCTION
INNSTA (Idea of Neural Network for Software Testing Automation) is defined in [1]. Since a neural network has the functional characteristic of parallelism, INNSTA follows this completely, from its initial input configuration to its output layer. In the in-depth scenario of INNSTA, the proposed three-layer MLNN (Multi Layer Neural Network) architecture computes each input value of a single test case, processes different test cases with different input values without interference between them, and saves the results in the order in which the test cases are configured from their test case sub groups and related parent group. This order is important for saving the final test case result as pass or fail. Using a perceptron to create a neural network model and simulating it is the best way to begin before building an INNSTA based tool. The perceptron is normally used for simple pattern-classification problems, but creating a network with some of the perceptron functions, just to observe something close to our logical fundamentals, is not a bad idea. The available functions are categorized according to usability: model functions, network initialization functions, simulation functions, training functions, error functions and many more. Selecting functions according to the problem and their scope is the area emphasized for developers, and for anyone who wants to create their own neural network for the problem.
As stated in the previous paper [1], there are several modifications in designing such a tool. If we used a static neural network with sequential input vectors, then during training the weights would be updated after each input. The emphasis here is that we keep the input-to-hidden-layer weights constant at 1, so a contradiction would occur. The modification is therefore that INNSTA uses a static network with concurrent input vectors rather than sequential input vectors. The static network approach is used because a static network does not require feedback from the output. Concurrent input vectors are used because all the input values of one test case are grouped into one set, and in a parallel way we get the output as one or zero, which is fetched to the output neuron. Weights are updated after each input with sequential vectors but only at the end of all inputs with concurrent vectors, so the modification of ignoring the weight update is reduced with concurrent vectors: if a test case has five input values, the update must be ignored five times per test case with sequential vectors, but only once if concurrent vectors are adopted. This is the reason for merging all inputs into concurrent vectors.
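The saving described above can be illustrated with a small sketch; the function names are illustrative assumptions, not part of the INNSTA specification:

```python
# Hypothetical illustration of the sequential vs. concurrent trade-off:
# with constant weights (w = 1), every weight update must be skipped,
# and the two input-vector styles differ in how often that happens.

def skipped_updates_sequential(num_inputs: int) -> int:
    """Sequential input vectors update weights after every input,
    so the update must be ignored once per input value."""
    return num_inputs

def skipped_updates_concurrent(num_inputs: int) -> int:
    """Concurrent input vectors group all inputs of a test case,
    so the update is ignored only once, at the end of the batch."""
    return 1

# A test case with five input values:
print(skipped_updates_sequential(5))  # 5 ignored updates
print(skipped_updates_concurrent(5))  # 1 ignored update
```

This matches the five-to-one reduction described above for a five-input test case.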
II. ARCHITECTURE MODEL OF INNSTA
The overview model has been proposed in [1]; a clear in-depth model is proposed here. Only one output neuron is taken, and if the tool is developed following this architecture it will list all outputs according to the number of processed test cases. In this architecture, outputs are added in order using two subscripts and one superscript indexing the output neuron; based on these three scripts, the order of the network output is maintained from its transfer function as well. The number of input neurons depends on the number of input values of each test case, and additionally the sub sets are operated concurrently. The test cases are batched into groups named G1SSTC1 up to GzSSTCz, and each test case is identified by its group number. In figure 1, the input neurons take their values from the test case in the input layer. The number of neurons follows a dynamic creation strategy: as stated in the previous paper [1], input neurons are created according to the number of input values of each test case, which eliminates the problem of over fitting and under fitting of input neurons.
In the INNSTA model, one hidden layer is proposed, and each neuron in this layer is responsible for calculating a query output. For each test case, the outputs of the different input values from the query function are sent to one AND function. This AND function performs its operation and fetches the output to the transfer function TF, which is responsible for managing and fetching the final output to the output neuron together with the test case identity.
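The dynamic creation strategy mentioned above can be sketched as follows; the data layout is an assumption for illustration only:

```python
# Sketch: size the input layer from the test case itself, so the number of
# input neurons always matches the number of input values (no over fitting
# or under fitting of input neurons). The dict layout is an assumption.

def create_input_layer(test_case_inputs):
    """Create one input neuron per input value of the test case."""
    return [{"index": i + 1, "value": v} for i, v in enumerate(test_case_inputs)]

layer = create_input_layer(["user@example", "1234", "banana"])
print(len(layer))  # 3 -> three neurons for three input values
```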
Fig. 1 Architecture of INNSTA model. (The figure shows the set of all test cases, batched into groups G1SSTC1, G2SSTC2, ... GzSSTCz. The test cases G1TC01, G1TC02, ... G1TCJ up to GzTC01, GzTC02, ... GzTCJ feed the input neurons P[1,1]^G1 ... P[R,J]^Gz of the input layer. Each input neuron feeds a query function in the hidden layer; the query outputs of each test case, for example the outputs of inputs 1 to R of test case J of G1, are combined by an AND operation, and the transfer function TF fetches each result to the single output neuron O[J]^G.)
A. INNSTA Model Constraints
Figure 1 contains some variable constraints, which specify the model with respect to their positions in the figure. One important point: to specify the input neurons in figure 1, and to receive the related test case result at the output neuron, the input and output neurons carry subscripts and superscripts; how to code them is specified in the following sections of the paper. The constraints in figure 1 are as follows:
1. G1SSTC1 – Group 1 SubSet Test Case 1
2. G1TC01 – Group 1 Test Case 01
3. P[1,J]^G1 – Input 1 of test case J of group G1
4. TF – Transfer Function
5. O[J]^G – Output of test case J of group G
This model is the full implementation view of INNSTA. As described in the previous paper, it is the in-depth architecture of the adoption of neural networks in software testing automation.
III. INNSTA IN-DEPTH
Some modifications are made with respect to the predefined neural network functions and algorithms. Here the backpropagation algorithm is optional: developers may implement it in an INNSTA based tool if efficiency constraints require it. But in the proposed framework design we keep the weights constant at 1 with reference to the desired output. The transfer function TF is responsible for arranging the fetching of the final result to the output neuron: it receives the actual output from the AND operation performed in the hidden layer and then calculates the final result.
A. INNSTA Conceptual
Here we discuss the in-depth concept of INNSTA; the fundamental approach has been discussed before. I have suggested that the backpropagation algorithm needs to be changed in our scenario. In backpropagation, the error is calculated as the difference between the desired output and the actual output [2], and the error rate is minimized by updating the weights through increasing or decreasing the learning rate. INNSTA simply uses a different methodology in place of this training style: an algorithm named TCCA (Test Case Configure Algorithm) is called on the link between the transfer function and the output neuron. Since calling TCCA in between the transfer function and the output neuron is complicated, developers may adjust the call to happen at the transfer function itself. The reason for calling TCCA on this link is to keep the network busy: the reconfiguration of unprocessed test cases by TCCA should not affect the network efficiency. The TCCA is as follows:
1. Check whether sub groups are available.
2. If no groups are available, exit; otherwise go to step 3.
3. Select the test cases from the groups, and for each test case select all its input values.
4. Count the number of inputs and maintain the identity of the inputs in order with respect to group number, test case number and input number.
5. Fetch the data of step 4 to the neural network design function.
6. Remove the configured groups from the group list.
7. Exit.
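The seven steps above can be sketched as a small routine. The data layout (a dict of group number to test cases to input lists) and the design-function callback are assumptions for illustration, not part of the original proposal:

```python
# Minimal sketch of TCCA (Test Case Configure Algorithm).

def tcca(groups, neural_network_design_function):
    # Steps 1-2: if no sub groups are available, exit.
    if not groups:
        return None
    configured = []
    # Step 3: select the test cases of each group and all their input values.
    for group_no, test_cases in sorted(groups.items()):
        for tc_no, inputs in sorted(test_cases.items()):
            # Step 4: count inputs and keep identity in order of
            # group number, test case number and input number.
            for input_no, value in enumerate(inputs, start=1):
                configured.append((group_no, tc_no, input_no, value))
    # Step 5: fetch the step-4 data to the neural network design function.
    neural_network_design_function(configured)
    # Step 6: remove the configured groups from the group list.
    groups.clear()
    # Step 7: exit.
    return configured

groups = {1: {1: ["a", "b"]}, 2: {1: ["c"]}}
received = []
tcca(groups, received.extend)
print(received)  # [(1, 1, 1, 'a'), (1, 1, 2, 'b'), (2, 1, 1, 'c')]
print(groups)    # {} -- configured groups removed
```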
Fig. 2 Scope of TCCA. (The figure shows the remaining sub groups being fed to the NNDF (Neural Network Design Function), which calls TCCA and then configures the input layer, hidden layer and output layer.)

In TCCA there is an option to keep the processed groups in the list together with the unprocessed groups, but that would take extra time to pick the unprocessed groups out from among the processed ones. In step 4 the number of inputs is counted, which is what gives the creation of input neurons its dynamic nature: since the proposed INNSTA methodology removes user interaction from the creation of input neurons, step 4 fulfils this concept. A query function is used in the hidden layer, one for each hidden neuron, responsible for taking an input value and checking whether the query has run successfully. The query function operates in two phases, as follows:
1. The first phase takes the input value and analyses it by running the query on the input field that has taken the input.
2. In the second phase, if the query runs successfully then the query fetches 0, else 1.
A complex calculation is performed here in fetching the actual result in the form of 0 or 1. Note that the second phase of the query function produces the actual output in inverse form: if the query runs successfully, it fetches 0. This is because testing has the objective of finding bugs, errors and defects in the software, so keep in mind that all the input values of the test cases must be wrong. This single operation eliminates some further functionality. For example, if you instead followed the concept that wrong input values should produce 0 as the actual output of the query function, then first you would need to change the transformation phase of the desired output values of the test cases, entering 0 in the desired output column for wrong inputs; second, the AND operation could no longer be used to produce a single output per test case; and third, a comparison with the desired output would be needed at each hidden neuron. That is why INNSTA uses a different approach for faster, better and correct output. This query function is the heart of INNSTA. If developers want to avoid this query function, the alternative is to use the concept of previous software testing tool methodologies and record a session of the application, as performed in the QTP automation tool: start recording, input the values in the fields, run the application operation and stop the recording at the end, and after that implement the INNSTA technique. But this requires a larger update in the design: developers have to record the session after the TCCA operation, because after TCCA execution fresh new inputs exist for fetching to the test.

B. INNSTA Mathematical Evaluation
The INNSTA mathematical evaluation consists of three major parts. The first part explores the identification of neurons: the input at the input layer neuron, the processing neuron in the hidden layer and the output listing at the output layer neuron. The second part explores the transfer function processing, and the third part shows how the output is saved at the output neuron. The input neuron is shown as P[R,J]^G1; its identification is obtained as shown below:

P[Input number, Test case number]^Group number

Fig. 3 Formulation of Neurons
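The three-part identity of figure 3 can be sketched in code; the string format used here is an assumption for illustration, not a prescribed encoding:

```python
# Sketch: encode and decode the neuron identity of figure 3, where P carries
# the group number as superscript and (input number, test case number) as
# subscripts. The bracketed string format is a hypothetical rendering.

def neuron_id(group, input_no, test_case):
    return f"P[{input_no},{test_case}]^G{group}"

def parse_neuron_id(identity):
    subs, group = identity.split("]^G")
    input_no, test_case = subs[2:].split(",")
    return int(group), int(input_no), int(test_case)

print(neuron_id(1, 2, 7))            # P[2,7]^G1 -> input 2 of test case 7 of group 1
print(parse_neuron_id("P[2,7]^G1"))  # (1, 2, 7)
```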
In all respects, the processing order of the hidden and output layer neurons is maintained according to the formulation shown in figure 3. The group number, input number and test case number show the identity: which group, which test case of that group and which input of that test case. Simplification is maintained as far as possible in each and every section of INNSTA, with careful emphasis on efficiency, eliminating unnecessary processing where possible and making high utilization of the neural network approach. The second part concerns the transfer function TF. First of all, the transfer function gets a single input for each test case from its preceding AND operation. TF collects these inputs for all test cases, and for each test case one AND operation with the value 1, the desired output, is performed. The final output is then fetched according to the indexing by group, test case and input number. The functioning is as follows:
1. 0 ∧ 1 = 0, a Fail result
2. 1 ∧ 1 = 1, a Pass result

For example, if test case 1 of group G1 has query outputs 1, 0 and 1 for its three inputs P[1,1]^G1, P[2,1]^G1 and P[3,1]^G1, and test case 3 of group G2 has query outputs 1 and 1 for its two inputs P[1,3]^G2 and P[2,3]^G2, then at the AND operation:

1[G1,1,1] ∧ 0[G1,2,1] ∧ 1[G1,3,1] = 0[G1,1]
1[G2,1,3] ∧ 1[G2,2,3] = 1[G2,3]

and at the transfer function TF:

0[G1,1] ∧ 1 = 0
1[G2,3] ∧ 1 = 1

Fig. 4 Mathematical Evaluation of AND and Transfer Function
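The evaluation in figure 4 can be sketched as follows; the function names are illustrative assumptions:

```python
# Sketch: each input's query output (1 or 0) is ANDed into a single test
# case output, which the transfer function then ANDs with the desired
# output 1 to produce Pass (1) or Fail (0), as in figure 4.

def and_operation(query_outputs):
    result = 1
    for q in query_outputs:
        result &= q
    return result

def transfer_function(test_case_output, desired_output=1):
    return test_case_output & desired_output

# Test case 1 of group G1 (query outputs 1, 0, 1) and test case 3 of
# group G2 (query outputs 1, 1):
print(transfer_function(and_operation([1, 0, 1])))  # 0 -> Fail
print(transfer_function(and_operation([1, 1])))     # 1 -> Pass
```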
Because the transfer function TF calculates the final output, it is necessary to get the right output for the related test case identity. So, to keep the output neuron as a simple single-column entry, we maintain it using a mathematical formulation for accessing the related test case output. One option is simply to create a 2-D array and loop over groups and then test cases, but that requires additional processing to convert the rows into a single-dimensional array by merging them. To fetch each result directly to its correct location, we instead create the array in advance, at design time of the neural network framework for the INNSTA based tool. The size of this 1-D array is obtained as follows:
1. Calculate the total number of groups.
2. Calculate the total number of test cases in each group.
3. Calculate the cumulative frequency (C.F.) of the numbers of test cases, as shown in table I.
4. If G1, G2, G3, G4 have 5, 5, 3, 4 test cases respectively, then the total size of the 1-D array will be 5 + 5 + 3 + 4 = 17. One additional group G0 must be added in order to calculate the right position for each result.
5. The group numbers are then used to calculate and place each test case output at the correct location using the formula given below:

Location of result = C.F. of G(i-1) + test case number

Fig. 5 Formula for fetching the result in the 1-D array
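The sizing and the location formula of figure 5 can be sketched as follows, using the group sizes of the example above; the variable names are illustrative assumptions:

```python
# Sketch: build the cumulative frequency (C.F.) of test cases per group,
# with the extra group G0 at C.F. 0, and compute result locations with
# the formula of figure 5. Group sizes as in the example: G1..G4 with
# 5, 5, 3 and 4 test cases.

test_cases_per_group = [5, 5, 3, 4]   # G1, G2, G3, G4

# Include G0 = 0 so that "C.F. of G(i-1)" exists even for i = 1.
cf = [0]
for n in test_cases_per_group:
    cf.append(cf[-1] + n)             # [0, 5, 10, 13, 17]

def result_location(group_no, test_case_no):
    """Location of result = C.F. of G(i-1) + test case number."""
    return cf[group_no - 1] + test_case_no

print(result_location(1, 1))  # 1  (location of output OT[G1,1])
print(result_location(2, 3))  # 8  (location of output OT[G2,3])
print(cf[-1])                 # 17 -> size of the 1-D result array
```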
The simplification of formulating locations and fetching the output directly provides great benefit in operation: as the test case results are calculated, no looping is needed to maintain the order of groups and their related test cases, so this independent approach provides a "receive and place" operation.

TABLE I
DATA USED IN TRANSFER FUNCTION

Group Name (X1) | Total Test Cases (X2) | C.F. of X2
G0              | 0                     | 0
G1              | 5                     | 5
G2              | 5                     | 10
G3              | 3                     | 13
G4              | 4                     | 17

The correct order of the results, without any conflict, is achieved here. Conflicts would appear where the group number and test case number exchange positions in the scripting, as in OT[G1,2] and OT[G2,1]: OT[G1,2] means the output of group 1, test case number 2, and OT[G2,1] means the output of group 2, test case number 1. The formula shown in figure 5 removes this ambiguity because of the C.F. used. For example, if the outputs are OT[G1,1] and OT[G2,3], then the locations will be 1 and 8 respectively. The concept of using the C.F. is that all test case data are placed at the desired positions without leaving any free block in the contiguous memory locations. Developers may choose to save table I in one additional constant neuron associated with the transfer function TF; the transfer function will then refer to it for placing each result at the right position. This completes the third part, fetching the result at the output neuron.

IV. CONCLUSIONS
The INNSTA approach can be used in developing small, medium and large scale testing products. It provides a dynamic nature of neuron creation. The desired outputs of all test cases are kept at 1. The transfer function is simple, although its working process is complex. A new training algorithm, TCCA, is proposed, which is intellectual in nature. An INNSTA based tool will run efficiently: no phase has a free-time leak point. A complete neural network technique provides parallelism, so processing is fast in theory. The AND operation is simple, so the time to compute the results remains low. The fetching of the final results to the output follows a well managed formulation.

REFERENCES
[1] Ayush Kumar Yogi, "Reformation with Neural Network in Automated Software Testing", International Journal of Computer Trends and Technology (IJCTT), volume 4, issue 6, June 2013.
[2] Yashpal Singh, Alok Singh Chauhan, "Neural Network in Data Mining", Journal of Theoretical and Applied Information Technology.