Project documentation
Emergency call button

Date: 11/03-11
Revision: A
Document ID: PRODDOC
Advisor: Kim Bjerge

Project participants:
Jørgen Vrou Hansen (20042728)
Saiid Shah Alizadeh (19963592)
Anders Hvidgaard Poder (19951439)

14-03-2016
1 Table of contents
2 Introduction ............................................... 3
3 Abbreviations .............................................. 4
4 Process .................................................... 5
5 Domain model ............................................... 6
6 Requirements ............................................... 9
  6.1 Quality attributes ..................................... 9
  6.2 Rationale .............................................. 11
  6.3 Content ................................................ 12
    6.3.1 Concept of Operation ............................... 12
    6.3.2 Storytelling ....................................... 12
    6.3.3 Use case diagram ................................... 13
  6.4 Requirement verification traceability matrix .......... 14
    6.4.1 Initial Requirement Traceability Matrix ........... 14
7 Risk Management ............................................ 17
  7.1 Technical risk ......................................... 17
  7.2 Cost risk .............................................. 18
  7.3 Schedule risk .......................................... 18
    7.3.1 Avoiding risk ...................................... 18
8 Architecture ............................................... 20
  8.1 General architecture ................................... 20
    8.1.1 Dependencies and flow .............................. 22
    8.1.2 Evaluation ......................................... 31
    8.1.3 Transaction Layer Modelling ........................ 32
  8.2 Mapping of the general architecture .................... 33
    8.2.1 Microcontroller and ISM transceiver ASIC .......... 34
    8.2.2 Microcontroller, DSP and ISM transceiver ASIC ..... 39
    8.2.3 FPGA, ADC/DAC and ISM oscillator and LNA receiver filter ... 44
    8.2.4 Comparing platforms ................................ 47
9 Pareto analysis ............................................ 49
10 SystemC ................................................... 51
  10.1 Simulation ............................................ 51
  10.2 GSM 06.10 ............................................. 51
  10.3 Evaluation ............................................ 53
11 Conclusion ................................................ 55
12 References ................................................ 56
13 Appendix .................................................. 57
  13.1 SystemC ............................................... 61
Page 2/63
2 Introduction
This document details the process, decisions and results of the Emergency call button project. A prerequisite for reading
this document is to have read the project proposal and the one-page intro.
The document begins by describing the process and then continues to describe the individual parts of the project as it
progresses through the process. Naturally the process is iterative in nature, yet in the document any changes will be
described under the section they relate to, and the document should therefore not be read as a chronological account of
the process. Where changes occur, their point of origin will be described in the change log along with their effect on
earlier sections.
3 Abbreviations
In this section we avoid listing the most commonly used abbreviations, such as GSM, LAN, ADC, DAC, etc., since we
assume that the reader is familiar with these terms.

SRD     System Requirements Document
SRS     System Requirements Specification
RVTM    Requirement Verification Traceability Matrix
SoC     System on a Chip
TLM     Transaction Layer Modelling
B-CAM   Bus Cycle Accurate Modelling
C-CAM   Computation Cycle Accurate Modelling
CE      Communication Element
PE      Processing Element
ISM     Industrial, Scientific and Medical
BTS     Base Transceiver Station
MTBF    Mean Time Between Failures
INCOSE  The International Council on Systems Engineering
BDD     Block Definition Diagram
IBD     Internal Block Diagram
ASIC    Application-Specific Integrated Circuit
VHDL    VHSIC (Very High Speed Integrated Circuit) Hardware Description Language
GDB     GNU Debugger
4 Process
1. Analyse project proposal and create SRD
   a. Domain model
   b. System level requirements
      i. Use cases.
      ii. Sequence diagrams where needed.
      iii. Non-functional requirements.
2. Refine SRD to SRS (system level only)
   a. State and activity diagrams to clarify use cases.
   b. Traceability for changed requirements.
3. Overall architectural design
   a. Identify blocks and create overall structure.
   b. Mapping of blocks to requirements, both in diagram and RVTM (V is not part of report).
   c. Create internal block diagrams for important blocks.
   d. Create activity, state, sequence and other diagrams where needed.
   e. Modify requirements if needed.
4. SysML for SoC design and SystemC transformation
   a. Relate SysML blocks to SystemC modules.
   b. SystemC notation on SysML diagrams.
   c. Partitioning SysML blocks into 1 or more SystemC modules.
   d. Describe the process and important considerations.
5. SystemC TLM of overall architectural design
   a. Map functional blocks to SystemC modules.
   b. Create communication channels (mostly standard FIFO).
   c. Modify architecture/requirements if needed.
6. Architecture mapping
   a. Identify alternative architectures.
   b. Create architectural design for each alternative.
7. Process mapping for each alternative architecture
   a. Identify processes.
   b. Identify communication.
   c. Map processes to PEs and communication channels to CEs.
8. SystemC timed TLM for each alternative architecture
   a. Update and refine SystemC to the alternative architecture.
   b. Identify delays in the processes and communication channels based on rough estimation.
   c. Implement delays in SystemC.
   d. Simulate the system and compare the results.
9. Conclusion
   a. Evaluate the pros and cons of the alternative architectures.
   b. Evaluate the process.
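The FIFO-based communication of step 5 can be sketched in plain C++ (a simplified stand-in for SystemC's sc_fifo; the module names ButtonModule and RadioModule are hypothetical, chosen only for illustration):

```cpp
#include <queue>

// Simplified stand-in for a SystemC sc_fifo channel between two modules.
template <typename T>
class Fifo {
    std::queue<T> q;
public:
    void write(const T& v) { q.push(v); }  // a real sc_fifo write can block when full
    bool read(T& v) {                      // non-blocking read for this sketch
        if (q.empty()) return false;
        v = q.front();
        q.pop();
        return true;
    }
};

// Hypothetical modules: a button block producing events, a radio block consuming them.
struct ButtonModule {
    Fifo<int>& out;
    explicit ButtonModule(Fifo<int>& f) : out(f) {}
    void push_event(int id) { out.write(id); }
};

struct RadioModule {
    Fifo<int>& in;
    int transmitted = 0;
    explicit RadioModule(Fifo<int>& f) : in(f) {}
    void run() { int e; while (in.read(e)) ++transmitted; }
};
```

The point of the sketch is only the wiring style: functional blocks never call each other directly but communicate through a channel object, which is what makes later remapping to alternative architectures (steps 6-8) possible.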
5 Domain model
(Figure content omitted: the drawing shows a private residence with the emergency call button, its speaker and microphone, exchanging ISM alarm signal and audio with the 230V-powered emergency call base; the base connects via GSM (GPRS alarm signal and audio) through a BTS to the phone company server; and the phone company system connects via the internet to the emergency system server at the caregiver head office, where a caregiver and a technician are on duty.)
Figure 1 - Domain model drawing
Figure 1 shows the overall parts of the Emergency call system, including their communication paths. Only a
small part of the above is part of this project, yet it is important to understand the domain in which the project operates.
The system functions as follows:
1. An emergency call base is installed in the home of the person receiving care, and an emergency call button is issued to that person.
2. The emergency call button communicates with the emergency call base using the ISM network.
3. The emergency call base communicates with the phone company server using its built-in GSM modem and SIM card via a Base Transceiver Station (BTS).
4. The phone company server forwards the communication via the internet to/from the emergency call server (naturally in a properly protected tunnel).
5. The emergency call server forwards the communication to/from the interested parties: technician, caregiver or both.
The above may be formalized in a SysML Block Definition Diagram (BDD) as shown in Figure 2.
bdd Emergency call system Domain
(Figure content omitted: the Block Definition Diagram shows the «system» Emergency call system composed of the «block» Emergency call Server, 1..N «block» Emergency call Bases and * «system» Emergency call Buttons (the system of interest), communicating via the «block» GSM network and «block» ISM network, within the «block» Environment, with the User, Technician and Head office as associated actors.)
Figure 2 - Emergency call system Domain
Figure 2 shows not only the System of Interest, but also the environment in which it operates and the blocks
with which it interconnects.
The environment in which the emergency call button operates is one where it may be operated by an elderly or
disabled person, and may therefore be exposed to some degree of moisture and shocks, as well as worn continuously
for a long time.
The ISM network naturally suffers from limited range, noise and interference, which are inherent to the ISM band.
The Emergency call base is the communication gateway between the Emergency call server and the Emergency call
button. It also has other responsibilities, yet they are irrelevant for this project. The communication goes via the GSM
network, which also suffers from limited range, noise and interference as well as a third party service provider (the
phone company). In the target area it is believed that with a sufficiently large and well placed GSM antenna a
sufficient signal may always be achieved (assumption). Furthermore the GSM network has been thoroughly tested and
the service providers have strong incentives to keep a very high availability.
The Emergency call server is placed in a protected environment with sufficient power and network access, and can
reach the technician and caregiver on duty using the head-office LAN, SMS gateways and/or other third-party
services.
Focusing on the System of Interest, a set of requirements may be created from the project proposal. As the focus of
this project is proof-of-concept, some of the detailed requirements (e.g. colour, frame IP rating, …) will be excluded,
and only the critical requirements, especially those containing implementation risks, will be broken down in the
architecture.
6 Requirements
This section contains both the overall system requirements (SRD) and the refined requirements (SRS) in one, as the
project is an in-house project with close proximity to the “customer”. It therefore does not make sense to create two
separate documents; it is far more efficient to simply create the SRS right away.
Again the requirements are only the system requirements and do not include all detailed requirements, yet should
contain sufficient data to complete a proof-of-concept.
Before we dive into the creation of the requirements we should look at the Quality attributes of importance to this
system, as well as the challenges in meeting these requirements.
6.1 Quality attributes
There are several schools of system architecture, many of which define their own set of quality attributes. A well
known and recognized set is the one defined by [BASS] as follows:

 Availability
 Modifiability
 Performance
 Security
 Testability
 Usability

Many add an extra quality attribute:

 Safety

And finally there are some business-oriented attributes:

 Time to market
 Cost
Naturally all of these are important, but some are more important than others, and some are also more likely to cause
trouble. It is therefore important to include a risk assessment in the quality attribute analysis.
Naturally Safety is vitally important, but as this product focuses almost entirely on communication,
most of the safety aspect is contained in Availability; the only true Safety aspects are pinching one's finger in the
button, scratching oneself on the frame or the battery blowing up. Looking at cell phone manufacturers, the last safety
aspect is actually a real one, especially as the emergency call button might get damp and is worn close to the skin. We
do however believe that the current EU regulations cover these aspects, that sufficient experience exists to avoid
this problem, and that the risk of violating requirements for the safety attribute is low.
Second comes Availability. The button must “always” be able to report an emergency (including battery-low situations).
Naturally there is no such thing as “always”, so an acceptable probability of failure must be agreed upon. In the present
system there is no fault detection, but the “buttons” are made so simple that a very high Mean Time Between Failures
(MTBF) can be achieved. This must also include the probability of a failure later in the communication path, but
as those units are on a continuous power supply they can maintain a much higher fault detection frequency. It is
believed that the probability of mechanical failure (the button gets stuck or the micro switch fails) can be considered
negligible, making the state of the microcontroller a sufficiently deep fault detection.
Usability is also vitally important. Here the experience from existing systems may be used to determine the optimal
size, colour, location and feel of the button itself. Charging the emergency call button is a much more interesting
aspect, as the existing “buttons” very rarely need to have their battery replaced.
Testability is interesting in the sense that it is especially important to verify the Availability requirements, so it should
be considered when writing the test cases and also when implementing the product.
Naturally a certain level of Performance is required, but the response times that are expected lie well below the
technical capabilities of the technology available, and the risks here are therefore minor.
Modifiability is more of a nice-to-have, as the cost of the individual emergency call button is such that the entire button
may simply be replaced in lieu of updating the firmware; gaining access to the buttons themselves is also
relatively simple, as central lists of their whereabouts are kept at the municipalities.
Finally there is Security, and for this project it is not important. No confidential information should be exchanged on
this medium, and the desire to impersonate an emergency call button is most likely quite small.
Naturally the emergency call buttons must be distinguishable from each other, but protection against someone
intentionally trying to spoof an emergency call button is not required.
From the above it is possible to create a prioritised list of quality attributes along with their estimated level of
complexity (and thereby risk).
Priority  Quality attribute  Complexity (0 - 10, where 10 is highest   Flexibility (0 - 10, where 10 is
                             complexity, i.e. highest risk)            highest flexibility)
1         Safety             1                                         0
2         Availability       8                                         3
3         Usability          3                                         2
4         Testability        4                                         8
5         Performance        4                                         5
The reason it is important to include the complexity of achieving the required level and the allowed flexibility is so it is
possible to know where there may be some leeway. If the flexibility is 0 it is not even worth looking at, as there are
(most likely) strong legislative reasons why this quality attribute must be met to the exact specification. Naturally the
complexity and flexibility are rough estimates based on the developers' and architects' understanding of the task at
hand, and may change as the project progresses; yet if there is a high degree of uncertainty it might be a good idea to
reduce this uncertainty before progressing, or at least address this module first in the further development.
When these priorities are in place it is possible to address the business side of the quality attributes. Many newer
schools of development argue that you cannot (or at least should not) change the quality, only adjust the number of
features included to meet the time and cost schedule. This is often illustrated by the triangle shown in Figure 3, often
called the iron triangle because it is (ideally) possible to control two sides of the triangle, but never three. So if you are
willing to pay anything, you can have all the features in a very short amount of time, but if you are not willing to pay
very much, and you want it yesterday, then you cannot have very many features.
The problem with this triangle is that it does not take non-functional requirements into account, which many of the
quality attributes are. This is important because it may also be possible to save significant money and time if
the client is willing to lower the availability requirement, without touching the functional requirements (the features).
This is where the complexity comes into play, as it shows how much might potentially be gained in time and
money by adjusting these parameters. This should then be combined with the flexibility, to gain an impression not
only of how much there is to be gained, but also of how willing (and able) the client is to make changes to these
requirements.
Figure 3 - Iron triangle
6.2 Rationale
Diving into the actual requirements, we can derive much of the information directly from the project proposal, current
legislation, existing solutions, and some from the specification of the emergency call base. Other requirements, like
the maximum allowed time between failure detections, are derived from a combination of risk assessment and
acceptable delays as indicated by the caregivers. By estimating a mean time between failures (MTBF), an
acceptable response delay and a failure probability, it is possible to calculate the required fault detection interval needed
to meet the indicated fault probability figure.
This calculation can be done as follows:
- MTBF = 365 days.
- Acceptable extra delay = 30 minutes (a total response time of 60 minutes).
- Acceptable probability that the +30 minutes requirement is not met = 0.1%.
The probability that the error occurs more than 30 minutes before the next fault detection is (X - 30) / (365 * 24 *
60) => (X - 30) / 525600, where X is the interval between fault detections. Combining this with the required probability
gives (X - 30) / 525600 = 0.001 => X - 30 = 525.6 => X = 555.6 minutes, i.e. a fault detection every 9 hours would
suffice, and the chosen 2-hour interval is therefore acceptable.
Naturally the probability that one unit in a collection of units does not meet the above specifications is different (relative
to the size of the collection).
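The calculation above can be captured in a small helper (a hypothetical function name; the formula is simply the document's equation solved for X):

```cpp
// Required fault detection interval X (in minutes) from the document's formula:
// (X - extra_delay) / mtbf_minutes = p_not_met  =>  X = p_not_met * mtbf_minutes + extra_delay
double max_detection_interval_minutes(double mtbf_minutes,
                                      double extra_delay_minutes,
                                      double p_not_met) {
    return p_not_met * mtbf_minutes + extra_delay_minutes;
}
```

With MTBF = 365 days (525600 minutes), an extra delay of 30 minutes and p = 0.1%, this yields 555.6 minutes, matching the roughly 9-hour bound derived above.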
6.3 Content
[INCOSE] describes a system life cycle of processes and activities. For the Emergency call button system we have
chosen to describe some of the activities in the life cycle model management process, which is part of the
organizational project-enabling processes. The activities and scope for this project are illustrated below. In the life cycle
model management we have focused on one task in the project process, namely risk management. In the technical
process our first priority has been the architectural design process, and second the implementation phase, to support
simulation of our architecture. The stakeholder requirements definition and requirements analysis processes have been
merged under one title (requirement specification). This decision is because our customer is an experienced
systems engineer.
System Life cycle Processes Overview 1
6.3.1 Concept of Operation
This section contains storytelling from which many use cases and stakeholders can be derived.
6.3.2 Storytelling
An old lady has fallen and hurt herself in the bedroom. The old lady presses the emergency call button and gets
redirected to a service assistant at the help care center. At the help care center, they quickly realize that the old lady
needs help, and immediately send a car off to get her.
An old man tries to reach the coffee on the top shelf, but cannot. He then presses the emergency call button and gets
redirected to a service assistant at the help care center. The service assistant quickly realizes that the situation is not
serious and that urgent help is not needed.

1 INCOSE Figure 1-1 System life-cycle Process Overview per ISO/IEC 15288:2008
Storytelling helps people understand what the system is all about. It focuses on real-life scenarios and how the system
interacts with people's daily life. In our case the system is designed to improve the helper's workflow by providing two-way
communication, which is new in this field. The last story is an example of the strength of our system.
6.3.3 Use case diagram
(Figure content omitted: the use case diagram shows the actors User, Caregiver, Head office, Technician and Emergency call base around the system boundary, with the use cases Activate emergency call, Recharge battery, Runtime configuration, Battery low, Installation of new emergency call button, Heart beat failure and Firmware update. A note on Runtime configuration reads: “E.g. adapting the TX to the environment to maximize battery life.”)
Figure 4 - Use case diagram
In Figure 4 all communication goes through the Emergency call base, yet where the communication is merely relayed
to another actor the Emergency call base is not shown as part of the communication.
The overall states of the system may be seen in Figure 5. Here it can be seen that, since there is only one way of
activating the emergency call button, the button is also used to enable a firmware update, to configure the
button to an emergency call base, or to configure the transmission strength. This is done to reduce power consumption, so that the only
unsolicited activity required of the emergency call button is to wake up and transmit a heartbeat as fire-and-forget.
stm Operating state
(Figure content omitted: the state machine diagram shows an Idle state with four outgoing transitions: “Button push” to Emergency button activated, returning to Idle on “Emergency done”; “Heartbeat timeout” to Send heartbeat, returning on “Heartbeat sent”; “Configuration requested” to Configuring button, returning on “Configuration done”; and “Firmware update pending” to Firmware update, returning on “Update done”.)
Figure 5 - Main states
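The hub-and-spoke shape of Figure 5, where every activity starts from and returns to Idle, can be sketched as a small transition function (a hypothetical C++ sketch for illustration, not part of the actual design):

```cpp
// States and events taken from Figure 5; the enum names are our own transcription.
enum class State { Idle, Emergency, SendHeartbeat, ConfiguringButton, FirmwareUpdate };
enum class Event { ButtonPush, HeartbeatTimeout, HeartbeatSent, ConfigRequested,
                   ConfigDone, FirmwareUpdatePending, UpdateDone, EmergencyDone };

// Every transition either leaves Idle or returns to it; anything else is ignored.
State next(State s, Event e) {
    if (s == State::Idle) {
        switch (e) {
            case Event::ButtonPush:            return State::Emergency;
            case Event::HeartbeatTimeout:      return State::SendHeartbeat;
            case Event::ConfigRequested:       return State::ConfiguringButton;
            case Event::FirmwareUpdatePending: return State::FirmwareUpdate;
            default:                           return s;
        }
    }
    if ((s == State::Emergency         && e == Event::EmergencyDone) ||
        (s == State::SendHeartbeat     && e == Event::HeartbeatSent) ||
        (s == State::ConfiguringButton && e == Event::ConfigDone)    ||
        (s == State::FirmwareUpdate    && e == Event::UpdateDone))
        return State::Idle;
    return s;  // no other transitions exist in Figure 5
}
```

The single entry point through Idle is what allows the button to stay asleep except for the fire-and-forget heartbeat described above.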
Details about the use cases and actors may be found in the Use case document [UseCase].
6.4 Requirement verification traceability matrix
To keep track of our requirements we have developed a requirement verification traceability matrix (RVTM) 2, though
we have skipped the “V” (verification) because it is beyond the scope of this project.
The table below lists all system requirements together with their corresponding use cases, and an indication of whether
each requirement is functional or non-functional.
6.4.1 Initial Requirement Traceability Matrix
(The Requirement reference and Test Case reference columns are N/A for all rows, as verification is out of scope, and are omitted below.)

ID 1,  SR1  (Functional, Handheld device, UC1): When the emergency call button is activated the emergency shall be reported to the head office.
ID 2,  SR2  (Functional, Handheld device, UC2): It shall be possible to recharge the battery on the emergency call system.
ID 3,  SR3  (Functional, Handheld device, UC3): The system shall automatically adjust the RF transmission power according to the environment, to minimize power consumption.
ID 4,  SR4  (Functional, Handheld device, UC4): The system shall notify the user if the battery is below 20% of max capacity.
ID 5,  SR5  (Functional, Base, UC5): The system base shall notify a technician if it doesn't receive a signal for a period of 2 hours.
ID 6,  SR6  (Functional, Handheld device, UC6): It shall be possible to install a new emergency call button.
ID 7,  SR7  (Functional, Handheld device, UC7): It shall be possible to update the firmware on the emergency call button.
ID 8,  SR8  (Design, Handheld device): The emergency call button itself shall not weigh more than 125g.
ID 9,  SR9  (Design, Handheld device): The emergency call button shall not be larger than 40x60x15mm.
ID 10, SR10 (Design, Handheld device): The button on the emergency call button must be at least 20x30mm or have a circumference of at least 75mm.
ID 11, SR11 (Design, Handheld device): The ISM band used shall be the EU allocated frequency for social alarms (EN 300 220) at 869.2 - 869.25MHz.
ID 12, SR12 (Design, Handheld device): All requirements set down by the EU and Denmark regarding EMC, transmission strength and frequency hopping must be met, as well as other legal obligations pertinent to the product/project.
ID 13, SR13 (Performance, UC4, comment *): The device's battery life shall be sufficient for at least 24 hours of stand-by (with heart beats) and a 5 minute two-way conversation.
ID 14, SR14 (Performance, UC1, comment **): The emergency call button shall transmit a button-push to the emergency call base within 500ms of the button being pushed.
ID 15, SR15 (Performance, UC5): An emergency button failure must be reported by the emergency button base no later than 2 hours after the actual time of failure.
ID 16, SR16 (Performance, UC7): A firmware update shall take no more than 30 minutes to complete.
ID 17, SR17 (Performance, UC2, comment ***): The emergency call button shall be able to charge from empty to fully charged in no more than 6 hours.
ID 18, SR18 (Functional, Handheld device, UC1): During an emergency it shall be possible to maintain two-way audio communication with the handheld device.

2 INCOSE p. 57
* A graph showing the dimensions/weight vs. battery life shall be shown, starting at the smallest battery supporting the above time constraint and continuing until the size and/or weight constraint is reached.
** The caregiver must be at the home of the user no more than 30 minutes from the activation of the emergency call. From this, a requirement of a maximum 10 second delay from the emergency call button being pressed until the alarm is received by the central office is created, and as the possible delay on the GSM network covers most of the 10 seconds allotted, the ISM delay of 500ms is determined.
*** If the users are to charge the emergency button themselves, they must be able to do so while they sleep.
7 Risk Management
This process is about identifying, analyzing, treating and monitoring risk continuously. Since the emergency call system
has great prospects, it is important to analyze which factors could risk the product's value on the market.
NOTE: THIS IS NOT A COMPLETE RISK MANAGEMENT ANALYSIS.
The risk of failure for the emergency call system can be divided into different types of risk [INCOSE]:
1. Development (technical risk)
2. Maintenance (technical risk)
3. Operation of system (technical risk)
4. Price (cost risk)
5. Time to market (schedule risk)
Risk deals with uncertainty that is present throughout the entire system life cycle. The goal is to achieve a proper
balance between risk and opportunity.
7.1 Technical risk
This risk is worth taking seriously, because a technical failure could be devastating for the emergency call system. The
technical risk is the possibility that a technical requirement for the emergency call system may not be achieved in the
system life cycle. Normally large technology companies have experience from previous projects and know where the
risk of failure is most likely, but for the emergency call system, which comes from a newly established company, there are many
technology barriers which have to be confronted.
Some requirements are more easily tested than others; especially SR3, which takes the environment into account, is a
candidate for several tests and would be rated HIGH in the risk probability table 3 (definition explained later).
The risk of choosing the wrong platform is also worth mentioning. Pareto optimal solutions are based upon choosing
the right components with respect to cost and performance. These parameters aren't enough when talking about technical
risk. If the pareto optimal solution is a combination which includes an FPGA, and there wasn't any IP available for a given
functionality, we could be risking a lot because no members of the project team are familiar with HDL implementation.
The pareto optimal solution presupposes that each configuration is well known to the project team.
It is nevertheless important to consider platform risk, which is illustrated in the table below.
Probability   Uncertainty                                  Rank
> 80%         Almost certainly, highly likely              5
61%-80%       Probable, likely, probably, we believe       4
41%-60%       We doubt, improbable, better than even       3
21%-40%       Unlikely, probably not                       2
< 21%         Highly unlikely, chances are slight          1
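The probability-to-rank mapping in this table can be transcribed directly as a small lookup (a hypothetical helper, shown only to make the band boundaries explicit):

```cpp
// Rank lookup from the risk probability table: probability in [0, 1].
int risk_rank(double probability) {
    if (probability > 0.80) return 5;  // almost certainly, highly likely
    if (probability > 0.60) return 4;  // probable, likely
    if (probability > 0.40) return 3;  // better than even
    if (probability > 0.20) return 2;  // unlikely, probably not
    return 1;                          // highly unlikely, chances are slight
}
```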
3 http://www.dtic.mil/ndia/2004cmmi/CMMIT1Mon/Track1IntrotoSystemsEngineering/KISE09RiskManagementv2.pdf
The above table illustrates how likely a future problem is to occur. In the rationale about the recommended platform,
the pareto point and the probability rank from the above table will be included to give a more adequate conclusion
about platform selection.
We have been discussing a probability factor, which is important to consider. But what is the consequence if the
worst happens? Let's consider an example.
Let's say we go for an FPGA platform. The probability that the software development exceeds the time schedule due to
inexperienced developers would get a rank of 4. The consequence would in the worst case be lost customers and lawsuits
due to breach of contract.
7.2 Cost risk
This risk is of course very important for companies producing in large numbers. As our product is a school project, the price of components is not very important, though it will be considered when making a recommendation for platform selection. If this product is matured for production, the component price is of course important, and platform selection will be based not only upon the optimal solution but also on cost price.
7.3 Schedule risk
What if a supplier cannot deliver within half a year, or what if we cannot implement the system within the given schedule? This is normally a hard topic because it involves estimation of tasks and milestones. In this project the schedule has been tackled through small deliveries from each team member at each meeting.
7.3.1 Avoiding risk
To avoid risk, system engineers have to be proactive in the early requirement phase. As stakeholder requirements normally change within a project life cycle, risk management is an iterative process and must be maintained at all times. If system engineers rank stakeholder requirements in the early system phase, the components involved can be candidates for early test and verification. The illustration below shows the iterative process of risk management throughout the project life cycle.
The development of the Emergency call button involves risk. Some of those risks are of course more serious than others. Prioritization and a strategy plan for those risks are therefore necessary early in the project cycle. The table below illustrates some of the risks identified in the Emergency call system.
Emergency call button risks

Risk type | Id | Requirement / Description | Rank | Strategy | Consequence
Technical | 1  | SR13                      | 3    | Action: test power consumption on demo board. Resources: 500 hours | Customer chooses another supplier
Technical | 2  | Unknown platform etc. (FPGA) | 5 | Action: training and workshops. Resources: 1000+ hours |
Schedule  | 3  | SR14                      | 4    | |
          | 5  | Id ref. 2                 |      | |
          | 6  | Noise source: does the environment make transmission difficult? | | |
          | 7  |                           |      | |
It appears that the most risky part of developing the Emergency call button is choosing an unknown platform. The reasoning for choosing such a platform is therefore very important, and the platform must provide superior advantages. Chapter ?? (Pareto point) will show whether an unknown platform provides sufficient advantages, and what the tradeoff will be with respect to this risk management analysis.
8 Architecture
The architecture of the system can be separated into two parts
1. General architecture
2. Mapping specific architecture
The general architecture is characterised by being independent of the mapping to HW and SW, whereas the mapping specific architecture depends on selecting the platform on which the general architecture must exist. The mapping specific architecture is a refinement of the general architecture, and must therefore not be in conflict with the general architecture.
8.1 General architecture
In order to define the general architecture the individual blocks that make up the system of interest must be defined.
This is done using a Block Definition Diagram, as may be seen in Figure 6.
[Figure 6 shows the Block Definition Diagram of the «system» Emergency call button, which is composed of the blocks Housing («HW»), Battery, LED, Control, Firmware update, Button, Communication, Audio, Antenna («HW»), ISM, Microphone, Speaker and Signal strength control, with the external actors Emergency call base and User.]
Figure 6 - General Architecture Block Definition Diagram
Figure 6 is the result of analyzing the requirements and grouping the required functionality. It may furthermore be seen that no decision has been made as to what is realized in HW or SW, with the exception of the Housing, which must, for obvious reasons, be realized in HW.
When transforming the requirements into an overall architecture many different techniques may be used. An example is [SOFTARC], and though these books focus on software architecture, they may also be used for HW/SW architecture. Though many books on architecture exist, a certain level of experience is extremely valuable when choosing the correct architecture.
Looking at the quality attributes we can see that it is a relatively static system: no plug-and-play, no on-the-fly core updates, no third party peripherals. This means that the architecture does not need to include pluggable components or extendable drivers.
The high requirements for availability point to a simple system, as the number of errors in a system is proportional to the amount of code, and to a system that can be tested thoroughly, which requires interfaces for testing the modules independently.
There are many different architectures that can fulfil the requirements, and there may even be several equally good architectures. Choosing an architecture therefore becomes a matter of weighing the quality attributes against the architectural patterns that are suited to satisfy them, combined with experience. Naturally the architectural patterns should be modified as needed to include only the parts that are necessary for this application.
Grouping the requirements into independent blocks using the techniques described above, a possible collection of architecture blocks could be the one shown in Figure 6.
The following blocks, their responsibilities and their mapping to the requirement(s) they help fulfil have been identified:
Housing (SR8, SR9, SR12)
The physical frame that holds the electronics. This is per definition HW, and it has no intelligence or electrical components, only housing.

Antenna (SR11, SR12)
The physical antenna used by the ISM block, as well as potentially any required physical components not part of the ISM block.

ISM (SR11, SR12, SR14)
The logic required to package and transmit a data stream / data frame to the Antenna and receive a data stream / data frame from the Antenna. The block is also responsible for maintaining the connection, if there is a connection to maintain (depends on the low-level protocol), required channel hopping, etc. The exact division of responsibility between the Antenna and the ISM block with respect to physical components may be adjusted depending on the mapping.

LED
The physical LED as well as any required low-level driver and physical components required by the LED (often a discrete output is dimensioned so it can drive an LED directly).

Button (SR1, SR10, SR12, SR14)
The physical button as well as any required low-level driver and physical components required by the button (e.g. a simple hysteresis circuit to prevent multiple activations).

Battery (SR2, SR4, SR12, SR13, SR17)
The physical battery, the charging circuitry and monitoring circuitry as well as any required low-level driver.

Microphone
The physical microphone as well as any required low-level driver and physical components required by the microphone (e.g. an external filtration or amplification circuit).

Speaker
The physical speaker as well as any required low-level driver and physical components required by the speaker (e.g. an external filtration or amplification circuit).

Control (SR1, SR4, SR5, SR6, SR13)
The control logic which maintains the overall state of the system, including timing (when to check battery status, when to send heartbeats), commands from the base (cancel emergency, update firmware) and peripherals (button, LED). The Control logic is also responsible for turning the Audio on and off.

Communication (SR14)
The communication logic which handles the application level data to and from the ISM block. It is responsible for adding any required frame header and/or footer specific to the application and for distributing received data to Control or Audio depending on the content (received frame header/footer).

Signal Strength control (SR3)
The logic responsible for adjusting the transmission strength of the ISM module based on e.g. the number of retransmissions (BER), received transmission strength or information from the base.

Audio (SR13, SR18)
The logic responsible for sampling the microphone, filtering out the feedback noise, packaging the data and sending it to the Communication block, as well as receiving audio frames from the Communication block, unpacking the data and sending it to the speaker and the Echo cancellation filter. The Audio block may be enabled and disabled depending on whether an emergency is ongoing.

Firmware Update (SR7, SR16)
8.1.1 Dependencies and flow
The above Block Definition Diagram (BDD) only shows the blocks that make up the system and some information about which blocks are part of other blocks, but it does not show how blocks communicate. For that we need the Internal Block Diagram (IBD).
Here we will focus on the important blocks, meaning the blocks that have some data flow and the blocks that have an interesting interface.
The remaining blocks should naturally also be done, but they are not imperative for a proof-of-concept or for choosing an architecture and platform.
8.1.1.1 Control
The main state machine, as shown in the requirements specification, is implemented in the Control block, and it is
therefore interesting to see which interfaces this block exposes and needs. This may be seen in Figure 7.
Here it may also be seen that all interfaces are standard interfaces; there is no flow data. This is an architectural decision to keep the flow data and the control data separate.
[Figure 7 shows the Control Internal Block Diagram. The Control block contains the sub-blocks Command Handler, Communication Handler and Test Battery And Perform Heartbeat, and connects to the surrounding blocks Signal Strength Control (:ICalibration), Button (:IButtonCtrl), Audio (:IAudio), Firmware update (:IFirmware), Communication (:ICommunication), Battery (:IBattery) and LED (:ILed). The interfaces are: IButtonCtrl {ButtonPushed()}, ICmdCtrl {MsgRecv(msg)}, IBattery {GetBatteryStat()}, ILed {SetLED(state)}, IFirmware {PerformFirmwareUpdate(): result}, IAudio {enableAudio(), disableAudio()}, ICmdHdl {CancelEmergency(), BeginFirmwareUpdate(), PerformCalibration(), ActivateEmergency()}, ICommHdl {SendHeartbeat(), SendBatWarning(warn), SendFirmwareUpdateResult(result)} and ICalibration {calibrate()}.]
Figure 7 - Control Internal Block Diagram
In Figure 7 the following sub-blocks have been defined with their own responsibilities:

- Command Handler: Handles all commands, whether from the communication channel, the button or a timeout of the RTC (heartbeat).
- Communication Handler: Handles all communication to and from the Communication block. This block is responsible for parsing the command messages and passing on the correct command, as well as packaging any responses for transmission.
- Test battery status and perform heartbeat: This is the only timer that must be active at all times. It is responsible for waking the system at predefined intervals to send a heartbeat and/or test the battery.
8.1.1.2 Audio
The Audio block is very interesting as it is the block with the most application level real-time processing and filtering.
To specify the inner workings of the Audio block it is necessary to consider the exact division of responsibility between the Audio block and the speaker and microphone. The BDD indicates that the Microphone and Speaker may contain more than just HW, but it does not say that they have to. Only the Audio block uses the speaker and microphone, so there are no external demands on the separation of responsibility.
As availability (and thereby stability) is very important, it is preferable to use tried and proven technology. This goes well with the fact that the quality of the audio is not imperative (no need to use state of the art technology to improve sound quality) and with the cost constraint. The battery constraint might, however, warrant considering a non-common design.
In order to make a decision here, a small investigative project could be done to determine alternative audio stages and their power requirements, price and maturity. An alternative is to make an educated guess and then note that there might be a potential for improving the power consumption, should it later be needed.
Due to the high requirement for availability and the fact that a lot of research has already been done into the power consumption of traditional audio stages, it is believed to be beneficial to postpone the investigation into alternative audio stages.
The traditional implementation of an audio stage is shown in Figure 8.
[Figure 8 shows the traditional audio stage. Output path: Audio source -> Digital filtration (optional) -> DAC -> Restoration filter (optional) -> Amplifier -> Speakers. Input path: Microphone -> Bandpass filter (optional) -> Pre-amplifier (optional) -> ADC -> Digital filtration -> Audio sink.]
Figure 8 - Traditional Audio stage
As analog data is subject to noise and loss it is preferable to have the speakers and/or microphone as close to the DAC / ADC as possible. This is the principle used in e.g. active speakers, where the DAC has been moved all the way into the speaker. However, for this project the speaker and microphone are already in the immediate vicinity of the Audio block, so there is nothing gained by moving the ADC / DAC out of the Audio block. This allows a very nice separation into HW and SW, having the speaker and microphone encapsulate everything before the DAC / ADC, and the Audio block contain everything after the DAC / ADC. The DAC / ADC themselves are placed in the Audio block, yet depending on the mapping this might be changed. If the ADC / DAC is an external component it might make more sense to logically see them as HW, whereas if the ADC / DAC is an integral part of the processing component it might make more sense to model them as part of the Audio block.
[Figure 9 shows the Audio Internal Block Diagram. The Audio control block exposes :IAudio and controls the internal blocks. Receive path: encoded audio from the head office arrives from the Communication block as GSM 06.10 at 13.2 kbit/s, is decoded by Audio decoding to 12-bit, 8 kHz PCM, passed through the Splitter to the DAC and on to the Speakers (:ISpeaker) as an analog audio signal to the speaker / amplifier. Transmit path: the analog signal from the microphone / pre-amplifier (:IMicrophone) is sampled by the ADC to 12-bit, 8 kHz PCM, passed through the Echo cancellation (which also receives the Splitter output), encoded by Audio encoding to GSM 06.10 at 13.2 kbit/s and sent to the Communication block. The interfaces ISpeaker and IMicrophone each offer powerOn() and powerOff().]
Figure 9 - Audio Internal Block Diagram
In Figure 9 the following sub-blocks have been defined with their own responsibilities:

- Audio Control: Exposes the interface to turn the Audio component on and off. Turning the Audio component off entails disabling the peripherals (shutting down the amplifier and microphone), turning off the ADC and DAC and disabling the encoding and decoding logic; anything received from the Communication block is simply discarded. Turning the Audio component on does just the opposite.
- DAC: Converts a discrete value to an analog voltage level. The DAC may be a simple zero-order hold, in which case the speaker contains the restoration filter (a simple passive low-pass filter), or it might have a more complex output. The analog voltage level is filtered and amplified as needed and used to drive the speaker.
- ADC: Converts the analog input from the Microphone to a discrete value and copies the value to the Echo cancellation on each Audio tick.
- Splitter: Makes a single value available to the DAC and the Echo cancellation at the same time, and notifies the interested parties when the value changes.
- Echo cancellation: Filters out the echo from the speaker that is picked up by the microphone. This uses the transfer function from the Splitter to the secondary side of the ADC to determine the required echo cancellation.
- Audio decoding: Receives frames from the Communication block and buffers a predefined number of frames. Decodes the frames into the output buffer for the Splitter and copies one value to the Splitter on each Audio tick.
- Audio encoding: Receives discrete values from the Echo cancellation and encodes them for transmission over the ISM. When a full frame has been encoded, the frame is transmitted using the Communication block.
There are several decisions that go into this diagram. Firstly the formats of the flow data have been specified; however there is still the matter of how data flows through and between the different internal blocks (the formats only specify the data and the frequency).
There are several different approaches, where the first decision is whether data loss is allowed. When implementing a system like this there are always at least two clocks: the audio clock (8 kHz) and the main clock (significantly faster). The main clock indicates the speed of calculation and must be able to perform the necessary calculations on a sample before the next sample is ready. This is illustrated in the sequence diagram in Figure 10.
[Figure 10 shows the audio input processing sequence. On each audio clock tick the ADC samples the Microphone; the analog value is delivered as a 12-bit discrete value to the Echo cancellation, which processes the value and passes it to the Audio encoding block, which encodes the value into the frame buffer. When the frame buffer is full, the encoded frame is sent to the Communication layer; otherwise the loop repeats on the next sample until the frame is full.]
Figure 10 - Audio input processing sequence diagram
In order to evaluate the system’s ability to live up to these timing requirements, there are both external and internal
considerations.
As the system is relatively simple and is not required to handle other streams at the same time as the audio stream, it is possible to determine the required main clock quite accurately. The other parts of the system which might consume some processing power while Audio is processing are any commands received on the ISM, heartbeats and battery tests, as well as button pushes. As Audio is only active during an emergency, all these processes (except ISM) can be shut down during an emergency, and the only command that should be received during an emergency is the "Cancel Emergency" command, in which case it is irrelevant whether the Audio processing fails to meet its deadline. It is therefore possible to "simply" count the required clocks from when the ADC is sampled until the data is transmitted on the ISM, and add the required clocks from the ISM to the DAC. To this the external factors must be added:
- The ISM may lose packages (both ways)
- There may be significant delays in the GSM/ISM network
For lost packages there are two solutions: accepting that they may be lost and continuing (fire and forget), or using an ACK and retransmitting lost packages. The delays are "solved" using buffering. The simplest buffer is one frame. This means the buffer only holds what is required to handle one data package, and if the next package is delayed a hole in the sound is experienced. An alternative is to buffer a certain number of ms of data before beginning the playback, effectively introducing a delay. When there is a delay on the transmission line it is usually followed by a burst of data, as all the frames that have been waiting are pushed through. With a single frame buffer only the last package is kept. A larger buffer is able to smooth out the burst so that potentially no data is lost. If the delay is greater than the buffer, then data is lost no matter what.
The delay should also be considered, as it must still be possible to maintain a normal conversation. Naturally much inspiration can be taken from the cell phone industry, and a buffer of 500 ms - 1 s is a good place to start. If the implementation phase shows otherwise, it may be changed to accommodate.
An obvious place for this buffer is in the Audio decoding block. There are two options: keep the received data buffered and decode it so it fits with the audio clock, or decode everything into a buffer and simply pop from this buffer using the audio clock. The second solution gives the most deterministic analysis, as the buffer size directly indicates the time delay. If the buffer contains encoded data it is not possible to say exactly how much PCM data there is (without decoding it); this is naturally only true if an encoding scheme is used that encodes certain sounds/frequencies differently than others (no linear relationship between PCM data size and encoded data size). However, the first solution requires the least amount of buffer space, as the encoded data always takes up less space than the PCM data. By using the minimum required buffer space to keep the desired buffer of encoded data, the smallest possible platform can be chosen, and if a platform with sufficient memory is chosen then the raw buffer solution can always be implemented.
The buffer, whether raw or encoded, must always be circular, so that if a long delay occurs, the oldest data is overwritten when the data is burst through. Naturally, during such a long delay the buffer would have been emptied, and though there are algorithms for smoothing out small gaps by using e.g. interpolation and statistical guessing, these algorithms are believed to be unnecessary for this application. If the buffer is empty, a value of 0 is sent to the DAC.
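The circular buffer behaviour described above can be sketched in a few lines (a hypothetical Python sketch for illustration; the real implementation would live on the embedded platform):

```python
from collections import deque

class JitterBuffer:
    """Circular buffer of decoded PCM samples, sketching the strategy above:
    a burst after a long delay overwrites the oldest samples, and an empty
    buffer yields silence (0) rather than stalling the DAC."""

    def __init__(self, capacity_samples):
        # deque with maxlen: appending to a full deque silently drops the
        # oldest element, which gives the circular-overwrite behaviour.
        self._buf = deque(maxlen=capacity_samples)

    def push(self, sample):
        """Called when a decoded sample arrives from Audio decoding."""
        self._buf.append(sample)

    def pop(self):
        """Called on every audio clock tick to feed the DAC."""
        return self._buf.popleft() if self._buf else 0
```

For a 1 s buffer at the 8 kHz audio clock, the capacity would be 8000 samples.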
In Figure 10 a section has been left out: the secondary input to the Echo cancellation. Echo cancellation is, simply put, a matter of subtracting the "noise" generated by the speakers from the microphone input. To do this it is required to know the exact delay and transfer function of the system from the data being sent to the DAC until it is picked up by the ADC. The exact calculation of the transfer function and delay will not be done here, and is left for the implementation phase. The technology is well known and implemented in any semi-modern cell phone, and the algorithms are readily available. The Echo cancellation might also need a small buffer, as it may be necessary to keep a certain number of past values, as well as the coefficients for the echo cancellation algorithm. The simplest form of echo cancellation is if the distortion is 0 and the delay is 0. In that case it is simply a matter of subtracting what is played on the speaker from what is received by the microphone. The data flow for the speaker and microphone as it relates to echo cancellation is shown in Figure 11.
[Figure 11 shows the echo cancellation data flow. Speaker path: the Audio source delivers GSM 06.10 data to Audio decoding, which produces PCM_speaker; the Splitter forwards PCM_speaker both to the DAC (analog signal to the Speaker) and to the Echo cancellation. Microphone path: the Microphone picks up the sound waves (including the speaker output), the ADC delivers PCM_sound+speaker to the Echo cancellation, which subtracts PCM_speaker and passes PCM_sound to Audio encoding, which delivers GSM 06.10 data to the Audio sink.]
Figure 11 - Echo cancellation
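The degenerate case described above (delay 0, distortion 0) can be sketched in a few lines (illustrative only; plain lists of sample values stand in for the real PCM streams):

```python
def cancel_echo(mic_samples, speaker_samples):
    """Simplest possible echo cancellation: subtract what was played on the
    speaker from what the microphone picked up, sample by sample. A real
    implementation would first estimate the delay and transfer function
    from the Splitter to the secondary side of the ADC."""
    return [mic - spk for mic, spk in zip(mic_samples, speaker_samples)]
```

For example, if the microphone picks up PCM_sound+speaker = [10, 20] while the speaker played [3, 5], the recovered PCM_sound is [7, 15].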
As the main clock must always be able to handle one sample in the time before the next sample arrives, no buffers are required to hold the sampled data from the ADC until the frame buffer holding the encoded data. It is however imperative that there is complete control of the timing in the Splitter and Echo cancellation.
The simplest approach is to run both the audio from the microphone (ADC) and the audio to the speaker (DAC) on the same audio clock, and design the Splitter and feedback filtration so that the feedback filtration always gets the current value or the last value (deterministically). It is important to consider when the feedback filtration requests / uses the value from the Splitter. The simplest design for the Splitter is a one-value buffer that updates its value on the audio clock and then indicates that a new value is available. It simply mirrors the audio clock with the delay it takes to copy the value into the hold buffer. The feedback filtration is then sensitive to two values: the input from the ADC and the input from the Splitter being ready. As the ADC performs its sample and hold on the audio clock as well, the delay before the feedback filtration is allowed to run is MAX(DELAY(ADC), DELAY(Splitter)).
A little detail about the decoding of received data: it is not required that an entire frame is decoded between two samples being written to the DAC, only that the entire frame can be decoded at least as fast as the DAC needs the raw PCM data. In order to allow for burst reception without re-buffering, the decoding must however be faster than the consumption of the decoded values, and a good rule of thumb is that the decoding of N bytes of data must be completed within half the time used to consume the N bytes. In other words, as 8000 12-bit values (96 000 bits, or 12 000 bytes) are consumed in 1 second, the decoding of the corresponding ~1200 bytes of encoded data (GSM 06.10 has a compression rate of roughly 10:1) should be completed in no more than 500 ms.
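The rule of thumb above can be restated as a couple of lines of arithmetic (a sketch; the 10:1 compression figure is the one assumed in the text):

```python
# One second of audio: 8000 samples of 12 bits each.
pcm_bits_per_second = 8000 * 12                    # 96 000 bits
pcm_bytes_per_second = pcm_bits_per_second // 8    # 12 000 bytes
# With a ~10:1 compression rate (GSM 06.10, per the text):
encoded_bytes_per_second = pcm_bytes_per_second // 10   # ~1200 bytes
# Rule of thumb: decode N bytes within half the time it takes to consume them,
# so one second's worth of encoded data must be decoded in at most 0.5 s.
decode_deadline_seconds = 0.5
```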
8.1.1.3 Communication
The Communication block also has an aspect of data flow, yet as it is merely responsible for forwarding the data and commands to other blocks, its content is less complex.
As with the other blocks it is necessary to decide on the exact division of responsibility. Here the division is between the ISM block and the Communication block. It has been mentioned earlier that the Communication block is responsible for the application layer handling of data; what this means is defined in the OSI model.
The Communication block must peek at all received data and pass it on to the interested parties. There are three types of data:

1. Control data (commands)
2. Firmware update data
3. Audio data
Audio data is streamed, and a data package has a very limited life-span. If the package is not received in a timely manner it might as well be lost. This points towards a fire-and-forget protocol, e.g. UDP. There are special audio protocols that may also be used.
The control data, on the other hand, are small packages and must not be lost. This indicates a safe communication protocol, e.g. TCP.
Finally there is the firmware update data. This is also streaming data, yet it is not allowed to lose any packages, so here it is generally preferable to use a safe protocol as well.
The data being sent must have any required header and footer added and then be sent using the desired protocol. If the desired protocol is a safe protocol, it is the responsibility of the ISM block to ensure end-to-end transmission.
This may be shown in the Internal Block Diagram in Figure 12.
[Figure 12 shows the Communication Internal Block Diagram. The Communication block contains a Data Parser and control block and a Data Handler block. The Firmware update and Control blocks connect via :ICmdCtrl, the Audio block exchanges GSM 06.10 data at 13.2 kbit/s with the Data Handler, and the ISM block is fed a byte stream of less than 115 kbit/s as binary data. The exposed ICommunication interface offers SendMsg(msg), EnableTranceiver() and DisableTranceiver().]
Figure 12 - Communication Internal Block Diagram
In Figure 12 it may be seen that any buffering of the data to be written (if the ISM block is unable to transmit it
immediately) should be done within the ISM block.
Distinguishing between control data, firmware update data and audio data can be done in several ways:

1. Single data stream
2. Multiple ports (data streams)
3. Multiple protocols
The single data stream operates at a lower level, and down at the lower layers of the ISM block there is naturally only a single data stream. However, this single data stream can be used directly by adding a header to each package containing the type of content (control, firmware, audio), the length and any other information needed, and then having the Data parser and control block distribute the content accordingly.
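The header-based dispatch just described can be sketched as follows. The 3-byte header layout (one type byte, two length bytes) is an illustrative assumption, not a defined project format:

```python
import struct

# Hypothetical content types for the single-data-stream scheme.
TYPE_CONTROL, TYPE_FIRMWARE, TYPE_AUDIO = 0, 1, 2

def split_packages(stream: bytes):
    """Split a single byte stream into (type, payload) tuples, as the
    'Data parser and control' block would, so each payload can be routed
    to Control, Firmware update or Audio."""
    packages = []
    offset = 0
    while offset + 3 <= len(stream):
        # Big-endian header: 1 byte content type, 2 bytes payload length.
        ptype, length = struct.unpack_from(">BH", stream, offset)
        payload = stream[offset + 3 : offset + 3 + length]
        packages.append((ptype, payload))
        offset += 3 + length
    return packages
```

A receiver would then hand `TYPE_AUDIO` payloads to the Audio block and `TYPE_CONTROL` payloads to the Control block.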
Working at a slightly higher level it is possible to have the ISM block do this division. This can e.g. be done using the TCP / UDP protocols, where a port number may be used to distinguish between control, firmware and audio data. Naturally this has a higher overhead, as there is extra information in the headers that is not relevant for this application; on the other hand the protocols are tried and proven, and many implementations (in ASIC, SW and VHDL) exist.
Finally there is the protocol level. There are special protocols designed for transmitting audio data (lossy streaming), binary data (lossless streaming) and control data (RPC). Using these high level protocols naturally has an extra overhead, yet they also come with advantages: the code generally exists and is ready to use, and the protocols are tweaked to the specific needs of the communication and contain parameters like e.g. QoS for audio.
Which solution is preferable depends on two aspects: the overhead as it relates to power consumption and processing requirements, and the capabilities of the chosen platform.
The dynamic behaviour of the Communication block is illustrated by the activity diagram in Figure 13.
Figure 13 - Communication Activity Diagram
8.1.1.4 ISM
The ISM block may be split up as shown in the Internal Block Diagram in Figure 14.
[Figure 14 shows the ISM Internal Block Diagram together with the Antenna Internal Block Diagram. The Antenna consists of the «HW» Antenna and a Power Amplifier. The ISM block contains a Protocol stack and a Communication block plus the «HW» parts Oscillator, LNA and Low-pass Filter, and an ADC.]
Figure 14 - ISM Internal Block Diagram
For the Communication block there was a lot of discussion about the depth of the protocol stack, so instead of repeating it here the ISM block will simply be seen as able to communicate on the ISM band. Whether this is a Zigbee protocol or a lower or higher level protocol is left for a later time.
It is also important to note that the exact separation of Antenna and ISM may be different, as an ISM ASIC with built-in oscillator, LNA and filter may be chosen, and there are also other ways than an ADC to implement the detection, so the ISM block should be seen as a black box simply supplying the ISM lower level communication layer.
8.1.2 Evaluation
Based on the above general architecture it is possible to determine the critical path through the system and the required execution and communication requirements for the HW. Unfortunately it is not as simple as it sounds, as there are several mapping-specific considerations that go into the calculation.
Firstly it is necessary not only to identify the critical path, but also to indicate which functionality can be executed in parallel, as this may greatly lower the required execution requirements. An example could be the Audio encoding. If the execution path is:

1. Sample -> Filter -> Encode -> Send

then the total time for the encoding is included, yet if multiple samples can be or must be buffered and encoded in parallel with other samples being buffered, then the execution path is:

1. Sample -> Filter -> Buffer
2. Encode -> Send

where the Encode -> Send has to be performed within the time it takes to fill the buffer, and not the time between two samples.
Table 1 gives an overview of the different parallel execution paths, as well as their maximum frequency of execution. Only the blocks with a relatively high flow of information are included, as the other paths will never be part of the critical execution and communication path.

Execution path                                                      | Frequency
1. Microphone -> ADC -> Echo cancellation -> Audio encoding         | 8 kHz
2. Audio encoding -> Data Handler -> ISM -> Antenna                 | 8 kHz / frame size
3. Antenna -> ISM -> Data Parser -> Audio decoding                  | 8 kHz / frame size
4. Audio decoding -> Splitter -> DAC -> Speaker                     | 8 kHz
5. Audio decoding -> Splitter -> Echo cancellation -> Audio encoding| 8 kHz

Table 1 - Execution paths of interest
It may be seen that the Firmware update (the other data intensive function) is not included. This is due to its relatively low communication requirements. Based on the type of application, a total footprint of <100 kbyte (SW implementation) is very realistic, and a throughput of 100 kbyte in 30 minutes gives a required bit rate of 100000*8 / (30 * 60) = 444 bit/s. Naturally this will increase when acknowledgements are sent. Depending on how the data is stored there could be a delay for writing to Flash, yet as very little processing of the data is required, and as one-way GSM audio requires an absolute minimum of 6.5 kbit/s [GSM], it is believed that the critical path is not found in the Firmware update.
Knowing these paths it is possible to look at the individual blocks in the paths and evaluate their requirements with respect to execution and communication, and from there determine the critical path. To do this it is necessary to know the execution and communication time for the different parts of the paths. This depends heavily on the mapping to the selected HW, as a HW implementation (FPGA) requires an entirely different number of clock cycles to implement a piece of functionality than a processor does. Where an FPGA is well suited for filtering and data flow, it is not very well suited for control logic. This is however very mapping dependent and will be postponed until the mapping phase.
8.1.3 Transaction Level Modelling
Once the general architecture is established it may be interesting to determine whether it is feasible from a functional point of view. This may be determined by developing an executable Transaction Level Model (TLM) of the system. It is however not necessarily a good idea: if there is a high level of confidence in the design, it may be a waste of time. There is however a second advantage to developing the TLM, namely that it may be extended to a Timed TLM, BCAM, CCAM or CAM. Doing so allows a very fine-grained evaluation of the execution and communication requirements.
There is a high degree of confidence in the design, and the possible critical paths have been identified. It would therefore be quite possible to determine the execution and communication requirements for the individual blocks and simply calculate the requirements, as may also be seen in the mapping section. However, as part of the project is to gain knowledge of and evaluate the SysML -> SystemC process, it is acceptable to set extra time aside for SystemC analysis of the system, SystemC being the language of choice for the TLM.
It is however not necessary to implement all parts of the system, only those needed to determine the execution and communication requirements of the critical path. This requires implementing the Microphone, Speaker, Audio, Communication, ISM and Antenna blocks. As the purpose of the implementation is to evaluate execution and communication requirements, it is not necessary to implement the physical HW units, like the Antenna, Microphone and Speaker, as they may be seen as having zero delay and no execution requirement.
The remaining blocks are implemented in the code project [SystemCTLM].
When executed, the ISM simulator receives data from the antenna by reading bytes from a file. This data is then packaged into frames and transmitted to the communication block. The communication block peeks at the header and forwards the data as needed. This is equivalent to the single data stream version of the communication block mentioned above, which has the simplest data path (no advanced protocols) and is sufficient to determine a critical path; if a higher-level ASIC ISM chip is chosen, the Communication block will only get simpler (worst case).
The data progresses through the system according to the general architecture, ending at the DAC simulator, which writes the speaker data to a file.
The ADC simulator likewise reads its sample data from a file, together with the acoustic feedback noise from the DAC simulator, and sends it through the system to the ISM simulator, which writes the data to a file.
To generate the input files a special tool has been developed [SystemCTLMFileGenerator], which takes a running time as input and generates two files with a known pattern. The pattern may be seen in the SystemCTLMFileGenerator.
The test procedure is as follows:
1. Generate two files of N seconds duration (the files on the disk are 100 ms in duration)
2. Run the SystemCTLM
3. Verify that the newly generated output files match the input pattern, duplicated or reduced accordingly
8.1.3.1 Experiences with SystemCTLM
It took 6 hours to create the SystemC model of the system. It resulted in no updates to the architecture, but it shows that the architecture can be executed. The implementation was followed by another 6 hours of debugging and decoding gdb error messages until it was fully functional.
If no further work is done on the model, it contains no more information than what can be read from the architecture diagrams. If however the model is refined with architecture-specific time delays it may be used to verify the timing, and if refined even further it may allow the simulation of a HW implementation, which the SW developers may use until the actual HW is present.
Please refer to Appendix xx for some key features of the SystemCTLM implementation.
8.2 Mapping of the general architecture
When the general architecture has been defined and potentially verified (SystemCTLM), it is necessary to look at how this architecture may be mapped to HW/SW.
There are multiple ways of doing this mapping. In an ideal situation it would be possible to compare the different HW directly, but implementation and instruction set differences make this impossible, as previously mentioned. It is therefore necessary to look at a finite collection of solutions, created from experience with similar systems.
With a system containing both control and data flow (states and filters), the following mappings are feasible:
1. Microcontroller and ISM transceiver ASIC.
a. ISM transceiver ASICs specially designed for low power consumption may be readily purchased,
some even with a built-in microcontroller in a single IC.
2. DSP, Microcontroller and ISM transceiver ASIC.
a. If the filtration and encoding/decoding parts cannot be handled by a microcontroller (cost
effectively) then adding a DSP may solve the problem.
3. FPGA, ADC/DAC and ISM oscillator and LNA receiver filter.
a. Using an FPGA is a high-cost solution, and doing so would require that almost all logic can be
mapped to HW so that the external components become very cheap, yet the high performance makes
it possible to implement the ISM module without an ASIC.
Further combinations than the ones mentioned here naturally exist, as will be shown later during the Pareto point analysis.
All the solutions require an external amplifier, speaker, microphone, antenna and housing, as well as some external components for the button, LED and battery.
8.2.1 Microcontroller and ISM transceiver ASIC
Multiple solutions for this exist, yet a good example is the CC430 low power microcontroller with integrated ISM transceiver [CC430]. Basically it is a TI MSP430 and a CC1101 in a single chip. As this is a very plausible choice, it will be used as the example of the Microcontroller and ISM transceiver ASIC platform.
This platform may be seen in Figure 15, where the processing elements and communication elements have also been indicated. It is important to realise that at this point it is only a mapping proposal; it has not yet been verified that the microcontroller can actually perform the required processing in a timely manner.
[Figure: block diagram of the CC430 platform. The Button, LED, Microphone, Speaker, Battery and Antenna «HW» blocks connect to the CC430 single chip, which contains the Microcontroller <<PE>>, the ISM transceiver <<PE>>, RAM, Flash, GPIO, SPI, ADC and power blocks on a memory-mapped internal high-speed data bus <<CE>>.]

Figure 15 - Microcontroller and ISM transceiver ASIC
In order to determine feasibility, the characteristics of the platform must first be determined. These may be seen in Table 2.
Frequency | 20 MHz
ADC       | 12-bit SAR, >1 MSPS
Flash     | 32 KB
RAM       | 4 KB
DAC       | Not supported
GPIO      | 44
SPI       | 1 (USCI)
ISM       | Up to 250 kbit/s
Power     | On: 160 µA/MHz; Sleep: 2 µA; Rx LNA: >15 mA (Supply = 1.8-3 V)
Price     | 5.35 USD

Table 2 - CC430 characteristics
As may be seen, the CC430 does not have a built-in DAC, so an external one must be chosen. A good choice is the TLV5616 from TI [DAC], to stay in the same family. It also comes in a dual DAC package, and the second analogue output may be used to control the antenna power booster and LNA to lower power consumption (Signal Strength Control). Its characteristics are quite common for a DAC, so the analysis holds even if another DAC is chosen.
Resolution    | 12 bit
Settling time | 9 µs (0.102 MSPS)
Interface     | SPI
Power         | On: 3 mW; Sleep: 10 nA (Supply = 3 V)
Price         | 3.5 USD

Table 3 - DAC TLV5616 characteristics
The CC430 also requires external transmitter amplification. There are several options, e.g. using an entire transceiver front end like the SKY65346 FEM [SKY], which supports several of the high-level protocols. Alternatively a simple amplification stage (Power Amplifier (PA)) for the CC430 transmitter may be implemented. The maximum transmission power allowed for the frequency band is 500 mW, so with a little loss in the amplification stage a worst-case power consumption of 600 mW may be assumed, i.e. very similar to using the SKY65346 FEM, as may be seen in Table 4. Another advantage of the SKY65346 FEM is that a complete development platform with the CC430 may be purchased [TIPlat], yet naturally it is more expensive than a simple analogue PA.
Interface | SPI
Power     | On: 200 mA; Sleep: 5 µA (Supply = 3.3 V)
Price     | 9.3 USD

Table 4 - SKY65346 FEM characteristics
It may be noticed that the physical dimensions have been excluded. This is because all the proposed solutions have a very small form factor (single or dual IC), and the size difference is so small that it will not affect the possible battery package size.
Knowing the platform, it is possible to determine how much processing and communication is required. This can be done in several ways. As may be seen in the SystemCTLM the algorithms have been split out, so they could be implemented and run on the actual target to determine the required clock cycles. Alternatively an Instruction Set Simulator (ISS) may be employed to simulate the execution without the actual HW and determine the required clock cycles from there. A third alternative is to estimate the number of instructions based on the algorithms and communication paths and then use these numbers. The SystemCTLM may then be augmented with the given execution and communication times, resulting in a Timed Transaction Level Model (see later). The numbers may also be plotted directly against the different execution paths to determine the critical path and total required clock cycles, avoiding the SystemC simulation.
Here the third alternative will be employed: an estimation of the instructions required to implement the algorithms (blocks) in the MSP430 microcontroller instruction set, which can be seen in Table 5. As the communication between the blocks (except the DAC) is purely internal, it is a matter of writing and reading internal RAM; communicating a 16-bit integer is therefore a write and a read (if no synchronization is needed), and adding two clocks between each block should be sufficient. Writing to the SPI interface is about 10 clocks/sample.
Block                     | Description                                               | Clock cycles
Audio.AudioEncoding       | Encoding of 160 integers of audio data [libgsm]           | 500 (152 bytes stack RAM)
Audio.AudioDecoding       | Decoding of 32.5 bytes of encoded audio data [libgsm]     | 500 (152 bytes stack RAM)
Audio.Echo cancellation   | Filtering of acoustic feedback*.                          | 15 (5 int taps = 14 bytes RAM)
                          | Delay in air (20 cm): 587 µs; delay in ADC: 1 µs;         |
                          | delay in comm. to DAC: <2 µs; delay in DAC: 10 µs.        |
                          | Delay until a value from the splitter is received by      |
                          | the microphone: 10+2+587+1 µs.                            |
                          | Filter size at 8 kHz (number of samples):                 |
                          | 600 µs * 8 kHz = 4.8 taps [AEC]                           |
Audio.Splitter            | Copy into shared RAM block                                | 1
Audio.ADC                 | Sample ADC                                                | 1
Audio.DAC                 | Write integer to SPI                                      | 10
Communication.DataHandler | Add header and host-to-network order copy of 33 bytes:    | 10 + 2 + (4 + 1)*33 = 177
                          | add header: 10; loop: 2; int to 4-byte network order: 4;  |
                          | copy 4 bytes: 1                                           |
Communication.DataParser  | Parse header, copy data, network-to-host order of 37      | 10 + 37 + 3 + 2 + 4*37 = 200
                          | bytes: parse header: 10; copy data: 37; compare: 3;       |
                          | loop: 2; 4 bytes to int host order: 4                     |
ISM                       | Receive/Transmit across ISM                               | ? (dedicated ASIC)

Table 5 - CC430 execution times
Knowing the different paths from Table 1 it is possible to determine the required execution power of the CC430 (including communication, as it is handled by the same microcontroller).
Execution path                                  | Clocks                      | Frequency            | Clocks/s
1. Microphone -> ADC -> Echo cancellation       | 1 (ADC) + 2 (Comm) +        | 8 kHz                | 160k
   -> Audio encoding                            | 15 (Echo) + 2 (Comm) = 20   |                      |
2. Audio encoding -> Data Handler -> ISM        | 500 (Enc) + 2*33 (Comm) +   | 8 kHz / frame size = | 40.9k
   -> Antenna                                   | 177 (hdl) + 2*37 (Comm)     | 8000 / 160 = 50      |
                                                | = 817                       |                      |
3. Antenna -> ISM -> Data Parser                | 2*37 (Comm) + 200 (parse) + | 8 kHz / frame size = | 42k
   -> Audio decoding                            | 2*33 (Comm) + 500 (dec)     | 8000 / 160 = 50      |
                                                | = 840                       |                      |
4. Audio decoding -> Splitter -> DAC -> Speaker | 2 (Comm) + 1 (Split) +      | 8 kHz                | 120k
                                                | 2 (Comm) + 10 (DAC) = 15    |                      |
5. Splitter -> Echo cancellation                | 2 (Comm)                    | 8 kHz                | 16k
Total                                           |                             |                      | 379k

Table 6 - Execution paths of interest
The result of 379k is most likely low, as it is based on an ideal compilation of the instructions, yet using a simple factor-10 rule of thumb it is still possible for the CC430 to run the code: it has a 20 MHz clock, and even with a factor-10 increase a 3.8 MHz clock would be sufficient. This also allows very low power consumption on the microcontroller, as it uses 160 µA per MHz, and without the factor-10 even 1 MHz is sufficient, which is very good.
It may be seen that the ISM block does not require any execution. This is because the ISM block is a PE (Processing Element) in itself, dedicated to handling the ISM. It is therefore not able to run any user code, but neither does it impose any execution on the microcontroller.
As a single clock takes 1/20000000 s = 50 ns, it is possible to augment the SystemCTLM with the correct waits and thereby achieve a Timed TLM. As it has already been shown to be possible, there is no real reason to create the SystemC Timed TLM version for the microcontroller solution other than to gain experience with Timed TLM, and for that reason the SystemC_TTLM_Micro project has been created. It performs the same operations as the TLM version, but with correct waits.
8.2.1.1 Timed Transaction Level Modelling
As the TLM version was already developed and the delays already calculated, updating the TLM to a Timed TLM was a minor challenge.
It is important to note that it is only timed to a certain level, as the delays are estimated and not based on measurements or an ISS.
As a little extra, the DACSim was augmented with a proper acoustic echo, which resulted in the Echo Cancellation no longer working, as the echo no longer arrives instantly but rather just below 5 samples delayed. To fix this, the Echo Cancellation algorithm could be implemented and its functionality actually verified. This might be valuable, yet as Acoustic Echo Cancellation (AEC) is a tried and proven technology it has not been done here, as the risk of it not working is minimal.
Page 38/63
Document ID: PRODDOC
14-03-2016
It may also be noticed that, as the decoding delay is not included, it may actually be possible to get a buffer underflow. As the delay from the ISM to the completed Audio Decoding is 840 clocks, this takes 42 µs, which is less than one audio clock, so it is not detectable. However, if the Audio Decoding delay is increased, e.g. to 5000 clocks (total delay of 267 µs), it may be seen that data is not available when the DAC requires the first value (underflow). This is handled by simply sending a 0, and it may be observed that the speaker stays silent for a few more samples. The overall system still functions, as the decoding just has to be completed within 160 samples or 20 ms.
8.2.2 Microcontroller, DSP and ISM transceiver ASIC
As the microcontroller is able to handle the execution by itself, it would seem illogical to add an extra component, yet it might be beneficial to choose a smaller microcontroller and augment it with a DSP.
A suggested platform is shown in Figure 16. An alternative to this platform is to implement the CC1101 functionality in the DSP, using an oscillator and LNA together with a very fast ADC to recreate the received signal. Unfortunately, recreating the signal requires a minimum sampling frequency of twice the carrier frequency, or 868 MHz * 2 ≈ 1.7 GHz. This is within the possible range of a DSP, but only at the very high end, and this alternative is much better suited for an FPGA implementation, which will be explored in the FPGA platform.
[Figure: block diagram of the DSP and Microcontroller platform. It comprises the TMS320VC5401 DSP <<PE>>, the PIC16F1516 microcontroller <<PE>> and the CC1101 ISM transceiver <<PE>>, connected by external SPI and UART data buses <<CE>>; an external DAC and ADC on an external high-speed SPI bus serve the Speaker and Microphone, GPIO serves the Button and LED, a power management block is fed by the Battery, and the CC1101 drives the Antenna.]

Figure 16 - DSP and Microcontroller platform
Using the same DAC and PA as before, the ADC [ADC], DSP [DSP], microcontroller [PIC16] and CC1101 [CC1101] may be summed up in the tables below:
Resolution    | 12 bit
Settling time | 8 µs (100 kSPS)
Interface     | SPI
Power         | On: 10.8 mW; Sleep: 5 µA (Supply = 3 V)
Price         | 4.27 USD

Table 7 - ADC AD7854 characteristics
Frequency | 5 MIPS
Flash     | 14 KB
RAM       | 512 B
GPIO      | Up to 25
SPI       | 1 (USCI)
UART      | 1
Power     | On: 600 nA; Sleep: 30 nA (Supply = 1.8 V)
Price     | 1.2 USD

Table 8 - Microcontroller PIC16F1516 characteristics
Frequency | 50 MHz
Flash     | 14 KB
RAM       | 16 KB
SPI       | 2 (McBSP)
DMA       | 1
Power     | On: 22-52 mA; Sleep: 20 µA (Supply = 1.8/3.3 V)
Price     | 3.5 USD

Table 9 - DSP TMS320VC5401 characteristics
Resolution    | 12 bit
Settling time | 9 µs (0.102 MSPS)
Interface     | SPI
Power         | Sleep: 200 nA; Rx LNA: >15 mA (Supply = 1.8-3.6 V)
Price         | 2.0 USD

Table 10 - ISM transceiver CC1101 characteristics
As with the microcontroller, it is now possible to determine whether the microcontroller and DSP are sufficiently powerful. Naturally, as the DSP is more powerful than the microcontroller, they are, but it is interesting to see how much better it performs the calculations.
Block                     | Description                                               | Clock cycles
Audio.AudioEncoding       | Encoding of 160 integers of audio data [libgsm];          | 200 (152 bytes stack RAM)
                          | shift and multiply-accumulate in a single instruction     |
Audio.AudioDecoding       | Decoding of 32.5 bytes of encoded audio data [libgsm];    | 200 (152 bytes stack RAM)
                          | shift and multiply-accumulate in a single instruction     |
Audio.Echo cancellation   | Filtering of acoustic feedback*.                          | 6 (5 int taps = 10 bytes RAM)
                          | Delay in air (20 cm): 587 µs; delay in comm. to ADC:      |
                          | <2 µs; delay in ADC: 8 µs; delay in comm. to DAC:         |
                          | <2 µs; delay in DAC: 10 µs.                               |
                          | Delay until a value from the splitter is received by      |
                          | the microphone: 10+2+8+2+587 µs.                          |
                          | Filter size at 8 kHz (number of samples):                 |
                          | 609 µs * 8 kHz = 4.87 taps [AEC]                          |
Audio.Splitter            | Copy into shared RAM block                                | 1
Audio.ADC                 | Read integer from SPI                                     | 10
Audio.DAC                 | Write integer to SPI                                      | 10
Communication.DataHandler | Add header and host-to-network order copy of 33 bytes:    | 10 + 2 + 33 + 20 = 65
                          | add header: 10; loop: 2; int to 4-byte network order: 1;  |
                          | copy RAM block: 20                                        |
Communication.DataParser  | Parse header, copy data, network-to-host order of 37      | 10 + 20 + 3 + 2 + 37 = 72
                          | bytes: parse header: 10; copy data: 20; compare: 3;       |
                          | loop: 2; 4 bytes to int host order: 1                     |
ISM                       | Receive/Transmit across ISM                               | ? (dedicated ASIC)

Table 11 - DSP execution times

Execution path                                  | Clocks                      | Frequency            | Clocks/s
1. Microphone -> ADC -> Echo cancellation       | 1 (ADC) + 10 (Comm) +       | 8 kHz                | 152k
   -> Audio encoding                            | 6 (Echo) + 2 (Comm) = 19    |                      |
2. Audio encoding -> Data Handler -> ISM        | 200 (Enc) + 2*33 (Comm) +   | 8 kHz / frame size = | 20.3k
   -> Antenna                                   | 65 (hdl) + 2*37 (Comm)      | 8000 / 160 = 50      |
                                                | = 405                       |                      |
3. Antenna -> ISM -> Data Parser                | 2*37 (Comm) + 72 (parse) +  | 8 kHz / frame size = | 20.6k
   -> Audio decoding                            | 2*33 (Comm) + 200 (dec)     | 8000 / 160 = 50      |
                                                | = 412                       |                      |
4. Audio decoding -> Splitter -> DAC -> Speaker | 2 (Comm) + 1 (Split) +      | 8 kHz                | 120k
                                                | 2 (Comm) + 10 (DAC) = 15    |                      |
5. Splitter -> Echo cancellation                | 2 (Comm)                    | 8 kHz                | 16k
Total                                           |                             |                      | 329k

Table 12 - Execution paths of interest
There are no surprises here; the DSP is more efficient at filtering and encoding/decoding than the microcontroller, even with the increased communication. Luckily the DSP <-> microcontroller communication is at an absolute minimum. The microcontroller is very limited in its capabilities, yet monitoring a button, a few LEDs, sending a heartbeat and turning the DSP on and off is certainly within them.
An interesting possibility is mapping all the execution to the DSP, completely removing the need for a microcontroller. To determine this, an estimate of the required clocks for the remaining functionality is needed; this is done in Table 13. As this functionality is pure control there is no real difference between implementing it on a DSP or a microcontroller, so the same clock cycle figures may be used for the microcontroller case. This is not true for the Firmware Update on the microcontroller and DSP platform, as the update must be separated into a microcontroller update and a DSP update, or only a very limited update may be performed.
Block                   | Description                                             | Clock cycles
Control                 | Simple state machine and RTC (10); perform heartbeat    | 310 every 2 hours
                        | (100); perform emergency (100); perform battery test    |
                        | (100)                                                   |
Button                  | Wake up in interrupt (button) (1)                       | 1
LED                     | Set GPIO (1)                                            | 1
Battery                 | Request battery status (100); handle battery status     | 200 every 2 hours
                        | (100)                                                   |
Firmware update DSP     | Microcontroller: parse command (100); DSP in update     | DSP: 126k in 30 min;
                        | mode (10); send response (100).                         | Mic: 210 in 30 min
                        | DSP: enter update mode (10); receive and parse up to    |
                        | 100k in 30 minutes = 55 bytes/s: 125k; write and send   |
                        | response: 1k                                            |
Firmware update         | Microcontroller: parse command (100); microcontroller   | 502k in 30 min
Microcontroller only    | in update mode (10); receive and parse up to 100k in    |
                        | 30 minutes = 55 bytes/s: 500k; write and send           |
                        | response: 1k                                            |
Signal strength control | Request signal strength (50); compare signal strength   | 200 every 2 hours
                        | (100); write new signal strength (50)                   |

Table 13 - Other blocks execution times
There is however one important aspect that speaks against the single-DSP solution: the power consumption of the DSP. It is far greater than that of the microcontroller, and as the DSP only has to be turned on during an emergency call, this is a quite important aspect.
8.2.3 FPGA, ADC/DAC and ISM oscillator and LNA receiver filter
An FPGA is more expensive and more power consuming than a DSP and a microcontroller, yet it is ideally suited for parallel operations and fast data flow. So if the microcontroller alone can handle the execution, why even consider an FPGA? Because it might be able to remove a component: the CC1101. As mentioned in the DSP section, it might be possible to use a simple LNA and oscillator along with a fast ADC to implement the CC1101 functionality. This would require an extremely fast DSP, but how does it look for an FPGA?
To get a ballpark idea of the required FPGA we can look at the sampling. Operating at 868 MHz and remembering the Nyquist sampling theorem [Nyquist], it is required to sample the LNA at 2*868 MHz in order to restore the original signal. Naturally a low-pass filter eliminating higher frequencies is required, yet this can be a simple analogue filter. Sampling and analysing at this rate requires a frequency of at least 1.8 GHz. The parallel nature of the FPGA allows the processing of a sample (and the continued processing of the previous samples) to be done in parallel; otherwise a much higher rate would be needed, at least a factor 10 to process each sample, or a DSP of at least 18 GHz if it were to be implemented in a DSP. Restoring the signal from the ADC allows for both Amplitude Shift Keying (ASK) and Phase Shift Keying (PSK) encoding.
There are several ADCs that can handle a sample rate of 1.8 GHz. An example is the National GHz ADC family ADC12D1x00, which is specifically designed to handle RF applications in conjunction with an FPGA.
There are also several suitable FPGAs, yet a good, and relatively small, option is the Altera Cyclone V GX, an up to 5G FPGA.
As also mentioned in the risk analysis, there are very few FPGA resources in the team, and this section may therefore be less accurate and thorough than the previous sections.
[Figure: block diagram of the FPGA platform. The Cyclone V GX FPGA <<PE>> connects over an external high-speed SPI data bus <<CE>> to the DAC (TLV5616) and ADC (AD7854) serving the Speaker and Microphone, and over an external extremely high-speed data bus <<CE>> to the ADC (ADC12D1000) sampling the antenna path. The antenna path consists of a physical antenna, low-pass filter (868 MHz), oscillator (868 MHz), LNA and Power Amplifier (PA). The Button, LED, Battery and power management connect directly to the FPGA.]

Figure 17 - FPGA platform
Frequency           | 5 Gbps
Logic Elements (LE) | 75000
Memory blocks       | 4.6 MB
18x18 multipliers   | 264
Power               | 88 mW per channel (Supply = 3.3 V)
Price               | 300 USD

Table 14 - FPGA Cyclone V GX characteristics
Resolution  | 12 bit
Sample rate | 2 GSPS
Power       | On: 3.38 W (Supply = 1.9 V)
Price       | 995 USD

Table 15 - ADC12D1000 characteristics
The first thing that springs to mind is the price, and the second is the power consumption. These alone would be sufficient to exclude this option, yet from an academic point of view it is interesting to continue the analysis.
Block                     | Description                                              | Clock cycles (LE/Mult/Memory bits)
Audio.AudioEncoding       | Encoding of 160 integers of audio data                   | 10 (282/16/1216)
Audio.AudioDecoding       | Decoding of 32.5 bytes of encoded audio data             | 10 (282/16/1216)
Audio.Echo cancellation   | Filtering of acoustic feedback*.                         | 1 (484/5/64)
                          | Delay in air (20 cm): 587 µs; delay in comm. to ADC:     |
                          | <2 µs; delay in ADC: 8 µs; delay in comm. to DAC:        |
                          | <2 µs; delay in DAC: 10 µs.                              |
                          | Delay until a value from the splitter is received by     |
                          | the microphone: 10+2+8+2+587 µs.                         |
                          | Filter size at 8 kHz (number of samples):                |
                          | 609 µs * 8 kHz = 4.87 taps [AEC]                         |
Audio.Splitter            | 16-bit hold                                              | 1 (1/0/16)
Audio.ADC                 | Read integer from SPI                                    | 1 (1/0/16)
Audio.DAC                 | Write integer to SPI                                     | 1 (1/0/16)
Communication.DataHandler | Add header and host-to-network order copy of 33 bytes    | 1 (33/0/16)
Communication.DataParser  | Parse header, copy data, network-to-host order of 33     | 1 (33/0/16)
                          | bytes: parse header: 10; copy data: 20; compare: 3;      |
                          | loop: 2; 4 bytes to int host order: 1                    |
ISM                       | Receive/Transmit across ISM                              | ADC: 3.8G (1k);
                          |                                                          | oscillator ctrl.: 200M (1k);
                          |                                                          | protocol: 2M (28k)
CPU                       | Required by Control                                      | 50M (650/0/1M)
Total                     |                                                          | 4G (30k/1/10k)

Table 16 - FPGA execution times

Execution path                                  | Clocks                      | Frequency            | Clocks/s (gates)
1. Microphone -> ADC -> Echo cancellation       | 1 (ADC) + 1 (Comm) +        | 8 kHz                | 32k (484/5/64)
   -> Audio encoding                            | 1 (Echo) + 1 (Comm) = 4     |                      |
2. Audio encoding -> Data Handler -> ISM        | 10 (Enc) + 1 (Comm) +       | 8 kHz / frame size = | 200M (29k/17/11216)
   -> Antenna                                   | 10 (hdl) + 1 (Comm) +       | 8000 / 160 = 50      |
                                                | 200M/50 (ISM osc) +         |                      |
                                                | 2M/50 (ISM prot) = 4M       |                      |
3. Antenna -> ISM -> Data Parser                | 3.8G/50 (ISM ADC) +         | 8 kHz / frame size = | 3.8G (1k - protocol is
   -> Audio decoding                            | 2M/50 (ISM prot) +          | 8000 / 160 = 50      | already included
                                                | 10 (parse) + 1 (Comm) +     |                      | above/0/16)
                                                | 50 (dec) = 76M              |                      |
4. Audio decoding -> Splitter -> DAC -> Speaker | 1 (Comm) + 1 (Split) +      | 8 kHz                | 32k (2/16/1216)
                                                | 1 (Comm) + 1 (DAC) = 4      |                      |
5. Splitter -> Echo cancellation                | 1 (Comm)                    | 8 kHz                | 8k (already included)
CPU                                             | 20M                         | 1                    | 50M (650/0/1M)
Total                                           |                             |                      | 4G (32k/38/1.1M)

Table 17 - Execution paths of interest
However, some work has been done on better ways to implement an ISM transceiver in an FPGA, as indicated in [FPGA_RF]. Further investigation into this should be done, especially towards the option of purchasing a ready-to-use IP. This is however outside the scope of the analysis at this point.
8.2.4 Comparing platforms
When comparing the different platforms there are several considerations to take into account. Inspiration may be taken from the Quality Attributes and requirements, which indicate that power consumption, risk and obviously price are of the highest importance.
A table comparing these may be drawn up, as shown below.
Platform: Microcontroller + ISM ASIC
Price (USD): 18.15
Risk/Complexity: 1
- As the CC430 is specifically designed for this form of application, several examples exist, as well as a good opportunity for purchasing expert assistance.
- There are "free" libraries for most/all of the needed algorithms, implemented in C.
- C programmers are relatively easy to find.
Power (sleep/audio): 22 µW / max. 648 mW. Sleep (DAC/SKY entirely off): 60 nW

Platform: Microcontroller, DSP + ISM ASIC
Price (USD): 23.57
Risk/Complexity: 3
- The DSP and microcontroller can compile the same algorithms as the microcontroller-only platform.
- Added complexity with respect to communication between ISM ASIC, DSP and microcontroller, especially with respect to firmware update.
- Higher power consumption (the DSP uses a minimum of 22 mA, so the actual value will be higher than indicated).
- Possibility of adding more complex filtering and encoding/decoding (more flexible).
Power (sleep/audio): 70.69 µW / max. 815 mW. Sleep (DSP/ADC/DAC/SKY entirely off): 60 nW

Platform: FPGA, ADC + Oscillator
Price (USD): 1295
Risk/Complexity: 10
- Single chip solution.
- Some IPs for ISM, GSM encoding/decoding and AEC exist.
- Skilled VHDL programmers are harder to come by than C programmers.
- Much larger complexity in programming VHDL than in C.
- Much larger communication complexity due to the high sampling rate.
- Extremely high flexibility.
Power (sleep/audio): 88 mW / min. 4 W
Here it may be seen that the Microcontroller + ISM ASIC is not only the cheapest solution and the one that uses the least power, it is also the one with the lowest risk. This makes choosing a platform very easy, or does it? Have we properly considered the possible mappings? Obviously not, as there is an infinite number of possible platforms, as mentioned earlier, but maybe, using just the components analysed in this section, there is another alternative. To test this, the Pareto point analysis is well suited, as shown in the following section.
9 Pareto analysis
The Pareto analysis technique is derived from economic analysis and is used for selecting a limited number of tasks that produce a significant overall effect. In short, it is about picking the options that produce the most and cost the least. A Pareto optimal solution occurs when no further Pareto improvement can be made.
The technique is related to the 0-1 knapsack problem and can be used to compare different hardware architectures. Normally Pareto analysis is based on cost and performance. As these are not the only important parameters in a project, we propose an alternative that includes performance and risk. A Pareto analysis of the available components for the emergency call button is illustrated below.
[Figure: all-inclusive platform containing the Microcontroller PIC-16 (5 MIPS), the DSP (50 MHz), the FPGA (5 GHz), the Microcontroller MSP430 (20 MIPS) «PE» and the ISM ASIC CC1101 «PE», connected by the communication elements SPI (300), internal (500) and DMA (600) <<CE>>, plus «HW» blocks for the ISM ADC, the microphone ADC, the speaker DAC and the oscillator on/off control.]

Figure 18 - All-inclusive Emergency call button platform
Execution time of blocks:

Component | Cost $ | Audio | ISM  | Communication
FPGA      | 55     | 5 ms  | 5 ms |
ASIC      | 20     |       | 1 ms | 1 ms
MIPS      | 5      | 2 ms  | 2 ms | 2 ms
DSP       | 10     | 7 ms  | 7 ms | 7 ms
[Chart: normalised Pareto plot of Audio vs ISM execution time for the components above; the vertical axis runs from 0 to 1 and the horizontal axis from 0 to 1.2.]
10 Other mapping strategies
In the above, design space exploration has been used to compare the different platforms with respect to
performance, power and cost. A Pareto analysis has then been performed to determine the Pareto-optimal mappings
and to determine whether any alternatives may be excluded as being inferior on all counts.
There are some alternatives to these methods, for example the Load Balancing Algorithm (LBA) and the
Longest Processing Time algorithm (LPT). These two mapping methods have the advantage that they may be
automated, so that an "optimal" mapping may be calculated mathematically.
This project has chosen not to focus on these methods. The reason is quite simple, yet requires some explanation.
Both the LPT and the LBA use a figure like the one in Figure 18, where the LPT requires the communication costs and the LBA does
not. The problem with this is that if the assignment is not to evaluate an existing platform, then creating such a figure
requires doing a design space exploration, and having it encompass all alternatives requires it to contain many
redundant processors. The LPT and LBA are not designed for this. Furthermore, the LBA and LPT only consider
computation requirements (LBA and LPT) and communication requirements (LPT); they do not consider risk, cost and
power requirements, which are often very important aspects. Finally, the LBA and LPT work on the premise that it is
possible to obtain a number representing how many instructions a given algorithm requires. This is a great
oversimplification, as a given algorithm may require an entirely different number of instructions on an FPGA, a
microcontroller and a DSP. All in all, it is tempting to have an algorithm design your platform for you, but at the
moment algorithms like the LPT and LBA have not reached the maturity to handle a complete platform design, with all
the different, often conflicting, requirements that such a design has to consider.
11 SystemC
One of the main objectives of this course/project is to obtain detailed knowledge of the system design language SystemC.
SystemC is a modelling language which lets the designer simulate a system, including both hardware and software
components, at a higher level of abstraction [SystemC]. Such a model gives the design team members a common understanding of the
design at an early stage of the design process. The modelled components will exist in the final product, which is why many
algorithms and interfaces implemented for simulation can be reused directly in the final product through the development
process. Another major advantage of a SystemC simulation is that its outcome is an executable specification.
Learning a new programming language, its strengths and weaknesses, is best achieved by trying it out on a concrete
design. In this project, the emergency button, the audio module is a well-qualified candidate for simulation,
since it includes some heavy, resource-demanding algorithms such as audio en-/decoding and echo cancellation.
Implementation of these algorithms is out of scope for this project, which is why we have chosen to simulate these
algorithms by introducing appropriate delays instead of a real implementation. In a real project initiated by industry it can
be advantageous to implement the algorithms, since the simulation results will be much more realistic and the
implemented algorithms can be used directly in the real implementation.
11.1 Simulation
As mentioned earlier, the Audio block is chosen as the subject for simulation. Simulation can be performed at several
abstraction levels, such as untimed TLM, timed TLM (consisting of CCAM and BCAM) or RTL.
An untimed TLM simulation is often used to provide an overall understanding of the design within the design team, while a timed
simulation such as CCAM or BCAM addresses proof of concept and consistency of the design. Taking this into
account, we decided to apply a timed simulation principle in terms of both CCAM and BCAM. Another major
motivation for choosing a timed TLM simulation is the learning objective of the project: timed TLM modelling is the more
challenging task and forces us to explore SystemC in detail.
In audio systems without high audio-quality demands, a sampling frequency of around ten kilohertz and a sampling depth of
12 bits will usually be plenty. These parameters are important and will impact the choice of encoder and decoder algorithms.
As a good design rule we try to apply known technologies where possible. The GSM full-rate speech codec is a
good choice of codec, and there are plenty of open-source implementations of the GSM codec algorithm, which motivated
us to pick it as the codec algorithm.
11.2 GSM 06.10
GSM 06.10 (also known as GSM Full Rate) is a codec designed by the European Telecommunications Standards
Institute for use in the GSM mobile networks. This variant of the GSM codec can be used freely, so it is easy to find in
open-source VoIP applications. The codec operates on audio frames of 20 milliseconds length (i.e. 160 samples) and it
compresses each frame to 33 bytes, so the resulting bitrate is 13 kbit/s (to be precise, the encoded frame is exactly 32
and 1/2 bytes, so 4 bits are unused in each frame) [gsm].
The above information from the GSM 06.10 standard leads to a sampling rate of 8 kHz (tp = 125 µs), which is sufficient
to supply the encoder algorithm with source data.
The SystemC implementation of the Audio block is based on the ibd (see Figure 9). The AudioTop module is the
programmer's view. This module is responsible for instantiating all other modules and mapping the signals to the
module ports.
[Figure omitted: diagram of the SystemC modules instantiated by AudioTop — ADCSource, EchoCancellation, AudioEncoder, CommunicationSim, AudioDecoder and DAC — with their threads (thrd_SampleReady, thrd_RunEchoCancellation, thrd_EncodeData, thrd_SendEncodedData, thrd_SimulateCommunication, thrd_GetData, thrd_DecodeData) and their clock (AudioClk_in, hsClk_in), data (data_in, data_out) and handshake (dataReady_in, ready_out, busy) ports.]
Figure 19 - SystemC class diagram
Each module consists of one or more processes, depending on the task to be handled. The simulation generates an output
trace file (AudioWave.vcd) which can be loaded in the GTKWave viewer.
ADCSource generates random data simulating input from the microphone and injects the data onto the data bus on the positive edge
of AudioClk_in. The EchoCancellation module runs an algorithm to minimise the loopback of the speaker signal back
through the microphone, called echo rejection. This block receives both the ADC data and the output from the decoder (the audio signal on the
speaker) and generates stimuli for the encoder block.
The encoder block encodes the audio input and delivers the result to the communication module, simulated as
CommunicationSim. CommunicationSim has the task of receiving the encoded data and switching it back to the output
channel as the data source for the AudioDecoder block. The AudioDecoder block receives GSM-encoded frames from the
Communication block in its input buffer. After decoding the frames it copies the decoded data from the input buffer to the
output buffer, using a mutex to ensure safe concurrent data transfer across the shared resource (the output buffer). At last
the decoded data are transferred to the DACSource block at a rate of AudioClk_in hertz.
The EchoCancellation, encoder and decoder algorithms are represented as delays, to allow simulation at the CCAM abstraction level
without having their final implementations. All data transfers are simulated at the BCAM level, with exact timing of
bus clock cycles. The result of the simulation is plotted in the GTKWave viewer as shown below. All three traces are taken from
the same simulation, but at different time resolutions.
Figure 20 - GTKWave viewer
11.3 Evaluation
We have tried out a specific design simulation in SystemC. The obvious strength of simulation in SystemC lies not
only in establishing a common view of the design within the design team, but also in providing an early proof of concept, which can be
used to evaluate hardware choices (partitioning and mapping).
Using SystemC in practice, we found that the learning curve for the simulation tool is quite steep and that,
particularly for projects that veer toward ASIC solutions, the benefits do not match the time effort. An
auto-generation tool would compensate for this disadvantage by giving shorter implementation time
through auto-generated VHDL code.
On the other hand, it is important to emphasise that the situation is immediately different when we talk about HW/SW
co-design principles, in which we operate within the FPGA design rules, where the HW platform is often out of
reach until a late stage in the development process. SystemC is of great importance in this case: a HW platform can be
simulated using SystemC and provides the basis for SW development without having the HW platform available.
A main downside of SystemC is the lack of debug facilities. Since writing the simulation model means writing C++ code at the same
level as the end product, the risk of bugs and failures is equally high. This creates a demand for advanced debugging
tools, yet GDB and old-fashioned debugging in terms of printf() were the only options we had available.
12 Conclusion
13 References:
[VOIP CODEC] http://toncar.cz/Tutorials/VoIP/VoIP_Basics_Overview_of_Audio_Codecs.html
[SystemC] SystemC: From The Ground Up (ISBN 1-4020-7988-5)
[BASS] Software Architecture in Practice (ISBN 0-321-15495-9)
[gsm] http://en.wikipedia.org/wiki/GSM (a brief description of the principles within GSM and the voice codec)
[INCOSE] Systems Engineering Handbook, version 3.2
[libgsm] GSM encoding/decoding library written in C, https://launchpad.net/libgsm
[AEC] http://dea.brunel.ac.uk/cmsp/Home_Gobu_Rasalingam/Gobu_new_way.htm (AEC, Acoustic Echo Cancellation)
[SysML] A Practical Guide to SysML (ISBN 978-0-12-378607-4)
[Gajski] Embedded System Design (ISBN 978-1-4419-0503-1)
[CC430] http://focus.ti.com/lit/ds/symlink/cc430f6137.pdf
[TIPlat] http://processors.wiki.ti.com/index.php/RF-CC430-65313-MVK
[FPGA] http://www.altera.com/literature/hb/cyclone-v/cyv_51001.pdf
[ADCGHz] http://www.national.com/ds/DC/ADC12D1000.pdf
[ADC] http://www.analog.com/static/imported-files/data_sheets/AD7854_7854L.pdf
[CC1101] http://focus.ti.com/lit/ds/symlink/cc1101.pdf
[DSP] http://focus.ti.com/lit/ds/symlink/tms320vc5401.pdf
[PIC16] http://ww1.microchip.com/downloads/en/DeviceDoc/41452A.pdf
[SKY] http://www.skyworksinc.com/uploads/documents/201134B.pdf
[DAC] http://focus.ti.com/lit/ds/symlink/tlv5616.pdf
[Nyquist] Digital Design (Wiley) (ISBN:)
14 Appendix
14.1 SystemC_TLM
This is a small part of the code, encompassing what is believed to be the most interesting parts of the SystemC_TLM
code.
SystemC is divided into modules and threads/methods. As explained in SysML -> SystemC, a block can easily be
translated into a module, but it may require multiple threads to realise a block. This may be seen in the Audio
decoding block, where three threads implement the functionality, as shown in Table 18.
SC_THREAD(audio_receiving_thread);
SC_THREAD(audio_decoding_thread);
SC_THREAD(audio_playback_thread);
sensitive << AudioClk;
dont_initialize();
Table 18 - Audio Decoding initialization SystemC
audio_receiving_thread is responsible for receiving data from the communication block, audio_decoding_thread
decodes the received data frames, and audio_playback_thread pushes the decoded data samples to the speaker (via
Splitter and DAC) at the correct frequency (AudioClk). The reason that three threads are required is that the ISM
receptions are sporadic, and it is therefore not possible to determine when they will occur. The decoding could take
more time than there is between two samples (it does not, but it might), so a separation is needed
between the playback thread and the decoding thread, as the decoding thread may be processing for a "long" period
of time.
As mentioned earlier the data received from ISM, via the Communication block, may be delayed resulting in a burst of
data, and also that some data may be buffered before playback begins to protect against small delays in
communication. This is implemented in the audio_receiving_thread, as shown in Table 19.
while(true)
{
    GSM0610DataFrame* tmp_data_frame = data_from_communication.read();
    receiveBufferLock.lock();
    if (receiveBuffer.capacity() - receiveBuffer.size() == 0)
    {
        printf("Audio receive buffer Overflow\r\n");
    }
    receiveBuffer.push_back(tmp_data_frame);
    receiveBufferLock.unlock();
    buffersChanged.notify(SC_ZERO_TIME);
}
Table 19 - Audio decoding receiving thread
The receiveBuffer is a circular buffer holding the data frames (or rather, pointers to data frames). The length of the
circular buffer determines the maximum buffering time, and it should not be "infinite". There are two reasons for
this. Firstly, to guarantee an uninterrupted audio stream, the entire data stream would have to be buffered before
playback commences, which is naturally not possible in "real-time" communication. Secondly, the playback must
continue at a certain speed (8 kHz), so if a large delay is experienced, e.g. 20 seconds, and the buffer simply accepts this,
then when the data is pushed through, the conversation will be 20 seconds delayed, and this delay will persist, as
shown in Figure 21. Alternatively, an acceptable maximum delay is determined, e.g. 1 second, and the buffer is dimensioned
accordingly. At 160 samples per data block and 8 kHz, one second corresponds to 50 data blocks, so the circular buffer should be 50 frames deep. If a
delay of 20 seconds is experienced and the data is then all received at once, all but the last second is overridden in the
buffer, and the 19 "old" seconds are simply lost. Figure 21 shows the effects of a dropout in communication,
both short and longer. It can also be seen that if the buffer does not have a maximum size, it will simply grow and the
audio delay will continue to increase. With an infinite buffer size no data is lost, but in this case, where the playback
speed is fixed, it is actually better to lose data.
[Figure omitted: timeline in 20 ms steps of real time, data frame arrival and buffer content, showing the initial buffering (1/160 of the audio clock) that protects against small delays, a small delay in the communication path, the resulting audio delay (20-100 ms), and an interval where the speaker will play nothing.]
Figure 21 - Effect of communication delay
It may also be seen that an overflow is logged, but it is not treated as a fatal error (although in this simulation it actually would be);
the last entry in the buffer is simply overridden.
The buffer is protected by a mutex, as it is shared with the decoding thread. Finally, an event is raised to notify the
decoding thread that new data is available. The decoding thread may be busy, or may have no playback buffer to
decode to (if they are already full), but in that case the event is simply ignored.
The decoding thread receives an event both from the receiving thread (when new data is available) and from the
playback thread (when a playback buffer has been emptied), as both events may trigger a new decoding to be
initiated. This may be seen in Table 20.
GSM0610DataFrame tmp_data_frame;
int bufferToUse;
int tempResult;
while(true)
{
    wait(buffersChanged);
    if (!receiveBuffer.empty())
    {
        bufferToUse = -1;
        playbackBufferLock.lock();
        // Always look at the active buffer first, in case it is empty
        if (playbackBufferLength[activeAudioBuffer] == -1)
        {
            bufferToUse = activeAudioBuffer;
        }
        else if (playbackBufferLength[activeAudioBuffer == 0 ? 1 : 0] == -1)
        {
            bufferToUse = activeAudioBuffer == 0 ? 1 : 0;
        }
        playbackBufferLock.unlock();
        if (bufferToUse != -1)
        {
            receiveBufferLock.lock();
            std::auto_ptr<GSM0610DataFrame> ptr(receiveBuffer.front());
            receiveBuffer.pop_front();
            receiveBufferLock.unlock();
            tempResult = performAudioDecoding(ptr->getBuffer(), ptr->length(),
                playbackBuffer[bufferToUse], sizeof(playbackBuffer[bufferToUse]));
            playbackBufferLock.lock();
            playbackBufferLength[bufferToUse] = tempResult;
            playbackBufferLock.unlock();
        }
    }
    else
    {
        printf("No data to decode\r\n");
    }
}
Table 20 - Audio decoding thread
Here it may be seen that the data received from the ISM block, via the Communication block, is actually a pointer to a
heap-allocated data chunk (created with new), and a so-called "smart pointer" is used to ensure cleanup. The
attentive reader may have noticed that the implementation has a serious memory leak: if the circular buffer
overflows, the overridden data is never cleaned up. Naturally this would be unacceptable in a real
implementation, but in this simulation it is acceptable. The attentive reader may also notice the dual lock/unlock of the
playbackBufferLock, and wonder why the mutex lock does not include the actual decoding. This is because
if a buffer's length is -1, it is guaranteed that the playback thread will not touch it, and the protection therefore only
needs to cover checking and setting the buffer length.
This can also be seen in the playback thread, where double buffering is used to ensure fast switching between one
decoded data frame and the next; here too, only the test and switch require protection. If the
writing/reading of an integer to/from an entry in an integer array could be guaranteed to be an atomic instruction,
much of the protection could be removed. This is shown in Table 21. Here it may also be seen that the thread notifies
the decoding thread when a buffer is done. The really attentive reader may notice an issue with the event: an event
is lost if the receiving thread is not observing it at the time it is raised. If a playback buffer is spent during a
decoding, the event will be lost, and the decoding will not be triggered until a new ISM data frame is received or
the second buffer is emptied, which would result in a gap in the audio. Fixing this is left up to the reader.
short tmp_val_splitter;
while(true)
{
    wait();
    tmp_val_splitter = 0;
    // Only this thread changes activeAudioBuffer so no protection is needed
    if (playbackBufferLength[activeAudioBuffer] > 0)
    {
        tmp_val_splitter = playbackBuffer[activeAudioBuffer][audioBufferOffset];
        if (++audioBufferOffset == playbackBufferLength[activeAudioBuffer])
        {
            playbackBufferLock.lock();
            audioBufferOffset = 0;
            playbackBufferLength[activeAudioBuffer] = -1;
            activeAudioBuffer = activeAudioBuffer == 0 ? 1 : 0;
            playbackBufferLock.unlock();
            buffersChanged.notify(SC_ZERO_TIME);
        }
    }
    else
    {
        printf("Playback buffer Underflow\r\n");
    }
    data_to_splitter.write(tmp_val_splitter);
}
Table 21 - Audio Playback thread
There are many other interesting aspects of the code, like the DACSim forwarding the speaker data to the ADCSim so
that it can simulate the acoustic echo, or the use of custom SystemC data types, which were later abandoned in favour
of pointers, but for these fascinating aspects please refer to the code.
14.2 SystemC_TTLM_Micro
The timed TLM for the Microcontroller and ISM ASIC is very similar to the TLM, with two important differences: first,
the included waits, and second, the inclusion of a simulated sound delay as the sound travels from the speaker to the
microphone.
An example of the first is shown in Table 22, and of the second in Table 23.
short tmp_val_echo_cancel;
unsigned char* tmp_result_buffer;
int tmp_result_buffer_length;
while(true)
{
    tmp_val_echo_cancel = data_from_echocancellation.read();
    // Simulate delay in communication from EchoCancellation
    // 2 clocks at 20MHz = 2 * 50ns
    wait(100, SC_NS);
    // Audio Encoding algorithm
    tmp_result_buffer = performAudioEncoding(tmp_val_echo_cancel, &tmp_result_buffer_length);
    // Simulate the encoding
    // 500 clocks at 20MHz = 500 * 50ns
    wait(25, SC_US);
    if (tmp_result_buffer != 0)
    {
        GSM0610DataFrame* tmp_result = new GSM0610DataFrame();
        tmp_result->push_back_all(tmp_result_buffer, tmp_result_buffer_length);
        data_to_communication.write(tmp_result);
    }
}
Table 22 - Audio encoding with delay
void DACSim::dac_writer_thread()
{
    short tmp_val_speaker;
    while(true)
    {
        tmp_val_speaker = data_from_splitter.read();
        // Simulate the communication: 2 * 50ns
        wait(100, SC_NS);
        // Simulate the DAC delay: 10 * 50ns
        wait(500, SC_NS);
        fprintf(fp_speaker, "%hd\r\n", tmp_val_speaker);
        bufferForEcho.write(tmp_val_speaker);
        firstSampleArrived.notify(SC_ZERO_TIME);
    }
}

void DACSim::dac_feedback_thread()
{
    wait(firstSampleArrived);
    wait(587, SC_US);
    while(true)
    {
        data_to_adcsim.write(bufferForEcho.read());
        wait(125, SC_US); // wait for next sample (8kHz = 125us)
    }
}
Table 23 - Acoustic echo simulation in DACSim
For other examples of the waits, please refer to the code, and to see the buffer underflow (the Audio decoding time
has been increased to show this) please run the simulation.
14.3 SystemC
Below is the SystemC code used to implement the AudioDecoder module. The module consists of one constructor,
AudioDecoder, and two thread processes, thrd_GetAndDecodeData and thrd_SendData.
The process thrd_GetAndDecodeData receives data in its packet_in buffer on each positive edge of the data_ready_in
signal, to which it is sensitive.
After receiving enough data to decode (a whole frame), a 10 µs delay is introduced to simulate the decoding algorithm.
The decoded data is then copied to buffer_out, protected by a mutex (mutual exclusion) to prevent simultaneous
access to the buffer and inappropriate data overwrites.
The process thrd_SendData runs continuously and supplies the DAC module with either zeros or decoded data,
depending on the presence of decoded data.
/*
 * AudioDecoder.cpp
 *
 *  Created on: 19/02/2011
 *      Author: saa
 */
#include "AudioDecoder.h"
#include "defines.h"
#if (MY_DEBUG_1)
#include <string.h>
#include <iostream>
#include <iomanip>
using namespace std;
#endif

AudioDecoder::AudioDecoder(sc_module_name nm) :
    sc_module(nm),
    idx(0),
    len(0),
    templen(0)
{
    SC_THREAD(thrd_GetAndDecodeData);
    sensitive << data_ready_in.pos();
    //dont_initialize();
    SC_THREAD(thrd_SendData);
    sensitive << AudioClk_in.pos();// << e1;
    //dont_initialize();
}

void AudioDecoder::thrd_GetAndDecodeData(void)
{
    while(true)
    {
        wait();
        //read data packet
        packet_in[templen++] = data_in.read();
#if (MY_DEBUG_1)
        cout << "\n Decoder get: Temp len = " << setw(3) << templen-1
             << "    data: " << setw(3) << packet_in[templen-1] << endl;
#endif
        if(templen == MAX_SIZE)
        {
#if (MY_DEBUG_1)
            cout << "\n Decoder copy: Temp len = " << setw(3) << templen
                 << " here we copy the buffer. \n";
#endif
            //Lock mutex
            bufferAccess.lock();
#if (MY_DEBUG_1)
            cout << "bufferAccess lock: " << (packet_in[templen-1] + 1)/MAX_SIZE << endl;
            cout << packet_in[templen-1] + 1 << endl;
#endif
            //Delay due to decoding algorithm
            wait(10, SC_US);
            //memcpy(buffer_out, packet_in, templen);
            for(sc_uint<6> i = 0; i < MAX_SIZE; i++)
            {
                buffer_out[i] = packet_in[i];
            }
#if (MY_DEBUG)
            cout << "DEC_OUTPUT_BUFFER: [";
            for(sc_uint<6> i = 0; i < MAX_SIZE; i++)
            {
                cout << buffer_out[i] << ", ";
            }
            cout << endl;
#endif
            len = templen;
            templen = 0;
            //unlock mutex
            bufferAccess.unlock();
        }
    }
}
void AudioDecoder::thrd_SendData(void)
{
    while(true)
    {
        wait();
        //cout << endl << "Decoder: len = " << len << endl;
        if (len != 0)
        {
            len--;
            //Lock mutex
            bufferAccess.lock();
            data_out->write(buffer_out[idx++]);
        }
        else
        {   //buffer empty
            //unlock mutex
            bufferAccess.unlock();
            idx = 0;
            data_out->write(0);
        }
    }
}