
Vaibhav Narkhede

171090977, BTECH EXTC


ASSIGNMENT 1.

Q1) EXPLAIN IN DETAIL HOW YOU WILL DESIGN AND MANAGE A PARTICULAR EMBEDDED SYSTEM PRODUCT.

>> IP-PHONE:
A VoIP or IP phone is simply a handset that uses Voice over IP (VoIP) technology to allow calls to be made and received over an IP network (such as the Internet) instead of the traditional Public Switched Telephone Network (PSTN). The audio signals of the voice call are digitized into IP data packets; the handset is connected to an IP network, and these packets are routed through a private IP network or over the public Internet to create the connected voice call.
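To make the packetization idea concrete, the short C sketch below wraps one 20 ms frame of digitized microphone audio in a simplified header and sends it as a UDP datagram. This is only an illustration: the header layout is invented, read_adc_samples() is a hypothetical driver call, and a real IP phone would use a standard RTP/SIP stack.

/* Sketch: wrap 20 ms of 8 kHz PCM microphone audio in an RTP-like header
 * and send it over UDP. The header fields and read_adc_samples() are
 * assumptions for illustration, not a real VoIP protocol implementation. */
#include <stdint.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define SAMPLES_PER_PACKET 160              /* 20 ms at 8 kHz */

struct voice_packet {
    uint16_t seq;                           /* packet sequence number */
    uint32_t timestamp;                     /* sample-clock timestamp */
    int16_t  payload[SAMPLES_PER_PACKET];   /* raw PCM samples */
};

/* Hypothetical driver call: fills buf with n microphone samples. */
extern void read_adc_samples(int16_t *buf, int n);

int send_voice_frame(int sock, const struct sockaddr_in *peer,
                     uint16_t seq, uint32_t ts)
{
    struct voice_packet pkt;

    pkt.seq = htons(seq);                   /* network byte order */
    pkt.timestamp = htonl(ts);
    read_adc_samples(pkt.payload, SAMPLES_PER_PACKET);

    /* One datagram per audio frame; the IP network routes it to the peer. */
    return (int)sendto(sock, &pkt, sizeof pkt, 0,
                       (const struct sockaddr *)peer, sizeof *peer);
}

The sequence number and timestamp let the receiving phone reorder and pace the frames before playing them back through the speaker.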
Designing the IP-Phone:
The phone has a single user-satisfaction criterion: carry sound clearly in both directions, with a minimalistic design for ease of use.
1. Controller: The controller used must have a wireless protocol interface and an ADC of adequate capacity (resolution and sampling rate) for voice.
2. Microphone: The microphone design must ensure that noise pickup is kept to a minimum and the sound quality stays clear.
3. Keypad: A dial-up keypad rather than a touchscreen would be preferred, keeping complexity low (a scan sketch is given after this list).
4. Speaker: The speaker must have a frequency response matched to the human voice band, so that out-of-band content is automatically attenuated.
5. Network interface: Since an IP phone is a VoIP device, it must have a network interface card with both an Ethernet jack and Wi-Fi connectivity, so that one can serve as a fallback if the other has a problem.
6. Addressing: The IP-phone software must uniquely identify the phone so that a call can be routed to it from any part of the world, for example by assigning a static public IP rather than a dynamic one.
7. Caller ID: A caller-ID feature to trace the IP address of the incoming call and record the call details.
8. Software: The software must patch security problems, route the traffic, and inform the caller when the phone is busy.
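As a sketch of how the low-complexity dial pad from point 3 could be read, the routine below scans a 4x3 row/column key matrix. gpio_set_row() and gpio_read_col() are hypothetical HAL calls; the actual pin mapping depends on the controller chosen.

/* Sketch of a 4x3 matrix keypad scan for the dial pad. The GPIO helpers
 * are assumed HAL functions, not part of any specific vendor library. */
#include <stdint.h>

extern void gpio_set_row(int row, int level);   /* drive one row line */
extern int  gpio_read_col(int col);             /* read one column line */

static const char keymap[4][3] = {
    { '1', '2', '3' },
    { '4', '5', '6' },
    { '7', '8', '9' },
    { '*', '0', '#' },
};

/* Returns the pressed key, or 0 if no key is currently down. */
char keypad_scan(void)
{
    for (int r = 0; r < 4; r++) {
        /* Drive only the row under test low, keep the other rows high. */
        for (int i = 0; i < 4; i++)
            gpio_set_row(i, i == r ? 0 : 1);

        for (int c = 0; c < 3; c++) {
            if (gpio_read_col(c) == 0)          /* column pulled low => key pressed */
                return keymap[r][c];
        }
    }
    return 0;
}

The main loop would call keypad_scan() periodically and debounce the result before accepting a digit.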

Managing the IP-Phone:

1. For managing incoming and outgoing connections, the phone must have a reliable Internet connection via Ethernet or Wi-Fi.
2. The phone must be provided with a public IP address so that it is uniquely identified over the network.
3. Internet telephony is vulnerable to easy phone taps. Hence the conversations must be secured over appropriate protocols, and security patches must be applied to counter tapping and DDoS attacks.
4. A call logger to record call requests and trace the details of received calls is important for eliminating unsafe requests (a minimal logging sketch follows this list).
5. OTA software-update capability (e.g. an Ethernet bootloader) to enhance the software with new patches.
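To illustrate point 4 above, here is a minimal call-logging sketch in C. The comma-separated record layout and the log-file path are assumptions made for the example, not part of any particular IP-phone firmware.

/* Append one record per call request so unsafe callers can be traced later. */
#include <stdio.h>
#include <time.h>

struct call_record {
    char   peer_ip[46];     /* large enough for an IPv6 address string */
    time_t start;           /* when the request arrived */
    int    accepted;        /* 1 = answered, 0 = rejected or blocked */
};

int log_call(const char *log_path, const struct call_record *rec)
{
    FILE *fp = fopen(log_path, "a");
    if (fp == NULL)
        return -1;          /* logging must not crash the phone */

    fprintf(fp, "%ld,%s,%d\n", (long)rec->start, rec->peer_ip, rec->accepted);
    fclose(fp);
    return 0;
}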

With these management conditions in place and the design criteria met, the IP phone is ready to go.

Q2) EXPLAIN ANY TWO DEVELOPMENT MODELS.


>> Waterfall Model:
• The Waterfall Model was the first process model to be introduced. It is also referred to as a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed before the next phase can begin, and there is no overlapping of phases.
• The waterfall model is the earliest SDLC approach used for software development. It illustrates the software development process as a linear sequential flow, which means that any phase in the development process begins only after the previous phase is complete.

Waterfall Model design

The waterfall approach was the first SDLC model to be used widely in software engineering to ensure the success of a project. In the waterfall approach, the whole process of software development is divided into separate phases, and typically the outcome of one phase acts as the input for the next phase. The different phases of the waterfall model are described below.
• Requirement Gathering and Analysis: All possible requirements of the system to be developed are captured in this phase and documented in a requirement specification document.
• System Design: The requirement specifications from the first phase are studied in this phase and the system design is prepared. System design helps in specifying hardware and system requirements and also in defining the overall system architecture.
• Implementation: With inputs from system design, the system is first developed in
small programs called units, which are integrated in the next phase. Each unit is
developed and tested for its functionality which is referred to as Unit Testing.
• Integration and Testing: All the units developed in the implementation phase are
integrated into a system after testing of each unit. Post integration the entire system is
tested for any faults and failures.
• Deployment of system: Once the functional and non-functional testing is done, the
product is deployed in the customer environment or released into the market.
• Maintenance: There are some issues which come up in the client environment. To fix
those issues patches are released. Also, to enhance the product some better versions are
released. Maintenance is done to deliver these changes in the customer environment.
All these phases are cascaded, with progress seen as flowing steadily downwards like a waterfall through the phases. The next phase is started only after the defined set of goals is achieved for the previous phase and it is signed off, hence the name "Waterfall Model". In this model, phases do not overlap.
Waterfall Model Application
Every piece of software developed is different and requires a suitable SDLC approach based on internal and external factors. Some situations where use of the waterfall model is most appropriate are:
• Requirements are very well documented, clear and fixed.
• Product definition is stable.
• Technology is understood and is not dynamic.
• There are no ambiguous requirements.
• Ample resources with required expertise are available to support the product.
• The project is short.
Advantages:
• Simple and easy to understand and use.
• Easy to manage due to the rigidity of the model; each phase has specific deliverables and a review process.
• Phases are processed and completed one at a time.
• Works well for smaller projects.
Disadvantages:
• No working software is produced until late in the life cycle.
• High amounts of risk and uncertainty.
• Not a good model for complex and object-oriented projects.
• Poor model for long and ongoing projects.

>> Spiral Model:
• The spiral model is one of the most important Software Development Life Cycle models and provides support for risk handling. In its diagrammatic representation, it looks like a spiral with many loops.
• The exact number of loops of the spiral is unknown and can vary from project to project. Each loop of the spiral is called a phase of the software development process. The exact number of phases needed to develop the product can be varied by the project manager depending on the project risks.
• Since the project manager dynamically determines the number of phases, the project manager has an important role in developing a product using the spiral model. The radius of the spiral at any point represents the expenses (cost) of the project so far, and the angular dimension represents the progress made in the current phase.

The diagram of the spiral model shows its different phases. Each phase is divided into four quadrants, and the functions of these four quadrants are discussed below:


• 1. Objectives determination and identification of alternative solutions: Requirements are gathered from the customers, and the objectives are identified, elaborated and analyzed at the start of every phase. Alternative solutions possible for the phase are then proposed in this quadrant.

• 2. Identify and resolve risks: During the second quadrant, all the possible solutions are evaluated to select the best one. The risks associated with that solution are then identified and resolved using the best possible strategy. At the end of this quadrant, a prototype is built for the best possible solution.

• 3. Develop the next version of the product: During the third quadrant, the identified features are developed and verified through testing. At the end of the third quadrant, the next version of the software is available.

• 4. Review and plan for the next phase: In the fourth quadrant, the customers evaluate the version of the software developed so far. In the end, planning for the next phase is started.

Risk Handling in Spiral Model


• A risk is any adverse situation that might affect the successful completion of a software project. The most important feature of the spiral model is that it handles these unknown risks after the project has started.
• Such risks are resolved more easily by developing a prototype. The spiral model supports coping with risks by providing the scope to build a prototype at every phase of the software development.

• The Prototyping Model also supports risk handling, but its risks must be identified completely before the development work of the project starts. In real life, project risks may occur after the development work starts, and in that case the Prototyping Model cannot be used.
• The spiral model is called a Meta-Model because it subsumes all the other SDLC models. For example, a single-loop spiral actually represents the Iterative Waterfall Model. The spiral model incorporates the stepwise approach of the Classical Waterfall Model and uses the approach of the Prototyping Model by building a prototype at the start of each phase as a risk-handling technique. The spiral model can also be considered as supporting the evolutionary model: the iterations along the spiral can be seen as evolutionary levels through which the complete system is built.
Advantages
• Risk handling: For projects with many unknown risks that surface as development proceeds, the spiral model is the best development model to follow because of the risk analysis and risk handling done at every phase.
• Good for large projects: The spiral model is recommended for large and complex projects.
• Flexibility in requirements: Change requests in the requirements at later phases can be incorporated accurately by using this model.
• Customer satisfaction: Customers can see the development of the product at an early phase and thus become habituated to the system by using it before the complete product is finished.
Disadvantages
• Complex: The spiral model is much more complex than other SDLC models.
• Expensive: The spiral model is not suitable for small projects as it is expensive.
• Heavily dependent on risk analysis: The successful completion of the project depends very much on risk analysis. Without highly experienced expertise, a project developed using this model is likely to fail.
• Difficulty in time management: As the number of phases is unknown at the start of the project, time estimation is very difficult.

Q3) WHAT ARE THE CHALLENGES THAT YOU FACE WHILE DEVELOPING AN EMBEDDED SYSTEM.

>> Embedded software is always a constituent of a larger system, for instance, a digital watch,
a smartphone, a vehicle or automated industrial equipment. Such embedded systems must have
real-time response under all circumstances within the time specified by design and operate
under the condition of limited memory, processing power and energy supply. Moreover,
embedded software must be immune to changes in its operating environment – processors,
sensors, and hardware components may change over time.

1) High Power Dissipation of Embedded System Design

The persistent challenge is how to deploy an embedded system with an increasing number of transistors while keeping power consumption acceptable. As gate density increases, the power density of systems-on-chip is set to increase. Thus, engineers must reduce the overall power consumption of embedded systems by using an efficient system architecture.

2) Problems of Testing an Embedded System Design


Embedded hardware testing: This is similar to other testing types, except that embedded developers use hardware-based test tools. The embedded hardware is tested for the system's performance, consistency, and validation as per the product requirements.
Verification: ensuring that the functionality has been implemented correctly.
Validation: ensuring that the product matches the requirements and passes all quality standards.

3) Stability: Unexpected behaviour from an embedded system is inadmissible and poses serious risks. End users demand that embedded systems behave uniformly under all circumstances and are able to operate durably.

4) Safety in critical environments: The Software Development Life Cycle (SDLC) for embedded software is characterized by stricter requirements and limitations in terms of quality, testing, and engineering expertise.

5) Increased Cost and Time-to-Market

Apart from flexibility and security, embedded systems are tightly constrained by cost. In embedded hardware design, better approaches are needed across the development-to-deployment cycle in order to handle cost modelling and cost optimality with respect to digital electronic components and production quantity. Hardware/software co-designers also need to solve the design-time problem and bring embedded devices to market at the right time.

Now, considering the challenges related to the software development of an embedded system:

1) Security has become a burning issue in the digital world. The related risks grow exponentially, especially for IoT devices, which are gaining popularity worldwide and becoming ever more interconnected. Because modern home appliances like electric cookers, refrigerators and washing machines have connectivity integrated by default, the Internet of Things is now exposed to a serious risk of hacking attacks.

2) Compatibility and integrity: The further expansion of IoT devices, against the background of their connectivity, puts more pressure on their adaptability. Users must be able to administer the application through a simple user interface via all available channels, including over-the-air firmware updates, which requires broad compatibility across the entire ecosystem.

3) Connectivity: There are many different ways to connect a device to the Internet. Connections can be established through Wi-Fi, Ethernet, EDGE, LoRa, a Bluetooth bridge, and other channels.

4) Over-the-air updates: The issue standing next to Internet connectivity is remote updating of the firmware. For a standalone device, it is enough to place the update on a secure site and notify users to download and install it. The situation is different with IoT devices: the updates must be delivered and applied on their own, without user intervention (a device-side sketch follows this list).

5) Debugging: Debugging is a general issue that grows together with the number of connected devices; the time and effort required grow in parallel. With open-source software being integrated, a system spread across numerous loosely coupled devices shows more unexpected behaviour than a single standalone one.
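To illustrate the unattended update flow described in point 4, the sketch below shows a device-side check, download, verify and apply sequence. Every helper it calls (ota_fetch_manifest, ota_download, ota_verify_signature, ota_apply) is a hypothetical placeholder for whatever update client the platform actually provides.

/* Sketch of an unattended OTA update check for an IoT node. */
#include <stdint.h>

extern int ota_fetch_manifest(uint32_t *latest_version);      /* ask the server */
extern int ota_download(uint32_t version, const char *path);  /* fetch image */
extern int ota_verify_signature(const char *path);            /* check integrity */
extern int ota_apply(const char *path);                       /* reboot into image */

int ota_check_and_update(uint32_t running_version)
{
    uint32_t latest;

    if (ota_fetch_manifest(&latest) != 0 || latest <= running_version)
        return 0;                                   /* nothing newer available */

    /* Stage the image and refuse anything that fails verification, so a
     * tampered or truncated download is never executed. */
    if (ota_download(latest, "/ota/staging.bin") != 0)
        return -1;
    if (ota_verify_signature("/ota/staging.bin") != 0)
        return -1;

    return ota_apply("/ota/staging.bin");           /* no user intervention needed */
}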

Q4) EXPLAIN THE BENEFITS OF SOFTWARE CONFIGURATION MANAGEMENT.

>> Software configuration management (SCM) is the process of organizing, recording and facilitating software development in a transparent, high-speed environment that can operate as a functioning whole without overlooking the details.

1) It enables software to be created and tested in as practical and cost-effective a way as possible, and it has evolved into an indispensable part of the software development process.
2) When it comes to technology, the development and deployment of enterprise software,
along with software customization and packaged applications are the main beneficiaries
of software configuration management.
3) SCM makes change manageable through the use of integrated automated systems that record change, often as it happens, and provide a development trail that can be easily followed.
4) It versions the different files and keeps track of them as development progresses. This
prevents missed change requests, unfixed software bugs and other defects that can
damage customer confidence or delay deployment of critical updates.
5) It also makes certain that handoffs occur more easily by making concurrent
development possible.

6) Software configuration management answers the five Ws (Who, What, When, Where and Why) by maintaining records of changes and of the impact these changes have on the application under development, as well as on upgrades and releases.

7) Cost reduction by having detailed knowledge of all the elements of the configuration, which allows unnecessary duplication to be avoided. Your business will have greater agility and faster problem resolution, giving a better quality of service to your customers.

8) Enhanced system and process reliability through more rapid detection and correction
of improper configurations that could negatively impact performance.

Q5) SHORT NOTES:

Q5.1) Design trade-offs for product development

>> The customer is trying to establish whether something is actually financially viable. Without a spec, only a rough idea can be given, and there are always many decisions to make in the design process. Give the same specification to ten engineers and you will get ten, possibly wildly different, designs, all of which may meet the specification perfectly. However, one of those designs will win in terms of unit cost and another (probably not the same one) will win on development cost. A third may have the best technical specification, even though all of them meet the original spec.

A) Balancing Time

The design engineer has to balance the three factors of technical spec, development cost,
and unit price. To make the best choice of where the bias should be, these factors are
considered:

• Time to market
• Available development budget
• Target market sale price
• Anticipated annual sales volumes
• Value in exceeding or modifying the specification
B) Unit Cost:

Figure 1. High time-to-market pressure: design choices need to reflect the focus on reducing development cost and time.

Figure 2. Given a free choice, engineers tend to make design choices that focus on the best technology and features at the expense of development time and unit cost.
At the centre of the technical decisions are the trade-offs between the use of software vs
hardware.
The software cost doesn’t appear on the bill of materials generally, but it is far from ‘free’.
Software development can be extremely onerous and, in a typical project that we carry out,
the software cost (application plus drivers) makes up about 2/3 of the overall development
cost.
The choice of components can have a fundamental impact on software development time. As a result, there is often a conflict between this time and the unit cost, as shown in Figure 3.
Figure 3. In embedded designs, using more costly components can often help reduce software development time (and vice versa).

C) Technical Specification
The technical-specification trade-off mainly concerns whether the product will remain suitable in changing environments and whether it is expected to take on the adaptability of newer versions. Examples include weighing NFC against Wi-Fi, or offering expandable memory as an option to meet the increasing demand for built-in memory. Rather than affecting the intellectual part of the embedded system, these technical specifications shift the primary focus to the building cost of the system and hence of each unit that goes into its making.

Q5.2) Requirements engineering.


>> Requirements engineering is the process of defining, documenting and maintaining the requirements. It is a process of gathering and defining the services provided by the system.
Requirements Engineering Process consists of the following main activities:
• Requirements elicitation
• Requirements specification
• Requirements verification and validation
• Requirements management
Requirements Elicitation:
1) It is related to the various ways used to gain knowledge about the project domain and requirements. The various sources of domain knowledge include customers, business manuals, existing software of the same type, standards and other stakeholders of the project.
2) The techniques used for requirements elicitation include interviews, brainstorming, task analysis, the Delphi technique, prototyping, etc. Elicitation does not produce formal models of the requirements understood. Instead, it widens the knowledge domain of the analyst and thus helps in providing input to the next stage.

Requirements specification:
This activity is used to produce formal software requirement models. All the requirements, including the functional as well as the non-functional requirements and the constraints, are specified by these models in totality. During specification, more knowledge about the problem may be required, which can again trigger the elicitation process.
The models used at this stage include ER diagrams, data flow diagrams (DFDs), function decomposition diagrams (FDDs), data dictionaries, etc.

Requirements verification and validation:


1) Verification: It refers to the set of tasks that ensure that software correctly implements
a specific function.
2) Validation: It refers to a different set of tasks that ensure that the software that has been
built is traceable to customer requirements. If requirements are not validated, errors in
the requirements definitions would propagate to the successive stages resulting in a lot
of modification and rework.
The main steps for this process include:
• The requirements should be consistent with all the other requirements i.e. no two
requirements should conflict with each other.
• The requirements should be complete in every sense.
• The requirements should be practically achievable.
Reviews, buddy checks, making test cases, etc. are some of the methods used for this.

Requirements management:
1) Requirements management is the process of analysing, documenting, tracking, prioritizing and agreeing on the requirements, and controlling communication with the relevant stakeholders.
2) This stage takes care of the changing nature of requirements. It should be ensured that the SRS is as modifiable as possible so as to incorporate changes in requirements specified by the end users at later stages too.
3) Being able to modify the software as per requirements in a systematic and controlled manner is an extremely important part of the requirements engineering process.

Q6) EXPLAIN THE INTEGRATION, TESTING AND PACKAGING CONCEPTS.

>> INTEGRATION:
System integration is, quite simply, the combination of systems to build a larger system or set of systems. Following a bottom-to-top approach, one can build an understanding of how chip-level embedded systems can make up board-level embedded systems, which could then form device-level systems, and so on.
The concept of system integration has been gaining unprecedented importance because the need for interoperability and for open yet secure embedded-system data exchange has gained momentum. Whether the integration is horizontal or vertical, integrated embedded systems are an important piece of interoperable control and data security on the plant floor.

TESTING:

Testing can be stated as the process of verifying and validating that a software application is bug free, meets the technical requirements as guided by its design and development, and meets the user requirements effectively and efficiently while handling all exceptional and boundary cases.
Testing can be divided into two steps:
1. Verification: it refers to the set of tasks that ensure that the software correctly implements a specific function.
2. Validation: it refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements.

Software testing techniques can be broadly classified into two categories:

1. Black-box testing: The technique of testing in which the tester does not have access to the source code of the software; testing is conducted at the software interface without concern for the internal logical structure of the software.
2. White-box testing: The technique of testing in which the tester is aware of the internal workings of the product and has access to its source code; testing is conducted by making sure that all internal operations are performed according to the specifications.

Software testing can be broadly classified into four levels:

1. Unit Testing: A level of the software testing process where individual units/components of a software/system are tested. The purpose is to validate that each unit of the software performs as designed (a minimal example follows this list).
2. Integration Testing: A level of the software testing process where individual units are
combined and tested as a group. The purpose of this level of testing is to expose faults
in the interaction between integrated units.
3. System Testing: A level of the software testing process where a complete, integrated
system/software is tested. The purpose of this test is to evaluate the system’s
compliance with the specified requirements.
4. Acceptance Testing: A level of the software testing process where a system is tested
for acceptability. The purpose of this test is to evaluate the system’s compliance with
the business requirements and assess whether it is acceptable for delivery.
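As a concrete illustration of unit testing, the self-contained C program below exercises a single made-up function, clamp_volume(), in isolation using assert(); the function and its expected behaviour are invented purely for the example.

/* One unit under test, three test cases, pass/fail decided by assert(). */
#include <assert.h>
#include <stdio.h>

static int clamp_volume(int level)
{
    if (level < 0)   return 0;
    if (level > 100) return 100;
    return level;
}

int main(void)
{
    assert(clamp_volume(-5)  == 0);     /* below range is clamped to the minimum */
    assert(clamp_volume(42)  == 42);    /* in-range values pass through unchanged */
    assert(clamp_volume(250) == 100);   /* above range is clamped to the maximum */

    puts("unit tests passed");
    return 0;
}

A unit-test framework automates exactly this pass/fail check across many such cases.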

Integration testing is the process of testing the interface between two software units or modules.
1. Big-Bang Integration Testing –
It is the simplest integration testing approach: all the modules are combined and the functionality is verified after individual module testing is complete. In simple words, all the modules of the system are simply put together and tested. This approach is practicable only for very small systems. Once an error is found during integration testing, it is very difficult to localize, as it may potentially belong to any of the modules being integrated, so debugging errors reported during big-bang integration testing is very expensive.
2. Bottom-Up Integration Testing –
In bottom-up testing, the lower-level modules are tested first and then combined with higher-level modules until all modules are tested. The primary purpose of this integration testing is to test, for each subsystem, the interfaces among the various modules making up that subsystem. This integration testing uses test drivers to drive and pass appropriate data to the lower-level modules.
3. Top-Down Integration Testing –
The top-down integration testing technique uses stubs to simulate the behaviour of the lower-level modules that are not yet integrated. In this integration testing, testing takes place from top to bottom: high-level modules are tested first, then low-level modules, and finally the low-level modules are integrated with the high level to ensure the system works as intended (see the stub sketch after this list).

4. Mixed Integration Testing –
Mixed integration testing is also called sandwich integration testing. It follows a combination of the top-down and bottom-up testing approaches. In the top-down approach, testing can start only after the top-level modules have been coded and unit tested; in the bottom-up approach, testing can start only after the bottom-level modules are ready. The sandwich (mixed) approach overcomes these shortcomings of the pure top-down and bottom-up approaches.
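To show how a stub supports top-down integration testing (approach 3 above), the sketch below tests a high-level start_call() routine before the real codec driver exists; a stub stands in for codec_init(). All module and function names here are illustrative, not taken from a real code base.

/* The high-level module under test depends on a codec driver that is not
 * yet written, so a stub provides a canned response in its place. */
#include <stdio.h>

int codec_init(int sample_rate);        /* interface the real driver will implement */

/* Stub: records the call and pretends initialisation succeeded. */
int codec_init(int sample_rate)
{
    printf("stub codec_init called with sample_rate=%d\n", sample_rate);
    return 0;
}

/* High-level unit under test: it may start a call only if the codec is ready. */
int start_call(void)
{
    return codec_init(8000) == 0 ? 1 : 0;
}

int main(void)
{
    printf("start_call() -> %d (expected 1)\n", start_call());
    return 0;
}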

PACKAGING:

1) Advanced packaging technologies are needed to achieve the challenging design requirements. Current design problems are not driven by circuit design capabilities but by an inability to reliably package these circuits within the space constraints.
2) Innovative packaging techniques are required in order to meet the increasing size,
weight, power, and reliability requirements of this industry without sacrificing
electrical, mechanical, or thermal performance.
3) Emerging technologies such as those imbedding components within organic substrates
have proven capable of meeting and exceeding these design objectives.
4) Imbedded Component/Die Technology (IC/DT) addresses these design challenges
through imbedding both actives and passives into cavities within a multi-layer printed
circuit board (PCB) to decrease the surface area required to implement the circuit design
and increase the robustness of the overall assembly.
5) A passive thermal management approach is implemented with an integrated thermal
core imbedded within the multi-layer PCB to which high power components are
mounted directly.
