Class Test Merged


Class Test – Part 1

1. What is an embedded system?

2. What are the design challenges in an embedded system?

3. What are the different performance metrics used in an embedded system?

4. What are the differences between General-purpose, Application-specific and Single-purpose processors?

5. Create a state machine which performs pattern recognition of ‘110’
   1. Draw the state transition diagram
   2. Draw the state table

1. What is an embedded system?


An embedded system employs a combination of hardware and software (a computational engine) to perform a set of specific functions. It may be part of a larger system that may not itself be a ‘computer’. It works in a reactive and time-constrained environment. Hardware is used for performance and sometimes security; software is used for providing features and flexibility.

2. What are the design challenges in an embedded system?


The challenge is to construct an implementation with the desired functionality while simultaneously optimizing various design metrics – measurable features of the system such as unit cost, NRE cost, size, performance, power, flexibility, weight, time-to-prototype, time-to-market, correctness, safety and maintainability. This requires expertise in both hardware and software, along with system-level knowledge.

3. What are the different performance metrics used in an embedded system?

The widely used measures of performance are clock frequency, instructions per second (MIPS), latency (response time), throughput and bandwidth (single-task and concurrent figures, in the case of concurrent tasks).
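As a toy illustration of how these metrics relate (the clock, CPI and task-size figures below are made up for the example, not taken from any question), a minimal C sketch:

#include <stdio.h>

int main(void)
{
    /* Illustrative figures only (not from any question): a 100 MHz core
     * averaging 1.25 cycles per instruction, running a 2000-instruction task. */
    const double clock_hz = 100e6;
    const double cpi      = 1.25;
    const double insns    = 2000.0;

    double mips       = clock_hz / cpi / 1e6;    /* millions of instructions per second */
    double latency_s  = insns * cpi / clock_hz;  /* response time of one task */
    double throughput = 1.0 / latency_s;         /* such tasks completed per second */

    printf("MIPS = %.0f, latency = %.0f us, throughput = %.0f tasks/s\n",
           mips, latency_s * 1e6, throughput);   /* 80 MIPS, 25 us, 40000 tasks/s */
    return 0;
}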
4. What are the differences between General-purpose, Application-specific and Single-purpose processors?

A general-purpose processor is a generic microprocessor or programmable device used in a wide variety of applications.

An Application-Specific Instruction-Set Processor (ASIP) is a programmable processor optimized for a particular class of applications having common characteristics. Microcontrollers and DSPs are examples of ASIPs.

A single-purpose processor is a state machine designed to execute exactly one program.

5. Create a state machine which performs pattern recognition of ‘110’

1. Draw the state transition diagram
2. Draw the state table
STATE  INPUT  NEXT-STATE  OUTPUT
S0     0      S0          0
S0     1      S1          0
S1     0      S0          0
S1     1      S2          0
S2     0      S0          1
S2     1      S2          0
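A minimal C sketch of the same recognizer, walking the state table above (the test input sequence is arbitrary):

#include <stdio.h>

/* States of the '110' recognizer, exactly as in the state table. */
typedef enum { S0, S1, S2 } state_t;

/* One clock step: update the state and return the output
 * (1 only on the transition S2 --0--> S0, i.e. when '110' completes). */
static int step(state_t *s, int in)
{
    int out = 0;
    switch (*s) {
    case S0: *s = in ? S1 : S0; break;
    case S1: *s = in ? S2 : S0; break;
    case S2: if (in) *s = S2; else { *s = S0; out = 1; } break;
    }
    return out;
}

int main(void)
{
    const int bits[] = {1, 1, 0, 1, 1, 0, 0};   /* arbitrary test input */
    state_t s = S0;
    for (unsigned i = 0; i < sizeof bits / sizeof bits[0]; i++)
        printf("in=%d out=%d\n", bits[i], step(&s, bits[i]));
    return 0;
}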

Class Test – Part 2


1. For a CPU with a simple (trivial) instruction set, convert the following C program into assembly using compiler techniques.

int total = 0;

for (int i = 10; i != 0; i--) total += i;
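One possible answer sketch: the loop lowered into label/goto form, which maps almost one-to-one onto a simple load/compare/branch instruction set. The instruction-set table referred to in the question is not reproduced here, so the mnemonics in the comments are generic ARM-like placeholders.

/* Hypothetical lowered form of the loop above, in label/goto style; each
 * statement maps onto one instruction of a simple load/compare/branch ISA. */
int sum_loop(void)
{
    int total;
    int i;

    total = 0;            /* MOV  Rtotal, #0          */
    i = 10;               /* MOV  Ri, #10             */
loop:
    if (i == 0)           /* CMP  Ri, #0              */
        goto done;        /* BEQ  done                */
    total = total + i;    /* ADD  Rtotal, Rtotal, Ri  */
    i = i - 1;            /* SUB  Ri, Ri, #1          */
    goto loop;            /* B    loop                */
done:
    return total;         /* result: 55               */
}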


The syntax of C in Backus-Naur Form

<translation-unit> ::= {<external-declaration>}*

<external-declaration> ::= <function-definition>


| <declaration>

<function-definition> ::= {<declaration-specifier>}* <declarator> {<declaration>}* <compound-statement>

<declaration-specifier> ::= <storage-class-specifier>


| <type-specifier>
| <type-qualifier>

<storage-class-specifier> ::= auto


| register
| static
| extern
| typedef

<type-specifier> ::= void


| char
| short
| int
| long
| float
| double
| signed
| unsigned
| <struct-or-union-specifier>
| <enum-specifier>
| <typedef-name>

<struct-or-union-specifier> ::= <struct-or-union> <identifier> { {<struct-declaration>}+ }
| <struct-or-union> { {<struct-declaration>}+ }
| <struct-or-union> <identifier>

<struct-or-union> ::= struct


| union

<struct-declaration> ::= {<specifier-qualifier>}* <struct-declarator-list>

<specifier-qualifier> ::= <type-specifier>


| <type-qualifier>

<struct-declarator-list> ::= <struct-declarator>


| <struct-declarator-list> , <struct-declarator>

<struct-declarator> ::= <declarator>


| <declarator> : <constant-expression>
| : <constant-expression>

<declarator> ::= {<pointer>}? <direct-declarator>


<pointer> ::= * {<type-qualifier>}* {<pointer>}?

<type-qualifier> ::= const


| volatile

<direct-declarator> ::= <identifier>


| ( <declarator> )
| <direct-declarator> [ {<constant-expression>}? ]
| <direct-declarator> ( <parameter-type-list> )
| <direct-declarator> ( {<identifier>}* )

<constant-expression> ::= <conditional-expression>

<conditional-expression> ::= <logical-or-expression>
| <logical-or-expression> ? <expression> : <conditional-expression>

<logical-or-expression> ::= <logical-and-expression>


| <logical-or-expression> || <logical-and-expression>

<logical-and-expression> ::= <inclusive-or-expression>


| <logical-and-expression> && <inclusive-or-expression>

<inclusive-or-expression> ::= <exclusive-or-expression>


| <inclusive-or-expression> | <exclusive-or-expression>

<exclusive-or-expression> ::= <and-expression>


| <exclusive-or-expression> ^ <and-expression>

<and-expression> ::= <equality-expression>


| <and-expression> & <equality-expression>

<equality-expression> ::= <relational-expression>


| <equality-expression> == <relational-expression>
| <equality-expression> != <relational-expression>

<relational-expression> ::= <shift-expression>


| <relational-expression> < <shift-expression>
| <relational-expression> > <shift-expression>
| <relational-expression> <= <shift-expression>
| <relational-expression> >= <shift-expression>

<shift-expression> ::= <additive-expression>


| <shift-expression> << <additive-expression>
| <shift-expression> >> <additive-expression>

<additive-expression> ::= <multiplicative-expression>


| <additive-expression> + <multiplicative-expression>
| <additive-expression> - <multiplicative-expression>

<multiplicative-expression> ::= <cast-expression>


| <multiplicative-expression> * <cast-expression>
| <multiplicative-expression> / <cast-expression>
| <multiplicative-expression> % <cast-expression>

<cast-expression> ::= <unary-expression>


| ( <type-name> ) <cast-expression>

<unary-expression> ::= <postfix-expression>


| ++ <unary-expression>
| -- <unary-expression>
| <unary-operator> <cast-expression>
| sizeof <unary-expression>
| sizeof <type-name>

<postfix-expression> ::= <primary-expression>


| <postfix-expression> [ <expression> ]
| <postfix-expression> ( {<assignment-expression>}* )
| <postfix-expression> . <identifier>
| <postfix-expression> -> <identifier>
| <postfix-expression> ++
| <postfix-expression> --

<primary-expression> ::= <identifier>


| <constant>
| <string>
| ( <expression> )

<constant> ::= <integer-constant>


| <character-constant>
| <floating-constant>
| <enumeration-constant>

<expression> ::= <assignment-expression>


| <expression> , <assignment-expression>

<assignment-expression> ::= <conditional-expression>
| <unary-expression> <assignment-operator> <assignment-expression>

<assignment-operator> ::= =
| *=
| /=
| %=
| +=
| -=
| <<=
| >>=
| &=
| ^=
| |=

<unary-operator> ::= &


| *
| +
| -
| ~
| !

<type-name> ::= {<specifier-qualifier>}+ {<abstract-declarator>}?

<parameter-type-list> ::= <parameter-list>


| <parameter-list> , ...

<parameter-list> ::= <parameter-declaration>


| <parameter-list> , <parameter-declaration>

<parameter-declaration> ::= {<declaration-specifier>}+ <declarator>


| {<declaration-specifier>}+ <abstract-declarator>
| {<declaration-specifier>}+

<abstract-declarator> ::= <pointer>


| <pointer> <direct-abstract-declarator>
| <direct-abstract-declarator>

<direct-abstract-declarator> ::= ( <abstract-declarator> )
| {<direct-abstract-declarator>}? [ {<constant-expression>}? ]
| {<direct-abstract-declarator>}? ( {<parameter-type-list>}? )

<enum-specifier> ::= enum <identifier> { <enumerator-list> }


| enum { <enumerator-list> }
| enum <identifier>

<enumerator-list> ::= <enumerator>


| <enumerator-list> , <enumerator>

<enumerator> ::= <identifier>


| <identifier> = <constant-expression>

<typedef-name> ::= <identifier>

<declaration> ::= {<declaration-specifier>}+ {<init-declarator>}*

<init-declarator> ::= <declarator>


| <declarator> = <initializer>

<initializer> ::= <assignment-expression>


| { <initializer-list> }
| { <initializer-list> , }

<initializer-list> ::= <initializer>


| <initializer-list> , <initializer>

<compound-statement> ::= { {<declaration>}* {<statement>}* }

<statement> ::= <labeled-statement>


| <expression-statement>
| <compound-statement>
| <selection-statement>
| <iteration-statement>
| <jump-statement>

<labeled-statement> ::= <identifier> : <statement>


| case <constant-expression> : <statement>
| default : <statement>

<expression-statement> ::= {<expression>}? ;

<selection-statement> ::= if ( <expression> ) <statement>


| if ( <expression> ) <statement> else <statement>
| switch ( <expression> ) <statement>

<iteration-statement> ::= while ( <expression> ) <statement>


| do <statement> while ( <expression> ) ;
| for ( {<expression>}? ; {<expression>}? ; {<expression>}? )
<statement>

<jump-statement> ::= goto <identifier> ;


| continue ;
| break ;
| return {<expression>}? ;
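As an illustration of how a compiler front end consumes such a grammar, a minimal recursive-descent sketch covering only the <additive-expression> and <multiplicative-expression> rules for integer constants (helper names such as peek and next are invented for this sketch):

#include <ctype.h>
#include <stdio.h>

/* Tiny recursive-descent evaluator for the <additive-expression> and
 * <multiplicative-expression> rules, with <primary-expression> restricted
 * to integer constants. The left recursion in the grammar is handled with
 * iteration, as is usual for recursive descent. */
static const char *p;                                /* cursor into the input */

static char peek(void) { while (*p == ' ') p++; return *p; }
static char next(void) { char c = peek(); p++; return c; }

static int primary(void)                             /* <constant> only */
{
    int v = 0;
    while (isdigit((unsigned char)peek()))
        v = v * 10 + (next() - '0');
    return v;
}

static int multiplicative(void)                      /* * and / */
{
    int v = primary();
    while (peek() == '*' || peek() == '/') {
        char op = next();
        int rhs = primary();
        v = (op == '*') ? v * rhs : v / rhs;
    }
    return v;
}

static int additive(void)                            /* + and - */
{
    int v = multiplicative();
    while (peek() == '+' || peek() == '-') {
        char op = next();
        int rhs = multiplicative();
        v = (op == '+') ? v + rhs : v - rhs;
    }
    return v;
}

int main(void)
{
    p = "2 + 3 * 4 - 5";
    printf("%d\n", additive());                      /* prints 9 */
    return 0;
}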

Class Test – Part 3


1. Use Exclusive Access instructions in ARM to implement semaphores.

2. The following decision tree based on input A[7:0] is to be coded using C and assembly.
1. Using Exclusive Access for Semaphores
Exclusive access instructions can be used for semaphore operations – to make sure that a resource is
used by only one task.

DeviceALocked = a data variable in memory to indicate that Device A is being used.

If a task wants to use Device A, it should check the status by reading the variable
DeviceALocked. If it is zero, it can write a 1 to DeviceALocked to lock the device.

After it has finished using the device, it can then clear DeviceALocked to zero so that other tasks can use it.

What will happen if two tasks try to access Device A at the same time?

In that case, possibly both the tasks will read the variable DeviceALocked, and both will get
zero. Then both of them will try writing back 1 to the variable DeviceALocked to lock the device, and
we’ll end up with both tasks believing that they have exclusive access to Device A.

This is where exclusive accesses are used. The STREX instruction has a return status, which indicates whether the exclusive store has been successful. If two tasks try to lock a device at the same time, the return status will be 1 (exclusive failed) and the task then knows that it needs to retry the lock. Note that the data write of STREX is not carried out if the exclusive monitor returns a fail status, preventing the lock bit from being set when the exclusive access fails.
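The LockDeviceA routine referred to below is not reproduced in this text. A minimal sketch of what such a function could look like, assuming the CMSIS-Core intrinsics __LDREXW, __STREXW, __CLREX and __DMB (the exact header depends on your device pack):

#include <stdint.h>
#include "core_cm3.h"      /* assumption: whichever CMSIS-Core/device header
                              provides __LDREXW, __STREXW, __CLREX, __DMB */

volatile uint32_t DeviceALocked = 0;   /* 0 = free, 1 = locked */

/* Returns 0 if the lock was obtained; non-zero means the caller should
 * back off and retry (the device was already locked, or the exclusive
 * store failed because something intervened between LDREX and STREX). */
uint32_t LockDeviceA(void)
{
    if (__LDREXW(&DeviceALocked) != 0) {
        __CLREX();                     /* already locked: drop the exclusive state */
        return 1;
    }
    if (__STREXW(1, &DeviceALocked) != 0)
        return 1;                      /* exclusive store failed */
    __DMB();                           /* make the lock visible before touching Device A */
    return 0;
}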

If the return status of this function LockDeviceA is 1 (exclusive failed), the application tasks should wait a
bit and retry later. In single-processor systems, the common cause of an exclusive access failing is an
interrupt occurring between the exclusive load and the exclusive store. If the code is run in privileged
mode, this situation can be prevented by setting an interrupt mask register such as PRIMASK for a short
time to increase the chance of getting the resource locked successfully.

In multiprocessor systems, aside from interrupts, the exclusive store could also fail if another processor
has accessed the same memory region. To detect memory access from different processors, the bus
interface requires exclusive access monitor hardware to detect whether there is an access from a
different bus master to a memory between the two exclusive accesses.

With this mechanism, we can be sure that only one task has access to a given resource. If the application cannot gain the lock on the resource after a number of retries, it might need to quit with a timeout error. For example, a task that locked a resource might have crashed while the lock remained set. In these situations, the OS should check which task is using the resource; if that task has completed or terminated without clearing the lock, the OS might need to unlock the resource.

If the process has started an exclusive access using LDREX and then found that the exclusive access is no
longer needed, it can use the CLREX instruction to clear the local record in the exclusive access monitor.
For the Cortex-M3, all exclusive memory transfers must be carried out sequentially. However, if the exclusive access control code has to be reused on other ARM cores, the Data Memory Barrier (DMB) instruction might need to be inserted between exclusive transfers to ensure correct ordering of the memory accesses.
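A matching unlock sketch under the same assumptions (the DMB is there only for portability to cores with weaker memory ordering, per the note above):

void UnlockDeviceA(void)
{
    __DMB();                           /* complete all Device A accesses first */
    DeviceALocked = 0;                 /* release the lock for other tasks */
}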

2. Using TBB, TBH and UBFX, SBFX to form a branching tree

UBFX and SBFX

These are the Unsigned and Signed Bit Field Extract instructions. The syntax is UBFX/SBFX Rd, Rn, #<lsb>, #<width>, which copies a field of <width> bits starting at bit <lsb> of Rn into Rd, zero-extended for UBFX and sign-extended for SBFX.

TBB and TBH


The TBB (Table Branch Byte) and TBH (Table Branch Halfword) instructions are for implementing branch tables. The TBB instruction uses a branch table of byte-size offsets, and TBH uses a branch table of halfword offsets.

Since the bit 0 of the program counter (PC) is always zero, the value in the branch table is multiplied by 2
before it is added to PC. Furthermore, because the PC value is the current instruction address plus 4, the
branch range for the TBB is (2 * 255) + 4 = 514.

The branch range for the TBH is (2 * 65535) + 4 = 131074. Both TBB and TBH support forward branch
only.
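The decision-tree figure for question 2 is not reproduced here. Purely as an illustration, a C fragment of the general shape that a Thumb-2 compiler can map onto UBFX (to extract a field) plus a TBB table branch (to dispatch on it); the field position is hypothetical, not taken from the missing figure:

#include <stdint.h>

/* Dispatch on a 2-bit field of A (bits [4:3] here, chosen arbitrarily).
 * A Thumb-2 compiler can extract the field with UBFX and implement the
 * dense switch below with a TBB table branch. */
int handle(uint8_t A)
{
    uint8_t field = (A >> 3) & 0x3;    /* UBFX r0, r0, #3, #2 */
    switch (field) {
    case 0:  return 10;
    case 1:  return 20;
    case 2:  return 30;
    default: return 40;
    }
}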
Acknowledgements :: The Definitive Guide to the ARM Cortex-M3 – Joseph Yiu

Class Test – Part 4


The task graph in Figure 1 shows a system specification with four tasks, T1 . . . T4. The tasks can be executed on
different components. Table 1 displays the execution times for the tasks on the different components as well as
the component cost. For example, the MIPS processor costs 200 units and can run task T1 in 5 ms and task T4 in 2
ms. Table 1 also shows the number of components available for each component type (MIPS, DSP, FPGA and ASIC). All components execute tasks sequentially – at any given time a component executes at most one task. Task execution is non-preemptive – once a task is started, it runs to completion.
a. Construct the design space by listing all possible design points. A design point consists of an allocation (selection of components), a binding (assignment of tasks to selected components) and a schedule (execution order for the tasks). Determine the total cost and the sum of execution times for each design point.
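Figure 1 and Table 1 are not reproduced here. Purely as a sketch of how the design points could be enumerated programmatically, the C code below uses placeholder values for every entry except the MIPS figures quoted in the text (cost 200, T1 = 5 ms, T4 = 2 ms); it also ignores the limits on component availability and the scheduling step:

#include <stdio.h>

#define NTASKS 4
#define NCOMP  4                       /* 0 = MIPS, 1 = DSP, 2 = FPGA, 3 = ASIC */

/* Placeholder tables -- only the MIPS figures quoted in the text are real;
 * every other entry is made up, and -1 marks entries treated as infeasible
 * in this sketch. */
static const int cost[NCOMP] = {200, 150, 100, 300};
static const int etime[NCOMP][NTASKS] = {
    { 5, -1, -1,  2},                  /* MIPS */
    { 4,  3, -1, -1},                  /* DSP  */
    {-1,  2,  6, -1},                  /* FPGA */
    {-1, -1,  1,  1},                  /* ASIC */
};

int main(void)
{
    int bind[NTASKS];
    /* Enumerate every binding of the four tasks to the four component types.
     * Each binding implies an allocation (the set of component types used);
     * we report its total cost and the sum of the task execution times. */
    for (bind[0] = 0; bind[0] < NCOMP; bind[0]++)
    for (bind[1] = 0; bind[1] < NCOMP; bind[1]++)
    for (bind[2] = 0; bind[2] < NCOMP; bind[2]++)
    for (bind[3] = 0; bind[3] < NCOMP; bind[3]++) {
        int totalCost = 0, totalTime = 0, ok = 1;
        int used[NCOMP] = {0};
        for (int t = 0; t < NTASKS; t++) {
            int c = bind[t];
            if (etime[c][t] < 0) { ok = 0; break; }
            totalTime += etime[c][t];
            if (!used[c]) { used[c] = 1; totalCost += cost[c]; }
        }
        if (ok)
            printf("binding %d%d%d%d: cost = %d, time = %d ms\n",
                   bind[0], bind[1], bind[2], bind[3], totalCost, totalTime);
    }
    return 0;
}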
Class Test – Part 5
Given a basic FPGA 4-input LUT, and assuming that you can connect wires
across the LUTs through the interconnect, create a 4-bit gray-to-binary
converter.
a> Create a truth table.

b> If each LUT has a 5 ns intrinsic delay, what would be the total delay for the conversion?

c> How many LUTs are required?

d> Assuming you had a 6-input LUT, what would then be the total delay for the conversion?

Hint :: B0 = g0; B(i) = g(i) xor B(i-1).


B0 = g0; B(i) = g(i) xor B(i-1).
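A minimal C sketch that applies this recurrence bit by bit (array index 0 playing the role of bit 0 in the hint; the example input is arbitrary):

#include <stdio.h>

/* Apply the hint's recurrence: B0 = g0; B(i) = g(i) xor B(i-1). */
static void gray_to_binary(const int g[], int b[], int n)
{
    b[0] = g[0];
    for (int i = 1; i < n; i++)
        b[i] = g[i] ^ b[i - 1];
}

int main(void)
{
    int g[4] = {1, 0, 1, 1};           /* arbitrary 4-bit gray input */
    int b[4];
    gray_to_binary(g, b, 4);
    for (int i = 0; i < 4; i++)
        printf("b%d = %d\n", i, b[i]);
    return 0;
}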

a> Truth Table

g2 g1 g0 | b2 b1 b0
 0  0  0 |  0  0  0
 0  0  1 |  0  1  1
 0  1  0 |  1  1  0
 0  1  1 |  1  0  1
 1  0  0 |  1  0  0
 1  0  1 |  1  1  1
 1  1  0 |  0  1  0
 1  1  1 |  0  0  1

b> Since the LUTs operate in parallel, the intrinsic delay of the full conversion is also 5 ns.

c> The resulting logic needs only 2 LUTs (3 with a pass-through is also a correct answer). The g0 → b0 path can be implemented through the simple interconnect.

d> With a 6-input LUT there would be no change in the total delay of the conversion for this example.
Class Test – Part 6
For a car controller scheduler, let C = worst-case execution time, T =
(sampling) period, D = deadline.
a> Speed measurement: C = 4 ms, T = 20 ms, D = 20 ms
b> ABS control: C = 10 ms, T = 40 ms, D = 40 ms
c> Fuel injection: C = 40 ms, T = 80 ms, D = 80 ms

1> Is the set of tasks schedulable?

2> Create a timetable of the tasks (whether schedulable or not). If there is any slack, mention it.

3> What is the CPU utilization?

4> Assume that each context switch takes 0.25 ms. Redraw the timetable of the tasks.
1> The set of tasks is schedulable.

2> The timetable is as follows; the initial slack is 4 ms.

3> The CPU utilization is 95%.
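A quick cross-check of this figure, summing Ci/Ti for the three tasks given in the question:

#include <stdio.h>

int main(void)
{
    /* C = worst-case execution time (ms), T = period (ms), from the question. */
    const double C[] = { 4.0, 10.0, 40.0 };   /* Speed, ABS, Fuel injection */
    const double T[] = {20.0, 40.0, 80.0 };
    double U = 0.0;
    for (int i = 0; i < 3; i++)
        U += C[i] / T[i];
    printf("CPU utilization = %.1f%%\n", U * 100.0);   /* prints 95.0% */
    return 0;
}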


4> The context switch happens 10 times (10 if you consider that the next cycle will again begin with the Speed task at t = 0; an answer of 9 times is also accepted with the benefit of the doubt). The CPU utilization would then be 97.5%.
