• Verification
– Are we building the system correctly?
– Eliminate bugs in the system
• Validation
– Are we building the correct system?
– Should already be handled by a thorough requirements analysis based on use cases
• What is the difficulty?
– Developers are not keen to find their own bugs
• What does the industry do?
– Use a team of testers whose main job is to find faults in the system
• typical tester : developer ratio ~ 1 : 3
– Testing takes up at least 30% of the development cost
Department of Information Engineering 2
Test levels
• Unit testing
– To test one and only one unit, usually a class
• Integration testing
– Usually based on use cases
– Integration and unit tests can be done together
• System testing
– Testing the entire system, typically from an end-user view
• Example
– Partition the states of a bank account into
• empty, positive and negative balance
– Partition the states of a stack into
• empty, half-full, full
– Partition the expected input values (0 to 100) into
• values at the boundary, outside the boundary, and normal values
• e.g. -10, -1, 0, 1, 40, 60, 99, 100, 110
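The boundary-value partition above can be exercised directly. A minimal sketch, assuming a hypothetical validator `accept_score` for the 0-to-100 input range:

```python
def accept_score(value):
    """Hypothetical unit under test: accepts inputs in the range 0 to 100."""
    return 0 <= value <= 100

# Test values taken from each partition: on the boundary (0, 100),
# just outside it (-1, 110), just inside it (1, 99), and normal (40, 60).
cases = {-10: False, -1: False, 0: True, 1: True,
         40: True, 60: True, 99: True, 100: True, 110: False}

for value, expected in cases.items():
    assert accept_score(value) == expected, value
print("all boundary cases pass")
```

Each partition needs only a representative value or two; a bug that affects one member of a partition typically affects them all.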
Unit testing: structure test (white-box test)
• Structure test
– Every statement has to be executed at least once
– [Flowchart fragment: a while ... loop and an if a < b branch]
• Test
– the most interesting paths
– the least-known paths
– the high-risk paths
• Test cases (input/result)
– Test cases A: 300/700, 600/4000, . . .
– Test cases B: 200/2040, . . .
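The statement-coverage criterion can be illustrated with a tiny made-up unit containing one `if` branch and one `while` loop; no single input executes every statement, so at least two test cases are needed:

```python
def clamp_sum(a, b, limit):
    """Hypothetical unit under test with one branch and one loop."""
    if a < b:
        a, b = b, a          # this statement runs only when a < b
    total = a + b
    while total > limit:     # the loop body runs only when the sum exceeds limit
        total -= limit
    return total

# Test case A takes the if-branch and skips the loop;
# test case B skips the branch and executes the loop body.
assert clamp_sum(1, 2, 10) == 3      # branch executed, loop skipped
assert clamp_sum(7, 5, 10) == 2      # branch skipped, loop executed (12 -> 2)
print("every statement executed at least once")
```

Choosing inputs per path, rather than per statement, is what makes white-box test design different from the black-box partitioning above.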
• Operation test
– Test the system in normal operation over a long period
– Use the system in the intended manner
– Only normal mistakes are made
– Measure the mean time to failure (MTTF)
Testing techniques
• Full-scale test
– Run the system at its maximum scale
– The system is used by many users
• Performance test or capacity test
– Measure the system performance under different loads
• Overload test
– Goes one step further than the full-scale test to see how the system behaves when it is overloaded
– Normal performance should not be expected, but at least the system should not go down and no catastrophe should occur
Testing techniques
• Negative tests
– Stress tests that try to use the system in ways it is not designed for, so as to reveal system weaknesses
– e.g. incorrect network configuration, insufficient hardware capacity, impossible workload
• Tests based on requirements
– Tests derived from the requirement specifications
• Testing of the user documentation
– to check the consistency between the manuals and the system behavior
Testing techniques
• Ergonomic test
– Test the man-machine interface
– Is the interface consistent with the use cases?
– Are the menus logical and readable for non-computer professionals?
– Can users understand the failure messages?
• Configuration tests
– Verify that the system works correctly in different configurations, such as different network configurations
• Two ways of using the attribute
– Simple way
• Use the attribute as a tag (a label)
• The tag can be identified by “reflection” and can do useful work, e.g. testing
– Advanced way
• Aspect programming, to be discussed later
• The alternative is to add serialization code to the class, but this pollutes the class with a non-core responsibility
• The “serializable” responsibility is factored (taken) out of the class
• Relentless testing (XP)
– Test all methods using different scenarios
• Repeat all your test cases whenever you change your code
– Regression test
• How to do this efficiently?
• The cornerstone of Extreme Programming (XP)
• Download NUnit V2.1 from www.nunit.org
– Study QuickStart.doc
[Class diagram: TestRunner runs a TestSuite, which is composed of many (*) TestCase objects; YourTestCase derives from TestCase; the runner calls testSuite.Run()]
Testdriven development (TDD)
• Write the test cases before you write the code!
• Work on one test at a time
• Keep the tests small
• TDD Golden Rule
– Never write code unless you have a test that
requires it
• Ref: http://www.parlezuml.com/tutorials/tdd_nunit/index_files/frame.htm
• But you still need to learn the NUnit framework
• The current version of NUnit is very easy to use owing to attribute programming
[TestFixture]
public class AccountTest {
    [Test] //the test case
    public void Deposit() {
        Account acc = new Account();
        float balanceBefore = acc.Balance;
        acc.Deposit(10.0F);
        float balanceAfter = acc.Balance;
        Assert.AreEqual(10.0F, balanceAfter - balanceBefore);
    }
}
• NUnit provides a rich set of assertions as static
methods of the Assert class
– Comparison test
• Assert.AreEqual( 1.0, sum);
– Condition test
• Assert.IsTrue(bool condition);
• Assert.IsNull(object anObject);
– Utility methods
• Assert.Fail(string message);
• Compile the program
• Start NUnit GUI
• Select the .dll file
• Run
• Use of Reflection
– Viewing metadata, performing type discovery
– Dynamic invocation
• to invoke properties and methods on objects dynamically instantiated based on type discovery
• ref: http://www.ondotnet.com/pub/a/dotnet/excerpt/prog_csharp_ch18/index.html?page=1
Create your own attribute: TestFixture
//custom attribute, targets include class & method
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method,
                AllowMultiple = true)]
public class TestFixtureAttribute : System.Attribute
{
    string str;
    public TestFixtureAttribute(string str) { //constructor
        this.str = str;
    }
    public string Message { //define a property
        get { return str; }
    }
}
• GetTypes() returns an array of Type objects
– Type[] types = a.GetTypes();
• Check whether the type has the attribute [TestFixture]
– object obj = type.GetCustomAttributes(false)[0];
  if (obj is TestFixtureAttribute)
  { //found the attribute instance, do something
      Console.WriteLine("{0}", ((TestFixtureAttribute)obj).Message);
  }
• Question
– What if some responsibilities are not confined to any particular object but are instead scattered throughout the system?
• The traditional way
– public class Foo {
      protected Logbook log;
      public void bar() {
          log.enter("bar() entered");
          ...
          ... //business logic of bar()
          ...
          log.enter("bar() quitted");
      }
  }
Classic example: logging facility
• The problems
– Messy; the code must be provided for every object; bad code reuse
– Objects are overloaded with non-core responsibilities
• The problem
– Some responsibilities cannot be cleanly
encapsulated in an object or method
– e.g. security, transaction, performance evaluation
• The solution
– Aspect-Oriented Programming (AOP)
• A typical system has
– Core concerns
• E.g. processing payments in bank applications
– System-level concerns
• Logging, transaction, security, . . .
• System-level concerns tend to crosscut (be shared by) many modules
– Such concerns are known as crosscutting concerns
• OOP
– Each object should have clear responsibility
– Good at addressing core concerns
– BUT can’t handle crosscutting concerns well
• Aspect-Oriented Programming (AOP)
– Solves the problem of crosscutting concerns with a pattern called Interception
public void bar() {
    log.enter("bar() entered");
    ... //business logic of bar()
    ...
    log.enter("bar() quitted");
}
What is changed?
• Add an attribute [Logbook]
– This attribute is associated with a component that carries out the logging operation
• Foo is now a subclass of ContextBoundObject
– Foo now consists of its core business logic only, no more crosscutting concerns
[Diagram: the client calls Foo through a transparent proxy; message sinks (Logbook, Security), inserted by the compiler, intercept each call on its way to Foo]
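The interception idea can be sketched in Python, where a decorator plays the role of the inserted message sink: the wrapper intercepts each call, performs the logging, and forwards to the core method (a minimal sketch, not the .NET mechanism itself):

```python
# Interception sketch: 'logbook' acts as the proxy/message sink that wraps
# bar(); the logging concern is woven around the call, not written inside it.
def logbook(fn):
    def wrapper(*args, **kwargs):
        print(f"{fn.__name__}() entered")    # crosscutting concern
        result = fn(*args, **kwargs)
        print(f"{fn.__name__}() quitted")
        return result
    return wrapper

class Foo:
    @logbook
    def bar(self):
        return "business logic only"         # core concern, no logging code

print(Foo().bar())
```

Foo's source contains only its core responsibility; swapping or removing the logging aspect never touches Foo itself, which is exactly the separation the slide's proxy/sink diagram depicts.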
• C#
– Interception supported by compiler
• Java
– Tools like AspectJ insert code into the source file
[Diagram: aspectual decomposition separates the business logic from the security, persistence, transaction and logging aspects; a weaver then performs aspectual recomposition to produce the implementation modules]
• References
• "Decouple Components by Injecting Custom Services into Your Object's Interception Chain" (http://msdn.microsoft.com/msdnmag/issues/03/03/ContextsinNET)
• For an overview, see the articles by Ramnivas (http://www.javaworld.com/javaworld/jw-01-2002/jw-0118-aspect.html)