
Certified Tester Foundation Level Syllabus

Released Version 2011

International Software Testing Qualifications Board


Copyright Notice

This document may be copied in its entirety, or extracts made, if the source is acknowledged.

Copyright © International Software Testing Qualifications Board (hereinafter called ISTQB®)
ISTQB is a registered trademark of the International Software Testing Qualifications Board.

Copyright © 2011 the authors for the update 2011 (Thomas Müller (chair), Debra Friedenberg, and the ISTQB WG Foundation Level)
Copyright © 2010 the authors for the update 2010 (Thomas Müller (chair), Armin Beer, Martin Klonk, Rahul Verma)
Copyright © 2007 the authors for the update 2007 (Thomas Müller (chair), Dorothy Graham, Debra Friedenberg and Erik van Veenendaal)
Copyright © 2005, the authors (Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal).

All rights reserved.

The authors hereby transfer the copyright to the International Software Testing Qualifications Board (ISTQB). The authors (as current copyright holders) and ISTQB (as the future copyright holder) have agreed to the following conditions of use:
1) Any individual or training company may use this syllabus as the basis for a training course if the authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus, and provided that any advertisement of such a training course may mention the syllabus only after submission for official accreditation of the training materials to an ISTQB-recognized National Board.
2) Any individual or group of individuals may use this syllabus as the basis for articles, books, or other derivative writings if the authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus.
3) Any ISTQB-recognized National Board may translate this syllabus and license the syllabus (or its translation) to other parties.


Revision History

Version       Date                     Remarks
ISTQB 2011    Effective 1-Apr-2011     Certified Tester Foundation Level Syllabus Maintenance Release – see Appendix E – Release Notes
ISTQB 2010    Effective 30-Mar-2010    Certified Tester Foundation Level Syllabus Maintenance Release – see Appendix E – Release Notes
ISTQB 2007    01-May-2007              Certified Tester Foundation Level Syllabus Maintenance Release
ISTQB 2005    01-July-2005             Certified Tester Foundation Level Syllabus
ASQF V2.2     July-2003                ASQF Syllabus Foundation Level Version 2.2 "Lehrplan Grundlagen des Software-testens"
ISEB V2.0     25-Feb-1999              ISEB Software Testing Foundation Syllabus V2.0, 25 February 1999


Table of Contents

Acknowledgements
Introduction to this Syllabus
  Purpose of this Document
  The Certified Tester Foundation Level in Software Testing
  Learning Objectives/Cognitive Level of Knowledge
  The Examination
  Accreditation
  Level of Detail
  How this Syllabus is Organized
1. Fundamentals of Testing (K2)
  1.1 Why is Testing Necessary (K2)
    1.1.1 Software Systems Context (K1)
    1.1.2 Causes of Software Defects (K2)
    1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2)
    1.1.4 Testing and Quality (K2)
    1.1.5 How Much Testing is Enough? (K2)
  1.2 What is Testing? (K2)
  1.3 Seven Testing Principles (K2)
  1.4 Fundamental Test Process (K1)
    1.4.1 Test Planning and Control (K1)
    1.4.2 Test Analysis and Design (K1)
    1.4.3 Test Implementation and Execution (K1)
    1.4.4 Evaluating Exit Criteria and Reporting (K1)
    1.4.5 Test Closure Activities (K1)
  1.5 The Psychology of Testing (K2)
  1.6 Code of Ethics
2. Testing Throughout the Software Life Cycle (K2)
  2.1 Software Development Models (K2)
    2.1.1 V-model (Sequential Development Model) (K2)
    2.1.2 Iterative-incremental Development Models (K2)
    2.1.3 Testing within a Life Cycle Model (K2)
  2.2 Test Levels (K2)
    2.2.1 Component Testing (K2)
    2.2.2 Integration Testing (K2)
    2.2.3 System Testing (K2)
    2.2.4 Acceptance Testing (K2)
  2.3 Test Types (K2)
    2.3.1 Testing of Function (Functional Testing) (K2)
    2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2)
    2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2)
    2.3.4 Testing Related to Changes: Re-testing and Regression Testing (K2)
  2.4 Maintenance Testing (K2)
3. Static Techniques (K2)
  3.1 Static Techniques and the Test Process (K2)
  3.2 Review Process (K2)
    3.2.1 Activities of a Formal Review (K1)
    3.2.2 Roles and Responsibilities (K1)
    3.2.3 Types of Reviews (K2)
    3.2.4 Success Factors for Reviews (K2)
  3.3 Static Analysis by Tools (K2)
4. Test Design Techniques (K4)
  4.1 The Test Development Process (K3)
  4.2 Categories of Test Design Techniques (K2)
  4.3 Specification-based or Black-box Techniques (K3)
    4.3.1 Equivalence Partitioning (K3)
    4.3.2 Boundary Value Analysis (K3)
    4.3.3 Decision Table Testing (K3)
    4.3.4 State Transition Testing (K3)
    4.3.5 Use Case Testing (K2)
  4.4 Structure-based or White-box Techniques (K4)
    4.4.1 Statement Testing and Coverage (K4)
    4.4.2 Decision Testing and Coverage (K4)
    4.4.3 Other Structure-based Techniques (K1)
  4.5 Experience-based Techniques (K2)
  4.6 Choosing Test Techniques (K2)
5. Test Management (K3)
  5.1 Test Organization (K2)
    5.1.1 Test Organization and Independence (K2)
    5.1.2 Tasks of the Test Leader and Tester (K1)
  5.2 Test Planning and Estimation (K3)
    5.2.1 Test Planning (K2)
    5.2.2 Test Planning Activities (K3)
    5.2.3 Entry Criteria (K2)
    5.2.4 Exit Criteria (K2)
    5.2.5 Test Estimation (K2)
    5.2.6 Test Strategy, Test Approach (K2)
  5.3 Test Progress Monitoring and Control (K2)
    5.3.1 Test Progress Monitoring (K1)
    5.3.2 Test Reporting (K2)
    5.3.3 Test Control (K2)
  5.4 Configuration Management (K2)
  5.5 Risk and Testing (K2)
    5.5.1 Project Risks (K2)
    5.5.2 Product Risks (K2)
  5.6 Incident Management (K3)
6. Tool Support for Testing (K2)
  6.1 Types of Test Tools (K2)
    6.1.1 Tool Support for Testing (K2)
    6.1.2 Test Tool Classification (K2)
    6.1.3 Tool Support for Management of Testing and Tests (K1)
    6.1.4 Tool Support for Static Testing (K1)
    6.1.5 Tool Support for Test Specification (K1)
    6.1.6 Tool Support for Test Execution and Logging (K1)
    6.1.7 Tool Support for Performance and Monitoring (K1)
    6.1.8 Tool Support for Specific Testing Needs (K1)
  6.2 Effective Use of Tools: Potential Benefits and Risks (K2)
    6.2.1 Potential Benefits and Risks of Tool Support for Testing (for all tools) (K2)
    6.2.2 Special Considerations for Some Types of Tools (K1)
  6.3 Introducing a Tool into an Organization (K1)
7. References
  Standards
  Books
8. Appendix A – Syllabus Background
  History of this Document
  Objectives of the Foundation Certificate Qualification
  Objectives of the International Qualification (adapted from ISTQB meeting at Sollentuna, November 2001)
  Entry Requirements for this Qualification
  Background and History of the Foundation Certificate in Software Testing
9. Appendix B – Learning Objectives/Cognitive Level of Knowledge
  Level 1: Remember (K1)
  Level 2: Understand (K2)
  Level 3: Apply (K3)
  Level 4: Analyze (K4)
10. Appendix C – Rules Applied to the ISTQB Foundation Syllabus
  10.1.1 General Rules
  10.1.2 Current Content
  10.1.3 Learning Objectives
  10.1.4 Overall Structure
11. Appendix D – Notice to Training Providers
12. Appendix E – Release Notes
  Release 2010
  Release 2011
13. Index


Acknowledgements

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2011): Thomas Müller (chair), Debra Friedenberg. The core team thanks the review team (Dan Almog, Armin Beer, Rex Black, Julie Gardiner, Judy McKay, Tuula Pääkkönen, Eric Riou du Cosquier, Hans Schaefer, Stephanie Ulrich, Erik van Veenendaal) and all National Boards for the suggestions for the current version of the syllabus.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2010): Thomas Müller (chair), Rahul Verma, Martin Klonk and Armin Beer. The core team thanks the review team (Rex Black, Mette Bruhn-Pederson, Debra Friedenberg, Klaus Olsen, Judy McKay, Tuula Pääkkönen, Meile Posthuma, Hans Schaefer, Stephanie Ulrich, Pete Williams, Erik van Veenendaal) and all National Boards for their suggestions.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2007): Thomas Müller (chair), Dorothy Graham, Debra Friedenberg, and Erik van Veenendaal. The core team thanks the review team (Hans Schaefer, Stephanie Ulrich, Meile Posthuma, Anders Pettersson, and Wonil Kwon) and all the National Boards for their suggestions.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2005): Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal and the review team and all National Boards for their suggestions.


Introduction to this Syllabus

Purpose of this Document

This syllabus forms the basis for the International Software Testing Qualification at the Foundation Level. The International Software Testing Qualifications Board (ISTQB) provides it to the National Boards for them to accredit the training providers and to derive examination questions in their local language. Training providers will determine appropriate teaching methods and produce courseware for accreditation. The syllabus will help candidates in their preparation for the examination. Information on the history and background of the syllabus can be found in Appendix A.

The Certified Tester Foundation Level in Software Testing

The Foundation Level qualification is aimed at anyone involved in software testing. This includes people in roles such as testers, test analysts, test engineers, test consultants, test managers, user acceptance testers and software developers. This Foundation Level qualification is also appropriate for anyone who wants a basic understanding of software testing, such as project managers, quality managers, software development managers, business analysts, IT directors and management consultants. Holders of the Foundation Certificate will be able to go on to a higher-level software testing qualification.

Learning Objectives/Cognitive Level of Knowledge

Learning objectives are indicated for each section in this syllabus and classified as follows:
o K1: remember
o K2: understand
o K3: apply
o K4: analyze

Further details and examples of learning objectives are given in Appendix B.

All terms listed under "Terms" just below chapter headings shall be remembered (K1), even if not explicitly mentioned in the learning objectives.

The Examination

The Foundation Level Certificate examination will be based on this syllabus. Answers to examination questions may require the use of material based on more than one section of this syllabus. All sections of the syllabus are examinable.

The format of the examination is multiple choice. Exams may be taken as part of an accredited training course or taken independently (e.g., at an examination center or in a public exam). Completion of an accredited training course is not a prerequisite for the exam.

Accreditation

An ISTQB National Board may accredit training providers whose course material follows this syllabus. Training providers should obtain accreditation guidelines from the board or body that performs the accreditation. An accredited course is recognized as conforming to this syllabus, and is allowed to have an ISTQB examination as part of the course.

Further guidance for training providers is given in Appendix D.


Level of Detail

The level of detail in this syllabus allows internationally consistent teaching and examination. In order to achieve this goal, the syllabus consists of:
o General instructional objectives describing the intention of the Foundation Level
o A list of information to teach, including a description, and references to additional sources if required
o Learning objectives for each knowledge area, describing the cognitive learning outcome and mindset to be achieved
o A list of terms that students must be able to recall and understand
o A description of the key concepts to teach, including sources such as accepted literature or standards

The syllabus content is not a description of the entire knowledge area of software testing; it reflects the level of detail to be covered in Foundation Level training courses.

How this Syllabus is Organized

There are six major chapters. The top-level heading for each chapter shows the highest level of learning objectives that is covered within the chapter and specifies the time for the chapter. For example:

2. Testing Throughout the Software Life Cycle (K2)    115 minutes

This heading shows that Chapter 2 has learning objectives of K1 (assumed when a higher level is shown) and K2 (but not K3), and it is intended to take 115 minutes to teach the material in the chapter. Within each chapter there are a number of sections. Each section also has the learning objectives and the amount of time required. Subsections that do not have a time given are included within the time for the section.


1. Fundamentals of Testing (K2)    155 minutes

Learning Objectives for Fundamentals of Testing

The objectives identify what you will be able to do following the completion of each module.

1.1 Why is Testing Necessary? (K2)
LO-1.1.1 Describe, with examples, the way in which a defect in software can cause harm to a person, to the environment or to a company (K2)
LO-1.1.2 Distinguish between the root cause of a defect and its effects (K2)
LO-1.1.3 Give reasons why testing is necessary by giving examples (K2)
LO-1.1.4 Describe why testing is part of quality assurance and give examples of how testing contributes to higher quality (K2)
LO-1.1.5 Explain and compare the terms error, defect, fault, failure, and the corresponding terms mistake and bug, using examples (K2)

1.2 What is Testing? (K2)
LO-1.2.1 Recall the common objectives of testing (K1)
LO-1.2.2 Provide examples for the objectives of testing in different phases of the software life cycle (K2)
LO-1.2.3 Differentiate testing from debugging (K2)

1.3 Seven Testing Principles (K2)
LO-1.3.1 Explain the seven principles in testing (K2)

1.4 Fundamental Test Process (K1)
LO-1.4.1 Recall the five fundamental test activities and respective tasks from planning to closure (K1)

1.5 The Psychology of Testing (K2)
LO-1.5.1 Recall the psychological factors that influence the success of testing (K1)
LO-1.5.2 Contrast the mindset of a tester and of a developer (K2)


1.1 Why is Testing Necessary (K2)    20 minutes

Terms
Bug, defect, error, failure, fault, mistake, quality, risk

1.1.1 Software Systems Context (K1)

Software systems are an integral part of life, from business applications (e.g., banking) to consumer products (e.g., cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even cause injury or death.

1.1.2 Causes of Software Defects (K2)

A human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn't), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so.

Defects occur because human beings are fallible and because there is time pressure, complex code, complexity of infrastructure, changing technologies, and/or many system interactions.

Failures can be caused by environmental conditions as well. For example, radiation, magnetism, electronic fields, and pollution can cause faults in firmware or influence the execution of software by changing the hardware conditions.
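As an illustration (a hypothetical example added here, not official syllabus material), the following Python sketch shows a programmer's error producing a defect that only becomes an observable failure when the defective code is executed:

    def average(values):
        # The author's error produced a defect: the divisor should be
        # len(values), not len(values) - 1 (an off-by-one mistake).
        return sum(values) / (len(values) - 1)

    expected = 3.0
    actual = average([2, 4])   # executing the defective code...
    print("PASS" if actual == expected else f"FAIL: got {actual}")
    # -> FAIL: got 6.0  (the executed defect causes an observable failure)
    # Note: average([7]) would even raise ZeroDivisionError, a crash-style failure.

Until the defective statement is executed with input that triggers it, the defect stays in the code without causing any failure, which is why not all defects lead to failures.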

1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2)

Rigorous testing of systems and documentation can help to reduce the risk of problems occurring during operation and contribute to the quality of the software system, if the defects found are corrected before the system is released for operational use. Software testing may also be required to meet contractual or legal requirements, or industry-specific standards.

1.1.4 Testing and Quality (K2)

With the help of testing, it is possible to measure the quality of software in terms of defects found, for both functional and non-functional software requirements and characteristics (e.g., reliability, usability, efficiency, maintainability and portability). For more information on non-functional testing see Chapter 2; for more information on software characteristics see 'Software Engineering – Software Product Quality' (ISO 9126).

Testing can give confidence in the quality of the software if it finds few or no defects. A properly designed test that passes reduces the overall level of risk in a system. When testing does find defects, the quality of the software system increases when those defects are fixed.

Lessons should be learned from previous projects. By understanding the root causes of defects found in other projects, processes can be improved, which in turn should prevent those defects from reoccurring and, as a consequence, improve the quality of future systems. This is an aspect of quality assurance.

Testing should be integrated as one of the quality assurance activities (i.e., alongside development standards, training and defect analysis).


1.1.5 How Much Testing is Enough? (K2)

Deciding how much testing is enough should take account of the level of risk, including technical, safety, and business risks, and project constraints such as time and budget. Risk is discussed further in Chapter 5.

Testing should provide sufficient information to stakeholders to make informed decisions about the release of the software or system being tested, for the next development step or handover to customers.


1.2 What is Testing? (K2)    30 minutes

Terms
Debugging, requirement, review, test case, testing, test objective

Background
A common perception of testing is that it only consists of running tests, i.e., executing the software. This is part of testing, but not all of the testing activities.

Test activities exist before and after test execution. These activities include planning and control, choosing test conditions, designing and executing test cases, checking results, evaluating exit criteria, reporting on the testing process and system under test, and finalizing or completing closure activities after a test phase has been completed. Testing also includes reviewing documents (including source code) and conducting static analysis.

Both dynamic testing and static testing can be used as a means for achieving similar objectives, and will provide information that can be used to improve both the system being tested and the development and testing processes.

Testing can have the following objectives:
o Finding defects
o Gaining confidence about the level of quality
o Providing information for decision-making
o Preventing defects

The thought process and activities involved in designing tests early in the life cycle (verifying the test basis via test design) can help to prevent defects from being introduced into code. Reviews of documents (e.g., requirements) and the identification and resolution of issues also help to prevent defects appearing in the code.

Different viewpoints in testing take different objectives into account. For example, in development testing (e.g., component, integration and system testing), the main objective may be to cause as many failures as possible so that defects in the software are identified and can be fixed. In acceptance testing, the main objective may be to confirm that the system works as expected, to gain confidence that it has met the requirements. In some cases the main objective of testing may be to assess the quality of the software (with no intention of fixing defects), to give information to stakeholders of the risk of releasing the system at a given time. Maintenance testing often includes testing that no new defects have been introduced during development of the changes. During operational testing, the main objective may be to assess system characteristics such as reliability or availability.

Debugging and testing are different. Dynamic testing can show failures that are caused by defects. Debugging is the development activity that finds, analyzes and removes the cause of the failure. Subsequent re-testing by a tester ensures that the fix does indeed resolve the failure. The responsibility for these activities is usually that testers test and developers debug.

The process of testing and the testing activities are explained in Section 1.4.
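To see the testing/debugging split in miniature, consider this sketch using Python's standard unittest module (the leap_year function and its defect are invented for illustration):

    import unittest

    def leap_year(year):
        # Code under test (hypothetical): the century rule is missing,
        # so 1900 is wrongly reported as a leap year - a defect.
        return year % 4 == 0

    class LeapYearTest(unittest.TestCase):
        def test_1900_is_not_a_leap_year(self):
            # Testing compares actual behavior against the expected result;
            # this test fails, showing a failure caused by the defect.
            self.assertFalse(leap_year(1900))

    if __name__ == "__main__":
        unittest.main()

The failing test only shows that something is wrong. Finding, analyzing and removing the cause (adding the century rule) is debugging, a development activity; re-running the test afterwards to confirm the fix is re-testing.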


1.3 Seven Testing Principles (K2)    35 minutes

Terms
Exhaustive testing

Principles
A number of testing principles have been suggested over the past 40 years and offer general guidelines common for all testing.

Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts (see the short calculation following these principles).

Principle 3 – Early testing
To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.

Principle 4 – Defect clustering
Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.

Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.

Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
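A rough calculation (ours, not the syllabus') illustrates Principle 2: even a function taking only two 32-bit integer inputs has far too many input combinations to test exhaustively.

    # Input space of a function taking two 32-bit integers:
    combinations = (2 ** 32) ** 2          # 2**64, about 1.8e19 input pairs
    tests_per_second = 1_000_000           # an optimistic execution rate
    seconds_per_year = 60 * 60 * 24 * 365
    years = combinations / tests_per_second / seconds_per_year
    print(f"{combinations:.2e} combinations -> roughly {years:,.0f} years")
    # -> 1.84e+19 combinations -> roughly 584,942 years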


1.4 Fundamental Test Process (K1)    35 minutes

Terms
Confirmation testing, re-testing, exit criteria, incident, regression testing, test basis, test condition, test coverage, test data, test execution, test log, test plan, test procedure, test policy, test suite, test summary report, testware

Background
The most visible part of testing is test execution. But to be effective and efficient, test plans should also include time to be spent on planning the tests, designing test cases, preparing for execution and evaluating results.

The fundamental test process consists of the following main activities:
o Test planning and control
o Test analysis and design
o Test implementation and execution
o Evaluating exit criteria and reporting
o Test closure activities

Although logically sequential, the activities in the process may overlap or take place concurrently. Tailoring these main activities within the context of the system and the project is usually required.

1.4.1 Test Planning and Control (K1)

Test planning is the activity of defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.

Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, the testing activities should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.

Test planning and control tasks are defined in Chapter 5 of this syllabus.

1.4.2 Test Analysis and Design (K1)

Test analysis and design is the activity during which general testing objectives are transformed into tangible test conditions and test cases.

The test analysis and design activity has the following major tasks:
o Reviewing the test basis (such as requirements, software integrity level¹ (risk level), risk analysis reports, architecture, design, interface specifications)
o Evaluating testability of the test basis and test objects
o Identifying and prioritizing test conditions based on analysis of test items, the specification, behavior and structure of the software
o Designing and prioritizing high level test cases
o Identifying necessary test data to support the test conditions and test cases
o Designing the test environment setup and identifying any required infrastructure and tools
o Creating bi-directional traceability between test basis and test cases (pictured in the sketch after this list)

¹ The degree to which software complies or must comply with a set of stakeholder-selected software and/or software-based system characteristics (e.g., software complexity, risk assessment, safety level, security level, desired performance, reliability, or cost) which are defined to reflect the importance of the software to its stakeholders.
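Bi-directional traceability, named in the last task above, can be pictured as nothing more than a two-way mapping between test basis items and test cases. This sketch uses invented requirement and test case identifiers:

    # Forward traceability: test basis item (requirement) -> covering test cases.
    req_to_tests = {
        "REQ-7": ["TC-01", "TC-02"],
        "REQ-8": ["TC-03"],
    }

    # Backward traceability, derived from the same data: test case -> requirements.
    test_to_reqs = {}
    for req, tests in req_to_tests.items():
        for tc in tests:
            test_to_reqs.setdefault(tc, []).append(req)

    print(test_to_reqs)
    # -> {'TC-01': ['REQ-7'], 'TC-02': ['REQ-7'], 'TC-03': ['REQ-8']}

Keeping both directions available means one can ask "which tests cover this requirement?" as well as "which requirements does this failing test put at risk?".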


1.4.3 Test Implementation and Execution (K1)

Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.

Test implementation and execution has the following major tasks:
o Finalizing, implementing and prioritizing test cases (including the identification of test data)
o Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts
o Creating test suites from the test procedures for efficient test execution
o Verifying that the test environment has been set up correctly
o Verifying and updating bi-directional traceability between the test basis and test cases
o Executing test procedures either manually or by using test execution tools, according to the planned sequence
o Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware
o Comparing actual results with expected results (see the sketch after this list)
o Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g., a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed)
o Repeating test activities as a result of action taken for each discrepancy, for example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)
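The following sketch (illustrative only; the test cases, version string and log format are our assumptions, not syllabus material) shows three of these tasks in miniature: executing test cases in the planned sequence, comparing actual with expected results, and logging the outcome together with the version of the software under test.

    SUT_VERSION = "1.4.2"   # assumed identifier of the software under test

    def add(a, b):          # trivial stand-in for the software under test
        return a + b

    # Test suite in its planned execution sequence: (id, inputs, expected).
    test_suite = [
        ("TC-01", (2, 3), 5),
        ("TC-02", (-1, 1), 0),
        ("TC-03", (0, 0), 0),
    ]

    for test_id, inputs, expected in test_suite:
        actual = add(*inputs)                      # execute the test procedure
        outcome = "PASS" if actual == expected else "FAIL"
        # Log the outcome with the version of the software under test.
        print(f"{test_id} [v{SUT_VERSION}] expected={expected} "
              f"actual={actual} -> {outcome}")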

1.4.4 Evaluating Exit Criteria and Reporting (K1)

Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level (see Section 2.2).

Evaluating exit criteria has the following major tasks:
o Checking test logs against the exit criteria specified in test planning (a small sketch follows this list)
o Assessing if more tests are needed or if the exit criteria specified should be changed
o Writing a test summary report for stakeholders
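As a sketch of the first two tasks (the thresholds and measured figures below are invented; real exit criteria come from the test plan), checking test logs against exit criteria can amount to comparing measured figures with planned thresholds:

    # Exit criteria fixed during test planning (assumed thresholds).
    exit_criteria = {"min_pass_rate": 0.95, "max_open_critical_defects": 0}

    # Figures taken from the test logs (assumed measurements).
    measured = {"pass_rate": 0.97, "open_critical_defects": 1}

    met = (measured["pass_rate"] >= exit_criteria["min_pass_rate"] and
           measured["open_critical_defects"] <= exit_criteria["max_open_critical_defects"])
    print("Exit criteria met" if met else
          "Exit criteria not met: more tests needed or criteria must be reassessed")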

1.4.5 Test Closure Activities (K1)

Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. Test closure activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.


Test closure activities include the following major tasks:
o Checking which planned deliverables have been delivered
o Closing incident reports or raising change records for any that remain open
o Documenting the acceptance of the system
o Finalizing and archiving testware, the test environment and the test infrastructure for later reuse
o Handing over the testware to the maintenance organization
o Analyzing lessons learned to determine changes needed for future releases and projects
o Using the information gathered to improve test maturity


1.5 The Psychology of Testing (K2)    25 minutes

Terms
Error guessing, independence

Background
The mindset to be used while testing and reviewing is different from that used while developing software. With the right mindset developers are able to test their own code, but separation of this responsibility to a tester is typically done to help focus effort and provide additional benefits, such as an independent view by trained and professional testing resources. Independent testing may be carried out at any level of testing.

A certain degree of independence (avoiding the author bias) often makes the tester more effective at finding defects and failures. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. Several levels of independence can be defined as shown here from low to high:
o Tests designed by the person(s) who wrote the software under test (low level of independence)
o Tests designed by another person(s) (e.g., from the development team)
o Tests designed by a person(s) from a different organizational group (e.g., an independent test team) or test specialists (e.g., usability or performance test specialists)
o Tests designed by a person(s) from a different organization or company (i.e., outsourcing or certification by an external body)

People and projects are driven by objectives. People tend to align their plans with the objectives set by management and other stakeholders, for example, to find defects or to confirm that software meets its objectives. Therefore, it is important to clearly state the objectives of testing.

Identifying failures during testing may be perceived as criticism against the product and against the author. As a result, testing is often seen as a destructive activity, even though it is very constructive in the management of product risks. Looking for failures in a system requires curiosity, professional pessimism, a critical eye, attention to detail, good communication with development peers, and experience on which to base error guessing.

If errors, defects or failures are communicated in a constructive way, bad feelings between the testers and the analysts, designers and developers can be avoided. This applies to defects found during reviews as well as in testing.

The tester and test leader need good interpersonal skills to communicate factual information about defects, progress and risks in a constructive way. For the author of the software or document, defect information can help them improve their skills. Defects found and fixed during testing will save time and money later, and reduce risks.

Communication problems may occur, particularly if testers are seen only as messengers of unwanted news about defects. However, there are several ways to improve communication and relationships between testers and others:


o Start with collaboration rather than battles – remind everyone of the common goal of better quality systems
o Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it, for example, write objective and factual incident reports and review findings
o Try to understand how the other person feels and why they react as they do
o Confirm that the other person has understood what you have said and vice versa


1.6 Code of Ethics    10 minutes

Involvement in software testing enables individuals to learn confidential and privileged information. A code of ethics is necessary, among other reasons to ensure that the information is not put to inappropriate use. Recognizing the ACM and IEEE code of ethics for engineers, the ISTQB states the following code of ethics:

PUBLIC – Certified software testers shall act consistently with the public interest
CLIENT AND EMPLOYER – Certified software testers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest
PRODUCT – Certified software testers shall ensure that the deliverables they provide (on the products and systems they test) meet the highest professional standards possible
JUDGMENT – Certified software testers shall maintain integrity and independence in their professional judgment
MANAGEMENT – Certified software test managers and leaders shall subscribe to and promote an ethical approach to the management of software testing
PROFESSION – Certified software testers shall advance the integrity and reputation of the profession consistent with the public interest
COLLEAGUES – Certified software testers shall be fair to and supportive of their colleagues, and promote cooperation with software developers
SELF – Certified software testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession

References
1.1.5 Black, 2001, Kaner, 2002
1.2 Beizer, 1990, Black, 2001, Myers, 1979
1.3 Beizer, 1990, Hetzel, 1988, Myers, 1979
1.4 Hetzel, 1988
1.4.5 Black, 2001, Craig, 2002
1.5 Black, 2001, Hetzel, 1988


2. Testing Throughout the Software Life Cycle (K2)    115 minutes

Learning Objectives for Testing Throughout the Software Life Cycle

The objectives identify what you will be able to do following the completion of each module.

2.1 Software Development Models (K2)
LO-2.1.1 Explain the relationship between development, test activities and work products in the development life cycle, by giving examples using project and product types (K2)
LO-2.1.2 Recognize the fact that software development models must be adapted to the context of project and product characteristics (K1)
LO-2.1.3 Recall characteristics of good testing that are applicable to any life cycle model (K1)

2.2 Test Levels (K2)
LO-2.2.1 Compare the different levels of testing: major objectives, typical objects of testing, typical targets of testing (e.g., functional or structural) and related work products, people who test, types of defects and failures to be identified (K2)

2.3 Test Types (K2)
LO-2.3.1 Compare four software test types (functional, non-functional, structural and change-related) by example (K2)
LO-2.3.2 Recognize that functional and structural tests occur at any test level (K1)
LO-2.3.3 Identify and describe non-functional test types based on non-functional requirements (K2)
LO-2.3.4 Identify and describe test types based on the analysis of a software system's structure or architecture (K2)
LO-2.3.5 Describe the purpose of confirmation testing and regression testing (K2)

2.4 Maintenance Testing (K2)
LO-2.4.1 Compare maintenance testing (testing an existing system) to testing a new application with respect to test types, triggers for testing and amount of testing (K2)
LO-2.4.2 Recognize indicators for maintenance testing (modification, migration and retirement) (K1)
LO-2.4.3 Describe the role of regression testing and impact analysis in maintenance (K2)


2.1 Software Development Models (K2)

20 minutes

Terms
Commercial Off-The-Shelf (COTS), iterative-incremental development model, validation, verification, V-model

Background
Testing does not exist in isolation; test activities are related to software development activities. Different development life cycle models need different approaches to testing.

2.1.1 V-model (Sequential Development Model) (K2)

Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels.
The four levels used in this syllabus are:
o Component (unit) testing
o Integration testing
o System testing
o Acceptance testing
In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing.
Software work products (such as business scenarios or use cases, requirements specifications, design documents and code) produced during development are often the basis of testing in one or more test levels. References for generic work products include Capability Maturity Model Integration (CMMI) or 'Software life cycle processes' (IEEE/IEC 12207). Verification and validation (and early test design) can be carried out during the development of the software work products.

2.1.2 Iterative-incremental Development Models (K2)

Iterative-incremental development is the process of establishing requirements, designing, building and testing a system in a series of short development cycles. Examples are: prototyping, Rapid Application Development (RAD), Rational Unified Process (RUP) and agile development models. A system that is produced using these models may be tested at several test levels during each iteration. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the first one. Verification and validation can be carried out on each increment.

2.1.3 Testing within a Life Cycle Model (K2)

In any life cycle model, there are several characteristics of good testing:
o For every development activity there is a corresponding testing activity
o Each test level has test objectives specific to that level
o The analysis and design of tests for a given test level should begin during the corresponding development activity
o Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle
Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For example, for the integration of a Commercial Off-The-Shelf (COTS) software product into a system, the purchaser may perform integration testing at the system level (e.g., integration to the infrastructure and other systems, or system deployment) and acceptance testing (functional and/or non-functional, and user and/or operational testing).


2.2 Test Levels (K2)

40 minutes

Terms
Alpha testing, beta testing, component testing, driver, field testing, functional requirement, integration, integration testing, non-functional requirement, robustness testing, stub, system testing, test environment, test level, test-driven development, user acceptance testing

Background
For each of the test levels, the following can be identified: the generic objectives, the work product(s) being referenced for deriving test cases (i.e., the test basis), the test object (i.e., what is being tested), typical defects and failures to be found, test harness requirements and tool support, and specific approaches and responsibilities.
Testing a system's configuration data shall be considered during test planning.

2.2.1 Component Testing (K2)

Test basis:
o Component requirements
o Detailed design
o Code
Typical test objects:
o Components
o Programs
o Data conversion / migration programs
o Database modules
Component testing (also known as unit, module or program testing) searches for defects in, and verifies the functioning of, software modules, programs, objects, classes, etc., that are separately testable. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Stubs, drivers and simulators may be used.
Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behavior (e.g., searching for memory leaks) or robustness testing, as well as structural testing (e.g., decision coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.
Typically, component testing occurs with access to the code being tested and with the support of a development environment, such as a unit test framework or debugging tool. In practice, component testing usually involves the programmer who wrote the code. Defects are typically fixed as soon as they are found, without formally managing these defects.
One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests, correcting any issues and iterating until they pass.
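For illustration only (the syllabus prescribes no language or framework), the following Python sketch shows a component test in which a stub replaces a real dependency so the component can be tested in isolation; all names are hypothetical:

```python
import unittest

class PriceServiceStub:
    """Stub: stands in for a real price service so the component
    under test can be exercised in isolation."""
    def get_price(self, item_id):
        return 100.0  # fixed, known value instead of a real lookup

def discounted_price(service, item_id, discount_rate):
    """Hypothetical component under test: applies a discount to a looked-up price."""
    if not 0.0 <= discount_rate <= 1.0:
        raise ValueError("discount_rate must be between 0 and 1")
    return service.get_price(item_id) * (1.0 - discount_rate)

class DiscountComponentTest(unittest.TestCase):
    def test_applies_discount(self):
        self.assertEqual(discounted_price(PriceServiceStub(), "A1", 0.25), 75.0)

    def test_rejects_invalid_rate(self):
        with self.assertRaises(ValueError):
            discounted_price(PriceServiceStub(), "A1", 1.5)

if __name__ == "__main__":
    unittest.main()
```

In a test-first approach, tests like these would be written and automated before the component code itself.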


2.2.2 Integration Testing (K2)

Test basis:
o Software and system design
o Architecture
o Workflows
o Use cases
Typical test objects:
o Subsystems
o Database implementation
o Infrastructure
o Interfaces
o System configuration and configuration data
Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system and hardware, and interfaces between systems.
There may be more than one level of integration testing and it may be carried out on test objects of varying size as follows:
1. Component integration testing tests the interactions between software components and is done after component testing
2. System integration testing tests the interactions between different systems or between hardware and software and may be done after system testing. In this case, the developing organization may control only one side of the interface. This might be considered as a risk. Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant.
The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting.
Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or components. In order to ease fault isolation and detect defects early, integration should normally be incremental rather than "big bang".
Testing of specific non-functional characteristics (e.g., performance) may be included in integration testing as well as functional testing.
At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B they are interested in testing the communication between the modules, not the functionality of the individual module as that was done during component testing. Both functional and structural approaches may be used.
Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, those components can be built in the order required for most efficient testing.


2.2.3 System Testing (K2)

Test basis:
o System and software requirement specification
o Use cases
o Functional specification
o Risk analysis reports
Typical test objects:
o System, user and operation manuals
o System configuration and configuration data
System testing is concerned with the behavior of a whole system/product. The testing scope shall be clearly addressed in the Master and/or Level Test Plan for that test level.
In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.
System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level text descriptions or models of system behavior, interactions with the operating system, and system resources.
System testing should investigate functional and non-functional requirements of the system, and data quality characteristics. Testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation (see Chapter 4).
An independent test team often carries out system testing.

2.2.4 Acceptance Testing (K2)

Test basis:
o User requirements
o System requirements
o Use cases
o Business processes
o Risk analysis reports
Typical test objects:
o Business processes on fully integrated system
o Operational and maintenance processes
o User procedures
o Forms
o Reports
o Configuration data
Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well.
The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system's readiness for deployment and use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.
Acceptance testing may occur at various times in the life cycle, for example:
o A COTS software product may be acceptance tested when it is installed or integrated
o Acceptance testing of the usability of a component may be done during component testing
o Acceptance testing of a new functional enhancement may come before system testing
Typical forms of acceptance testing include the following:
User acceptance testing
Typically verifies the fitness for use of the system by business users.
Operational (acceptance) testing
The acceptance of the system by the system administrators, including:
o Testing of backup/restore
o Disaster recovery
o User management
o Maintenance tasks
o Data load and migration tasks
o Periodic checks of security vulnerabilities
Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Regulation acceptance testing is performed against any regulations that must be adhered to, such as government, legal or safety regulations.
Alpha and beta (or field) testing
Developers of market, or COTS, software often want to get feedback from potential or existing customers in their market before the software product is put up for sale commercially. Alpha testing is performed at the developing organization's site but not by the developing team. Beta testing, or field-testing, is performed by customers or potential customers at their own locations.
Organizations may use other terms as well, such as factory acceptance testing and site acceptance testing for systems that are tested before and after being moved to a customer's site.


2.3 Test Types (K2)

40 minutes

Terms
Black-box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, stress testing, structural testing, usability testing, white-box testing

Background
A group of test activities can be aimed at verifying the software system (or a part of a system) based on a specific reason or target for testing.
A test type is focused on a particular test objective, which could be any of the following:
o A function to be performed by the software
o A non-functional quality characteristic, such as reliability or usability
o The structure or architecture of the software or system
o Change related, i.e., confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing)
A model of the software may be developed and/or used in structural testing (e.g., a control flow model or menu structure model), non-functional testing (e.g., performance model, usability model, security threat modeling), and functional testing (e.g., a process flow model, a state transition model or a plain language specification).

2.3.1 Testing of Function (Functional Testing) (K2)

The functions that a system, subsystem or component are to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are "what" the system does.
Functional tests are based on functions and features (described in documents or understood by the testers) and their interoperability with specific systems, and may be performed at all test levels (e.g., tests for components may be based on a component specification).
Specification-based techniques may be used to derive test conditions and test cases from the functionality of the software or system (see Chapter 4). Functional testing considers the external behavior of the software (black-box testing).
A type of functional testing, security testing, investigates the functions (e.g., a firewall) relating to detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.

2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2)
Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of "how" the system works.
Non-functional testing may be performed at all test levels. The term non-functional testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, such as response times for performance testing. These tests can be referenced to a quality model such as the one defined in 'Software Engineering – Software Product Quality' (ISO 9126). Non-functional testing considers the external behavior of the software and in most cases uses black-box test design techniques to accomplish that.

2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2)

Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.
Coverage is the extent that a structure has been exercised by a test suite, expressed as a percentage of the items being covered. If coverage is not 100%, then more tests may be designed to test those items that were missed to increase coverage. Coverage techniques are covered in Chapter 4.
At all test levels, but especially in component testing and component integration testing, tools can be used to measure the code coverage of elements, such as statements or decisions. Structural testing may be based on the architecture of the system, such as a calling hierarchy. Structural testing approaches can also be applied at system, system integration or acceptance testing levels (e.g., to business models or menu structures).

2.3.4 Testing Related to Changes: Re-testing and Regression Testing (K2)

After a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed. This is called confirmation. Debugging (locating and fixing a defect) is a development activity, not a testing activity.
Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). These defects may be either in the software being tested, or in another related or unrelated software component. It is performed when the software, or its environment, is changed. The extent of regression testing is based on the risk of not finding defects in software that was working previously.
Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing.
Regression testing may be performed at all test levels, and includes functional, non-functional and structural testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.
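As an illustrative sketch only (hypothetical example, not part of the syllabus), an automated regression suite in Python might simply bundle previously passing tests so they can be re-run unchanged, and cheaply, after every modification:

```python
import unittest

def apply_discount(price, rate):
    # Previously released function; any change to it should trigger
    # a re-run of the regression suite below.
    return round(price * (1.0 - rate), 2)

class RegressionSuite(unittest.TestCase):
    """Previously passing tests, re-run unchanged after each change."""
    def test_no_discount(self):
        self.assertEqual(apply_discount(50.0, 0.0), 50.0)

    def test_half_discount(self):
        self.assertEqual(apply_discount(50.0, 0.5), 25.0)

if __name__ == "__main__":
    # Because the suite is automated, it can be executed after every
    # modification at negligible cost.
    unittest.main()
```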


2.4 Maintenance Testing (K2)

15 minutes

Terms
Impact analysis, maintenance testing

Background
Once deployed, a software system is often in service for years or decades. During this time the system, its configuration data, or its environment are often corrected, changed or extended. The planning of releases in advance is crucial for successful maintenance testing. A distinction has to be made between planned releases and hot fixes. Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement of the software or system.
Modifications include planned enhancement changes (e.g., release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, planned upgrade of Commercial-Off-The-Shelf software, or patches to correct newly exposed or discovered vulnerabilities of the operating system.
Maintenance testing for migration (e.g., from one platform to another) should include operational tests of the new environment as well as of the changed software. Migration testing (conversion testing) is also needed when data from another application will be migrated into the system being maintained.
Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.
In addition to testing what has been changed, maintenance testing includes regression testing to parts of the system that have not been changed. The scope of maintenance testing is related to the risk of the change, the size of the existing system and to the size of the change. Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types. Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing to do. The impact analysis may be used to determine the regression test suite.
Maintenance testing can be difficult if specifications are out of date or missing, or testers with domain knowledge are not available.

References
2.1.3 CMMI, Craig, 2002, Hetzel, 1988, IEEE 12207
2.2 Hetzel, 1988
2.2.4 Copeland, 2004, Myers, 1979
2.3.1 Beizer, 1990, Black, 2001, Copeland, 2004
2.3.2 Black, 2001, ISO 9126
2.3.3 Beizer, 1990, Copeland, 2004, Hetzel, 1988
2.3.4 Hetzel, 1988, IEEE STD 829-1998
2.4 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE STD 829-1998


3. Static Techniques (K2)

60 minutes

Learning Objectives for Static Techniques
The objectives identify what you will be able to do following the completion of each module.

3.1 Static Techniques and the Test Process (K2)
LO-3.1.1 Recognize software work products that can be examined by the different static techniques (K1)
LO-3.1.2 Describe the importance and value of considering static techniques for the assessment of software work products (K2)
LO-3.1.3 Explain the difference between static and dynamic techniques, considering objectives, types of defects to be identified, and the role of these techniques within the software life cycle (K2)

3.2 Review Process (K2)
LO-3.2.1 Recall the activities, roles and responsibilities of a typical formal review (K1)
LO-3.2.2 Explain the differences between different types of reviews: informal review, technical review, walkthrough and inspection (K2)
LO-3.2.3 Explain the factors for successful performance of reviews (K2)

3.3 Static Analysis by Tools (K2)
LO-3.3.1 Recall typical defects and errors identified by static analysis and compare them to reviews and dynamic testing (K1)
LO-3.3.2 Describe, using examples, the typical benefits of static analysis (K2)
LO-3.3.3 List typical code and design defects that may be identified by static analysis tools (K1)


3.1 Static Techniques and the Test Process (K2)

15 minutes

Terms
Dynamic testing, static testing

Background
Unlike dynamic testing, which requires the execution of software, static testing techniques rely on the manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation without the execution of the code.
Reviews are a way of testing software work products (including code) and can be performed well before dynamic test execution. Defects detected during reviews early in the life cycle (e.g., defects found in requirements) are often much cheaper to remove than those detected by running tests on the executing code.
A review could be done entirely as a manual activity, but there is also tool support. The main manual activity is to examine a work product and make comments about it. Any software work product can be reviewed, including requirements specifications, design specifications, code, test plans, test specifications, test cases, test scripts, user guides or web pages.
Benefits of reviews include early defect detection and correction, development productivity improvements, reduced development timescales, reduced testing cost and time, lifetime cost reductions, fewer defects and improved communication. Reviews can find omissions, for example, in requirements, which are unlikely to be found in dynamic testing.
Reviews, static analysis and dynamic testing have the same objective – identifying defects. They are complementary; the different techniques can find different types of defects effectively and efficiently. Compared to dynamic testing, static techniques find causes of failures (defects) rather than the failures themselves.
Typical defects that are easier to find in reviews than in dynamic testing include: deviations from standards, requirement defects, design defects, insufficient maintainability and incorrect interface specifications.


3.2 Review Process (K2)

25 minutes

Terms
Entry criteria, formal review, informal review, inspection, metric, moderator, peer review, reviewer, scribe, technical review, walkthrough

Background
The different types of reviews vary from informal, characterized by no written instructions for reviewers, to systematic, characterized by team participation, documented results of the review, and documented procedures for conducting the review. The formality of a review process is related to factors such as the maturity of the development process, any legal or regulatory requirements or the need for an audit trail.
The way a review is carried out depends on the agreed objectives of the review (e.g., find defects, gain understanding, educate testers and new team members, or discussion and decision by consensus).

3.2.1 Activities of a Formal Review (K1)

A typical formal review has the following main activities:
1. Planning
• Defining the review criteria
• Selecting the personnel
• Allocating roles
• Defining the entry and exit criteria for more formal review types (e.g., inspections)
• Selecting which parts of documents to review
• Checking entry criteria (for more formal review types)
2. Kick-off
• Distributing documents
• Explaining the objectives, process and documents to the participants
3. Individual preparation
• Preparing for the review meeting by reviewing the document(s)
• Noting potential defects, questions and comments
4. Examination/evaluation/recording of results (review meeting)
• Discussing or logging, with documented results or minutes (for more formal review types)
• Noting defects, making recommendations regarding handling the defects, making decisions about the defects
• Examining/evaluating and recording issues during any physical meetings or tracking any group electronic communications
5. Rework
• Fixing defects found (typically done by the author)
• Recording updated status of defects (in formal reviews)
6. Follow-up
• Checking that defects have been addressed
• Gathering metrics
• Checking on exit criteria (for more formal review types)

3.2.2 Roles and Responsibilities (K1)

A typical formal review will include the roles below:
o Manager: decides on the execution of reviews, allocates time in project schedules and determines if the review objectives have been met.
o Moderator: the person who leads the review of the document or set of documents, including planning the review, running the meeting, and following-up after the meeting. If necessary, the moderator may mediate between the various points of view and is often the person upon whom the success of the review rests.
o Author: the writer or person with chief responsibility for the document(s) to be reviewed.
o Reviewers: individuals with a specific technical or business background (also called checkers or inspectors) who, after the necessary preparation, identify and describe findings (e.g., defects) in the product under review. Reviewers should be chosen to represent different perspectives and roles in the review process, and should take part in any review meetings.
o Scribe (or recorder): documents all the issues, problems and open points that were identified during the meeting.

Looking at software products or related work products from different perspectives and using checklists can make reviews more effective and efficient. For example, a checklist based on various perspectives such as user, maintainer, tester or operations, or a checklist of typical requirements problems may help to uncover previously undetected issues.

3.2.3 Types of Reviews (K2)

A single software product or related work product may be the subject of more than one review. If more than one type of review is used, the order may vary. For example, an informal review may be carried out before a technical review, or an inspection may be carried out on a requirements specification before a walkthrough with customers. The main characteristics, options and purposes of common review types are:

Informal Review
o No formal process
o May take the form of pair programming or a technical lead reviewing designs and code
o Results may be documented
o Varies in usefulness depending on the reviewers
o Main purpose: inexpensive way to get some benefit

Walkthrough
o Meeting led by author
o May take the form of scenarios, dry runs, peer group participation
o Open-ended sessions
• Optional pre-meeting preparation of reviewers
• Optional preparation of a review report including list of findings
o Optional scribe (who is not the author)
o May vary in practice from quite informal to very formal
o Main purposes: learning, gaining understanding, finding defects

Technical Review
o Documented, defined defect-detection process that includes peers and technical experts with optional management participation
o May be performed as a peer review without management participation
o Ideally led by trained moderator (not the author)
o Pre-meeting preparation by reviewers
o Optional use of checklists
o Preparation of a review report which includes the list of findings, the verdict whether the software product meets its requirements and, where appropriate, recommendations related to findings
o May vary in practice from quite informal to very formal
o Main purposes: discussing, making decisions, evaluating alternatives, finding defects, solving technical problems and checking conformance to specifications, plans, regulations, and standards


Inspection
o Led by trained moderator (not the author)
o Usually conducted as a peer examination
o Defined roles
o Includes metrics gathering
o Formal process based on rules and checklists
o Specified entry and exit criteria for acceptance of the software product
o Pre-meeting preparation
o Inspection report including list of findings
o Formal follow-up process (with optional process improvement components)
o Optional reader
o Main purpose: finding defects

Walkthroughs, technical reviews and inspections can be performed within a peer group, i.e., colleagues at the same organizational level. This type of review is called a "peer review".

3.2.4 Success Factors for Reviews (K2)

Success factors for reviews include:
o Each review has clear predefined objectives
o The right people for the review objectives are involved
o Testers are valued reviewers who contribute to the review and also learn about the product, which enables them to prepare tests earlier
o Defects found are welcomed and expressed objectively
o People issues and psychological aspects are dealt with (e.g., making it a positive experience for the author)
o The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants
o Review techniques are applied that are suitable to achieve the objectives and to the type and level of software work products and reviewers
o Checklists or roles are used if appropriate to increase effectiveness of defect identification
o Training is given in review techniques, especially the more formal techniques such as inspection
o Management supports a good review process (e.g., by incorporating adequate time for review activities in project schedules)
o There is an emphasis on learning and process improvement


3.3 Static Analysis by Tools (K2)

20 minutes

Terms
Compiler, complexity, control flow, data flow, static analysis

Background
The objective of static analysis is to find defects in software source code and software models. Static analysis is performed without actually executing the software being examined by the tool; dynamic testing does execute the software code. Static analysis can locate defects that are hard to find in dynamic testing. As with reviews, static analysis finds defects rather than failures. Static analysis tools analyze program code (e.g., control flow and data flow), as well as generated output such as HTML and XML.
The value of static analysis is:
o Early detection of defects prior to test execution
o Early warning about suspicious aspects of the code or design by the calculation of metrics, such as a high complexity measure
o Identification of defects not easily found by dynamic testing
o Detecting dependencies and inconsistencies in software models such as links
o Improved maintainability of code and design
o Prevention of defects, if lessons are learned in development
Typical defects discovered by static analysis tools include:
o Referencing a variable with an undefined value
o Inconsistent interfaces between modules and components
o Variables that are not used or are improperly declared
o Unreachable (dead) code
o Missing and erroneous logic (potentially infinite loops)
o Overly complicated constructs
o Programming standards violations
o Security vulnerabilities
o Syntax violations of code and software models
Static analysis tools are typically used by developers (checking against predefined rules or programming standards) before and during component and integration testing or when checking-in code to configuration management tools, and by designers during software modelling. Static analysis tools may produce a large number of warning messages, which need to be well-managed to allow the most effective use of the tool.
Compilers may offer some support for static analysis, including the calculation of metrics.
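For illustration only, the following hypothetical Python fragment is annotated with some of the kinds of findings listed above that a static analysis tool could report without ever executing the code:

```python
# Hypothetical fragment; the comments mark typical static analysis findings.

def rate_order(total):
    if total > 100:
        return "high"
    else:
        return "low"
    print("done")        # unreachable (dead) code: every path above returns first

def apply_tax(total):
    rate = 0.19          # variable assigned but never used
    return total * tax   # 'tax' is referenced without ever being defined
```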

References
3.2 IEEE 1028
3.2.2 Gilb, 1993, van Veenendaal, 2004
3.2.4 Gilb, 1993, IEEE 1028
3.3 van Veenendaal, 2004


4. Test Design Techniques (K4)

285 minutes

Learning Objectives for Test Design Techniques
The objectives identify what you will be able to do following the completion of each module.

4.1 The Test Development Process (K3)
LO-4.1.1 Differentiate between a test design specification, test case specification and test procedure specification (K2)
LO-4.1.2 Compare the terms test condition, test case and test procedure (K2)
LO-4.1.3 Evaluate the quality of test cases in terms of clear traceability to the requirements and expected results (K2)
LO-4.1.4 Translate test cases into a well-structured test procedure specification at a level of detail relevant to the knowledge of the testers (K3)

4.2 Categories of Test Design Techniques (K2)
LO-4.2.1 Recall reasons that both specification-based (black-box) and structure-based (white-box) test design techniques are useful and list the common techniques for each (K1)
LO-4.2.2 Explain the characteristics, commonalities, and differences between specification-based testing, structure-based testing and experience-based testing (K2)

4.3 Specification-based or Black-box Techniques (K3)
LO-4.3.1 Write test cases from given software models using equivalence partitioning, boundary value analysis, decision tables and state transition diagrams/tables (K3)
LO-4.3.2 Explain the main purpose of each of the four testing techniques, what level and type of testing could use the technique, and how coverage may be measured (K2)
LO-4.3.3 Explain the concept of use case testing and its benefits (K2)

4.4 Structure-based or White-box Techniques (K4)
LO-4.4.1 Describe the concept and value of code coverage (K2)
LO-4.4.2 Explain the concepts of statement and decision coverage, and give reasons why these concepts can also be used at test levels other than component testing (e.g., on business procedures at system level) (K2)
LO-4.4.3 Write test cases from given control flows using statement and decision test design techniques (K3)
LO-4.4.4 Assess statement and decision coverage for completeness with respect to defined exit criteria (K4)

4.5 Experience-based Techniques (K2)
LO-4.5.1 Recall reasons for writing test cases based on intuition, experience and knowledge about common defects (K1)
LO-4.5.2 Compare experience-based techniques with specification-based testing techniques (K2)

4.6 Choosing Test Techniques (K2)
LO-4.6.1 Classify test design techniques according to their fitness to a given context, for the test basis, respective models and software characteristics (K2)


4.1 The Test Development Process (K3)

15 minutes

Terms
Test case specification, test design, test execution schedule, test procedure specification, test script, traceability

Background
The test development process described in this section can be done in different ways, from very informal with little or no documentation, to very formal (as it is described below). The level of formality depends on the context of the testing, including the maturity of testing and development processes, time constraints, safety or regulatory requirements, and the people involved.
During test analysis, the test basis documentation is analyzed in order to determine what to test, i.e., to identify the test conditions. A test condition is defined as an item or event that could be verified by one or more test cases (e.g., a function, transaction, quality characteristic or structural element).
Establishing traceability from test conditions back to the specifications and requirements enables both effective impact analysis when requirements change, and determining requirements coverage for a set of tests. During test analysis the detailed test approach is implemented to select the test design techniques to use based on, among other considerations, the identified risks (see Chapter 5 for more on risk analysis).
During test design the test cases and test data are created and specified. A test case consists of a set of input values, execution preconditions, expected results and execution postconditions, defined to cover a certain test objective(s) or test condition(s). The 'Standard for Software Test Documentation' (IEEE STD 829-1998) describes the content of test design specifications (containing test conditions) and test case specifications.
Expected results should be produced as part of the specification of a test case and include outputs, changes to data and states, and any other consequences of the test. If expected results have not been defined, then a plausible, but erroneous, result may be interpreted as the correct one. Expected results should ideally be defined prior to test execution.
During test implementation the test cases are developed, implemented, prioritized and organized in the test procedure specification (IEEE STD 829-1998). The test procedure specifies the sequence of actions for the execution of a test. If tests are run using a test execution tool, the sequence of actions is specified in a test script (which is an automated test procedure).
The various test procedures and automated test scripts are subsequently formed into a test execution schedule that defines the order in which the various test procedures, and possibly automated test scripts, are executed. The test execution schedule will take into account such factors as regression tests, prioritization, and technical and logical dependencies.
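As an illustration (the field names below are hypothetical and not mandated by IEEE STD 829-1998), the elements of a single test case named above might be recorded as follows:

```python
# Hypothetical, illustrative record of one test case and its elements:
test_case = {
    "id": "TC-17",
    "test_condition": "Standing customers receive a 10% discount",
    "preconditions": ["Customer 'C42' exists and is flagged as standing"],
    "input_values": {"customer": "C42", "order_total": 200.00},
    "expected_result": {"invoice_total": 180.00},  # defined before execution
    "postconditions": ["Order stored with the discount applied"],
    "traces_to": ["REQ-DISC-01"],  # traceability back to the requirement
}
```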


4.2 Categories of Test Design Techniques (K2)

15 minutes

Terms
Black-box test design technique, experience-based test design technique, test design technique, white-box test design technique

Background
The purpose of a test design technique is to identify test conditions, test cases, and test data.
It is a classic distinction to denote test techniques as black-box or white-box. Black-box test design techniques (also called specification-based techniques) are a way to derive and select test conditions, test cases, or test data based on an analysis of the test basis documentation. This includes both functional and non-functional testing. Black-box testing, by definition, does not use any information regarding the internal structure of the component or system to be tested. White-box test design techniques (also called structural or structure-based techniques) are based on an analysis of the structure of the component or system. Black-box and white-box testing may also be combined with experience-based techniques to leverage the experience of developers, testers and users to determine what should be tested.
Some techniques fall clearly into a single category; others have elements of more than one category.
This syllabus refers to specification-based test design techniques as black-box techniques and structure-based test design techniques as white-box techniques. In addition experience-based test design techniques are covered.
Common characteristics of specification-based test design techniques include:
o Models, either formal or informal, are used for the specification of the problem to be solved, the software or its components
o Test cases can be derived systematically from these models
Common characteristics of structure-based test design techniques include:
o Information about how the software is constructed is used to derive the test cases (e.g., code and detailed design information)
o The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage
Common characteristics of experience-based test design techniques include:
o The knowledge and experience of people are used to derive the test cases
o The knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment is one source of information
o Knowledge about likely defects and their distribution is another source of information


4.3 Specification-based or Black-box Techniques (K3)

150 minutes

Terms
Boundary value analysis, decision table testing, equivalence partitioning, state transition testing, use case testing

4.3.1 Equivalence Partitioning (K3)

In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. Equivalence partitions (or classes) can be found for both valid data, i.e., values that should be accepted, and invalid data, i.e., values that should be rejected. Partitions can also be identified for outputs, internal values, time-related values (e.g., before or after an event) and for interface parameters (e.g., integrated components being tested during integration testing). Tests can be designed to cover all valid and invalid partitions. Equivalence partitioning is applicable at all levels of testing.
Equivalence partitioning can be used to achieve input and output coverage goals. It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
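As a minimal sketch (hypothetical example, in Python), consider a field that accepts integers from 1 to 99; it has one valid and two invalid partitions, and one representative value per partition is usually sufficient:

```python
def accepts(value):
    """Hypothetical component under test: accepts integers 1..99."""
    return 1 <= value <= 99

# Three equivalence partitions for the input:
#   invalid below:  value < 1       (values should be rejected)
#   valid:          1 <= value <= 99 (values should be accepted)
#   invalid above:  value > 99      (values should be rejected)

# One test per partition covers all three classes:
assert accepts(50) is True    # representative of the valid partition
assert accepts(0) is False    # representative of the lower invalid partition
assert accepts(150) is False  # representative of the upper invalid partition
```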

4.3.2 Boundary Value Analysis (K3)

Behavior at the edge of each equivalence partition is more likely to be incorrect than behavior within the partition, so boundaries are an area where testing is likely to yield defects. The maximum and minimum values of a partition are its boundary values. A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value. Tests can be designed to cover both valid and invalid boundary values. When designing test cases, a test for each boundary value is chosen.
Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its defect-finding capability is high. Detailed specifications are helpful in determining the interesting boundaries.
This technique is often considered as an extension of equivalence partitioning or other black-box test design techniques. It can be used on equivalence classes for user input on screen as well as, for example, on time ranges (e.g., time out, transactional speed requirements) or table ranges (e.g., table size is 256*256).
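Continuing the hypothetical 1..99 sketch, the boundary values of the valid partition are 1 and 99, and the adjacent invalid boundary values are 0 and 100:

```python
def accepts(value):
    """Hypothetical component under test: accepts integers 1..99."""
    return 1 <= value <= 99

boundary_tests = [
    (0, False),    # invalid boundary value just below the valid partition
    (1, True),     # valid lower boundary value
    (99, True),    # valid upper boundary value
    (100, False),  # invalid boundary value just above the valid partition
]

for value, expected in boundary_tests:
    assert accepts(value) is expected
```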

4.3.3 Decision Table Testing (K3)

Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design. They may be used to record complex business rules that a system is to implement. When creating decision tables, the specification is analyzed, and conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they must be true or false (Boolean). The decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions, which result in the execution of the actions associated with that rule. The coverage standard commonly used with decision table testing is to have at least one test per column in the table, which typically involves covering all combinations of triggering conditions.


The strength of decision table testing is that it creates combinations of conditions that otherwise might not have been exercised during testing. It may be applied to all situations when the action of the software depends on several logical decisions.
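As an illustration (hypothetical business rule, not part of the syllabus), a decision table for a discount that is granted only to logged-in customers with an order total over 100 can be tested with one test per column:

```python
# Each entry corresponds to one column (business rule) of the decision
# table: a unique combination of the two Boolean conditions and the
# resulting action.
decision_table = [
    # (logged_in, total_over_100) -> discount_granted
    ((True,  True),  True),
    ((True,  False), False),
    ((False, True),  False),
    ((False, False), False),
]

def discount_granted(logged_in, total_over_100):
    """Hypothetical implementation under test."""
    return logged_in and total_over_100

# The common coverage standard: at least one test per column.
for (logged_in, over_100), expected in decision_table:
    assert discount_granted(logged_in, over_100) is expected
```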

4.3.4 State Transition Testing (K3)

A system may exhibit a different response depending on current conditions or previous history (its state). In this case, that aspect of the system can be shown with a state transition diagram. It allows the tester to view the software in terms of its states, transitions between states, the inputs or events that trigger state changes (transitions) and the actions which may result from those transitions. The states of the system or object under test are separate, identifiable and finite in number.
A state table shows the relationship between the states and inputs, and can highlight possible transitions that are invalid.
Tests can be designed to cover a typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences of transitions or to test invalid transitions.
State transition testing is much used within the embedded software industry and technical automation in general. However, the technique is also suitable for modeling a business object having specific states or testing screen-dialogue flows (e.g., for Internet applications or business scenarios).
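As a minimal sketch (hypothetical document workflow, in Python), a state table can drive tests that cover a typical sequence of states and also probe an invalid transition:

```python
# State table for a hypothetical workflow DRAFT -> REVIEW -> APPROVED.
# Combinations missing from the table are invalid transitions.
TRANSITIONS = {
    ("DRAFT",  "submit"):  "REVIEW",
    ("REVIEW", "approve"): "APPROVED",
    ("REVIEW", "reject"):  "DRAFT",
}

def next_state(state, event):
    """Hypothetical component under test."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"invalid transition: {event} in state {state}")
    return TRANSITIONS[(state, event)]

# Cover a typical sequence of states and transitions:
state = "DRAFT"
for event in ["submit", "reject", "submit", "approve"]:
    state = next_state(state, event)
assert state == "APPROVED"

# Test an invalid transition explicitly:
try:
    next_state("DRAFT", "approve")
    raise AssertionError("expected the invalid transition to be rejected")
except ValueError:
    pass  # rejected as expected
```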

4.3.5 Use Case Testing (K2)

Tests can be derived from use cases. A use case describes interactions between actors (users or systems), which produce a result of value to a system user or the customer. Use cases may be described at the abstract level (business use case, technology-free, business process level) or at the system level (system use case on the system functionality level). Each use case has preconditions which need to be met for the use case to work successfully. Each use case terminates with postconditions which are the observable results and final state of the system after the use case has been completed. A use case usually has a mainstream (i.e., most likely) scenario and alternative scenarios.
Use cases describe the "process flows" through a system based on its actual likely use, so the test cases derived from use cases are most useful in uncovering defects in the process flows during real-world use of the system. Use cases are very useful for designing acceptance tests with customer/user participation. They also help uncover integration defects caused by the interaction and interference of different components, which individual component testing would not see.
Designing test cases from use cases may be combined with other specification-based test techniques.


4.4 Structure-based or White-box Techniques (K4)

60 minutes

Terms
Code coverage, decision coverage, statement coverage, structure-based testing

Background
Structure-based or white-box testing is based on an identified structure of the software or the system, as seen in the following examples:
o Component level: the structure of a software component, i.e., statements, decisions, branches or even distinct paths
o Integration level: the structure may be a call tree (a diagram in which modules call other modules)
o System level: the structure may be a menu structure, business process or web page structure
In this section, three code-related structural test design techniques for code coverage, based on statements, branches and decisions, are discussed. For decision testing, a control flow diagram may be used to visualize the alternatives for each decision.

4.4.1 Statement Testing and Coverage (K4)

In component testing, statement coverage is the assessment of the percentage of executable statements that have been exercised by a test case suite. The statement testing technique derives test cases to execute specific statements, normally to increase statement coverage.

Statement coverage is determined by the number of executable statements covered by (designed or executed) test cases divided by the number of all executable statements in the code under test.
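A minimal sketch of the calculation (the function and the test values are invented): the body below has four executable statements, and a single test taking the True branch already executes all of them. In practice a coverage tool computes these percentages rather than counting by hand.

```python
# Invented example with four executable statements.
def grant_discount(total):
    discount = 0              # statement 1
    if total > 100:           # statement 2 (also a decision point)
        discount = 10         # statement 3
    return total - discount  # statement 4

# One test case with total > 100 exercises statements 1-4:
# statement coverage = 4 covered / 4 executable = 100%.
assert grant_discount(150) == 140

# A suite containing only totals <= 100 would leave statement 3
# unexecuted, giving 3 / 4 = 75% statement coverage on its own.
```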

4.4.2 Decision Testing and Coverage (K4)

Decision coverage, related to branch testing, is the assessment of the percentage of decision outcomes (e.g., the True and False options of an IF statement) that have been exercised by a test case suite. The decision testing technique derives test cases to execute specific decision outcomes. Branches originate from decision points in the code and show the transfer of control to different locations in the code.

Decision coverage is determined by the number of all decision outcomes covered by (designed or executed) test cases divided by the number of all possible decision outcomes in the code under test.

Decision testing is a form of control flow testing as it follows a specific flow of control through the decision points. Decision coverage is stronger than statement coverage; 100% decision coverage guarantees 100% statement coverage, but not vice versa.
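Continuing the invented example from the previous section, the single IF has two decision outcomes; the test that already gave 100% statement coverage covers only one of them, which is why decision coverage is the stronger criterion:

```python
def grant_discount(total):    # same invented function as above
    discount = 0
    if total > 100:           # one decision, two outcomes (True, False)
        discount = 10
    return total - discount

assert grant_discount(150) == 140  # exercises the True outcome
# At this point: statement coverage 4/4 = 100%, decision coverage 1/2 = 50%.

assert grant_discount(80) == 80    # exercises the False outcome
# Now decision coverage = 2/2 = 100%, which also guarantees 100%
# statement coverage; the reverse implication does not hold.
```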

4.4.3 Other Structure-based Techniques (K1)

There are stronger levels of structural coverage beyond decision coverage, for example, condition coverage and multiple condition coverage.

The concept of coverage can also be applied at other test levels. For example, at the integration level the percentage of modules, components or classes that have been exercised by a test case suite could be expressed as module, component or class coverage.

Tool support is useful for the structural testing of code.

4.5 Experience-based Techniques (K2)

30 minutes

Terms
Exploratory testing, (fault) attack

Background
Experience-based testing is where tests are derived from the tester's skill and intuition and their experience with similar applications and technologies. When used to augment systematic techniques, these techniques can be useful in identifying special tests not easily captured by formal techniques, especially when applied after more formal approaches. However, this technique may yield widely varying degrees of effectiveness, depending on the testers' experience.

A commonly used experience-based technique is error guessing. Generally testers anticipate defects based on experience. A structured approach to the error guessing technique is to enumerate a list of possible defects and to design tests that attack these defects. This systematic approach is called fault attack. These defect and failure lists can be built based on experience, available defect and failure data, and from common knowledge about why software fails.

Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time-boxes. It is an approach that is most useful where there are few or inadequate specifications and severe time pressure, or in order to augment or complement other, more formal testing. It can serve as a check on the test process, to help ensure that the most serious defects are found.
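A minimal sketch of a fault attack (the parser and the defect list are invented): the anticipated defects are enumerated as attacking inputs and each one is thrown at the object under test.

```python
# Hypothetical input parser under attack.
def parse_age(text):
    value = int(text.strip())
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

# Defect list built from experience: empty input, whitespace-only input,
# boundary values, off-by-one values, wrong formats.
ATTACKS = ["", "   ", "0", "150", "-1", "151", "12.5", "abc"]

for attack in ATTACKS:
    try:
        print(f"{attack!r:8} -> accepted as {parse_age(attack)}")
    except ValueError as failure:
        print(f"{attack!r:8} -> rejected: {failure}")
```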

4.6 Choosing Test Techniques (K2)

15 minutes

Terms
No specific terms.

Background
The choice of which test techniques to use depends on a number of factors, including the type of system, regulatory standards, customer or contractual requirements, level of risk, type of risk, test objective, documentation available, knowledge of the testers, time and budget, development life cycle, use case models and previous experience with types of defects found.

Some techniques are more applicable to certain situations and test levels; others are applicable to all test levels.

When creating test cases, testers generally use a combination of test techniques including process, rule and data-driven techniques to ensure adequate coverage of the object under test.

References
4.1 Craig, 2002; Hetzel, 1988; IEEE Std 829-1998
4.2 Beizer, 1990; Copeland, 2004
4.3.1 Copeland, 2004; Myers, 1979
4.3.2 Copeland, 2004; Myers, 1979
4.3.3 Beizer, 1990; Copeland, 2004
4.3.4 Beizer, 1990; Copeland, 2004
4.3.5 Copeland, 2004
4.4.3 Beizer, 1990; Copeland, 2004
4.5 Kaner, 2002
4.6 Beizer, 1990; Copeland, 2004

5. Test Management (K3)

170 minutes

Learning Objectives for Test Management
The objectives identify what you will be able to do following the completion of each module.

5.1 Test Organization (K2)
LO-5.1.1 Recognize the importance of independent testing (K1)
LO-5.1.2 Explain the benefits and drawbacks of independent testing within an organization (K2)
LO-5.1.3 Recognize the different team members to be considered for the creation of a test team (K1)
LO-5.1.4 Recall the tasks of a typical test leader and tester (K1)

5.2 Test Planning and Estimation (K3)
LO-5.2.1 Recognize the different levels and objectives of test planning (K1)
LO-5.2.2 Summarize the purpose and content of the test plan, test design specification and test procedure documents according to the 'Standard for Software Test Documentation' (IEEE Std 829-1998) (K2)
LO-5.2.3 Differentiate between conceptually different test approaches, such as analytical, model-based, methodical, process/standard compliant, dynamic/heuristic, consultative and regression-averse (K2)
LO-5.2.4 Differentiate between the subject of test planning for a system and scheduling test execution (K2)
LO-5.2.5 Write a test execution schedule for a given set of test cases, considering prioritization, and technical and logical dependencies (K3)
LO-5.2.6 List test preparation and execution activities that should be considered during test planning (K1)
LO-5.2.7 Recall typical factors that influence the effort related to testing (K1)
LO-5.2.8 Differentiate between two conceptually different estimation approaches: the metrics-based approach and the expert-based approach (K2)
LO-5.2.9 Recognize/justify adequate entry and exit criteria for specific test levels and groups of test cases (e.g., for integration testing, acceptance testing or test cases for usability testing) (K2)

5.3 Test Progress Monitoring and Control (K2)
LO-5.3.1 Recall common metrics used for monitoring test preparation and execution (K1)
LO-5.3.2 Explain and compare test metrics for test reporting and test control (e.g., defects found and fixed, and tests passed and failed) related to purpose and use (K2)
LO-5.3.3 Summarize the purpose and content of the test summary report document according to the 'Standard for Software Test Documentation' (IEEE Std 829-1998) (K2)

5.4 Configuration Management (K2)
LO-5.4.1 Summarize how configuration management supports testing (K2)

5.5 Risk and Testing (K2)
LO-5.5.1 Describe a risk as a possible problem that would threaten the achievement of one or more stakeholders' project objectives (K2)
LO-5.5.2 Remember that the level of risk is determined by likelihood (of happening) and impact (harm resulting if it does happen) (K1)
LO-5.5.3 Distinguish between the project and product risks (K2)
LO-5.5.4 Recognize typical product and project risks (K1)
LO-5.5.5 Describe, using examples, how risk analysis and risk management may be used for test planning (K2)

5.6 Incident Management (K3)
LO-5.6.1 Recognize the content of an incident report according to the 'Standard for Software Test Documentation' (IEEE Std 829-1998) (K1)
LO-5.6.2 Write an incident report covering the observation of a failure during testing (K3)

5.1 Test Organization (K2)

30 minutes

Terms
Tester, test leader, test manager

5.1.1 Test Organization and Independence (K2)

The effectiveness of finding defects by testing and reviews can be improved by using independent testers. Options for independence include the following:
o No independent testers; developers test their own code
o Independent testers within the development teams
o Independent test team or group within the organization, reporting to project management or executive management
o Independent testers from the business organization or user community
o Independent test specialists for specific test types such as usability testers, security testers or certification testers (who certify a software product against standards and regulations)
o Independent testers outsourced or external to the organization

For large, complex or safety critical projects, it is usually best to have multiple levels of testing, with some or all of the levels done by independent testers. Development staff may participate in testing, especially at the lower levels, but their lack of objectivity often limits their effectiveness. The independent testers may have the authority to require and define test processes and rules, but testers should take on such process-related roles only in the presence of a clear management mandate to do so.

The benefits of independence include:
o Independent testers see other and different defects, and are unbiased
o An independent tester can verify assumptions people made during specification and implementation of the system

Drawbacks include:
o Isolation from the development team (if treated as totally independent)
o Developers may lose a sense of responsibility for quality
o Independent testers may be seen as a bottleneck or blamed for delays in release

Testing tasks may be done by people in a specific testing role, or may be done by someone in another role, such as a project manager, quality manager, developer, business and domain expert, infrastructure or IT operations.

5.1.2 Tasks of the Test Leader and Tester (K1)

In this syllabus two test positions are covered, test leader and tester. The activities and tasks performed by people in these two roles depend on the project and product context, the people in the roles, and the organization.

Sometimes the test leader is called a test manager or test coordinator. The role of the test leader may be performed by a project manager, a development manager, a quality assurance manager or the manager of a test group. In larger projects two positions may exist: test leader and test manager. Typically the test leader plans, monitors and controls the testing activities and tasks as defined in Section 1.4.

Typical test leader tasks may include:
o Coordinate the test strategy and plan with project managers and others
o Write or review a test strategy for the project, and test policy for the organization
o Contribute the testing perspective to other project activities, such as integration planning
o Plan the tests – considering the context and understanding the test objectives and risks – including selecting test approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels, cycles, and planning incident management
o Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria
o Adapt planning based on test results and progress (sometimes documented in status reports) and take any action necessary to compensate for problems
o Set up adequate configuration management of testware for traceability
o Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product
o Decide what should be automated, to what degree, and how
o Select tools to support testing and organize any training in tool use for testers
o Decide about the implementation of the test environment
o Write test summary reports based on the information gathered during testing

Typical tester tasks may include:
o Review and contribute to test plans
o Analyze, review and assess user requirements, specifications and models for testability
o Create test specifications
o Set up the test environment (often coordinating with system administration and network management)
o Prepare and acquire test data
o Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results
o Use test administration or management tools and test monitoring tools as required
o Automate tests (may be supported by a developer or a test automation expert)
o Measure performance of components and systems (if applicable)
o Review tests developed by others

People who work on test analysis, test design, specific test types or test automation may be specialists in these roles. Depending on the test level and the risks related to the product and the project, different people may take over the role of tester, keeping some degree of independence. Typically testers at the component and integration level would be developers, testers at the acceptance test level would be business experts and users, and testers for operational acceptance testing would be operators.

5.2 Test Planning and Estimation (K3)

40 minutes

Terms
Test approach, test strategy

5.2.1 Test Planning (K2)

This section covers the purpose of test planning within development and implementation projects, and for maintenance activities. Planning may be documented in a master test plan and in separate test plans for test levels such as system testing and acceptance testing. The outline of a test-planning document is covered by the 'Standard for Software Test Documentation' (IEEE Std 829-1998).

Planning is influenced by the test policy of the organization, the scope of testing, objectives, risks, constraints, criticality, testability and the availability of resources. As the project and test planning progress, more information becomes available and more detail can be included in the plan.

Test planning is a continuous activity and is performed in all life cycle processes and activities. Feedback from test activities is used to recognize changing risks so that planning can be adjusted.

5.2.2 Test Planning Activities (K3)

Test planning activities for an entire system or part of a system may include:
o Determining the scope and risks and identifying the objectives of testing
o Defining the overall approach of testing, including the definition of the test levels and entry and exit criteria
o Integrating and coordinating the testing activities into the software life cycle activities (acquisition, supply, development, operation and maintenance)
o Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated
o Scheduling test analysis and design activities
o Scheduling test implementation, execution and evaluation
o Assigning resources for the different activities defined
o Defining the amount, level of detail, structure and templates for the test documentation
o Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues
o Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution

5.2.3 Entry Criteria (K2)

Entry criteria define when to start testing, such as at the beginning of a test level or when a set of tests is ready for execution.

Typically entry criteria may cover the following:
o Test environment availability and readiness
o Test tool readiness in the test environment
o Testable code availability
o Test data availability

5.2.4 Exit Criteria (K2)

Exit criteria define when to stop testing, such as at the end of a test level or when a set of tests has achieved a specific goal.

Typically exit criteria may cover the following:
o Thoroughness measures, such as coverage of code, functionality or risk
o Estimates of defect density or reliability measures
o Cost
o Residual risks, such as defects not fixed or lack of test coverage in certain areas
o Schedules such as those based on time to market
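As an illustration only (the thresholds and metric values are invented), exit criteria become easy to monitor when expressed as objective checks over collected metrics:

```python
# Invented metrics from a hypothetical test cycle; real criteria and
# thresholds come from the test plan.
metrics = {"decision_coverage": 0.93, "open_critical_defects": 1,
           "defect_density_per_kloc": 0.4}

exit_criteria = [
    ("decision coverage >= 90%", metrics["decision_coverage"] >= 0.90),
    ("no open critical defects", metrics["open_critical_defects"] == 0),
    ("defect density <= 0.5/KLOC", metrics["defect_density_per_kloc"] <= 0.5),
]

for name, met in exit_criteria:
    print(f"{'met    ' if met else 'NOT met'}  {name}")
print("stop testing" if all(m for _, m in exit_criteria) else "continue testing")
```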

5.2.5 Test Estimation (K2)

Two approaches for the estimation of test effort are:
o The metrics-based approach: estimating the testing effort based on metrics of former or similar projects or based on typical values
o The expert-based approach: estimating the tasks based on estimates made by the owner of the tasks or by experts

Once the test effort is estimated, resources can be identified and a schedule can be drawn up.

The testing effort may depend on a number of factors, including:
o Characteristics of the product: the quality of the specification and other information used for test models (i.e., the test basis), the size of the product, the complexity of the problem domain, the requirements for reliability and security, and the requirements for documentation
o Characteristics of the development process: the stability of the organization, tools used, test process, skills of the people involved, and time pressure
o The outcome of testing: the number of defects and the amount of rework required

5.2.6 Test Strategy, Test Approach (K2)

The test approach is the implementation of the test strategy for a specific project. The test approach is defined and refined in the test plans and test designs. It typically includes the decisions made based on the (test) project's goal and risk assessment. It is the starting point for planning the test process, for selecting the test design techniques and test types to be applied, and for defining the entry and exit criteria.

The selected approach depends on the context and may consider risks, hazards and safety, available resources and skills, the technology, the nature of the system (e.g., custom built vs. COTS), test objectives, and regulations.

Typical approaches include:
o Analytical approaches, such as risk-based testing where testing is directed to areas of greatest risk
o Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles)
o Methodical approaches, such as failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality characteristic-based
o Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies
o Dynamic and heuristic approaches, such as exploratory testing where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks
o Consultative approaches, such as those in which test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team
o Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites

Different approaches may be combined, for example, a risk-based dynamic approach.

5.3 Test Progress Monitoring and Control (K2)

20 minutes

Terms
Defect density, failure rate, test control, test monitoring, test summary report

5.3.1 Test Progress Monitoring (K1)

The purpose of test monitoring is to provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage. Metrics may also be used to assess progress against the planned schedule and budget. Common test metrics include:
o Percentage of work done in test case preparation (or percentage of planned test cases prepared)
o Percentage of work done in test environment preparation
o Test case execution (e.g., number of test cases run/not run, and test cases passed/failed)
o Defect information (e.g., defect density, defects found and fixed, failure rate, and re-test results)
o Test coverage of requirements, risks or code
o Subjective confidence of testers in the product
o Dates of test milestones
o Testing costs, including the cost compared to the benefit of finding the next defect or to run the next test
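A minimal sketch (all counts invented) of how several of these metrics are derived from raw preparation and execution numbers:

```python
# Invented raw counts from a hypothetical test cycle.
planned, prepared = 120, 96
run, passed = 80, 68
defects_found, defects_fixed, kloc = 45, 30, 12.5

print(f"test cases prepared:  {prepared / planned:.0%} of planned")
print(f"test cases run:       {run / planned:.0%} of planned")
print(f"pass rate:            {passed / run:.0%} of executed")
print(f"defects fixed:        {defects_fixed / defects_found:.0%} of found")
print(f"defect density:       {defects_found / kloc:.1f} defects per KLOC")
```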

5.3.2 Test Reporting (K2)

Test reporting is concerned with summarizing information about the testing endeavor, including:
o What happened during a period of testing, such as dates when exit criteria were met
o Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software

The outline of a test summary report is given in the 'Standard for Software Test Documentation' (IEEE Std 829-1998).

Metrics should be collected during and at the end of a test level in order to assess:
o The adequacy of the test objectives for that test level
o The adequacy of the test approaches taken
o The effectiveness of the testing with respect to the objectives

5.3.3 Test Control (K2)

Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.

Examples of test control actions include:
o Making decisions based on information from test monitoring
o Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)
o Changing the test schedule due to availability or unavailability of a test environment
o Setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build

5.4 Configuration Management (K2)

10 minutes

Terms
Configuration management, version control

Background
The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.

For testing, configuration management may involve ensuring the following:
o All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects) so that traceability can be maintained throughout the test process
o All identified documents and software items are referenced unambiguously in test documentation

For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harness(es).

During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.

5.5 Risk and Testing (K2)

30 minutes

Terms
Product risk, project risk, risk, risk-based testing

Background
Risk can be defined as the chance of an event, hazard, threat or situation occurring and resulting in undesirable consequences or a potential problem. The level of risk will be determined by the likelihood of an adverse event happening and the impact (the harm resulting from that event).

5.5.1 Project Risks (K2)

Project risks are the risks that surround the project's capability to deliver its objectives, such as:
o Organizational factors:
  • Skill, training and staff shortages
  • Personnel issues
  • Political issues, such as:
    - Problems with testers communicating their needs and test results
    - Failure by the team to follow up on information found in testing and reviews (e.g., not improving development and testing practices)
  • Improper attitude toward or expectations of testing (e.g., not appreciating the value of finding defects during testing)
o Technical issues:
  • Problems in defining the right requirements
  • The extent to which requirements cannot be met given existing constraints
  • Test environment not ready on time
  • Late data conversion, migration planning and development and testing data conversion/migration tools
  • Low quality of the design, code, configuration data, test data and tests
o Supplier issues:
  • Failure of a third party
  • Contractual issues

When analyzing, managing and mitigating these risks, the test manager is following well-established project management principles. The 'Standard for Software Test Documentation' (IEEE Std 829-1998) outline for test plans requires risks and contingencies to be stated.

5.5.2 Product Risks (K2)

Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product. These include:
o Failure-prone software delivered
o The potential that the software/hardware could cause harm to an individual or company
o Poor software characteristics (e.g., functionality, reliability, usability and performance)
o Poor data integrity and quality (e.g., data migration issues, data conversion problems, data transport problems, violation of data standards)
o Software that does not perform its intended functions

Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.

Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.

A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation and execution of tests. In a risk-based approach the risks identified may be used to:
o Determine the test techniques to be employed
o Determine the extent of testing to be carried out
o Prioritize testing in an attempt to find the critical defects as early as possible
o Determine whether any non-testing activities could be employed to reduce risk (e.g., providing training to inexperienced designers)

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks.

To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach to:
o Assess (and reassess on a regular basis) what can go wrong (risks)
o Determine what risks are important to deal with
o Implement actions to deal with those risks

In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks.
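As a sketch of the prioritization idea (the risk register and the 1-5 scales are invented), likelihood and impact are often combined into a risk exposure score that orders the test work, highest exposure first:

```python
# Invented product risk register: (risk, likelihood 1-5, impact 1-5).
risks = [
    ("payment total calculated wrongly", 3, 5),
    ("search response too slow",         4, 3),
    ("help text outdated",               2, 1),
]

# Exposure = likelihood x impact; test the highest-exposure areas first.
for name, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2],
                                       reverse=True):
    print(f"exposure {likelihood * impact:2d}  {name}")
```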

5.6 Incident Management (K3)

40 minutes

Terms
Incident logging, incident management, incident report

Background
Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. An incident must be investigated and may turn out to be a defect. Appropriate actions to dispose of incidents and defects should be defined. Incidents and defects should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish an incident management process and rules for classification.

Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation including requirements, development documents, test documents, and user information such as "Help" or installation guides.

Incident reports have the following objectives:
o Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary
o Provide test leaders a means of tracking the quality of the system under test and the progress of the testing
o Provide ideas for test process improvement

Details of the incident report may include:
o Date of issue, issuing organization, and author
o Expected and actual results
o Identification of the test item (configuration item) and environment
o Software or system life cycle process in which the incident was observed
o Description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots
o Scope or degree of impact on stakeholder(s) interests
o Severity of the impact on the system
o Urgency/priority to fix
o Status of the incident (e.g., open, deferred, duplicate, waiting to be fixed, fixed awaiting re-test, closed)
o Conclusions, recommendations and approvals
o Global issues, such as other areas that may be affected by a change resulting from the incident
o Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed
o References, including the identity of the test case specification that revealed the problem

The structure of an incident report is also covered in the 'Standard for Software Test Documentation' (IEEE Std 829-1998).
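Purely as an illustration (the field subset and example values are invented; the authoritative template is IEEE Std 829-1998), capturing an incident report as a structured record makes tracking and classification straightforward:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentReport:
    """Hypothetical record holding a subset of the fields listed above."""
    author: str
    test_item: str          # configuration item and environment
    expected_result: str
    actual_result: str
    severity: str           # impact on the system, e.g. "major"
    priority: str           # urgency to fix
    status: str = "open"    # open, deferred, duplicate, fixed, closed, ...
    issued: date = field(default_factory=date.today)

report = IncidentReport(
    author="j.doe",
    test_item="invoice module v1.4 / test case TC-017 / Windows test rig",
    expected_result="total includes 19% VAT",
    actual_result="total shown without VAT",
    severity="major",
    priority="high",
)
print(report)
```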

References
5.1.1 Black, 2001; Hetzel, 1988
5.1.2 Black, 2001; Hetzel, 1988
5.2.5 Black, 2001; Craig, 2002; IEEE Std 829-1998; Kaner, 2002
5.3.3 Black, 2001; Craig, 2002; Hetzel, 1988; IEEE Std 829-1998
5.4 Craig, 2002
5.5.2 Black, 2001; IEEE Std 829-1998
5.6 Black, 2001; IEEE Std 829-1998

6. Tool Support for Testing (K2)

80 minutes

Learning Objectives for Tool Support for Testing
The objectives identify what you will be able to do following the completion of each module.

6.1 Types of Test Tools (K2)
LO-6.1.1 Classify different types of test tools according to their purpose and to the activities of the fundamental test process and the software life cycle (K2)
LO-6.1.2 Intentionally skipped
LO-6.1.3 Explain the term test tool and the purpose of tool support for testing (K2)

6.2 Effective Use of Tools: Potential Benefits and Risks (K2)
LO-6.2.1 Summarize the potential benefits and risks of test automation and tool support for testing (K2)
LO-6.2.2 Remember special considerations for test execution tools, static analysis, and test management tools (K1)

6.3 Introducing a Tool into an Organization (K1)
LO-6.3.1 State the main principles of introducing a tool into an organization (K1)
LO-6.3.2 State the goals of a proof-of-concept for tool evaluation and a piloting phase for tool implementation (K1)
LO-6.3.3 Recognize that factors other than simply acquiring a tool are required for good tool support (K1)

6.1 Types of Test Tools (K2)

45 minutes

Terms
Configuration management tool, coverage tool, debugging tool, dynamic analysis tool, incident management tool, load testing tool, modeling tool, monitoring tool, performance testing tool, probe effect, requirements management tool, review tool, security tool, static analysis tool, stress testing tool, test comparator, test data preparation tool, test design tool, test harness, test execution tool, test management tool, unit test framework tool

6.1.1 Tool Support for Testing (K2)

Test tools can be used for one or more activities that support testing. These include:
1. Tools that are directly used in testing such as test execution tools, test data generation tools and result comparison tools
2. Tools that help in managing the testing process such as those used to manage tests, test results, data, requirements, incidents, defects, etc., and for reporting and monitoring test execution
3. Tools that are used in reconnaissance, or, in simple terms: exploration (e.g., tools that monitor file activity for an application)
4. Any tool that aids in testing (a spreadsheet is also a test tool in this meaning)

Tool support for testing can have one or more of the following purposes depending on the context:
o Improve the efficiency of test activities by automating repetitive tasks or supporting manual test activities like test planning, test design, test reporting and monitoring
o Automate activities that require significant resources when done manually (e.g., static testing)
o Automate activities that cannot be executed manually (e.g., large scale performance testing of client-server applications)
o Increase reliability of testing (e.g., by automating large data comparisons or simulating behavior)

The term "test frameworks" is also frequently used in the industry, in at least three meanings:
o Reusable and extensible testing libraries that can be used to build testing tools (called test harnesses as well)
o A type of design of test automation (e.g., data-driven, keyword-driven)
o Overall process of execution of testing

For the purpose of this syllabus, the term "test frameworks" is used in its first two meanings as described in Section 6.1.6.

6.1.2 Test Tool Classification (K2)

There are a number of tools that support different aspects of testing. Tools can be classified based on several criteria such as purpose, commercial / free / open-source / shareware, technology used and so forth. Tools are classified in this syllabus according to the testing activities that they support.

Some tools clearly support one activity; others may support more than one activity, but are classified under the activity with which they are most closely associated. Tools from a single provider, especially those that have been designed to work together, may be bundled into one package.

Some types of test tools can be intrusive, which means that they can affect the actual outcome of the test. For example, the actual timing may be different due to the extra instructions that are executed by the tool, or you may get a different measure of code coverage. The consequence of intrusive tools is called the probe effect.

Some tools offer support more appropriate for developers (e.g., tools that are used during component and component integration testing). Such tools are marked with "(D)" in the list below.

6.1.3 Tool Support for Management of Testing and Tests (K1)

Management tools apply to all test activities over the entire software life cycle.

Test Management Tools
These tools provide interfaces for executing tests, tracking defects and managing requirements, along with support for quantitative analysis and reporting of the test objects. They also support tracing the test objects to requirement specifications and might have an independent version control capability or an interface to an external one.

Requirements Management Tools
These tools store requirement statements, store the attributes for the requirements (including priority), provide unique identifiers and support tracing the requirements to individual tests. These tools may also help with identifying inconsistent or missing requirements.

Incident Management Tools (Defect Tracking Tools)
These tools store and manage incident reports, i.e., defects, failures, change requests or perceived problems and anomalies, and help in managing the life cycle of incidents, optionally with support for statistical analysis.

Configuration Management Tools
Although not strictly test tools, these are necessary for storage and version management of testware and related software, especially when configuring more than one hardware/software environment in terms of operating system versions, compilers, browsers, etc.

6.1.4 Tool Support for Static Testing (K1)

Static testing tools provide a cost effective way of finding more defects at an earlier stage in the development process.

Review Tools
These tools assist with review processes, checklists, review guidelines and are used to store and communicate review comments and report on defects and effort. They can be of further help by providing aid for online reviews for large or geographically dispersed teams.

Static Analysis Tools (D)
These tools help developers and testers find defects prior to dynamic testing by providing support for enforcing coding standards (including secure coding), analysis of structures and dependencies. They can also help in planning or risk analysis by providing metrics for the code (e.g., complexity).

Modeling Tools (D)
These tools are used to validate software models (e.g., physical data model (PDM) for a relational database), by enumerating inconsistencies and finding defects. These tools can often aid in generating some test cases based on the model.

6.1.5 Tool Support for Test Specification (K1)

Test Design Tools
These tools are used to generate test inputs or executable tests and/or test oracles from requirements, graphical user interfaces, design models (state, data or object) or code.

Test Data Preparation Tools
Test data preparation tools manipulate databases, files or data transmissions to set up test data to be used during the execution of tests, to ensure security through data anonymity.

6.1.6 Tool Support for Test Execution and Logging (K1)

Test Execution Tools
These tools enable tests to be executed automatically, or semi-automatically, using stored inputs and expected outcomes, through the use of a scripting language, and usually provide a test log for each test run. They can also be used to record tests, and usually support scripting languages or GUI-based configuration for parameterization of data and other customization in the tests.

Test Harness/Unit Test Framework Tools (D)
A unit test harness or framework facilitates the testing of components or parts of a system by simulating the environment in which that test object will run, through the provision of mock objects as stubs or drivers (a minimal sketch follows at the end of this section).

Test Comparators
Test comparators determine differences between files, databases or test results. Test execution tools typically include dynamic comparators, but post-execution comparison may be done by a separate comparison tool. A test comparator may use a test oracle, especially if it is automated.

Coverage Measurement Tools (D)
These tools, through intrusive or non-intrusive means, measure the percentage of specific types of code structures that have been exercised (e.g., statements, branches or decisions, and module or function calls) by a set of tests.

Security Testing Tools
These tools are used to evaluate the security characteristics of software. This includes evaluating the ability of the software to protect data confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Security tools are mostly focused on a particular technology, platform, and purpose.
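A minimal sketch of the test harness idea described above (the component and the stub are invented), using Python's standard unittest framework: the stub simulates the environment so the component can be tested in isolation.

```python
import unittest

# Hypothetical component under test; its gateway dependency is injected
# so that a stub can stand in for the real environment.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        return "confirmed" if self.gateway.charge(amount) else "declined"

class GatewayStub:
    """Stub simulating the payment environment: approves charges under 100."""
    def charge(self, amount):
        return amount < 100

class OrderServiceTest(unittest.TestCase):
    def test_small_order_confirmed(self):
        self.assertEqual(OrderService(GatewayStub()).place_order(50), "confirmed")

    def test_large_order_declined(self):
        self.assertEqual(OrderService(GatewayStub()).place_order(500), "declined")

if __name__ == "__main__":
    unittest.main()
```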

6.1.7 Tool Support for Performance and Monitoring (K1)

Dynamic Analysis Tools (D)
Dynamic analysis tools find defects that are evident only when software is executing, such as time dependencies or memory leaks. They are typically used in component and component integration testing, and when testing middleware.

Performance Testing/Load Testing/Stress Testing Tools
Performance testing tools monitor and report on how a system behaves under a variety of simulated usage conditions in terms of number of concurrent users, their ramp-up pattern, frequency and relative percentage of transactions. The simulation of load is achieved by means of creating virtual users carrying out a selected set of transactions, spread across various test machines commonly known as load generators (see the sketch after this section).

Monitoring Tools
Monitoring tools continuously analyze, verify and report on usage of specific system resources, and give warnings of possible service problems.
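As a sketch of the virtual-user idea (the transaction, the user count and the timings are all invented; real load tools distribute this across dedicated load generators), each simulated user is a thread repeating a transaction while response times are recorded:

```python
import random
import threading
import time

def transaction():
    """Stand-in for one request to the system under test."""
    time.sleep(random.uniform(0.01, 0.05))  # simulated server response time

def virtual_user(iterations, timings):
    for _ in range(iterations):
        start = time.perf_counter()
        transaction()
        timings.append(time.perf_counter() - start)

timings = []
users = [threading.Thread(target=virtual_user, args=(20, timings))
         for _ in range(10)]                 # ten concurrent virtual users
for user in users:
    user.start()
for user in users:
    user.join()

print(f"{len(timings)} transactions, "
      f"average {sum(timings) / len(timings) * 1000:.1f} ms")
```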

6.1.8 Tool Support for Specific Testing Needs (K1)

Data Quality Assessment
Data is at the center of some projects, such as data conversion/migration projects and applications like data warehouses, and its attributes can vary in terms of criticality and volume. In such contexts, tools need to be employed for data quality assessment to review and verify the data conversion and migration rules to ensure that the processed data is correct, complete and complies with a predefined context-specific standard.

Other testing tools exist for usability testing.

6.2 Effective Use of Tools: Potential Benefits and Risks (K2)

20 minutes

Terms
Data-driven testing, keyword-driven testing, scripting language

6.2.1 Potential Benefits and Risks of Tool Support for Testing (for all tools) (K2)

Simply purchasing or leasing a tool does not guarantee success with that tool. Each type of tool may require additional effort to achieve real and lasting benefits. There are potential benefits and opportunities with the use of tools in testing, but there are also risks.

Potential benefits of using tools include:
o Repetitive work is reduced (e.g., running regression tests, re-entering the same test data, and checking against coding standards)
o Greater consistency and repeatability (e.g., tests executed by a tool in the same order with the same frequency, and tests derived from requirements)
o Objective assessment (e.g., static measures, coverage)
o Ease of access to information about tests or testing (e.g., statistics and graphs about test progress, incident rates and performance)

Risks of using tools include:
o Unrealistic expectations for the tool (including functionality and ease of use)
o Underestimating the time, cost and effort for the initial introduction of a tool (including training and external expertise)
o Underestimating the time and effort needed to achieve significant and continuing benefits from the tool (including the need for changes in the testing process and continuous improvement of the way the tool is used)
o Underestimating the effort required to maintain the test assets generated by the tool
o Over-reliance on the tool (replacement for test design or use of automated testing where manual testing would be better)
o Neglecting version control of test assets within the tool
o Neglecting relationships and interoperability issues between critical tools, such as requirements management tools, version control tools, incident management tools, defect tracking tools and tools from multiple vendors
o Risk of tool vendor going out of business, retiring the tool, or selling the tool to a different vendor
o Poor response from vendor for support, upgrades, and defect fixes
o Risk of suspension of open-source / free tool project
o Unforeseen risks, such as the inability to support a new platform

6.2.2 Special Considerations for Some Types of Tools (K1)

Test Execution Tools
Test execution tools execute test objects using automated test scripts. This type of tool often requires significant effort in order to achieve significant benefits.

Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not scale to large numbers of automated test scripts. A captured script is a linear representation with specific data and actions as part of each script. This type of script may be unstable when unexpected events occur.

A data-driven testing approach separates out the test inputs (the data), usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test script with different data. Testers who are not familiar with the scripting language can then create the test data for these predefined scripts.

There are other techniques employed in data-driven techniques, where instead of hard-coded data combinations placed in a spreadsheet, data is generated using algorithms based on configurable parameters at run time and supplied to the application. For example, a tool may use an algorithm which generates a random user ID, and for repeatability in pattern, a seed is employed for controlling randomness.

In a keyword-driven testing approach, the spreadsheet contains keywords describing the actions to be taken (also called action words), and test data. Testers (even if they are not familiar with the scripting language) can then define tests using the keywords, which can be tailored to the application being tested.

Technical expertise in the scripting language is needed for all approaches (either by testers or by specialists in test automation).

Regardless of the scripting technique used, the expected results for each test need to be stored for later comparison.
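A minimal sketch of the keyword-driven idea described above (the keyword table, the banking keywords and the tiny driver are all invented): testers edit only the table, while the generic script dispatches each row to its implementation.

```python
# Invented keyword table, as it might be read from a spreadsheet:
# (keyword / action word, target, value).
TEST_TABLE = [
    ("open_account",  "alice", None),
    ("deposit",       "alice", 100),
    ("withdraw",      "alice", 30),
    ("check_balance", "alice", 70),
]

accounts = {}

# One implementation per keyword, written by automation specialists.
def open_account(name, _):
    accounts[name] = 0

def deposit(name, amount):
    accounts[name] += amount

def withdraw(name, amount):
    accounts[name] -= amount

def check_balance(name, expected):
    assert accounts[name] == expected, f"{name}: {accounts[name]} != {expected}"

KEYWORDS = {f.__name__: f for f in (open_account, deposit, withdraw, check_balance)}

# The generic driver: read each row and dispatch on the keyword.
for keyword, target, value in TEST_TABLE:
    KEYWORDS[keyword](target, value)
print("keyword-driven test passed")
```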

Static Analysis Tools
Static analysis tools applied to source code can enforce coding standards, but if applied to existing code may generate a large quantity of messages. Warning messages do not stop the code from being translated into an executable program, but ideally should be addressed so that maintenance of the code is easier in the future. A gradual implementation of the analysis tool with initial filters to exclude some messages is an effective approach.

Test Management Tools
Test management tools need to interface with other tools or spreadsheets in order to produce useful information in a format that fits the needs of the organization.
6.3 Introducing a Tool into an Organization (K1)

15 minutes

Terms
No specific terms.

Background
The main considerations in selecting a tool for an organization include:
o Assessment of organizational maturity, strengths and weaknesses and identification of opportunities for an improved test process supported by tools
o Evaluation against clear requirements and objective criteria
o A proof-of-concept, by using a test tool during the evaluation phase to establish whether it performs effectively with the software under test and within the current infrastructure or to identify changes needed to that infrastructure to effectively use the tool
o Evaluation of the vendor (including training, support and commercial aspects) or service support suppliers in case of non-commercial tools
o Identification of internal requirements for coaching and mentoring in the use of the tool
o Evaluation of training needs considering the current test team's test automation skills
o Estimation of a cost-benefit ratio based on a concrete business case

Introducing the selected tool into an organization starts with a pilot project, which has the following objectives:
o Learn more detail about the tool
o Evaluate how the tool fits with existing processes and practices, and determine what would need to change
o Decide on standard ways of using, managing, storing and maintaining the tool and the test assets (e.g., deciding on naming conventions for files and tests, creating libraries and defining the modularity of test suites)
o Assess whether the benefits will be achieved at reasonable cost

Success factors for the deployment of the tool within an organization include:
o Rolling out the tool to the rest of the organization incrementally
o Adapting and improving processes to fit with the use of the tool
o Providing training and coaching/mentoring for new users
o Defining usage guidelines
o Implementing a way to gather usage information from the actual use
o Monitoring tool use and benefits
o Providing support for the test team for a given tool
o Gathering lessons learned from all teams

References
6.2.2 Buwalda, 2001; Fewster, 1999
6.3 Fewster, 1999

7. References

Standards
ISTQB Glossary of Terms used in Software Testing, Version 2.1
[CMMI] Chrissis, M.B., Konrad, M. and Shrum, S. (2004) CMMI, Guidelines for Process Integration and Product Improvement, Addison Wesley: Reading, MA. See Section 2.1
[IEEE Std 829-1998] IEEE Std 829™ (1998) IEEE Standard for Software Test Documentation. See Sections 2.3, 2.4, 4.1, 5.2, 5.3, 5.5, 5.6
[IEEE 1028] IEEE Std 1028™ (2008) IEEE Standard for Software Reviews and Audits. See Section 3.2
[IEEE 12207] IEEE 12207/ISO/IEC 12207-2008, Software life cycle processes. See Section 2.1
[ISO 9126] ISO/IEC 9126-1:2001, Software Engineering – Software Product Quality. See Section 2.3

Books
[Beizer, 1990] Beizer, B. (1990) Software Testing Techniques (2nd edition), Van Nostrand Reinhold: Boston
See Sections 1.2, 1.3, 2.3, 4.2, 4.3, 4.4, 4.6
[Black, 2001] Black, R. (2001) Managing the Testing Process (3rd edition), John Wiley & Sons: New York
See Sections 1.1, 1.2, 1.4, 1.5, 2.3, 2.4, 5.1, 5.2, 5.3, 5.5, 5.6
[Buwalda, 2001] Buwalda, H. et al. (2001) Integrated Test Design and Automation, Addison Wesley: Reading, MA
See Section 6.2
[Copeland, 2004] Copeland, L. (2004) A Practitioner's Guide to Software Test Design, Artech House: Norwood, MA
See Sections 2.2, 2.3, 4.2, 4.3, 4.4, 4.6
[Craig, 2002] Craig, Rick D. and Jaskiel, Stefan P. (2002) Systematic Software Testing, Artech House: Norwood, MA
See Sections 1.4.5, 2.1.3, 2.4, 4.1, 5.2.5, 5.3, 5.4
[Fewster, 1999] Fewster, M. and Graham, D. (1999) Software Test Automation, Addison Wesley: Reading, MA
See Sections 6.2, 6.3
[Gilb, 1993] Gilb, Tom and Graham, Dorothy (1993) Software Inspection, Addison Wesley: Reading, MA
See Sections 3.2.2, 3.2.4
[Hetzel, 1988] Hetzel, W. (1988) Complete Guide to Software Testing, QED: Wellesley, MA
See Sections 1.3, 1.4, 1.5, 2.1, 2.2, 2.3, 2.4, 4.1, 5.1, 5.3
[Kaner, 2002] Kaner, C., Bach, J. and Pettichord, B. (2002) Lessons Learned in Software Testing, John Wiley & Sons: New York
See Sections 1.1, 4.5, 5.2


[Myers, 1979] Myers, Glenford J. (1979) The Art of Software Testing, John Wiley & Sons: New York
See Sections 1.2, 1.3, 2.2, 4.3
[van Veenendaal, 2004] van Veenendaal, E. (ed.) (2004) The Testing Practitioner (Chapters 6, 8, 10), UTN Publishers: The Netherlands
See Sections 3.2, 3.3


8. Appendix A – Syllabus Background

History of this Document
This document was prepared between 2004 and 2011 by a Working Group comprised of members appointed by the International Software Testing Qualifications Board (ISTQB®). It was initially reviewed by a selected review panel, and then by representatives drawn from the international software testing community. The rules used in the production of this document are shown in Appendix C.
This document is the syllabus for the International Foundation Certificate in Software Testing, the first-level international qualification approved by the ISTQB (www.istqb.org).

Objectives of the Foundation Certificate Qualification
o To gain recognition for testing as an essential and professional software engineering specialization
o To provide a standard framework for the development of testers' careers
o To enable professionally qualified testers to be recognized by employers, customers and peers, and to raise the profile of testers
o To promote consistent and good testing practices within all software engineering disciplines
o To identify testing topics that are relevant and of value to industry
o To enable software suppliers to hire certified testers and thereby gain commercial advantage over their competitors by advertising their tester recruitment policy
o To provide an opportunity for testers and those with an interest in testing to acquire an internationally recognized qualification in the subject

Objectives of the International Qualification (adapted from ISTQB meeting at Sollentuna, November 2001)
o To be able to compare testing skills across different countries
o To enable testers to move across country borders more easily
o To enable multinational/international projects to have a common understanding of testing issues
o To increase the number of qualified testers worldwide
o To have more impact/value as an internationally-based initiative than from any country-specific approach
o To develop a common international body of understanding and knowledge about testing through the syllabus and terminology, and to increase the level of knowledge about testing for all participants
o To promote testing as a profession in more countries
o To enable testers to gain a recognized qualification in their native language
o To enable sharing of knowledge and resources across countries
o To provide international recognition of testers and this qualification due to participation from many countries

Entry Requirements for this Qualification
The entry criterion for taking the ISTQB Foundation Certificate in Software Testing examination is that candidates have an interest in software testing. However, it is strongly recommended that candidates also:
o Have at least a minimal background in either software development or software testing, such as six months of experience as a system or user acceptance tester or as a software developer

o Take a course that has been accredited to ISTQB standards (by one of the ISTQB-recognized National Boards).

Background and History of the Foundation Certificate in Software Testing
The independent certification of software testers began in the UK with the British Computer Society's Information Systems Examination Board (ISEB), when a Software Testing Board was set up in 1998 (www.bcs.org.uk/iseb). In 2002, ASQF in Germany began to support a German tester qualification scheme (www.asqf.de). This syllabus is based on the ISEB and ASQF syllabi; it includes reorganized, updated and additional content, and the emphasis is directed at topics that will provide the most practical help to testers.
An existing Foundation Certificate in Software Testing (e.g., from ISEB, ASQF or an ISTQB-recognized National Board) awarded before this International Certificate was released will be deemed to be equivalent to the International Certificate. The Foundation Certificate does not expire and does not need to be renewed. The date it was awarded is shown on the Certificate.
Within each participating country, local aspects are controlled by a national ISTQB-recognized Software Testing Board. Duties of National Boards are specified by the ISTQB, but are implemented within each country. The duties of the country boards are expected to include accreditation of training providers and the setting of exams.


9. Appendix B – Learning Objectives/Cognitive Level of Knowledge
The following learning objectives are defined as applying to this syllabus. Each topic in the syllabus will be examined according to the learning objective for it.

Level 1: Remember (K1)
The candidate will recognize, remember and recall a term or concept.
Keywords: Remember, retrieve, recall, recognize, know
Example
Can recognize the definition of "failure" as:
o "Non-delivery of service to an end user or any other stakeholder", or
o "Actual deviation of the component or system from its expected delivery, service or result"

Level 2: Understand (K2)
The candidate can select the reasons or explanations for statements related to the topic, and can summarize, compare, classify, categorize and give examples for the testing concept.
Keywords: Summarize, generalize, abstract, classify, compare, map, contrast, exemplify, interpret, translate, represent, infer, conclude, categorize, construct models
Examples
Can explain the reason why tests should be designed as early as possible:
o To find defects when they are cheaper to remove
o To find the most important defects first
Can explain the similarities and differences between integration and system testing:
o Similarities: testing more than one component, and can test non-functional aspects
o Differences: integration testing concentrates on interfaces and interactions, and system testing concentrates on whole-system aspects, such as end-to-end processing

Level 3: Apply (K3)
The candidate can select the correct application of a concept or technique and apply it to a given context.
Keywords: Implement, execute, use, follow a procedure, apply a procedure
Examples
o Can identify boundary values for valid and invalid partitions
o Can select test cases from a given state transition diagram in order to cover all transitions (a sketch follows below)
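To make the second K3 example concrete, the sketch below derives one test case per transition from a state transition table, which is enough for all-transitions (0-switch) coverage. The card-reader states and events are hypothetical and chosen only for illustration; they are not taken from this syllabus.

```python
# Hypothetical state transition table for a simple card reader:
# (current state, event) -> next state. Invented for illustration.
transitions = {
    ("idle", "insert card"): "card inserted",
    ("card inserted", "enter valid PIN"): "authorized",
    ("card inserted", "eject"): "idle",
    ("authorized", "eject"): "idle",
}

def all_transition_tests(table):
    """Derive one test case per table entry, covering every transition once."""
    return [
        {"start": state, "event": event, "expected": target}
        for (state, event), target in table.items()
    ]

for case in all_transition_tests(transitions):
    print(f"Given {case['start']!r}, when {case['event']!r}, "
          f"then expect {case['expected']!r}")
```

A candidate at K3 performs exactly this derivation by hand from the diagram; the code merely makes the procedure explicit.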

Level 4: Analyze (K4)
The candidate can separate information related to a procedure or technique into its constituent parts for better understanding, and can distinguish between facts and inferences. Typical application is to analyze a document, software or project situation and propose appropriate actions to solve a problem or task.
Keywords: Analyze, organize, find coherence, integrate, outline, parse, structure, attribute, deconstruct, differentiate, discriminate, distinguish, focus, select


Examples
o Analyze product risks and propose preventive and corrective mitigation activities
o Describe which portions of an incident report are factual and which are inferred from results

Reference
(For the cognitive levels of learning objectives)
Anderson, L. W. and Krathwohl, D. R. (eds) (2001) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon


10. Appendix C – Rules Applied to the ISTQB Foundation Syllabus
The rules listed here were used in the development and review of this syllabus. (A "TAG" is shown after each rule as a shorthand abbreviation of the rule.)

10.1.1 General Rules
SG1. The syllabus should be understandable and absorbable by people with zero to six months (or more) experience in testing. (6-MONTH)
SG2. The syllabus should be practical rather than theoretical. (PRACTICAL)
SG3. The syllabus should be clear and unambiguous to its intended readers. (CLEAR)
SG4. The syllabus should be understandable to people from different countries, and easily translatable into different languages. (TRANSLATABLE)
SG5. The syllabus should use American English. (AMERICAN-ENGLISH)

10.1.2 Current Content
SC1. The syllabus should include recent testing concepts and should reflect current best practices in software testing where this is generally agreed. The syllabus is subject to review every three to five years. (RECENT)
SC2. The syllabus should minimize time-related issues, such as current market conditions, to enable it to have a shelf life of three to five years. (SHELF-LIFE)

10.1.3 Learning Objectives
LO1. Learning objectives should distinguish between items to be recognized/remembered (cognitive level K1), items the candidate should understand conceptually (K2), items the candidate should be able to practice/use (K3), and items the candidate should be able to use to analyze a document, software or project situation in context (K4). (KNOWLEDGE-LEVEL)
LO2. The description of the content should be consistent with the learning objectives. (LO-CONSISTENT)
LO3. To illustrate the learning objectives, sample exam questions for each major section should be issued along with the syllabus. (LO-EXAM)

10.1.4 Overall Structure
ST1. The structure of the syllabus should be clear and allow cross-referencing to and from other parts, from exam questions and from other relevant documents. (CROSS-REF)
ST2. Overlap between sections of the syllabus should be minimized. (OVERLAP)
ST3. Each section of the syllabus should have the same structure. (STRUCTURE-CONSISTENT)
ST4. The syllabus should contain version, date of issue and page number on every page. (VERSION)
ST5. The syllabus should include a guideline for the amount of time to be spent in each section (to reflect the relative importance of each topic). (TIME-SPENT)

References
SR1. Sources and references will be given for concepts in the syllabus to help training providers find out more information about the topic. (REFS)
SR2. Where there are not readily identified and clear sources, more detail should be provided in the syllabus. For example, definitions are in the Glossary, so only the terms are listed in the syllabus. (NON-REF DETAIL)


Sources of Information
Terms used in the syllabus are defined in the ISTQB Glossary of Terms used in Software Testing. A version of the Glossary is available from ISTQB. A list of recommended books on software testing is also issued in parallel with this syllabus. The main book list is part of the References section.


11. Appendix D – Notice to Training Providers

Each major subject heading in the syllabus is assigned an allocated time in minutes. The purpose of this is both to give guidance on the relative proportion of time to be allocated to each section of an accredited course, and to give an approximate minimum time for the teaching of each section. Training providers may spend more time than is indicated, and candidates may spend more time again in reading and research. A course curriculum does not have to follow the same order as the syllabus.
The syllabus contains references to established standards, which must be used in the preparation of training material. Each standard used must be the version quoted in the current version of this syllabus. Other publications, templates or standards not referenced in this syllabus may also be used and referenced, but will not be examined.
All K3 and K4 Learning Objectives require a practical exercise to be included in the training materials.


12. Appendix E – Release Notes

Release 2010
1. Changes to Learning Objectives (LOs) include some clarification:
a. Wording changed for the following LOs (content and level of LO remain unchanged): LO-1.2.2, LO-1.3.1, LO-1.4.1, LO-1.5.1, LO-2.1.1, LO-2.1.3, LO-2.4.2, LO-4.1.3, LO-4.2.1, LO-4.2.2, LO-4.3.1, LO-4.3.2, LO-4.3.3, LO-4.4.1, LO-4.4.2, LO-4.4.3, LO-4.6.1, LO-5.1.2, LO-5.2.2, LO-5.3.2, LO-5.3.3, LO-5.5.2, LO-5.6.1, LO-6.1.1, LO-6.2.2, LO-6.3.2.
b. LO-1.1.5 has been reworded and upgraded to K2, because a comparison of defect-related terms can be expected.
c. LO-1.2.3 (K2) has been added. The content was already covered in the 2007 syllabus.
d. LO-3.1.3 (K2) now combines the content of LO-3.1.3 and LO-3.1.4.
e. LO-3.1.4 has been removed from the 2010 syllabus, as it is partially redundant with LO-3.1.3.
f. LO-3.2.1 has been reworded for consistency with the 2010 syllabus content.
g. LO-3.3.2 has been modified, and its level has been changed from K1 to K2, for consistency with LO-3.1.2.
h. LO-4.4.4 has been modified for clarity, and has been changed from a K3 to a K4. Reason: LO-4.4.4 had already been written in a K4 manner.
i. LO-6.1.2 (K1) was dropped from the 2010 syllabus and was replaced with LO-6.1.3 (K2). There is no LO-6.1.2 in the 2010 syllabus.
2. Consistent use of the term test approach according to the definition in the glossary. The term test strategy will not be required as a term to recall.
3. Chapter 1.4 now contains the concept of traceability between test basis and test cases.
4. Chapter 2.x now contains test objects and test basis.
5. Re-testing is now the main term in the glossary instead of confirmation testing.
6. The aspect of data quality and testing has been added at several locations in the syllabus: data quality and risk in Chapters 2.2, 5.5, 6.1.8.
7. Chapter 5.2.3 Entry Criteria has been added as a new subchapter. Reason: consistency with Exit Criteria (-> entry criteria added to LO-5.2.9).
8. Consistent use of the terms test strategy and test approach with their definition in the glossary.
9. Chapter 6.1 shortened because the tool descriptions were too large for a 45-minute lesson.
10. IEEE Std 829:2008 has been released. This version of the syllabus does not yet consider this new edition. Section 5.2 refers to the document Master Test Plan. The content of the Master Test Plan is covered by the concept that the document "Test Plan" covers different levels of planning: test plans for the test levels can be created, as well as a test plan on the project level covering multiple test levels. The latter is named Master Test Plan in this syllabus and in the ISTQB Glossary.
11. Code of Ethics has been moved from the CTAL to CTFL.

Release 2011
Changes made with the "maintenance release" 2011:
1. General: Working Party replaced by Working Group.
2. Replaced post-conditions by postconditions in order to be consistent with the ISTQB Glossary 2.1.
3. First occurrence: ISTQB replaced by ISTQB®.
4. Introduction to this Syllabus: Descriptions of Cognitive Levels of Knowledge removed, because this was redundant to Appendix B.


5. Section 1.6: Because the intent was not to define a Learning Objective for the "Code of Ethics", the cognitive level for the section has been removed.
6. Sections 2.2.1, 2.2.2, 2.2.3, 2.2.4 and 3.2.3: Fixed formatting issues in lists.
7. Section 2.2.2: The word failure was not correct for "…isolate failures to a specific component…"; therefore it was replaced with "defect" in that sentence.
8. Section 2.3: Corrected formatting of the bullet list of test objectives related to test terms in section Test Types (K2).
9. Section 2.3.4: Updated description of debugging to be consistent with Version 2.1 of the ISTQB Glossary.
10. Section 2.4: Removed the word "extensive" from "includes extensive regression testing", because the "extensive" depends on the change (size, risks, value, etc.), as written in the next sentence.
11. Section 3.2: The word "including" has been removed to clarify the sentence.
12. Section 3.2.1: Because the activities of a formal review had been incorrectly formatted, the review process had 12 main activities instead of six, as intended. It has been changed back to six, which makes this section compliant with the Syllabus 2007 and the ISTQB Advanced Level Syllabus 2007.
13. Section 4: Word "developed" replaced by "defined" because test cases get defined and not developed.
14. Section 4.2: Text change to clarify how black-box and white-box testing could be used in conjunction with experience-based techniques.
15. Section 4.3.5: Text change "..between actors, including users and the system.." to "… between actors (users or systems), …".
16. Section 4.3.5: Alternative path replaced by alternative scenario.
17. Section 4.4.2: In order to clarify the term branch testing in the text of Section 4.4, a sentence to clarify the focus of branch testing has been changed.
18. Section 4.5, Section 5.2.6: The term "experienced-based" testing has been replaced by the correct term "experience-based".
19. Section 6.1: Heading "6.1.1 Understanding the Meaning and Purpose of Tool Support for Testing (K2)" replaced by "6.1.1 Tool Support for Testing (K2)".
20. Section 7 / Books: The 3rd edition of [Black, 2001] listed, replacing the 2nd edition.
21. Appendix D: Chapters requiring exercises have been replaced by the generic requirement that all Learning Objectives K3 and higher require exercises. This is a requirement specified in the ISTQB Accreditation Process (Version 1.26).
22. Appendix E: The changed learning objectives between Version 2007 and 2010 are now correctly listed.


13. Index

action word, 63
alpha testing, 24, 27
architecture, 15, 21, 22, 25, 28, 29
archiving, 17, 30
automation, 29
benefits of independence, 47
benefits of using tool, 62
beta testing, 24, 27
black-box technique, 37, 39, 40
black-box test design technique, 39
black-box testing, 28
bottom-up, 25
boundary value analysis, 40
bug, 11
captured script, 62
checklists, 34, 35
choosing test technique, 44
code coverage, 28, 29, 37, 42, 58
commercial off the shelf (COTS), 22
compiler, 36
complexity, 11, 36, 50, 59
component integration testing, 22, 25, 29, 59, 60
component testing, 22, 24, 25, 27, 29, 37, 41, 42
configuration management, 45, 48, 52
configuration management tool, 58
confirmation testing, 13, 15, 16, 21, 28, 29
contract acceptance testing, 27
control flow, 28, 36, 37, 42
coverage, 15, 24, 28, 29, 37, 38, 39, 40, 42, 50, 51, 58, 60, 62
coverage tool, 58
custom-developed software, 27
data flow, 36
data-driven approach, 63
data-driven testing, 62
debugging, 13, 24, 29, 58
debugging tool, 24, 58
decision coverage, 37, 42
decision table testing, 40, 41
decision testing, 42
defect, 10, 11, 13, 14, 16, 18, 21, 24, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 39, 40, 41, 43, 44, 45, 47, 49, 50, 51, 53, 54, 55, 59, 60, 69
defect density, 50, 51
defect tracking tool, 59
development, 8, 11, 12, 13, 14, 18, 21, 22, 24, 29, 32, 33, 36, 38, 44, 47, 49, 50, 52, 53, 55, 59, 67
development model, 21, 22
drawbacks of independence, 47
driver, 24
dynamic analysis tool, 58, 60
dynamic testing, 13, 31, 32, 36
emergency change, 30
enhancement, 27, 30
entry criteria, 33
equivalence partitioning, 40
error, 10, 11, 18, 43, 50
error guessing, 18, 43, 50
exhaustive testing, 14
exit criteria, 13, 15, 16, 33, 35, 45, 48, 49, 50, 51
expected result, 16, 38, 48, 63
experience-based technique, 37, 39, 43
experience-based test design technique, 39
exploratory testing, 43, 50
factory acceptance testing, 27
failure, 10, 11, 13, 14, 18, 21, 24, 26, 32, 36, 43, 46, 50, 51, 53, 54, 69
failure rate, 50, 51
fault, 10, 11, 43
fault attack, 43
field testing, 24, 27
follow-up, 33, 34, 35
formal review, 31, 33
functional requirement, 24, 26
functional specification, 28
functional task, 25
functional test, 28
functional testing, 28
functionality, 24, 25, 28, 50, 53, 62
impact analysis, 21, 30, 38
incident, 15, 16, 17, 19, 24, 46, 48, 55, 58, 59, 62
incident logging, 55
incident management, 48, 55, 58
incident management tool, 58, 59
incident report, 46, 55
independence, 18, 47, 48
informal review, 31, 33, 34
inspection, 31, 33, 34, 35
inspection leader, 33
integration, 13, 22, 24, 25, 27, 29, 36, 40, 41, 42, 45, 48, 59, 60, 69
integration testing, 22, 24, 25, 29, 36, 40, 45, 59, 60, 69
interoperability testing, 28
introducing a tool into an organization, 57, 64
ISO 9126, 11, 29, 30, 65
iterative-incremental development model, 22
keyword-driven approach, 63
keyword-driven testing, 62
kick-off, 33
learning objective, 8, 9, 10, 21, 31, 37, 45, 57, 69, 70, 71
load testing, 28, 58, 60
load testing tool, 58
maintainability testing, 28
maintenance testing, 21, 30
management tool, 48, 58, 59, 63
maturity, 17, 33, 38, 64
metric, 33, 35, 45
mistake, 10, 11, 16
modelling tool, 59
moderator, 33, 34, 35
monitoring tool, 48, 58
non-functional requirement, 21, 24, 26
non-functional testing, 11, 28
objectives for testing, 13
off-the-shelf, 22
operational acceptance testing, 27
operational test, 13, 23, 30
patch, 30
peer review, 33, 34, 35
performance testing, 28, 58
performance testing tool, 58, 60
pesticide paradox, 14
portability testing, 28
probe effect, 58
procedure, 16
product risk, 18, 45, 53, 54
project risk, 12, 45, 53
prototyping, 22
quality, 8, 10, 11, 13, 19, 28, 37, 38, 47, 48, 50, 53, 55, 59
rapid application development (RAD), 22
Rational Unified Process (RUP), 22
recorder, 34
regression testing, 15, 16, 21, 28, 29, 30
Regulation acceptance testing, 27
reliability, 11, 13, 28, 50, 53, 58
reliability testing, 28
requirement, 13, 22, 24, 32, 34
requirements management tool, 58
requirements specification, 26, 28
responsibilities, 24, 31, 33
re-testing, 29, See confirmation testing
review, 13, 19, 31, 32, 33, 34, 35, 36, 47, 48, 53, 55, 58, 67, 71
review tool, 58
reviewer, 33, 34
risk, 11, 12, 13, 14, 25, 26, 29, 30, 38, 44, 45, 49, 50, 51, 53, 54
risk-based approach, 54
risk-based testing, 50, 53, 54
risks, 11, 25, 49, 53
risks of using tool, 62
robustness testing, 24
roles, 8, 31, 33, 34, 35, 47, 48, 49
root cause, 10, 11
scribe, 33, 34
scripting language, 60, 62, 63
security, 27, 28, 36, 47, 50, 58
security testing, 28
security tool, 58, 60
simulators, 24
site acceptance testing, 27
software development, 8, 11, 21, 22
software development model, 22
special considerations for some types of tool, 62
specification-based technique, 29, 39, 40
specification-based testing, 37
stakeholders, 12, 13, 16, 18, 26, 39, 45, 54
state transition testing, 40, 41
statement coverage, 42
statement testing, 42
static analysis, 32, 36
static analysis tool, 31, 36, 58, 59, 63
static technique, 31, 32
static testing, 13, 32
stress testing, 28, 58, 60
stress testing tool, 58, 60
structural testing, 24, 28, 29, 42
structure-based technique, 39, 42
structure-based test design technique, 42
structure-based testing, 37, 42
stub, 24
success factors, 35
system integration testing, 22, 25
system testing, 13, 22, 24, 25, 26, 27, 49, 69
technical review, 31, 33, 34, 35
test analysis, 15, 38, 48, 49
test approach, 38, 48, 50, 51
test basis, 15
test case, 13, 14, 15, 16, 24, 28, 32, 37, 38, 39, 40, 41, 42, 45, 51, 55, 59, 69
test case specification, 37, 38, 55
test cases, 28
test closure, 10, 15, 16
test condition, 38
test conditions, 13, 15, 16, 28, 38, 39
test control, 15, 45, 51
test coverage, 15, 50
test data, 15, 16, 38, 48, 58, 60, 62, 63
test data preparation tool, 58, 60
test design, 13, 15, 22, 37, 38, 39, 43, 48, 58, 62
test design specification, 45
test design technique, 37, 38, 39
test design tool, 58, 59
Test Development Process, 38
test effort, 50
test environment, 15, 16, 17, 24, 26, 48, 51
test estimation, 50
test execution, 13, 15, 16, 32, 36, 38, 43, 45, 57, 58, 60
test execution schedule, 38
test execution tool, 16, 38, 57, 58, 60, 62
test harness, 16, 24, 52, 58, 60
test implementation, 16, 38, 49
test leader, 18, 45, 47, 55
test leader tasks, 47
test level, 21, 22, 24, 28, 29, 30, 37, 40, 42, 44, 45, 48, 49
test log, 15, 16, 43, 60
test management, 45, 58
test management tool, 58, 63
test manager, 8, 47, 53
test monitoring, 48, 51
test objective, 13, 22, 28, 43, 44, 48, 51
test oracle, 60
test organization, 47
test plan, 15, 16, 32, 45, 48, 49, 52, 53, 54
test planning, 15, 16, 45, 49, 52, 54
test planning activities, 49
test procedure, 15, 16, 37, 38, 45, 49
test procedure specification, 37, 38
test progress monitoring, 51
test report, 45, 51
test reporting, 45, 51
test script, 16, 32, 38
test strategy, 47
test suite, 29
test summary report, 15, 16, 45, 48, 51
test tool classification, 58
test type, 21, 28, 30, 48, 75
test-driven development, 24
tester, 10, 13, 18, 34, 41, 43, 45, 47, 48, 52, 62, 67
tester tasks, 48
test-first approach, 24
testing and quality, 11
testing principles, 10, 14
testware, 15, 16, 17, 48, 52
tool support, 24, 32, 42, 57, 62
tool support for management of testing and tests, 59
tool support for performance and monitoring, 60
tool support for static testing, 59
tool support for test execution and logging, 60
tool support for test specification, 59
tool support for testing, 57, 62
top-down, 25
traceability, 38, 48, 52
transaction processing sequences, 25
types of test tool, 57, 58
unit test framework, 24, 58, 60
unit test framework tool, 58, 60
upgrades, 30
usability, 11, 27, 28, 45, 47, 53
usability testing, 28, 45
use case test, 37, 40
use case testing, 37, 40, 41
use cases, 22, 26, 28, 41
user acceptance testing, 27
validation, 22
verification, 22
version control, 52
V-model, 22
walkthrough, 31, 33, 34
white-box test design technique, 39, 42
white-box testing, 28, 42

te esting and qu uality ................................... 11 te esting princip ples ............................... 10, 14 15, 16, 17, 48, te estware............................ 1 4 52 to ool support ...................... 2 24, 32, 42, 57, 5 62 to ool support fo or manageme ent of testing g and tests ..................................................... 59 to ool support fo or performance and moniitoring 60 to ool support fo or static testin ng ................... 59 to ool support fo or test execution and logg ging60 to ool support fo or test speciffication ............ 59 to ool support fo or testing ...................... 57, 5 62 to op-down .................................................. 25 tra aceability .................................... 38, 48, 4 52 tra ansaction pro ocessing seq quences ......... 25 ty ypes of test to ool ................................ 57, 5 58 un nit test frame ework ...................... 24, 58, 5 60 un nit test frame ework tool ..................... 58, 5 60 up pgrades .................................................. 30 us sability ...................... 11, 2 27, 28, 45, 47, 4 53 us sability testin ng ................................. 28, 2 45 us se case test .................... . ................. 37, 3 40 us se case testing .......................... 37, 40, 4 41 us se cases ............................... 22, 26, 28, 2 41 us ser acceptan nce testing .......................... 27 va alidation .................................................. 22 ve erification ................................................ 22 ve ersion contro ol ......................................... 52 V--model .................................................... 22 walkthrough .................................. 31, 33, 3 34 white-box testt design tech hnique ....... 39, 3 42 white-box testting ............................... 28, 2 42

Page 78 of 78

31-Ma ar-2011

Smile Life

When life gives you a hundred reasons to cry, show life that you have a thousand reasons to smile

Get in touch

© Copyright 2015 - 2024 PDFFOX.COM - All rights reserved.