What is Quality Assurance?

Quality assurance is any systematic process of checking whether a product or service being developed meets its specified requirements.

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects).

Why does software have bugs?
  • Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).
  • Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tier distributed systems, applications utilizing multiple local and remote web services, data communications, enormous relational databases, security complexities, and the sheer size of applications have all contributed to the exponential growth in software/system complexity.
  • Programming errors - programmers, like anyone else, can make mistakes.
  • Changing requirements (whether documented or undocumented) - the end-user may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.
  • Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
  • Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').
  • Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.
What are Software Testing Types?
  • Black box testing: You don't need detailed knowledge of the internal design or the code for this type of test. Tests are based mainly on requirements, specifications, and functionality.
  • White box testing: This test is based on detailed knowledge of the internal design and code. Tests target specific code statements and coding styles.
  • Unit testing: The most 'micro' scale of testing, used to test specific functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; it may require developing test driver modules or test harnesses. (A minimal sketch follows this list.)
  • Incremental integration testing: Continuous testing of an application as new functionality is added. Requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. Done by programmers or by testers.
  • Integration testing: Testing of combined parts of an application to determine whether they function together correctly. It applies to any application composed of several independent sub-applications or modules.
  • Functional testing: Black-box-type testing geared to the functional requirements of an application. Typically done by software testers, but programmers should also check that their code works before releasing it.
  • System testing: Black-box-type testing based on the overall requirements specifications; covers all combined parts of a system.
  • End-to-end testing: Similar to system testing; involves testing a complete application environment in a situation that mimics real-world use. May require interacting with a database, using network communications, or interacting with other hardware, applications, or systems.
  • Sanity testing or smoke testing: An initial testing effort to determine whether a new software version is performing well enough to begin a major testing effort. For example, if the new software is crashing frequently or corrupting databases, it is not a good idea to start major testing until those problems are fixed.
  • Regression testing: Re-testing after the software has been modified to fix problems. The challenge can be determining what needs to be re-tested and all the interactions among functions, especially near the end of the software cycle. Automated testing is especially useful for this type of testing.
  • Acceptance testing: Final testing based on the agreements made with the customer.
  • Load / stress / performance testing: Testing an application under heavy load, such as simulating very heavy traffic in a voice or data network or against a web site, to determine at what point the system degrades or fails. (A simple sketch follows this list.)
  • Usability testing: Testing to determine how user-friendly the application is, from the perspective of the end user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
  • Install / uninstall testing: Testing of full, partial, or upgrade install/uninstall processes.
  • Recovery / failover testing: Testing to determine how well a system recovers from crashes, failures, or other major problems.
  • Security testing: Testing to determine how well the system protects itself against unauthorized internal or external access and intentional damage. May require sophisticated testing techniques.
  • Compatibility testing: Testing how well the software performs in a particular hardware, software, operating system, or network environment - for example, testing a web site in different browsers and browser versions.
  • Exploratory testing: Often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
  • Ad-hoc testing: Similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
  • Context-driven testing: Testing driven by an understanding of the environment, culture, and intended use of the software. For example, the testing approach for life-critical medical equipment software would be completely different from that for a low-cost computer game.
  • Comparison testing: Comparing software weaknesses and strengths to competing products.
  • Alpha testing: Testing of an application when development is nearing completion. Minor design changes may still be made as a result of such testing. Typically done by end users or others, not by programmers or testers.
  • Beta testing: Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end users or others, not by programmers or testers.
  • Mutation testing: A method for determining whether a set of test data or test cases is useful, by deliberately introducing small code changes (defects) and re-running the original tests to see whether the defects are detected; illustrated after this list. Proper implementation requires large computational resources.
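
To make the unit testing item above concrete, here is a minimal sketch using Python's built-in unittest module. The discount_price function and the behavior it is expected to have are hypothetical examples, not taken from any particular product.

    import unittest

    def discount_price(price, percent):
        """Hypothetical function under test: reduce price by the given percentage."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class DiscountPriceTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(discount_price(200.0, 25), 150.0)

        def test_zero_discount_returns_original_price(self):
            self.assertEqual(discount_price(99.99, 0), 99.99)

        def test_invalid_percent_raises(self):
            with self.assertRaises(ValueError):
                discount_price(100.0, 150)

    if __name__ == "__main__":
        unittest.main()

Running python -m unittest in the directory containing this file executes the tests; any failing assertion points directly at the function and expectation that broke.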
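
Continuing the same hypothetical example, mutation testing deliberately introduces a small defect (a 'mutant') and checks whether the existing tests catch it:

    # Hypothetical mutant of discount_price: the subtraction is flipped to addition.
    def discount_price_mutant(price, percent):
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 + percent / 100), 2)

    # Re-running test_typical_discount against this mutant fails (it returns 250.0
    # instead of 150.0), so the test suite "kills" the mutant. Mutants that survive
    # point to missing or weak test cases.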
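
For the load / stress / performance item, a very small load-test sketch can be built from the Python standard library alone. The URL, user count, and request count below are hypothetical placeholders; real load testing normally uses dedicated tools and much larger volumes.

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8080/health"   # hypothetical endpoint under test
    CONCURRENT_USERS = 50
    REQUESTS_PER_USER = 20

    def one_user(_):
        # Each simulated user sends a burst of requests and records response times.
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            with urlopen(URL, timeout=5) as response:
                response.read()
            timings.append(time.perf_counter() - start)
        return timings

    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = [t for user in pool.map(one_user, range(CONCURRENT_USERS)) for t in user]

    print(f"requests: {len(results)}, "
          f"average: {sum(results) / len(results):.3f}s, "
          f"slowest: {max(results):.3f}s")

The point at which the average or slowest response time, or the error rate, becomes unacceptable is the data the load test is meant to produce.
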
5 common problems in the software development process
  • Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there may be problems.
  • Unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.
  • Inadequate testing - no one will know whether or not the software is any good until customers complain or systems crash.
  • Featuritis - requests to add on new features after development goals are agreed on.
  • Miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems can be expected.
5 common solutions to software development problems
  • Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. In 'agile'-type environments, continuous close coordination with customers/end-users is necessary to ensure that changing/emerging requirements are understood.
  • Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.
  • Adequate testing - start testing early on, re-test after fixes or changes, and plan for adequate time for testing and bug-fixing. 'Early' testing could include static code analysis/testing, test-first development, unit testing by developers, built-in testing and diagnostic capabilities, automated post-build testing, etc.
  • Stick to initial requirements where feasible - be prepared to defend against excessive changes and additions once development has begun, and be prepared to explain the consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations. In 'agile'-type environments, initial requirements may be expected to change significantly, requiring that true agile processes be in place and followed.
  • Communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - groupware, wikis, bug-tracking and change-management tools, intranet capabilities, etc.; ensure that information/documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes and/or continuous communication with end-users where possible to clarify expectations.
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.

The role of documentation in QA

Generally, the larger the team/organization, the more useful it will be to stress documentation, in order to manage and communicate more efficiently. (Note that documentation may be electronic, not necessarily in printable form, and may be embedded in code comments, may be embodied in well-written test cases, user stories, etc.) QA practices may be documented to enhance their repeatability. Specifications, designs, business rules, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. may be documented in some form. There would ideally be a system for easily finding and obtaining information and determining what documentation will have a particular piece of information. Change management for documentation can be used where appropriate. For agile software projects, it should be kept in mind that one of the agile values is "Working software over comprehensive documentation", which does not mean 'no' documentation. Agile projects tend to stress the short term view of project needs; documentation often becomes more important in a project's long-term context.

What's a 'test plan'?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so overly detailed that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
  1.     Title
  2.     Identification of software including version/release numbers
  3.     Revision history of document including authors, dates, approvals
  4.     Table of Contents
  5.     Purpose of document, intended audience
  6.     Objective of testing effort
  7.     Software product overview
  8.     Relevant related document list, such as requirements, design documents, other test plans, etc.
  9.     Relevant standards or legal requirements
  10.     Traceability requirements
  11.     Relevant naming conventions and identifier conventions
  12.     Overall software project organization and personnel/contact-info/responsibilities
  13.     Test organization and personnel/contact-info/responsibilities
  14.     Assumptions and dependencies
  15.     Project risk analysis
  16.     Testing priorities and focus
  17.     Scope and limitations of testing
  18.     Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
  19.     Outline of data input equivalence classes, boundary value analysis, error classes (a brief sketch follows this list)
  20.     Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
  21.     Test environment validity analysis - differences between the test and production systems and their impact on test validity.
  22.     Test environment setup and configuration issues
  23.     Software migration processes
  24.     Software CM processes
  25.     Test data setup requirements
  26.     Database setup requirements
  27.     Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
  28.     Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
  29.     Test automation - justification and overview
  30.     Test tools to be used, including versions, patches, etc.
  31.     Test script/test code maintenance processes and version control
  32.     Problem tracking and resolution - tools and processes
  33.     Project test metrics to be used
  34.     Reporting requirements and testing deliverables
  35.     Software entrance and exit criteria
  36.     Initial sanity testing period and criteria
  37.     Test suspension and restart criteria
  38.     Personnel allocation
  39.     Personnel pre-training needs
  40.     Test site/location
  41.     Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
  42.     Relevant proprietary, classified, security, and licensing issues.
  43.     Open issues
  44.     Appendix - glossary, acronyms, etc.
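
As a concrete illustration of item 19, the test plan can outline equivalence classes and boundary values for each input. The sketch below assumes a hypothetical age field that must accept whole numbers from 18 to 65 inclusive:

    import unittest

    def is_valid_age(age):
        """Hypothetical validator: accepts whole numbers from 18 to 65 inclusive."""
        return isinstance(age, int) and 18 <= age <= 65

    class AgeInputTest(unittest.TestCase):
        def test_boundary_values(self):
            # Boundary value analysis: just below, on, and just above each limit.
            cases = [(17, False), (18, True), (19, True), (64, True), (65, True), (66, False)]
            for age, expected in cases:
                with self.subTest(age=age):
                    self.assertEqual(is_valid_age(age), expected)

        def test_equivalence_classes(self):
            # One representative value per equivalence class is usually enough.
            self.assertTrue(is_valid_age(40))    # valid class: 18 to 65
            self.assertFalse(is_valid_age(5))    # invalid class: below the range
            self.assertFalse(is_valid_age(90))   # invalid class: above the range

    if __name__ == "__main__":
        unittest.main()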

