May 29th, 2009
I will be honest that I have never had any luck hiring a good QA. While my success rate in hiring a good developer is a whopping 66%, my success rate in hiring a good QA dips to an abysmal 20%. That number essentially comes from my past record of having hired 20 QAs, 4 of whom turned out to be superstars, and the others not so much.
So the question arises again: why am I hiring another QA today? Either way, I am about 22 minutes away from interviewing a candidate, and I am desperately looking for clues, cues, and questions to ask QAs. Some of the good questions I have found are:
- Describe any good or interesting bug that you found. How did you follow it up until resolution?
- After you file a bug, what are the various things that can happen to it? (This is the same as asking, more directly, for the different states of a bug.)
- What is the difference between a test case and a test report?
- How do you handle when a developer marks your bug as “invalid”? (The question is whether the QA can zero in on the SRS as the ultimate resolver).
- How do you determine what to test? (Again, the SRS)
- How do you handle the situation that there is no SRS? (Initiative)
- We are starting to design a product that will be ready in 10 months. Rather than hiring you right now, wouldn't it be more beneficial to hire you at that time, since you will not have anything to do until then anyway? (A deliberately misleading question.)
- When would you start creating the test cases?
Ok, that is how far I got, and I need to go to the conference room now.
December 21st, 2007
What are the responsibilities of a Quality Assurance engineer?
Firstly, you may know this role by different names. Earlier, it used to be called “tester”. Then, to glamorize the role and to emphasize its technical nature and importance, the term “QA” became the industry standard.
As of 2007, QA is a hot field that requires significant technical expertise – to the same extent as a developer's, since a QA writes code just like a developer does.
Whatever the name, QA has these roles:
- Create (and keep up to date!) test scenarios (groups of test cases).
- Create test programs/test scripts to automate the testing.
- Run the test scenarios to generate test reports. In this process, the bugs that are found are logged in bug tracking software.
- Verify fixed bugs
One of the laws of good bug tracking is that only the person who opened a bug can close it. You open a bug (NEW). Someone fixes it (FIXED). You verify it (VERIFIED CLOSED or REOPENED).
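That lifecycle can be sketched as a small state machine. This is a minimal sketch, not any particular bug tracker's model: the `Bug` class is hypothetical, and the state names and the "only the reporter may close it" rule come straight from the rule above.

```python
# Minimal sketch of the bug lifecycle described above.
# The Bug class is hypothetical; states follow NEW -> FIXED -> VERIFIED CLOSED / REOPENED.

class Bug:
    def __init__(self, opened_by):
        self.opened_by = opened_by   # only this person may close the bug
        self.state = "NEW"

    def fix(self):
        # A developer can fix a bug that is new or has been reopened.
        assert self.state in ("NEW", "REOPENED")
        self.state = "FIXED"

    def verify(self, verifier, passed):
        # Only the original reporter can verify, and only a fixed bug.
        assert verifier == self.opened_by
        assert self.state == "FIXED"
        self.state = "VERIFIED CLOSED" if passed else "REOPENED"

bug = Bug(opened_by="alice")
bug.fix()
bug.verify("alice", passed=True)
print(bug.state)  # VERIFIED CLOSED
```

Encoding the rule as an assertion makes it impossible for, say, the developer to quietly close their own fix without the reporter ever re-testing it.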
December 18th, 2007
One would think that someone who has worked in software for more than 1 week would know these things, but that is not always the case. So, some definitions are in order.
A test case is an input/output combination used to determine whether the system is behaving as expected. Test cases come in at least two kinds: functional test cases and performance test cases. A functional test case is used to determine whether the system is functionally behaving as expected; a performance test case is used to determine whether the system can handle the expected load. Most often, “test case” refers to a functional test case.
Example: Let us use a phonebook as an example software system. An example test case might be: “Searching for a non-existent last name”.
A test scenario is a collection of test cases that tests a group of related functional specifications. An example scenario could be “Searching”, which may consist of, say, 20 test cases.
A test report is a record of one execution of your test scenarios. When the test scenarios are run against a system, the test report documents which tests passed and which failed.
- Test reports are generated anew. Every time a tester tests the system, a new test report is created. It is very possible (indeed likely) that many test reports come from the same test scenarios.
- You develop the test scenarios once – and create the test scenarios document.
- You run the test cases (hopefully, automated) thousands of times – and create thousands of test reports.
- You should modify the test scenarios document when the functional specification of the system (SRS) changes.
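The write-once / run-many relationship between scenarios and reports can be sketched with the phonebook example. The `Phonebook` class and the contents of the “Searching” scenario are hypothetical, invented just to illustrate the distinction.

```python
# Hypothetical phonebook system, used only to illustrate the terms above.
class Phonebook:
    def __init__(self, entries):
        self.entries = entries  # {last_name: phone_number}

    def search(self, last_name):
        return self.entries.get(last_name)

# The "Searching" test scenario: a group of related test cases,
# written once (two cases here rather than the 20 in the text).
searching_scenario = [
    ("existing last name", lambda pb: pb.search("Lear") == "555-0100"),
    ("non-existent last name", lambda pb: pb.search("Nobody") is None),
]

def run_scenario(scenario, system):
    """Each run of the same scenario produces a fresh test report."""
    return {name: ("PASS" if check(system) else "FAIL")
            for name, check in scenario}

# Two runs against two builds -> two reports from one scenario document.
report = run_scenario(searching_scenario, Phonebook({"Lear": "555-0100"}))
print(report)  # both cases PASS against this phonebook
```

If the SRS later changes what “Searching” must do, you edit `searching_scenario` once, and every subsequent report reflects the updated specification.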
October 16th, 2007
When a good QA tests an application, she creates data that is meaningful. This data is a good reflection of the real world, or at least of the scenario being tested. For example, if she is creating a document with the latest word processing software, she would create a document called “ISS Research Report”, or “King Lear”, or “Yard Sale”. Or she may create documents called “Deletable Document”, “Read-only Document”, or “Tabular Document”.
When a bad QA tests an application, she creates data that is by design temporary. The usual documents created are “Alice1”, “Alice2”, “Alice3”, “BobTest”, “asdkfjsdf”, etc.
After a good QA has finished testing an application, the test data becomes a good demonstration, and an automatic user guide.
After a bad QA has finished testing an application, the DBA has cleanup tasks.