The ad hoc testing technique is often criticized as unreliable and unsystematic. For most development cases this is true, yet ad hoc testing serves its purpose when a quick functionality assessment is needed. Done right, it can help an agile team reach release within a tighter timeframe and budget.
The QA department of DDI Development doesn’t apply this way of quality assurance very often. However, we opt for it when a product or a part of it involves only a small or moderate number of tasks, so that composing test cases would be a superfluous effort. A QA specialist must know the system well; otherwise the ad hoc approach may let substantial flaws slip into the release.
From Symfony 2.4 to 2.7
A framework update is among those software surgeries you have to perform in order to lay the groundwork for successful improvements and scaling. You can skip this comprehensive measure, but then your prospects of using newer framework features become vague.
We are currently engaged in the development and improvement of an investment platform that gives members access to exclusive share offers and operates on a membership subscription model.
The platform is powered by the PHP/Symfony framework, and we didn’t build the product from scratch. As the system had been in development for about a year, it lacked a number of technology upgrades. Among other tasks, we had to update the Symfony version from 2.4 to 2.7, test the system, and fix the bugs that emerged.
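The version bump itself boils down to raising the constraint in composer.json and letting Composer resolve the dependencies. A minimal sketch, assuming a standard Symfony 2.x project layout (the project’s actual constraints and bundle list would have been longer):

```json
{
    "require": {
        "symfony/symfony": "2.7.*"
    }
}
```

After editing the constraint, running `composer update symfony/symfony --with-dependencies` pulls in the new version, and `php app/console cache:clear` rebuilds the cache; deprecation warnings logged at this point hint at code that will need attention.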
The overall upgrade was scheduled to be completed within two weeks, which is manageable as a high-level task for a single development iteration. One of the most vulnerable parts of the system after the upgrade was the admin panel, which required testing and further debugging. The admin dashboard supports a variety of tasks within the system: administration tools allow allocating and marketing share offers as well as making other platform changes, including content management.
Ad hoc testing
Although the actual update from one version of Symfony to another takes a couple of hours, the aftermath may be devastating, and fixing everything that broke is a challenge that takes quite a while. That’s the inevitable price you pay for future convenience. For this very reason, we had to assess the resulting malfunctions rapidly in order to start debugging and fit into the fortnight schedule.
Bringing more QA engineers into the project was one of the options discussed during a meeting. It would have provided a more comprehensive approach to testing, yet it required composing all-encompassing test documentation, and testers new to the project would have had to grasp the specifics of an unfamiliar environment before producing results.
Another option was to stick with the QA specialists who had been on the project since the beginning and thus wouldn’t face a learning curve while exploring “terra incognita”. This approach saved more time and allowed us to fit the further debugging into one iteration, as required.
As the QA department had two specialists involved in the project before the update, we opted for ad hoc testing carried out by these engineers. This wasn’t an exhaustive measure, though: the administration tools were scheduled for incremental improvement in the longer run, which left room for further, more elaborate testing.
During the testing we revealed the following set of flaws:
A number of buttons broke. Though their design remained as intended, they malfunctioned after the update, and the underlying relations had to be restored.
Various flaws related to media files. Content management couldn't be conducted properly because media files were uploaded and displayed incorrectly.
Data validation issues. Several field validations simply disappeared.
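Disappearing field validations after a Symfony upgrade are typically fixed by re-checking the constraint declarations on the underlying entities. A hypothetical sketch in Symfony’s annotation style (the ShareOffer class and its fields are illustrative, not the platform’s actual code):

```php
<?php

use Symfony\Component\Validator\Constraints as Assert;

class ShareOffer
{
    /**
     * The title must be present and of a sensible length.
     *
     * @Assert\NotBlank(message="The offer title must not be empty.")
     * @Assert\Length(min=3, max=120)
     */
    private $title;

    /**
     * The minimum investment must be present and non-negative.
     *
     * @Assert\NotBlank()
     * @Assert\Range(min=0)
     */
    private $minimumInvestment;
}
```

It’s also worth noting that Symfony 2.5 introduced a new validation API, so code written against the 2.4-era validator (e.g. calls to `validateValue()`) may need the backward-compatibility layer or migration to `validate()` after moving to 2.7.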
As for flaws not directly related to the update, we uncovered issues with database relations and found that a proper database schema was missing. We started developing one as soon as we had dealt with the post-update flaws.
Achievements and conclusion
The major objective - revealing the post-update issues - was achieved. Choosing the ad hoc approach allowed our team to shorten the QA and bugfix phase and fit it into the timeframe.
Despite the overall success in this particular case, ad hoc testing should not be established as the sole way to assess a system and reveal bugs. Unfortunately, tight budgets often nudge stakeholders to cut QA spending, which not only may degrade the user experience but also leave vulnerabilities open to data breaches.
When and how to use ad hoc testing?
Be extremely cautious. Although ad hoc testing may shorten the development period, there should always be additional safety measures in place to eliminate the chance of missing substantial flaws. In our case, development continued after the framework upgrade, which allowed us to ensure the stability of the system in the long run.
Involve only those who know the system well. Ad hoc testing is unlikely to fail if the QA specialists are experienced with the software environment they deal with. Once you need to engage somebody new, it’s better to stick with test cases.
Test small products or moderate chunks of big ones. The more intricate and diverse your system, the more pitfalls it holds for testers. Trying to tackle a complex environment without proper planning will eventually result in a broken deadline or a nightmare release.
Now that we’ve showcased our ad hoc testing experience, you’ll see more of our cases on the different testing techniques we practise. Stay tuned.