Every programmer likes to think their code is perfect; well, perhaps that's not entirely true. What I mean is that no programmer thanks you for pointing out obvious flaws in their code if they can help it. Yet that is the primary aim of the initial testing phases: to spot major and obvious flaws as early as possible. It is a simple and essential part of the process, and arguably one of the most important phases of the test schedule.

Just like reviews, static analysis searches for defects without executing the code. Unlike reviews, however, static analysis is performed once the code has actually been written. Its goal is to find flaws in software source code and software models. Source code is any sequence of statements written in a human-readable programming language, which can then be converted into equivalent computer-executable code; it is usually produced by the programmer. A software model is a representation of the final solution developed using techniques such as the Unified Modeling Language (UML); it is commonly created by a software designer.

Throughout the testing process the core code should be stored in a central repository with limited access. If alterations are needed to the core code, they should be made as part of the testing schedule, and it is vital that these changes are tracked. You should also limit remote access to this store for security reasons; if remote access is essential, it should be made over a secure connection such as a VPN.

Static analysis can find issues that are difficult to detect during test execution by analysing the program code itself, e.g. as control flow graphs (how control passes between modules) and data flows (ensuring data is defined before it is used). The value of static analysis includes:
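To make the data-flow idea concrete, here is a minimal sketch of the kind of check such tools perform, written with Python's standard `ast` module. It handles only straight-line code at module level (real analysers build full control flow graphs and handle branches, scopes, and builtins); the function name `use_before_assign` is my own illustrative choice, not a real tool's API.

```python
import ast

def use_before_assign(source: str):
    """Very naive data-flow check for straight-line code:
    report names that are read before any assignment to them.
    A sketch only -- real tools build full control-flow graphs
    and know about builtins, imports, and nested scopes."""
    assigned = set()
    problems = []
    tree = ast.parse(source)
    for stmt in tree.body:
        # First collect names *read* in this statement.
        for node in ast.walk(stmt):
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
                if node.id not in assigned:
                    problems.append((node.id, node.lineno))
        # Then record names *written* by this statement.
        for node in ast.walk(stmt):
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
    return problems

# use_before_assign("x = 1\ny = x + z\n") flags z on line 2:
# it is read before anything has been assigned to it.
```

Note that the defect is found without ever running the code: the checker reasons about where data is defined and where it is used, which is exactly what a data-flow analysis does.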

Early discovery of defects before test execution. Just as with reviews, the earlier a defect is found, the cheaper and easier it is to fix.

Early warning about questionable aspects of the code or design, through the computation of metrics such as a high complexity measure. Overly complex code is more prone to error, partly because of the level of attention programmers give it: if they recognise that the code has to be complex, they are more likely to check and double-check that it is correct; if it is unexpectedly complicated, there is a greater chance that a defect will slip through.
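The best-known such metric is McCabe's cyclomatic complexity, which tools typically approximate as one plus the number of decision points in the code. The sketch below, again using Python's `ast` module, illustrates the idea under simplifying assumptions (for example, it counts a whole boolean expression as one decision rather than one per operator):

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough estimate of McCabe cyclomatic complexity:
    1 + the number of decision points found in the code.
    A sketch of the metric static-analysis tools report,
    not a full implementation."""
    decision_nodes = (ast.If, ast.For, ast.While,
                      ast.ExceptHandler, ast.IfExp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(n, decision_nodes) for n in ast.walk(tree))
```

A tool would run this over every function and flag those above a threshold (10 is a commonly quoted limit), which is exactly the "early warning" described above: the code may work, but its shape suggests it deserves extra scrutiny.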

Identification of defects not easily found by dynamic testing, such as non-compliance with development standards, and of dependencies and inconsistencies in software models, such as links or interfaces that were inaccurate or unknown before the static analysis was carried out.
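Standards non-compliance is a good example of a defect dynamic testing can never reveal, because the code behaves identically either way. As a minimal illustration (the standard checked and the function name `check_naming` are my own assumptions), a checker can flag function names that break a snake_case convention:

```python
import ast
import re

# A hypothetical project standard: function names must be snake_case.
SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def check_naming(source: str):
    """Flag function definitions whose names break the snake_case
    standard -- the kind of non-compliance only a static check
    catches, since the code runs the same regardless of naming."""
    tree = ast.parse(source)
    return [node.name for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)
            and not SNAKE_CASE.match(node.name)]
```

Real linters such as pylint ship hundreds of checks of this shape, each encoding one rule from a coding standard.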

Improved maintainability of code and design. Static analysis eliminates issues that would otherwise have increased the volume of maintenance required after go-live. It can also identify complex code which, if simplified, becomes easier to understand and consequently easier to maintain.

Prevention of defects. A defect identified early in the life cycle is much easier to trace back to its origin (root cause analysis) than one found during test execution, providing information on process improvements that could prevent the same defect appearing again.

