October 13, 2017
In computing, a memory leak occurs when a program fails to release memory that is no longer required. A classic example is a C program in which the programmer dynamically allocates memory for a task but forgets to de-allocate it afterwards.
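As a minimal sketch (the function and its purpose are hypothetical), the C code below allocates a working copy of its input but never frees it, leaking a little memory on every call:

```c
#include <stdlib.h>
#include <string.h>

/* Counts the spaces in a string by scanning a private copy of it.
 * Bug: the copy is allocated but never freed, so every call leaks
 * strlen(input) + 1 bytes. */
size_t count_spaces(const char *input) {
    char *copy = malloc(strlen(input) + 1);
    if (copy == NULL)
        return 0;
    strcpy(copy, input);

    size_t spaces = 0;
    for (const char *p = copy; *p != '\0'; p++) {
        if (*p == ' ')
            spaces++;
    }
    return spaces;   /* leak: 'copy' goes out of scope without free(copy) */
}
```

Adding `free(copy);` before the return plugs the leak; the program behaves identically either way in the short term, which is exactly why such bugs are easy to miss.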
In certain circumstances, a memory leak may cause no serious problems or may even be asymptomatic: a program with a small leak that never runs long enough for memory usage to become an issue is effectively unaffected. In worse cases, however, memory leakage can lead to the complete failure of a program.
Two kinds of programs at high risk for memory leaks are those in embedded systems (which can run continuously for years at a time, giving even small leaks the potential to become problematic) and those where memory is allocated extremely frequently for one-time tasks (such as rendering frames in a video or game).
While many languages like Java have built-in garbage collection, which automatically cleans up unused or inaccessible memory, languages like C and C++ require the programmer to ensure these leaks don't happen in the first place. There are, however, memory management libraries and memory debugging tools (such as Valgrind) written to help with these languages. It should also be noted that automatic garbage collection comes with a performance overhead, so whether it's desirable depends on the situation.
October 5, 2017
In software development, soak testing is a form of load testing in which a system is exposed to a 'normal' (anticipated) load for a relatively long period of time, in order to expose issues that only develop under those circumstances. For instance, certain problems, like memory leaks in a system under normal usage, might take a long time to have noticeable effects. The period of time required or desired varies with the specifics, but some companies have been known to run soak tests lasting several months, which contrasts sharply with the rapid deployment cycles of many systems.
Ideally, one would be able to vary the load on the system in order to better simulate the fluctuations that it would encounter in deployment, instead of just using a flat average expected load. If this isn’t possible, one might instead set the flat load to one that’s higher than expected in order to get a better simulation. (“Passed tests give you no information.”)
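A minimal sketch of the idea, with hypothetical names: drive the operation under test for many iterations at a steady load and check a resource invariant after every iteration. A real soak test would run for hours or days and watch actual memory or handle usage rather than a toy counter:

```c
/* Stand-in for a tracked resource (file handles, allocations, ...). */
static long open_handles = 0;

/* The operation under test: acquires a resource and releases it.
 * If the release were forgotten, the invariant below would catch it. */
static void do_request(void) {
    open_handles++;   /* acquire */
    open_handles--;   /* release */
}

/* Runs 'iterations' requests at a flat load and checks that no
 * resources have leaked after each one. Returns the iteration at
 * which a leak was first observed, or -1 if none was. */
long soak_test(long iterations) {
    for (long i = 0; i < iterations; i++) {
        do_request();
        if (open_handles != 0)   /* invariant: everything released */
            return i;
    }
    return -1;
}
```

In practice the loop would also vary the request rate over time, per the point above about simulating fluctuating load.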
October 3, 2017
In software testing, stress testing generally refers to testing software by subjecting it to a heavier load than it would ordinarily be expected to encounter during 'normal' usage. It differs from load testing in that its focus is less on controlled variations in load and more on chaotic, unpredictable causes of stress to a system.
This can be advantageous because it's not always possible to foresee all the circumstances in which the software may be used in the future. For instance:
- Operating systems and middleware will eventually have to support software which hasn’t even been written yet.
- Users may run the software on systems with less computational resources than those on which the software was tested in development.
In some cases, stress testing over a short period of time can also serve as a good simulation of ‘normal’ use over a long period of time, potentially exposing bugs with effects that take time to become noticeable. Additionally, stress testing is good for finding concurrency problems like race conditions and deadlocks.
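As a rough sketch of stress-testing for concurrency problems (all names hypothetical), the harness below has many threads hammer a shared counter at once. If the locking were missing or broken, lost updates would leave the final total short of the expected value:

```c
#include <pthread.h>

#define THREADS    8
#define INCREMENTS 100000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each worker performs many increments under contention.
 * Removing the lock/unlock pair typically exposes a race:
 * the final count comes up short due to lost updates. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < INCREMENTS; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Spawns the workers, waits for them, and returns the total;
 * a correct implementation yields THREADS * INCREMENTS. */
long stress_counter(void) {
    pthread_t threads[THREADS];
    for (int i = 0; i < THREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(threads[i], NULL);
    return counter;
}
```

The deliberately high contention is the point: races and deadlocks that almost never surface under normal load become far more likely when many threads compete at once.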
July 31, 2017
Linters, in the modern sense, are programs that check code for potential errors (or even just style errors) in a static context, i.e. without running the code. Such potential errors include syntax errors, “variables being used before being set, division by zero, conditions that are constant, and calculations whose result is likely to be outside the range of values representable in the type used.” [Wikipedia]
The name comes from Lint, a program that performed this function for C code, but it has since become a more general term.
Linting can be especially helpful for debugging interpreted languages like Python, since they lack a separate compilation phase in which many such errors would otherwise be caught.
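For illustration, here is a contrived C function (entirely hypothetical) that compiles and runs fine, but contains the kind of issues a typical linter would flag statically:

```c
/* Contrived example of lint-bait: the code compiles and behaves
 * deterministically, yet a linter would flag several lines. */
int lint_bait(int n) {
    int unused = 0;              /* flagged: variable set but never used */
    (void)unused;

    if (sizeof(int) >= 2) {      /* flagged: condition is always true */
        n = n * 2;
    }
    return n;
}
```

Because the condition is constant, the function always doubles its argument, but only a static check, not a passing test run, reveals that the branch is meaningless.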
July 18, 2017
In software testing, differential testing is a method in which semantically identical inputs are given to multiple implementations of the same program, looking for any places where the implementations differ in output. Such a difference would indicate a problem in at least one of the implementations.
As with any testing method that uses a select few inputs out of an often infinite input space, there are various ways of selecting which inputs to use, each with their advantages and disadvantages. One can select inputs completely randomly — known as unguided testing — or use techniques such as evolutionary guidance to make the selection of inputs more likely to detect possible bugs.
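As a small sketch of unguided differential testing (names hypothetical), the code below feeds the same pseudo-random inputs to two independent implementations of population count (the number of set bits in a word) and reports the first input on which they disagree:

```c
#include <stdint.h>

/* Reference implementation: shift and count bits one at a time. */
static int popcount_naive(uint32_t x) {
    int count = 0;
    while (x) {
        count += x & 1u;
        x >>= 1;
    }
    return count;
}

/* Independent implementation: standard SWAR bit-twiddling popcount. */
static int popcount_tricky(uint32_t x) {
    x = x - ((x >> 1) & 0x55555555u);
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);
    x = (x + (x >> 4)) & 0x0F0F0F0Fu;
    return (int)((x * 0x01010101u) >> 24);
}

/* Unguided differential testing: generate 'trials' pseudo-random
 * inputs (via a simple LCG so runs are reproducible) and compare the
 * two implementations on each. Returns the first input on which they
 * disagree, or 0 if all trials agree (both return 0 for input 0). */
uint32_t differential_test(uint32_t seed, int trials) {
    uint32_t x = seed;
    for (int i = 0; i < trials; i++) {
        x = x * 1664525u + 1013904223u;   /* LCG pseudo-random step */
        if (popcount_naive(x) != popcount_tricky(x))
            return x;
    }
    return 0;
}
```

Swapping the deterministic LCG for an evolutionary or coverage-guided input generator, as mentioned above, changes only the selection of inputs; the comparison oracle stays the same.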