Automated Testing
"Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results." - Hetzel88
During the software development process, developers routinely check whether a program achieves its desired functionality. In practice, this involves running the program many times under a variety of scenarios, in an attempt to cover the various combinations of user interactions and inputs.
Logging and debugging are helpful parts of the software development and testing processes, but ultimately the purpose of a software test is to explicitly define a program’s intended functionality and verify that the program actually produces it.
Rather than repeatedly performing manual checks, and rather than trying to remember all possible user experience scenarios, software developers write test code to accompany a program’s source code. These tests typically exercise small components of program functionality by directly invoking parts of the accompanying source code under controlled scenarios. By testing individual components of a program’s source code, developers gain confidence that the program as a whole performs as desired.
Most modern programming languages include built-in features and third-party packages which help developers test their code.
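For example, in Python a simple automated test might look something like the sketch below, which uses the third-party pytest package. The enlarge() function and its expected behavior are hypothetical, and the function is defined inline only to keep the example self-contained (in practice it would live in its own source module and be imported into the test file).

```python
# test_processors.py
# A minimal sketch using Python's third-party pytest package.
# The enlarge() function is hypothetical; it is defined inline here to keep
# the example self-contained, but would normally live in its own module
# and be imported into this test file.

def enlarge(n):
    """Multiplies a number by 100 (hypothetical example function)."""
    return n * 100


def test_enlarge():
    # the test explicitly states the expected behavior,
    # using known inputs and known expected outputs:
    assert enlarge(3) == 300
    assert enlarge(9) == 900
```

Running the `pytest` command from the project directory will automatically discover files named like `test_*.py`, execute any functions prefixed with `test_`, and report which assertions passed or failed.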
Testing Philosophies and Benefits
Different people test software for different reasons. Some software development teams track “test coverage” metrics because they care what percentage of an application’s code is exercised by its tests. Other developers only test the public-facing components of an application’s source code. Some developers don’t bother to write tests at all.
The professor suggests you use tests to save yourself time, to improve the quality and maintainability of your programs, and to communicate your program’s desired functionality to other developers. If you find yourself performing a manual action multiple times, consider automating that action by writing a corresponding test case. If you expect a specific function to operate in a certain way, write down your expectations in a new test. If you find yourself writing comments to describe some part of the source code which isn’t clearly understood, make its function plain by describing it in a test.
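As an illustration of that last suggestion, here is a sketch of a test that documents expected behavior in place of explanatory comments. The to_usd() function is hypothetical and is defined inline only so the example is self-contained.

```python
# test_formatting.py
# A sketch of a test that documents expected behavior in place of comments.
# The to_usd() function is hypothetical, defined inline for illustration.

def to_usd(amount):
    """Formats a number as US dollars (hypothetical example function)."""
    return f"${amount:,.2f}"


def test_to_usd():
    # each assertion spells out a behavior that might otherwise be
    # buried in comments or left to the reader's imagination:
    assert to_usd(4.5) == "$4.50"                  # pads to two decimal places
    assert to_usd(1234567.891) == "$1,234,567.89"  # rounds and adds separators
    assert to_usd(0) == "$0.00"
```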
Well-tested applications generally tend to be of higher quality than untested applications, partly because intentional effort is spent in pursuit of that goal, and perhaps also because the tests serve their purpose by catching defects early. However, just because an application has passing tests does not necessarily mean it is free of bugs; tests only verify the behaviors they explicitly describe.
Written expectations and descriptions of desired functionality mitigate the risk of “brain drain” as development teams experience attrition over time. In this way, well-tested applications tend to be easier to maintain.
Test-driven Development (TDD)
Sometimes software developers write tests after completing the development process. In other cases, developers write tests before and during the development process. This latter approach is called Test-driven Development (TDD). When following TDD, tests become a bridge between requirements describing desired functionality (e.g. “the app should do xyz”) and explicit checks of that functionality.
Your professor prefers to practice TDD alongside a related approach of Document-driven Development, whereby the software development process follows the testing process, which in turn follows from the process of writing the software’s documentation (most likely in a README.md file).
The test-driven development process is closely related to the refactoring process. When performing the two in conjunction, developers follow the best practice of “Red Light, Green Light, Refactor” (also known as “Red, Green, Refactor”). This refers to the process of first writing a failing test, then writing just enough functional code to make the test pass, then refactoring the functional code and re-running the tests to make sure they still pass.
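A minimal sketch of one such cycle appears below, using pytest-style assertions and a hypothetical convert_to_fahrenheit() function (all names and values are illustrative).

```python
# test_temperature.py
# A sketch of one "Red Light, Green Light, Refactor" cycle.
# The convert_to_fahrenheit() function is hypothetical, for illustration only.

# RED: write a failing test first. At this point the function
# does not exist yet, so running pytest reports a failure.
def test_convert_to_fahrenheit():
    assert convert_to_fahrenheit(0) == 32.0
    assert convert_to_fahrenheit(100) == 212.0


# GREEN: write just enough functional code to make the test pass.
# A first draft might be naive, for example hard-coding the two known answers:
#
#   def convert_to_fahrenheit(celsius):
#       return 32.0 if celsius == 0 else 212.0

# REFACTOR: improve the implementation, then re-run the tests
# to confirm they still pass.
def convert_to_fahrenheit(celsius):
    return (celsius * 9 / 5) + 32
```

Each time the cycle repeats, the growing test suite acts as a safety net that makes subsequent refactoring less risky.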