Implementation and Testing
Prioritizing feature implementation
- All applications have dependencies among their protocols/use-cases/features: some things must exist before others can be built
- If there is no game board you can't think about moving pieces on the board yet
- If there is no database persistence layer it would be a waste of time to work on saving the game state.
- The feature dependency graph has an edge from a feature to a feature it depends on: you can't get the feature running until all its dependencies are working.
- From the feature dependency graph there are multiple successful paths to building your final app over time: the onion of how the app will appear, layer by layer, over time
- Reify the onion into a series of deadlines: regular iterations (two-week for us) with concrete goals for the features/etc. implemented in each successive layer/iteration
- Your job as a software developer is to find the most natural route through the dependency maze to your final app.
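The idea of routing through the dependency maze can be sketched in code. Below is a small illustration (not from the lecture) of modeling a feature dependency graph and computing one valid build order with a depth-first topological sort; the feature names ("board", "moves", "persistence", "save-game") are hypothetical examples.

```java
import java.util.*;

// Sketch: model the feature dependency graph and compute one valid
// build order via depth-first topological sort. A feature is added to
// the order only after all of its dependencies have been added.
public class FeatureGraph {
    // deps.get(f) = features that f depends on (must be built first)
    private final Map<String, List<String>> deps = new HashMap<>();

    public void addFeature(String f, String... needs) {
        deps.put(f, Arrays.asList(needs));
    }

    // Returns the features in an order where every dependency
    // precedes the features that depend on it.
    public List<String> buildOrder() {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        for (String f : deps.keySet()) visit(f, visited, order);
        return order;
    }

    private void visit(String f, Set<String> visited, List<String> order) {
        if (visited.contains(f)) return;
        visited.add(f);
        for (String d : deps.getOrDefault(f, List.of())) visit(d, visited, order);
        order.add(f); // appended only after all dependencies are in the order
    }
}
```

Any order this produces is one "natural route" through the maze; for the game example, "board" will always come out before "moves".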
Planning the onion
- Since there are multiple people on your team, you need a "parallel programming plan" so multiple parts can proceed simultaneously with minimal blocking/conflicts.
- Never write un-testable code; bugs will fester there, since any debugging will happen long after the code was written and forgotten about
- Define a subset of features to start with, the key features, which when implemented will give bare-bones functionality
- For complex libraries/frameworks, prototype before integrating into your app
- Exercise the library with a small bit of code to make sure it's doing what you need; then use it in your app
- Prototyping is also helpful for UIs, stand them up as dummies before hooking into app logic.
- Develop underlying communication protocols so they are independently testable components
- the RESTful server from Assignment 1 is an example of this.
Iteration / Release Planning
Software iterations and releases
An iteration of a project is a planned global step in the development of a piece of software.
- An iteration should not be too big: add some features, modify the design to do one aspect differently, etc.
- Iterations give you many little deadlines to successfully hit
-- and it's critical: for several years OOSE had no iteration deadlines; little work happened until the end of term, and projects were much weaker.
- Plan each iteration around a goal: before starting, make a proposal of what will be accomplished in the forthcoming iteration.
A release is a stable iteration released to some users.
- version X.Y.Z (e.g. 10.8.3) means major version X, minor version Y, patch version Z.
- patches should only fix bugs; backward compatibility should not be broken (for users upgrading from an older version). If a version number is X.Y only, it is shorthand for X.Y.0.
- minor versions add new functionality which should also be backwards-compatible.
- major versions may make changes which are not backwards-compatible -- users need to change their code or adjust their workflow.
- a pre-release is not a stable version; suffixes are added, e.g. 10.8.3-pre or 8.3.23-alpha or 2.3.2-beta.
- alpha release (-alpha) - some features of eventual version haven't been implemented yet and there are quite a few bugs. More for in-house testing.
- beta release (-beta): the feature set is frozen and all features more or less work, but there are still bugs. Outside advanced users can use it.
- pre-release (-pre): a generic term for alpha or beta
- release candidate (-rc): better than beta; on deck for shipment, infinitesimal bugs only
- Version number less than 1: an application that has yet to get beyond alpha/beta/pre in any release.
- The above terms are not always used consistently; semver.org is an attempt to standardize terminology, see that link for guidance on good version numbering.
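The X.Y.Z ordering rules above can be made concrete in a few lines of code. Here is a minimal sketch (not part of the lecture) that compares two semver-style version strings, treating a missing patch field as 0 so that "1.2" equals "1.2.0"; for simplicity it ignores pre-release suffixes such as -alpha or -beta, which real semver orders below the plain version.

```java
// Sketch: numeric comparison of MAJOR.MINOR.PATCH version strings.
// Missing fields default to 0; pre-release suffixes are stripped and
// ignored (a simplification relative to full semver).
public class Semver {
    public static int compare(String a, String b) {
        int[] va = parse(a), vb = parse(b);
        for (int i = 0; i < 3; i++) {
            if (va[i] != vb[i]) return Integer.compare(va[i], vb[i]);
        }
        return 0;
    }

    private static int[] parse(String v) {
        String core = v.split("-", 2)[0];     // drop any -alpha/-beta/-rc suffix
        String[] parts = core.split("\\.");
        int[] n = new int[3];                  // missing minor/patch default to 0
        for (int i = 0; i < parts.length && i < 3; i++) {
            n[i] = Integer.parseInt(parts[i]);
        }
        return n;
    }
}
```

Note the comparison is numeric, not lexicographic: 1.9.0 comes before 1.10.0 even though "10" sorts before "9" as a string.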
We covered above that an onion-decomposition is needed; how to do that in practice?
- An iteration plan maps features and/or use-cases onto the iteration in which they should be implemented.
- Have a detailed iteration plan for the next iteration and a fuzzier one for more distant iterations
- Revise your iteration plan at the end of each iteration
- Maybe some things in the previous iteration proved too hard -- bump them to the current iteration, or divide them into smaller problems spread over several future iterations.
- Make clear what the new set of features / use-cases you want to add in the next iteration is - take your fuzzy ideas from the previous iteration plan and refine them to a concrete plan.
- Keep iteration planning in mind when making use-cases
- the idea is that a use-case should be a manageable task for one iteration (at least the happy path through the use-case)
- if your use-cases are bigger than that they should probably be broken up.
Implementation Principles
Here are some good principles for implementation, many of which are from the Agile school of thought.
Practice Collective Code Ownership
- Everyone can edit all code (use git or other version control system)
- Humans are social animals; sharing code will better share ideas/concepts/solutions
- Allows refactoring to be unbounded -- tweak others' code if you see a better way for it to interact with your code
- Aids in production of readable, elegant code (who wants to be hated by everyone else?)
- But, it needs unit tests to work best -- if you tweak others' code, run their tests to make sure you didn't hose it.
Continuous Integration (CI)
Build the whole system on a regular basis; don't work on a subcomponent in isolation for long
- Forces a methodology of many small changes (and small bugs to fix) as opposed to one big change that breaks the system and gives mega-bugs.
- You need to figure out how to break a big change into a series of small changes to get CI to work for you.
- Unit tests are pretty much required for CI -- with them, it's automatically apparent when the build has failed.
- Tool support for CI has recently increased: your tests can be run automatically at each repository push.
-- tools for this are discussed below.
Have a coding standard
- This is a standard intro programming topic you should know already; just make sure you are doing it.
- See the bottom of Assignment 1 for the standards for this course.
Pair programming is two people programming on one terminal.
- Pair consists of driver & partner
- driver has the keyboard
- ALL coding done in pairs
- Driver: has final say (they have the keyboard)
- can focus on code details since the partner is on the concepts
- Partner: corrects flaws in the driver's code
- asks questions of the driver about the code
Is it good? Bad? Ugly?
- One brain has attention to "concepts", the other to "details" -- specializing in a way that makes a "superbrain" better than two individual brains trying to hold both at once.
- You get the best qualities of two people - some are better at concepts/critiquing and some better at details/typing.
- Rapidly train new people (intense exposure)
- Leverage that "humans are social animals" aspect again
- Disadvantages: With only one person typing, it can slow things down.
- Conclusion: works well with some personalities on some projects some or all of the time. Give it a test-drive if you have not tried it before.
Refactoring is an independent future lecture topic. At root it's very simple: sometimes it's better to stop adding features and instead redo existing code to be more elegant and more aligned with the way you (now) see the project going.
Testing is a major component of commercial software development.
- All code is obviously tested before shipping; the question is how thoroughly/frequently/automatically it is tested.
- A test suite is a set of tests which can be automatically run and the code either passes or fails.
- If you don't take a methodical approach to automated testing it is very difficult to develop large pieces of software
-- OOSE projects are just a little too small to show how absolutely critical good tests are, but tests will still help you
We use a simple testing hierarchy (there are other forms of test that we skip; these are the most common ones today)
- Unit tests (low-level operations of one component)
- Integration tests (tests of how components interact)
- Acceptance tests (at the level of use-cases, also at the level of components interacting)
Write small tests for each nontrivial operation
- Test should be completely automatically executable.
- Each test always returns either true (success) or false (failure).
- Re-run the complete unit test suite after any significant change, and then immediately debug to get test success to 100%
- By being rigorous about regular and thorough testing you are stamping out bugs before they get out of hand
-- "a stitch in time saves nine" yet again.
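The pattern above can be sketched in plain Java: each test is a function that returns true or false, and the whole suite runs with one call. JUnit, covered next, automates exactly this bookkeeping; the function under test here, add(), is a hypothetical example.

```java
// Minimal sketch of an automated test suite in plain Java.
// Each test returns true (pass) or false (fail) -- no human judgment
// is needed to interpret the result, so the suite can run after any change.
public class MiniSuite {
    public static int add(int a, int b) { return a + b; }

    public static boolean testAddPositive() { return add(2, 3) == 5; }
    public static boolean testAddNegative() { return add(-2, 2) == 0; }

    // Run the complete suite; a single overall pass/fail result.
    public static boolean runAll() {
        return testAddPositive() && testAddNegative();
    }

    public static void main(String[] args) {
        System.out.println(runAll() ? "ALL TESTS PASS" : "TEST FAILURE");
    }
}
```

A framework like JUnit replaces runAll() and the println with automatic test discovery and reporting, but the contract per test is the same.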
Unit testing in Java with JUnit
- JUnit is a simple Java unit testing framework
- It is also built into Eclipse and IntelliJ: just hit the button and wait for the green light
- You are required to implement unit tests for your projects
- Learning JUnit is not hard and is a self-study topic for those not familiar with it; see the Course tools page for pointers.
- There is no need for tests that completely overlap in terms of bugs they would catch: quality over quantity.
- If there is some special case the code should work on, document this by writing a test for it.
- If you think an operation could fail, write a test to make sure you are catching it (e.g. is reading past the end of a file caught?)
- Add tests before refactoring to make sure you can verify the success of the refactoring afterwards
- Cover bugs with tests (i.e. add a test that would have failed given the bug you just found; prevents recurring bugs)
Test Coverage Tools
- Code coverage tools see which lines of code are run by your test suite
- If a lot of lines are never run by any test you have bad test coverage
- Excellent coverage of code by line is an important dimension of overall test coverage
- Even with excellent line coverage you can have bad test coverage; it's based only on lines of code, not values of variables
- For Java, a common coverage tool is JaCoCo which has an Eclipse plugin, EclEmma; IntelliJ has its own code coverage tool built-in.
We will run the IntelliJ code coverage tool on the Todo app test suite. (Note their coverage tool doesn't seem to be compatible with Maven)
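The point that line coverage can mislead deserves a concrete (hypothetical) example: below, two tests execute every line of abs(), so a coverage tool reports 100% line coverage, yet a bug remains because it only bites for one particular value.

```java
// Sketch: 100% line coverage does not mean the code is correct.
public class CoverageGap {
    public static int abs(int x) {
        if (x < 0) return -x;   // bug: -x overflows for Integer.MIN_VALUE
        return x;
    }

    public static void main(String[] args) {
        // These two inputs together execute every line of abs()...
        System.out.println(abs(-5) == 5);   // negative branch
        System.out.println(abs(7) == 7);    // non-negative branch
        // ...yet the bug is untouched: abs(Integer.MIN_VALUE) is negative,
        // because negating Integer.MIN_VALUE overflows back to itself.
        System.out.println(abs(Integer.MIN_VALUE) < 0);
    }
}
```

Coverage tools report which lines ran, not which values were exercised, so treat high line coverage as necessary but not sufficient.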
- Acceptance tests are tests corresponding to use-cases: one per use-case.
- They should test the customer-facing side of the app: is it "acceptable" to customers?
- Acceptance tests often involve clicking on GUI elements so special tools may be needed to automate acceptance testing.
BDD is a relatively new approach to acceptance testing
- BDD-style acceptance tests start off in a format similar to a use-case scenario (a story): a linear sequence of steps in English.
- A precise mapping of that English on to code is defined.
- So, it's "just acceptance tests", but you don't have to stare at a pile of code to decipher the test.
- We will look at JBehave, a Java BDD framework, in particular how you write and then code a story.
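To make the "English mapped onto code" idea concrete before looking at JBehave (whose real API differs), here is a self-contained sketch: a story is a list of English steps, and each step phrase is mapped to the code that performs or checks it. The to-do-list story and step phrases are hypothetical examples.

```java
import java.util.*;
import java.util.function.Consumer;

// Sketch of the BDD idea: English step phrases are bound to code, so an
// acceptance test reads like a use-case scenario rather than a pile of code.
public class MiniBdd {
    private final Map<String, Consumer<String>> steps = new HashMap<>();
    private final List<String> todo = new ArrayList<>();  // app state under test
    private boolean passed = true;

    public MiniBdd() {
        // Bind each English step phrase to the code implementing it.
        steps.put("Given an empty todo list", arg -> todo.clear());
        steps.put("When the user adds", arg -> todo.add(arg));
        steps.put("Then the list contains", arg -> passed &= todo.contains(arg));
    }

    // Run a story: each line is "<step phrase> [argument]".
    public boolean run(List<String> story) {
        for (String line : story) {
            for (Map.Entry<String, Consumer<String>> e : steps.entrySet()) {
                if (line.startsWith(e.getKey())) {
                    e.getValue().accept(line.substring(e.getKey().length()).trim());
                }
            }
        }
        return passed;
    }
}
```

A story such as "Given an empty todo list / When the user adds milk / Then the list contains milk" then runs as an automated acceptance test; JBehave provides the same mapping via annotated step methods.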
- Integration tests are similar to acceptance tests -- they also test the whole system
- But, integration tests check low-level aspects of the whole system and are not customer-facing
CI Services for integration (and acceptance) testing
- One challenge of CI is individual developers have different build setups (library/OS/etc)
- A relatively recent solution is to use a CI service to run your build on a blank box and then run all your tests.
- The CI service defines the "gold standard" of success or fail of tests.
- The CI service requires a fully automated build and test process as a prerequisite
-- you should have this anyway; putting a CI service in the loop forces this
- For Github, Travis-CI is a CI service company. For commercial repos use travis-ci.com, and use travis-ci.org for public repos
- Why Github and Travis together? For every push to master, Travis gets notified, builds the project, and runs your test suite!
- We want you to use Travis for OOSE; see OOSE Tools page for more information
We will show Travis-CI in action running the tests of the simple Todo app.
Things that are harder to test automatically
You need to work harder to get some features automatically tested.
- GUIs: it's hard to automate the input. Either test via the underlying model, or display a console message telling the human tester what to do manually at each point.
-- there are tools to support automated "button press" etc but they are not quite ready for prime time
- Phone or other device apps - you have to work harder to run a test suite on phone or in phone simulator.
-- Note that Android testing has continued to improve -- see Android testing best practices for more details.
- Distributed systems, persistence layers: make sure you set up a particular initial database/user configuration before running the tests.
- Unit tests are for one component only; how to deal with missing components you may be interacting with? Mock them up; see e.g. mockito for a framework to help with Java mocking.
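The mocking idea can be shown by hand (Mockito automates this pattern). In the sketch below, a component depends only on a Store interface, so a unit test can substitute an in-memory fake for the missing database layer; all class and method names here are hypothetical examples.

```java
import java.util.*;

// The dependency is expressed as an interface, so the real database
// implementation can be swapped out in unit tests.
interface Store {
    void save(String key, String value);
    String load(String key);
}

// Component under test: depends on Store, not on a concrete database.
class GameService {
    private final Store store;
    GameService(Store store) { this.store = store; }

    void saveGame(String id, String state) { store.save("game:" + id, state); }
    String loadGame(String id) { return store.load("game:" + id); }
}

// The hand-rolled mock: an in-memory Store standing in for the
// missing persistence layer during unit tests.
class InMemoryStore implements Store {
    private final Map<String, String> data = new HashMap<>();
    public void save(String key, String value) { data.put(key, value); }
    public String load(String key) { return data.get(key); }
}
```

A test then constructs `new GameService(new InMemoryStore())` and exercises GameService in isolation; Mockito generates such stand-ins automatically and adds call verification on top.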
For your projects you will need to get as much automated testing set up as possible. Work with your project advisor to figure out how to get your test harness set up.