Implementation and Testing
Prioritizing feature implementation
- All applications have dependencies between protocols/use-cases/features in terms of what needs what before it can run
- If there is no game board you can't think about moving pieces on the board yet
- If there is no database persistence layer it would be a waste of time to work on saving the game state.
- From the feature dependency graph there are multiple successful paths to building your final app over time: the onion of how it will appear layer by layer over time
- Reify the onion into a series of deadlines, regular (two-week for us) iterations with concrete goals of features/etc implemented in each successive layer/iteration
- Your job as a software developer is to find the most natural route through the dependency maze to your final app.
This is a difficult planning problem
- Since there are multiple people on your team, you need a "parallel programming plan" so multiple parts can proceed simultaneously with minimal blocking/conflicts.
- You never want to write un-testable code; bugs will fester there, since the debugging will happen long after the code was written and forgotten about
-- SO, you can only add on new bits of code that are testable.
- Define a subset of features to start with, the key features, which when implemented will give bare-bones functionality
- Don't forget prototyping: make a Potemkin village of surface functionality
- Prototyping is particularly good for UIs
- Big libraries/frameworks you are using should be exercised first by a small example
- Develop underlying communication protocols so they are independently testable components
- the RESTful server example from assignment 1 is an example of this.
Iteration / Release Planning
Software iterations and releases
An iteration of a project is a planned global step in the development of a piece of software.
- An iteration should not be too big: add some features, modify the design to do one aspect differently, etc.
- Iterations give you many little deadlines to successfully hit
-- it's critical: OOSE for several years had no iteration deadlines; little work happened until the end of term.
- Plan iterations to give the goal: before starting, make a proposal of what will be accomplished in the forthcoming iteration.
A release is a stable iteration released to some users.
- version X.Y.Z (e.g. 10.8.3) means major version X, minor version Y, patch version Z.
- patches should only fix bugs; backward compatibility should not be broken (for users upgrading from an older version). If a version number is X.Y only, it is shorthand for X.Y.0.
- minor versions add new functionality which should also be backwards-compatible.
- major versions may make changes which are not backwards-compatible -- users need to change their code or adjust their workflow.
- a pre-release is not a stable version; suffixes are added, e.g. 10.8.3-pre or 8.3.23-alpha or 2.3.2-beta.
- alpha release (-alpha) - some features of eventual version haven't been implemented yet and there are quite a few bugs. More for in-house testing.
- beta release (-beta): the feature set is frozen and all features more or less work, but there are still bugs. Outside advanced users can use it.
- pre-release (-pre): a generic term for alpha or beta
- release candidate (-rc): better than beta; on deck for shipment, infinitesimal bugs only
- Version number less than 1: an application that has yet to get beyond alpha/beta/pre in any release.
- The above terms are not always used consistently; semver.org is an attempt to standardize terminology, see that link for guidance on good version numbering.
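The precedence rules above can be made concrete with a small sketch. This is a minimal illustration in plain Java, assuming simple X.Y.Z numbers with no pre-release suffix (real semver comparison, per semver.org, also has rules for -alpha/-beta/-rc suffixes, which are omitted here):

```java
// Minimal sketch of semantic-version comparison: compare major, then
// minor, then patch. Pre-release suffixes are NOT handled here.
public class SemVer {
    static int[] parse(String v) {
        String[] parts = v.split("\\.");
        // X.Y is shorthand for X.Y.0
        int patch = parts.length > 2 ? Integer.parseInt(parts[2]) : 0;
        return new int[] { Integer.parseInt(parts[0]),
                           Integer.parseInt(parts[1]), patch };
    }

    static int compare(String a, String b) {
        int[] x = parse(a), y = parse(b);
        for (int i = 0; i < 3; i++)
            if (x[i] != y[i]) return Integer.compare(x[i], y[i]);
        return 0;
    }

    public static void main(String[] args) {
        // 10.8 means 10.8.0, which is older than patch release 10.8.3
        System.out.println(compare("10.8", "10.8.3"));
    }
}
```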
- It's important to have a plan of how your project's onion will unfold
- Proceeding without a plan is like driving a car where there is no road: it's hard!
- A clear straight road is easy, a curved muddy road is hard to drive on.
- An iteration plan maps features and/or use-cases onto the iteration in which they should be implemented.
- Have a detailed iteration plan for the next iteration and a fuzzier one for more distant iterations
- Revise your iteration plan at the end of each iteration
- Maybe some things in the previous iteration proved too hard -- bump up to current iteration or divide into smaller problems over several future iterations.
- Make clear what the new set of features / use-cases you want to add in the next iteration is - take your fuzzy ideas from the previous iteration plan and refine them to a concrete plan.
- Keep iteration planning in mind when making use-cases
- the idea of a use-case is a manageable task for one iteration (at least the happy path through the use-case)
- if your use-cases are bigger than that they should probably be broken up.
Implementation Principles
Here are some good principles for implementation, many of which are from the Agile school of thought.
Practice Collective Code Ownership
- Everyone can edit all code (use git or other version control system)
- Humans are social animals; sharing code will better share ideas/concepts/solutions
- allows refactoring to not be bounded--tweak others' code if you see a better way for it to interact with your code
- Aids in production of readable, elegant code (who wants to be hated by everyone else?)
- But, it needs unit tests to work best -- if you tweak others' code, run their tests to make sure you didn't hose it.
Continuous Integration (CI)
Build the whole system on a regular basis, don't work on a subcomponent in isolation for long
- Will make it more obvious when a recent change broke the system
- Unit tests are pretty much required for CI -- with them it's automatically apparent when the build failed.
- Why? Forces many small changes (and small bugs to fix) as opposed to one big change at once that breaks the system and gives mega-bugs.
- Tool support has recently greatly increased for CI.
Below we discuss CI services for automating testing of the integration process.
Have a coding standard
- This is a standard intro programming topic you should know already; just make sure you are doing it.
- See the bottom of Assignment 1 for the standards for this course.
Pair programming is two people programming on one terminal.
- Pair consists of driver & partner
- driver has the keyboard
- ALL coding done in pairs
- Partner: corrects flaws in the driver's code
- asks questions of the driver about the code
- Driver has final say (they have the keyboard)
- Driver can focus on code details since the partner is on the concepts
Is it good? Bad? Ugly?
- One brain has attention to "concepts", the other to "details" -- specializing in a way that makes a "superbrain" better than two individual brains trying to hold both at once.
- You get the best qualities of two people - some are better at concepts/critiquing and some better at details/typing.
- Rapidly train new people (intense exposure)
- Leverage that "humans are social animals" aspect again
- Disadvantages: With only one person typing, it can slow things down.
- Conclusion: works well with some personalities on some projects some or all of the time. Give it a test-drive if you have not tried it before.
Refactoring is an independent future lecture topic. At root it's very simple: sometimes it's better to stop adding features and instead redo existing code to be more elegant and more aligned with the way you (now) see things are going.
Testing
Testing is a major component of commercial software development.
- All code is obviously tested before shipping, the question is how thoroughly/frequently/automatically it is tested.
- A test suite is a set of tests which can be automatically run and the code either passes or fails.
- If you don't take a methodical approach to automated testing, it is very difficult to develop large pieces of software
Testing Hierarchies
We use a simple testing hierarchy that has become common today
- Unit tests (low-level operations of one component)
- Integration tests (tests of how components interact)
- Acceptance tests (at the level of use-cases, also at the level of components interacting)
Unit Testing
Write small tests for each nontrivial operation
- Test should be completely automatically executable.
- Each test always returns either true (success) or false (failure).
- Re-run the complete unit test suite after any significant change, and then immediately debug to get test success to 100%
- By being rigorous about regular and thorough testing you are stamping out the bugs before they get out of hand.
-- "a stitch in time saves nine" yet again.
Unit testing in Java with JUnit
- JUnit is a simple Java unit testing framework
- It is also built into Eclipse: just hit the button and wait for the green light
- You are required to implement unit tests for your projects
- Learning JUnit is not hard and is a self-study topic for those not familiar with it; see the Course tools page for pointers.
- There is no need for tests that completely overlap in terms of bugs they would catch: quality over quantity.
- If there is some special case the code should work on, document this by writing a test for it.
- If you think an operation could fail, write a test to make sure you are catching it (e.g. is reading past the end of a file caught?)
- Add tests before refactoring to make sure you can verify the success of the refactoring afterwards
- Cover bugs with tests (i.e. add a test that would have failed given the bug you just found; prevents recurring bugs)
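The principles above can be sketched in plain Java. JUnit automates test discovery, running, and the green/red reporting; the hand-rolled version below just shows the underlying pass/fail model, with each test a boolean-returning method. The Stack-like class under test and the test names are illustrative, not from the course projects:

```java
// Sketch of unit tests on a standard-library Deque used as a stack.
// Each test returns true (pass) or false (fail), matching the model
// above; JUnit replaces main() with automated discovery and reporting.
import java.util.ArrayDeque;
import java.util.Deque;

public class StackTests {
    // Documenting a special case with a test: popping an empty stack
    // should fail loudly, not return garbage.
    static boolean testPopEmptyThrows() {
        Deque<Integer> s = new ArrayDeque<>();
        try {
            s.pop();
            return false;           // no exception raised: test fails
        } catch (Exception e) {
            return true;            // expected failure was caught
        }
    }

    // Basic happy-path behavior: what goes on comes back off.
    static boolean testPushThenPop() {
        Deque<Integer> s = new ArrayDeque<>();
        s.push(42);
        return s.pop() == 42;
    }

    public static void main(String[] args) {
        boolean all = testPopEmptyThrows() && testPushThenPop();
        System.out.println(all ? "ALL PASS" : "FAIL");
    }
}
```

Re-running a suite like this after every significant change is cheap, which is exactly why it catches bugs while they are still small.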
Test Coverage Tools
- Code coverage tools see which lines of code are run by your test suite
- If a lot of lines are never run by any test you have bad test coverage
- Excellent coverage of code by line is an important dimension of overall test coverage
- Even with excellent line coverage you can have bad coverage: it's only based on lines of code, not values of variables
- For Java, a common tool is JaCoCo which has an Eclipse plugin, EclEmma; IntelliJ has its own code coverage tool built-in.
We will run the IntelliJ code coverage tool on the Todo app test suite.
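The line-vs-value gap is easy to demonstrate. In this made-up example, a single test executes 100% of the lines of `half`, yet a different input value still crashes it, so line coverage alone reports a clean bill of health for buggy code:

```java
// 100% line coverage does not mean 100% of behaviors are tested.
public class CoveragePitfall {
    // Every line here is executed by the call below with y = 2,
    // yet y = 0 still throws ArithmeticException at runtime.
    static int half(int x, int y) {
        return x / y;
    }

    public static void main(String[] args) {
        System.out.println(half(10, 2)); // covers all lines of half()
        // half(10, 0) would crash -- a value the suite never tries
    }
}
```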
Degrees of importance of thorough testing
The degree of commitment varies by the size, complexity, and need for correctness of the project. Here are some points on the spectrum.
- Safety-critical systems require very thorough testing since failure could be catastrophic
-- sometimes you want to go beyond testing to formal verification of critical systems
- All significant software projects need some degree of automated testing to keep bugs down.
- Short temporary scripts have fairly obvious functionality and automated testing is not worth the overhead.
- Acceptance tests are tests corresponding to use-cases: one per use-case.
- They should test the customer-facing side of the app: is it "acceptable" to customers?
- Acceptance tests often involve clicking on GUI elements so special tools may be needed to automate acceptance testing.
BDD (Behavior-Driven Development) is a relatively new approach to acceptance testing
- BDD-style acceptance tests start off in a format similar to a use-case scenario (a story): a linear sequence of steps in English.
- A precise mapping of that English on to code is defined.
- So, it's "just acceptance tests", but you don't have to stare at a pile of code to decipher the test.
- We will look at JBehave, a Java BDD framework, in particular how you write and then code a story.
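The core BDD idea — a precise mapping from English steps onto code — can be sketched without any framework. The following is not the JBehave API; it is a self-contained toy (story text, account scenario, and step code all invented here) showing how each English line of a story binds to a runnable step, with the Then step acting as the assertion:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of BDD: the "story" is the English keys, run in
// order; each maps to code. JBehave does this binding with annotated
// step classes and pattern matching instead of a literal map.
public class BddSketch {
    static int balance;

    public static void main(String[] args) {
        Map<String, Runnable> steps = new LinkedHashMap<>();
        steps.put("Given an account with balance 100", () -> balance = 100);
        steps.put("When I withdraw 30",                () -> balance -= 30);
        steps.put("Then the balance is 70",            () -> {
            if (balance != 70)
                throw new AssertionError("balance=" + balance);
        });

        for (Map.Entry<String, Runnable> step : steps.entrySet()) {
            step.getValue().run();             // execute the bound code
            System.out.println("PASS: " + step.getKey());
        }
    }
}
```

Reading the output top to bottom gives back the original story — which is the whole point: the test documents itself in the customer's language.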
- Integration tests are similar to acceptance tests - they also test the whole system
- But, integration tests can be low-level things about whole system, not just customer-facing
CI Services for integration (and acceptance) testing
- One challenge of CI is individual developers have different build setups (library/OS/etc)
- A relatively recent solution is to use a CI service to run your build on a blank box and then run all your tests.
- The CI service defines the "gold standard" of success or fail of tests.
- The CI service requires a fully automated build and test process as a prerequisite
-- you should have this anyway; putting a CI service in the loop forces this
- For GitHub, Travis-CI is a CI service company. Use travis-ci.com for commercial repos and travis-ci.org for public repos
- Why GitHub and Travis together? For every push to master, Travis gets notified, builds the project, and runs your test suite!
- We want you to use Travis for OOSE; see OOSE Tools page for more information
We will show Travis-CI in action running the tests of the simple Todo app.
Things that are harder to test automatically
You need to work harder to get some features automatically tested.
- GUIs: it's hard to automate the input. Either test via the underlying model, or display a console message telling the human tester what to manually do at each point.
- Distributed systems, persistence layers: make sure you set up a particular initial database/user configuration before running the tests.
- Chicken-and-egg problems: if you have no code, you have no concrete objects to pass as parameters. It's OK to just prototype it and pass mock objects initially.
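A hand-rolled mock makes the chicken-and-egg point concrete. In this sketch (all names hypothetical: `GameStore`, `GameSaver` are not from any course project), the persistence layer doesn't exist yet, so the test passes in a mock that just records the calls it receives:

```java
import java.util.ArrayList;
import java.util.List;

// The not-yet-built persistence layer, reduced to an interface.
interface GameStore {
    void save(String state);
}

// Code under test: depends only on the interface, not on a real
// database, so it can be written and tested before persistence exists.
class GameSaver {
    private final GameStore store;
    GameSaver(GameStore store) { this.store = store; }
    void checkpoint(String state) { store.save(state); }
}

public class MockDemo {
    public static void main(String[] args) {
        List<String> saved = new ArrayList<>();
        GameStore mock = saved::add;   // mock: records instead of persisting
        new GameSaver(mock).checkpoint("board-state-1");
        System.out.println(saved);     // verify the interaction happened
    }
}
```

When the real persistence layer arrives it implements `GameStore`, and `GameSaver` needs no changes — the mock bought testability without waiting.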