A Rule Engine for State and Event Monitoring
Random Parallel Tests

First I'll briefly describe the random parallel approach to testing that we've used in the past. Although insufficient for the release process, this approach is useful for introducing new NodeBrain releases into any application, and we will continue to conduct these tests on our own applications prior to each release. This approach is also appropriate for Beta testers.

Random Parallel Regression Test

In our random parallel regression test we hold the rules constant and compare two versions of nb when subjected to a common, but random, event stream. In the figure below, our production application is at nb version V and our test application is at version V+1.

                  +-- nb V   --> alarms --+
                 /      |                  \
  event stream -<     rules V               >--> compare
                 \      |                  /
                  +-- nb V+1 --> alarms --+

This approach simply verifies that a new version of nb does not change the behavior of a given application in an unexpected way when the rules stay the same. The test is considered successful if there are no unexpected differences over a few days and several thousand input events.

Random Parallel Rule Enhancement Test

In cases where nb has been enhanced to include rule features not previously supported, a second parallel test is conducted to see if the enhanced rules behave as expected. This is done after migrating nb V+1 to production.

                  +-- nb V+1 --> alarms --+
                 /      |                  \
  event stream -<     rules V               >--> compare
                 \                         /
                  +-- nb V+1 --> alarms --+
                        |
                     rules V+1

This test verifies that rule enhancements produce different results where expected, and more importantly that there are no surprises. Like the regression test above, it is conducted over a few days and several thousand input events.

Copyright © 2015 NodeBrain.org
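The "compare" step of the regression test above can be sketched in a few lines. This is a hypothetical illustration, not part of NodeBrain itself: the log format, the timestamp pattern, and the idea of normalizing lines before comparison are all assumptions about how one might diff alarm streams captured from nb V and nb V+1.

```python
"""Sketch of the regression comparison step: given alarm logs captured
from nb version V and nb version V+1 running the same rules on the same
event stream, report any differences. The file contents and timestamp
format below are hypothetical; real alarm output would need its own
normalization before comparing."""
import difflib
import re

def normalize(line):
    # Strip a leading timestamp so only the alarm content is compared.
    # The timestamp pattern here is an assumption for illustration.
    return re.sub(r'^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} ', '', line)

def compare_alarms(log_v, log_v1):
    # Return a unified diff of the normalized alarm lines; an empty
    # list means the two nb versions behaved identically.
    a = [normalize(line) for line in log_v]
    b = [normalize(line) for line in log_v1]
    return list(difflib.unified_diff(a, b, 'nb-V', 'nb-V+1', lineterm=''))

if __name__ == '__main__':
    v  = ['2015-01-02 03:04:05 alarm: disk full on hostA\n']
    v1 = ['2015-01-02 03:04:06 alarm: disk full on hostA\n']
    diff = compare_alarms(v, v1)
    print('PASS' if not diff else '\n'.join(diff))
```

Normalizing away timestamps matters because the two nb processes emit alarms at slightly different moments even when their behavior is identical; only the alarm content should count as a difference.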
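For the rule enhancement test, differences are expected wherever the new rule features fire, so the comparison must separate expected differences from surprises. The following sketch is an assumption about how that triage might be done: the patterns, diff lines, and helper name are all hypothetical.

```python
"""Sketch of the enhancement comparison step: when rules V+1 use new
features, some alarm differences are expected. Classify each differing
line as expected (it matches a pattern the rule change should produce)
or a surprise. Patterns and log lines are hypothetical examples."""
import re

def classify_diffs(diff_lines, expected_patterns):
    # Split unified-diff lines into expected changes and surprises.
    expected, surprises = [], []
    for line in diff_lines:
        # Keep only added/removed alarm lines, skipping diff headers.
        if not line.startswith(('+', '-')) or line.startswith(('+++', '---')):
            continue
        content = line[1:]
        if any(re.search(p, content) for p in expected_patterns):
            expected.append(line)
        else:
            surprises.append(line)
    return expected, surprises

if __name__ == '__main__':
    diff = ['--- nb-V+1/rules-V', '+++ nb-V+1/rules-V+1',
            '+alarm: rate threshold exceeded on hostB',
            '-alarm: disk full on hostA']
    exp, bad = classify_diffs(diff, [r'rate threshold'])
    print('expected:', exp)
    print('surprises:', bad)
```

A test run then succeeds when every difference lands in the expected bucket; anything in the surprises bucket is exactly the kind of unexpected behavior change this parallel test exists to catch.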