=QmlTest=

Michał Sawicz, Michael Zanetti
We’ve been rather happy with qmltestrunner, and we’d like to show our approach to combining automated and manual testing of QML code. What we’ve been struggling with is measuring coverage for QML, so that’s what we’d like to brainstorm about.
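As an illustration of combining automated and manual checks in one qmltestrunner file, here is a minimal sketch — the Button component and its clicked() signal are hypothetical stand-ins, not code from the session:

```qml
// tst_button.qml – run with: qmltestrunner -input tst_button.qml
import QtQuick 2.0
import QtTest 1.0

TestCase {
    name: "ButtonTests"
    when: windowShown          // wait until the window is shown before interacting

    Button { id: button }      // hypothetical component under test

    SignalSpy {
        id: clickSpy
        target: button
        signalName: "clicked"
    }

    function test_click() {
        mouseClick(button)             // simulate the user's click
        compare(clickSpy.count, 1)     // clicked() fired exactly once
    }
}
```

The same TestCase items can be exercised interactively while developing and in batch via qmltestrunner, which is what makes the auto/manual combination convenient.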
Some notes:

* It’s hard to measure test coverage with QML because:
*# declarative code doesn’t really execute anything – it “creates”
*# eval() breaks coverage metrics by adding code at runtime
*# standard coverage tools don’t know about QML/JS
* We can deal with declarative code by measuring which types were instantiated.
* We can deal with the eval() problem by agreeing not to use eval().
* We can try to use the QML profiler as a tool to measure coverage.
* The QML profiler currently measures only function calls; we want branch/condition or line coverage. This would be possible by collecting more data with the profiler, at the cost of a higher performance impact. Since that is undesirable for regular profiling, it should be optional.
* Multi-engine profiling is currently not possible with the command-line profiler, but could be done using EngineControl.
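The type-instantiation idea above can be sketched outside Qt: compare the element types declared in a QML document against the types that a runtime hook reported as instantiated. The regex-based parser and the `seen` list below are illustrative assumptions, not an existing tool:

```python
import re

def declared_types(qml_source):
    """Collect the element types declared in a QML document.

    Matches capitalized identifiers followed by an opening brace,
    e.g. 'Rectangle {'. A real implementation would use a QML parser.
    """
    return set(re.findall(r'\b([A-Z]\w*)\s*\{', qml_source))

def instantiation_coverage(qml_source, instantiated):
    """Fraction of declared types that were actually instantiated."""
    declared = declared_types(qml_source)
    if not declared:
        return 1.0
    return len(declared & set(instantiated)) / len(declared)

qml = """
Item {
    Rectangle { color: "red" }
    Loader { id: lazy; source: "Heavy.qml" }
}
"""

# Suppose a (hypothetical) runtime hook reported these instantiations:
seen = ["Item", "Rectangle"]
print(instantiation_coverage(qml, seen))  # Loader never loaded: 2 of 3 types covered
```

This only yields type-level coverage; it says nothing about bindings or handlers inside each declaration, which is why the profiler-based approaches above are still needed for line or branch coverage.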
(Something is wrong with command-line handling and the test runner in conjunction with qmlprofiler – it didn’t work in the demo.)