QmlTest

Michał Sawicz, Michael Zanetti

We’ve been rather happy with qmltestrunner, and we’d like to show our approach to combining automated and manual QML test code. What we’ve been struggling with is measuring coverage for QML, so that’s what we’d like to brainstorm about.
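As a concrete point of reference, here is a minimal sketch of the kind of test file qmltestrunner consumes. It uses Qt Quick Test's TestCase type; the click-counting Rectangle is made up purely for illustration:

  import QtQuick 2.0
  import QtTest 1.0

  Item {
      width: 200; height: 200

      // Made-up item under test: it simply counts clicks on itself.
      Rectangle {
          id: box
          property int clicks: 0
          anchors.fill: parent

          MouseArea {
              anchors.fill: parent
              onClicked: box.clicks++
          }
      }

      TestCase {
          name: "BoxTests"
          when: windowShown                // only start once the window is visible

          function test_click_is_counted() {
              mouseClick(box)              // synthesise a click at the item's centre
              compare(box.clicks, 1)
          }
      }
  }

Saved as something like tst_box.qml, it runs automatically under qmltestrunner -input tst_box.qml.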

Some notes:

  • It’s hard to measure test coverage with QML because:
    1. Declarative code doesn’t really execute anything – it “creates”
    2. eval() breaks coverage metrics by adding code at runtime (see the short sketch after this list)
    3. standard coverage tools don’t know about QML/JS
  • We can deal with declarative code by measuring which types were instantiated.
  • We can deal with the eval() problem by agreeing not to use eval().
  • We can try to use the QML profiler as a tool to measure coverage.
  • The QML profiler currently measures only function calls; we want branch/condition or line coverage. This would be possible by collecting more data with the profiler, at the cost of a higher impact on performance. As that is not a good idea for regular profiling, it should be optional.
  • Multi-engine profiling is currently not possible with the command line profiler but could be done using EngineControl.
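
To make the eval() point concrete, a contrived sketch: source-level instrumentation sees both functions below, but the branch inside dynamicBranch() only ever exists as a string assembled at runtime, so none of its lines or branches can ever be attributed as covered or uncovered:

  import QtQuick 2.0

  Item {
      id: root

      // Static code: a source-based tool can count this function and both
      // of its branches.
      function staticBranch(x) {
          return x > 0 ? "positive" : "non-positive"
      }

      // Dynamic code: the instrumented source only contains string pieces;
      // the actual branch is created by eval() at runtime.
      function dynamicBranch(x) {
          return eval("(" + x + ") > 0 ? 'positive' : 'non-positive'")
      }

      Component.onCompleted: {
          console.log(root.staticBranch(1))    // visible to coverage tooling
          console.log(root.dynamicBranch(-1))  // invisible to coverage tooling
      }
  }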

(Something is wrong with command-line handling when the test runner is used in conjunction with qmlprofiler – it didn’t work in the demo.)