Initial feedback/questions

Mar 24, 2010 at 1:13 PM

First of all, I want to say thanks for creating this project - I've hacked together similar solutions in the past using SSIS, but this is much more robust and I can see myself using it a lot. I often find myself in situations where I'm making changes to complex MDX calculations and need to check that I haven't broken anything else in my cube.

I've now had a chance to install it and have a play (the installation went very smoothly, btw - the documentation is very helpful), and I have some initial feedback and questions which I hope you can help me with:

  1. Are there any plans to capture the amount of time a query took to run? When I make a change to an MDX calculation, I care about two things: does my query return different values, and does it perform better or worse? For the latter case, it would be nice to display query times for the two queries being compared, and possibly other related perfmon counter values. Then, if there was a significant difference in durations, I'd like to be alerted somehow.
  2. Can you implement something like the delta attribute for AssertTable? Often I find when I change an MDX calculation I end up getting insignificant differences in the values that are returned in some cells, and I want to ignore them. Here's an example I've just got:
      value difference in row 4, column 6
      Expected: 2180296.8799999999d
      But was:  2180296.8800000008d

    I can't use AssertValue because I have no idea which cells this will occur in - I need to test all cells. If I could set delta=0.01 in this case I think my problem would be solved.
  3. When using AssertTable, if there are multiple cells with differences I'd like to have all of them listed and not the first one that's found.
  4. There's something wrong with the row and column numbers coming back when a difference is found - they aren't correct, at least with MDX queries. Can you check this please? Also, with a cellset it would be useful to know what members were on rows and columns for each cell with differences, not just the row and column number.
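To make points 2 and 3 concrete, here's a rough sketch (in Python, purely illustrative; this is not the project's actual API, and the function name is made up) of a table comparison that tolerates small floating-point drift via a delta and collects every differing cell instead of stopping at the first one:

```python
# Hypothetical sketch: tolerance-based table comparison that reports
# ALL differing cells, not just the first one found.

def compare_tables(expected, actual, delta=0.01):
    """Return (row, col, expected, actual) for every cell whose
    absolute difference exceeds delta."""
    diffs = []
    for r, (exp_row, act_row) in enumerate(zip(expected, actual)):
        for c, (exp_val, act_val) in enumerate(zip(exp_row, act_row)):
            if abs(exp_val - act_val) > delta:
                diffs.append((r, c, exp_val, act_val))
    return diffs

# The two cell values from the example above differ only in the 9th
# decimal place, so with delta=0.01 they compare as equal:
expected = [[2180296.8799999999]]
actual = [[2180296.8800000008]]
assert compare_tables(expected, actual, delta=0.01) == []
```

With something like this, an insignificant rounding difference passes, while any genuinely different cells all show up in one failure report.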

Thanks,

Chris Webb

Coordinator
Mar 24, 2010 at 1:52 PM

Hi Chris,

thank you for your constructive feedback. Let me answer your questions:

  1. NUnit records the elapsed time on its own. If you start the tests from the command line, you get a report with all timings; see chapter 5, "test via msbuild", for further information. These timings come for free with NUnit. At the moment it is not planned to include other counters.
  2. That's a good idea. We plan to extend the assert methods, and a delta for tables is on our list. We also plan to extend the compare methods to compare ResultSets.
  3. You are right. We have the same problem in our own projects, and the error reporting is also an issue.
  4. The same here. The row and column reporting is not as reliable as the field values.

The first changes we will make are best-practice test cases and a better example suite; they are planned for four weeks from today. I don't know if we can fit the other changes into that time, but rest assured, we are working on it!

Thanks,

Thomas Strehlow  

Mar 24, 2010 at 2:39 PM

Thanks for the quick reply, Thomas. I'll be doing more thorough testing over the next few days and I'll let you know about any other bugs etc I find. I'm looking forward to the next release!

Regards,

Chris

Apr 14, 2010 at 7:59 AM

Hello,

this is a great project, and exactly something I need right now!

Anyway, with the documentation being all in German, I have trouble grasping the whole concept of it.

For starters: is there a way to define a rule that would try to match the outputs of 3 distinct queries (one against the source transactional DB, one against the DW, and one MDX query against the OLAP cube)?

Coordinator
Apr 14, 2010 at 9:01 AM
Edited Apr 14, 2010 at 9:03 AM

Hi mrQQ

first of all, thanks for the compliment.
Meanwhile, there is also English documentation at http://biquality.codeplex.com/documentation.

Possible procedure:

  1. For each source, create a test case that extracts the data into a separate result file (see chapter 3.2)
  2. Each test case contains your distinct query (for example, the one against the source transactional db)
  3. Define 2 test cases to compare the result files (result file 1 against result file 2, and result file 1 against result file 3)
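The procedure above boils down to "dump each query's rows to a file, then compare the files pairwise against a common baseline". An illustrative sketch in Python (file names, row data, and function names are all made up; the real tool has its own result-file format):

```python
# Illustrative sketch of the three-source comparison procedure:
# each query's result set is persisted to its own file, then the
# files are compared pairwise against result file 1 as the baseline.

import csv

def save_result(rows, path):
    """Steps 1-2: persist one query's result set to a file."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

def results_match(path_a, path_b):
    """Step 3: compare two result files row by row."""
    with open(path_a, newline="") as fa, open(path_b, newline="") as fb:
        return list(csv.reader(fa)) == list(csv.reader(fb))

save_result([["2009", "42"]], "result1.csv")  # transactional DB query
save_result([["2009", "42"]], "result2.csv")  # DW query
save_result([["2009", "42"]], "result3.csv")  # MDX query against the cube
assert results_match("result1.csv", "result2.csv")
assert results_match("result1.csv", "result3.csv")
```

Two pairwise comparisons against a common baseline are enough: if file 1 matches file 2 and file 1 matches file 3, all three agree.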

Best regards
Jörg

Apr 14, 2010 at 1:41 PM

Ah, that's good. I wasn't aware you could save a result to a file and then compare another result against it. I will have to try that :)