Vibrant discussion about CSLA .NET and using the framework to build great business applications.

Unit Test Quandary

    This post has 43 Replies | 4 Followers

    Top 25 Contributor
    Posts 397
    Jav Posted: Tue, May 9 2006 1:19 PM

    I am trying to set up an elaborate unit test project in my solution.  One problem that has me stumped is the access level of our methods, for example the Friend factory methods in all child classes.

    I do not want to mess up the access levels of hundreds of my objects just to set up tests.  I would also like to keep all of the tests in one or two specialized test projects, separate from the rest of the code.  The only thought that comes to mind is to add a single Public class to every project, with Public methods that act as jumping-off points to the actual objects.  Has anybody devised any other civilized way to do this?
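    A sketch of that "jumping-off point" idea, assuming a Child class with a Friend factory method (all names here are invented for illustration):

```vb
' Lives inside the business assembly, so it can see Friend members.
' A separate test project calls these Public wrappers instead of the
' Friend factories directly.
Public Class TestEntryPoints
    Public Shared Function GetChild(ByVal id As Integer) As Child
        Return Child.GetChild(id)   ' Friend factory method
    End Function
End Class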


    Top 25 Contributor
    Posts 397
    Jav replied on Tue, May 9 2006 1:53 PM
    Okay, I see it.  Visual Studio's test module automatically creates Accessors for the Private methods in other projects.
    Top 50 Contributor
    Posts 198


    Sorry for the slow reply, only just catching up with new forum emails.

    I agree, you shouldn't have to "mess up the access levels" of your objects.  You should write the objects exactly as they are meant to be used by your users (or your consuming code).

    Therefore, your test unit projects should test the public interfaces that your objects are exposing.  That is what you want to guarantee doesn't change from one version to the next.

    So we target our unit tests (we use MbUnit, although NUnit seems to have much better uptake) at our publicly exposed stuff that we don't want to break.  But we don't test private methods within classes, because those are specific to the implementation of each class.

    For each assembly (e.g. MyBusiness.dll) we have a corresponding test assembly (e.g. MyBusiness.Tests.dll) that tests the functionality exposed by that assembly.  That seems to be a common way to structure the different projects.

    We have the whole thing automated with CruiseControl.NET as well.

    Not Ranked
    Posts 6
    andreakn replied on Tue, May 30 2006 4:19 PM
    Hello, new to the forum and new to csla (halfway through the book), but here's my followup-question anyway:

    How do you detach yourself from the DB when doing unit tests? Considering how tightly coupled the objects are to the DB in the standard use of CSLA as described in Lhotka's book, it seems to me that it would be hard to test without having a DB loaded with test data.

    A key point of unit tests is that they should be fast; going to the DB takes time. I'd much rather instantiate "fake" objects loaded with just enough data to do my tests.

    There should be a way to inject the dependency on the DB I think...

    any thoughts?

    Andreas, Norway
    Top 50 Contributor
    Posts 198

    I disagree with your comment that unit tests need to be fast.  They need to be 100% accurate and 100% repeatable.  I would say that speed is the least important.

    This is why we use an automated build process that gets the source from SourceSafe, builds the assemblies and runs all our tests on a daily basis.  We don't care how long it takes, providing it can do it in a standard repeatable way.  It then becomes a group responsibility to sort out a broken build.

    As for the dependency on the database, yes you do have to assume there will be a database.  But that's why you've got BOs that persist themselves to the DB right?

    So we have a standard abstract test base class that allows us to test the simple CRUD functions of each BO class against the DB.  Obviously we have to write a concrete test class for each BO to override the unique things for that BO.  But the mechanics of the test process are essentially the same for each BO.

    Create a new BO and save it.  Then read it back, edit it and save it.  Read it back again and check it updated ok, before finally deleting it.  Check it was deleted properly.

    It's fairly easy to write such a unit test harness for your BO framework if you think that is what you are trying to test at a unit level.
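    The round trip described above might look something like this, as an NUnit/MbUnit-style sketch (the Project BO and its members are invented for illustration):

```vb
<Test()> _
Public Sub CrudRoundTrip()
    ' Create a new BO and save it
    Dim proj As Project = Project.NewProject()
    proj.Name = "Test project"
    proj = proj.Save()

    ' Read it back, edit it and save it
    Dim fetched As Project = Project.GetProject(proj.Id)
    fetched.Name = "Renamed"
    fetched = fetched.Save()

    ' Read it back again and check it updated OK
    Assert.AreEqual("Renamed", Project.GetProject(proj.Id).Name)

    ' Finally, delete it and check it was deleted properly
    Project.DeleteProject(proj.Id)
    ' (e.g. assert that GetProject now fails or returns nothing)
End Sub
```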

    If you are trying to do System Testing (i.e. testing the functionality of the application) rather than just unit level testing then I would agree that you need a pre-populated DB to work with.

    In that case you can either go with a standard SQL script that you run to populate your DB with data, or use your (fully unit tested) BOs to populate the DB for you.  It all depends on the amount of investment you want to make in the test environment and how you plan to use it.

    Not Ranked
    Posts 6
    jokiz replied on Wed, May 31 2006 4:14 AM
    i have been struggling with unit testing lately since i have been using CSLA 1.1.  i have read a number of articles on unit testing, and they say tests really ought to be fast.  after all, you want to know immediately if you broke something after you've made your changes.

    unit tests, as most of the articles say, should not communicate with an external environment (DB, Active Directory, etc.).  a better setup is to have a separate project for those tests (persistence, AD authentication), and the unit tests for the business objects should not load themselves from the DB.  and since CSLA makes use of static factories, you would need an abstraction inside those factory methods.  i haven't implemented it yet, but i can see an opening...
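    One way to sketch that "abstraction inside the factory" idea is to route the factory through a replaceable provider (all names here are invented, not CSLA API):

```vb
Public Interface ICustomerData
    Function Fetch(ByVal id As Integer) As Customer
End Interface

Public Class Customer
    ' Defaults to the real data access; a test fixture swaps in a fake.
    Public Shared Provider As ICustomerData = New SqlCustomerData()

    Public Shared Function GetCustomer(ByVal id As Integer) As Customer
        Return Provider.Fetch(id)
    End Function
End Class
```

    A test would then assign Customer.Provider = New InMemoryCustomerData() in its SetUp and never touch the database.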
    Top 50 Contributor
    Posts 198

    Ok, some of this depends on your definition of "unit" and also of "fast".  But IMO I strongly disagree.

    If you are saying that running 1 individual test must be responsive within your IDE when you want to check whether you have broken 1 thing in your class, then I do agree with you.  But I must reiterate that the point of unit testing is NOT to make sure that the code executes quickly; it's to make sure that your code does what it is supposed to do!   That is all, nothing else.  If it takes 5 seconds to prove you are 100% correct, then it takes 5 seconds, end of story.  I don't want something to take 1 second to tell me that it's 85% correct.  What about the people who need to use the other 15% of the functionality?  What do I tell them?  So I want the full 100% tested properly, no matter how long it takes.

    That's why reliability and runnability are more important than speed.  Hey, if you have unit tests that you know run slowly, then say so in your documentation, or mark them in a special way.  That doesn't mean they shouldn't be there to guarantee 100% accuracy.

    And I believe you have to communicate with the DB, if the "unit" you are testing is your BO.  How are you going to prove that your BO can perform basic CRUD operations if it never actually persists data into your database?

    C'mon, that's what your BO is supposed to do!  It's supposed to create itself via a factory method, allow some properties to be set on itself, and then persist itself into the database when you call the Save() method.  How can you possibly test that it does its job properly if you don't communicate with a database?  That's where the data has to end up!

    I understand the points you make, but in the real world you have to do what makes sense for the "unit" you are testing.  So if that means you go to the database or to AD, then that's what you should do.

    Otherwise, your unit tests have no value as they don't test how your application will work when it's deployed!


    Top 10 Contributor
    Posts 4,106
    Andy replied on Wed, May 31 2006 7:47 AM
    I just want to say I agree with David.  You want the tests to run in a reasonable amount of time.  You don't want to find out tomorrow that something you coded today failed; you want to find out within a few hours at most.

    I also agree that part of the BO's behavior is persisting itself.  You can use mocks, but that doesn't prove 100% that your BO communicates with your data layer properly; it just proves it communicates with the mock properly.

    Not Ranked
    Posts 9
    MelGrubb replied on Wed, May 31 2006 7:49 AM

    I also disagree with the "fast" comment.  In my opinion you should not even attempt to decouple your business objects from the database for purposes of unit testing.  My number one rule for unit tests is that they ought to be testing what the method they are exercising actually does.  If a method saves to the database then by God the unit test for that method should be checking that it did write to the database otherwise what's the point?  If your unit tests have not verified the behavior you are expecting at runtime then they are worthless.

    For those who insist that their unit tests should be super fast because they want their answers now, my only suggestion is to just run the tests for the parts you changed.  Certainly NUnit allows you to do this.  Maybe I don't have time for the full 200 tests in my suite when I only made changes to one section of the code... so just run those tests.  However: do not trust this "quick" test to verify your entire API and push the thing to production.  You must run the entire suite before giving the API the stamp of approval.

    And really, come on, you can't spare one minute out of your day to save hours and hours of painstaking debugging?  I've never had a unit test suite that was so intolerably long that I couldn't suffer through it.  Go get a drink or something and come back; it's not that bad.  My unit tests typically consist of, at the very least, a "CRUD" test, and sometimes a "CRUD x 100" test which just repeats the first test 100 times.  This is for really punishing the system, or for getting more accurate numbers when comparing timings.  The "x 100" tests are marked as "Explicit" in NUnit, though.  I don't want everything running 100 times during my normal test cycle.  These are specific tests for specific times, and they stay neatly out of the way until told "Explicit"ly to run.
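    The "Explicit" marking works roughly like this in NUnit (the x 100 body is a sketch; CrudTest stands in for the ordinary CRUD test mentioned above):

```vb
<Test(), Explicit()> _
Public Sub CrudTimes100()
    ' Excluded from normal runs; only executes when selected explicitly.
    For i As Integer = 1 To 100
        CrudTest()
    Next
End Sub
```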

    Not Ranked
    Posts 6
    andreakn replied on Wed, May 31 2006 2:10 PM
    Well, though I must say that I understand the argument that you need to test that each object can persist itself correctly, it is downright overkill to force EACH of your unit tests to round-trip the DB in order to test some part of your logic.

    There's a fair amount of literature out there supporting the idea that two of the key features of unit tests are that they are independent and fast. If all your tests round-trip the same DB, then you cannot easily guarantee either, and certainly not both, as the only way to guarantee independent tests is to reset the DB between tests. I won't try to defend this standpoint here (call me lazy if you will), as I'm not trying to convert anyone to any specific point of view.

    Let me ask this then: *given* that I have the need to have an indirection between the objects and the DB for testing purposes, what would be an appropriate way of going about that business?

    Is it even possible to do this within CSLA without breaking everything apart? If anyone has any insight on this, I would love to hear it.

    Top 10 Contributor
    Posts 4,106
    Andy replied on Wed, May 31 2006 2:43 PM
    I fail to see how a round trip to the DB is overkill; it's necessary for the BO to carry out its behavior.

    You can easily guarantee independence: you reset the DB just as you suggest, and this can be as simple as calling a cleanup procedure in your teardown.  The 'fast' part is relative; you can make tests faster with no programming at all, just by upgrading the network connection, processor, memory, hard drive, etc.
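    That teardown cleanup might look something like this (the connection string and stored procedure name are invented):

```vb
<TearDown()> _
Public Sub CleanUpTestData()
    ' Reset the test database after every test so tests stay independent.
    Using cn As New SqlConnection(testConnectionString)
        cn.Open()
        Using cmd As New SqlCommand("dbo.ResetTestData", cn)
            cmd.CommandType = CommandType.StoredProcedure
            cmd.ExecuteNonQuery()
        End Using
    End Using
End Sub
```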

    I'm sure you could use mocks if you absolutely wanted to not hit a database. 

    Not Ranked
    Posts 6
    andreakn replied on Wed, May 31 2006 5:07 PM
    well, assume that my PM / architect / team lead / *insert power that be* / QA guy

    comes to me and says: "this CSLA you're talking about sure is great, now if only it had an easy modification that would allow us to do our unit testing without hitting the DB all the time"

    what am I to respond? (also assume that not hitting the DB during the majority of unit tests is non-negotiable, coz it is in my neck of the woods)

    I know about NMock, I like NMock, I just don't see how I can easily modify either my usage of CSLA or CSLA itself to accommodate using mocks for testing.

    I'm getting more and more into the CSLA way of thinking and I'd really like to "sell" the framework to our company, but unless we can decouple the DB for tests it's disqualified :(
    Not Ranked
    Posts 6
    jokiz replied on Wed, May 31 2006 10:16 PM
    may i know how you persist your BOs, andreakn?  are you using stored procedures hardcoded in the BOs' DataPortal methods?
    Not Ranked
    Posts 6
    DennisWelu replied on Wed, May 31 2006 10:22 PM

    We recently began a CSLA-based project which has us unit testing in a serious way for the first time. Here's where we started, not necessarily saying this is good...

    We did mock the database using NMock. I can share more details on how that works if that's of interest. We were trying to unit test the public factory methods, which call static methods on the DataPortal. The DataPortal is difficult to mock because of those static methods, so instead we mocked the ADO.NET interfaces used in the eventual call to our DataPortal_ABC methods. When the factory methods finish executing, you can assert that data was fetched into the object as expected (for a _Fetch), and so on.

    But here's the problem part...

    1) The theory was that mocking would keep the tests fast and let us focus on what the DataPortal_ABC code did with the results of the data access. In the case of a Delete, there really is nothing to check on the business object afterwards, so why bother testing with mocks? Unless you just want to make sure an exception is not happening. The same problem applies to Insert and Update: the only thing that really gets changed there is the internal timestamp. So it seems that perhaps we went down the mockery road unnecessarily in this case. It certainly didn't test that the data was persisted to the actual database, which some folks on this thread have pointed out as valuable!

    2) For this particular project we chose to implement stored procedures in the database instead of building SQL statements. That's great, but it spreads the data access logic across the DataPortal_ABC method and the SP. As we mocked the ADO objects, we realized that while it allowed us to isolate the tests to the code in the DP_ABC methods, it really didn't test the SPs. I could see someone advocating unit tests on the SPs specifically, but I'm thinking it's more manageable if you just consider them part of the data access logic in the DP_ABC methods instead.

    That's a long-winded way of saying that I'm having a change of heart about mocking those ADO objects. I'm now thinking it may be better to go ahead and test those DP_ABC methods against the real thing. As MelGrubb implies, you can organize your tests into categories: "all tests", the "fast tests" (hopefully the majority of your tests), and the "slower tests". That will help if you are following the conventional wisdom of regressing your test suite frequently; just make sure to run them all before you check in.
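    In NUnit-style frameworks, that kind of grouping can be done with categories, roughly like this (the category and test names are invented):

```vb
<Test(), Category("Slow")> _
Public Sub Update_RoundTripsTheRealDatabase()
    ' Included only when the "Slow" category is selected, e.g. in the
    ' full pre-checkin or nightly run.
    ' ... save a BO, fetch it back, assert the update stuck ...
End Sub
```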

    I'd be very interested in hearing about the experience of others unit testing their CSLA business objects...


    Top 75 Contributor
    Posts 102
    hurcane replied on Wed, May 31 2006 11:22 PM
    There are "unit" tests and there are "integration" tests. I think a lot of us are running integration tests, but calling them unit tests. From an academic standpoint, you want your unit tests to test a single unit of code, and not run any code in any other modules. That's what mock objects are for.

    In theory, I want to write a test for the Get factory method. The code consists of:
    Public Shared Function GetProject(ByVal id As Guid) As Project
        If Not CanGetObject() Then
            Throw New Exception ...
        End If
        Return DataPortal.Fetch(Of Project)(New Criteria(id))
    End Function

    Testing the access exception is easy as it would never go to the database. But suppose you want to write a unit test that confirms that the Shared method is doing what it is designed to do. You would mock the DataPortal object. The mock would be designed to expect a call to Fetch and the mock object would return a Project object (also a mock, but with no expectations). Part of the unit test asserts that Expectations of the mock were met. If somebody changed the code and accidentally commented out the DataPortal.Fetch line, this test would fail.

    This test would be very fast. In my opinion, it's not a very useful test. However, suppose you want to unit test the DataPortal_Fetch method for the Project object (Pg. 426 in the VB book), which is what normally hits the database. There are multiple possible tests for the DataPortal_Fetch method. Let's design one that confirms all the appropriate fields are being pulled from the database.

    Typical code in the DataPortal_Fetch method uses SqlConnection, SqlCommand, SafeDataReader, and mResources. All of these objects need to be mocked.

    The connection mock has to include the Open method, but it doesn't need any expectations for this test.

    The command mock needs the CommandType and CommandText properties, but it also needs the ExecuteReader function, which has to return the mock data reader. For this test, there are no expectations.

    The data reader has to include Read, GetGuid, GetString, GetSmartDate, GetBytes, and NextResult. None of these methods have to actually return any values. For this test, there are lots of expectations to set up on the mock data reader. It is expected to get two GetString calls, one with a parameter of Name and the other with a parameter of Description. It is expected to get two GetSmartDate calls (Started and Ended). It is expected to get one GetGuid call (Id). And finally, it is expected to get a GetBytes call (LastChanged).

    Like the first test, it passes if all the expectations on the mock data reader are met.

    Here's another good test I just thought of: the Started date is assigned to the started field. This test would use the same mock objects as the previous test, but the data reader would only have the expectation that GetSmartDate is called once with the Started parameter. This expectation would return some value that you define in the expectation. The unit test then asserts that the Started date of the Project matches the value you put in the expectation.
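    A rough NMock 1.x-flavored sketch of that test; the exact mock API, the FetchUsing seam, and the date value are all assumptions for illustration:

```vb
' Mock the data reader (assumes SafeDataReader's members are overridable).
Dim reader As New DynamicMock(GetType(SafeDataReader))

' Expect exactly one GetSmartDate call for "Started", returning a known date.
reader.ExpectAndReturn("GetSmartDate", New SmartDate(#1/2/2006#), "Started")

' FetchUsing is a hypothetical seam that runs DataPortal_Fetch against
' the mocked reader instead of a real one.
Dim proj As Project = FetchUsing(CType(reader.MockInstance, SafeDataReader))

reader.Verify()   ' fails the test if the expectation was not met
' ...then assert that proj's Started value matches the date above.
```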

    When I started writing tests a couple months ago, I initially went this route. I found it very tedious to write all these expectations. The mock objects were somewhat tedious at first, but they are highly reusable. Designing the expectations is where I spent most of my time. As a result, I use integration tests. I have a suite of about 650 tests that I run several times a day. They take about 25 seconds with a local data portal and a remote database.

    There are downsides to using integration tests. One bug can break multiple tests. That's why it is important to always make small changes and test between each change. If a previously running test breaks, then it has to be in the code you just changed. Another downside is that it is slower. When the test suite becomes too lengthy to run in 5 minutes or less, you have to partition the tests into functional areas that should have no crossover. You run the suite of tests appropriate to the code you are working on. The nightly/daily build should run the full suite of tests.


    Copyright (c) 2006-2014 Marimer LLC. All rights reserved.