I am trying to set up a comprehensive unit test project in my solution. One problem that has me stumped is the access level of our methods, for example the Friend factory methods in all of our child classes.
I do not want to mess up the access levels of hundreds of my objects just to set up tests. I would also like to keep all of the tests in one or two specialized test projects, separate from the rest of the code. The only thought that comes to mind is to add a single Public class to every project, with Public methods that act as jumping-off points to the actual objects, as sketched below. Has anybody devised any other civilized way to do this?
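To show what I mean, here it is roughly in C# terms (Friend in VB maps to internal in C#); the class and member names are invented for the example:

```csharp
// One public "jump off" class per business assembly. It lives alongside
// the business objects, so it can see the internal (Friend) factories,
// and re-exposes them to the separate test project.
public static class TestGateway
{
    // InvoiceChild's real factory is internal/Friend; this public
    // wrapper lets the test assembly reach it without changing the
    // access level of the business class itself.
    public static InvoiceChild NewInvoiceChild()
    {
        return InvoiceChild.NewInvoiceChild();
    }
}
```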
Sorry for the slow reply, only just catching up with new forum emails.
I agree, you shouldn't have to "mess up the access levels" of your objects. You should write the objects exactly as they are meant to be used by your users (or your consuming code).
Therefore, your unit test projects should test the public interfaces that your objects expose. That is what you want to guarantee doesn't change from one version to the next.
So we target our unit tests (we use MbUnit, although NUnit seems to have much wider take-up) at our publicly exposed stuff that we don't want to break. But we don't test private methods within classes, because those are specific to each class's implementation.
For each assembly (e.g. MyBusiness.dll) we have a corresponding test assembly (e.g. MyBusiness.Tests.dll) that tests the functionality exposed by that assembly. That seems to be a common way to structure the different projects.
We have the whole thing automated with CruiseControl.NET as well.
I disagree with your comment that unit tests need to be fast. They need to be 100% accurate and 100% repeatable. I would say that speed is the least important.
This is why we use an automated build process that gets the source from SourceSafe, builds the assemblies and runs all our tests on a daily basis. We don't care how long it takes, provided it does it in a standard, repeatable way. It then becomes a group responsibility to sort out a broken build.
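For what it's worth, the CruiseControl.NET side is just configuration. A stripped-down ccnet.config for that kind of nightly build looks roughly like the following; I'm quoting the element names from memory, so double-check them against the CC.NET docs, and the paths, credentials and project names are placeholders:

```xml
<cruisecontrol>
  <project name="MyBusiness">
    <!-- Pull the latest source from SourceSafe -->
    <sourcecontrol type="vss">
      <project>$/MyBusiness</project>
      <username>builduser</username>
      <password>buildpassword</password>
    </sourcecontrol>
    <!-- Build nightly rather than on every check-in -->
    <triggers>
      <scheduleTrigger time="02:00" buildCondition="ForceBuild" />
    </triggers>
    <tasks>
      <!-- Compile the solution -->
      <msbuild>
        <projectFile>MyBusiness.sln</projectFile>
      </msbuild>
      <!-- Run the test assemblies via the MbUnit console runner;
           the executable path and arguments are placeholders -->
      <exec>
        <executable>C:\Tools\MbUnit\MbUnit.Cons.exe</executable>
        <buildArgs>MyBusiness.Tests.dll</buildArgs>
      </exec>
    </tasks>
  </project>
</cruisecontrol>
```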
As for the dependency on the database: yes, you do have to assume there will be a database. But that's why you've got BOs that persist themselves to the DB, right?
So we have a standard abstract test base class that allows us to test the simple CRUD functions of each BO class against the DB. Obviously we have to write a concrete test class for each BO to override the unique things for that BO. But the mechanics of the test process are essentially the same for each BO.
Create a new BO and save it. Then read it back, edit it and save it. Read it back again and check it updated OK, before finally deleting it. Then check it was deleted properly.
It's fairly easy to write such a unit test harness for your BO framework if you think that is what you are trying to test at a unit level.
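To give a flavour of it, here's a skeleton in C# with NUnit-style attributes (we actually use MbUnit, but the shape is the same); all the type and member names are illustrative rather than lifted from our real code:

```csharp
using NUnit.Framework;

// Abstract harness for the create/save/fetch/edit/delete cycle described
// above. Each concrete test fixture supplies the BO-specific pieces.
public abstract class CrudTestBase<TBusinessObject>
{
    // BO-specific: build a new, valid instance.
    protected abstract TBusinessObject CreateNew();

    // BO-specific: persist it (CSLA's Save() returns the saved object).
    protected abstract TBusinessObject Save(TBusinessObject bo);

    // BO-specific: fetch the same object back from the database.
    protected abstract TBusinessObject Fetch(TBusinessObject saved);

    // BO-specific: make a detectable change, and detect it afterwards.
    protected abstract void Edit(TBusinessObject bo);
    protected abstract bool EditWasApplied(TBusinessObject bo);

    // BO-specific: delete it, and check whether it still exists.
    protected abstract void Delete(TBusinessObject bo);
    protected abstract bool Exists(TBusinessObject bo);

    [Test]
    public void FullCrudCycle()
    {
        // Create a new BO and save it.
        TBusinessObject bo = Save(CreateNew());

        // Read it back, edit it and save it again.
        bo = Fetch(bo);
        Edit(bo);
        bo = Save(bo);

        // Read it back once more and check the update stuck.
        bo = Fetch(bo);
        Assert.IsTrue(EditWasApplied(bo), "edit did not round-trip");

        // Finally delete it and check it was deleted properly.
        Delete(bo);
        Assert.IsFalse(Exists(bo), "object still exists after delete");
    }
}
```

A concrete fixture is then just a [TestFixture] class, something like CustomerCrudTests inheriting CrudTestBase&lt;Customer&gt;, that overrides that handful of methods.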
If you are trying to do System Testing (i.e. testing the functionality of the application) rather than just unit level testing then I would agree that you need a pre-populated DB to work with.
In that case you can either go with a standard SQL script that you run to populate your DB with data, or use your (fully unit tested) BOs to populate the DB for you. It all depends on the amount of investment you want to make in the test environment and how you plan to use it.
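If you go the BO route, the seeding code can be trivial. A hypothetical example, reusing the factory/Save pattern discussed elsewhere in this thread (Customer and its members are stand-ins for your own objects):

```csharp
// Populate the test database with known data via fully tested BOs.
public static void SeedTestData()
{
    for (int i = 0; i < 50; i++)
    {
        Customer c = Customer.NewCustomer();   // public factory method
        c.Name = "Test Customer " + i;
        c = c.Save();                          // persists to the test DB
    }
}
```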
Ok, some of this depends on your definition of "unit" and also of "fast". But IMO I strongly disagree.
If you are saying that running 1 individual test must be responsive within your IDE when you want to check whether you have broken 1 thing in your class - then I do agree with you. But I must reiterate that the point of unit testing is NOT to make sure that the code executes quickly - it's to make sure that your code does what it is supposed to do! That is all - nothing else. If it takes 5 seconds to prove you are 100% correct - then it takes 5 seconds - end of story. I don't want something to take 1 second to tell me that it's 85% correct. What about the people who need to use the other 15% of the functionality? What do I tell them? So I want the full 100% tested properly, no matter how long it takes.
That's why reliability and runnability are more important than speed. Hey, if you have unit tests that you know run slowly then say so in your documentation, or mark them in a special way - that doesn't mean they shouldn't be there to guarantee 100% accuracy.
And I believe you have to communicate with the DB, if the "unit" you are testing is your BO. How are you going to prove that your BO can perform basic CRUD operations if it never actually persists data into your database?
C'mon, that's what your BO is supposed to do! It's supposed to create itself via a factory method, allow some properties to be set on itself and then persist itself into the database when you call the Save() method. How can you possibly test that it does its job properly if you don't communicate with a database? That's where the data has to end up!
I understand the points you make, but in the real world you have to do what makes sense for the "unit" you are testing. So if that means you go to the database or to AD, then that's what you should do.
Otherwise, your unit tests have no value as they don't test how your application will work when it's deployed!
I also disagree with the "fast" comment. In my opinion you should not even attempt to decouple your business objects from the database for purposes of unit testing. My number one rule for unit tests is that they ought to be testing what the method they exercise actually does. If a method saves to the database, then by God the unit test for that method should be checking that it did write to the database; otherwise, what's the point? If your unit tests have not verified the behavior you are expecting at runtime, then they are worthless.
For those who insist that their unit tests should be super fast because they want their answers now, my only suggestion is that you just run the tests for the parts you changed. Certainly NUnit allows you to do this. Maybe I don't have time for the full 200 tests in my suite when I only made changes to one section of the code... so just run those tests. However: do not trust this "quick" test to verify your entire API and push the thing to production. You must run the entire suite before giving the API the stamp of approval.
And really, come on, you can't spare one minute out of your day to save hours and hours of painstaking debugging? I've never had a unit test suite that was so intolerably long that I couldn't suffer through it. Go get a drink or something and come back; it's not that bad. My unit tests typically consist of at the very least a "CRUD" test, and sometimes a "CRUD x 100" test which just repeats the first test 100 times. This is for really punishing the system, or for getting more accurate numbers when comparing timings. The "x 100" tests are marked as "Explicit" in NUnit, though. I don't want everything running 100 times during my normal test cycle. These are specific tests for specific times, and they stay neatly out of the way until told "Explicit"ly to run.
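In NUnit terms it looks something like this (the fixture and test names are made up; check the exact [Category]/[Explicit] behaviour and console switches for your NUnit version):

```csharp
using NUnit.Framework;

[TestFixture]
public class CustomerTests
{
    [Test]
    public void Crud()
    {
        // ... the normal single-pass CRUD test ...
    }

    // Marked Explicit so it is skipped in the normal test cycle and only
    // runs when selected by name; Category lets the runner include or
    // exclude it as a group.
    [Test, Explicit, Category("Punishment")]
    public void CrudTimes100()
    {
        for (int i = 0; i < 100; i++)
        {
            Crud(); // repeat the basic test to stress the system
        }
    }
}
```

The console runner can then filter on category, with something like /include:Punishment or /exclude:Punishment depending on your version.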
We recently began a CSLA-based project which has us unit testing in a serious way for the first time. Here's where we started, not necessarily saying this is good...
We did mock the database using NMock. I can share more details on how that works if that's of interest. We were trying to unit test the public factory methods, which call static methods on the DataPortal. The DataPortal is difficult to mock because of those static methods. So instead we mocked the ADO.NET interfaces used in the eventual call to our DataPortal_ABC methods. When the factory methods are done executing you can assert that data was fetched into the object as expected (for a _Fetch), etc.
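To sketch the mechanics - with the caveat that I'm quoting the classic NMock DynamicMock API from memory, that Customer/GetCustomer are stand-ins for your own BO, and that TestConnectionFactory is a hypothetical seam for handing the mocked connection to the DataPortal_Fetch code; a real test also needs expectations for CommandText, parameters, Open/Close and so on, which I've trimmed:

```csharp
using System.Data;
using NMock;
using NUnit.Framework;

[TestFixture]
public class CustomerFetchTests
{
    [Test]
    public void FetchPopulatesNameFromReader()
    {
        // Reader returns one row, then signals end of data.
        DynamicMock reader = new DynamicMock(typeof(IDataReader));
        reader.ExpectAndReturn("Read", true);
        reader.ExpectAndReturn("Read", false);
        reader.SetupResult("GetString", "Acme Corp", typeof(int));

        // Command hands back the mocked reader.
        DynamicMock command = new DynamicMock(typeof(IDbCommand));
        command.ExpectAndReturn("ExecuteReader", (IDataReader)reader.MockInstance);

        // Connection hands back the mocked command.
        DynamicMock connection = new DynamicMock(typeof(IDbConnection));
        connection.ExpectAndReturn("CreateCommand", (IDbCommand)command.MockInstance);

        // Hypothetical seam: DataPortal_Fetch pulls its connection from here.
        TestConnectionFactory.Current = (IDbConnection)connection.MockInstance;

        // Exercise the public factory, which routes through DataPortal_Fetch.
        Customer c = Customer.GetCustomer(42);

        // Assert the data was fetched into the object as expected.
        Assert.AreEqual("Acme Corp", c.Name);
        reader.Verify();
    }
}
```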
But here's the problem part...
1) The theory was that mocking would keep the tests fast and allow us to focus on what the DataPortal_ABC code did with the results of the data access. In the case of a Delete, there really is nothing to check on the business object afterwards, so why bother testing with mocks? Unless you just want to make sure an exception is not happening. Same problem with Insert and Update: the only thing that really changes there is the internal timestamp. So it seems that perhaps we went down the mockery road unnecessarily in this case. It certainly didn't test that the data was persisted to the actual database, which some folks on this thread have pointed out as valuable!
2) For this particular project we chose to implement stored procedures in the database instead of building SQL statements. That's great, but it spreads the data access logic across the DataPortal_ABC method and the SP. As we mocked the ADO objects we realized that while it allowed us to isolate the tests towards the code in the DP_ABC methods, it really didn't test the SPs. I could see someone advocating unit tests on the SPs specifically, but I think it's more manageable to just consider them part of the data access logic in the DP_ABC methods instead.
That's a long-winded way of saying that I'm having a change of heart about mocking those ADO objects. I'm now thinking it may be better to go ahead and test those DP_ABC methods against the real thing. As MelGrubb implies, you can organize your tests into categories: "all tests", the "fast tests" (hopefully the majority of your tests), and the "slower tests". That will help if you are following the conventional wisdom of regressing your test suite frequently - just make sure to run them all before you check in.
I'd be very interested in hearing about the experience of others unit testing their CSLA business objects...