CSLA .NET

Vibrant discussion about CSLA .NET and using the framework to build great business applications.

How Far Can I Push the New CSLA 4 Authorization Model?

    7 Replies | 3 Followers

    Top 100 Contributor
    80 Posts
    JonStonecash posted on Sun, May 9 2010 6:03 PM

    I am working on a system to be implemented using CSLA 4.  One of the requirements is row-level authorization.  That is, every user has permissions on the business objects, but based upon authorization information, a user retrieving a list of these business objects would only be able to see a subset of the rows in the table.  Furthermore, some of the retrieved rows would have different permissions: some could be edited, some could be viewed in full, and some would have some of the data obscured.  The new CSLA 4 authorization model seems capable of supporting all of this, and more, with one exception: excluding rows from the collection.

    I could filter out the forbidden data after it has been retrieved, using the authorization logic to build the appropriate LINQ expression.  The user would never see the data.  Indeed, the filtering could occur on the server before the transmission across the wire to the client.  This all works, but it bothers me to retrieve the data only to throw some of it away.
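A minimal sketch of that post-fetch, server-side filtering, assuming a hypothetical ProjectDto and an authorization rule that grants visibility by department (all names here are illustrative, not CSLA types):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative DTO; not a CSLA type.
public class ProjectDto
{
    public int Id { get; set; }
    public int OwnerDepartment { get; set; }
}

public static class AuthFilter
{
    // Build a predicate from whatever the authorization system allows.
    public static Func<ProjectDto, bool> ForDepartments(ISet<int> allowed) =>
        dto => allowed.Contains(dto.OwnerDepartment);
}
```

Server-side, the fetch would apply it before anything crosses the wire, e.g. `allRows.Where(AuthFilter.ForDepartments(allowedDepts)).ToList()` — the forbidden rows are still pulled from the database, but they never leave the server.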

    I have been looking into translating LINQ expressions into SQL.  LINQ to SQL and LINQ to Entities do this.  I have some sample code that does a good deal of what I want.  If this works, the business object could tailor the request by formulating a LINQ expression that would end up as restrictions in the SQL WHERE clause, limiting the retrieved data.

    In my mind, it makes sense to have the business object create the LINQ expression to shape the data retrieval request to satisfy business needs.  What I want to do is to add the authorization restriction to the SQL.  I have pretty much worked out how I would translate the authorization data into a LINQ expression.  The question is how to pass that expression to the DAL.

    Here is what I understand.  When the object type is first created, the authorization rules are applied to the elements of the business object.  We will be slapping authorization rules on everything that moves, using common logic that sits in our own base classes that inherit from the CSLA base classes.  These rules will talk to the authorization system, which is claims-based and is looking like it will be rather dynamic.  Since the rule is going to work its way through the authorization data to authorize a fetch, it seemed to me that I could, at the same time, build up the LINQ expression that restricts the retrieved data to just what the rules allow.  Performance would be better and some of the security concerns would be strongly mitigated.

    The problem is how to pass that expression to the DAL.  For various reasons, I want to shield the business objects from the details of authorization (and other cross-cutting infrastructure issues).  At the time of the Fetch authorization there is no instance of the object to drop the expression into.  I could have the DAL create the LINQ expression but that seems like the wrong place to do it (not to mention the double hit on the authorization system).  
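One way the hand-off could look, sketched under the assumption that the DAL contract simply accepts an optional authorization expression as a parameter (the interface and all names below are hypothetical, not CSLA or existing DAL APIs):

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// Illustrative DTO; not a CSLA type.
public class ProjectDto
{
    public int Id { get; set; }
    public int OwnerDepartment { get; set; }
}

// Hypothetical DAL contract: the caller passes the authorization
// expression in; the DAL is free to translate it into the WHERE
// clause, or to compile and apply it in memory as a fallback.
public interface IProjectDal
{
    IList<ProjectDto> Fetch(Expression<Func<ProjectDto, bool>> authzFilter);
}
```

Because the filter arrives as an expression tree rather than a compiled Func, the DAL keeps the option of visiting it and emitting SQL later, without any change to this contract.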

    Is there a place where the Fetch authorization rule could communicate the authorization LINQ expression to the DAL without doing something that makes me feel dirty or overworked?

    Jon Stonecash

     

    All Replies

    Top 10 Contributor
    9,475 Posts

    I think the question is whether you gain enough performance/efficiency to compensate for the increase in complexity.

    It is terribly simple to selectively include rows of data into your collection based on some authz logic. So the complexity there is very low, but you are pulling some data from the database just to ignore it.

    It could get quite complex to create more sophisticated in-database filtering to only get back the correct rows. And with such complexity you might lose any perf/efficiency benefits if the database query plan gets out of control. And of course you'll have increased the complexity, which reduces maintainability and generally increases cost.

    So in the end the question is whether you gain enough value to offset the complexity and cost.

    Rocky

    Top 25 Contributor
    498 Posts

    Hi Jon,

    Have a look at http://forums.lhotka.net/forums/p/8432/40266.aspx#40266

    Tiago Freitas Leal, CslaGenFork (Open Source CSLA code generator)

    Top 100 Contributor
    80 Posts

    I have been reading a fair number of posts on the forum and giving this issue a lot of thought.  I am almost certainly getting ahead of myself here: premature optimization and all that.  What I need to do right now is figure out a way to filter the data that leaves open the possibility of optimizing the retrieval of data from the database, if that turns out to be needed in the future.  This is the approach that I will be working on (expressed as it applies to Fetch, but all of the CRUD operations would be affected):

    Create a Criteria object to hold the filtering information.  I know that CSLA 4 does not require a criteria object, but I find that there is still a lot of useful functionality enabled by having one.

    The criteria object will be a generic typed by the data transfer object to which it applies. 

    The criteria object will be able to contain the value for a key, primary or foreign.

    The criteria object will contain two different lambda expressions that constrain the data that is to be made visible as a result of the fetch. One is the constraint that is imposed by the "pure" business logic; a second is the constraint that is imposed by the "authorization" logic. Both are of the form Func<DTO, bool> based upon the DTO type of the criteria.  Both are optional.

    The criteria object will have an indicator that says that the fetch might benefit from translating some of the lambda expression to restrictions in the WHERE clause of the SELECT statement.  For the moment, the DAL will ignore this indicator.

    The DataPortal_Fetch methods will construct the criteria setting all of the above values as needed and send it off to the DAL. 

    The DAL will generate the desired SQL and could, in the future, if the "benefits from database filtering" indicator is active, translate the two lambda expressions (partially or entirely) into additional chunks of the WHERE clause.  The DAL generates the DTOs and returns them to the business layer.

    The DataPortal_Fetch methods use the lambda expressions in the criteria object to filter the returned DTO objects.  If the database filtering is in place, this might be a "no operation" but the cost here is minimal and it is most likely that it will only make economic sense to translate a subset of the lambda expressions.
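The steps above could be sketched roughly as follows.  One detail worth noting: the post describes the constraints as Func<DTO, bool>, but holding them as Expression<Func<TDto, bool>> preserves the later option of translating them into the WHERE clause, and compiling an expression recovers the Func.  All member names here are illustrative assumptions:

```csharp
using System;
using System.Linq.Expressions;

// Hypothetical criteria object, generic on the DTO it applies to.
public class FilterCriteria<TDto>
{
    // Primary or foreign key value, when the fetch is keyed.
    public object KeyValue { get; set; }

    // Constraint imposed by the "pure" business logic (optional).
    public Expression<Func<TDto, bool>> BusinessFilter { get; set; }

    // Constraint imposed by the authorization logic (optional).
    public Expression<Func<TDto, bool>> AuthorizationFilter { get; set; }

    // Hint that the DAL might benefit from translating the
    // expressions into the SELECT statement's WHERE clause.
    public bool PreferDatabaseFiltering { get; set; }
}
```

DataPortal_Fetch would populate this, hand it to the DAL, and afterwards apply the compiled expressions to the returned DTOs; where the DAL has already filtered in the database, those passes become cheap no-ops.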

    The other factor to consider here is the impact on caching.  The above approach would seem to work well with caching in that you could apply the lambda expressions to the cache as well.

    Jon Stonecash

     

     

    Top 100 Contributor
    80 Posts

    A very useful post.  Thanks.

    Jon Stonecash

     

    Top 25 Contributor
    422 Posts

    Two things I see here as possible issues - both probably pretty minor:

    1. Are lambda expressions serializable?  I seem to remember reading somewhere that they aren't, but I may be confusing things.

    2. Applying your lambda expressions to your cached data is not a bad plan, but potentially opens up a security hole.  After all, the cache has to have all possible data available.  I'm presuming the cache is server-side, so the potential risk is probably pretty small.  But it does potentially pose an issue.

    Just some things to think about.  Otherwise, it sounds like a decent plan to me.

    HTH

    - Scott

    Top 100 Contributor
    80 Posts

    Good point on the serialization.  However, the lambda expressions will not cross the client/server boundary.  They are generated in the DataPortal_xxx methods (and thus are running on the server) and sent to the DAL on the server.  Not a problem!  But you did make me think about it for a moment.  Always useful.
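Since both ends of the hand-off live on the server, the expression's whole lifetime stays in one process.  A self-contained sketch of that lifetime, with the fetch reduced to a plain method and illustrative names throughout:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class ProjectDto
{
    public int OwnerDepartment { get; set; }
}

public static class ServerSideFetch
{
    // Everything here runs on the server: the expression is built,
    // handed to the in-process data access code, compiled, and
    // applied, without ever crossing the client/server boundary.
    public static IList<ProjectDto> Fetch(
        IEnumerable<ProjectDto> table, ISet<int> allowedDepartments)
    {
        Expression<Func<ProjectDto, bool>> authz =
            dto => allowedDepartments.Contains(dto.OwnerDepartment);

        return table.Where(authz.Compile()).ToList();
    }
}
```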

    I have not fleshed out the caching strategy.  I would like to have some caching on the client side so as to avoid the round-trip to the server.  That cache would only have approved items (the server would never serve up anything else to that client).  But this area definitely needs some hard thinking.

     

    Jon Stonecash

     

    Top 10 Contributor
    9,475 Posts

    Jon, you should contact Prasanna in the Chicago Magenic office, he put quite a bit of thought into a client-side cache mechanism that integrates directly into the client-side data portal so it is effectively transparent to any code that uses the data portal.

    Rocky


    Copyright (c) 2006-2014 Marimer LLC. All rights reserved.
    Email admin@lhotka.net for support.
    Powered by Community Server (Non-Commercial Edition), by Telligent Systems