Faster than reflection: Microsoft previews Source Generators for C#

Microsoft is previewing a new C# compiler feature called a Source Generator that it said will automatically spit out new source code and compile it when you build a project. The feature is in the latest .NET 5.0 preview, where .NET 5.0 is the forthcoming release that is intended to unify the Windows-only .NET Framework and the …
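For a flavour of what that looks like, here is a minimal sketch using the preview's ISourceGenerator API (the generator and Greeter class names are illustrative, not from the article):

    using System.Text;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.Text;

    // Runs inside the compiler at build time and adds a new source file
    // to the compilation, rather than discovering things at run time.
    [Generator]
    public class HelloSourceGenerator : ISourceGenerator
    {
        public void Initialize(GeneratorInitializationContext context)
        {
            // No set-up needed for this trivial example.
        }

        public void Execute(GeneratorExecutionContext context)
        {
            const string source =
                "namespace Generated { public static class Greeter { " +
                "public static string Hello() => \"Hello from generated code\"; } }";
            context.AddSource("Greeter.g.cs", SourceText.From(source, Encoding.UTF8));
        }
    }

The generated Greeter class is compiled into the assembly like any hand-written file, so ordinary project code can call Generated.Greeter.Hello() with no run-time discovery involved.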

  1. cb7

    I always found that a well designed relational database, with optimized datatypes in its tables, well thought out indexes, native SQL queries and stored procedures, can be paired with even a VB ASP.NET front end and deliver much better performance than a front end and queries written in C#.

    1. J27

      Yeah...

      Stored procedures really lock you into a specific database layer and can become difficult to manage in larger systems. So yeah, that can make sense, but it can create a lot of problems: you have to be careful about how you split your program logic between the DB and model code. We have an older web app at work where ALL of the logic is written in stored procedures, including things that make no sense to have in stored procedures (like ones that literally run .NET code, why is that even a thing?).

      VB.NET vs C# should perform basically the same, because they run on the same runtime. The main reasons you'd not want to use VB.NET these days are its lack of popularity (good luck hiring a team with a lot of VB.NET experience) and its relative lack of language features (not a huge loss, in my opinion). .NET Core doesn't support it yet either, but apparently that's coming eventually after .NET 5.

      And as for... C# queries... I think you probably mean Entity Framework. That's just an ORM, nothing to do with C# as such, and ORMs all have the same problem: you're trading performance for development speed. So obviously you end up with better performance if you write directly against your database.

      1. UKHobo

        Re: Yeah...

        Our in-house system is a C# / VB.NET mess and a cluster of thousands of stored procedures which are full of legacy business logic that has been introduced over the years. Some stored procedures have grown to be thousands of lines long. Most developers appear to treat the SQL code with contempt: the SQL is so poorly formatted that it's barely readable and may as well all be on one continuous unbroken line.

        The DBAs have decided that they want / need to segment the data into logical partitions and separate it onto logical hardware. Part of this new-world plan is that maintaining existing business logic, or introducing new business logic, in any stored procedure change is now banned. This has forced a mammoth refactor of both SQL code and application code which will take years to complete (if it ever gets there).

        Lessons in how to morph and mutate code into an unmaintainable monster.

        1. scotthannen

          Re: Yeah...

          I've seen this also. Messy, untestable code is a problem regardless of whether it's C# or SQL. It's enough of a struggle encouraging developers to keep C# clean and tested. They (we) have a tendency to throw all that out the window when it comes to SQL.

          It's possible to make SQL somewhat testable, but it's much harder and most developers don't even think of writing unit tests for SQL. It's not an option in some cloud implementations of SQL.

          C# and SQL are different in our minds, but long, unreadable, untested, and untestable mountains of code don't follow different laws because they're written in one language or another. The defects and technical debt are the same. http://scotthannen.org/blog/2019/01/22/untestable-code-knows-no-mercy.html

          1. Robert Grant

            Re: Yeah...

            Messy, untestable code is a problem regardless of whether it's C# or SQL.

            Sigh. Yes. But one of them is orders of magnitude more likely to produce it, given two equally skilled developers.

    2. 9Rune5

      SQL

      SQL does not lend itself to nicely structured code.

      Even a simple select has its own set of problems.

      You type "SELECT " in the query editor, press ctrl+space and... yeah... what exactly should intellisense show you? So you put in an asterisk * and press on with FROM myfancytable. Now intellisense has something to work with and you can go back and select field names. (I wonder how many people won't even bother at this point?)

      Compare that with working with an ORM like EF. myresult = MyTable.Where(t=>t.<ctrl-space>... intellisense presents a nice list of fields, and you can crack on.

      Then there is the whole type safety thing. I'm amused to see some people will always go for strings. WHERE Id = '1'. '1'? Really? Yes... :(
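      To illustrate (a minimal sketch; the db context and the int Id column are assumed for the example):

        // With an ORM like EF the query is type-checked at compile time:
        // Id is an int, so the equivalent of WHERE Id = '1' simply won't build.
        var row = db.MyTable.Where(t => t.Id == 1).FirstOrDefault();

        // The raw-SQL version happily accepts the string '1' and leaves
        // the type mismatch for the database to sort out at run time.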

      SQL is a primitive language that encourages bad habits. (Injection vulnerabilities, anyone?)
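      For completeness, the standard defence is parameterised queries rather than string concatenation; a minimal ADO.NET sketch (the connectionString and name variables are assumed):

        using Microsoft.Data.SqlClient;

        // Vulnerable: "SELECT * FROM Users WHERE Name = '" + name + "'"
        // Safe: the value travels as a parameter, never as SQL text.
        using var conn = new SqlConnection(connectionString);
        using var cmd = new SqlCommand(
            "SELECT * FROM Users WHERE Name = @name", conn);
        cmd.Parameters.AddWithValue("@name", name);
        conn.Open();
        using var reader = cmd.ExecuteReader();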

      I was tempted to flame the rather crude debugger support, but msft solved that in the latest release of SSMS. They simply left out the debugger completely.

      No. Do not be tempted to implement business logic in a QUERY language. Just say no. It is an absolute hell to maintain. Your future maintainers will eventually reimplement everything in a proper dev language anyway (assuming the application has a life-span of more than five years). And they will do this while investigating your current whereabouts and contemplating showing up at your door with a baseball bat and duct tape.

  2. J27

    Ahh, System.Reflection: that namespace is the sort of thing you have to avoid if you care even slightly about performance. Too many times I've had to shoot down a junior dev who has come up with some novel whiz-bang code that relies entirely on reflection. Sure, you might save some code and not have to write those interfaces I asked you to. But it'll perform like crap.

    1. richardcox13

      Reflection is not that slow (at least once the metadata is loaded for the type; the first use is slow). Unless you are measuring the performance, you are just assuming.

      ASP.NET MVC uses reflection heavily, at its core, and still manages to be massively faster than Web Forms (and as pages get more complex, even more so).

      As with anything performance-related, beyond the simplest code it is far more subtle than you can predict.
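      To make that concrete, a minimal sketch (the Order type and property names are illustrative): the expensive part is the metadata lookup, which you pay once if you cache it.

        using System.Reflection;

        public class Order { public decimal Total { get; set; } }

        public static class Prices
        {
            // GetProperty is looked up once and cached; reusing the
            // PropertyInfo is far cheaper than looking it up per call.
            private static readonly PropertyInfo TotalProp =
                typeof(Order).GetProperty(nameof(Order.Total));

            public static decimal ReadTotal(Order o) =>
                (decimal)TotalProp.GetValue(o);
        }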

  3. bombastic bob Silver badge
    Megaphone

    Many commonly used libraries [snip] also make heavy use of reflection

    Interesting. I always knew ".Not" had some serious performance bottlenecks [primarily due to the OBVIOUS performance differences between Win2k server and Win2k3 server on the SAME HARDWARE] but it's nice to get some confirmation as to WHY.

    So here's a question: WHY must the fundamental design of these libraries require RUN-TIME DISCOVERY of ANY kind? This "collection" and "arbitrary data type" and "arbitrary property" kind of mentality is BEYOND SILLY and crosses into LUDICROUS. It is THIS FUNDAMENTAL DESIGN FLAW that _IS_ the problem!

    These are the kinds of "features" you expect to find in INTERPRETED languages, and NOT a COMPILED one for 21st century programming!!! For someone like me who does stuff for device control and kernel modules and microcontrollers and things like that, it's OBVIOUSLY THE WRONG WAY TO DO THINGS.

    Like the Mythbusters used to say when looking at a catastrophic failure: "Well, THERE's your problem!"

    1. not_equal_to_null

      Re: Many commonly used libraries [snip] also make heavy use of reflection

      Meanwhile, in the real world, you often have arbitrary input to process, and it's often JSON.

      You've broadly two options here - treat it as data and access it as such (like reading from a nested dictionary), or deserialise it into a strongly typed object.

      Using strong types obviously leads to better structured code, as we're explicitly defining our domain rather than relying on a nebulous bucket of data, and better performance as we're using CLR data structures to hold our data, not big old dictionaries.
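      For example, with Newtonsoft.Json (the Customer type here is illustrative):

        using Newtonsoft.Json;
        using Newtonsoft.Json.Linq;

        public class Customer { public int Id { get; set; } public string Name { get; set; } }

        // Option 1: treat it as data, a nebulous bucket of tokens.
        var bucket = JObject.Parse(json);
        string name1 = (string)bucket["Name"];

        // Option 2: deserialise into a strongly typed object; property
        // typos become compile errors instead of runtime surprises.
        var customer = JsonConvert.DeserializeObject<Customer>(json);
        string name2 = customer.Name;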

      Deserialising does require that you know something about the target type, so you can populate properties, validate values, etc.

      You could hand-craft a deserialiser, but frankly why bother, when type introspection gives you a reliable, testable and above all predictable experience? Any decent code builds compiled expressions from the results of reflection rather than reflecting on every call, as that is expensive; properly used (for example, to build an expression tree which is then compiled to a lambda), the results give similar performance to hand-written code.
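      A minimal sketch of that technique (names illustrative): reflect once, build an expression tree, compile it, and the resulting delegate runs at close to hand-written speed.

        using System;
        using System.Linq.Expressions;
        using System.Reflection;

        public static class FastGetter
        {
            // Pay the reflection cost once, then call the compiled
            // delegate like any other method on every subsequent read.
            public static Func<T, object> For<T>(string propertyName)
            {
                PropertyInfo prop = typeof(T).GetProperty(propertyName);
                ParameterExpression obj = Expression.Parameter(typeof(T), "obj");
                Expression body = Expression.Convert(
                    Expression.Property(obj, prop), typeof(object));
                return Expression.Lambda<Func<T, object>>(body, obj).Compile();
            }
        }

        // Usage: var getName = FastGetter.For<Customer>("Name");
        //        object name = getName(someCustomer); // no per-call reflection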

      And any decently architected software will allow selective optimisation where required, so your point about NEVER DOING THIS is somewhat naive and shows a lack of understanding.

      Rather than banging on about '.not', perhaps you could instead use some of your evidently vast intellect to consider that some tools are best for some jobs, and every tool is a compromise between performance, effort and maintainability. Just a thought.

      1. disinterested observer

        Re: Many commonly used libraries [snip] also make heavy use of reflection

        Bob's issue is in thinking that if it's "M$" then it's bad and if it's post Win7, it's actively evil. This is because Bob is an idiot.

    2. 9Rune5

      Re: Many commonly used libraries [snip] also make heavy use of reflection

      OBVIOUS performance differences between Win2k server and Win2k3 server on the SAME HARDWARE

      I fail to see the relevance?

      But presumably you ran Performance Monitor (a part of Windows since NT 3.1) and tracked down the culprit, so why not simply tell us what you found?

      1. not_equal_to_null

        Re: Many commonly used libraries [snip] also make heavy use of reflection

        You, sir, are (willfully or dimly) missing the point.

        Your original point was actually about deterministic (i.e. knowing everything there is to know about an application's execution ahead of time) and non-deterministic approaches to software development, as seen through the lens of reflection in .NET, which you seem to despise for some reason.

        Your point was that the potential dynamism provided by reflection was outweighed by the performance penalties inherent in introspection. Though you didn't quite put it that clearly.

        It is certainly possible to write entirely deterministic software (in terms of behaviour within the CLR) on .NET, but it's not really _desirable_, is it, when taking into account maintainability and the complexity of modern software.

        Again, it's a tradeoff - different tools for different jobs.

        For the majority of modern applications, I'd take maintainability over performance (to a point) every time. Software is about solving problems to make users' lives better, not being a purist. A user doesn't care whether you write specific deterministic code to cover every eventuality - they care that when they throw a vaguely-correctly-shaped bit of data at whatever you create, it'll work as expected.

        Popular libraries such as Newtonsoft.Json, AutoMapper, RestSharp etc. all make use of reflection because it's the _right tool for the job_ - namely taking diverse inputs and doing something sane with them.

        What has that got to do with how 'fast' windows runs on your server?
