Microsoft previews 'Juneau' SQL Server tools

Microsoft has released a third preview of SQL Server 2011, codenamed "Denali" and including the "Juneau" toolset. The Denali database engine adds features that support high availability and improve the performance of data warehousing queries. Then there's FileTable, a special table type that is also published …


This topic is closed for new posts.
  1. Robert E A Harvey


    I can't decide if this idea is rather clever or if I should run screaming from the room.

    I assume such access will be read-only? The idea of one's database integrity being threatened by Windows Explorer is a bit like someone designing that vent on the Death Star that Luke dropped the bomb down.

    1. Anonymous Coward


      If it is read-only I can see a benefit, not a huge one, but I can see use-cases. I imagine there is some weird reporting benefit somewhere for people who don't want to use proper software like Crystal Reports. It also depends how the data is represented. (1 row = 1 csv file?)

      If it is read-write I will join you in running screaming from the room.

    2. Linbox


      New name for old functionality.

  2. Bassey

    SQL Server Management Studio

    "the query and design tools should mean that developers rarely need to switch to SQL Server Management Studio"

    Thank god for that! However, will the Visual Studio Tools continue to randomly corrupt my databases forcing me to re-build them from script at least once a week?

    1. Loyal Commenter Silver badge

      Why so hard on Management Studio?

      Sure, it takes an age to load, and has its quirks, but the newer version does have some quite useful features, such as the ability to view recent expensive queries on a server, a godsend if trying to optimise performance.

  3. Alister

    freudian slip

    I first read the codename as "Denial"

    Dunno why?

  4. BlueGreen

    This is continuing a bad trend and MS are damaging themselves

    Databases have to be rock solid for business; adding fancy new features is Microsoft's traditional method of trying to create demand for a new product, and the trouble is that this is directly opposed to reliability. SQL Server 2000 was pretty damn solid; SQL Server 2008 had serious documentation holes in its fancy new geographic data types, and some weird SQL behaviour including some outright crashing bugs.

    Both of these cost me and my company some heavy time to track down and bypass at a previous job.

    Three things a database needs: reliability, reliability, reliability. Features, features, features are not a substitute.

    Another thing the article indicates is Microsoft trying to deskill DB use and management. This provides nice lock-in for them, but causes problems for the users who think that because they can design tables visually (which is fucking trivial!) and they can 'draw' their SQL, then they know what they're doing. I can say from other people's experience that this is not true: these visual tools have unexpected behaviours of their own (I had to help a tech track down why data from multiple tables was being deleted when using visual tools), and far, far worse is that by having pretty diagrams of joins etc. they give the impression that you are in control.

    This is completely wrong: such diagrams just hide the syntax of SQL, not the semantics, and there are some nasty semantic holes in SQL which have bitten me and others -- even when I know that they're there! (Specific examples: nulls producing tristate logic, nulls in NOT EXISTS queries, transactions not necessarily being quite as atomic as you think they might be, and others.)
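
    To make the null gotchas concrete, here's a minimal sketch (the table and values are made up for illustration):

    -- start sql

    create table #t ( a int );
    insert into #t values(1);
    insert into #t values(2);
    insert into #t values(null);

    -- NULL = NULL evaluates to UNKNOWN, not TRUE, so this returns no rows:
    select * from #t where a = null;

    -- the classic trap: NOT IN against a set containing a null matches
    -- nothing at all, because a <> null is UNKNOWN for every row:
    select * from #t where a not in (2, null);

    -- end sql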

    And visual SQL design tools never seem to support the full syntax, which you rapidly need as your techs/support staff gain experience.

    "testing, debugging, version control, refactoring, dependency checking, and deployment" -- testing should be done on a separate server, likewise debugging; version control should be trivial because SQL is text (so what exactly is this offering?); some competent refactoring tools would be nice at times, though! But deployment? What does deployment mean here?

    "In addition, the query and design tools should mean that developers rarely need to switch to SQL Server Management Studio." -- Oh bloody hell...

    "A SQL Server database project stores the entire database schema as the Transact-SQL (T-SQL) scripts that are required to create it, including code that runs in the database, such as stored procedures and triggers. You can import the schema from an existing database, or you can build it from scratch." -- you can extract the text of an entire database already, in executable format, and it should be in version control anyway (although admittedly it never is). Right click on the database in the studio then Tasks|Generate Scripts... But this is less useful than it seems, as a database's value is in its data as much as, or more than, in its structure.

    "Juneau has a visual table designer" -- mega useful, that. And have you seen the kind of cruddy, verbose and deliberately nonstandard SQL these things produce?

    "if a database admin has made some changes since the schema was imported to the project, Schema Compare will highlight them. The tools are also able to generate update scripts that apply the project schema to the target, and warn you of consequences such as data loss." -- had exactly this problem over multiple sites at a previous job. A tool to reliably diff the schema would have been somewhat useful, but far less than you might think because the application *using the database* would expect certain semantic properties, so you have to change not just the DB but the application layer. This really is messy.

    "no visual query designer, and no database diagramming or visual modelling tool" -- see above. I really find these are of no value whatsoever, and in fact can be worse than useless. I'm not saying this as a 733t DBA (which I'm not) to put other people down, but I've seen the problems that they've caused.

    Bottom line? Certain new SQL features such as recursive with, pivots etc. are very handy indeed, most especially the rather simple row_number() in my experience, but there's no feature like reliability (not just in the DB, but also the other tools, like the reporting service, he says bitterly). MS has forgotten that and it will cost them. And doubtless continue to cost their users too.
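
    (To show why row_number() is so handy -- a made-up example against a hypothetical orders table, the latest order per customer in one pass:)

    -- start sql

    select customer_id, order_id, order_date
    from ( select customer_id, order_id, order_date,
                  row_number() over (partition by customer_id order by order_date desc) as rn
           from orders ) x
    where rn = 1;

    -- end sql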

    1. Loyal Commenter Silver badge

      A number of valid points.

      However, you mention 'transactions not necessarily being quite as atomic as you think they might be'. Unless you are talking about things like (NOLOCK) and (READPAST) keywords, which are used to deliberately read (or skip over without locking) uncommitted data, I'd really like a concrete example of what you mean by this, if you have one to hand. This is the sort of thing that I, as a developer writing SQL-heavy applications, should know about!

      1. BlueGreen

        transaction weirdness example

        Nope, it's not an ignore-the-locks issue.

        -- start sql

        select @@version;

        -- for your info, returns on my machine: Microsoft SQL Server 2008 (RTM) - 10.0.1600.22 (X64)[...] on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)

        create table #tbl ( a char primary key );

        insert into #tbl values('a');

        insert into #tbl values('b');

        insert into #tbl values('c');

        select * from #tbl;

        -- returns what you expect, 3 rows of 'a', 'b', 'c'

        -- now the interesting bit

        begin transaction;

        insert into #tbl values('b');

        insert into #tbl values('c');

        insert into #tbl values('d');


        -- what do you expect? it's in a transaction and 'b' and 'c'

        -- were already entered as primary keys so this should

        -- cause a violation of the PK which, being in a transaction,

        -- should roll back the lot, so we're back to 'a', 'b' and 'c'.

        -- Indeed, does report: Violation of PRIMARY KEY constraint 'PK__#tbl______3BD0198E0425A276'. Cannot insert duplicate key in object 'dbo.#tbl'.

        -- however...

        select * from #tbl;

        -- Actual result is 'a', 'b', 'c' and 'd'

        -- end sql

        This is per the SQL standard, believe it or not. The easy (and MSSQL-specific) way to deal with it is using SET XACT_ABORT ON, which terminates the transaction on any error.
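
        A minimal sketch of the fix (fresh temp table for illustration, same 2008 instance assumed):

        -- start sql

        set xact_abort on;
        create table #tbl2 ( a char primary key );
        insert into #tbl2 values('a');

        begin transaction;
        insert into #tbl2 values('d');
        insert into #tbl2 values('a'); -- PK violation: with xact_abort on, this aborts and rolls back the whole transaction
        commit transaction; -- never reached, the batch is terminated above

        select * from #tbl2; -- run separately: only 'a' remains, the 'd' insert was rolled back too

        -- end sql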

        You might also want to look at my comments in <> for some null nastiness.

        1. BlueGreen

          and monkey-boy has just realised...

          ... he's running an RTM sql install. Only a home machine but still. SP2 installed pronto.

          Bad monkey-boy. No bananas for you.

        2. pan2008

          where is the rollback?

          Personally, when I am using a begin and commit tran I always have a catch error statement. So your block should have a rollback statement if things go wrong. Check the TRY-CATCH block in online help.

          I don't see anything wrong with the behaviour of SQL Server above; if you had the rollback statement, your table wouldn't have the 'd' value. This is standard coding practice. At least SQL Server has a very neat way to handle errors. Not all DBMSs have this lovely try-catch.

          I agree with another post; unfortunately these nice tools in Visual Studio give the .NET programmers the impression that they don't need to really understand databases and SQL. I have seen some really bad examples of coding (especially with Hibernate) and jumping out of the window wasn't enough. I am still here because I was on the ground floor!

          Am I looking forward to Denali? I certainly am, especially SSAS, which will be able to be more efficient, and not for analytics only.

          1. BlueGreen

            @pan2008: there's nothing to roll back

            try/catch is the modern way to do it but I'm a bit behind in these (a shedload better than the old ways of error handling with gotos though). Per BOL it catches any error level > 10 (informational level) so it would have caught this PK violation, which is a level-14 error, which is nonfatal. Nonfatal errors apparently don't abort the transaction (unless you use a try/catch or xact_abort to force handling of any errors) which is why this transaction succeeded - and it did succeed! Put 'select @@trancount' immediately after the commit and it shows 0 - there is no pending trans because it completed! There's nothing to roll back.

            By contrast, put 'insert into #nonexistent_table values(99);' just before the commit and run again, it complains and aborts the lot. Run the 'select @@trancount' again (manually, because it won't be run now as script is aborted and never reaches the @@trancount bit) and it shows '1' - one transaction is still pending.

            You're quite right about try/catch though, I need to get au fait with these.

            (Handy hint: I used temporary tables (# prefixed) because you can clean up all such tables very easily; good for manual prototyping and messing around. They last as long as the connection remains, so when you want a clearout, right click the SQL pane and choose Connection|Disconnect. This drops all temp tables. Then press F5, which asks you to log in, and bingo, all clean again, no table dropping needed. Trivial trick but handy.)

            1. pan2008

              @ BlueGreen

              We are turning this into a sql forum but if you have the following around your transaction it never fails. The select only gives a,b,c. Hope that helps.

              -- now the interesting bit

              begin try

              begin transaction;

              insert into #tbl values('b');

              insert into #tbl values('c');

              insert into #tbl values('d');


              commit transaction;

              end try

              begin catch

              if @@TRANCOUNT > 0

              rollback tran

              end catch

              1. BlueGreen


                Yes, but like I said:

                "Per BOL [try/catch] catches any error level > 10 (informational level) so it would have caught this PK violation, which is a level-14 error, which is nonfatal. Nonfatal errors apparently don't abort the transaction [ie, just by themselves without further error handling]..."

                PK violation by itself doesn't abort the transaction. The transaction completes despite having an 'obvious' error, which IMO is wrong. Adding try/catch or xact_abort makes it behave as it always should have done, because a transaction is all-or-nothing by definition. Without some error handling it effectively executes part of the transaction ('d') but not the rest ('b', 'c', which it couldn't insert anyway for PK reasons). IOW it's *not* behaving as a transaction in the reasonably expected sense.

                1. pan2008


                  OK, I see your point, but a transaction needs error handling; this is how I've used them for the past 10 years anyway. I am not sure about Oracle and how it handles this, but SQL Server and Sybase behave this way. In fact, if you don't put in some error handling you may end up with a locked table, because the commit didn't execute and there is no rollback in the code.

                  One of the reasons is that I may want to do something specific after the error: if by default it rolls back, I can't write to an audit table to capture the error, because it will roll back the audit write op. It's similar with .NET or Java; appropriate error handling makes things easier, but that's the price you have to pay.

