Paul Flessner: Tech•Ed 2005 Keynote

Remarks by Paul Flessner, Senior Vice President, Server Applications, Microsoft Corporation
Tech•Ed 2005
“Getting Ready for Connected Systems”
Orlando, Florida
June 7, 2005

PAUL FLESSNER: Well, thank you so much and welcome. Tech•Ed is a great time of year for us, it’s a very, very exciting time of year. Like all of you, we work hard all year long and Tech•Ed is our chance to talk to you about the great products and technologies that we’ve been building all year and it’s super important that you give us feedback, things that we’re doing right and things that we’re not doing right. We do listen to that and it’s critically important to us building great products.

Today’s talk is about connected systems. You might be asking yourself what is a connected system, we’re going to talk about that, but in a big way a lot of what we’ve done for a long, long time in IT is get our companies connected — connected to customers to increase revenue, connected to partners and suppliers to cut costs and increase profitability, and connected to our employees so that we can be very productive and make sure they have all the information they need.

And we’re going to talk a lot about connectivity today. We’re going to talk about the impact it has on technology and our business, we’re going to talk a little bit about sort of the history and have some fun with that and then we’re going to talk about the implications of connectivity and the systems and how we’re going to architect those systems going forward and things that we need to do to make sure that our businesses are connected.

And hopefully we’re going to tell you all about some great products that we’ve built that we think will make it a lot easier to do connected systems.

So to start with, we’re going to do a bit of history on connectivity itself. “What hath God wrought?” That was the first message ever sent by telegram back in 1844 by Samuel Morse: “What hath God wrought?” It seems like maybe Mr. Morse had a bit of an understanding of the amount of change that was about to be thrust upon us based on that comment and actually that first message.

But things proliferated rapidly, as we might expect. That was 1844, May 24th, 1844 when that first message was sent. Less than six years later in 1850, over 12,000 miles of cable were strung across our rapidly expanding continent.

In 1858, the first transatlantic cable was run across the Atlantic, of course, and it was run by a newspaper tycoon who was trying to connect the countries and get information to flow more rapidly. That first commercial message was 99 words, sent by Queen Victoria to James Buchanan, our president at that time. The message took sixteen and a half hours at a cost of $10 per word in that year’s currency.

Connectivity has gotten a bit cheaper over time but it certainly was proliferating rapidly.

In 1876, the first phone call: Alexander Graham Bell called his assistant, Thomas Watson. “Mr. Watson, come here, I want to see you.” Potentially the first demand of IT ever. (Laughter.) That was the first phone call.

And then we started with the wireless. Marconi in 1902 did the first transatlantic wireless broadcast, shortly after that followed by television and I don’t know whether that was the beginning of the downfall of modern society or whether it was reality TV in ’99 but it began nevertheless.

And then computer technology started to drive the proliferation of connectivity. In 1944, computers were used by the U.S. government to do calculations. In 1951, the first commercial computer was made available. In 1969, the research project that became what we now call the Internet began in Massachusetts. In 1991, the project that gave us what we know today as the World Wide Web, the www, hypertext and all that, kicked off, and in 1994 the Internet as we know it today was released by the U.S. government and made available.

It’s been a rapid progression since. In 2000, Microsoft stepped forward and said, wait a minute, we can use this thing for more than HTML and static pages, more than e-mail; we can program this thing, we can make it so that intelligent, secure, program-to-program messages occur. We called that .NET. We worked with our competitors and partners on Web-services standards to really try to make sure that this connectivity could be discovered and programmed against.

In 2002, we released our first set of products, Visual Studio .NET, the first integrated design time and runtime for Web services. And that year there were about 300 million people who were connected to the Internet, in 2004, there were 508 million people connected to the Internet and I hear projections that that’s going to double in 2005 to 1 billion people connected to the Internet.

That implies change to our industry. Customers want information, partners want — no, they demand information, and that’s going to have a big implication on how we think about our systems and how we build our systems going forward.

Business Application Architecture: Our Journey

So what kind of implications does it have on the architecture of an app? Let’s take a bit of a history tour there as well. Ah, this is the life. Remember the days of the mainframe? Simple: just build one big, massive program. You slam the UI in there, you get the data access, you put in multiple functions. I was a COBOL and IMS DB/DC programmer and I loved that. You’ve got the MID and the MOD, you’re calculating the characters on the screen. You’ve got your PCB, your Program Communication Block, to access the data. And let’s just say maybe this was an order entry program, right? You’d have a couple of functions in there. You might have customer validation, you might have check customer pricing, you might have product inquiries; lots of stuff going on inside there.

Well, that was good for simplicity and super good for performance; we tuned those things and, man, we knew how many I/Os we were going to do, they ran well, but they really stunk for maintainability, right? How many guys can you put on one piece of code at one time? I don’t want to have to rewrite customer validation every time I want to use that routine. So it was very difficult to be able to really get in there and keep the rapid pace of change going that business was driving.

So new technologies came forth: client/server, minicomputers, PC servers. And we did a couple of smart things. We got the UI out of the processing program, right, pushed it out; it could be a smart client, maybe an HTML client. Stored procedures came about and we were able to rip some of that code out of the processing and push it back into the database server, and that was all a good thing.

But we still had this big knot around that monolithic process. So we said, hey, we can add some interfaces, we can maybe just call that customer validation program or call that credit check program and we’ve made some progress.

But we still didn’t have the factoring quite right for large-scale asynchronous systems that could be programmed against and discovered across the Internet. This worked super well on the intranet but still didn’t have really good capability to go across the Internet.

So we started talking about Web services and we worked with partners and things have come a long way.

What we really are talking about now — and this has changed — is that you don’t have to write your systems all from scratch as Web services; certainly a lot of customers wrap them. But we want you to expose those processes.

First, you’ve got to decide what a Web service is in essence. A Web service, as I define it (and believe me, this is not a textbook definition, it’s just sort of how I think about it), is a business process that does a single business function; it is well defined, it does not have context on any other service, and it does not require state from any other service, so it’s a very unique function. Think about service one being that customer validation, service two being that credit authorization, three being product inquiries, that sort of thing; a very nice and well-defined set of services.
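For illustration, here is a minimal C# sketch of that kind of service as an ASMX Web service; the class, namespace and method names are hypothetical, and a real implementation would look the customer up in a database.

```csharp
using System.Web.Services;

[WebService(Namespace = "http://example.com/services")]
public class CustomerValidationService : WebService
{
    // One well-defined business function, no session state, and no dependence
    // on the context or state of any other service.
    [WebMethod]
    public bool ValidateCustomer(string customerId)
    {
        // Placeholder check; a real service would query the customer database.
        return !string.IsNullOrEmpty(customerId);
    }
}
```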

But you take that flow control logic, that stuff that makes it an order entry program and you take that out and that gives us tremendous flexibility to do things like create what some people are calling composite apps. Now, I’m not here to define new terms, I know that drives everybody nuts, but these are terms that sort of people are using and I’m sure the analysts will think of terms and we’ll get this thing very well defined over time.

But this is an architecture that we think is super important going forward and gives you tremendous flexibility. Yes, you can create one composite app with those various services, you could create multiple composite apps by just taking and providing a new set of flow control logic and that gives you a lot of flexibility.
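As a rough sketch of that idea, the flow-control logic below composes three services into an order-entry app. The proxy classes are hypothetical stand-ins for what Visual Studio’s “Add Web Reference” would generate; a different composite app could reuse the same services with different flow logic.

```csharp
// Hypothetical stubs standing in for generated Web-service proxies.
public class CustomerValidationProxy  { public bool ValidateCustomer(string id)          { return !string.IsNullOrEmpty(id); } }
public class CreditAuthorizationProxy { public bool Authorize(string id, decimal amount) { return amount <= 1000m; } }
public class ProductInquiryProxy      { public bool IsInStock(string sku, int quantity)  { return quantity <= 5; } }

// The composite app is just externalized flow control calling the services.
public class OrderEntryFlow
{
    public string SubmitOrder(string customerId, string sku, int quantity, decimal amount)
    {
        if (!new CustomerValidationProxy().ValidateCustomer(customerId))
            return "Rejected: unknown customer";
        if (!new ProductInquiryProxy().IsInStock(sku, quantity))
            return "Rejected: out of stock";
        if (!new CreditAuthorizationProxy().Authorize(customerId, amount))
            return "Rejected: credit declined";
        return "Accepted";
    }
}
```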

Think about the flexibility to connect to multiple different UIs, right, it’s not just connectivity that’s connecting the world, it’s all these different devices, handheld devices, dumb devices, smart devices, all those have potential and we need to connect to them.

And it also gives you flexibility in the back end in terms of the data. Now, many of us worked hard on those enterprise data models in the ’80s; we all wanted that nice, concise data model, well defined, and believe me, many of us died on that hill. By the time you get the thing done, the data changes, there’s another acquisition, another divestiture. Distributed data is sort of a fact of life. I’m all for normalizing data, I’m an old database guy from the start, but sometimes it’s just not possible or practical. You’re going to have databases with redundant data at times, and you’re going to call those into these services, and the services are going to aggregate and deal with some of the facts of life about data.

So this is in essence the architecture we’re talking about.

Now, in the past, we as an industry, you know, Microsoft and other vendors haven’t helped you guys a lot here, and that’s something we need to do a heck of a lot better job of. We’ve also learned a lot and this is new. Customers have been aggressive in implementing Web services. You’ve got companies like eBay, now 40 percent of eBay’s listings come in through Web services.

So there’s been a good set of key learnings and we should just kind of recap those. We’re talking about what some people call a services-oriented architecture: define those services.

The second thing that’s key is to externalize that flow control, or workflow as some people call it. That’s super important to give you the flexibility to dynamically re-aggregate and create applications, giving you maximum reuse of, and value from, each of those services and that valuable code that you write.

Federated identity: You don’t want to have a customer experience where every time you jump from process to process, a little popup window comes and, hey, could you just reenter your user ID and password just one more time, for the thousandth time that day. That’s not a good thing. Single sign-on is key.

Rich user experience: It still is important to have the UI right and have it integrated in the workflow of the customer. We have this experience at Microsoft. We, like all of you, have pursued this CRM craze, we had to have a CRM system and we put it in and we reengineered our sales process and we’re feeling really proud of ourselves. You know what, we kept looking at the data and looking at the data and the reps in the field, the people in the field just simply weren’t putting the data in the system.

So we went out and talked to them, and then we forced them to put the data in the system: you know, if you don’t put the data in the CRM system, you don’t get paid. We thought that was a great idea. Then at the end of each month they would come in for a couple of days, jam in whatever they remembered, get their checkmark by saying they did it, and it took them two days out of the field, which is a bad idea.

The Web interface just wasn’t how they worked, they worked and lived in Outlook. They received e-mails from customers and they wanted a quick follow-up, they received information from their boss, they wanted to put that in their follow-up, they lived in Outlook. So we wrapped our CRM system and other databases around CRM with a Web service and we plugged it into Outlook and utilization has gone up very dramatically.

Federated data: I’m not advocating it, I think it’s kind of a fact of life. The services model does sort of echo the chaos of the real business world. We don’t have time always to normalize that data. I wish we did and I’m all for it, it makes the database guys’ lives a little easier, but sometimes we have to match the flexibility and chaos of the real world with an architecture that facilitates that and we think the services-oriented architecture is more flexible.

And given that your service can be called by anyone, depending on how much access you allow, the availability requirements for a given service have gone up dramatically, and that’s something that we have to think about in our design and certainly something we have to think about from an IT pro perspective in management.

So this architecture is proliferating quite rapidly. And, sure, you’re going to see a lot of business-to-business activity about getting customers and partners and employees connected. We’re going to continue to work hard on this; there are lots of case studies on our Web site of customers that have pursued a Web services architecture. I’m not saying that you have to throw out all your existing systems and rewrite. You can wrap existing systems, we do it, lots of customers do it, but it is something to think about architecturally as you move forward: think hard about breaking down into autonomous services, or business services, exposing those services as Web services and trying to externalize that flow control up.

Dev Ready: Database Development

All right, so my job, all of our jobs at Microsoft, is to make this easier, and we haven’t made it easy enough. First of all, there weren’t any standards; well, we worked on that. Then we didn’t have an integrated toolkit; we took care of that in 2002 and we’re going to keep working on that. Mission-critical capabilities; yep, we talked about that. And better decisions; you definitely want access both to the data and to the process that’s going on in this rapidly moving connected-systems world.

And today we’re going to talk about the things that we’ve done with an exciting new set of products, Visual Studio 2005, SQL Server 2005 and BizTalk Server 2006, and the exciting things we’ve done in those products to help you build better connected systems.

We’re going to jump into the dev-ready piece first. A big part of the work went into Visual Studio 2005 and SQL Server 2005, and, yes, I know it’s taken some time to get these done. Trust me, no one in the world wants to ship SQL Server 2005 more than me, and we will ship it, I promise you that; things are going very well, and we’ll talk about that a bit later. But a lot of work went into making sure that the customer and developer have a very seamless development experience when putting together a database application.

We worked hard to make sure that the Visual Studio IDE is the environment that you can live in, whether you’re in SQL Server Management Studio or in the BI Development Studio, and it looks exactly the same as when you’re doing development. So you can sit there and step through your code if it’s running on the client, you can jump to the mid-tier and walk through that code, and you can jump right back to the database and walk through that code, in a seamless, integrated experience for developing and debugging. It’s very, very powerful; we think it’s a paradigm-changing solution for developers to have that level of integration.

We also worked hard to make sure that the CLR is deeply embedded inside SQL Server; the two work very closely together. This gives you tremendous flexibility in developing your applications. It used to be that you had to decide at design time where your code was going to run: if it was going to run back in the database as a stored procedure, you had to write it in T-SQL, the stored procedure language; if it was going to run in the mid-tier, VB, C#, whatever you chose. But those decisions weren’t always easy to make at design time. Sometimes you’d like to wait a little later in the process to see how the code is behaving, how much resource it takes, or maybe your call patterns changed.

So now, you can develop in any of your CLR languages and you can make that decision on where you want that code to run at runtime, not at design time and it’s a very powerful and flexible feature.
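As a concrete example, here is a minimal SQLCLR stored procedure sketch in C#; the table and column names are hypothetical. The same class could be compiled for the mid-tier unchanged; it only becomes a database-side procedure when you catalog it with CREATE ASSEMBLY and CREATE PROCEDURE ... AS EXTERNAL NAME.

```csharp
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class BusinessLogic
{
    [SqlProcedure]
    public static void CheckCredit(string customerId, decimal amount)
    {
        // When the code runs inside SQL Server, "context connection=true"
        // reuses the caller's connection rather than opening a new one.
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT CreditLimit FROM dbo.Customer WHERE CustomerId = @id", conn);
            cmd.Parameters.AddWithValue("@id", customerId);
            object limit = cmd.ExecuteScalar();
            bool approved = limit != null && amount <= (decimal)limit;
            SqlContext.Pipe.Send(approved ? "APPROVED" : "DECLINED");
        }
    }
}
```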

Next is Service Broker. We talked about asynchronous processing, right? That’s super important. Synchronous processes don’t scale well and don’t really facilitate the world of the Internet, where you may want to hand off some work to be done in the background. Service Broker gives you tremendous flexibility as a developer. Some people like to call it a message queue, and the Service Broker seems very offended by that; it’s much more sophisticated, it’s a Service Broker. I don’t know exactly what the hell a Service Broker is formally, but it’s much more sophisticated than a queue.

Say that you’re in a connection to the database: you’re inserting data, updating, doing queries. In that same connection, without doing two-phase commit to somewhere else or learning some other programming language, you can pop a message into a queue, and Service Broker will make sure that message gets there once and only once, and if you’ve got multiple messages it will make sure they’re all in order. And those messages, when they’re inserted into that queue, can automatically trigger stored procedures or automatically ship off to another named server. It’s a very powerful and very flexible environment.

Let’s just use a simple example. Say that you were updating your inventory and at the same time you’re updating your inventory you want to put a message in the queue for the forecasting system, so the forecasting system can kick off asynchronously in the background. It’s very simple and easy to do with Service Broker. Customers do this and developers do this all the time and today they have to create their own tables and their own message queues; this will make it much, much easier to do that.
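A rough client-side sketch of that pattern, assuming the Service Broker queue, services, contract and message type have already been created; all object names are hypothetical, and ending the conversation is omitted for brevity.

```csharp
using System.Data.SqlClient;

public static class InventoryUpdater
{
    // The inventory update and the queued message commit (or roll back) together.
    const string Batch = @"
        UPDATE dbo.Inventory SET Quantity = Quantity - @qty WHERE ProductId = @productId;

        DECLARE @handle UNIQUEIDENTIFIER;
        BEGIN DIALOG CONVERSATION @handle
            FROM SERVICE [InventoryService]
            TO SERVICE 'ForecastService'
            ON CONTRACT [ForecastContract]
            WITH ENCRYPTION = OFF;
        SEND ON CONVERSATION @handle
            MESSAGE TYPE [InventoryChanged] (@payload);";

    public static void RecordSale(string connectionString, int productId, int qty)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                SqlCommand cmd = new SqlCommand(Batch, conn, tx);
                cmd.Parameters.AddWithValue("@productId", productId);
                cmd.Parameters.AddWithValue("@qty", qty);
                cmd.Parameters.AddWithValue("@payload",
                    "<InventoryChanged productId=\"" + productId + "\" qty=\"" + qty + "\" />");
                cmd.ExecuteNonQuery();
                tx.Commit();
            }
        }
    }
}
```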

Native XML data types. Today, we take the XML in, we shred it, we put it into relational tables and then we put it back together and pump it out. It’s not super efficient, it works, but with SQL Server 2005 we’ll have a native XML data type, we’ll have a much, much higher performing XML parser and you’ll be able to store data as XML in the database, query it, query those XML columns, if you will, inside a given row; it’s a very powerful and very flexible environment.
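For instance, assuming a hypothetical Orders table with an xml-typed OrderXml column, the xml data type methods let you query inside the document without shredding it first:

```csharp
using System;
using System.Data.SqlClient;

public static class XmlOrderQueries
{
    public static void PrintOrdersContaining(string connectionString, string sku)
    {
        // value() extracts a scalar; exist() filters on an XQuery predicate.
        const string Sql = @"
            SELECT OrderId,
                   OrderXml.value('(/Order/Customer/@id)[1]', 'int')    AS CustomerId,
                   OrderXml.value('(/Order/Total)[1]', 'decimal(18,2)') AS Total
            FROM   dbo.Orders
            WHERE  OrderXml.exist('/Order/Lines/Line[@sku = sql:variable(""@sku"")]') = 1;";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(Sql, conn))
        {
            cmd.Parameters.AddWithValue("@sku", sku);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine("{0}: customer {1}, total {2}",
                        reader[0], reader[1], reader[2]);
        }
    }
}
```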

So we talked about taking that flow control logic, pulling it out of the programs, and a great way to be able to develop that flow control logic is BizTalk. There’s a rules engine inside BizTalk that will allow you to simply and easily write the context of the flow and call this function or service, call that service and be able to manage that very seamlessly, again very well integrated in with Visual Studio from the developer experience and the BizTalk team has worked super hard on this release to make it much, much easier to deploy.

Today, one of the downsides of the current product, BizTalk Server 2004, is that it’s difficult to deploy applications. We’ve heard that feedback. I think the team literally told me it took 70 clicks to deploy an app, and they’ve got that down to three; the slide may say one, but that’s a bit of a stretch, I think they said three. Either way, it’s a heck of a lot simpler than it has been in the past. And again it’s deeply integrated with SQL Server, utilizing SQL Server Notification Services, so you’re going to get a lot of stuff for free in terms of BAM, workflow management and other things we’re going to talk about later if you use BizTalk, or a product like it, to host that flow control logic; we’d certainly prefer you use BizTalk, and we’re super excited about that.

RFID Infrastructure from Microsoft

And now we’re announcing RFID infrastructure. Now, I noticed as I walked around the floor that most of you were ripping those RFID tags off your badges because they were a little annoying. One thing I do promise: we wanted to do an RFID demo, and we’re going to show you a set of demos, but we weren’t tracking any people; it was all anonymous, just for the show. RFID is a super important technology going forward. Talk about being connected: we’re not only going to have to connect people, we’re going to have to connect objects, and RFID will be a breakthrough in terms of managing inventory, shipping and distribution. There’s a lot of productivity that can be gained through this faster scanning technology that can read at a distance.

We worked hard on this, and I certainly want to thank Symbol and Printronix for helping us at this show: Symbol helped us with the readers that you see sitting around the floor, and Printronix with the printers. Those badge tags actually have an antenna and a number in them, and when you walked by a reader, the reader could register that; you’ll see that in the demo.

Our RFID solution will allow you a lot of flexibility. We’re going to work hard to make sure that RFID is available at very low cost and very plentiful from a Windows perspective.

Today, one of the big challenges with RFID is that it’s a new technology and a lot of the standards haven’t been developed; the readers have proprietary protocols, though standards are emerging. We’ll have simple plug-and-play support where, whatever reader you’re using, you can plug in that device and the data is pumped into the server. We’ll have a simple object model where you’ll be able to program against that data, put it in a database or kick off another process, such as your SAP system or wherever you’d like to put that data.

So this is a super important play for us. The technology that you’re seeing today on the floor and the software in the demos is the technology that we’re developing. We’re announcing it today; we’re not officially announcing the timeframe, and the team will be mad at me, but I think you should expect it in the 2006 timeframe. We’re very far along with it, we’ve got a lot of testing to do, and there’s a lot to do here in terms of hardware, but we’re going to work hard to make sure that RFID is available to all of our customers.

Dev Ready: Team Development

Team development. This is kind of a funny thing. You think about software teams; they’re always teams. You know, Microsoft, big teams; the way you build software, big teams. Visual Studio has always been about the developer, the individual developer, and we haven’t really done much to facilitate team development.

Visual Studio Team System is really focused on building a great development environment for your software development team, and it starts early in the cycle, all the way back with the architect doing the design and initial layout on an integrated design surface. It then passes that on to the developer, who can do further and more detailed design based on the initial drawings of the infrastructure and solution layout, and on into the test environment, where there are many new productivity tools for QA and testing: load tools where you can run up loads and stress the system, code profiling tools, and other quality assurance tools that scan the code to make sure you have high-quality code. It’s all integrated with a much more sophisticated and scalable source code control manager, and there’s much more work we can do in the future integrating it with Project Server and others. But you’ll see a lot of work here; this is by far the most comprehensive developer product we’ve ever shipped, we’re super excited about it and we will be putting a lot of effort into it. There’s already great partner support, with a number of partners already plugging in and extending it with solutions, and we expect that to certainly continue.

One of the best things you can do to keep your costs low is to not write too much code. The more you write, the more you have to debug and the more you have to maintain. One of the big advances in Visual Studio 2005 is that we’ve reduced the amount of code that you have to write, and we’ve also made the development experience much more symmetric between the browser and the smart client. It’s super powerful, right? You see it in Outlook today, where we have the smart client and you get the offline capability, and then we have OWA, where you get the browser capability no matter where you are but with a very similar look and feel.

Dev Ready: Web and Smart Client

Web development: how do we get these big code reductions? There’s no magic. We talked to a lot of developers, found out what they do, and created code snippets that we include as modules you can call; simple things like page layout, logins, permission handling and navigation that really reduce the amount of code you have to write.

We also put in an important performance feature called cache synch. Cache synch is a very cool thing. Normally, a Web or mid-tier developer who has data they’re accessing has to go all the way back to the database to get that data. Well, a lot of times they know that data doesn’t change very often, and with cache synch you can register that data with SQL Server and say, hey, whenever that data changes, notify me and I’ll refresh my cache; but until that happens, I’m not going to go back to the database every time I need this piece of information.

Accessing memory is about 5,000 times faster than jumping back and hitting the disk, so it’s a good thing: if you can leave that data pinned in the cache in the mid-tier, it’s a heck of a lot better for performance and scalability, and cache synch between Visual Studio 2005 and SQL Server 2005 is an excellent and easy way to do that.
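A minimal sketch of cache synch from the mid-tier using ADO.NET 2.0’s SqlDependency (query notifications); the table and columns are hypothetical, and the query follows the notification rules (two-part table name, explicit column list).

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public class PriceCache
{
    static DataTable cache;   // refreshed only when SQL Server says the data changed

    public static void Start(string connectionString)
    {
        SqlDependency.Start(connectionString);   // opens the notification channel
        Load(connectionString);
    }

    static void Load(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT ProductId, Name, ListPrice FROM dbo.Product", conn))
        {
            SqlDependency dependency = new SqlDependency(cmd);
            dependency.OnChange += delegate(object sender, SqlNotificationEventArgs e)
            {
                // Notifications are one-shot, so re-query and re-subscribe.
                Console.WriteLine("Data changed ({0}); reloading cache.", e.Info);
                Load(connectionString);
            };

            conn.Open();
            DataTable table = new DataTable();
            table.Load(cmd.ExecuteReader());
            cache = table;
        }
    }
}
```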

64-bit, you’re going to hear a lot about that; believe me, with the success of the X64 standard, AMD and Intel, you’re going to see 64-bit machines everywhere. It’s going to be difficult to buy a non-64-bit machine in probably 24 months.

And smart client development: there’s nothing wrong with the smart client. You’re going to find out as you start to talk to those users that they like offline. What’s been wrong, and it’s been our fault, is that we haven’t made it easy to deploy those applications. You’re going to see an example of ClickOnce, where you can easily deploy applications.

And now to give you a bit of a demo, which is a heck of a lot easier to see than talk about all the technology changes we’ve made in Visual Studio, I’m going to ask Brian Keller to come out and do a quick demo of a lot of what we just talked about. Hey, Brian. How are you? (Applause.)

BRIAN KELLER: All right, how you doing, good morning, Paul.

PAUL FLESSNER: Good. So what are you going to do for us today? Hey, wait, before we get started, somebody told me your mom is here today.

BRIAN KELLER: That’s right. That’s one e-mail score that hopefully we have in the bag or at least we’d better.

PAUL FLESSNER: You’d better do a good job.

BRIAN KELLER: I know. Ms. Bee actually told me that you have a bad habit of shutting off servers, so this is my demo machine so just be careful.

PAUL FLESSNER: Okay, I’ll stay away from that, all right.

BRIAN KELLER: Well, I’m really excited about the enhancements we have coming in Visual Studio 2005. Yesterday, VJ showed you how we’re extending Visual Studio Tools for Office to help you build Outlook integration.

Today, I’m going to show you some more of the developer enhancements in Visual Studio 2005 that are designed to help save you time and put key features you need at your fingertips.

Let me show you this application I have. You just got done talking about the RFID data that we have, how we have some readers stationed and if you chose to keep the RFID badge that you got at registration, just walking around the Tech•Ed convention center —

PAUL FLESSNER: The anonymous RFID tag.

BRIAN KELLER: This is anonymous, very anonymous. You’re helping us gather aggregate and anonymous data that can help us make Tech•Ed and other events like this better in the future.

So let me show you what we have going on here.

You can see this is a map of our convention center, we have the keynote hall, we have the hands-on lab area and we have the dining room. I built this using Windows forms in Visual Studio 2005.

Now, one of the top requests we get from developers is to help them build applications that have the same look and feel of other applications like Microsoft Office that their users are already familiar with. So I used some controls to make this menu strip look like Microsoft Money. We also have some toolbars that expand and collapse, pretty cool. So all these controls and others ship with Visual Studio 2005 and they can easily be extended and customized for use within your applications.

You can see I’m actually getting some tags coming in right here. This is all powered by the RFID technology you talked about and we’re pulling that off SQL Server 2005. Let’s show one example of how this application can help us make Tech•Ed even better.

This is the hands-on lab area, and we’re actually monitoring (this is the data from yesterday) the number of attendees that we get. Up here you’ll see this midpoint. This is a ratio we use to help us track the number of proctors to the number of attendees that we have; we want to maintain a good balance there.

As I start moving through the data from yesterday, you’ll actually see that towards the middle of the day we started to have a spike, probably once the keynote hall emptied out and people realized that we had some really great labs back there.

So in the future we might wire up alerts to tell staffers in other parts of the convention hall that, hey, we need help in the hands-on lab area or we need additional staff in the pavilion or the cabana area.

PAUL FLESSNER: Get out of that break room and get to work!

BRIAN KELLER: Absolutely.

So this is a great application but we can make it even better for you, Paul. So let’s open up Visual Studio 2005 and the first thing we’re going to do is drag a new data grid view onto this form. That’s a new component we ship with Visual Studio 2005 and you’ll notice that when I’m working with this I get a Smart Tag that pops up. Smart Tags are a new feature in Visual Studio 2005 that gives me quick access to common tasks I might want to perform on the control.

For this particular control, I want it to occupy the entire panel it’s in, so we’ll say dock in parent container. We can also select the data source that we want, two clicks and we’re done. Pretty cool, huh?

PAUL FLESSNER: Yeah.

BRIAN KELLER: Let’s go over to the event handler code. The only thing left to do is wire up our List Assets button. And normally at this point of the demo I’d have to write several lines of code and you guys would be waiting for like five minutes here. Well, I’m actually going to leverage the code snippets function that we have in Visual Studio 2005, so right-click, insert a snippet. You’ll notice we actually ship hundreds of code snippets with Visual Studio 2005. That’s code that you don’t have to write, you don’t have to test; you just select it from the list and you have immediate access to things like calculating a mortgage payment or accessing a remote printer. And this is fully extensible as well; I actually have my own code snippet library in here, and it’s very easy to create your own code snippets and share them with your team to really enforce best practices across your organization.

The only thing left to do is fill out my tag ID, that’s the column that we want to return into my data grid view.

Now, Paul, I’m a little bit embarrassed to tell you this but there’s a critical piece of hardware that we need for your demo that’s not here yet.

PAUL FLESSNER: Yeah, that’s what you want to hear right now.

BRIAN KELLER: I know, I know, tell me about it, the nerves are shaking now. But luckily for us, the guys backstage tell me that it’s on its way, it has an RFID tag, we’ve got RFID readers back at the loading dock so we’ll know when it gets here or at least SQL Server will know when it gets here.

But I really want to take nothing for granted, I want to know when it gets here up on stage with us.

PAUL FLESSNER: We’re counting on you. You know, your mom is out there.

BRIAN KELLER: I know, I know; don’t want to let her down.

So let’s go to the toolbar. I’ve actually already built a component to help us with a part of this in case this happened. So let’s go to the toolbar, we have an add alert component, we’ll drop that onto the form. What this does is it just takes a tag ID that we enter, puts it into a table so that SQL Server knows we’re looking for it.

Every time any RFID tag is read from any portion of the convention center, this T-SQL trigger gets fired. We compare it against the table of tags we’re looking for and you talked a little bit about the Service Broker, so Service Broker gives us a great asynchronous programming model right within SQL Server 2005 and if we find the tag we’re looking for we just send a message to Service Broker and it takes care of the rest for us.

PAUL FLESSNER: Excellent.

BRIAN KELLER: Once Service Broker processes the message, it’s going to fire a stored procedure that comes back and tells our application, hey, our hardware is here.

So let’s take a look at that stored procedure because it’s actually pretty cool. You’ll notice that this is written completely in Visual Basic. We’re hosting the Common Language Runtime right within SQL Server 2005 so now I can use the full power of the .NET framework along with the language I already happen to know, which in this case is Visual Basic, for writing my application; pretty cool, huh?

PAUL FLESSNER: Yeah, that’s cool.

BRIAN KELLER: Now, I didn’t finish writing this earlier and you’ll notice that I actually have an error in my code, as indicated by this blue squiggly. But with Visual Basic 2005 we’re bringing the auto correct functionality right into it, sort of like in Microsoft Word when you have a spelling error.

PAUL FLESSNER: That never happens to me.

BRIAN KELLER: Never happens to me either, but, you know, for demo purposes.

So we’ll select the error here and it not only tells me I have an error but it says here’s the recommended fix. Looks like I forgot to fully qualify my class, we’ll select that from the list and we should be ready to rock with this stored procedure.

PAUL FLESSNER: OK.

BRIAN KELLER: So let’s go ahead and deploy this application. We’ll use the new ClickOnce deployment wizard that ships with Visual Studio 2005. ClickOnce allows me to take even a Windows Forms rich-client application, push it up to a Web server, and all my users have to do is run a link that I might e-mail them or put on a Web page. ClickOnce not only installs the application but also puts any prerequisites they might need on their machine, such as the .NET Framework, so you no longer have to worry about that.

We can also choose to install this application offline as well so that we get a Start menu entry as well as an add-remove programs entry. We’ll do that, click Finish and then rerun our application.

Let’s find it on the Start menu. This is the same application I ran at the beginning of the demo. But this time it will go out and hopefully it will find that there’s a new update available.

Sure enough, there’s our update. We’ll click Okay to grab that new update and in a second you’ll see the new version number; ClickOnce automatically increments that as part of the publishing process.

PAUL FLESSNER: So this is the end user just saying I’ve got this alert, I’m going to load this new app, I hit it and it’s done.

BRIAN KELLER: Just fire and forget, absolutely.

And here’s our updated application. Now all that’s left to do is actually type in the tag ID we’re waiting for. My friend Jenny actually gave me an easy tag ID for this hardware because we knew we’d be waiting for it, so that’s 5558675309. (Laughter.) And we’re waiting for that in the keynote hall and we want a visual alert so we’ll add that. We have a visual alert in our application but I want to make absolutely certain that we see it. So I called my friends at [inaudible] and they send us these USB-powered, blue flashing lights so we’ll know when it gets here.

So as you’ve seen, Visual Studio 2005, combined with SQL Server 2005, gives me a really first class development platform for not only building those applications but deploying and maintaining them as well. What do you think, Tech•Ed? (Applause.)

All of you have Visual Studio 2005 Beta 2 in your attendee bags and you can go home, sign the go-live license and actually start deploying applications today.

PAUL FLESSNER: Outstanding. Well, that’s pretty exciting, they seem to like it.

BRIAN KELLER: Oh, looks like our hardware has shown up here. All right, here it comes, here’s our critical hardware. You might want to stand back a little, Paul.

PAUL FLESSNER: Holy cow!

BRIAN KELLER: Paul, I want you to meet The Finalizer. This is a battlebot that runs on the .NET Compact Framework. You’ll notice it has a razor sharp axe here, has a spinning saw blade and it looks like it brought up something. Do you mind reaching down and seeing what it brought?

PAUL FLESSNER: You know, I didn’t write the code on that and —

BRIAN KELLER: You don’t trust it.

PAUL FLESSNER: — it’s looking kind of wicked there. Why don’t you reach down and grab it.

BRIAN KELLER: All right, well, you’re the vice president so I guess I’ll grab it here.

PAUL FLESSNER: I don’t think you’d want that baby running around your datacenter, would you?

BRIAN KELLER: Probably not but I hear you’re dangerous as well, so I’m OK with the battlebot.

So what do we have there, Paul?

PAUL FLESSNER: Well, this is from Creative. It’s a Portable Media Center, very nice, brand new piece of hardware. What are we going to do with it?

BRIAN KELLER: Well, the battlebot tells me that it wants to give away five of these at random to people that are in this keynote hall.

PAUL FLESSNER: Well, that’s exciting.

BRIAN KELLER: So let’s use our data grid view we added earlier and make that happen.

So let’s go to the keynote location and when I —

PAUL FLESSNER: So now is the payoff if you didn’t tear that tag off.

BRIAN KELLER: Absolutely. If you kept your tag, then you’re entered in the raffle.

So when I click this button, we’ll actually choose five random RFID tags.

PAUL FLESSNER: The first RFID raffle ever.

BRIAN KELLER: I think so.

PAUL FLESSNER: Probably go down in history.

BRIAN KELLER: I hope so, yeah, we’ll be in the record books.

So, battlebot, are you ready for this? I think he’s ready; he’s not doing anything.

PAUL FLESSNER: Maybe that’s just as well actually.

BRIAN KELLER: We’ll go ahead and pull our stored procedure here and there are our five winning tags from the world’s first RFID-based raffle. So that’s probably hard to read at the back but we’ll get these on a slide for you at the end of the presentation.

PAUL FLESSNER: Excellent.

BRIAN KELLER: Paul, thanks for letting me show Visual Studio 2005.

PAUL FLESSNER: Oh, well, thank you very much.

BRIAN KELLER: I’m going to go backstage and crack open that code for the battlebot.

PAUL FLESSNER: You are, huh? Are you sure that’s a good idea?

BRIAN KELLER: We’ll see.

PAUL FLESSNER: Oh, that looks like a camera on there. Can we turn that on and get a view?

BRIAN KELLER: Yeah, we sure can. We can see everything that the battlebot is seeing.

PAUL FLESSNER: Let’s turn it on.

BRIAN KELLER: So, yeah, looks like I should be able to optimize this. I’ll go play with this and let you know.

PAUL FLESSNER: Yeah, that sounds like a great idea.

BRIAN KELLER: See you later. Thanks, Paul. (Applause.)

Connected Systems Developer Competition

PAUL FLESSNER: All right, we’re going to have a little bit more fun, too. We decided to do a little developer competition. Anybody in the information technology field can enter; it can be an individual — I can hear that thing running around back there, it scares me — or it can be a company. To enter the competition you have to use Visual Studio 2005, SQL Server 2005 and BizTalk Server 2006. Fill out your entry forms and get them in by August 30th, that’s key, and then we’ll be judging the winners at our launch. All the winners will be invited to the launch. The grand prize is $50,000, worth your trip and effort I hope, and there will be lots of other prizes.

The judges will be independent, and it’s sponsored by Microsoft and MSDN Magazine. You can find out all about the competition on Microsoft.com, so feel free to give it a read, and hopefully we’ll get lots of people entering and somebody can win that $50,000.

So I know we’ve talked a lot about what’s been going on but sometimes it makes a lot more sense when you see a customer utilizing the technology, so we’ve put together a quick video to show you just that.

(Video segment.)

VOICEOVER GUY: And now a special technology demo with our senior fake technology correspondent, Samantha Bee.

SAMANTHA BEE: Every industry has its own special lingo, its own linguistic shorthand for taking complex topics and communicating them simply. But when it comes to technology, oh no, you can’t do that. Instead, the powers that be prefer to take simple ideas and communicate them in as convoluted a way as possible. For years, this was considered at Microsoft to be a core competency.

Now, in just a few minutes, Paul Flessner will be telling you about the great mission critical capabilities of the upcoming release of SQL Server 2005. Now, being a long term Microsoft employee, he’ll be compelled to describe the three key benefits with copious amounts of adjectives and tech-speak. Frankly, he can’t help himself.

These three key benefits are availability and reliability, security and performance and scalability. Never mind that that’s actually five things. Microsoft has grouped them together just so they sound like three.

However, Paul has authorized me to preempt him on this with a demo. It’s a demo of the SQL Server Technical Benefits Translator or the TBT. Now, here’s how it works. We’ll take three overly complex benefit definitions, run them through the TBT and then find out what they really are in plain English. Let’s start with availability and reliability. Voiceover guy, help me out here.

VOICEOVER GUY: Yes, Samantha. SQL Server 2005 delivers on the need for mission critical systems to operate at extremely high levels of reliability, thus precluding times of unavailability while rarely exhibiting the need for frequent updating or patching. Though Windows updates or patches are required, they are executed in a fashion not impacting availability because its new functionality enables reboots to be unnecessary during patches or the workloads of key servers to be seamlessly mirrored by other servers definitely not requiring rebooting.

SAMANTHA BEE: Yes. We take that accurate but amazingly long benefit and we plug it into the TBT. Then we see how it distills it to its essence. And the translation is:

VOICEOVER GUY: Downtime is for suckers. (Laughter.)

SAMANTHA BEE: Now there’s a statement even I can understand.

Let’s try security next. When it comes to mission critical capabilities, it is essential that servers have not just external protection but also the wherewithal to prevent the unauthorized access and dissemination of sensitive corporate data, which is thus achieved by supporting security functions and data encryption.

Okay, we plug that little nugget into the TBT and we get — and the translation is:

VOICEOVER GUY: Hey, hackers, bite me! (Laughter.)

SAMANTHA BEE: And finally, we address performance and scalability. As businesses grow, platforms should expand in parallel, being able to increase their capabilities by facilitating exponential increases in data, users and reporting without having as a prerequisite substantial IT involvement in rebuilding efforts but instead simply supplementing with additional hardware to achieve the desired levels of scalability.

Oh gosh, okay, so we put that into the TBT and we get — and the translation is:

VOICEOVER GUY: SQL Server 2005 is like spandex pants. (Laughter.)

SAMANTHA BEE: Yes. Now that totally works. No matter how big you get, they still fit. (Laughter.) Wonderful.

Now, with that, we’re going to conclude this demo and return you to Paul who will undoubtedly retranslate everything I just so thoughtfully translated.

PAUL FLESSNER: I’m not sure I can top that but we’ll give it a quick try.

Mission-Ready: Ultra-High Availability

High availability. We talked about this before, right? Well, services now are used by lots of different applications, so we want to make darn sure that the availability is there.

Now, Microsoft is a database vendor, and SQL Server has been gaining a lot of share; we’re going to talk about that in a few minutes. But that single most mission-critical app is a space that’s sort of been reserved over the years for Oracle and IBM. They’ve done a good job of saying, ah, Microsoft is not ready for primetime; they’ve got lots of tier 2 and tier 3 apps, but for that single most mission-critical app, they’re just not there.

Well, we did have some availability and scalability things we wanted to work on, but with 2005 we think we’re there. We absolutely want you to feel comfortable and well positioned to put SQL Server forward for that most mission-critical app the next time it comes up for a major modification or rewrite.

We’ve had failover clustering for some time. It’s good technology, but there are a couple of downsides. One, the whole server fails over. Two, it’s a shared-disk model, so if the disk is the problem, and it’s certainly one of the most frequent problems, that’s definitely an issue. And it takes a bit of time; sometimes that failover time isn’t what you need for that most mission-critical application.

So we’ve introduced a new feature, database mirroring. This is a very exciting feature that’s got a couple of different options. Safety on is the most secure and highest availability: you can have the backup server in a physically different geography, and when the data is committed on the primary server it’s committed on the backup server. Sure, there’s some performance degradation with that, but it’s minor, and customers who need that level of availability adjust the hardware appropriately to make up for it.

Safety off is sort of asynchronous, a more sophisticated, automated log shipping, where at a commit point the log is shipped over in a background task and the backup server picks it up and takes off with it.

So that makes the failover extremely predictable. Clustering technology is still important and we continue to support it, it’s a great feature, but those two together give you the most available solution on the Windows platform.
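As a sketch of how that looks from both sides: the partnership and safety level are set with ALTER DATABASE, and ADO.NET 2.0 clients name the mirror through the Failover Partner connection-string keyword. Server, endpoint and database names here are hypothetical.

```csharp
using System.Data.SqlClient;

public static class MirroredConnection
{
    // Run once on the principal, after the endpoints and the mirror side are set up:
    //   ALTER DATABASE Orders SET PARTNER = 'TCP://mirror.corp.example.com:5022';
    //   ALTER DATABASE Orders SET PARTNER SAFETY FULL;       -- "safety on" (synchronous)
    //   -- or: ALTER DATABASE Orders SET PARTNER SAFETY OFF; -- asynchronous

    public static SqlConnection Open()
    {
        // If PRIMARY is unavailable, the provider retries against MIRROR.
        SqlConnection conn = new SqlConnection(
            "Data Source=PRIMARY;Failover Partner=MIRROR;" +
            "Initial Catalog=Orders;Integrated Security=True");
        conn.Open();
        return conn;
    }
}
```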

Always online: You can’t take that server down just because you need to create an index, reorg or defrag an index. It’s super important that the server doesn’t have to come down, and with this release we’ve made those formerly offline operations online.
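For example, an index rebuild that keeps the table readable and writable while it runs (ONLINE = ON is an Enterprise Edition option); the index and table names are hypothetical.

```csharp
using System.Data.SqlClient;

public static class OnlineMaintenance
{
    public static void RebuildOrdersIndex(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (ONLINE = ON);",
            conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();   // the table stays available during the rebuild
        }
    }
}
```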

Fast recovery. It used to be that the database was unavailable during the undo phase, and that could be a long time, essentially unbounded based on transaction size. That’s gone: the database is available for activity and updates as soon as the redo is complete, which is bounded.

Fine-grained online repairs. This is super critical. Say you’ve got a couple-terabyte database and you have a page corruption because of a fault on the hard disk or whatever; you can go in and fix just that page, and only that page is offline during the operation.

Database snapshot is a super-important feature that allows you to take, quickly and easily and with very low overhead, sort of a logical point-in-time snapshot of the database. Only pages that change are copied over, the first time they change, so you’re not copying and moving the whole database.
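A sketch of creating one, assuming a source database named Orders whose data file has the logical name Orders_Data; the path is hypothetical. Pages are copied into the sparse snapshot file only the first time they change in the source.

```csharp
using System.Data.SqlClient;

public static class SnapshotMaintenance
{
    const string Sql = @"
        CREATE DATABASE Orders_Snapshot_0800 ON
            (NAME = Orders_Data, FILENAME = 'D:\Snapshots\Orders_0800.ss')
        AS SNAPSHOT OF Orders;";

    public static void CreateMorningSnapshot(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(Sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();   // the snapshot is a read-only, point-in-time view
        }
    }
}
```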

And this is a feature that Oracle actually came out with a long time ago: snapshot isolation. It’s one that I think is a bit cagey, but a lot of customers are used to it. This is the old “writers don’t block readers,” and it allows you a lot of flexibility, with some trepidation; you have to make sure you’re doing your own locking. But it is a feature a lot of customers had asked for, so we went ahead and put it in the product for 2005.
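A minimal sketch of using it from ADO.NET, after enabling it once per database with ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON; the Orders table is hypothetical.

```csharp
using System.Data;
using System.Data.SqlClient;

public static class SnapshotIsolationExample
{
    public static decimal ReadTotalWithoutBlockingWriters(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // The reader sees a consistent, versioned view as of the start of
            // the transaction and does not block, or get blocked by, writers.
            using (SqlTransaction tx = conn.BeginTransaction(IsolationLevel.Snapshot))
            using (SqlCommand cmd = new SqlCommand(
                "SELECT ISNULL(SUM(Total), 0) FROM dbo.Orders;", conn, tx))
            {
                decimal total = (decimal)cmd.ExecuteScalar();
                tx.Commit();
                return total;
            }
        }
    }
}
```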

Mission-Ready: Stronger Security

Security. Now, this was an unbelievable wakeup call for us. Honestly, when we were hit with Slammer back in January 2003, it was something we just had no idea could be so severe or have so much impact on your business. It was honestly traumatic for the team, the team felt terrible about it, and I know it caused great pain to many, many, many of our customers. We have certainly apologized for that and I apologize again for it today.

But we have taken extreme measures to make sure that we improve in this area. Now, I don’t know how to be perfect at it, and we’re going to stay vigilant, but we’ve made a tremendous amount of progress. We’ve changed our development processes, we’ve invented new tools that we’ll make available to you, and we’ve fundamentally changed the training of our people.

And as you can see, we’ve done I think a reasonable job of eliminating security problems in the existing code. We’ve only had one critical fix since 2002. These are all critical fixes as reported by that little URL down there, an independent survey.

And then, well, you’ve seen that show, MythBusters, where they take these common, everyday claims and they go out and they try them and then they say myth busted or validated. One of my competitors claims to be unbreakable. You’ve probably seen those ads too. So we went out and checked just how many security vulnerabilities they have reported over the same time period. And, yes, this is Oracle, and ouch; they’re still working at it a little bit. Thirty of those in 2004 are in Oracle 10g, so they have a bit more work to do.

Now, I do this for a couple of reasons. One, to poke a little fun at the competition; that’s okay, they’ve sure beaten me up over the years. But two, many of you tell me, look, I need to sell SQL Server inside my organization; there’s this entrenched DBA group that’s all about Oracle, and Oracle is perfect. Well, Oracle has some important improvements they can make in their processes as well, and I’m sure they’re working hard on it, but we wanted to make sure that you understood how seriously we took this and how much investment we’ve made in this space, and we will continue to make a strong investment going forward.

Security all up. It’s not enough to just change your process; you’ve also got to do things in the product. One of the most important things you can do to stop the proliferation of any kind of virus or malicious activity that gets into the software ecosystem is to make sure that there’s a good amount of heterogeneity across installations. It used to be that we’d bring up all the services and let them go. Now you turn on only the services that you want, and only those will be running; it’s much more explicit in terms of feature configuration.

Native data encryption, something a lot of you have been asking for all the way down to the column level so you get a lot of flexibility.

Certificate management based on how you want to do your security around the encryption, whether you want to do ACL based or password based.
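As an illustration of the column-level encryption and key hierarchy, here is a sketch that protects a hypothetical card-number column with a symmetric key and certificate; it assumes a database master key already exists and that CardNumberEnc is a varbinary column.

```csharp
using System.Data.SqlClient;

public static class ColumnEncryptionSetup
{
    // One-time key setup: a certificate (protected by the database master key)
    // and a symmetric key protected by that certificate.
    const string CreateKeys = @"
        CREATE CERTIFICATE CardCert WITH SUBJECT = 'Card number protection';
        CREATE SYMMETRIC KEY CardKey
            WITH ALGORITHM = AES_256
            ENCRYPTION BY CERTIFICATE CardCert;";

    // Encrypt the existing plain-text values into the varbinary column.
    const string EncryptColumn = @"
        OPEN SYMMETRIC KEY CardKey DECRYPTION BY CERTIFICATE CardCert;
        UPDATE dbo.Customer
           SET CardNumberEnc = EncryptByKey(Key_GUID('CardKey'), CardNumber);
        CLOSE SYMMETRIC KEY CardKey;";

    public static void Run(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            new SqlCommand(CreateKeys, conn).ExecuteNonQuery();
            new SqlCommand(EncryptColumn, conn).ExecuteNonQuery();
        }
    }
}
```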

Auditing and authorization. A lot more events fired so you can help police your database. I know a lot of you have Sarbanes-Oxley compliance that you’re working with and a lot better support and there’s more we can do in that space for sure.

And richer policy enforcement, so that you can enforce those policy changes.

Now, one of the things that we’ve done that we’re super proud of, and we’ve gotten a lot of positive feedback on, is create a Best Practices Analyzer. We take the information from our support organization, from the newsgroups and from our experience touching customers directly, things in the product that have tripped you up; maybe it’s just configuration, maybe it’s changes that you’ve made to your code that aren’t what we think of as a best practice. We’ve made this tool available for free download, and it allows you to scan your server configuration and your SQL source code and make recommendations — it doesn’t change anything — on things that you might want to change to put yourself in a better position.

This has been so popular that Exchange has already shipped one, and all of the Windows Server System products will ship one as part of the 2006 Common Engineering Criteria, which we’ll talk more about in a minute. Again, we’re really trying to work on total cost of ownership and put you in control of your environment so that you can keep it in a best-practice state.

2005: I said it’s almost ready, and it is. It’s a release the team is extremely proud of, so proud that they haven’t wanted to let go of it, but we’re working on that. We’re absolutely down to the final level of quality assurance. CTP 15, the June CTP, which we’ll talk about in a minute, will be available, and that’s the release where we really have all the features in. Over the summer we’ll be taking any feedback and doing final quality assurance on the product.

Mission-Ready: Running Microsoft Today

But as you can see, no product goes live for our external customers until it runs all of Microsoft and all of that happens before we release.

So that conventional wisdom that says, hey, you wait for SP1 for it to be ready: SP1, I assure you, is baked into this product. SQL Server 2005 has been running our company for almost a year. Our SAP system went into production last August. Today, all ten of our top ten applications run on 2005, and five of those are over a terabyte in size.

Seven customers — and honestly more come up every day — have gone live with the product, and we will have a worldwide set of customers live on the product when we launch.

So we’re extremely excited about the progress we’ve been making.

New Benchmarks

So how does the scalability look? Well, one of the things that we always hold as a core tenet in SQL Server is that when we ship a new release, we’ve got to provide great value: we’ve got to do better in terms of performance, because performance is a key feature for every release of the database product, and we’ve got to do it at less cost. It’s not easy to do; think about your own products. A better product at less cost is a tricky thing to do. Well, we don’t do it alone. Some of it is the good work we do in the SQL Server team, but a lot of it is our partners, our hardware partners. In this case we’ve got two great benchmarks from HP working closely with Intel: the TPC-C benchmark is on an IA-64 machine, and the other is on a 16-processor IA machine as well.

Now, this is a comparison against SQL Server 2000, and it’s pretty impressive. On the TPC-C, a 37 percent improvement in performance, pushing us up over a million transactions per minute. Now, let’s think about that. That’s a lot of transactions. If you extrapolate it out, that’s something like 1.5 billion transactions in a day. We looked around for the biggest workloads we could find: Visa does 100 million credit card validations in a day, and believe me, a credit card validation is a lot simpler transaction than the TPC-C definition. AT&T does about 200 million phone calls a day.

So these benchmarks are extremely large in terms of the capacity that they’re processing and at incredible price performance.

H, that’s a different benchmark, so C is that all up TP benchmark, H is about queries and data warehousing and all that. It’s an area frankly that we hadn’t performed well in our SQL Server 2000, not as well as we’d like, so we made dramatic difference in this workload. This particular benchmark, 162 percent improvement, 54 percent less cost.

These are apples to apples comparisons. That’s the same hardware on SQL Server 2000 as SQL Server 2005.

But you’re probably asking how are we doing against the competitors. So here’s our old friend Oracle again. On the C we’re pretty close. It’s the same hardware, best one we’ve got in terms of comparables, and we’re 7 percent better but 37 percent lower on the price performance. And on the H a little better improvement and we’re extremely proud of that.

Now, these benchmarks do get a bit insane. Yes, there are larger benchmarks out there, but on the hardware we're able to run on we're doing extremely well, and we're going to be the winner there. At some point, though, these benchmarks get nuts: 3 million transactions and so on, and it becomes a question of how much money you can collect to run the benchmark.

So we appreciate all the support from our hardware vendors and we continue to work on these because they’re super important for us to tune the product and we’ve got some great results, so we’re just extremely pleased to be able to put that forward.

Now, Francois Ajenstat is going to come out and talk about and demo some of the performance and scalability improvements we’ve made and I guess we’ll just jump in and have him come out right now. Hi, Francois. (Applause.)

FRANCOIS AJENSTAT: Hey. Thanks, Paul.

PAUL FLESSNER: How are you?

FRANCOIS AJENSTAT: Good morning. Those are some great benchmarks you just showed.

PAUL FLESSNER: Thanks, we’re proud of them.

FRANCOIS AJENSTAT: In the past, organizations perceived SQL Server as hitting a ceiling when pushed on scale and availability requirements, but with SQL Server 2005 we've invested heavily in performance, scalability and high availability to ensure that we can support the most mission-critical applications in the world.

In this demo, we’re going to compare the performance of SQL Server 2000 on the inner screen with SQL Server 2005 on the outer screen. The application that we’ve designed will monitor a number of different Windows performance counters. So the bars in the middle will go up to represent the CPU utilization and will turn red when we’re maxing out. And you can see the memory usage, the page files and then the different types of relational and analytical queries that we’ll be running.

Now, it’s important to note that these demos or these servers are running on exactly the same hardware. It’s these HP ProLiant DL585 servers right here. The CPU, memory, storage and data are exactly the same. The only difference is the screens on the inside; those servers are running SQL Server 2000 while the other ones are running 2005.

So let’s go ahead and start our load.

We’re going to start our load on SQL Server 2000 and you’ll see that we have a number of queries being executed, we’re sending some relational queries and some insert statements and in a few seconds you’ll see some MDX or analytical queries being executed. And although SQL Server has some great performance, in this case we’re sending very, very complex loads to our server to really stress the limits of SQL Server 2000.

So let’s go ahead and look at 2005 on the outer screens and see how that same load performs.

So we’re going to start 1x and our load starts and you see our SQL queries, our insert statements and our MDX queries being generated. And that same load is performing dramatically better on 2005 than on SQL Server 2000.

PAUL FLESSNER: Just a few peaks but it levels out a little bit better and stays down.

FRANCOIS AJENSTAT: Yeah, and we’re able to handle a lot more queries.

So let’s go ahead and see, because we have a lot of headroom there, let’s go ahead and see what happens when we double that load. So let’s go and make that 2x and we’re going to have twice as many clients than we did on 2000. And now once again we’re doing pretty well but now our CPU is hitting probably 70 to 90 percent but we’re executing a lot more queries per second on 2005 compared to 2000.

But because I don’t like to see a lot of those red bars, we can go one step further. So what we’re going to do is actually now move over to a 64-bit version of SQL Server 2000 and basically we’re going to have the exact same configuration but now we’re going to be running SQL Server 2005 on Windows Server 2003 X64 edition and let’s see what happens.

So we’re going to do a double load like we have on the 32-bit, that same exact load is running and you can see that we’re actually not stressing the CPU too much and that’s due to the virtually unlimited memory space available on 64-bit so we’re able to cache a lot more of those queries in memory, which is a lot faster than going to disk.

So we started off with SQL Server 2000, showed 2005, doubled that load. Now let’s do the same thing. We looked at 32-bit, let’s double the load on 64-bit and see what happens.

All right, now we’re going to go to four times the load and our CPUs are starting to move up and we see our different types of queries executing and look how dramatic the number of queries per second is compared to 2000.

Now, with all of these clients executing against my servers, these are becoming very mission-critical servers for my organization. In this case I've turned on database mirroring on my databases, so if a failure occurs we can keep our applications and servers up and running and keep the business going, whatever happens. But right now everything is going extremely well.

PAUL FLESSNER: You’re feeling pretty confident about that failover thing, huh?

FRANCOIS AJENSTAT: Absolutely. Database mirroring is configured, something happens, we’re going to failover gracefully.
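
For reference, the mirroring partnership Francois is relying on is set up with a few T-SQL statements. This is a minimal sketch with hypothetical server and database names; it assumes the database has already been restored on the mirror WITH NORECOVERY and that database mirroring endpoints exist on each instance:

    -- On the mirror instance: point back at the principal (all names hypothetical).
    ALTER DATABASE DemoSales
        SET PARTNER = 'TCP://principal.contoso.com:5022';

    -- On the principal instance: point at the mirror; this starts the mirroring session.
    ALTER DATABASE DemoSales
        SET PARTNER = 'TCP://mirror.contoso.com:5022';

    -- Optional: add a witness so failover happens automatically if the principal is lost.
    ALTER DATABASE DemoSales
        SET WITNESS = 'TCP://witness.contoso.com:5022';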

PAUL FLESSNER: Excellent. Well, let’s just test that a bit. Let me bring that bad boy robot out here one more time.

FRANCOIS AJENSTAT: Oh no.

PAUL FLESSNER: Oh yeah. So let’s check this thing out, huh?

FRANCOIS AJENSTAT: That’s intimidating. It turns out that I put the network switch that’s connected to my 32-bit server right here on stage for some reason.

PAUL FLESSNER: Doing a little unprotected computing, are you?

FRANCOIS AJENSTAT: Yeah, you know, you never know when something is going to go wrong. Maybe it's a global organization and that network switch is somewhere else in the world that hasn't been Finalized yet.

PAUL FLESSNER: This thing looks a lot more dangerous in the datacenter than I ever was.

OK, all you folks in the front row that got the safety glasses, now would be a good time to don those.

Finalizer, do what you must!

FRANCOIS AJENSTAT: All right!

PAUL FLESSNER: All right.

FRANCOIS AJENSTAT: Thank you, Finalizer.

PAUL FLESSNER: There we go.

FRANCOIS AJENSTAT: So it actually destroyed the table and the switch. And what happened on the screens is that on the SQL Server 2005 32-bit edition, the CPUs went down to zero, because we can't reach that server anymore, but the queries are still executing. They've moved over to the 64-bit machine, so we've failed over to 64-bit. All of the load we were generating from that client has moved over, and we're keeping our business up and running.

PAUL FLESSNER: This thing is running like 6x the load that it was previously.

FRANCOIS AJENSTAT: Six times the load as we did before.

PAUL FLESSNER: Excellent. Well, I think that was a demonstration.

FRANCOIS AJENSTAT: That was pretty good.

PAUL FLESSNER: All right, thanks very much.

FRANCOIS AJENSTAT: Great, thanks, Paul. (Applause.)

VOICEOVER GUY: And now please welcome back Samantha Bee.

SAMANTHA BEE: Hello again. Now, I’ve been asked to keep this next interview short or I will feel the wrath of the battlebot, so let’s just get right to it.

I’m here with Tanya Luddite. Now, Tanya, you are the chairperson of a national organization. Can you tell us about it?

TANYA LUDDITE: It’s None of Your Business.

SAMANTHA BEE: I’m sorry?

TANYA LUDDITE: The organization is None of Your Business.

SAMANTHA BEE: Well, that did turn out to be a very short interview.

TANYA LUDDITE: No. That’s the actual name of the organization.

SAMANTHA BEE: Oh, okay. Now, what do you advocate then?

TANYA LUDDITE: Information is bad, information causes people to run off and try things. We believe all information should be locked away from everyone.

SAMANTHA BEE: Even from information workers?

TANYA LUDDITE: Especially from information workers.

SAMANTHA BEE: Okay. Now, isn’t the prevailing school of thought that more information is better?

TANYA LUDDITE: Well, I don’t know what school of thought you went to but ours uses the IBM-Oracle model. You pay to put information into a database and if you really have to have it back, you pay to see it again. Having go pay in both directions makes it less likely for people to actually use the information and so it remains safely locked away where it can do no harm.

SAMANTHA BEE: Or good.

TANYA LUDDITE: Or good.

SAMANTHA BEE: But doesn’t information help businesses make better decisions?

TANYA LUDDITE: Sure. If you like your business decisions to be based on information. But information just tempts businesses to take risks. When it comes down to it, without information there are no risky decisions. Without risky decisions, there’s no growth. Without growth, there’s no pressure and without pressure, there are happier employees doing very little to actually risk the success of the company.

SAMANTHA BEE: Well, what if I told you that Paul Flessner is about to talk about how new releases of Microsoft programs will actually unlock the information in company databases?

TANYA LUDDITE: I would find that very upsetting.

SAMANTHA BEE: Would you rather come back with me to the backstage area? We could play with the battlebot.

TANYA LUDDITE: Does the BattleBot unlock any information?

SAMANTHA BEE: No, but it does a pretty good job of smashing a network switch to pieces.

TANYA LUDDITE: That sounds lovely.

SAMANTHA BEE: All right, then, thanks for coming to the techie show.

TANYA LUDDITE: My pleasure.

SAMANTHA BEE: And now I turn things back over to Paul.

But remember, when he’s done, I’ll still have the final word.

Decision Ready: Business Activity Monitoring

PAUL FLESSNER: All right, so decisions: it's one of the final segments of the show, business activity monitoring. Business activity monitoring is to business process what BI is to data. Especially in this world of services, where messages may be going outside your firewall, you're going to want to track that information. If you use BizTalk Server as that flow-control or workflow service in the middle, you get something very important essentially for free: the ability to manage and track all of those operations. You get that BAM activity for free.

In 2006 there’s a huge investment in BAM: integration with Office so that you can easily ship that information that’s been triggered and push it into Office to make sure you can do easily analysis in Excel, you can get incredible visibility. If you don’t want to use Office, you can also pop it up and put it into an easy browser or a Web page. SQL Server will give you notifications so that all the activity that’s going on in your environment can easily be viewed and monitored from tools that are comfortable for you. So 2006 makes a huge investment there.

Decision Ready: Business Intelligence

With business intelligence in SQL Server 2005, we've always had a simple belief about our value proposition: hey, you buy a database to put information into it, but you also want to be able to get information, business intelligence, back out. That's a super important part of our value proposition and what we provide, and it puts a lot of technology in the product.

The first thing you’ve got to do in any sort of business intelligence application is integrate the data, go out and find it, might be in SQL Server, might be in Oracle, might be on a different platform, might be in a flat file but you’ve got to get that information, get it cleansed and put it into an environment where you can do some queries. That’s the analysis part. Whether you’re in SQL Server or analysis services, you want to be able to do rich queries and get good information out of your database. And then the last piece of this is being able to report on it, dynamically and flexibly, so those IWs don’t drive you guys crazy, the business users constantly in and asking for more reports, more reports as they do today.

SQL Server 2005 brings all of these technologies together in one extremely well-integrated UI, so you have a single, easy environment, if you will, for integrating, analyzing and reporting on your data.

One thing I heard from all of you loud and clear: when we originally announced 2005, we said we were only going to put Reporting Services in the Standard and Enterprise editions. Feedback came back strong that you wanted it in all editions, so we have made that change, and Reporting Services will be available in all editions of the product. (Applause.)

Report Builder, the end-user reporting tool, will also be in Standard, Enterprise and Workgroup; it will not be in the free Express edition, but it will go all the way down to Workgroup.

Now, Donald Farmer, a product manager from the SQL Server team, is going to come out and give a quick demo of that integration and hopefully give us a great opportunity to do a quick BI application. Donald, how are you? (Applause.)

DONALD FARMER: Hi, Paul. Thank you very much.

PAUL FLESSNER: So what have you got for us?

DONALD FARMER: Well, Paul, you've been talking about using business intelligence to deliver integration, analysis and reporting of data to end users, but one of the issues people face in the real world is that data is often not very good quality. So we're going to take some not-so-good-quality data and use integration, analysis and reporting to deliver good-quality end-user reporting at the end: improved data quality all the way through.

PAUL FLESSNER: Well, let’s try it.

DONALD FARMER: OK. It’s going to take me probably about 15 minutes to actually do this.

PAUL FLESSNER: And you know what, we’re getting kind of close on time so I get to act like one of those business users, why don’t we do it in eight?

DONALD FARMER: That sounds pretty real world, I’ve got to do it in half the time that I estimate.

PAUL FLESSNER: All right, well, let’s give it a try. I’ve got a stopwatch here. Are you ready?

DONALD FARMER: You’ve got a stopwatch as well?

PAUL FLESSNER: I do, I just happen to. Let’s go.

DONALD FARMER: OK, so here’s the data I have. This is a survey of Tech•Ed attendees and you’ll see that we’ve asked about the number of desktops you have in your organizations, their roles and their focus, focus being whether they’re IT or Value Added Resellers or whatever. But some people haven’t filled out the application completely so I’ve got some missing numbers here about desktops. And it’s actually useful information that enables us to target the sessions that we do somewhat better.

So I’m going to use integration analysis and reporting to deliver a better quality data experience at the end of this. The first step is to use integration services to source the data. So this is an integration services designer and I can simply drag on the Excel data and look at this same report that we saw, preview it and I can see that some of the data is coming through as null.

So the first thing I’d like to do is conditionally split out that null data and handle it specially. So I can use the conditional split components, this null expression and just check that the number of desktops is null, and if it is null, I want to send it to a special output called null.

My next step, however, is —

PAUL FLESSNER: Kicking out all the dirty data, huh?

DONALD FARMER: Kicking out all the dirty data and I’m going to handle it specially. The problem is how am I going to handle that.

Well, fortunately, we have Analysis Services with data mining, and data mining enables me to look at good data, understand the patterns in it, and then apply those patterns to bad data and try to make some predictions about it.

So, for example, here I have a mining model that I've created. I can browse that mining model, which is built on the good data from the attendee survey, and I can see that it predicts the number of desktops based on the focus and the role of the employees.

I can look at a decision tree, which shows me how those decisions are made. So, for example, the first split is whether the role is missing or not. If the role is not missing, then: is the focus ISV? If the focus is not ISV, I look at whether the focus is corporate IT. If the focus is not corporate IT, then is the focus IT, and so on.

PAUL FLESSNER: You’ve got quite a tree going on.

DONALD FARMER: I could walk right down this tree and make all those decisions.

Now, fortunately, being in Visual Studio and having the new development environment, I can actually integrate this directly into the data flow. Typically I would do data mining over a database, but in this case I can do data mining in flight, on the data as it's flowing through the process. So I connect my data from the Null output to Analysis Services, connect to my local Analysis Services server, select the survey model, that mining model I just showed, and build a data mining query using the visual data mining query builder.

You’ll remember that focus and role were the key predictors and I want to predict the number of desktops. Now, being a prediction, it would also be useful if I could understand the probability that that prediction is correct, so I’ll take prediction probability and add that as an additional feature and predict the probability that each prediction individually is correct.

So now I’ve got integration and analysis at this stage. I’m just going to drop this out into a database, connected to an OLDB database into SQL Server, of course, connect to my little SQL Server, make sure I’m fast loading it and I’ll create the new tables to hold that data —

PAUL FLESSNER: A little better name for the table.

DONALD FARMER: Yeah, a little better name than OLE DB Destination, let's call it —

PAUL FLESSNER: Doing fairly well, three minutes 30 seconds.

DONALD FARMER: Oh, that’s not bad so far but I’ve still got to get to the reporting state. Let’s do the preview there, map the incoming data to the table and to this stage I’ve got integration and analysis and I’m ready to populate the table.

Being in Visual Studio, I can actually do some debugging. I can add what we call a data viewer, which acts almost like a breakpoint on the data flow itself: it lets me stop the data flow at this point and actually visualize the data, just as I could examine values using any other —

PAUL FLESSNER: So kind of debug it as you go.

DONALD FARMER: Exactly, I can debug it as I go.

So at this point, I’m ready to execute the package, right-click, then execute, we’ll attach a Visual Studio debugger, which will show the progress of the package, and will actually show me the data of when I reach this point. I can see that the original number of desktops was null in each case because I’ve split them out, I can see the role and the focus and the number of desktops that we predict and the probability of that prediction.

So, for example, I can sort by the number of desktops, and I can see that the people we predict have 12,500 to 25,000 desktops are typically IT, with a CTO role, and that sounds about right: an IT department with 12,500 desktops is big enough to have a CTO.

So at this point, I can detach the visual debugger and let the process complete and you’ll see that we get some nice visual indications of the progress at this stage.

And now I’ve got integration and analysis and I’ve populated the database tables with my cleansed data.

However, the next step is to provide you with the report that you need.

PAUL FLESSNER: Yeah, those reports are good to have generated; otherwise, business users are always asking for more reports.

DONALD FARMER: Absolutely, they always want to see this data then.

So at this point, I’m going to add a new project and the new project will be a reporting project and it will actually live in the same Visual Studio solution, so it could be version controlled together, it could be managed and deployed together.

So now I need to just connect to my local SQL Server where I created that new table, connect to that database, create a query and we have a visual query builder that enables you to build that query fairly easily. Add my table. Select the report format, the details that I’d like to see.
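
The dataset query behind a report like this is nothing exotic. Assuming the package loaded its cleansed output into a table called dbo.CleansedSurvey (a hypothetical name), it amounts to:

    -- Hypothetical dataset query: the cleansed rows with their predicted
    -- desktop counts and prediction probabilities.
    SELECT Focus, Role, PredictedDesktops, PredictionProbability
    FROM   dbo.CleansedSurvey;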

PAUL FLESSNER: You’re moving fast now.

DONALD FARMER: Make a compact report; see, it’s all very simple, there’s hardly any typing, it’s all visual building in this case.

Create a deployment folder for the report project and I can choose to preview the report, let’s give it a useful name.

Finish this off and it actually shows me the report. And there is the report generated from that information.

PAUL FLESSNER: It’s a report, but it’s a bit of an ugly duck there, isn’t it?

DONALD FARMER: Well, thanks very much. (Laughter.)

I suppose it does look like the kind of report I could generate in five minutes, but this is a typical real world scenario. He asks me to clean his dirty data, I do it in half the time I estimate and he’s still not happy. (Laughter, applause.)

So I’m going to deploy this report to my report server anyway and I can go into report manager, refresh this and I can see my report for end users to work with. But you’re right, end users typically aren’t going to be happy with that. If I was a typical IT person, I guess I would say, well, if you’re not happy with it, why not do it yourself. (Laughter.)

PAUL FLESSNER: And that’s exactly what we want to say.

DONALD FARMER: Exactly, yeah.

Report Builder in Reporting Services now enables end users to build their own reports. This user interface is much more friendly for the user; it's not a designer as such, it's more like an Office interface, and I can literally create my own reports in here quite easily. Let's call this one Predicted Desks. You can see it gives me a nice browser over the different information here: I can pull on the focus, drop it in there, take the number of desktops, and it even works out the total of the predicted desks. I can run the report, and you can see that this is a much better report than I had previously.
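
The total that Report Builder works out on its own is the same aggregation you would write by hand; against the same hypothetical table it comes down to:

    -- Total predicted desktops per focus area, the rollup the Report Builder matrix shows.
    SELECT Focus,
           SUM(PredictedDesktops) AS TotalPredictedDesktops
    FROM   dbo.CleansedSurvey
    GROUP  BY Focus;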

PAUL FLESSNER: Yeah, it looks a little bit better.

DONALD FARMER: It looks a bit better.

If I like, I can now deploy this report, simply saving it to the local report server. I’m going to call it My Report just to show that it is actually an end user report in this case, and save that. And this has now deployed that report to the report server. If I refresh here, you’ll actually see that My Report now appears.

PAUL FLESSNER: Now you can share those with anybody or keep them secure?

DONALD FARMER: Anyone who can connect to this with the proper security permissions can go in and see this published on the portal.

But it’s not just that I’ve created the report; without any coding, just by dragging and dropping I’ve actually created a fully interactive report. So, for example, if I want to see more information about architects, I can drill in to the architect information and get all that without any coding at all.

PAUL FLESSNER: Oh, outstanding.

DONALD FARMER: If I want to see an individual architect survey response, I can look at it there and see that, without any coding at all as an end user.

PAUL FLESSNER: Outstanding. That’s very impressive.

How did we do on time? Oh, nine minutes flat but it was darn good, I like it.

DONALD FARMER: OK, thank you very much.

PAUL FLESSNER: Thanks very much, Donald. (Applause.)

That integrated experience is really what we're after. Today, with many competitors, and even with our own current products in the market, you have to pull multiple products together to create that experience, which is a very difficult thing to do, and we hope it will be much, much easier with SQL Server 2005.

And what we’ve all been certainly waiting for is the launch announcement for all of these products is the week of November 7th. We will be doing a large, largest ever launch for these products. It will be a worldwide launch so there will be events through that week. And we’re very, very excited about the build that we’ll be doing through the summer to bring you that launch.

And there’s a couple other pieces of news here. We’re also making available today the first CTP of BizTalk Server 2006 and CTP June or June CPT for SQL Server 2005. That’s, remember, the one I told you that is feature complete.

And finally, one little present I'm going to make to all of you, approximately a $5 million donation today, is a coupon for a free copy of SQL Server 2005 Standard Edition for all Tech•Ed attendees. So hopefully you'll be able to use it and enjoy it. (Applause.)

The Database Market

So again, in the vein of having a little fun and picking on the competitors, and also giving you some ammunition to take back when your team picks on SQL Server, let's take a quick look. These are Gartner numbers for 2004 revenue share. IBM is the leader in terms of the dollars they collect for their database, with 34 percent of total dollars spent on databases. Oracle is at 33.7 percent. Remember, IBM has all that mainframe and AS/400 business; this covers every platform a database lands on, so it's not just Windows, it's Linux, UNIX, AS/400, all the way up to the mainframe. And we're down there in third place with 20 percent.

So you’re looking at that saying, yeah, SQL Server is last in that regard, analysts talk about that, we run third, we have the lowest revenue share of any of the largest commercial vendors.

Well, to some of that I say, hey, look, I only run on Windows, so I'm essentially blocked out of 50 percent of the market, but that's my choice, and they'd say, tough luck, it's certainly your option to port to Linux. I haven't discussed that with Bill recently, but for now we're probably going to stick tight on Windows. So that's revenue share.

Then we take another look at the market from a different angle: unit share. These are external IDC numbers. And what happens here? It's an interesting phenomenon. IBM falls to 7 percent market share, Oracle is at 25 percent, and who becomes number one? Oh yeah, Microsoft SQL Server.

This actually happens to be 2003 data so it’s not completely apples to apples. IDC’s unit share numbers aren’t out for 2004 until August but we feel very good about our position and where we are and we think we took share again in the marketplace so we’re happy to wait for those numbers.

So you might be asking yourself, how does IBM have the No. 1 revenue share, but they have the lowest unit share? Well, let’s take a look. Ooh, I missed something.

So one thing that they’ll pick on you, if you go back and say SQL Server has the most units, they’ll say, oh, well, they just sell them to small and medium business, that’s not really our sector. We also now outsell all the other vendors in the enterprise sector of the market as well. So that’s another good piece of data.

So now let’s get to that how in the heck this happened. So let’s just take a quick pricing survey. Oracle and IBM both announced new products in the last year and they talked about how they’ve dropped their prices and they compete and they’re as cheap as SQL Server or the same cost as SQL Server.

So we took a little survey. We said, okay, let's take one CPU, because we all have per-CPU pricing, so we can make it an apples-to-apples comparison; let's take the enterprise editions of the products, one CPU each; and let's just use retail price, because that's easy to compare. I know you guys get discounts and all that.

So IBM is the same price as SQL Server 2005, Oracle is still a bit confused on what the same price actually means, they’re a little bit more expensive, but this is just for the base product.

Now, we’ve always said in SQL Server we provide great value in the product all up, we put in the box, we don’t want complicated purchasing decisions, you have to pick which edition you’re going to buy, Workgroup, Standard or Enterprise and it costs a little more to go fast and have higher availability but it’s a pretty simple value proposition overall.

But wait, what happens with some of our competitors? First of all, when you decide you want manageability, that's a little upcharge: it takes Oracle up to $56,000 per CPU, and it's $35,000 for manageability if you're buying IBM's DB2.

Then there's high availability, something important; Oracle talks a lot about RAC and how cheap it is. Well, it's only an extra $20,000 per CPU, getting you up to $76,000 per CPU. And you can see that SQL Server hasn't moved over there.

Then we’ve got business intelligence, all that great technology that we just saw that goes into Enterprise edition. Yeah, now you’re up to 116 and 165, IBM takes the lead.

And then, of course, if you’re going to take advantage of modern microprocessing technology, all the multi-core technology that’s going to be shipped, you’re not going to have the choice whether you want to buy it or not, SQL Server doesn’t charge by the core, we charge by the CPU. Our competition, however, has chosen that they’re going to charge by the core. So you go to those dual-core products, now you’re up to $232,000 per proc for Oracle.

Now, IBM, I’ve got to be fair, IBM did come down and match us on Intel platforms, they do not charge by core on the Intel platforms but if you’re going to buy that AIX platform, yep, you’re going to get to pay the full boat there of 330,000k.

So I don’t know, I’m a simple farm boy, I guess, I don’t think price leadership necessarily equates to market leadership and we’re very proud of the fact and want to thank all of you for making SQL Server the leading database in the industry with the largest unit share of all commercial databases on the planet today. So thank you all very, very much. (Applause.)

Yeah, I’ve heard a lot that some people are a little frustrated with Oracle and they’d like to move off, I was never an advocate of doing a conversion kit because I just said let’s not worry about installed base, let’s go after new business and that’s been our strategy for a long time but, honestly, the demand now is higher than we’ve ever seen it to help customers convert from Oracle so we are offering at no charge a kit that will be available for free download. If you want to try to convert those Oracle applications, we’ll help you, other service vendors will be made available to help you, but you will be able to easily migrate those applications off of Oracle and onto SQL Server.

And to have a little fun with it: oh yeah, we need one more contest, so we'll have a contest, and you can find all the details at SQLServerchopper.com. For the company or individual that does the most exciting Oracle conversion and really works hard on their TCO, at launch we'll award a chopper. It will be from, hopefully, a very popular East Coast custom bike company; we're working out the details of that right now. I don't know exactly what you're going to do with that chopper. I'm not sure the CEO is going to ride around on one of those, but maybe you can park it in the lobby of the datacenter and have some fun with it. We wanted to make it available to you.

So we’ve also worked hard, I talked very early in the talk about two things that are super important in IT, TCO, TCO, TCO always and we’ve got to keep that in mind and I hope you’ve seen a lot of what we’re trying to do with integration. But we also talked mostly about connectivity today. But that TCO piece is super important.

And one of the things we're really working hard on is the Common Engineering Criteria, so that all of the Windows Server System family of products, SQL Server, BizTalk Server, MOM, all of them, can be easily integrated and look like products that came from the same company. Overall, that integration should give you a dramatic reduction in total cost of ownership, and we'll continue to push hard on it. We're announcing the 2006 criteria today. There's a Web site where you can go research all of that and scan down the technology integration points we're looking at for the future. We'll announce new criteria every year and keep doing a better job of integrating our products so we can lower that TCO.

So we went a little over; apologies for that, we had a lot going on today. We talked a lot about connected systems and about giving your businesses the power to take advantage of this amazing connectivity on the planet today, which is growing at an incredibly rapid rate. We talked about the integration of our products: making you dev-ready to go in there and build great Web services, mission-ready with SQL Server so your systems can be highly available, and decision-ready with all of that integrated technology.

It’s been my pleasure to talk to all of you today. Have a great show and I thank you very much for your business. (Applause.)

VOICEOVER GUY: And now a final word from our techie show fake anchor person, Samantha Bee.

SAMANTHA BEE: Hey, thank you so much for having me. I can tell that, thanks to the peeks into the future you got from Steve's and Paul's keynotes, things are really looking up. There are lots of solutions that will make your lives easier and possibly even bring your blood pressure back to normal.

Be sure to come and see us next year when Steve Ballmer will announce the first in a series of automated information workers; when you’ve had enough of their demands, you just switch them off and that is the final word.

Have a great rest of the week at Tech•Ed, goodbye everyone and thank you so much for having me. (Applause.)

END
