Tuesday 23 June 2009

Free Quality Malware Protection from Microsoft

Microsoft has just released the beta of their free malware protection product, "Microsoft Security Essentials" (codenamed Morro).

This antivirus and anti-spyware program uses the same tried and tested scanning engine and signature updates as Windows Live OneCare (which is being retired once MSE ships) and Forefront Client Security (Microsoft's business-focused AV solution). It is available in 32-bit and 64-bit versions and has been designed for low system overhead and a minimal user interface. This is in stark contrast to the bloated AV products available today from many of the main AV vendors.
Although this tool is intended as a consumer product, it is difficult not to consider it in the SMB space as well, in the same way that Windows Defender has become a standard part of most small business installations.
Those of you old enough may well be asking yourselves whether this is going to be any different from the MS-AV farce in the days of MS-DOS 6.0 and Windows 3.1. Well, we at CM have been testing this application for a few weeks now and we have found it to be excellent! The UI may be minimal, but don't think for one minute that this product has cut back where it matters: its ability to detect and protect your system from malware attacks is top class. Check it out here: http://www.microsoft.com/security_essentials/
So tell your family and friends! Good, simple, solid AV protection is, at last, available for free!

Friday 5 June 2009

SharePoint and Tagging Content

Plenty has been written in the blogosphere about Web 2.0 and, more recently, the idea of Enterprise 2.0, in which the kind of functionality that we're now used to on Wikipedia, Facebook, Twitter, Delicious and so on is used within an organisation's intranet and extranet to improve collaboration. You'll also see Enterprise 2.0 called enterprise social software. It's not a surprise; most people reading this will be users of many of these services and know how helpful they can be for keeping in touch, finding obscure information, and meeting new "friends". It's really easy to see how such technologies would be useful on an intranet, particularly in a large, global organisation where employees can feel lost or isolated. Because users already know about blogs, wikis and social networking, they'll be comfortable with similar tools for sharing knowledge with their colleagues and should start using them rapidly.

SharePoint, particularly MOSS, is an ideal platform for Enterprise 2.0 and supports many of the features required straight out of the box. Take a look, for example, at the list of features in the Wikipedia article on Enterprise 2.0 (a list taken from Andrew McAfee):

  • Search: allowing users to search for other users or content.

    SharePoint has market-leading search functionality for locating content and MOSS includes people search for locating users.

  • Links: grouping similar users or content together.

    In My Site you can identify colleagues and those who have things in common with you. Content can be grouped in lots of ways.

  • Authoring: including blogs and wikis.

    SharePoint includes site templates for blogs and wikis. They're very rapid to set up. There's very rich functionality for collaborative authoring of documents.

  • Extensions: recommendations of users or content based on profile

    MOSS has lots of ways to do this in My Site.

  • Signals: allowing people to subscribe to users or content with RSS feeds

    Any SharePoint list or document library can have an RSS feed.

  • Tags: allowing users to tag content

    Bad news: SharePoint is a little lacking here and it's this that I want to discuss in this article.

Take the default SharePoint blog template as an example: when you add or edit a post, there's no field for tags. You can place the post in a category, but that's not the same thing, because the categories are pre-determined and you can't create a new one on the fly. Also, you can only put a blog entry in a single category, whereas you often want to tag it with several different words.

In Web 2.0 there are two kinds of tags. The ones added to content by the author are stored with the content. For example, I've tagged this entry with the word "SharePoint" so that when you click on SharePoint in the word cloud, it will appear. Then there are tags that readers add to the content. Since they only get read access to the site, readers can't store their tags with the content. Instead they save a list of links on a site like Delicious and add tags to those links. I'll deal with these two types of tagging separately.

Author Tags

Of course it's straightforward to add a new column to SharePoint for tags; it's just a text field, after all. On the blog homepage click Manage Posts, then, under Settings, click Create Column. When you edit a blog entry you'll then be able to edit this new field; in most blogging tools, this would be a comma-separated list.
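
If you'd rather provision the column in code, for example from a feature receiver or a console app, a minimal sketch might look like this; the method name and the URL you pass in are just for illustration.

using Microsoft.SharePoint;

static void AddTagsColumn(string blogUrl)
{
    using (SPSite site = new SPSite(blogUrl))
    using (SPWeb web = site.OpenWeb())
    {
        //"Posts" is the list that the standard blog template creates
        SPList posts = web.Lists["Posts"];
        //A plain text column to hold a comma-separated list of tags
        posts.Fields.Add("Tags", SPFieldType.Text, false);
        posts.Update();
    }
}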

Display is a little harder. It would be relatively simple to write a Web Part that displays the tags for an item. This would work when a single blog post is displayed (the view where comments are visible), but you wouldn't see it in the list of all posts. To display tags there, under each post, you'd have to write your own Web Part to replace the Posts Web Part and include a list of tags with each post. This isn't advanced programming, but it would take a little time to get right.

Perhaps more importantly, there is no tag cloud display in SharePoint out of the box. I know you've seen a tag cloud before because there's one just to the left of this text. It shows a list of all the tags used in the blog, and each is sized according to how often it has been used (SQL Server is the most popular tag as I write). Again, it isn't hard to create a Web Part that does this: you'd loop through all the entries in your blog, counting the comma-separated terms in your Tags column, and then output the text, probably rendering style attributes to size each term.
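
To make that concrete, here's a minimal sketch of the counting and sizing logic as a Web Part. Treat it as a starting point rather than finished code: it assumes the standard "Posts" list and a plain text column called "Tags" holding comma-separated terms.

using System;
using System.Collections.Generic;
using System.Web.UI;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Utilities;

public class TagCloudWebPart : System.Web.UI.WebControls.WebParts.WebPart
{
    protected override void Render(HtmlTextWriter writer)
    {
        //Count how often each comma-separated tag appears across the blog posts
        Dictionary<string, int> counts = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
        int max = 1;
        SPList posts = SPContext.Current.Web.Lists["Posts"];
        foreach (SPListItem post in posts.Items)
        {
            string tags = post["Tags"] as string;
            if (string.IsNullOrEmpty(tags)) continue;
            foreach (string raw in tags.Split(','))
            {
                string tag = raw.Trim();
                if (tag.Length == 0) continue;
                counts[tag] = counts.ContainsKey(tag) ? counts[tag] + 1 : 1;
                if (counts[tag] > max) max = counts[tag];
            }
        }
        //Size each term relative to the most frequently used tag (roughly 80%-200%)
        foreach (KeyValuePair<string, int> entry in counts)
        {
            int size = 80 + (120 * entry.Value / max);
            writer.Write("<span style=\"font-size:{0}%;\">{1}</span> ", size, SPEncode.HtmlEncode(entry.Key));
        }
    }
}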

I'm not going to develop all of this into finished code here because plenty of people already have, notably the team at wsssearch.com, whose tagging controls are part of the Community Kit for SharePoint. Here's their Tag Cloud control running in the standard SharePoint blog site:

[Screenshot: the Community Kit for SharePoint Tag Cloud control running in a blog site]

Two things to point out about these controls. Firstly, they don't just work with blog posts: you could use them with announcements, contacts, or just about any content type, including your own custom content types. Secondly, they're open source, which means you can use them as a starting point for more ambitious functionality. For example, you could have a tag cloud that links to content from across a site collection, or even across your whole enterprise. You'd have to be careful about indexing and so on to achieve good performance, but with care this could be a really useful control for showing users the hot topics in your organisation.

User Tags

Delicious-style user tags are in some ways more interesting than author tags because they work in your community of readers, and this is really what Web 2.0 and Enterprise 2.0 are all about. They let you find people with similar interests to your own and discover the links they like, which will probably be useful to you too.

SharePoint is already excellent at finding people, particularly when My Sites are widely used. You can find people with similar skills, people who have worked on similar projects, or people who have other things in common with you, and you can search by name, department, skill, or any other managed property. So what we need is a simple way for users to save their favourite links and tag each one. These can then be displayed on the user's profile.

Each user will need a new list in their My Site, with columns for the URL and the tags (probably a comma-separated list, as before), and then maybe Name and Description (Delicious has Title and Notes fields). There's a good blog entry on deploying list templates to My Sites here if you need help with this.
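
If you just want to experiment before packaging a proper list template, here's a minimal sketch that provisions such a list directly through the object model. The list name, column names, and the idea of passing in the My Site URL are all just illustrative.

using System;
using Microsoft.SharePoint;

static void CreateTaggedLinksList(string mySiteUrl)
{
    using (SPSite site = new SPSite(mySiteUrl))
    using (SPWeb web = site.OpenWeb())
    {
        Guid listId = web.Lists.Add("Tagged Links", "My favourite links and their tags",
            SPListTemplateType.GenericList);
        SPList links = web.Lists[listId];
        //Title comes for free with a generic list; add URL, Tags and Description columns
        links.Fields.Add("URL", SPFieldType.URL, true);
        links.Fields.Add("Tags", SPFieldType.Text, false);
        links.Fields.Add("Description", SPFieldType.Note, false);
        links.Update();
    }
}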

So far, so simple. Now users can find people like them and see their favourite links and tags. We must make this system effortless to use because it will only be helpful when lots of users add all their favourite links and continue to add them as they find new ones. Users' links are stored in their browser favourites or bookmarks so it's essential to give them a tool for importing these into their My Site profile. How would this work?

On Delicious you export a list of favourites from the browser using its standard tools, then upload the file to the server and tag the imported links. If a link is in a folder, for example one called "SharePoint", that folder name is added as a tag, but you should still review all the imported links and add further tags as you need. This kind of solution would be easy to implement in a SharePoint Web Part: you would add the ASP.NET FileUpload control to the Web Part, and when the user clicks Upload you can get the file from the FileUpload.PostedFile property and parse it for all the links and folder names. For each link you'd add a new entry to the user's list in their My Site.
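
Here's a rough sketch of that import step. It assumes the exported favourites are in the standard bookmark HTML format that browsers produce, and it uses the "Tagged Links" list from the earlier sketch; all of the names are illustrative, and postedFile would come from your FileUpload control (fileUpload.PostedFile) in the button's click handler.

using System;
using System.IO;
using System.Text.RegularExpressions;
using System.Web;
using Microsoft.SharePoint;

static void ImportFavourites(HttpPostedFile postedFile, SPWeb mySite)
{
    string html;
    using (StreamReader reader = new StreamReader(postedFile.InputStream))
    {
        html = reader.ReadToEnd();
    }

    SPList links = mySite.Lists["Tagged Links"];
    string currentFolder = "";

    //Walk the file line by line, remembering the most recent folder name so it
    //can be used as the initial tag for the links inside that folder
    foreach (string line in html.Split('\n'))
    {
        Match folder = Regex.Match(line, "<H3[^>]*>(?<name>[^<]+)</H3>", RegexOptions.IgnoreCase);
        if (folder.Success)
        {
            currentFolder = folder.Groups["name"].Value.Trim();
            continue;
        }

        Match link = Regex.Match(line, "<A\\s+HREF=\"(?<url>[^\"]+)\"[^>]*>(?<title>[^<]*)</A>",
            RegexOptions.IgnoreCase);
        if (link.Success)
        {
            SPListItem item = links.Items.Add();
            item["Title"] = link.Groups["title"].Value;
            item["URL"] = link.Groups["url"].Value;
            item["Tags"] = currentFolder;
            item.Update();
        }
    }
}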

If you build such a solution, give careful attention to usability. For example, having uploaded a hundred favourites, a user won't want to edit each one individually, so give them a form with arrow controls that move to the next and previous entry with a single click, and show them a list of the tags they've used before so that a single click adds a tag to the current entry. Use Silverlight or AJAX to maximise the responsiveness of the form and cut down on page reloads.

Finally, consider how to make these tags available and interesting to users. Again, a tag cloud control will be really helpful, but this one has to evaluate many more tags spread across a large number of My Sites (each of which is a separate site collection), so think carefully about performance and indexing to make sure the cloud renders quickly. Again, I'd use the Community Kit for SharePoint code as a starting point. I'd also suggest a hierarchical control for browsing tags by user without having to open multiple My Sites, plus other displays such as "Latest 20 tags", "Most popular 20 tags" and so on. Placing these controls on key intranet pages should help users communicate and generate a buzz around hot topics.

Conclusion

So SharePoint 2007 can indeed deliver full Enterprise 2.0 functionality with a little custom coding. I think we can safely expect SharePoint 2010 to improve on this; it's almost certain to have a tag cloud control built in, for example. But it may be a year or more before your organisation upgrades, and as I've shown here, we can make big strides right now without a massive effort. You should also be considering the Enterprise 2.0 concepts anyway, because they enable users themselves to make their intranet a compelling place to surf. This will be a big topic in SharePoint 2010.

Links

Wikipedia Enterprise 2.0 Article

Delicious

Community Kit for SharePoint

Wednesday 3 June 2009

SQL Server 2008 R2 CTP Announced

The second half of 2009 is shaping up to be an important time for Microsoft as several major product releases are scheduled (including Windows 7, Windows Server 2008 R2, and Exchange Server 2010), along with technical previews for SQL Server 2008 R2 and Office 2010, both of which are due for release in the first half of 2010.

‘Kilimanjaro’ confirmed as SQL Server 2008 R2

The summer 2009 release of the CTP of SQL Server 2008 R2 (previously known as ‘Kilimanjaro’) was announced in May at the Tech-Ed event in Los Angeles, and its emergence, hot on the heels of SQL Server 2008, shows just how committed Microsoft are to taking the lead in the data management arena. Detailed discussion about the new release will have to wait until I can get my hands on the CTP itself, but the range of new features, a full list of which can be found at the SQL Server 2008 R2 site (see end of article), looks very promising. The main points are:

Improved Performance and Management

The new version will support 256 logical processors, up from 64 in the current release. This increase enables you to take advantage of the ongoing advances in multi-core processor technology to provide improved performance, which will be invaluable if you are planning to consolidate databases and servers to cut costs and ease the administrative burden. Improvements to SQL Server Management Studio (SSMS) make the centralized management of multiple servers more straightforward through the provision of enrolment wizards and dashboard viewpoints that give you improved insight and access to key information, such as utilization and policy violations.

Improved Data Quality

With the ever increasing amount of data that organizations have to manage, and the proliferation of locations where that data is stored, maintaining data quality has emerged as a major headache for companies over the last few years. SQL Server 2008 R2 includes ‘Master Data Services’, a new feature that helps organizations to track their data more effectively. Master Data Services comprises a ‘Master Data Hub’ and a ‘Stewardship Portal’ through which you can manage master data. By using Master Data Management to identify and maintain a ‘single version of the truth’ within their data, organizations will benefit from improvements in the reliability of business decisions and other operational processes that are based upon that data.

Self-Service Analysis

Add-ins for Microsoft Office Excel 2010 and Microsoft Office SharePoint 2010 promise to make it easier for users to explore and integrate data from multiple sources and to publish reports and analyses for consumption by other users. In addition, the SharePoint 2010 Management Console enables centralized management of user-generated Business Intelligence (BI) activities, including monitoring, setting policies, and securing resources such as reports. Microsoft refer to this as ‘Self Service Analysis’, the idea being that it places the information that users need into their hands, and so speeds up data-dependent business processes.

Reporting Services

Reporting Services has also been revamped, with improved drag-and-drop report creation and enhanced data modelling that make it easier for non-technical users to create reports, plus support for geospatial visualization so that, for example, you can view sales statistics by region on a map.

Summary

The focus on improved data management and BI in this release of SQL Server comes as no surprise and continues the trend first seen in SQL Server 2005. The R2 version of SQL Server 2008 looks like it will have a lot to offer; the improved processor support alone is a major benefit given the current trend towards server consolidation. As more information becomes available, I’ll let you know, but for now you can register for the CTP download at http://www.microsoft.com/sqlserver/2008/en/us/r2.aspx

Tuesday 2 June 2009

Using Microsoft Bing Maps in SharePoint

The last time I blogged, I wrote about SharePoint and Google Maps – specifically, how to display maps in a SharePoint Web Part. Since you frequently have geographical information stored in SharePoint, most often as postal addresses, this is a really powerful addition to your developer arsenal. But Google Maps is only one of the mapping providers you can use in this way; there are also MapQuest and Yahoo! Maps, for example.

Microsoft's mapping solution is called Bing Maps: just last week Microsoft announced it is rebranding Virtual Earth as Bing Maps, and the API that you use to place Bing Maps on your Web site is now called Bing Maps for Enterprise. For those of you who've developed Virtual Earth code before, you'll be pleased to know there's not much change. A few people have asked me how to use this technology in SharePoint; it can be done in a very similar way to Google Maps, and in this post I'll cover the differences.

Review of Architecture

As with the Google Maps solution I described in my last post, the interesting part of this task is getting a largely server-side technology like SharePoint to work with a client-side technology like Bing Maps. Suppose you have some search results, each with a latitude and longitude, that you want to display in a list and on a map. In a conventional Web Part you'd loop through the results in server-side ASP.NET code to build an HTML list for the browser. Bing Maps, on the other hand, uses JavaScript on the client to add pushpins like this:

var shape = new VEShape(VEShapeType.Pushpin, map.GetCenter());

shape.SetTitle('A new pushpin');

shape.SetDescription('This is just to demonstrate pushpins');

map.AddShape(shape);

So the question is: how do you get client-side code to loop through a collection that only exists on the server?

Our approach is to render an XML island that contains the relevant information for each search result. The client-side code can locate this island and loop through it, adding a pushpin or another shape for each entry. We'll put the JavaScript in a separate file but embed it as a resource in the .NET assembly, as we did for Google Maps.
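
If you don't have the previous post to hand, here's roughly what that island rendering might look like. The element and attribute names are purely illustrative; they just need to match whatever your parseXmlIsland function expects, and the MapResult class is simply a convenient shape for one result.

using System.Collections.Generic;
using System.Security;
using System.Web.UI;

//Hypothetical shape of one search result; populate it from wherever your Web Part gets its results
public class MapResult
{
    public string Title;
    public string Path;
    public double Latitude;
    public double Longitude;
}

//Writes the XML island that the client-side script will loop through
static void RenderXmlIsland(HtmlTextWriter writer, IEnumerable<MapResult> results)
{
    writer.Write("<xml id=\"mapData\"><results>");
    foreach (MapResult result in results)
    {
        writer.Write("<result title=\"{0}\" path=\"{1}\" lat=\"{2}\" long=\"{3}\" />",
            SecurityElement.Escape(result.Title),
            SecurityElement.Escape(result.Path),
            result.Latitude, result.Longitude);
    }
    writer.Write("</results></xml>");
}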

You could also consider an AJAX-style approach to this problem: a Web service that receives search terms and returns results. Client-side code could then render both the list and the pushpins on the map, and you'd get all the improvements in responsiveness that are achievable with good AJAX coding. One thing to watch out for: the built-in SharePoint Web Services are not enabled for AJAX, so you'd have to write your own.
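
If you did go down that route, a hypothetical script-enabled service might look something like the sketch below, reusing the simple MapResult shape from the sketch above. None of this comes from SharePoint itself; you would deploy and script-enable the service yourself, and the names here are made up for illustration.

using System.Web.Services;
using System.Web.Script.Services;

[WebService(Namespace = "http://example.com/mapsearch")]
[ScriptService]
public class MapSearchService : WebService
{
    [WebMethod]
    public MapResult[] Search(string terms)
    {
        //Run your search here and project each hit into a MapResult with its
        //title, URL, latitude and longitude. An empty array keeps the sketch compilable.
        return new MapResult[0];
    }
}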

Most of the coding for this Bing Maps solution is exactly the same as for Google Maps, so you should read this after digesting the previous post. The following tasks are exactly the same:

  • Rendering the XML Data Island.
  • Registering and embedding the scripts in the Web Part assembly.
  • Parsing the XML Island.

That leaves us with three tasks that are different for Bing Maps. I'll describe these below.

Map Results Web Part

To put a Bing Map on a static Web page you must first link to the script library:

<script src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.2" type="text/javascript" ></script>

Then you must use a <div> tag to position and size the map:

<div id='myMap' style="position:relative; width:800px; height:600px;"></div>

These can both be rendered in server-side code like the following in your map Web Part's Render method:

protected override void Render(System.Web.UI.HtmlTextWriter writer)
{
      //Render the script tag that links to the Bing Maps (Virtual Earth) API
      writer.WriteLine("<script " +
            "src=\"http://ecn.dev.virtualearth.net/mapcontrol/" +
            "mapcontrol.ashx?v=6.2\" " +
            "type=\"text/javascript\"> </script>");
      //Render the div that will display the map
      writer.Write("<br /><div id=\"map\" " +
            "style=\"position:relative; width: 800px; height: 600px\" ></div>");
}

Loading the Map

This JavaScript function loads and sets properties for the map. This should go in the JavaScript file you have embedded in the Web Part assembly:

var map;

function loadMap(){
      //Create the map in the div with id "map"
      map = new VEMap("map");
      //The default latitude and longitude
      var latLong = new VELatLong(54.59088, -4.24072);
      //The default zoom level
      var zoomLevel = 5;
      //Add the dashboard controls (this must be set before LoadMap is called)
      map.SetDashboardSize(VEDashboardSize.Normal);
      //Load the map, centred and zoomed at the defaults
      map.LoadMap(latLong, zoomLevel);
      //Parse the XML island rendered into the page
      parseXmlIsland();
}

The parseXmlIsland function is just like that for Google Maps because the XML is the same. For each result in the XML island, it adds a pushpin.

Adding Pushpins

This addPushPin function inserts a new pin at the longitude and latitude specified. The parseXmlIsland function calls this for each result:

function addPushPin(Title, Path, Latitude, Longitude){

      //Formulate the HTML that goes into the info box, linking back to the item
      var infoHtml = "<a href='" + Path + "'>" + Title + "</a>";
      //Add the pushpin at the specified co-ordinates
      var pinLatLong = new VELatLong(Latitude, Longitude);
      var pin = new VEShape(VEShapeType.Pushpin, pinLatLong);
      pin.SetTitle("<h2>" + Title + "</h2>");
      //Add an info window
      pin.SetDescription(infoHtml);
      //Add the pushpin to the map
      map.AddShape(pin);
}

Conclusion

So you can use Bing Maps in SharePoint with just the same approach as Google Maps. The coding details differ, but the overall architecture and some of the tasks are identical. All the code in both my posts uses latitude and longitude, but both APIs provide geocoding functions that can convert an address into co-ordinates if you need them.

Links

Windows SharePoint Services Developer Center

SharePoint Server Developer Center

Bing Maps Interactive SDK

Fast Track Data Warehouse Reference Architectures

Introduction
In my previous entries, I’ve covered some of the important new features of SQL Server 2008, how they work, and why they help to improve efficiency and performance as well as save money. In this entry, I’m going to take a slight diversion and introduce you to the recently published Fast Track Data Warehouse Reference Architectures, which are essentially a set of guidelines that will help you to plan and implement data warehousing solutions.
Building a data warehouse represents a major investment for any organization, and it requires a significant development effort. Hardware and software can be complex to install, configure, test, and tune, and because the design requirements of a data warehouse are very different to those of OLTP databases, specialist skills are needed, which your average DBA is unlikely to possess – that’s not to say that DBAs are not capable of learning these skills, of course, but training them up will add to the project’s cost and potentially delay its progress. As a result, development can be a very long, expensive process, and because of the complexities involved, there is no guarantee that the finished data warehouse will deliver the desired levels of performance or the business insight that is required to drive revenue.

Fast Track Data Warehouse Reference Architectures
The new Fast Track Data Warehouse Reference Architectures are designed to address these issues and to ensure that organizations can quickly and efficiently create high-performance, highly scalable, cost-effective solutions that meet the needs of the business.
Specifically, the aims of the Fast Track Reference Architectures are:
• To speed up deployment and provide a faster time to value.
• To lower TCO for the customer
• To provide scalability up to tens of terabytes
• To provide excellent performance out of the box
• To provide a choice of hardware vendors

The Fast Track reference architectures deliver on these aims through a combination of factors:
• Firstly, they provide a set of pre-tested hardware configurations based on servers from trusted leading vendors, including HP, Dell, and Bull. This drastically reduces the time to value and TCO because it removes the need for customers to source, configure, and test hardware and software themselves and it provides a reliable, high-performance platform. The hardware configurations include two, four, and eight processor options so that differing performance, scalability, and pricing needs can be met, and extensive supporting technical documentation and best practice guides ensure that customers can fine tune systems to their specific requirements. The available documentation and support files also make it much more straightforward and less risky for organizations to create their own custom configurations, should they choose to go down that route. The choice of vendors provides the flexibility for organizations to make best use of their existing in-house skill base, and reduces the need for re-training.
• Secondly, they leverage the features of SQL Server 2008 Enterprise Edition to help deliver performance, flexibility, and scalability, and to drive down TCO. These include data and backup compression, partitioning, Resource Governor, and star join query optimization.
• Finally, the reference architecture configurations are optimized for sequential I/O and use a balanced approach to hardware that avoids performance bottlenecks in the system. Let’s explore these last two concepts in a little more detail.

Sequential I/O
The Fast Track reference architectures are based on the concept of sequential I/O as the primary method of data access. Data warehouses have a usage pattern that is very different to that of OLTP systems. A business intelligence query will usually involve selecting, summarizing, grouping and filtering data from tables that consist of millions or even billions of rows, and will return results for a range of data. For example, a query may return a summary of sales for a particular product from date A to date B. Rows in fact tables are often stored ordered by date, so SQL Server can process queries like this by accessing data sequentially from disk, which, assuming minimal data fragmentation, is very efficient. Sequential I/O and predominantly read-based activity are key characteristics of data warehouse workloads, in contrast to OLTP workloads, which more commonly involve random I/O and extensive read/write activity as rows are inserted, updated and deleted.

Balanced Approach
The second key concept underlying the Fast Track Reference Architectures involves optimizing throughput by taking a balanced approach to hardware. Rather than looking at factors such as CPUs, I/O channels, and the I/O capacity of the storage system in isolation, a balanced approach assesses the collective impact of these components on total throughput. This helps to avoid the accidental creation of bottlenecks in the system, which can occur if the throughput of any one of the components is not balanced against the others. For example, if the storage system does not have enough drives, or the drives are not fast enough, the speed at which data is read from them will not match the capacity of the other hardware components (primarily the CPUs and the system bus), and performance will suffer. This can be confusing to administrators because monitoring may reveal that, for example, your CPUs have spare capacity, and yet response times are still poor. Adding more CPUs would have no effect in this scenario, because the problem is that the hardware is not balanced correctly; solving it involves improving the throughput of the limiting factor, which in this case is the storage system. A balanced approach starts with the CPUs, evaluating the amount of data that each core can process as it is fed in, and the other components are balanced against this.

Project Madison – Scaling to Petabyte Levels
The Fast Track reference architectures we’ve discussed here are all based on a symmetric multiprocessing (or SMP) ‘shared everything’ model, in which your database is hosted on a single powerful server with dedicated CPU and disk resources. These configurations offer excellent value, scalability, and performance for databases in the 4 – 32 terabyte range, but they are unsuitable for larger implementations because the resulting increase in resource contention erodes the performance benefits. An extended set of reference architectures, codenamed ‘Project Madison’, is due for release in the near future. Madison provides a scale-out, shared nothing architecture based upon the concept of ‘massively parallel processing’ (or MPP), in which multiple servers work together, coordinated by an MPP query optimizer. ‘Shared nothing’ refers to the fact that each server has its own set of resources, which it does not share with any of the other servers. Madison enables growth from terabyte levels to petabyte levels through scaling out, providing a growth path for businesses that meets their requirements now and in the future.

Monday 1 June 2009

LiveID, authentication and the cloud

I would imagine that by now most people who use Windows (and other operating systems) will have signed up for a LiveID. This is the mechanism that Microsoft use just about everywhere they need to authenticate users on the web. You may have noticed that LiveID accounts can be used on non-Microsoft sites as well.

In this post I wanted to summarize some of the scenarios for using LiveID, and illustrate its usefulness as an authentication mechanism.

  • From a user’s perspective, having to remember a single id and password for a whole lot of sites is convenient. Sometimes a user even **appears** not to have to log in at all, because their credentials are remembered for them.
  • From a developer’s perspective, having someone else look after the authentication process can dramatically simplify an application.

The scenarios I’d like to outline are:

  1. Logging in directly to a LiveID enabled application. Examples include Windows Live Messenger, or Live Mesh.
  2. Using delegated authentication. Your application needs to use a resource in another LiveID enabled application.
  3. Using persistent delegated authentication. Your application needs to use a resource in another LiveID enabled application, but you don’t want to keep asking the user for their credentials.
  4. Using LiveID as the authentication mechanism for your application.
  5. Using a LiveID to authenticate against an OpenID enabled application.

I’m sure there are plenty of other scenarios, but these 5 strike me as the most interesting and useful in practice.

Scenario 1 - Logging in directly to a LiveID enabled application

The most trivial version of this (from a user’s point of view) is logging in to an application like Windows Live Messenger or a protected page somewhere on microsoft.com. Once the user has registered for a LiveID, they can log in anywhere they see the LiveID login logo.

A slightly more complex version of this scenario (for a developer) would be logging in from within a web application.

var accessOptions = new LiveItemAccessOptions(true);
NetworkCredential credentials = new NetworkCredential(userName, password);
LiveOperatingEnvironment endpoint = new LiveOperatingEnvironment();
var authToken = credentials.GetWindowsLiveAuthenticationToken();
endpoint.Connect(authToken, AuthenticationTokenType.UserToken, meshCloudUri, accessOptions);
Mesh meshSession = endpoint.Mesh;
HttpContext.Current.Session["MeshSession"] = meshSession;

The code above logs a user on to the Live Mesh Operating Environment, using the id and password provided by the user. Presumably, the endpoint looks after the authentication process for you. After that, the web application caches the authenticated Mesh object for the duration of the user’s session.

The significant feature of this scenario is that all the interaction with LiveID is handled by someone else – in this case the Live Mesh Operating Environment.

Scenario 2 - Using delegated authentication

This scenario differs from the first in that your application needs to authenticate with LiveID **before** accessing a resource. For example, you might have a web application that enables a user to send and receive instant messages from within the application. In this case your application has to log in to Windows Live Messenger on behalf of the user – hence the delegation. You also want the user to provide their credentials only once per session, so they don’t keep getting prompted to sign in!

Assuming the user already has a LiveID, this scenario breaks down into two major steps:

  1. The user must give their consent for your application to access their Live Messenger account. Ideally this happens only once, or at least infrequently (once a month?).
  2. The user logs in at the start of their session, and your application can then send and receive instant messages for them during the session.

The consent phase

Here the user is giving consent for this **specific** application to have permissions to access their Live Messenger account for some period of time.

  1. Your application must be able to uniquely identify itself – so you must register your application on the Azure Services Developer Portal and get some identifying codes.
  2. Your application must redirect the user to the LiveID consent webpage (passing your app’s unique identifying codes) to allow the user to give their consent.
  3. Your user will be automatically redirected back to your application after giving consent. Also, LiveId will return a consent token (see below) in a cookie to your application.

All of these interactions are, of course, encrypted.

The user uses your application

This is where the delegation occurs – your application can use Live Messenger on behalf of the user.

  1. Once the user has authenticated using LiveID, the LiveID servers return an encrypted cookie called a consent token (if the user doesn’t already have one from the consent phase). This consent token contains, amongst other items, a delegation token and some expiry details. The consent token is potentially a long-lived token (there is also a renewal/refresh mechanism that I won’t go into here).
  2. From this point on, whenever your application needs to interact with Live Messenger, it will send the signed delegation token back to the server.

Once the user logs off, the two tokens are lost, so when they go back to the site they’ll have to log in again and get a new consent token. To avoid replay attacks, the delegation token is signed and datetime stamped.

Scenario 3 – Using persistent delegated authentication

This scenario is very similar to scenario 2. In scenario 2, each time the user uses your application, they have to sign in again to get access (in this example) to the Messenger functionality. If your application can cache the consent token, perhaps in a database, then there is no need for the user to log on again, because the delegation token can be re-signed and sent. The only time the user might have to sign in again is to refresh the consent token when it expires.

This approach leads to a much better user experience, but you do have to have a secure way of storing the consent tokens.

Scenario 4 - Using LiveID as the authentication mechanism for your application

The first three scenarios all use a LiveID as a way of authenticating against an existing (Microsoft) application or resource. There is nothing to prevent you from using LiveID as your own application’s authentication mechanism. This has a number of advantages:

  1. You don’t have to go through the hassle of designing your own authentication system, and implementing databases to store userids and passwords etc.
  2. The user doesn’t have to remember yet another userid and password.
  3. You have a tried and tested authentication scheme.

This scenario is, again, very similar to scenario 2. You need to register your application on the Azure Services Developer Portal and obtain your unique identifying codes. The Live SDK includes a set of web controls that simplify the task of building your UI and handling the various tokens and cookies.

Scenario 5 - Using a LiveID to authenticate against an OpenID enabled application

OpenId is an interesting approach that tries to provide a framework in which a user needs just one online digital identity, instead of the dozens of userids and passwords most of us currently have.

An OpenId provider enables users to create a digital identity (i.e. a userid and password). The OpenId provider also validates identities on behalf of other sites. So, for example, if I want to use a site like Stackoverflow, I will need an OpenId. When I visit Stackoverflow, it needs to know who I am, so it asks me for an OpenId. I am then redirected to my OpenId provider, where I enter my password, and if it’s correct I’m redirected back to Stackoverflow. Stackoverflow knows who I am without ever having to see my password, because it trusts the OpenId provider to authenticate me.

Microsoft currently have a beta version of LiveId working as an OpenId provider. So, if you want your digital identity to be your LiveId, that’s now possible. Of course you could select a different OpenId provider if you preferred.