A GUIDE TO CLAIMS-BASED IDENTITY AND ACCESS CONTROL


 

A Guide to Claims-Based Identity and Access Control
Second Edition

Authentication and Authorization for Services and the Web

patterns & practices
Microsoft Corporation

 

This document is provided “as-is.” Information and views expressed in this document, including URLs and other Internet website references, may change without notice. You bear the risk of using it. Some examples depicted herein are provided for illustration only and are fictitious. No real association or connection is intended or should be inferred.

© 2011 Microsoft. All rights reserved.

Microsoft, Active Directory, MSDN, SharePoint, SQL Server, Visual Studio, Windows, Windows Azure, Windows Live, Windows PowerShell, and Windows Server are trademarks of the Microsoft group of companies. All other trademarks are the property of their respective owners.

 

Contents

foreword
Kim Cameron

foreword
Stuart Kwan

foreword
Steve Peschka

preface
  Who This Book Is For
  Why This Book Is Pertinent Now
  A Note about Terminology
  How This Book Is Structured
  About the Technologies
  What You Need to Use the Code
  Application Server
  ADFS
  Active Directory
  Client Computer
  Who’s Who

acknowledgements

1 An Introduction to Claims
  What Do Claims Provide?
  Not Every System Needs Claims
  Claims Simplify Authentication Logic
  A Familiar Example
  What Makes a Good Claim?
  Understanding Issuers
  ADFS as an Issuer
  External Issuers
  User Anonymity
  Implementing Claims-Based Identity
  Step 1: Add Logic to Your Applications to Support Claims
  Step 2: Acquire or Build an Issuer
  Step 3: Configure Your Application to Trust the Issuer
  Step 4: Configure the Issuer to Know about the Application
  A Summary of Benefits
  Moving On
  Questions

2 Claims-Based Architectures
  A Closer Look at Claims-Based Architectures
  Browser-Based Applications
  Understanding the Sequence of Steps
  Optimizing Performance
  Smart Clients
  SharePoint Applications and SharePoint BCS
  Federating Identity across Realms
  The Benefits of Cross-Realm Identity
  How Federated Identity Works
  Federated Identity with ACS
  Understanding the Sequence of Steps
  Combining ACS and ADFS
  Identity Transformation
  Home Realm Discovery
  Design Considerations for Claims-Based Applications
  What Makes a Good Claim?
  How Can You Uniquely Distinguish One User from Another?
  How Can You Get a List of All Possible Users and All Possible Claims?
  Where Should Claims Be Issued?
  What Technologies Do Claims and Tokens Use?
  Questions

3 Claims-Based Single Sign-On for the Web and Windows Azure
  The Premise
  Goals and Requirements
  Overview of the Solution
  Inside the Implementation
  a-Expense before Claims
  a-Expense with Claims
  a-Order before Claims
  a-Order with Claims
  Signing out of an Application
  Setup and Physical Deployment
  Using a Mock Issuer
  Isolating Active Directory
  Handling Single Sign-out in the Mock Issuer
  Converting to a Production Issuer
  Enabling Internet Access
  Variation—Moving to Windows Azure
  Questions
  More Information

4 Federated Identity for Web Applications
  The Premise
  Goals and Requirements
  Overview of the Solution
  Benefits and Limitations
  Inside the Implementation
  Setup and Physical Deployment
  Using Mock Issuers for Development and Testing
  Establishing Trust Relationships
  Questions
  More Information

5 Federated Identity with Windows Azure Access Control Service
  The Premise
  Goals and Requirements
  Overview of the Solution
  Example of a Customer with its Own Identity Provider
  Example of a Customer Using a Social Identity
  Trust Relationships with Social Identity Providers
  Description of Mapping Rules in a Federation Provider
  Alternative Solutions
  Inside the Implementation
  Setup and Physical Deployment
  Establishing a Trust Relationship with ACS
  Reporting Errors from ACS
  Initializing ACS
  Working with Social Identity Providers
  Managing Users with Social Identities
  Working with Windows Live IDs
  Working with Facebook
  Questions
  More Information

6 Federated Identity with Multiple Partners
  The Premise
  Goals and Requirements
  Overview of the Solution
  Step 1: Present Credentials to the Identity Provider
  Step 2: Transmit the Identity Provider’s Security Token to the Federation Provider
  Step 3: Map the Claims
  Step 4: Transmit the Mapped Claims and Perform the Requested Action
  Using Claims in Fabrikam Shipping
  Inside the Implementation
  Setup and Physical Deployment
  Establishing the Trust Relationship
  Organization Section
  Issuer Section
  Certificate Section
  User-Configurable Claims Transformation Rules
  Questions

7 Federated Identity with Multiple Partners and Windows Azure Access Control Service
  The Premise
  Goals and Requirements
  Overview of the Solution
  Step 1: Present Credentials to the Identity Provider
  Step 2: Transmit the Identity Provider’s Security Token to the Federation Provider
  Step 3: Map the Claims
  Step 4: Transmit the Mapped Claims and Perform the Requested Action
  Step 1: Present Credentials to the Identity Provider
  Step 2: Transmit the Social Identity Provider’s Security Token to ACS
  Step 3: Map the Claims
  Step 4: Transmit the Mapped Claims and Perform the Requested Action
  Enrolling a New Partner Organization
  Managing Multiple Partners with a Single Identity
  Managing Users at a Partner Organization
  Inside the Implementation
  Getting a List of Identity Providers from ACS
  Adding a New Identity Provider to ACS
  Managing Claims-Mapping Rules in ACS
  Displaying a List of Partner Organizations
  Authenticating a User of Fabrikam Shipping
  Authorizing Access to Fabrikam Shipping Data
  Setup and Physical Deployment
  Fabrikam Shipping Websites
  Sample Claims Issuers
  Initializing ACS
  Questions
  More Information

8 Claims Enabling Web Services
  The Premise
  Goals and Requirements
  Overview of the Solution
  Inside the Implementation
  Implementing the Web Service
  Implementing the Active Client
  Implementing the Authorization Strategy
  Debugging the Application
  Setup and Physical Deployment
  Configuring ADFS 2.0 for Web Services
  Questions

9 Securing REST Services
  The Premise
  Goals and Requirements
  Overview of the Solution
  Inside the Implementation
  The ACS Configuration
  Implementing the Web Service
  Implementing the Active Client
  Setup and Physical Deployment
  Configuring ADFS 2.0 for Web Services
  Configuring ACS
  Questions
  More Information

10 Accessing REST Services from a Windows Phone Device
  The Premise
  Goals and Requirements
  Overview of the Solution
  Passive Federation
  Active Federation
  Comparing the Solutions
  Inside the Implementation
  Active SAML Token Handling
  Web Browser Control
  Asynchronous Behavior
  Setup and Physical Deployment
  Questions
  More Information

11 Claims-Based Single Sign-On for Microsoft SharePoint 2010
  The Premise
  Goals and Requirements
  Overview of the Solution
  Authentication Mechanism
  End-to-End Walkthroughs
  Visiting Two Site Collections in a SharePoint Web Application
  Visiting Two SharePoint Web Applications
  Authorization in SharePoint
  The People Picker
  Single Sign-Out
  Inside the Implementation
  Relying Party Configuration in ADFS
  SharePoint STS Configuration
  Create a New SharePoint Trusted Root Authority
  Create the Claims Mappings in SharePoint
  Create a New SharePoint Trusted Identity Token Issuer
  SharePoint Web Application Configuration
  People Picker Customizations
  Single Sign-Out Control
  Displaying Claims in a Web Part
  User Profile Synchronization
  Setup and Physical Deployment
  FedAuth Tokens
  ADFS Default Authentication Method
  Server Deployment
  Questions
  More Information

12 Federated Identity for SharePoint Applications
  The Premise
  Goals and Requirements
  Overview of the Solution
  Inside the Implementation
  Token Expiration and Sliding Sessions
  SAML Token Expiration in SharePoint
  Sliding Sessions in SharePoint
  Closing the Browser
  Authorization Rules
  Home Realm Discovery
  Questions
  More Information

appendices

a using fedutil
  Using FedUtil to Make an Application Claims-Aware

b message sequences
  The Browser-Based Scenario
  The Active Client Scenario
  The Browser-Based Scenario with Access Control Service (ACS)
  Single Sign-Out

c industry standards
  Security Assertion Markup Language (SAML)
  Security Association Management Protocol (SAMP) and Internet Security Association and Key Management Protocol (ISAKMP)
  WS-Federation
  WS-Federation: Passive Requestor Profile
  WS-Security
  WS-SecureConversation
  WS-Trust
  XML Encryption

d certificates
  Certificates for Browser-Based Applications
  On the Issuer (Browser Scenario)
  Certificate for TLS/SSL (Issuer, Browser Scenario)
  Certificate for Token Signing (Issuer, Browser Scenario)
  Optional Certificate for Token Encryption (Issuer, Browser Scenario)
  On the Web Application Server
  Certificate for TLS/SSL (Web Server, Browser Scenario)
  Token Signature Verification (Web Server, Browser Scenario)
  Token Signature Chain of Trust Verification (Web Server, Browser Scenario)
  Optional Token Decryption (Web Server, Browser Scenario)
  Cookie Encryption/Decryption (Web Server, Browser Scenario)
  Certificates for Active Clients
  On the Issuer (Active Scenario)
  Certificate for Transport Security (TLS/SSL) (Issuer, Active Scenario)
  Certificate for Message Security (Issuer, Active Scenario)
  Certificate for Token Signing (Issuer, Active Scenario)
  Certificate for Token Encryption (Issuer, Active Scenario)
  On the Web Service Host
  Certificate for Transport Security (TLS/SSL) (Web Service Host, Active Scenario)
  Certificate for Message Security (Web Service Host, Active Scenario)
  Token Signature Verification (Web Service Host, Active Scenario)
  Token Decryption (Web Service Host, Active Scenario)
  Token Signature Chain Trust Verification (Web Service Host, Active Scenario)
  On the Active Client Host
  Certificate for Message Security (Active Client Host)

e windows azure appfabric access control service (acs)
  What Does ACS Do?
  Message Sequences for ACS
  ACS Authenticating Users of a Website
  ACS Authenticating Services, Smart Clients, and Mobile Devices
  Combining ACS and ADFS for Users of a Website
  Combining ACS and ADFS for Services, Smart Clients, and SharePoint BCS
  Creating, Configuring, and Using an ACS Issuer
  Step 1: Access the ACS Web Portal
  Step 2: Create a Namespace for the Issuer Service Instance
  Step 3: Add the Required Identity Providers to the Namespace
  Step 4: Configure One or More Relying Party Applications
  Step 5: Create Claims Transformations and Pass-through Rules
  Step 6: Obtain the URIs for the Service Namespace
  Step 7: Configure Relying Party Applications to Use ACS
  Custom Home Realm Discovery Pages
  Configuration with the Management Service API
  Managing Errors
  Integration of ACS and a Local ADFS Issuer
  Security Considerations with ACS
  Tips for Using ACS
  ACS and STSs Generated in Visual Studio 2010
  Error When Uploading a Federation Metadata Document
  Avoiding Use of the Default ACS Home Realm Discovery Page
  More Information

f sharepoint 2010 authentication architecture and considerations
  Benefits of a Claims-Based Architecture
  Windows Identity Foundation Implementation of the Claims-Based Architecture
  SharePoint 2010 User Identity
  The SharePoint 2010 Security Token Service
  The SharePoint 2010 Services Application Framework
  Considerations When Using Claims with SharePoint
  Choosing an Authentication Mode
  Supported Standards
  Using Multiple Authentication Mechanisms
  SharePoint Groups with Claims Authentication
  SharePoint Profiles and Audiences with Claims Authentication
  Rich Client, Office, and Reporting Applications with Claims Authentication
  Other Trade-offs and Limitations for Claims Authentication
  Configuring SharePoint to Use Claims
  Tips for Configuring Claims in SharePoint
  More Information

glossary

answers to questions

index

 

Foreword

Claims-based identity seeks to control the digital experience and allocate digital resources based on claims made by one party about another. A party can be a person, organization, government, website, web service, or even a device. The very simplest example of a claim is something that a party says about itself.

As the authors of this book point out, there is nothing new about the use of claims. As far back as the early days of mainframe computing, the operating system asked users for passwords and then passed each new application a “claim” about who was using it. But this world was based to some extent on wishful thinking because applications didn’t question what they were told.

As systems became interconnected and more complicated, we needed ways to identify parties across multiple computers. One way to do this was for the parties that used applications on one computer to authenticate to the applications (and/or operating systems) that ran on the other computers. This mechanism is still widely used—for example, when logging on to a great number of Web sites.

However, this approach becomes unmanageable when you have many co-operating systems (as is the case, for example, in the enterprise). Therefore, specialized services were invented that would register and authenticate users, and subsequently provide claims about them to interested applications. Some well-known examples are NTLM, Kerberos, Public Key Infrastructure (PKI), and the Security Assertion Markup Language (SAML).

If systems that use claims have been around for so long, how can claims-based computing be new or important? The answer is a variant of the old adage, “All tables have legs, but not all legs have tables.” The claims-based model embraces and subsumes the capabilities of all the systems that have existed to date, but it also allows many new things to be accomplished. This book gives a great sense of the resultant opportunities.

For one thing, identity no longer depends on the use of unique identifiers. NTLM, Kerberos, and public key certificates conveyed, above all else, an identification number or name. This unique number could be used as a directory key to look up other attributes and to track activities. But once we start thinking in terms of claims-based computing, identifiers are no longer mandatory. We don’t need to say that a person is associated with the number X, and then look in a database to see if number X is married. We just say the person is married. An identifier is reduced to one potential claim (a thing said by some party) among many.

This opens up the possibility of many more directly usable and substantive claims, such as a family name, a person’s citizenship, the right to do something, or the fact that someone is in a certain age group or is a great customer. One can make this kind of claim without revealing a party’s unique identity. This has immense implications for privacy, which becomes an increasingly important concern as digital identity is applied to our personal lives.

Further, while the earlier systems were all hermetic worlds, we can now look at them as examples of the same thing and transform a claim made in one world to a claim made in another. We can use “claims transformers” to convert claims from one system to another, to interpret meanings, apply policies, and to provide elasticity. This is what makes claims essential for connecting our organizations and enterprises into a cloud. Because they are standardized, we can use them across platforms and look at the distributed fabric as a real circuit board on which we can assemble our services and components.

Claims offer a single conceptual model, programming interface, and end-user paradigm, whereas before claims we had a cacophony of disjoint approaches. In my experience, the people who use these new approaches to build products universally agree that they solve many pressing problems that were impossibly difficult before. Yet these people also offer a word of advice. Though embracing what has existed, the claims-based paradigm is fundamentally a new one; the biggest challenge is to understand this and take advantage of it.

That’s why this book is so useful. It deals with the fundamental issues, but it is practical and concise. The time spent reading it will be repaid many times over as you become an expert in one of the transformative technologies of our time.

Kim Cameron
Distinguished Engineer—Microsoft Identity Division

 

Foreword

In the spring of 2008, months before the Windows® Identity Foundation made its first public appearance, I was on the phone with the chief software architect of a Fortune 500 company when I experienced one of those vivid, clarifying moments that come during the course of a software project. We were chatting about how difficult it was to manage an environment with hundreds, or even thousands, of developers, all building different kinds of applications for different audiences. In such an environment, the burden of consistent application security usually falls on the shoulders of one designated security architect.

A big part of that architect’s job is to guide developers on how to handle authentication. Developers have many technologies to choose from. Microsoft® Windows Integrated Authentication, SAML, LDAP, and X.509 are just a few. The security architect is responsible for writing detailed implementation guidance on when and how to use all of them. I imagined a document with hundreds of pages of technology overviews, decision flowcharts, and code appendices that demonstrate the correct use of technology X for scenario Y. “If you are building a web application, for employees, on the intranet, on Windows, use Windows Integrated Authentication and LDAP, send your queries to the enterprise directory….”

I could already tell that this document, despite the architect’s best efforts, was destined to sit unread on the corner of every developer’s desk. It was all just too hard; although every developer knows security is important, no one has the time to read all that. Nevertheless, every organization needed an architect to write these guidelines. It was the only meaningful thing they could do to manage this complexity.

It was at that moment that I realized the true purpose of the forthcoming Windows Identity Foundation. It was to render the technology decision trivial. Architects would no longer need to create complex guidelines for authentication. This was an epiphany of sorts.

Windows Identity Foundation would allow authentication logic to be factored out of the application logic, and as a result most developers would never have to deal with the underlying complexity. Factoring out authentication logic would insulate applications from changing requirements. Making an application available to users at multiple organizations or even moving it to the cloud would just mean reconfiguring the identity infrastructure, not rewriting the application code. This refactoring of identity logic is the basis of the claims-based identity model.

Eugenio Pace from the Microsoft patterns & practices group has brought together some of the foremost minds on this topic so that their collective experience can be yours. He has focused on practical scenarios that will help you get started writing your own claims-aware applications. The guide works progressively, with the simplest and most common scenarios explained first. It also contains a clear overview of the main concepts. Working source code for all of the examples can be found online (http://claimsid.codeplex.com).

I have truly enjoyed having Eugenio be part of our extended engineering team during this project. His enthusiasm, creativity, and perseverance have made this book possible. Eugenio is one of the handful of people I have met who revel in the challenge of identity and security and who care deeply that it be done right.

Our goal is for this book to earn its way to the corner of your desk and lie there dog-eared and much referenced, so that we can be your identity experts and you can get on with the job that is most important to you: building applications that matter. We wish you much success.

Stuart Kwan
Group Program Manager, Identity and Access Platform

 

Foreword

As you prepare to dive into this guide and gain a deeper understanding of the integration between claims authentication and Microsoft® SharePoint® 2010, you may find the following admission both exhilarating and frightening at the same time: two years ago I knew virtually nothing about claims authentication. Today, I sit here writing a foreword to an extensive guide on the topic. Whether that’s because a few people think I know a thing or two about claims, or just that no one else could spare the time to do it, well, I’ll leave that for you to decide.

Fortunately, this guide will give you a big advantage over what I had to work with, and by the time you’re finished reading it you’ll understand the symbiotic relationship between claims and SharePoint 2010; the good news is that it won’t take you two years to do so.

I’ll be the first to admit that claims authentication, in different flavors, has been around for a number of years. Like many technologies that turn into core platform components though, it often takes a big bet by a popular product or company to get a technology onto the map. I think SharePoint 2010 has helped create acceptance for claims authentication. Changes of this magnitude are often hard to appreciate at the time, but I think we’ll look back at this release some day and recognize that, for many of us, this was the time when we really began to appreciate what claims authentication offers.

From Windows claims, or authentication as we’ve always known it, to the distributed authentication model of SAML claims, there are more choices than ever before. Now we can use federated authentication much more easily with products such as Active Directory® Federation Services (ADFS) 2.0, or even connect our SharePoint farms to authentication providers in the cloud, such as the Windows Azure™ AppFabric Access Control Service. We aren’t authenticating only Windows users anymore; we can have users authenticate against our Active Directory from virtually any application—SiteMinder, Yahoo, Google, Windows Live, Novell eDirectory. Now we can even write our own identity provider using Microsoft Visual Studio® and the Windows Identity Foundation framework. We can use those claims in SharePoint; we can add our own custom claims to them, we can inject our own code into the out-of-the-box people picker, and much more.

I believe this guide provides you with the foundation to help you take advantage of all of these opportunities and more. Many people from around the company either directly or indirectly helped to contribute to its success. Here’s hoping you can build on it and turn it into your own success.

Steve Peschka
Principal Architect
Microsoft SharePoint Online—Dedicated

 

Preface

As an application designer or developer, imagine a world in which you don’t have to worry about authentication. Imagine instead that all requests to your application already include the information you need to make access control decisions and to personalize the application for the user.

In this world, your applications can trust another system component to securely provide user information, such as the user’s name or email address, a manager’s email address, or even a purchasing authorization limit. The user’s information always arrives in the same simple format, regardless of the authentication mechanism, whether it’s Microsoft® Windows® integrated authentication, forms-based authentication in a web browser, an X.509 client certificate, or something more exotic. Even if someone in charge of your company’s security policy changes how users authenticate, you still get the information, and it’s always in the same format.

This is the utopia of claims-based identity that A Guide to Claims-Based Identity and Access Control describes. As you’ll see, claims provide an innovative approach for building applications that authenticate and authorize users.
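This uniform format is easiest to see in code. The fragment below is a minimal, illustrative sketch (it is not taken from the guide’s sample applications) of how a claims-aware ASP.NET application built with the Windows Identity Foundation might read one piece of user information; it assumes WIF is configured for the application and has already authenticated the request:

```csharp
// Illustrative sketch only: WIF has authenticated the request, so the
// current principal carries claims issued by a trusted issuer.
using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class ClaimsExample
{
    public static string GetEmail()
    {
        // WIF replaces the usual Windows or forms identity with a claims identity.
        var identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;

        // Whatever the authentication mechanism was, attributes arrive the
        // same way: as claims, each with a type, a value, and an issuer.
        return identity.Claims
            .Where(c => c.ClaimType == ClaimTypes.Email)
            .Select(c => c.Value)
            .FirstOrDefault();
    }
}
```

The same few lines work whether the user signed in with Windows integrated authentication, a forms login, or a client certificate; only the issuer’s configuration changes.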

 

Who This Book Is For

This book gives you enough information to evaluate claims-based identity as a possible option when you’re planning a new application or making changes to an existing one. It is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates web applications and services that require identity information about their users. Although applications that use claims-based identity exist on many platforms, this book is written for people who work with Windows-based systems. You should be familiar with the Microsoft .NET Framework, ASP.NET, Windows Communication Foundation (WCF), the Microsoft Active Directory® directory service, and the Microsoft Visual C#® development system.

 

Why This Book Is Pertinent Now

Although claims-based identity has been possible for quite a while, there are now tools available that make it much easier for developers of Windows-based applications to implement it. These tools include the Windows Identity Foundation (WIF) and Microsoft Active Directory Federation Services (ADFS) 2.0. This book shows you when and how to use these tools in the context of some commonly occurring scenarios.

 

A Note about Terminology

This book explains claims-based identity without using a lot of new terminology. However, if you read the various standards and much of the existing literature, you’ll see terms such as relying party, STS, subject, identity provider, and so on. Here is a short list that equates some of the most common expressions used in the literature with the more familiar terms used in this book. For additional clarification about terminology, see the glossary at the end of the book.

relying party (RP) = application
service provider (SP) = application

A relying party or a service provider is an application that uses claims. The term relying party arose because the application relies on an issuer to provide information about identity. The term service provider is commonly used with the Security Assertion Markup Language (SAML). Because this book is intended for people who design and build applications, it uses application, or claims-aware application, when it is discussing the functionality of the application, and relying party or RP when it is talking about the role of the application in relation to identity providers and federation providers. It does not use service provider or SP.

subject = user
principal = user

A subject or a principal is a user. The term subject has been around for years in security literature, and it does make sense when you think about it—the user is the subject of access control, personalization, and so on. A subject can be a non-human entity, such as a printer or another device, but this book doesn’t discuss such scenarios. In addition, the .NET Framework uses the term principal rather than subject. This book talks about users rather than subjects or principals.

security token service (STS) = issuer

Technically, a security token service is the interface within an issuer that accepts requests and creates and issues security tokens containing claims.

identity provider (IdP) = issuer

An identity provider is an issuer, or a token issuer if you prefer. Identity providers validate various user credentials, such as user names, passwords, and certificates; and they issue tokens.

resource security token service (R-STS) = issuer

A resource security token service accepts one token and issues another. Rather than having information about identity, it has information about the resource. For example, an R-STS can translate tokens issued by an identity provider into application-specific claims.

active client = smart or rich client
passive client = browser

Much of the literature refers to active versus passive clients. An active client can use a sophisticated library such as Windows Communication Foundation (WCF) to implement the protocols that request and pass around security tokens (WS-Trust is the protocol used in active scenarios). In order to support many different browsers, the passive scenarios use a much simpler protocol to request and pass around tokens, one that relies on simple HTTP primitives such as HTTP GET (with redirects) and POST. (This simpler protocol is defined in the WS-Federation specification, section 13.)

In this book, an active client is a rich client or a smart client. A passive client is a web browser.

 

How This Book Is Structured

You can think of the structure of this book as a subway that has main lines and branches. Following the Preface, there are two chapters that contain general information. These are followed by scenarios that show how to apply this knowledge with increasingly more sophisticated requirements.

Here is the map of our subway.

[Figure 1: Map of the book. The stations on the map are: Preface; An Introduction to Claims; Claims-Based Architectures; Claims-Based Single Sign-On for the Web; Single Sign-On in Windows Azure; Claims-Based Single Sign-On for SharePoint; Federated Identity for Web Applications; Federated Identity with Windows Azure Access Control Service; Federated Identity for SharePoint Applications; Federated Identity with Multiple Partners; Federated Identity with Multiple Partners and ACS; Claims Enabling Web Services; Securing REST Services; Accessing REST Services from Windows Phone.]

 


 

An Introduction to Claims explains what a claim is and provides general rules on what makes good claims and how to incorporate them into your application. It’s probably a good idea to read this chapter before you move on to the scenarios.

 

Claims-Based Architectures shows you how to use claims with browser-based applications and smart client applications. In particular, the chapter focuses on how to implement single sign-on for your users, whether they are on an intranet or an extranet. This chapter is optional. You don’t need to read it before you proceed to the scenarios.

 

Claims-Based Single Sign-On for the Web and Windows Azure is the starting point of the path that explores the implementation of single sign-on and federated identity. This chapter shows you how to implement single sign-on and single sign-out within a corporate intranet. Although this may be something that you can also implement with Integrated Windows Authentication, it is the first stop on the way to implementing more complex scenarios. It includes a section for the Windows Azure® technology platform that shows you how to move the claims-based application to the cloud.

 

Federated Identity for Web Applications shows how you can give your business partners access to your applications while maintaining the integrity of your corporate directory and theirs. In other words, your partners’ employees can use their own corporate credentials to gain access to your applications.

 

Federated Identity with Windows Azure Access Control Service is the start of a parallel path that explores Windows Azure AppFabric Access Control Service (ACS) in the context of single sign-on and federated identity. This chapter extends the scenarios described in the previous chapter to enable users to authenticate using social identity providers such as Google and the Windows Live® network of Internet services.

 

Federated Identity with Multiple Partners is a variation of the federated identity scenario that shows you how to federate with partners who have no issuer of their own as well as those who do. It demonstrates how to use the ASP.NET MVC framework to create a claims-aware application.

 

Federated Identity with Multiple Partners and Windows Azure Access Control Service extends the scenarios described in the previous chapter to include ACS, giving users additional authentication choices that include social identity providers such as Google and Windows Live.

 


 

Claims Enabling Web Services is the first of a set of chapters that explore authentication for active clients rather than web browsers. This chapter shows you how to use the claims-based approach with web services, whereby a partner uses a smart client that communicates with identity providers and token issuers using SOAP-based services.

 

Securing REST Services shows how to use the claims-based approach with web services, whereby a partner uses a smart client that communicates with identity providers and token issuers using REST-based services.

 

Accessing REST Services from a Windows Phone Device shows how you can use claims-based techniques with Windows Phone™ wireless devices. It discusses the additional considerations that you must take into account when using claims-based authentication with mobile devices.

 

Claims-Based Single Sign-On for Microsoft SharePoint 2010 begins a path that explores how you can use claims-based identity techniques with Microsoft SharePoint 2010. This chapter shows how SharePoint web applications can use claims-based authentication with an external token issuer such as ADFS to enable access from both internal locations and externally over the web.

 

Federated Identity for SharePoint Applications extends the previous chapter to show how you can use federated identity techniques to enable users to authenticate using more than one identity provider and token issuer.

 

About the Technologies

 

In this guide, you will find discussion of several technologies with which you may not be immediately familiar. The following is a brief description of each one, together with links to where you can find more information.

Windows Identity Foundation (WIF). WIF contains a set of components that enable developers using the Microsoft .NET Framework to externalize identity logic from their application, improving developer productivity, enhancing application security, and enabling interoperability. Developers can apply the same tools and programming model to build on-premises software as well as cloud services without requiring custom implementations. WIF uses a single simplified identity model based on claims, together with interoperability based on industry-standard protocols. For more information, see “Windows Identity Foundation Simplifies User Access for Developers,” at http://msdn.microsoft.com/en-us/security/aa570351.aspx.

 


 

Active Directory Federation Services (ADFS). ADFS is a server role in Windows Server® that provides simplified access and single sign-on for on-premises and cloud-based applications in the enterprise, across organizations, and on the web. It acts as an identity provider and token issuer to enable user access with native single sign-on across organizational boundaries and in the cloud, and to easily connect applications by utilizing industry-standard protocols. For more information, see “Active Directory Federation Services 2.0,” at http://www.microsoft.com/windowsserver2008/en/us/ad-fs-2-overview.aspx.

Windows Azure. Windows Azure is a cloud services platform that serves as the development, service hosting, and service management environment. It is a flexible platform that supports multiple languages and provides developers with on-demand compute and storage services to host, scale, and manage web applications over the Internet through Microsoft datacenters. For more information, see “Windows Azure,” at http://www.microsoft.com/windowsazure/windowsazure/default.aspx.

Windows Azure AppFabric Access Control Service (ACS). ACS is an easy way to provide identity and access control to web applications and services while integrating with standards-based identity providers. These identity providers can include enterprise directories such as Active Directory, and web identities such as Windows Live ID, Google, Yahoo!, and Facebook. ACS enables authorization decisions to be moved out of the application and into a set of declarative rules that can transform incoming security claims into claims that applications understand, and can also be used to manage users’ permissions. For more information, see “Windows Azure Access Control,” at http://www.microsoft.com/windowsazure/appfabric/overview/default.aspx.

 

What You Need to Use the Code

You can either run the scenarios on your own system or you can create a realistic lab environment. Running the scenarios on your own system is very simple and has only a few requirements, which are listed below.
•     Microsoft Windows Vista® SP1, Windows 7, Windows Server 2008 (32-bit or 64-bit), or Windows Server 2008 R2 (32-bit or 64-bit)
•     Microsoft Internet Information Services (IIS) 7.0 or 7.5
•     Microsoft .NET Framework 4.0
•     Microsoft Visual Studio® 2010 (excluding Express editions)
•     Windows Azure Tools for Microsoft Visual Studio
•     Windows Identity Foundation

 


 

NOTE: If you want to install the Windows Azure Tools on Windows Server 2008 R2, you must first install the .NET Framework version 3.5.1. This is also required for the HTTP Activation features. The .NET Framework version 3.5.1 can be installed from Windows Update.

 

Running the scenarios in a realistic lab environment, with an instance of Active Directory Federation Services (ADFS) and Active Directory, requires an application server, ADFS, Active Directory, and a client system. Here are their system requirements.

 

Application Server
The application server requires the following:
•     Windows Server 2008 or Windows Server 2008 R2
•     Microsoft Internet Information Services (IIS) 7.0 or 7.5
•     Microsoft Visual Studio 2010 (excluding Express editions)
•     .NET Framework 4.0
•     Windows Identity Foundation

 

ADFS
The ADFS server requires the following:
•     Windows Server 2008 or Windows Server 2008 R2
•     Microsoft Internet Information Services (IIS) 7.0 or 7.5
•     .NET Framework 4.0
•     Microsoft SQL Server® 2005 or 2008 Express Edition

 

Active Directory
The Active Directory system requires Windows Server 2008 or Windows Server 2008 R2 with Active Directory installed.

 

Client Computer
The client computer requires Windows Vista or Windows 7 for active scenarios. Passive scenarios may use any web browser that supports HTTP redirection as the client.

 


 

Who’s Who

 

As we’ve said, this book uses a number of scenarios that trace the evolution of several corporate applications. A panel of experts comments on the development efforts. The panel includes a security specialist, a software architect, a software developer, and an IT professional. The scenarios can be considered from each of these points of view. Here are our experts.

 

Bharath is a security specialist. He checks that solutions for authentication and authorization reliably safeguard a company’s data. He is a cautious person, with good reason.

Providing authentication for a single application is easy. Securing all applications across our organization is a different thing.

 

Jana is a software architect. She plans the overall structure of an application. Her perspective is both practical and strategic. In other words, she considers not only what technical approaches are needed today, but also what direction a company needs to consider for the future.

It’s not easy, balancing the needs of users, the IT organization, the developers, and the technical platforms we rely on.

 

Markus is a senior software developer. He is analytical, detail-oriented, and methodical. He’s focused on the task at hand, which is building a great claims-based application. He knows that he’s the person who’s ultimately responsible for the code.

I don’t care what you use for authentication, I’ll make it work.

 

Poe is an IT professional who’s an expert in deploying and running applications in a corporate data center. He’s also an Active Directory guru. Poe has a keen interest in practical solutions; after all, he’s the one who gets paged at 3:00 AM when there’s a problem.

Each application handles authentication differently. Can I get a bit of consistency please?!?

 

If you have a particular area of interest, look for notes provided by the specialists whose interests align with yours.

 


 

Acknowledgments

 

This book marks a milestone in a journey I started in the winter of 2007. At that time, I was offered the opportunity to enter a completely new domain: the world of software delivered as a service. Offerings such as the Windows Azure™ technology platform were far from being realized, and “the cloud” was still to be defined and fully understood. My work focused mainly on uncovering the specific challenges that companies would face with this new way of delivering software.

It was immediately obvious that managing identity and access control was a major obstacle for developers. Identity and access control were fundamental. They were prerequisites for everything else. If you didn’t get authentication and authorization right, you would be building your application on a foundation of sand.

Thus began my journey into the world of claims-based identity. I was very lucky to initiate this journey with none other than a claims Jedi, Vittorio Bertocci. He turned me into a convert.

Initially, I was puzzled that so few people were deploying what seemed, at first glance, to be simple principles. Then I understood why. In my discussions with colleagues and customers, I frequently found myself having to think twice about many of the concepts and about the mechanics needed to put them into practice. In fact, even after longer exposure to the subject, I found myself having to carefully retrace the interactions among implementation components. The principles may have been simple, but translating them into running code was a different matter. Translating them into the right running code was even harder.

Around this time, Microsoft announced Windows Identity Foundation (WIF), Active Directory® Federation Services (ADFS) 2.0, and Windows Azure AppFabric Access Control Service (ACS). Once I understood how to apply those technologies, and how they dramatically simplified claims-based development, I realized that the moment had come to create a guide like the one you are now reading.

 


 

Even after I had spent a significant amount of time on the subject, I realized that providing prescriptive guidance required greater proficiency than my own, and I was lucky to be able to recruit for my quest some very bright and experienced experts. I have thoroughly enjoyed working with them on this project and would be honored to work with this fine team again. I was also fortunate to have skilled software developers, software testers, technical writers, and others as project contributors.

I want to start by thanking the following subject matter experts and key contributors to this guide: Dominick Baier, Vittorio Bertocci, Keith Brown, and Matias Woloski. These guys were outstanding. I admire their rigor, their drive for excellence, and their commitment to practical solutions.

Running code is a very powerful device for explaining how technology works. Designing sample applications that are both technically and pedagogically sound is no simple task. I want to thank the project’s development and test teams for providing that balance: Federico Boerr, Carlos Farre, Diego Marcet, Anant Manuj Mittal, Erwin van der Valk, and Matias Woloski.

This guide is meant to be authoritative and prescriptive in the topics it covers. However, we also wanted it to be simple to understand, approachable, and entertaining—a guide you would find interesting and you would enjoy reading. We invested in two areas to achieve these goals: an approachable writing style and an appealing visual design.

A team of technical writers and editors was responsible for the text. They performed the miracle of translating and organizing our jargon- and acronym-plagued drafts, notes, and conversations into clear, readable text. I want to direct many thanks to RoAnn Corbisier, Colin Campbell, Roberta Leibovitz, and Tina Burden for doing such a fine job on that.

The innovative visual design concept used for this guide was developed by Roberta Leibovitz and Colin Campbell (Modeled Computation LLC), who worked with a team of talented designers and illustrators. The book design was created by John Hubbard (Eson). The cartoon faces and chapter divisions were drawn by the award-winning Seattle-based cartoonist Ellen Forney. The technical illustrations were adapted from my Tablet PC mock-ups by Veronica Ruiz. I want to thank the creative team for giving this guide such a great look.

I also want to thank all the customers, partners, and community members who have patiently reviewed our early content and drafts. You have truly helped us shape this guide. Among those, I want to highlight the exceptional contributions of Zulfiqar Ahmed, Michele Leroux Bustamante (IDesign), Pablo Mariano Cibraro (Tellago Inc), Hernan DeLahitte (DigitFactory), Pedro Felix, Tim Fischer (Microsoft Germany), Mario Fontana, David Hill, Doug Hiller, Jason Hogg, Ezequiel Jadib, Brad Jonas, Seshadri Mani, Marcelo Mas, Vijayavani Nori, Krish Shenoy, Travis Spencer (www.travisspencer.com), Mario Szpuszta (Sr. Architect Advisor, Microsoft Austria), Chris Tavares, Peter M. Thompson, and Todd West.

Finally, I want to thank Stuart Kwan and Conrad Bayer from the Identity Division at Microsoft for their support throughout. Even though their teams were extremely busy shipping WIF and ADFS, they always found time to help us.

 

Eugenio Pace

Senior Program Manager – patterns & practices

Microsoft Corporation

 

Acknowledgements to Contributors to this Second Edition

All our guides are the result of great work from many people. I’m happy to see that so many of the original contributors and advisors of our first guide also worked on this one. The interest in this particular area has increased notably since the first edition was published. Proof of that is the continued investment by Microsoft in tools, services, and products.

As our scope increased to cover SharePoint and Windows Azure Access Control Service, we also added new community members and industry experts who have significantly helped throughout the development of this new edition.

We’d like to acknowledge the following individuals who have contributed exceptionally to it: Zulfiquar Ahmed, Dominic Betts, Federico Boerr, Robert Bogue, Jonathan Cisneros, Shy Cohen, David Crawford, Pedro Felix, David Hill, Alex Homer, Laura Hunter, Chris Keyser, Jason Lee, Alik Levin, Masashi Narumoto, Nicolas Paez, Brian Puhl, Paul Schaeflein, Ken St. Cyr, Venky Veeraraghavan, Rathi Velusamy, Bill Wilder, Daz Wilkin, Jim Zimmerman, Scott Densmore, Steve Peschka, and Christian Nielsen.

We also want to thank everyone who participated in our CodePlex community site.

 

Eugenio Pace

Sr. Program Manager Lead – patterns & practices

Microsoft Corporation, May 2011

 


 

1 An Introduction to Claims

 

This chapter discusses some concepts, such as claims and federated identity, that may sound new to you. However, many of these ideas have been around for a long time. The mechanics involved in a claims-based approach have a flavor similar to Kerberos, which is one of the most broadly accepted authentication protocols in use today and is also the protocol used by Microsoft® Active Directory® directory service. Federation protocols such as WS-Federation and the Security Assertion Markup Language (SAML) have been with us for many years as interoperable protocols that are implemented on all major technology platforms.

Claims-based identity isn’t new. It’s been in use for almost a decade.

 

What Do Claims Provide?

 

To see the power of claims, you might need to change your view of

authentication. It’s easy to let a particular authentication mechanism

constrain your thinking. If you use Integrated Windows Authentica-

tion (Kerberos or NTLM), you probably think of identity in terms of

Microsoft Windows® user accounts and groups. If you use the ASP.

NET membership and roles provider, you probably think in terms of

user names, passwords, and roles. If you try to determine what the

different authentication mechanisms have in common, you can ab-

stract the individual elements of identity and access control into two

parts: a single, general notion of claims, and the concept of an issuer

or an authority.

 

A claim is a statement that one subject makes about itself or another subject. The statement can be about a name, identity, key, group, privilege, or capability, for example. Claims are issued by a provider, and they are given one or more values and then packaged in security tokens that are issued by an issuer, commonly known as a security token service (STS). For a full list of definitions of terms associated with claims-based identity, see “Claims-Based Identity Term Definitions” at http://msdn.microsoft.com/en-us/library/ee534975.aspx.

 

Thinking in terms of claims and issuers is a powerful abstraction that supports new ways of securing your applications. Because claims involve an explicit trust relationship with an issuer, your application believes a claim about the current user only if it trusts the entity that issued the claim. Trust is explicit in the claims-based approach, not implicit as in other authentication and authorization approaches with which you may be familiar.

The following table shows the relationships between security tokens, claims, and issuers.

 

You can use claims to implement role-based access control (RBAC). Roles are claims, but claims can contain more information than just role membership. Also, you can send claims inside a signed (and possibly encrypted) security token to assure the receiver that they come from a trusted issuer.

Security token: Windows token. This token is represented as a security identifier (SID). This is a unique value of variable length that is used to identify a security principal or security group in Windows operating systems.
Claims: User name and groups.
Issuer: Windows Active Directory domain.

Security token: User name token.
Claims: User name.
Issuer: Application.

Security token: Certificate.
Claims: Examples can include a certificate thumbprint, a subject, or a distinguished name.
Issuer: Certification authorities, including the root authority and all authorities in the chain to the root.
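The RBAC idea in the note above can be pictured in a few lines of code: a role is just one kind of claim among others in the token. The Python sketch below is illustrative only; the claim types and token shape are invented for this example, not taken from WIF.

```python
# Sketch of role-based access control driven by claims. A token is
# modeled as a list of (claim type, claim value) pairs; a role check
# is nothing more than a lookup for a particular claim type.

def get_claims(token, claim_type):
    """Return every value of the given claim type in the token."""
    return [value for (ctype, value) in token if ctype == claim_type]

def is_in_role(token, role):
    """RBAC check: is 'role' among the token's role claims?"""
    return role in get_claims(token, "role")

# A token can carry more than role membership: name, email, and so on.
token = [("name", "alice"), ("role", "Manager"), ("role", "Approver"),
         ("email", "alice@example.com")]
```

A caller would write `is_in_role(token, "Manager")` exactly where it previously checked a Windows group, which is what makes roles-as-claims a drop-in generalization.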

 

The claims-based approach to identity makes it easy for users to sign in using Kerberos where it makes sense, but at the same time, it’s just as easy for them to use one or more (perhaps more Internet-friendly) authentication techniques, without you having to recode, recompile, or even reconfigure your applications. You can support any authentication technique, some of the most popular being Kerberos, forms authentication, X.509 certificates, and smart cards, as well as information cards and others.

Claims provide a powerful abstraction for identity.

 

Not Every System Needs Claims
Sometimes claims aren’t needed. This is an important disclaimer. Companies with a host of internal applications can use Integrated Windows Authentication to achieve many of the benefits provided by claims. Active Directory does a great job of storing user identities, and because Kerberos is a part of Windows, your applications don’t have to include much authentication logic. As long as every application you build can use Integrated Windows Authentication, you may have already reached your identity utopia.

 


 

However, there are many reasons why you might need something other than Windows authentication. You might have web-facing applications that are used by people who don’t have accounts in your Windows domain. Another reason might be that your company has merged with another company and you’re having trouble authenticating across two Windows forests that don’t (and may never) have a trust relationship. Perhaps you want to share identities with another company that has non-.NET Framework applications, or you need to share identities between applications running on different platforms (for example, the Macintosh). These are just a few situations in which claims-based identity can be the right choice for you.

 

Claims Simplify Authentication Logic
Most applications include a certain amount of logic that supports identity-related features. Applications that can’t rely on Integrated Windows Authentication tend to have more of this than applications that do. For example, web-facing applications that store user names and passwords must handle password reset, lockout, and other issues. Enterprise-facing applications that use Integrated Windows Authentication can rely on the domain controller.

But even with Integrated Windows Authentication, there are still challenges. Kerberos tickets only give you a user’s account and a list of groups. What if your application needs to send email to the user? What if you need the email address of the user’s manager? This starts to get complicated quickly, even within a single domain. To go beyond the limitations of Kerberos, you need to program Active Directory. This is not a simple task, especially if you want to build efficient Lightweight Directory Access Protocol (LDAP) queries that don’t slow down your directory server.

Claims-based identity allows you to factor out the authentication logic from individual applications. Instead of the application determining who the user is, it receives claims that identify the user.

Claims help you to factor authentication logic out of your applications.

 

A Familiar Example
Claims-based identity is all around us. A very familiar analogy is the authentication protocol you follow each time you visit an airport. You can’t simply walk up to the gate and present your passport or driver’s license. Instead, you must first check in at the ticket counter. Here, you present whatever credential makes sense. If you’re going overseas, you show your passport. For domestic flights, you present your driver’s license. After verifying that your picture ID matches your face (authentication), the agent looks up your flight and verifies that you’ve paid for a ticket (authorization). Assuming all is in order, you receive a boarding pass that you take to the gate.

 


 

A boarding pass is very informative. Gate agents know your name and frequent flyer number (authentication and personalization), your flight number and seating priority (authorization), and perhaps even more. The gate agents have everything that they need to do their jobs efficiently.

There is also special information on the boarding pass. It is encoded in the bar code and/or the magnetic strip on the back. This information (such as a boarding serial number) proves that the pass was issued by the airline and is not a forgery.

In essence, a boarding pass is a signed set of claims made by the airline about you. It states that you are allowed to board a particular flight at a particular time and sit in a particular seat. Of course, agents don’t need to think very deeply about this. They simply validate your boarding pass, read the claims on it, and let you board the plane.

It’s also important to note that there may be more than one way of obtaining the signed set of claims that is your boarding pass. You might go to the ticket counter at the airport, or you might use the airline’s web site and print your boarding pass at home. The gate agents boarding the flight don’t care how the boarding pass was created; they don’t care which issuer you used, as long as it is trusted by the airline. They only care that it is an authentic set of claims that gives you permission to get on the plane.

In software, this bundle of claims is called a security token. Each security token is signed by the issuer who created it. A claims-based application considers users to be authenticated if they present a valid, signed security token from a trusted issuer. Figure 1 shows the basic pattern for using claims.

 

[Diagram: a client (1) authenticates with the issuer, (2) receives an issued security token, and (3) sends the token to the application.]

figure 1
Issuers, security tokens, and applications

 


 

For an application developer, the advantage of this system is clear: your application doesn’t need to worry about what sort of credentials the user presents. Someone who determines your company’s security policy can make those rules, and buy or build the issuer. Your application simply receives the equivalent of a boarding pass. No matter what authentication protocol was used, Kerberos, SSL, forms authentication, or something more exotic, the application gets a signed set of claims that has the information it needs about the user. This information is in a simple format that the application can use immediately.
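A rough sketch of that validation step looks like this: before believing any claims, the application checks the token’s signature against the key of an issuer it trusts. The signing scheme below (an HMAC over a serialized claim set, with a hypothetical issuer URL and shared key) is a deliberate simplification for illustration; real tokens use standardized formats such as SAML with richer cryptography.

```python
import hashlib
import hmac
import json

# Hypothetical shared keys for the issuers this application trusts.
TRUSTED_ISSUER_KEYS = {"http://issuer.example.com": b"issuer-secret-key"}

def issue_token(issuer, claims, key):
    """Issuer side: package the claims and sign them."""
    payload = json.dumps({"issuer": issuer, "claims": claims}, sort_keys=True)
    signature = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def validate_token(token):
    """Application side: accept claims only from a trusted issuer."""
    payload = json.loads(token["payload"])
    key = TRUSTED_ISSUER_KEYS.get(payload["issuer"])
    if key is None:
        raise ValueError("untrusted issuer")
    expected = hmac.new(key, token["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        raise ValueError("invalid signature")
    return payload["claims"]

token = issue_token("http://issuer.example.com",
                    {"name": "alice", "email": "alice@example.com"},
                    b"issuer-secret-key")
claims = validate_token(token)
```

Note that the application never sees a password; it only decides whether it trusts the issuer that signed the token, which is the explicit trust relationship described above.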

 

What Makes a Good Claim?
Think about claims the same way you think about attributes in a central repository such as Active Directory, over which you have little control. Security tokens can contain claims such as the user’s name, email address, manager’s email address, groups, roles, and so on. Depending on your organization, it may be easy or difficult to centralize lots of information about users and issue claims to share that information with applications.

When you decide what kinds of claims to issue, ask yourself how hard it is to convince the IT department to extend the Active Directory schema. They have good reasons for staying with what they already have. If they’re reluctant now, claims aren’t going to change that. Keep this in mind when you choose which attributes to use as claims.

It rarely makes sense to centralize data that is specific to only one application. In fact, applications that use claims can benefit from storing a separate table that contains user information. This table is where you can keep application-specific user data that no other application cares about. This is data for which your application is authoritative. In other words, it is the single source for that data, and someone must be responsible for keeping it up to date.

Another use for a table like this is to cache non-authoritative data that you get from claims. For example, you might cache an email claim for each user so that you can send out notification email without the user having to be logged in. You should treat any cached claims as read-only and refresh them the next time the user visits your application and presents fresh claims. Include a date column that you update each time you refresh the record. That way, you know how stale the cached claims have become when it comes time to use them.
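The caching pattern just described can be sketched as follows. The table layout, the in-memory dictionary standing in for the database, and the 30-day staleness threshold are all illustrative choices, not prescriptions.

```python
from datetime import datetime, timedelta, timezone

# Stand-in for an application-owned table of cached, non-authoritative
# claim values. Cached claims are treated as read-only between refreshes.
claim_cache = {}

def refresh_cached_claims(user_id, claims):
    """Called whenever the user presents fresh claims; overwrites the row."""
    claim_cache[user_id] = {
        "email": claims.get("email"),
        "refreshed_at": datetime.now(timezone.utc),  # the 'date column'
    }

def get_cached_email(user_id, max_age=timedelta(days=30)):
    """Read-only lookup that also reports whether the value is stale."""
    row = claim_cache.get(user_id)
    if row is None:
        return None, True
    stale = datetime.now(timezone.utc) - row["refreshed_at"] > max_age
    return row["email"], stale

refresh_cached_claims("alice", {"email": "alice@example.com"})
email, stale = get_cached_email("alice")
```

The staleness flag lets batch jobs (such as the notification email example) decide whether a cached value is still trustworthy enough to use.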

 

Claims are like salt. Just a little bit flavors the broth. The next chapter has more information on what makes a good claim.

Understanding Issuers
Today, it’s possible to acquire an issuer that provides user information, packaged as claims.

A good issuer can make it easier to implement authorization and personalization in your applications.

ADFS as an Issuer
If you have Windows Server® 2008 R2 Enterprise Edition, you are automatically licensed to run the Microsoft issuer, Active Directory Federation Services (ADFS) 2.0. ADFS provides the logic to authenticate users in several ways, and you can customize each instance of your ADFS issuer to authenticate users with Kerberos, forms authentication, or certificates. Alternatively, you can ask your ADFS issuer to accept a security token from an issuer in another realm as proof of authentication. This is known as identity federation and it’s how you achieve single sign-on across realms.

 

In identity terms, a realm is the set of applications, URLs, domains, or sites for which a token is valid. Typically a realm is defined using an Internet domain such as microsoft.com, or a path within that domain, such as microsoft.com/practices/guides. A realm is sometimes described as a security domain because it encompasses all applications within a specified security boundary.

 

You can also receive tokens that were generated outside of your own realm, and accept them if you trust the issuer. This is known as federated identity. Federated identity enables single sign-on, allowing users to sign on to applications in different realms without needing to enter realm-specific credentials. Users sign on once to access multiple applications in different realms.

Figure 2 shows all the tasks that the issuer performs.

[Diagram: a client (1) authenticates with the issuer (ADFS), (2) the issuer gathers information from stores such as Active Directory, Active Directory Lightweight Directory Services, a relational database, XML, or custom stores, (3) the issuer issues a token, and (4) the client sends the token to the claims-based application.]

figure 2
ADFS functions

After the user is authenticated, the issuer creates claims about

that user and issues a security token. ADFS has a rules engine that

makes it easy to extract LDAP attributes from the user’s record in

Active Directory and its cousin, Active Directory Lightweight Direc-

tory Services (AD LDS). ADFS also allows you to add rules that include

arbitrary SQL statements so that you can extract user data from your

own custom database.

You can extend ADFS to add other stores. This is useful because,

in many companies, a user’s identity is often fragmented. ADFS hides

this fragmentation. Your claims-based applications won’t break if you

decide to move data around between stores.
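ADFS expresses this with its own rules language, which is out of scope here; the following is only a generic sketch of the idea that fragmented stores can be hidden behind one uniform set of claims. The store names and attributes are invented for illustration.

```python
# Hypothetical fragmented identity data: the same user's attributes
# live in different stores (invented for illustration).
directory_store = {"mary": {"email": "mary@fabrikam.com"}}
hr_database = {"mary": {"costCenter": "1234"}}

def gather_claims(user):
    """Pull attributes from each store and emit them as uniform
    (claim type, value) pairs; the application never sees the stores."""
    claims = []
    for store in (directory_store, hr_database):
        for claim_type, value in store.get(user, {}).items():
            claims.append((claim_type, value))
    return claims

print(gather_claims("mary"))
# [('email', 'mary@fabrikam.com'), ('costCenter', '1234')]
```

Because the application consumes only the (claim type, value) pairs, moving an attribute from one store to another changes `gather_claims` but not the application.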

 


 

External Issuers

ADFS requires users to have an account in Active Directory or in one

of the stores that ADFS trusts. However, users may have no access to

an Active Directory-based issuer, but have accounts with other well-

known issuers. These issuers typically include social networks and

email providers. It may be appropriate for your application to accept

security tokens created by one of these issuers. This token can also be

accepted by an internal issuer such as ADFS so that the external is-

suer acts as another ADFS store.

To simplify this approach, you can use a service such as Windows

Azure™ Access Control Service (ACS). ACS accepts tokens issued by

many of the well-known issuers such as Windows Live® network of

Internet services, Google, Facebook, and more. It is the responsibility

of the issuer to authenticate the user and issue claims. ACS can then

perform translation and transformation on the claims using configu-

rable rules, and issue a security token that your application can accept.
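The chapter does not show ACS's actual rule format; the following is only a generic sketch of the translate-and-transform idea, with invented claim types and values.

```python
# Each rule maps an (input claim type, input value) pair issued by an
# external identity provider to an output claim (values invented).
rules = [
    {"in": ("http://example.org/group", "administrators"),
     "out": ("role", "admin")},
    {"in": ("http://example.org/group", "staff"),
     "out": ("role", "user")},
]

def transform(claims):
    """Apply the configured rules to claims issued elsewhere and
    return the claims that go into the newly issued token."""
    output = []
    for claim in claims:
        for rule in rules:
            if rule["in"] == claim:
                output.append(rule["out"])
    return output

incoming = [("http://example.org/group", "staff")]
print(transform(incoming))  # [('role', 'user')]
```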

Figure 3 shows an overview of the tasks that ACS performs, with

options to authenticate users in conjunction with a local issuer such

as ADFS, and directly without requiring a local issuer.

 

ACS can be configured to trust a range of social networking identity providers that are capable of authenticating users and issuing claims, as well as trusting enterprise and custom identity providers.

 


 

[Figure 3: ACS functions. The application can redirect the user to a trusted identity provider such as Windows Live, Google, Facebook, or others, either directly through ACS or via a local issuer such as ADFS that trusts ACS. The identity provider authenticates the user and sends claims to ACS, which gathers information, transforms the claims, and sends a token to the claims-based application.]

 

For more information about obtaining and configuring an ACS

account, see Appendix E, “Windows Azure Access Control Service.”

 

Claims-based applications expect to receive claims about the user, but they don’t care about which identity store those claims come from. These applications are loosely coupled to identity. This is one of the biggest benefits of claims-based identity.

 


 

User Anonymity

One option that claims-based applications give you is user anonymity. Remember that your application no longer directly authenticates the users; instead, it relies on an issuer to do that and to make claims about them. If user anonymity is a feature you want, simply don’t ask for any claim that personally identifies the user. For example, maybe all you really need is a set of roles to authorize the user’s actions, but you don’t need to know the user’s name. You can do that with claims-based identity by only asking for role claims. Some issuers (such as ADFS and Windows Live ID) support the idea of private user identifiers, which allow you to get a unique, anonymous identifier for a user without any personal information (such as a name or email address). Keep user anonymity in mind when you consider the power of claims-based identity.

To maintain user anonymity, it is important that the issuer does not collude with the application by providing additional information.

Implementing Claims-Based Identity

 

There are some general set-up steps that every claims-based system

requires. Understanding these steps will help you when you read

about the claims-based architectures.

 

Step 1: Add Logic to Your Applications to Support Claims

When you build a claims-based application, it needs to know how to

validate the incoming security token and how to parse the claims that

are inside. Many types of applications can make use of claims for tasks

such as authorizing users and managing access to resources or func-

tionality. For example, Microsoft SharePoint® applications can sup-

port the use of claims to implement authorization rules. Later chapters

of this guide discuss the use of claims with SharePoint applications.

The Windows Identity Foundation (WIF) provides a common

programming model for claims that can be used by both Windows

Communication Foundation (WCF) and ASP.NET applications. If you

already know how to use methods such as IsInRole and properties

such as Identity.Name, you’ll be happy to know that WIF simply adds

one more property: Identity.Claims. It identifies the claims that were

issued, who issued them, and what they contain.

There’s certainly more to learn about the WIF programming

model, but for now just remember to reference the WIF assembly

(Microsoft.IdentityModel.dll) from your ASP.NET applications and

WCF services in order to use the WIF programming paradigm.
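In WIF the claims model is C# against Identity.Claims; purely as an illustration of the shape of the data (not WIF's actual API), an IsInRole-style check over a list of claims might look like this. The claim values and issuer URL are invented.

```python
# A minimal stand-in for a claims principal: each claim records what
# was asserted, its value, and who issued it (values invented).
claims = [
    {"type": "name", "value": "mary", "issuer": "https://issuer.fabrikam.com"},
    {"type": "role", "value": "purchaser", "issuer": "https://issuer.fabrikam.com"},
]

def is_in_role(claims, role):
    """Mirror the familiar IsInRole check by scanning role claims."""
    return any(c["type"] == "role" and c["value"] == role for c in claims)

print(is_in_role(claims, "purchaser"))  # True
print(is_in_role(claims, "manager"))    # False
```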

 


 

Step 2: Acquire or Build an Issuer

For most teams, the easiest and most secure option will be to use

ADFS 2.0 or ACS as the issuer of tokens. Unless you have a great deal

of security experience on your team, you should look to the experts

to supply an issuer. If all users can be authenticated in ADFS 2.0

through the stores it trusts, this is a good option for most situations.

For solutions that require authentication using external stores or social network identity providers, ACS, or a combination of ADFS and ACS, is a good choice. If you need to customize the issuer and the

extensibility points in ADFS 2.0 or ACS aren’t sufficient, you can li-

cense third-party software or use WIF to build your own issuer. Note,

however, that implementing a production-grade issuer requires spe-

cialized skills that are beyond the scope of this book.

 

While you’re developing applications, you can use a stub issuer that

just returns the claims you need. The Windows Identity Foundation

SDK includes a local issuer that can be used for prototyping and

development. You can obtain the SDK from http://www.microsoft.

com/downloads/en/details.aspx?FamilyID=c148b2df-c7af-46bb-

9162-2c9422208504. Alternatively, you can create a custom STS in

Microsoft Visual Studio® and connect that to your application. For

more information, see “Establishing Trust from an ASP.NET Relying

Party Application to an STS using FedUtil” at http://msdn.micro-

soft.com/en-us/library/ee517285.aspx.
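A stub issuer for development does little more than hand back canned claims. The sketch below conveys the idea in Python; it is not the WIF SDK's local issuer, and the user name, claims, and issuer URN are invented.

```python
# A development-only stub issuer: it skips real authentication and
# returns a fixed set of canned claims (values invented).
STUB_CLAIMS = {
    "developer": [("name", "Developer"), ("role", "admin")],
}

def issue_token(user):
    """Return a stub token carrying the canned claims for the user."""
    claims = STUB_CLAIMS.get(user)
    if claims is None:
        raise LookupError("unknown user: " + user)
    return {"issuer": "urn:stub-issuer", "claims": claims}

token = issue_token("developer")
print(token["issuer"])                       # urn:stub-issuer
print(("role", "admin") in token["claims"])  # True
```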

 

Step 3: Configure Your Application to Trust the Issuer

After you build a claims-based application and have an issuer to sup-

port it, the next step is to set up a trust relationship. An application

trusts its issuer to identify and authenticate users and make claims

about their identities. When you configure an application to rely on a

specific issuer, you are establishing a trust (or trust relationship) with

that issuer.

Trust is unidirectional. The application trusts the issuer, and not the other way around.

There are several important things to know about an issuer when you establish trust with it:
•     What claims does the issuer offer?
•     What key should the application use to validate signatures on the issued tokens?
•     What URL must users access in order to request a token from the issuer?

 


 

Claims can be anything you can imagine, but practically speaking,

there are some very common claims offered by most issuers. They

tend to be simple, commonly available pieces of information, such as

first name, last name, email name, groups and/or roles, and so on. Each

issuer can be configured to offer different claims, so the application

(technically, this means the architects and developers who design and

build the application) needs to know what claims are being offered so

it can either select from that list or ask whoever manages the issuer to

expand its offering.

All of the questions in the previous list can easily be answered by

asking the issuer for federation metadata. This is an XML document in

a format defined by the WS-Federation standard that the issuer pro-

vides to the application. It includes a serialized copy of the issuer’s

certificate that provides your application with the correct public key

to verify incoming tokens. It also includes a list of claims the issuer

offers, the URL where users can go to get a token, and other more

technical details, such as the token formats that it knows about (al-

though in most cases you’ll be using the default SAML format under-

stood by the vast majority of issuers and claims-based applications).

WIF includes a wizard that automatically configures your application’s

identity settings based on this metadata. You just need to give the

wizard the URL for the issuer you’ve selected, and it downloads the

metadata and properly configures your application.
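The real WS-Federation metadata schema is considerably richer than what follows; as a hedged sketch only, here is how an application might pull the offered claim types and the token-issuing URL out of a drastically simplified, invented metadata document.

```python
import xml.etree.ElementTree as ET

# A simplified stand-in for federation metadata; the real document has
# namespaces, the issuer's certificate, token formats, and more.
METADATA = """
<issuer>
  <passiveRequestorEndpoint>https://issuer.fabrikam.com/sign-in</passiveRequestorEndpoint>
  <claimsOffered>
    <claim type="name"/>
    <claim type="role"/>
  </claimsOffered>
</issuer>
"""

def read_metadata(xml_text):
    """Extract the sign-in URL and the list of offered claim types."""
    root = ET.fromstring(xml_text)
    url = root.findtext("passiveRequestorEndpoint")
    claims = [c.get("type") for c in root.find("claimsOffered")]
    return url, claims

url, claims = read_metadata(METADATA)
print(url)     # https://issuer.fabrikam.com/sign-in
print(claims)  # ['name', 'role']
```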

SharePoint applications are a typical example of the type of ap-

plication that can be configured to trust claims issued by an enterprise

or a federated claims issuer, including issuers such as ADFS and ACS.

In particular, SharePoint applications that use BCS to access remote

services can benefit from using federated claims issuers.

 

Step 4: Configure the Issuer to Know About the Application

The issuer needs to know a few things about an application before it can issue it any tokens:
•     What Uniform Resource Identifier (URI) identifies this application?
•     Of the claims that the issuer offers, which ones does this application require and which are optional?
•     Should the issuer encrypt the tokens? If so, what key should it use?
•     What URL does the application expose in order to receive tokens?

Issuers only provide claims to authorized applications.

 


 

Each application is different, and not all applications need the

same claims. One application might need to know the user’s groups or

roles, while another application might only need a first and last name.

So when a client requests a token, part of that request includes an

identifier for the application the user is trying to access. This identi-

fier is a URI and, in general, it’s simplest to just use the URL of the

application, for example, http://www.fabrikam.com/purchasing/.

If you’re building a claims-based web application that has a reasonable degree of security, you’ll require the use of Secure Sockets Layer (SSL) (HTTPS) for both the issuer and the application. This will protect the information in the token from eavesdroppers. Applications with stronger security requirements can also request encrypted tokens, in which case the application typically has its own certificate (and private key). The issuer needs a copy of that certificate (without the private key) in order to encrypt the token issued for that application.

There are, of course, many reasons why an application shouldn’t get any more information about a user than it needs. Just two of them are compliance with privacy laws and the design practice of loose coupling.

Once again, federation metadata makes this exchange of information easy. WIF includes a tool named FedUtil.exe that generates a federation metadata document for your application so that you don’t have to manually configure the issuer with all of these settings.

 

A Summary of Benefits

 

To remind you of what you’ve learned, here’s a summary of the ben-

efits that claims can provide to you. Claims decouple authentication

from authorization so that the application doesn’t need to include the

logic for a specific mode of authentication. They also decouple roles

from authorization logic and allow you to use more granular permis-

sions than roles might provide. You can securely grant access to users

who might have previously been inaccessible because they were in

different domains, not part of any corporate domain, or using differ-

ent platforms or technologies.

Finally, you can improve the efficiency of your IT tasks by elimi-

nating duplicate accounts that might span applications or domains

and by preventing critical information from becoming stale.

 

Moving On

 

Now that you have a general idea of what claims are and how to build

a claims-based system, you can move on to the particulars. If you are

interested in more details about claims-based architectures for

browser-based and smart client-based applications, see Chapter 2,

“Claims-Based Architectures.” If you want to start digging into the

 


 

specifics of how to use claims, start reading the scenarios. Each of the

scenarios shows a different situation and demonstrates how to use

claims to solve the problem. New concepts are explained within the

framework of the scenario to give you a practical understanding of

what they mean. You don’t need to read the scenarios sequentially,

but each chapter presumes that you understand all the material that

was explained in earlier chapters.

 

Questions

 

1. Under what circumstances should your application or

service accept a token that contains claims about the user

or requesting service?

 

a. The claims include an email address.

 

b. The token was sent over an HTTPS channel.

 

c. Your application or service trusts the token issuer.

 

d. The token is encrypted.

 

2. What can an application or service do with a valid token

from a trusted issuer?

 

a. Find out the user’s password.

 

b. Log in to the website of the user’s identity provider.

 

c. Send emails to the user.

 

d. Use the claims it contains to authorize the user for

access to appropriate resources.

 

3. What is the meaning of the term identity federation?

 

a. It is the name of a company that issues claims about

Internet users.

 

b. It is a mechanism for authenticating users so that they

can access different applications without signing on

every time.

 

c. It is a mechanism for passing users’ credentials to

another application.

 

d. It is a mechanism for finding out which sites a user

has visited.

 


 

4. When would you choose to use Windows Azure AppFabric

Access Control Service (ACS) as an issuer for an application

or service?

 

a. When the application must allow users to sign on

using a range of well-known social identity credentials.

 

b. When the application is hosted on the Windows

Azure platform.

 

c. When the application must support single sign-on

(SSO).

 

d. When the application does not have access to an alter-

native identity provider or token issuer.

 

5. What are the benefits of using claims to manage authoriza-

tion in applications and services?

 

a. It avoids the need to write code specific to any one

type of authentication mechanism.

 

b. It decouples authentication logic from authorization

logic, making changes to authentication mechanisms

much easier.

 

c. It allows the use of more fine-grained permissions

based on specific claims compared to the granularity

achieved just using roles.

 

d. It allows secure access for users that are in a different

domain or realm from the application or service.

 


 

2 Claims-Based Architectures

 

The web is full of interactive applications that users can visit by simply

clicking a hyperlink. Once they do, they expect to see the page they

want, possibly with a brief stop along the way to log on. Users also

expect websites to manage their logon sessions, although most of

them wouldn’t phrase it that way. They would just say that they don’t

want to retype their password over and over again as they use any of

their company’s web applications. For claims to flourish on the web,

it’s critical that they support this simple user experience, which is

known as single sign-on.

If you’ve been a part of a Microsoft® Windows® domain, you’re already familiar with the benefits of single sign-on. You type your password once at the beginning of the day, and that grants you access to a host of resources on the network. Indeed, if you’re ever asked to type your password again, you’re going to be surprised and annoyed. You’ve come to expect the transparency provided by Integrated Windows Authentication.

For claims-based applications, single sign-on for the web is sometimes called passive federation.

Ironically, the popularity of Kerberos has led to its downfall as a

flexible, cross-realm solution. Because the domain controller holds the

keys to all of the resources in an organization, it’s closely guarded by

firewalls. If you’re away from work, you’re expected to use a VPN to

access the corporate network. Also, Kerberos is inflexible in terms of

the information it provides. It would be nice to extend the Kerberos

ticket to include arbitrary claims such as the user’s email address, but

this isn’t a capability that exists right now.

Claims were designed to provide the flexibility that other proto-

cols may not. The possibilities are limited only by your imagination and

the policies of your IT department. The standard protocols that ex-

change claims are specifically designed to cross boundaries such as

security realms, firewalls, and different platforms. These protocols

were designed by many parties who wanted to make it easier to communicate with each other securely.

 


 


 

Claims decouple your applications from the details of identity.

With claims, it’s no longer the application’s responsibility to authenti-

cate users. All your application needs is a security token from the is-

suer that it trusts. Your application won’t break if the IT department

decides to upgrade security and require users to submit a smart card

instead of submitting a user name and password. In addition, it won’t

need to be recoded, recompiled, or reconfigured.

There’s no doubt that domain controllers will continue to guard

organizational resources. Also, the business challenges, such as how to

resolve issues of trust and how to negotiate legal contracts between

companies who want to use federated identity techniques, remain.

Claims-based identity isn’t going to change any of that. However, by layering claims on top of your existing systems, you can remove some of the technical hurdles that may have been impeding your access to a broad, flexible single sign-on solution.

Claims work in conjunction with your existing security systems to broaden their reach and reduce technical obstacles.

 

A Closer Look at Claims-Based Architectures

 

There are several architectural approaches you can use to create

claims-based applications. For example, web applications and SOAP

web services each use slightly different techniques, but you’ll quickly

recognize that the overall shapes of the handshakes are very similar

because the goal is always the same: to communicate claims from the

issuer to the application in a secure fashion. This chapter shows you

how to evaluate the architectures from a variety of perspectives, such

as the user experience, the performance implications and optimiza-

tion opportunities, and how the claims are passed from the issuer to

the application. The chapter also offers some advice on how to design

your claims and how to know your users.

The goal of many of these architectures is to enable federation

with either a browser or a smart client. Federation with a smart client

is based on WS-Trust and WS-Federation Active Requestor Profile.

These protocols describe the flow of communication between smart

clients (such as Windows-based applications) and services (such as

WCF services) to request a token from an issuer and then pass that

token to the service for authorization.

Federation with a browser is based on WS-Federation Passive

Requestor Profile, which describes the same communication flow

between the browser and web applications. It relies on browser redi-

rects, HTTP GET, and POST to request and pass around tokens.

 


 

Browser-Based Applications

The Windows Identity Foundation (WIF) is a set of .NET Framework

classes that allow you to build claims-aware applications. Among

other things, it provides the logic you need to process WS-Federation

requests. The WS-Federation protocol builds on other standard pro-

tocols such as WS-Trust and WS-Security. One of its features is to

allow you to request a security token in browser-based applications.

WIF makes claims seem much like forms authentication. If users

need to sign in, WIF redirects them to the issuer’s logon page. Here,

the user is authenticated and is then redirected back to the applica-

tion. Figure 1 shows the first set of steps that allow someone to use

single sign-on with a browser application.

 

[Figure 1: Single sign-on with a browser, part 1. The user sends an initial request to the application (1), the application redirects the request to the issuer’s login page (2), and the user signs in at the issuer (3).]

 

If you’re familiar with ASP.NET forms authentication, you might

assume that the issuer in the preceding figure is using forms authenti-

cation if it exposes a page named Login.aspx. But this page may simply

be an empty page that is configured in Internet Information Services

(IIS) to require Integrated Windows Authentication or a client cer-

tificate or smart card. An issuer should be configured to use the most

natural and secure method of authentication for the users that sign in

there. Sometimes a simple user name and password form is enough,

but obviously this requires some interaction and slows down the user.

Integrated Windows Authentication is easier and more secure for

employees in the same domain as the issuer.

 


 

When the user is redirected to the issuer’s log-on page, several

query string arguments defined in the WS-Federation standard are

passed that act as instructions to the issuer. Here are two of the key

arguments with example values:

wa=wsignin1.0

The wa argument stands for “action,” and indicates one of

two things—whether you’re logging on (wsignin1.0) or

logging off (wsignout1.0).

wtrealm=http://www.fabrikam.com/purchasing/

The wtrealm argument stands for “target realm” and
contains a Uniform Resource Identifier (URI) that identifies
the application. The issuer uses the URI to identify the
application the user is logging on to. The URI also allows the
issuer to perform other tasks, such as associating the claims
for the application and the reply-to addresses.
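Assembling the redirect that carries these arguments is ordinary query-string construction. A minimal sketch in Python follows; the issuer URL is illustrative, and real sign-in requests may carry additional WS-Federation parameters.

```python
from urllib.parse import urlencode

def signin_url(issuer, realm):
    """Build the WS-Federation sign-in redirect: wa says we are
    logging on, and wtrealm identifies the target application."""
    query = urlencode({"wa": "wsignin1.0", "wtrealm": realm})
    return issuer + "?" + query

url = signin_url("https://issuer.fabrikam.com/sign-in",
                 "http://www.fabrikam.com/purchasing/")
print(url)
# https://issuer.fabrikam.com/sign-in?wa=wsignin1.0&wtrealm=http%3A%2F%2Fwww.fabrikam.com%2Fpurchasing%2F
```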

After the issuer authenticates the user, it gathers whatever claims the application needs (using the wtrealm parameter to identify the target application), packages them into a security token, and signs the token with its private key. If the application wants its tokens encrypted, the issuer encrypts the token with the public key in the application’s certificate.

The issuer is told which application is in use so that it issues only the claims that the application needs.

Now the issuer asks the browser to go back to the application.

The browser sends the token to the application so it can process the

claims. Once this is done, the user can begin using the application.

To accomplish this, the issuer returns an HTML page to the

browser, including a <form> element with the form-encoded token

inside. The form’s action attribute is set to submit the token to what-

ever URL was configured for the application. The user doesn’t nor-

mally see this form because the issuer also emits a bit of JavaScript

that auto-posts it. If scripts are disabled, the user will need to click a

button to post the response to the server. Figure 2 shows this process.
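The page the issuer returns is ordinary HTML. The following sketch renders a simplified version of that auto-posting form; the field name, markup, and script are illustrative rather than the exact output of any real issuer.

```python
def autopost_form(action_url, token):
    """Render a form that submits the token back to the application;
    the script auto-posts it, so the user normally never sees the page.
    The <noscript> button covers browsers with scripts disabled."""
    return (
        '<form method="post" action="{0}">'
        '<input type="hidden" name="wresult" value="{1}"/>'
        '<noscript><button type="submit">Continue</button></noscript>'
        '</form>'
        '<script>document.forms[0].submit();</script>'
    ).format(action_url, token)

html = autopost_form("http://www.fabrikam.com/purchasing/", "TOKEN")
print('action="http://www.fabrikam.com/purchasing/"' in html)  # True
```

A production issuer would also HTML-encode the token value before embedding it in the page.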

 

If this sounds familiar,

it’s because forms

authentication uses

a similar redirection

technique with

the ReturnURL

parameter.

 


 

[Figure 2: Single sign-on with a browser, part 2. The issuer returns a <form> containing the token (4), and the browser submits it (5). The form is posted and the application receives the token (6), WIF validates the token and issues a cookie (7), WIF presents the claims to the application (8), and the application processes the claims and continues (9).]

 

Now consider this process from the user’s experience. If the is-

suer uses Integrated Windows Authentication, the user clicks the link

to the application, waits for a moment while the browser is first redi-

rected to the issuer and then back to the application, and then the

user is logged on without any additional input. If the issuer requires

input from the user, such as a user name and password or a smart card,

users must pause briefly to log on, and then they can use the applica-

tion. From the user’s point of view, the logon process with claims is

the same as what he or she is used to, which is critical.

 

Understanding the Sequence of Steps

The steps illustrated in the preceding illustrations can also be depicted

as a sequence of steps that occur over time. Figure 3 shows this se-

quence when authenticating against Active Directory Federation

Services (ADFS) and Active Directory.

 


 

[Figure 3: Browser-based message sequence. John’s browser requests the a-Expense application (1) and the user is not authenticated. The browser browses to the issuer (ADFS) with a Kerberos ticket (2), and ADFS queries Active Directory for user attributes such as email, name, and cost center (3). ADFS signs a SAML token (4), which the browser POSTs to the application (5). WIF validates the token, coordinated by the WSFederationAuthenticationModule (FAM), and the browser receives the home page and a cookie (6). The application sends another page and a cookie, and WIF populates the ClaimsPrincipal, coordinated by the SessionAuthenticationModule (SAM) (7).]

An audience restriction determines the URIs the application will accept. When applying for a token, the user or application will usually specify the URIs for which the token should be valid (the AppliesTo value, typically the URL of the application). The issuer includes this as the audience restriction in the token it issues.

If a user is not authenticated, the browser requests a token from the issuer, which in this case is Active Directory Federation Services (ADFS). ADFS queries Active Directory for the necessary attributes and returns a signed token to the browser.

After the POST arrives at the application, WIF takes over. The application has configured a WIF HTTP module, named WSFederationAuthenticationModule (FAM), to intercept this POST to the application and handle the processing of the token. The FAM listens for the AuthenticateRequest event. The event handler

 


 

performs several validation steps, including checking the token’s audience restriction and the expiration date. Audience restriction is defined by the AudienceURI element.

The FAM also uses the issuer’s public key to make sure that the token was signed by the trusted issuer and was not modified in transit. Then it parses the claims in the token and uses the HttpContext.User.Identity property (or equivalently the Page.User property) to present an IClaimsPrincipal object to the application. It also issues a cookie to begin a logon session, just like what would happen if you were using forms authentication instead of claims. This means that the authentication process isn’t repeated until the user signs off or otherwise destroys the cookie, or until the session expires (sessions are typically designed to last for a single workday).

Figure 4 shows the steps that WIF takes for the initial request, when the application receives a token from the issuer.

[Figure 4: Sequence of steps for initial request. On the SessionSecurityTokenReceived event (argument: the raw security token), validate the token with the corresponding security token handler, such as SAML 1.1, SAML 2.0, encrypted, or custom; create the ClaimsPrincipal object with the claims inside; and use the ClaimsAuthenticationManager class to enrich the ClaimsPrincipal object. On the SessionSecurityTokenValidated event (argument: the ClaimsPrincipal), create the SessionSecurityToken: Encode(Chunk(Encrypt(ClaimsPrincipal + lifetime + [original token]))); set the HttpContext.User property to the ClaimsPrincipal object; convert the session token into a set of chunked cookies; and redirect to the original return URL, if it exists.]

One of the steps that the FAM performs is to create the session token. On the wire, this translates into a sequence of cookies named FedAuth[n]. These cookies are the result of compressing, encrypting, and encoding the ClaimsPrincipal object, along with any other attributes. The cookies are chunked to avoid overstepping any size limitations.

Figure 5 shows what the network traffic looks like for the initial request.

[Figure 5: Sequence of cookies.]
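The chunking of the session token into FedAuth[n] cookies is easy to picture: one long protected blob split into cookie-sized pieces. This sketch shows only the split and reassembly; the chunk size is invented, and the compression and encryption that WIF also performs are omitted.

```python
def to_cookies(blob, chunk_size=2000):
    """Split the encoded session token into FedAuth, FedAuth1, ...
    cookies so that no single cookie exceeds the size limit."""
    chunks = [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]
    return {("FedAuth" + (str(i) if i else "")): chunk
            for i, chunk in enumerate(chunks)}

def from_cookies(cookies):
    """Reassemble the session token from the chunked cookies."""
    parts = []
    i = 0
    while True:
        name = "FedAuth" + (str(i) if i else "")
        if name not in cookies:
            break
        parts.append(cookies[name])
        i += 1
    return "".join(parts)

blob = "A" * 4500  # stand-in for the encoded session token
cookies = to_cookies(blob)
print(sorted(cookies))                # ['FedAuth', 'FedAuth1', 'FedAuth2']
print(from_cookies(cookies) == blob)  # True
```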

 


 

On subsequent requests to the application, the SessionAuthenticationModule intercepts the cookies and uses them to reconstruct the ClaimsPrincipal object. Figure 6 shows the steps that WIF takes for any subsequent requests.

[Figure 6: Steps for subsequent requests. Check that the cookie is present; if it is, recreate the SessionSecurityToken by decoding, decrypting, and decompressing the cookie. On the SessionSecurityTokenReceived event (argument: the session token), check the SessionSecurityToken expiration date, create the ClaimsPrincipal object with the claims inside, and set the HttpContext.User property to the ClaimsPrincipal object.]

Figure 7 shows what the network traffic looks like for subsequent requests.

[Figure 7: Network traffic for subsequent responses.]

 

All of the steps, both for the initial and subsequent

requests, should run over the Secure Sockets Layer

(SSL) to ensure that an eavesdropper can’t steal either

the token or the logon session cookie and replay them

to the application in order to impersonate a legitimate

user.

 


 

Optimizing Performance

Are there opportunities for performance optimizations here? The answer is a definite “Yes.” You can use the logon session cookie to cache some state on the client to reduce round-trips to the issuer. The issuer also issues its own cookie so that users remain logged on at the issuer and can access many applications. Think about how this works—when a user visits a second application and that application redirects back to the same issuer, the issuer sees its cookie and knows the user has recently been authenticated, so it can immediately issue a token without having to authenticate again. This is how to use claims to achieve Internet-friendly single sign-on with a browser-based application.

Applications and issuers use cookies to achieve Internet-friendly single sign-on. Single sign-on is also possible using ACS when a local issuer such as ADFS is not available. However, ACS is primarily aimed at federated identity scenarios where the user is authenticated in a different realm from the application. ACS is discussed in more detail in the section “Federated Identity with ACS” later in this chapter.

Smart Clients

When you use a web service, you don’t use a browser. Instead, you use an arbitrary client application that includes logic for handling claims-based identity protocols. There are two protocols that are important in this situation: WS-Trust, which describes how to get a security token from an issuer, and WS-Security, which describes how to pass that security token to a claims-based web service.

Recall the procedure for using a SOAP-based web service. You use

the Microsoft Visual Studio® development system or a command-line

tool to download a Web Service Definition Language (WSDL) docu-

ment that supplies the details of the service’s address, binding, and

contract. The tool then generates a proxy and updates your applica-

tion’s configuration file with the information discovered in the WSDL

document. When you do this with a claims-based service, its WSDL

document and its associated WS-Policy document supply all the nec-

essary details about the issuer that the service trusts. This means that

the proxy knows that it needs to obtain a security token from that

issuer before making requests to the service. Because this information

is stored in the configuration file of the client application, at run time

the proxy can get that token before talking to the service. This opti-

mizes the handshake a bit compared to the browser scenario, because

the browser had to visit the application first before being redirected

to the issuer. Figure 8 shows the sequence of steps for smart clients

when the issuer is ADFS authenticating users against Active Directory.

 


 

figure 8
Smart client-based message sequence

(Participants: Rick’s client application, the Orders web service, the
ADFS issuer, and Active Directory. Step 1: the client uses the
username to request a security token; ADFS validates the credentials,
queries the directory for user attributes such as email, name, and
cost center, and returns the signed SAML token. Step 2: the client
calls the operation on the web service with the SAML token attached;
WIF validates the token and the service sends the SOAP response. If
the client makes a second call to the web service, it obtains a new
token from the issuer, unless it cached the token obtained at the
first call.)

These interactions can be orchestrated by the WCF WSFederation
binding. When the client proxy wants to call the service, it first
tries to obtain a token.

 

The steps for a smart client are similar to those for browser-based

applications. The smart client makes a round-trip to the issuer, using

WS-Trust to request a security token. In step 1, the Orders web
service is configured with the WSFederationHttpBinding. This binding

specifies a web service policy that obligates the client to attach a

SAML token to the security header to successfully invoke the web

service. This means that the client will first have to call the issuer with

a set of credentials such as a user name and password to get a SAML

token back. In step 2, the client can call the web service with the to-

ken attached to the security header.
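A sketch of the client-side configuration that drives this sequence is shown below; the addresses, binding name, and contract are hypothetical placeholders, not values from the guide:

```xml
<system.serviceModel>
  <bindings>
    <wsFederationHttpBinding>
      <binding name="OrdersBinding">
        <security mode="Message">
          <message>
            <!-- Step 1: the proxy requests a SAML token here first. -->
            <issuer address="https://issuer.example.com/trust"
                    binding="wsHttpBinding" />
          </message>
        </security>
      </binding>
    </wsFederationHttpBinding>
  </bindings>
  <client>
    <!-- Step 2: the token is attached to calls to this service. -->
    <endpoint address="https://orders.example.com/OrdersService.svc"
              binding="wsFederationHttpBinding"
              bindingConfiguration="OrdersBinding"
              contract="IOrdersService" />
  </client>
</system.serviceModel>
```

With configuration like this in place, the generated proxy performs step 1 (requesting a token from the issuer) automatically before calling the Orders endpoint.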

 


 

Figure 9 shows a trace of the messages that occur in the smart

client scenario.

 

figure 9

Smart client network traffic

 

The WS-Trust request (technically named a Request for Security

Token, or RST for short) includes a field named AppliesTo, which

allows the smart client to indicate a URI for the web service it’s

ultimately trying to access. This is similar to the wtrealm query string

argument used in the case of a web browser. Once the issuer authen-

ticates the user, it knows which application wants access and it can

decide which claims to issue. Then the issuer sends back the response

(RSTR), which includes a signed security token that is encrypted with

the public key of the web service. The token includes a proof key. This

is a symmetric key randomly generated by the issuer and included as

part of the RSTR so that the client also gets a copy.

Now it’s up to the client to send the token to the web service in

the <Security> header of the SOAP envelope. The client must sign the

SOAP headers (one of which is a time stamp) with the proof key to

show that it knows the key. This extra cryptographic evidence further

assures the web service that the caller was, indeed, the one who was

issued the token in the first place.

At this point, it’s typical to start a session using the WS-Secure

Conversation protocol. The client will probably cache the RSTR for

up to a day in case it needs to reconnect to the same service later on.

 

SharePoint Applications

and SharePoint BCS

A common requirement for single sign-on and federated identity is in

Microsoft SharePoint® applications, including those that use the Busi-

ness Connectivity Services (BCS) to work with data exposed by re-

mote services. Microsoft SharePoint Server 2010 implements a claims-

based identity model that supports authentication across users of

Windows-based and non-Windows -based systems, multiple authen-

tication types, a wide set of principal types, and delegation of user

identity between applications.

SharePoint 2010 can accept claims provided as SAML tokens,

and can use them to make identity-related decisions. These decisions

may consist of simple actions such as personalization based on the

 


 

user name, or more complex actions such as authorizing access to

features and functions within the application.

SharePoint also includes a claims provider that can issue claims

and package these claims into security tokens. It can augment tokens

by adding additional claims, and expose the claims in the SharePoint

people picker tool. The ability to augment existing tokens makes it

easier to build SharePoint applications that use BCS to access remote

services for which authentication is required.

Chapter 10, “Accessing REST Services from a Windows Phone

Device” and Chapter 11, “Claims-Based Single Sign-On for Microsoft

SharePoint 2010″ provide more information about using claims and

issuers in SharePoint 2010. A guide to using claims in SharePoint is

available at “Getting Started with Security and Claims-Based Identity

Model” on the MSDN® website ( http://msdn.microsoft.com/en-us/

library/ee536164.aspx).

 

Federating Identity across Realms

 

So far you’ve learned enough about claims-based identity to under-

stand how to design and build a claims-based application where the

issuer directly authenticates the users.

But you can take this one step further. You can expand your is-

suer’s capabilities to accept a security token from another issuer, in-

stead of requiring the user to authenticate directly. Your issuer now

not only issues security tokens, but also accepts tokens from other

issuers that it trusts. This enables you to federate identity with other

realms (these are separate security domains), which is truly a powerful

feature. Much of the federation process is actually accomplished by

your IT staff, because it depends on how issuers are configured. But

it’s important to be aware of these possibilities because, ultimately,

they lead to more features for your application, even though you

might not have to change your application in any way. Also, some of

these possibilities may have implications for your application’s design.

 

The Benefits of Cross-Realm Identity

Maintaining an identity database for users can be a daunting task.

Even something as simple as a database that holds user names and

passwords can be painful to manage. Users forget their passwords on

a regular basis, and the security stance taken by your company may

not allow you to simply email forgotten passwords to them the way

many low-security websites do. If maintaining a database for users

inside your enterprise is difficult, imagine doing this for hundreds or

thousands of remote users.

 


 

Managing a role database for remote users is just as difficult.

Imagine Alice, who works for a partner company and uses your pur-

chasing application. On the day that your IT staff provisioned her

account, she worked in the purchasing department, so the IT staff

assigned her the role of Purchaser, which granted her permission to

use the application. But because she works for a different company,

how is your company going to find out when she transfers to the Sales

department? What if she quits? In both cases, you’d want to know

about her change of status, but it’s unlikely that anyone in the HR

department at her company is going to notify you.

It’s unavoidable that any data you store about a remote user
eventually becomes stale. How can you safely expose an application for
a partner business to use?

Alice’s identity is an asset of Alice’s organization, so her company
should manage it. Also, storing information about remote users can be
considered a liability for your company.

One of the most powerful features of claims-based identity is that you
can decentralize it. Instead of having your issuer authenticate remote
users directly, you can set up a trust relationship with an issuer
that belongs to the other company. This means that your issuer trusts
their issuer to authenticate users in their realm. Their employees are
happy because they don’t need special credentials to use your
application. They use the same single sign-on mechanism they’ve always
used in their company. Your application still works because it
continues to get the same boarding pass it needs. The claims you get
in your boarding pass for these remote users might include less
powerful roles because they aren’t employees of your company, but your
issuer will be responsible for determining the proper assignments.
Finally, your application doesn’t need to change when a new
organization becomes a partner. The fan-out of issuers to applications
is a real benefit of using claims—you reconfigure one issuer and many
downstream applications become accessible to many new users.

Claims can be used to decentralize identity, eliminating stale data
about remote users.

Another benefit is that claims allow you to logically store data

about users. Data can be kept in the store that is authoritative rather

than in a store that is simply convenient to use or easily accessible.

Identity federation removes hurdles that may have stopped you

from opening the doors to new users. Once your company decides

which realms should be allowed access to your claims-based applica-

tion, your IT staff can set up the proper trust relationships. Then you

can, for example, invite employees from a company that uses Java to

access your application without having to issue passwords for each of

them. They only need a Java-based issuer, and those have been avail-

able for years. Another possibility is to federate identity with the
Windows Live® network of Internet services, which supports claims-

identity. This means that anyone with a Windows Live ID can use your

application.

 


 

How Federated Identity Works

You’ve already seen how federated identity works within a single

realm. Indeed, Figure 2 is a small example of identity federation be-

tween your application and a local issuer in your realm. That relation-

ship doesn’t change when your issuer interacts with an issuer it trusts

in a different realm. The only change is that your issuer is now config-

ured to accept a security token issued by a partner company instead

of directly authenticating users from that company. Your issuer trusts

another issuer to authenticate users so it doesn’t have to. This is simi-

lar to how your application trusts its issuer.

Figure 10 shows the steps for federating identity across realms.

 

figure 10
Federating identity across realms

(Participants: the partner’s issuer, your issuer, which trusts it, and
your application. Steps: 1. The user authenticates with the partner’s
issuer. 2. That issuer issues a token. 3. The user sends the token to
your issuer. 4. Your issuer issues a new token. 5. The user sends that
token to your application.)

 

Federating identity across realms is exactly the same as you’ve

seen in the earlier authentication techniques discussed in this chapter,

with the addition of an initial handshake in the partner’s realm. Users

first authenticate with an issuer in their own realm. They present the

tokens they receive from these exchanges to your issuer, which accepts
them in lieu of authenticating the users directly. Your issuer can now issue a

token for your application to use. This token is what the user sends to

your application. (Of course, users know nothing about this proto-

col—it’s actually the browser or smart client that does this on their

behalf). Remember, your application will only accept tokens signed by

the one issuer that it trusts. Remote users won’t get access if they try

to send a token from their local issuer to your application.

 


 

At this point, you may be thinking, “Why should my company

trust some other company to authenticate people that use my appli-

cation? That doesn’t seem safe!” Think about how this works without

claims-based identity. Executives from both companies meet and sign

legal contracts. Then the IT staff from the partner company contacts

your IT staff and specifies which of their users need accounts provi-

sioned and which roles they will need. The legal contracts help ensure

that nobody abuses the trust that’s been established. This process has

been working for years and is an accepted practice.

Another question is why should you bother provisioning accounts

for those remote users when you know that data will get stale over

time? All that claims-based identity does is help you automate the

trust, so that you get fresh information each time a user visits your

application. If Alice quits, the IT staff at her company has great per-

sonal incentive to disable her account quickly. They don’t want a po-

tentially disgruntled employee to have access to company resources.

That means that Alice won’t be able to authenticate with their issuer

anymore, which means she won’t be able to use your application, ei-

ther. Notice that nobody needed to call you up to tell you about Alice.

By decentralizing identity management, you get better information

(authoritative information, you could say) about remote users in a

timely fashion.

 

Claims can be used to automate existing trusts between businesses.

 

One possible drawback of federating identity with many other

companies is that your issuer becomes a single point of failure for all

of your federation relationships. Issuers should be as tightly guarded

as domain controllers. Adding features is never without risk, but the

rewards can lead to lower costs, better security, simpler applications,

and happier users.

 

Federated Identity with ACS

Many users already have accounts with identity providers that authen-

ticate users for one or more applications and websites. Social net-

works such as Facebook, and email and service providers such as

Windows Live ID and Google, often use a single sign-on model that

supports authentication for several applications. Users increasingly

expect to be able to use the credentials for these identity providers

when accessing other applications.

ACS is an issuer that can make use of many of these identity pro-

viders by redirecting the user to the appropriate location to enter

credentials, and then using the claims returned from that identity

provider to issue a token to the applications. ACS can also be used to

supplement a local issuer by retrieving claims from a social networking

or email provider and passing these to the local issuer for it to issue

 


 

the required token. ACS effectively allows a broad range of identity

providers to be used for user authentication, both in conjunction with

a local issuer and when no local issuer is available.

Figure 11 shows the overall sequence of steps for a user authen-

ticating with an identity provider through ACS after a request for

authentication has been received by ACS. ACS redirects the user to

the appropriate identity provider. After successful authentication,

ACS and ADFS map claims for the user and then return a token to the

relying party (the claims-based application). Steps 5 and 6, where the
intervention of a local issuer takes place, will only occur if the
application is configured to use a local issuer such as ADFS that
redirects the user to ACS.

It is important for users to understand that, when they use their
social identity provider credentials to log in through ACS, they are
consenting to some information (such as their name and email address)
being sent to the application. However, giving this consent does not
provide the application with access to their social network account—it
just confirms their identity to the application.

figure 11
Federated identity with ACS as the issuer,
optionally including an ADFS local issuer

(Participants: social identity providers such as Google, Windows Live
ID, and Facebook; ACS, which transitions protocols and maps claims; an
optional ADFS local issuer, which maps claims; and the claims-based
application. The user authenticates with the chosen identity provider,
which returns a token to ACS; ACS maps the claims and returns its own
token, passing through the local issuer in steps 5 and 6 when one is
configured; in step 7, the token is sent to the application.)

 

For more details about ACS and the message sequences with

and without a local issuer, see Appendix B, “Message Sequences,”

and Appendix E, “Windows Azure Access Control Service.”

 

A major consideration when using ACS is whether you should

trust the identity providers that it supports. You configure ACS to use

only the identity providers you specifically want to trust, and only

these will be available to users when they log into your application.

For example, depending on your requirements, you may decide to ac-

cept authentication only through Windows Live ID and Google, and

not allow users to log in with a Facebook account. Each identity

provider is an authority for users that successfully authenticate, and

 


 

each provides proof of this by returning claims such as the user name,

user identifier, and email address.

ACS generates a list of the configured identity providers from

which users can select the one they want to use. You can create cus-

tom pages that show the available identity providers within your own

application if required, and configure rules within ACS that transform

and map the claims returned from the identity provider. After the user

logs in at their chosen identity provider, ACS returns a token that the

application or a local issuer such as ADFS can use to provide authori-

zation information to the application as required.

 

Understanding the Sequence of Steps

Figure 12 shows the sequence of steps for ACS in more detail when
there is no local issuer.

Each identity provider will return a different set of claims. For
example, Windows Live ID returns a user identifier, whereas Google
returns the user name and email address.

figure 12
ACS federated identity message sequence

(Participants: the browser, the application, ACS, and identity
providers such as Google and Windows Live ID. Messages: the request is
not authenticated; the browser is sent to ACS to get a token; ACS
returns the home realm discovery (HRD) page; the user selects an
identity provider; the browser is redirected to that provider and
authenticates; the provider redirects back with a token; ACS
transforms the claims and redirects back to the application with its
own token.)

 

The user accesses the application and fails authentication. The

browser is redirected to ACS, which generates and returns the list of

accepted identity providers (which may include custom issuers or

another ADFS instance as well as social identity providers and email

services). The user selects the required identity provider, and ACS

redirects the user to that identity provider’s login page. After the

identity provider authenticates the user, it returns a token to ACS that

declares the user to be valid. ACS then maps the claims and generates

a token that declares this user to be valid, and redirects the user to the

 


 

application. The application uses the token to authorize the user for

the appropriate tasks.

This means that the authority for the user’s identity differs at

each stage of the process. For example, if the user chooses to authen-

ticate with Google, then the Google token issuer is the authority in

declaring the user to be valid with them, and it returns proof in the

form of a name and email address. When redirected to ACS, the

browser presents the Google token and ACS becomes the authority

on issuing claims about the user based on the valid token from Google

(called a copy claim). ACS can perform transformation and mapping,

such as to include the claim that this user works in a specific company

and has a specific role in the application.

 

Combining ACS and ADFS

If, instead of authenticating with ACS, the user was originally redi-

rected by the application to a local issuer such as ADFS, which in-

cludes ACS amongst its trusted issuers, the local issuer receives the

token from ACS and becomes the authority in declaring the user valid

based on the claims returned from ACS. The local issuer can also per-

form transformation and mapping, such as to include the claim that

this user works in a specific company and has a specific role in the

application. A scenario that illustrates when this is useful is described

in detail in Chapter 5, “Federated Identity with Windows Azure Ac-

cess Control Service.”
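As an illustration of such transformation and mapping, ADFS expresses rules in its claim rule language. The following sketch (the incoming group value and the claim type URIs are assumptions for the example) issues a role claim for users who arrive with a particular group claim:

```
c:[Type == "http://schemas.xmlsoap.org/claims/Group", Value == "Supervisors"]
  => issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
           Value = "Managers");
```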

 

Identity Transformation

The issuer’s job is to take some generic incoming identity (perhaps

from a Kerberos ticket, an X.509 certificate, or a set of user creden-

tials) and transform it into a security token that your application can

use. That security token is like the boarding pass, in that it contains all

of the user’s identity details that your application needs to do its job,

and nothing more. Perhaps instead of the user’s Windows groups, your
boarding pass contains roles that you can use right away.

I think of an issuer as an “identity transformer.” It converts
incoming identities into something that’s intelligible to the
application.

On the other end of the protocol are users who can use their single
sign-on credentials to access many applications because the issuer in
their realm knows how to authenticate them. Their local issuer
provides claims to applications in their local realm as well as to
issuers in other realms so that they can use many applications, both
local and remote, without having to remember special credentials for
each one.

remote, without having to remember special credentials for each one.

Consider the application’s local issuer in the last illustration, “Fed-

erating identity across realms.” It receives a security token from a user

in some other realm. Its first job is to reject the request if the incom-

ing token wasn’t issued by one of the select issuers that it trusts. But

once that check is out of the way, its job now becomes one of claims

 


 

transformation. It must transform the claims made by the remote
issuer into claims that make sense for your application. For a
practical example, see Chapter 4, “Federated Identity for Web
Applications.”

ADFS uses a rules engine to support claims transformation.

Transformation is carried out with rules such as, “If you see a

claim of this type, with this value, issue this claim instead.” For exam-

ple, your application may have a role called Managers that grants

special access to manager-specific features. That claim may map di-

rectly to a Managers group in your realm, so that local users who are

in the Managers group always get the Managers role in your applica-

tion. In the partner’s realm, they may have a group called Supervisors

that needs to access the manager-specific features in your application.

The transformation from Supervisors to Managers can happen in their

issuer; if it does not, it must happen in yours. This transformation

simply requires another rule in the issuer. The point is that issuers such

as ADFS and ACS are specifically designed to support this type of

transformation because it’s rare that two companies will use exactly
the same vocabulary.

In ACS, the transformation and mapping rules are configured using the
web-based administration portal or by making OData-formatted calls to
the management API.

Home Realm Discovery

Now that you’ve seen the possibility of cross-realm federation, think
about how it works with browser-based applications. Here are the
steps:

 

1. Alice (in a remote realm) clicks a link to your application.

 

2. You redirect Alice to your local issuer, just like before.

 

3. Your issuer redirects Alice’s browser to the issuer in her

realm.

 

4. Alice’s local issuer authenticates and issues a token, sending

Alice’s browser back to your issuer with that token.

 

5. Your issuer validates the token, transforms the claims, and

issues a token for your application to use.

 

6. Your issuer sends Alice’s browser back to your application,

with the token that contains the claims your application

needs.

The mystery here is in step 3. How does the issuer know that

Alice is from a remote realm? What prevents the issuer from thinking

she’s a local user and trying to authenticate her directly, which will

only fail and frustrate the user? Even if the issuer knew that Alice was

from a remote realm, how would it know which realm it was? After

all, it’s likely that you’ll have more than one partner.

This problem is known as home realm discovery. Your issuer has

to determine if Alice is from the local realm or if she’s from some

partner organization. If she’s local, the issuer can authenticate her

 


 

directly. If she’s remote, the issuer needs to know a URL to redirect

her to so that she can be authenticated by her home realm’s issuer.

There are two ways to solve this problem. The simplest one is to

have the user help out. In step 2, when Alice’s browser is redirected to

your local issuer, the authentication sequence pauses and the browser

displays a web page asking her what company she works for. (Note

that it doesn’t help Alice to lie about this, because her credentials are

only good for one of the companies on the list—her company.) Alice

clicks the link for her company and the process continues, since the

issuer now knows what to do. To avoid asking Alice this question in

the future, your issuer sets a cookie in her browser so that next time
it will know who her issuer is without having to ask.

Take a look at Chapter 3, “Claims-Based Single Sign-On for the Web,”
to see an example of this technique.

If the issuer is ACS, it will automatically generate and display a
page containing the list of accepted identity providers. Alice must
select one of these, and her choice indicates her home realm. If ACS

is using a trusted instance of an ADFS security token service (STS) as

an identity provider, the home realm discovery page can contain a
textbox where a user can enter a corresponding email address, as well
as (or instead of) the list of configured identity providers. The user
is then authenticated by the ADFS STS.

The second way to solve this problem is to add a hint to the

query string that’s in the link that Alice clicks in step 1. That query

string will contain a parameter named whr (hr stands for home realm).

The issuer looks for this hint and automatically maps it to the

URL of the user’s home realm. This means that the issuer doesn’t have

to ask Alice who her issuer is because the application relays that infor-

mation to the issuer. The issuer uses a cookie, just as before, to ensure

that Alice is never bothered with this question.
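For example, the link that Alice clicks in step 1 might look like the following; the addresses and the realm identifier are hypothetical:

```
https://orders.example.com/purchasing?whr=http://issuer.partner.example.com/trust
```

The application relays the whr value to its issuer with the sign-in request, and the issuer maps it to the URL of the user’s home realm issuer.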

 

My IT people make sure that the links to remote applications always
include this information. It makes the application much friendlier for
the user and protects the privacy of my company by not revealing all
of its partners.

Take a look at Chapter 4, “Federated Identity for Web Applications,”
to see an example of this technique.

 


 

Design Considerations for Claims-Based

Applications

 

Admittedly, it’s difficult to offer general prescriptive guidance for

designing claims because they are so dependent on the particular ap-

plication. This section poses a series of questions and offers some

approaches to consider as you look at your options.

 

What Makes a Good Claim?

Like many of the most important design decisions, this question

doesn’t always have a clear answer. What’s important is that you un-

derstand the tensions at play and the tradeoffs you’re facing. Here are

some concrete examples that might help you start thinking about

some general criteria for what makes a good claim.

First, consider a user’s email address. That’s a prime candidate for

a claim in almost any system, because it’s generally very tightly coupled

to the user’s identity, and it’s something that everyone needs if you

decide to federate identity across realms. An email name can help you

personalize your system for the user in a very meaningful way.

What about a user’s choice of a skin or theme for your website?

Certainly, this is “personalization” data, but it’s also data that’s par-

ticular to a single application, and it’s hard to argue that this is part of

a user’s identity. Your application should manage this locally.

What about a user’s permission to access data in your application?

While it may make sense in some systems to model permissions as

claims, it’s easy to end up with an overwhelming number of these

claims as you model finer and finer levels of authorization. A better

approach is to define a boundary that separates the authorization

data you’ll get from claims from the data you’ll handle through other

means. For example, in cross-realm federation scenarios, it can be

beneficial to allow other realms to be authoritative for some high-

level roles. Your application can then map those roles onto fine-

grained permissions with tools such as Windows Authorization

Manager (AzMan). But unless you’ve got an issuer that’s specifically

designed for managing fine-grained permissions, it’s probably best to

keep your claims at a much higher level.
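One way to implement such a boundary is sketched below, with hypothetical role and permission names: the issuer remains authoritative for high-level roles, while the application maps each role onto its own fine-grained permissions:

```csharp
// Sketch (not from the guide): the issuer supplies coarse-grained
// roles; the application decides locally what each role may do.
using System.Collections.Generic;
using System.Linq;
using System.Security.Principal;

static class PermissionMap
{
    // Role and permission names here are illustrative assumptions.
    static readonly Dictionary<string, string[]> RolePermissions =
        new Dictionary<string, string[]>
        {
            { "Purchaser", new[] { "Orders.Create", "Orders.Read" } },
            { "Manager",   new[] { "Orders.Read", "Orders.Approve" } },
        };

    public static bool HasPermission(IPrincipal user, string permission)
    {
        // Check every role the user holds for the requested permission.
        return RolePermissions
            .Where(rp => user.IsInRole(rp.Key))
            .Any(rp => rp.Value.Contains(permission));
    }
}
```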

Before making any attribute into a claim, ask yourself the follow-

ing questions:

•     Is this data a core part of how I model user identity?

•     Is the issuer an authority on this information?

•     Will this data be used by more than one application?

•     Do I want an issuer to manage this data or should my

application manage it directly?

 


 

How Can You Uniquely Distinguish One

User from Another?

Because people aren’t born with unique identifiers (indeed, most

people treasure their privacy), differentiating one person from
another has always been, and will likely always be, a tricky problem.

Claims don’t make this any easier. Fortunately, not all applications

need to know exactly who the user is. Simply being able to identify

one returning user from another is enough to implement a shopping

cart, for example. Many applications don’t even need to go this far.

But other applications have per-user state that they need to track, so

they require a unique identifier for each user.

Traditional applications typically rely on a user’s sign-in name to

distinguish one user from the next. So what happens when you start

building claims-based applications and you give up control over au-

thentication? You’ll need to pick one (or a combination of multiple)

claims to uniquely identify your user, and you’ll need to rely on your

issuer to give you the same values for each of those claims every time

that user visits your application. It might make sense to ask the issuer

to give you a claim that represents a unique identifier for the user. This

can be tricky in a cross-realm federation scenario, where more than

one issuer is involved. In these more complicated scenarios, it helps to

remember that each issuer has a URI that identifies it and that can be

used to scope any identifier that it issues for a user. An example of

such a URI is http://issuer.fabrikam.com/unique-user-id-assigned-

from-fabrikams-realm.

Email addresses have convenient properties of uniqueness and

scope already built in, so you might choose to use an email claim as a

unique identifier for the user. If you do, you’ll need to plan ahead if

you want users to be able to change the email address associated with

their data. You’ll also need a way to associate a new email address with

that data.
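To make the issuer-scoping idea concrete, here is a sketch of how a claims-aware ASP.NET application using WIF might build a stable, issuer-scoped key for the current user. Whether your issuer actually sends a name-identifier claim depends on its configuration, so treat this as illustrative rather than as the guide’s prescribed approach.

```csharp
using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class UserKeys
{
    // Sketch: derive a unique key for the current user. Assumes the
    // issuer is configured to send a name-identifier claim.
    public static string GetUserKey()
    {
        var identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;

        Claim nameId = identity.Claims
            .First(c => c.ClaimType == ClaimTypes.NameIdentifier);

        // Prefixing the value with the issuer that vouched for it means
        // identifiers assigned in different realms can never collide.
        return nameId.OriginalIssuer + "/" + nameId.Value;
    }
}
```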

 

How Can You Get a List of All Possible Users and All Possible Claims?

One thing that’s important to keep in mind when you build a claims-based application is that you’re never going to know about all the users that could use your application. You’ve given up that control in exchange for less responsibility, worry, and hassle over programming against any one particular user store. Users just appear at your doorstep, presenting the token they got from the issuer that you trust. That token gives you information about who the user is and what he or she can do. In addition, if you’ve designed your authorization code properly, you don’t need to change your code to support new users, even if those users come from other realms, as they do in federation scenarios.

 


 

So how can you build a list of users that allows administrators to choose which users have permission to access your application and which don’t? The simple answer is to find another way. This is a perfect example of where an issuer should be involved with authorization decisions. The issuer shouldn’t issue tokens to users who aren’t privileged enough to use your application. It should be configured to do this without you having to do anything at all in your application.

When designing a claims-based application, always keep in mind that a certain amount of responsibility for identity has been lifted from your shoulders as an application developer. If an identity-related task seems difficult or impossible to build into your application logic, consider whether it’s possible for your issuer to handle that task for you.

 

Where Should Claims Be Issued?

The question of where claims should be issued is moot when you have a simple system with only one issuer. But when you have more complex systems where multiple issuers are chained into a path of trust that leads from the application back to the issuer in the user’s home realm, this question becomes very relevant.

Always get claims from authoritative sources.

The short answer to the question of where claims should be issued is “by the issuer that knows best.”

Take, for example, a claim such as a person’s email name. The email name of a user isn’t going to change based on which application he or she uses. It makes sense for this type of claim to be issued close to the user’s home realm. Indeed, it’s most likely that the first issuer in the chain, which is the identity provider, would be authoritative for the user’s email name. This means that downstream issuers and applications can benefit from that central claim. If the email name is ever updated, it only needs to be updated at that central location.

Now think about an “action” claim, which is specific to an application. An application for expense reporting might want to allow or disallow actions such as submitExpenseReport and approveExpenseReport. Another type of application, such as one that tracks bugs, would have very different actions, such as reportBug and assignBug. In some systems, you might find that it works best to have the individual applications handle these actions internally, based on higher-level claims such as roles or groups. But if you do decide to factor these actions out into claims, it would be best to have an issuer close to the application be authoritative for them. Having local authority over these sorts of claims means you can more quickly implement policy changes without having to contact a central authority.

What about a group claim or a role claim? In traditional RBAC (Role-Based Access Control) systems, a user is assigned to one or more groups, the groups are mapped to roles, and roles are mapped to actions. There are many reasons why this is a good design: the mapping from roles to actions for an application can be done by someone who is familiar with it and who understands the actions defined for that application. For example, the mapping from user to groups can be done by a central administrator who knows the semantics of each group. Also, while groups can be managed in a central store, roles and actions can be more decentralized and handled by the various departments and product groups that define them. This allows for a much more agile system where identity and authorization data can be centralized or decentralized as needed.
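As a toy sketch of this layering (all of the group, role, and action names below are hypothetical, not taken from any particular product), the centrally managed and application-managed mappings might look like this:

```csharp
using System.Collections.Generic;

// Managed centrally, by an administrator who knows each group's semantics.
var userToGroups = new Dictionary<string, string[]>
{
    { "mary@adatum.com", new[] { "Sales", "SalesManagers" } },
};

// Managed close to the expense application, by the team that
// understands the actions it defines.
var groupToRoles = new Dictionary<string, string[]>
{
    { "Sales",         new[] { "Submitter" } },
    { "SalesManagers", new[] { "Approver" } },
};

var roleToActions = new Dictionary<string, string[]>
{
    { "Submitter", new[] { "submitExpenseReport" } },
    { "Approver",  new[] { "approveExpenseReport" } },
};
```

Because each table has a different owner, the group assignments can change centrally without touching the application, and the role-to-action table can change locally without involving the central administrator.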

Issuers are typically placed at boundaries in organizations. Take, for example, a company with several departments. Each department might have its own issuer, while the company has a central issuer that acts as a gateway for claims that enter or leave it. If a user at this company accesses an application in another, similarly structured company, the request will end up being processed by four issuers:

• The departmental issuer, which authenticates the user and supplies an email name and some initial group claims
• The company’s central issuer, which adds more groups and some roles based on those groups
• The application’s central issuer, which maps roles from the user’s company to roles that the application’s company understands (this issuer may also add additional role claims based on the ones already present)
• The application’s departmental issuer, which maps roles onto actions

Issuers are typically found at organizational boundaries.

You can see that as the request crosses each of these boundaries, the issuers there enrich and filter the user’s security context by issuing claims that make sense for the target context, based on its requirements and the privacy policies. Is the email name passed all the way through to the application? That depends on whether the user’s company trusts the application’s company with that information, and whether the application’s company thinks the application needs to know that information.

 

What Technologies Do Claims and Tokens Use?

Security tokens that are passed over the Internet typically take one of two forms:

• Security Assertion Markup Language (SAML) tokens are XML-encoded structures that are embedded inside other structures such as HTTP form posts and SOAP messages.
• Simple Web Token (SWT) tokens are stored in the HTTP headers of a request or response.

The tokens are encrypted and can be stored on the client as cookies.

Security Assertion Markup Language (SAML) defines a language for exchanging security information expressed in the form of assertions about subjects. A subject may be a person or a resource (such as a computer) that has an identity in a security domain. A typical example of a subject is a person identified by an email address within a specific DNS domain. The assertions in the token can include information about authentication status, specific details of the subject (such as a name), and the roles valid for the subject that allow authorization decisions to be made by the relying party.
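For a feel of the format, the following is a heavily simplified sketch of a SAML 2.0 assertion; a real token also carries an XML signature, conditions such as a validity period and audience restriction, and subject-confirmation data. The issuer address and attribute values here are hypothetical.

```xml
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                ID="_a75adf55" Version="2.0"
                IssueInstant="2011-06-01T12:00:00Z">
  <saml:Issuer>http://issuer.fabrikam.com/trust</saml:Issuer>
  <saml:Subject>
    <saml:NameID>mary@fabrikam.com</saml:NameID>
  </saml:Subject>
  <saml:AttributeStatement>
    <saml:Attribute Name="role">
      <saml:AttributeValue>Approver</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
```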

The protocol used to transmit SAML tokens is often referred to as SAML-P. It is an open standard ratified by OASIS, and it is supported by ADFS 2.0. However, at the time of this writing it was not natively supported by Windows Identity Foundation (WIF). To use SAML-P with WIF, you must create or obtain a custom authentication module that uses the WIF extensibility mechanism.

Simple Web Token (SWT) is a compact name-value pair security token designed to be easily included in an HTTP header.
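By way of comparison with the XML format above, an SWT is a form-encoded set of name-value pairs, with an HMAC over the content as the final pair. The following sketch uses hypothetical issuer, audience, and claim values, shown unencoded and on separate lines for readability:

```
Issuer=https://issuer.fabrikam.com
&Audience=https://a-expense.adatum.com
&ExpiresOn=1319745600
&role=Approver
&HMACSHA256=<base64-encoded signature>
```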

The transfer of tokens between identity provider, issuer, client, and the relying party (the application) may happen through HTTP web requests and responses, or through web service requests and responses, depending on the nature of the client. Web browsers rely mainly on HTTP web requests and responses. Smart clients and other services (such as SharePoint BCS) use web service requests and responses.

Web service requests make use of a suite of security standards that fall under the heading of the WS* Extensions. The WS* standards include the following extensions:

• WS-Security. This specification defines a protocol for end-to-end message content security that supports a wide range of security token formats, trust domains, signature formats, and encryption technologies. It provides a framework that, in conjunction with other extensions, provides the ability to send security tokens as part of a message, to verify message integrity, and to maintain message confidentiality. The WS-Security mechanisms can be used for single tasks such as passing a security token, or in combination to enable signing and encrypting a message and providing a security token.
• WS-Trust. This specification builds on the WS-Security protocol to define additional extensions that allow the exchange of security tokens for credentials in different trust domains. It includes definitions of mechanisms for issuing, renewing, and validating security tokens; for establishing the presence of trust relationships between domains; and for brokering these trust relationships.

• WS-SecureConversation. This specification builds on WS-Security to define extensions that support the creation and sharing of a security context for exchanging multiple messages, and for deriving and managing more efficient session keys for use within the conversation. This can considerably increase the overall performance and security of the message exchanges.
• WS-Federation. This specification builds on the WS-Security and WS-Trust protocols to provide a way for a relying party to make the appropriate access control decisions based on the credibility of identity and attribute data that is vouched for by another realm. The standard defines mechanisms to allow different security realms to federate so that authorized access to resources managed in one realm can be provided to subjects whose identities are managed in other realms.
• WS-Federation: Passive Requestor Profile. This specification describes how the cross-realm identity, authentication, and authorization federation mechanisms defined in WS-Federation can be used by passive requesters such as web browsers to provide identity services. Passive requesters of this profile are limited to the HTTP protocol.

WS* is a suite of standards where each builds on other standards to provide additional capabilities or to meet specific scenario requirements.

 

Security Association Management Protocol (SAMP) and Internet Security Association and Key Management Protocol (ISAKMP) define standards for establishing security associations that define the header, authentication, payload encapsulation, and application layer services for exchanging key generation and authentication data that is independent of the key generation technique, encryption algorithm, and authentication mechanism in use. All of these are necessary to establish and maintain secure communications when using IP Security Service or any other security protocol in an Internet environment.

 

For more information about these standards and protocols, see Appendix C of this guide.

 


 

Questions

1. Which of the following protocols or types of claims token are typically used for single sign-on across applications in different domains and geographical locations?

   a. Simple Web Token (SWT)
   b. Kerberos ticket
   c. Security Assertion Markup Language (SAML) token
   d. Windows Identity

 

2. In a browser-based application, which of the following is the typical order for browser requests during authentication?

   a. Identity provider, token issuer, relying party
   b. Token issuer, identity provider, token issuer, relying party
   c. Relying party, token issuer, identity provider, token issuer, relying party
   d. Relying party, identity provider, token issuer, relying party

 

3. In a service request from a non-browser-based application, which of the following is the typical order of requests during authentication?

   a. Identity provider, token issuer, relying party
   b. Token issuer, identity provider, token issuer, relying party
   c. Relying party, token issuer, identity provider, token issuer, relying party
   d. Relying party, identity provider, token issuer, relying party

 

4. What are the main benefits of federated identity?

   a. It avoids the requirement to maintain a list of valid users, manage passwords and security, and store and maintain lists of roles for users in the application.
   b. It delegates user and role management to the trusted organization responsible for the user, instead of it being the responsibility of your application.
   c. It allows users to log onto applications using the same credentials, and choose an identity provider that is appropriate for the user and the application to validate these credentials.
   d. It means that your applications do not need to include authorization code.

 

5. How can home realm discovery be achieved?

   a. The token issuer can display a list of realms based on the configured identity providers and allow the user to select his home realm.
   b. The token issuer can ask for the user’s email address and use the domain to establish the home realm.
   c. The application can use the IP address to establish the home realm based on the user’s country/region of residence.
   d. The application can send a hint to the token issuer in the form of a special request parameter that indicates the user’s home realm.
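The hint described in option (d) is how WS-Federation’s passive profile usually works: the application or an intermediate issuer appends a whr parameter to the sign-in request, alongside the standard wa and wtrealm parameters. A sketch of such a request, with hypothetical addresses (parameter values would be URL-encoded in practice), looks like this:

```
GET https://issuer.adatum.com/federation/login
      ?wa=wsignin1.0
      &wtrealm=https://a-expense.adatum.com/
      &whr=http://issuer.fabrikam.com
```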

 

3 Claims-Based Single Sign-On for the Web and Windows Azure

 

This chapter walks you through an example of single sign-on for intranet and extranet web users who all belong to a single security realm. You’ll see examples of two existing applications that become claims-aware. One of the applications uses forms authentication, and one uses Windows authentication. Once the applications use claims-based authentication, you’ll see how it’s possible to interact with the applications either from the company’s internal network or from the public Internet.

For single sign-on, the issuer also creates a session with the user that works with different applications.

This basic scenario doesn’t show how to establish trust relationships across enterprises. (That is discussed in Chapter 4, “Federated Identity for Web Applications.”) It focuses on how to implement single sign-on and single sign-off within a security domain as a preparation for sharing resources with other security domains, and how to migrate applications to Windows Azure™. In short, this scenario contains the commonly used elements that will appear in all claims-aware applications.

 

The Premise

Adatum is a medium-sized company that uses the Microsoft Active Directory® directory service to authenticate the employees in its corporate network. Adatum’s sales force uses a-Order, Adatum’s order processing system, to enter, process, and manage customer orders. Adatum employees also use a-Expense, an expense tracking and reimbursement system for business-related expenses.

Both applications are built with ASP.NET 4.0 and are deployed in Adatum’s data center. Figure 1 shows a whiteboard diagram of the structure of a-Order and a-Expense.

 


 

figure 1
Adatum infrastructure before claims

(The whiteboard diagram shows Active Directory holding users and roles, with Kerberos authentication; a-Order and a-Expense built on ASP.NET, with a-Expense keeping its own user names, passwords, and profiles; and two other applications, a-Vacations and a-Facilities, built on Java. A browser user, John at Adatum, accesses all of them.)

 

The two applications handle authentication differently. The a-Order application uses Windows authentication. It recognizes the credentials used when employees logged on to the corporate network. The application doesn’t need to prompt them for user names and passwords. For authorization, a-Order uses roles that are derived from groups stored in Active Directory. In this way, a-Order is integrated into the Adatum infrastructure.

The user experience for a-Expense is a bit more complicated. The a-Expense application uses its own authentication, authorization, and user profile information. This data is stored in custom tables in an application database. Users enter a user name and password in a web form whenever they start the application. The a-Expense application’s authentication approach reflects its history. The application began as a Human Resources project that was developed outside of Adatum’s IT department. Over time, other departments adopted it. Now it’s a part of Adatum’s corporate IT solution.

Keeping the user database for forms-based authentication up to date is painful since this maintenance isn’t integrated into Adatum’s process for managing employee accounts.

The a-Expense access control rules use application-specific roles. Access control is intermixed with the application’s business logic.

Some of the user profile information that a-Expense uses also exists in Active Directory, but because a-Expense isn’t integrated with the corporate enterprise directory, it can’t access it. For example, Active Directory contains each employee’s cost center, which is also one of the pieces of information maintained in the a-Expense user profile database. Changing a user’s cost center in a-Expense is messy and error prone. All employees have to manually update their profiles when their cost centers change.

 

Goals and Requirements

Adatum has a number of goals in moving to a claims-based identity solution. One goal is to add the single sign-on capability to its network. This allows employees to log on once and then be able to access all authorized systems, including a-Expense. With single sign-on, users will not have to enter a user name and password when they use a-Expense.

Your choice of an identity solution should be based on clear goals and requirements.

A second goal is to enable Adatum employees to access corporate applications from the Internet. Members of the sales force often travel to customer sites and need to be able to use a-Expense and a-Order without the overhead of establishing a virtual private network (VPN) session.

A third goal is to plan for the future. Adatum wants a flexible solution that it can adapt as the company grows and changes. Right now, a priority is to implement an architecture that allows them to host some applications in a cloud environment such as Windows Azure. Moving operations out of their data center will reduce their capital expenditures and make it simpler to manage the applications. Adatum is also considering giving their customers access to some applications, such as a-Order. Adatum knows that claims-based identity and access control are the foundations needed to enable these plans.

Dealing with change is one of the challenges of IT operations.

While meeting these goals, Adatum wants to make sure its solution reuses its existing investment in its enterprise directory. The company wants to make sure user identities remain under central administrative control and don’t span multiple stores. Nonetheless, Adatum wants its business units to have the flexibility to control access to the data they manage. For example, not everyone at Adatum is authorized to use the a-Expense application. Currently, access to the program is controlled by application-specific roles stored in a departmentally administered database. Adatum’s identity solution must preserve this flexibility.

Finally, Adatum also wants its identity solution to work with multiple platforms and vendors. And, like all companies, Adatum wants to ensure that any Internet access to corporate applications is secure.

With these considerations in mind, Adatum’s technical staff has made the decision to modify both the a-Expense and the a-Order applications to support claims-based single sign-on.

 


 

Overview of the Solution

The first step was to analyze which pieces of identity information were common throughout the company and which were specific to particular applications. The idea was to make maximum use of the existing investment in directory information. Upon review, Adatum discovered that their Active Directory store already contained the necessary information. In particular, the enterprise directory maintained user names and passwords, given names and surnames, e-mail addresses, employee cost centers, office locations, and telephone numbers.

Claims can take advantage of existing directory information.

Since this information was already in Active Directory, the claims-based identity solution would not require changing the Active Directory schema to suit any specific application.

They determined that the main change would be to introduce an issuer of claims for the organization. Adatum’s applications will trust this issuer to authenticate users.

Adatum envisions that, over time, all of its applications will eventually trust the issuer. Since information about employees is a corporate asset, the eventual goal is for no application to maintain a custom employee database. Adatum recognizes that some applications have specialized user profile information that will not (and should not) be moved to the enterprise directory. Adatum wants to avoid adding application-specific attributes to its Active Directory store, and it wants to keep management as decentralized as possible.

Nobody likes changing their Active Directory schema. Adding app-specific rules or claims from a non–Active Directory data store to a claims issuer is easier.

For the initial rollout, the company decided to focus on a-Expense and a-Order. The a-Order application only needs configuration changes that allow it to use Active Directory groups and users as claims. Although there is no immediate difference in the application’s structure or functionality, this change will set the stage for eventually allowing external partners to access a-Order.

The a-Expense application will continue to use its own application-specific roles database, but the rest of the user attributes will come from claims that the issuer provides. This solution will provide single sign-on for a-Expense users, streamline the management of user identities, and allow the application to be accessible remotely from the Internet.

 

You might ask why Adatum chose claims-based identity rather than Windows authentication for a-Expense. Like claims, Windows authentication provides single sign-on, and it is a simpler solution than issuing claims and configuring the application to process claims. There’s no disagreement here: Windows authentication is extremely well suited for intranet single sign-on and should be used when that is the only requirement.

Adatum’s goals are broader than just single sign-on, however. Adatum wants its employees to have remote access to a-Expense and a-Order without requiring a VPN connection. Also, Adatum wants to move a-Expense to Windows Azure and eventually allow customers to view their pending orders in the a-Order application over the Internet. The claims-based approach is best suited to these scenarios.

Staging is helpful. You can change authentication first without affecting authorization.

 

Figure 2 shows the proposal, as it was presented on Adatum’s whiteboards by the technical staff. The diagram shows how internal users will be authenticated.

 

figure 2
Moving to claims-based identity

(The whiteboard diagram adds an issuer that authenticates users against Active Directory using Kerberos. The browser user, John at Adatum, signs in once to the issuer; a-Order and a-Expense, both ASP.NET applications, receive claims, while a-Expense still keeps its own roles and profiles database and user name/password store.)

 

This claims-based architecture allows Adatum employees to work from home just by publishing the application and the issuer through the firewall and proxies. Figure 3 shows the way Adatum employees can use the corporate intranet from home.

 

figure 3
Claims-based identity over the Internet

(The diagram shows both paths: John at Adatum inside the corporate network authenticating with Kerberos, and John at home reaching the issuer and the applications through the firewall and proxy. The remote browser supplies a user name and password to the issuer, which validates them against Active Directory and returns claims such as name and cost center to a-Order and a-Expense; a-Vacations and a-Facilities, built on Java, remain on the intranet.)

The Active Directory Federation Services (ADFS) proxy role provides intermediary services between an Internet client and an ADFS server that is behind a firewall.

Once the issuer establishes the remote user’s identity by prompting for a user name and password, the same claims are sent to the application, just as if the employee is inside the corporate firewall. This solution makes Adatum’s authentication strategy much more flexible. For example, Adatum could ask for additional authentication requirements, such as smart cards, PINs, or even biometric data, when someone connects from the Internet. Because authentication is now the responsibility of the issuer, and the applications always receive the same set of claims, the applications don’t need to be rewritten. The ability to change the way you authenticate users without having to change your applications is a real benefit of using claims.

You can also look at this proposed architecture from the point of view of the HTTP message stream. For more information, see the message sequence diagrams in Chapter 2, “Claims-Based Architectures.”

 


 

Inside the Implementation

Now is a good time to walk through the process of converting a-Expense into a claims-aware application in more detail. As you go through this section, you may want to download the Microsoft Visual Studio® solution 1SingleSignOn from http://claimsid.codeplex.com. This solution contains implementations of a-Expense and a-Order, with and without claims. If you are not interested in the mechanics, you should skip to the next section.

 

a-Expense before Claims

Before claims, the a-Expense application used forms authentication to establish user identity. It’s worth taking a moment to review the process of forms authentication so that the differences with the claims-aware version are easier to see. In simple terms, forms authentication consists of a credentials database and an HTTP redirect to a logon page.

By default, the downloadable implementations run standalone on your workstation, but you can also configure them for a multi-tiered deployment.

Figure 4 shows the a-Expense application with forms authentication.

Many web applications store user profile information in cookies rather than in the session state because cookies scale better on the server side. Scale wasn’t a concern here because a-Expense is a departmental application.

figure 4
a-Expense with forms authentication

(The flowchart: the application receives a page request; if the user is not already authenticated, it redirects to the logon page, validates the credentials against the users-and-passwords store, writes the user profile into session state, and redirects back to the original page; if the user is already authenticated, it reads the profile data from session state and shows the page.)

 


 

The logon page serves two purposes in a-Expense. It authenticates the user by asking for credentials that are then checked against the password database, and it also copies application-specific user profile information into the ASP.NET session state object for later use. Examples of profile information are the user’s full name, cost center, and assigned roles. The a-Expense application keeps its user profile information in the same database as user passwords, which is typical for applications that use forms authentication.

a-Expense intentionally uses custom code for authentication, authorization, and profiles instead of using the Membership, Roles, and Profile providers. This is typical of legacy applications that might have been written before ASP.NET 2.0.

In ASP.NET, adding forms authentication to a web application requires three steps: an annotation in the application’s Web.config file to enable forms authentication, a logon page that asks for credentials, and a handler method that validates those credentials against application data. Here is how those pieces work.

The Web.config file for a-Expense enables forms authentication with the following XML declarations:

 

<authentication mode="Forms">
  <forms loginUrl="~/login.aspx"
         requireSSL="true" … />
</authentication>

<authorization>
  <deny users="?" />
</authorization>

 

The authentication element tells the ASP.NET runtime (or Microsoft Internet Information Services (IIS) 7.0, whether running in ASP.NET integrated mode or classic mode) to automatically redirect any unauthenticated page request to the specified login URL. An authorization element that denies access to unauthenticated users (denoted by the special symbol “?”) is also required to make this redirection work.

Next, you’ll find that a-Expense has a Login.aspx page that uses the built-in ASP.NET Login control, as shown here.

<asp:Login ID="Login1" runat="server"
           OnAuthenticate="Login1OnAuthenticate" … >
</asp:Login>

 

Finally, if you look at the application, you’ll notice that the handler of the Login.aspx page’s OnAuthenticate event looks like the following.

 


 

public partial class Login : Page
{
    protected void Login1OnAuthenticate(object sender,
        AuthenticateEventArgs e)
    {
        var repository = new UserRepository();
        if (!repository.ValidateUser(this.Login1.UserName,
                                     this.Login1.Password))
        {
            e.Authenticated = false;
            return;
        }

        var user = repository.GetUser(this.Login1.UserName);
        if (user != null)
        {
            this.Session["LoggedUser"] = user;
            e.Authenticated = true;
        }
    }
}

 

This logic is typical for logon pages. You can see in the code that
the user name and password are checked first. Once the credentials
are validated, the user profile information is retrieved and stored in
the session state under the LoggedUser key. Notice that the details of
interacting with the database are encapsulated in the application’s
UserRepository class.

Setting the Authenticated property of the AuthenticateEventArgs
object to true signals successful authentication. ASP.NET then
redirects the request back to the original page.

At this point, normal page processing resumes with the execution

of the page’s OnLoad method. In the a-Expense application, this

method retrieves the user’s profile information that was saved in the

session state object and initializes the page’s controls. For example,

the logic might look like the following.

 

protected override void OnLoad(EventArgs e)
{
    var user = (User)Session["LoggedUser"];

    var repository = new ExpenseRepository();
    var expenses = repository.GetExpenses(user.Id);
    this.MyExpensesGridView.DataSource = expenses;
    this.DataBind();

    base.OnLoad(e);
}

 

52 chapter three

 

The session object contains the information needed to make
access control decisions. You can look in the code and see how
a-Expense uses an application-defined property called AuthorizedRoles
to make these decisions.

 

a-Expense with Claims

The developers only had to make a few changes to a-Expense to
replace forms authentication with claims. The process of validating
credentials was delegated to a claims issuer simply by removing the
logon page and configuring the ASP.NET pipeline to include the
Windows Identity Foundation (WIF) WSFederationAuthentication
Module. This module detects unauthenticated users and redirects
them to the issuer to get tokens with claims. Without a logon page,
the application still needs to write profile and authorization data into
the session state object, and it does this in the Session_Start method.
Those two changes did the job.

You only need a few changes to make the application claims-aware.

Figure 5 shows how authentication works now that a-Expense is

claims-aware.

 

[Figure 5 is a flow chart. On each page request, the
WSFederationAuthenticationModule checks whether the user is
already authenticated; if not, it redirects the browser to the claims
issuer, which authenticates the user and redirects back to the original
page with claims. If no session exists yet, Session_Start in Global.asax
runs and initializes the session state with data from the claims;
otherwise the application retrieves the user profile data from session
state and shows the page.]

figure 5
a-Expense with claims processing

Making a-Expense use claims was easy with WIF’s FedUtil.exe
utility. See Appendix A.

 


 

The Web.config file of the claims-aware version of a-Expense
contains a reference to WIF-provided modules. This Web.config file
is automatically modified when you run the FedUtil wizard, either
through the command line (FedUtil.exe) or through the Add STS
Reference command by right-clicking the web project in Visual
Studio.

If you look at the modified Web.config file, you’ll see that there
are changes to the authorization and authentication sections as well
as new configuration sections. The configuration sections include the
information needed to connect to the issuer. They include, for
example, the Uniform Resource Identifier (URI) of the issuer and
information about signing certificates.

We’re just giving the highlights here. You’ll also want to check out
the WIF and ADFS product documentation.

The first thing you’ll notice in the Web.config file is that the
authentication mode is set to None, while the requirement for
authenticated users has been left in place.

 

<authentication mode="None" />

<authorization>
  <deny users="?" />
</authorization>

 

The forms authentication module that a-Expense previously used has

been deactivated by setting the authentication mode attribute to

None. Instead, the WSFederationAuthenticationModule

(FAM) and SessionAuthenticationModule (SAM) are now in

charge of the authentication process.

 

The application’s Login.aspx page is no longer needed and can be
removed from the application.

This may seem a little weird. What’s going on is that authentication
has been moved to a different part of the HTTP pipeline.

Next, you will notice that the Web.config file contains two new
modules, as shown here.

<httpModules>
  <add name="WSFederationAuthenticationModule"
       type="Microsoft.IdentityModel.Web.WSFederationAuthenticationModule, …" />

  <add name="SessionAuthenticationModule"
       type="Microsoft.IdentityModel.Web.SessionAuthenticationModule, …" />
</httpModules>

 

When the modules are loaded, they’re inserted into the ASP.NET

processing pipeline in order to redirect the unauthenticated requests

to the issuer, handle the reply posted by the issuer, and transform the

 


 

user token sent by the issuer into a ClaimsPrincipal object. The
modules also set the value of the HttpContext.User property to the
ClaimsPrincipal object so that the application has access to it.

The WSFederationAuthenticationModule redirects the user to
the issuer’s logon page. It also parses and validates the security token
that is posted back. This module writes an encrypted cookie to avoid
repeating the logon process. The SessionAuthenticationModule
detects the logon cookie, decrypts it, and repopulates the
ClaimsPrincipal object.

The Web.config file contains a new configuration section,
microsoft.identityModel, that initializes the WIF environment.

The ClaimsPrincipal object implements the IPrincipal interface that
you already know. This makes it easy to convert existing applications.

<configSections>
  <section name="microsoft.identityModel"
           type="Microsoft.IdentityModel.Configuration.MicrosoftIdentityModelSection,
                 Microsoft.IdentityModel, …" />
</configSections>

 

The identity model section contains several kinds of information

needed by WIF, including the address of the issuer and the certificates

(the serviceCertificate and trustedIssuers elements) that are needed

to communicate with the issuer.

 

<microsoft.identityModel>
  <service>
    <audienceUris>
      <add value="https://{adatum hostname}/a-Expense.ClaimsAware/" />
    </audienceUris>

 

The value of “adatum hostname” changes depending on where
you deploy the sample code. In the development environment,
it is “localhost.”

 

Security tokens contain an audience URI. This indicates that the
issuer has issued a token for a specific “audience” (application).
Applications, in turn, will check that the incoming token was actually
issued for them. The audienceUris element lists the possible URIs.
Restricting the audience URIs prevents malicious clients from reusing
a token from a different application with an application that they are
not authorized to access.
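The audience check described above amounts to comparing the token’s audience URI against the configured list. The following sketch is illustrative only; WIF performs this check internally, driven by the audienceUris configuration, and the class name and comparison rules here are assumptions:

```csharp
using System;
using System.Linq;

// Illustrative only: WIF performs this check internally using the
// audienceUris configured in Web.config. The case-insensitive,
// whole-URI comparison shown here is a simplification.
public static class AudienceCheck
{
    public static bool IsAllowed(string tokenAudience, string[] configuredAudienceUris)
    {
        // The incoming token names the application it was issued for;
        // accept it only if that URI appears in the configured list.
        return configuredAudienceUris.Any(uri =>
            Uri.Compare(new Uri(uri), new Uri(tokenAudience),
                UriComponents.AbsoluteUri, UriFormat.Unescaped,
                StringComparison.OrdinalIgnoreCase) == 0);
    }
}
```

For example, a token issued for a-Order would be rejected by a-Expense because a-Order’s URI is not in a-Expense’s configured audience list.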

 


 

<federatedAuthentication>
  <wsFederation passiveRedirectEnabled="true"
                issuer="https://{adatum hostname}/{issuer endpoint}"
                realm="https://{adatum hostname}/a-Expense.ClaimsAware/"
                requireHttps="true" />
  <cookieHandler requireSsl="true"
                 path="/a-Expense.ClaimsAware/" />
</federatedAuthentication>

 

The federatedAuthentication section identifies the issuer and

the protocol required for communicating with it.

 

<serviceCertificate>
  <certificateReference x509FindType="FindByThumbprint"
    findValue="5a074d678466f59dbd063d1a98b1791474723365" />
</serviceCertificate>

Using HTTPS mitigates man-in-the-middle and replay attacks. This is
optional during development, but be sure to use HTTPS in production
environments.

The service certificate section gives the location of the certificate
used to decrypt the token, in case it was encrypted. Encrypting the
token is optional, and the decision to do it is up to the issuer. You
don’t need to encrypt the token if you’re using HTTPS, but encryption
is generally recommended as a security best practice.

 

<issuerNameRegistry
  type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry,
        Microsoft.IdentityModel, …">
  <trustedIssuers>
    <add thumbprint="f260042d59e14817984c6183fbc6bfc71baf5462"
         name="adatum" />
  </trustedIssuers>
</issuerNameRegistry>

 

A thumbprint is the result of hashing an X.509 certificate’s binary
(DER-encoded) contents; SHA-1 is a common algorithm for doing
that. A thumbprint uniquely identifies a certificate and, therefore,
its issuer. The issuerNameRegistry element contains the list of
thumbprints of the issuers the application trusts. Issuers are identified
by the thumbprint of their signing X.509 certificate. If the thumbprint
does not match the certificate embedded in the incoming token
signature, WIF will throw an exception. If the thumbprint matches,
the name attribute will be mapped to the Claim.Issuer property.
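As a concrete illustration of how a thumbprint is derived, the sketch below hashes a certificate’s DER-encoded bytes with SHA-1 and renders the result as a hex string; this matches what X509Certificate2.Thumbprint reports (without separators, and uppercase rather than the lowercase used in the Web.config samples). The class and method names are illustrative:

```csharp
using System;
using System.Security.Cryptography;

// Sketch: a thumbprint is SHA-1 over the certificate's DER-encoded
// bytes, rendered as hex. Lowercase here, to match the Web.config
// samples in this chapter.
public static class ThumbprintSketch
{
    public static string Compute(byte[] derEncodedCertificate)
    {
        using (var sha1 = SHA1.Create())
        {
            byte[] hash = sha1.ComputeHash(derEncodedCertificate);
            return BitConverter.ToString(hash)
                .Replace("-", string.Empty)
                .ToLowerInvariant();
        }
    }
}
```

Because SHA-1 produces 20 bytes, a thumbprint is always 40 hexadecimal characters, as in the findValue and thumbprint attributes shown above.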

In the code example, the name attribute adatum is required for

the scenario because the a-Expense application stores the federated

user name in the roles database. A federated user name has the for-

mat: adatum\username.
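A minimal sketch of a helper that builds such a name follows; the real sample’s GetFederatedUserName may differ in detail, and this version simply joins the issuer name and user name with a backslash:

```csharp
// Hypothetical helper matching the format described above
// (issuer\username, e.g. adatum\mary). Illustrative only; the
// sample application's GetFederatedUserName may differ.
public static class FederatedNames
{
    public static string GetFederatedUserName(string issuer, string userName)
    {
        return issuer + "\\" + userName;
    }
}
```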

 


 

The following procedure shows you how to find the thumbprint

of a specific certificate.

 

TO FIND A THUMBPRINT

 

1. On the taskbar, click Start, and then type mmc in the search

box.

 

2. Click mmc. A window appears that contains the Microsoft

Management Console (MMC) application.

 

3. On the File menu, click Add/Remove Snap-in.

 

4. In the Add or Remove Snap-ins dialog box, click
Certificates, and then click Add.

 

5. In the Certificates snap-in dialog box, select Computer

account, and then click Next.

 

6. In the Select Computer dialog box, select Local computer,

click Finish, and then click OK.

 

7. In the left pane, a tree view of all the certificates on your

computer appears. If necessary, expand the tree. Expand

the Personal folder. Expand the Certificates folder.

 

8. Click the certificate whose thumbprint you want.

 

9. In the Certificate Information dialog box, click the Details
tab, and then scroll down until you see the thumbprint.

In Windows 7, you’ll need to double-click to open the dialog, which
has the title Certificate, not Certificate Information.

This may seem like a lot of configuration, but the FedUtil wizard
handles it for you.

The changes in the Web.config file are enough to delegate
authentication to the issuer.

There’s still one detail to take care of. Remember from the
previous section that the logon handler (which has now been removed
from the application) was also responsible for storing the user profile
data in the session state object. This bit of logic is relocated to the
Session_Start method found in the Global.asax file. The Session_Start
method is automatically invoked by ASP.NET at the beginning
of a new session, after authentication occurs. The user’s identity is
now stored as claims that are accessed from the thread’s
CurrentPrincipal property. Here is what the Session_Start method
looks like.

 


 

protected void Session_Start(object sender, EventArgs e)
{
    if (this.Context.User.Identity.IsAuthenticated)
    {
        string issuer =
            ClaimHelper.GetCurrentUserClaim(
                System.IdentityModel.Claims.ClaimTypes.Name).OriginalIssuer;

        string givenName =
            ClaimHelper.GetCurrentUserClaim(
                WSIdentityConstants.ClaimTypes.GivenName).Value;

        string surname =
            ClaimHelper.GetCurrentUserClaim(
                WSIdentityConstants.ClaimTypes.Surname).Value;

        string costCenter =
            ClaimHelper.GetCurrentUserClaim(
                Adatum.ClaimTypes.CostCenter).Value;

        var repository = new UserRepository();
        string federatedUsername =
            GetFederatedUserName(issuer, this.User.Identity.Name);
        var user = repository.GetUser(federatedUsername);
        user.CostCenter = costCenter;
        user.FullName = givenName + " " + surname;

        this.Context.Session["LoggedUser"] = user;
    }
}

Note that the application does not go to the application data
store to authenticate the user because authentication has already
been performed by the issuer. The WIF modules automatically read
the security token sent by the issuer and set the user information in
the thread’s current principal object. The user’s name and some other
attributes are now claims that are available in the current security
context.

Putting globally significant data such as names and cost centers into
claims while keeping app-specific attributes in a local store is a
typical practice.

The user profile database is still used by a-Expense to store the
application-specific roles that apply to the current user. In fact,
a-Expense’s access control is unchanged whether or not claims are used.
The preceding code example invokes methods of a helper class
named ClaimHelper. One of its methods, the GetCurrentUserClaim
method, queries for claims that apply in the current context. You need
to perform several steps to execute this query:

 


 

1. Retrieve context information about the current user by
getting the static CurrentPrincipal property of the System.
Threading.Thread class. This object has the run-time type
IPrincipal.

2. Use a run-time type conversion to convert the current
principal object from IPrincipal to the type IClaimsPrincipal.
Because a-Expense is now a claims-aware application, the
run-time conversion is guaranteed to succeed.

3. Use the Identities property of the IClaimsPrincipal
interface to retrieve a collection of identities that apply
to the claims principal object from the previous step. The
object that is returned is an instance of the
ClaimsIdentityCollection class. Note that a claims principal
may have more than one identity, although this feature is
not used in the a-Expense application.

4. Retrieve the first identity in the collection. To do
this, use the collection’s indexer property with 0 as the
index. The object that is returned from this lookup is the
current user’s claims-based identity. The object has type
IClaimsIdentity.

5. Retrieve a claims collection object from the claims
identity object with the Claims property of the
IClaimsIdentity interface. The object that is returned is an
instance of the ClaimsCollection class. It represents the set
of claims that apply to the claims identity object from the
previous step.

6. At this point, if you iterate through the claims collection,
you can select a claim whose claim type matches the one
you are looking for. The following expression is an example
of how to do this.

claims.Single(c => c.ClaimType == claimType)

Note that the Single method assumes that there is exactly
one claim that matches the requested claim type. It will
throw an exception if more than one claim matches the
desired claim type or if no match is found. The Single
method returns an instance of the Claim class.

7. Finally, you extract the claim’s value with the Claim class’s
Value property. Claims values are strings.

Look at the implementation of the ClaimHelper class in the sample
code for an example of how to retrieve claims about the current user.
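To show the Single semantics from step 6 in isolation, the following sketch models claims as plain type/value pairs instead of WIF’s Claim class; the SimpleClaim and ClaimLookup types are illustrative, not part of WIF:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Plain-data stand-in for WIF's Claim class, so the Single()
// behavior can be demonstrated without the WIF assemblies.
public class SimpleClaim
{
    public string ClaimType { get; set; }
    public string Value { get; set; }
}

public static class ClaimLookup
{
    public static string GetClaimValue(IEnumerable<SimpleClaim> claims, string claimType)
    {
        // Single() throws InvalidOperationException if zero or more
        // than one claim matches, exactly as described in step 6.
        return claims.Single(c => c.ClaimType == claimType).Value;
    }
}
```

If a claim type can legitimately carry several values (role claims, for example), Where() is the appropriate operator instead of Single().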

 


 

a-Order before Claims

Unlike a-Expense, the a-Order application uses Windows
authentication. This has a number of benefits, including simplicity.
Enabling Windows authentication is as easy as setting an
attribute value in XML, as shown here.

 

<authentication mode="Windows" />

 

The a-Order application’s approach to access control is
considerably simpler than what you saw in a-Expense. Instead of
combining authentication logic and business rules, a-Order simply
annotates pages with roles in the Web.config file.

 

<authorization>
  <allow roles="Employee, Order Approver" />
  <deny users="*" />
</authorization>

 

The user interface of the a-Order application varies, depending

on the user’s current role.

 

base.OnInit(e);

 

this.OrdersGrid.Visible =

!this.User.IsInRole(Adatum.Roles.OrderApprover);

this.OrdersGridForApprovers.Visible =

this.User.IsInRole(Adatum.Roles.OrderApprover);

 

a-Order with Claims

Adding claims to a-Order is really just a configuration step. The
application code needs no change.

Converting Windows authentication to claims only requires a
configuration change.

If you download the project from http://claimsid.codeplex.com,
you can compare the Web.config files before and after conversion
to claims. It was just a matter of right-clicking the project in Visual
Studio and then clicking Add STS Reference. The process is very
similar to what you saw in the previous sections for the a-Expense
application.
The claims types required are still the users and roles that were
previously provided by Windows authentication.

 

Don’t forget that more than one value of a given claim type may be
present. For example, a single identity can have several role claims.

 


 

Signing out of an Application

 

The FederatedPassiveSignInStatus control is provided by WIF. The
following snippet from the Site.Master file shows how the single
sign-on scenario uses it to sign out of an application.

 

<idfx:FederatedPassiveSignInStatus
  ID="FederatedPassiveSignInStatus"
  runat="server"
  OnSignedOut="OnFederatedPassiveSignInStatusSignedOut"
  SignOutText="Logout"
  FederatedPassiveSignOut="true"
  SignOutAction="FederatedPassiveSignOut" />

 

The idfx prefix identifies the control as belonging to the
Microsoft.IdentityModel.Web.Controls namespace. The control causes
a browser redirect to the ADFS issuer, which logs out the user and
destroys any cookies related to the session.

In this single sign-on scenario, signing out from one application

signs the user out from all the applications they are currently signed

into in the single sign-on domain.

 

For details about how the simulated issuer in this sample supports

single sign-out, see the section “Handling Single Sign-out in the

Mock Issuer” later in this chapter.

 

The a-Expense application uses an ASP.NET session object to
maintain some user state, and it’s important that this session data is
cleared when a user signs out from the single sign-on domain. The
a-Expense application manages this by redirecting to a special
CleanUp.aspx page when the application handles the
WSFederationAuthenticationModule_SignedOut event in the
global.asax.cs file. The CleanUp.aspx page checks that the user has
signed out and then abandons the session. The following code example
shows the Page_Load event handler for this page.

 

protected void Page_Load(object sender, EventArgs e)
{
    if (this.User.Identity.IsAuthenticated)
    {
        this.Response.Redirect("~/Default.aspx", false);
    }
    else
    {
        this.Session.Abandon();

        var signOutImage = new byte[]
        {
            71, 73, …
        };

        this.Response.Cache.SetCacheability(
            HttpCacheability.NoCache);
        this.Response.ClearContent();
        this.Response.ContentType = "image/gif";
        this.Response.BinaryWrite(signOutImage);
    }
}

The CleanUp.aspx page must be listed as unauthenticated in the
Web.config file.

The byte array represents a GIF image of the green check mark
that the SignedOut.aspx page in the simulated issuer displays after the
single sign-out is complete.

An alternative approach would be to modify the claims issuer to
send the URL of the clean-up page in the wreply parameter when it
sends a wsignoutcleanup1.0 message to the relying party. However,
this would mean that the issuer, not the relying party, is responsible
for initiating the session clean-up process in the relying party.

 

Setup and Physical Deployment

 

The process for deploying a claims-aware web application follows
many of the same steps you already know for non-claims-aware
applications. The differences have to do with the special considerations
of the issuer. Some of these considerations include providing a
suitable test environment during development, migrating to a
production issuer, and making sure the issuer and the web application
are properly configured for Internet access.

 

Using a Mock Issuer

The downloadable versions of a-Expense and a-Order are set up by
default to run on a standalone development workstation. This is
similar to the way you might develop your own applications. It’s
generally easier to start with a single development machine.

Mock issuers simplify the development process.

To make this work, the developers of a-Expense and a-Order
wrote a small stub implementation of an issuer. You can find this code
in the downloadable Visual Studio solution. Look for the project with
the URL https://localhost/Adatum.SimulatedIssuer.1.

 


 

When you first run the a-Expense and a-Order applications, you’ll
find that they communicate with the stand-in issuer. The issuer issues
predetermined claims.
It’s not very difficult to write such a component, and you can
reuse the sample that we provide.

 

Isolating Active Directory

The a-Order application uses Windows authentication. Since
developers do not control the identities in their company’s enterprise
directory, it is sometimes useful to swap out Active Directory with a
stub during the development of your application.

Using a simple, developer-created claims issuer is a good practice
during development and unit testing. Your network administrator can
help you change the application configuration to use production
infrastructure components when it’s time for acceptance testing and
deployment.

The a-Order application (before claims) shows an example of this.
To use this technique, you need to make a small change to the
Web.config file to disable Windows authentication and then add a
hook in the session authentication pipeline to insert the user identities
of your choosing. Disable Windows authentication with the following
change to the Web.config file.

<authentication mode="None" />

The Global.asax file should include code that replaces the current
user with a programmer-supplied identity. The following is an example.

<script runat="server">

void Application_AuthenticateRequest(object sender, EventArgs e)
{
    this.Context.User = MaryMay;
}

private static IPrincipal MaryMay
{
    get
    {
        IIdentity identity = new GenericIdentity("mary");
        string[] roles = { "Employee", "Order Approver" };
        return new GenericPrincipal(identity, roles);
    }
}

</script>

 

Remove this code before you deploy your application.

 


 

Handling Single Sign-out in the Mock Issuer

The relying party applications (a-Order and a-Expense) use the
FederatedPassiveSignInStatus control to allow the user to log in and
log out. When the user clicks the log out link in one of the
applications, the following sequence of events takes place:

 

1. The user is logged out from the current application. The
WSFederationAuthenticationModule (FAM) deletes any
claims that the user has that relate to the current
application.

2. The FAM sends a wsignout1.0 WS-Federation command
to the issuer.

3. The mock issuer performs any necessary sign-out
operations with other identity providers, for example, by
signing the user out from Active Directory.

4. The mock issuer sends a wsignoutcleanup1.0
message to all the relying party applications that the user
has signed into. The mock issuer maintains this list for each
user in a cookie.

Note: The mock issuer sends the wsignoutcleanup1.0
message to the relying party applications by embedding
a specially constructed image tag in the sign-out page
that includes the wsignoutcleanup1.0 message in the
query string.

5. When the FAM in a relying party application intercepts the
wsignoutcleanup1.0 message, it deletes any claims that the
user has that relate to that application.

To find out more about the message flow when a user initiates the
single sign-out process, take a look at Appendix B.
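The image-tag trick from the note in step 4 can be sketched as follows. The wa=wsignoutcleanup1.0 query string parameter follows the WS-Federation convention; the helper class itself is hypothetical:

```csharp
using System;

// Hypothetical sketch of the sign-out page's image-tag trick: one
// <img> per relying party, each pointing at the RP with the
// wsignoutcleanup1.0 message in the query string. The browser
// fetches each image, delivering the clean-up message to each RP.
public static class SignOutCleanup
{
    public static string BuildImageTag(string relyingPartyUrl)
    {
        string url = relyingPartyUrl + "?wa=wsignoutcleanup1.0";
        return "<img src=\"" + url + "\" />";
    }
}
```

The issuer would emit one such tag for every relying party recorded in its per-user cookie.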

 

Converting to a Production Issuer

When you are ready to deploy to a production environment, you’ll
need to migrate from the simulated issuer that runs on your
development workstation to a component such as ADFS 2.0.

Remove the mock issuers when you deploy the application.

Making this change requires two steps. First, you need to modify
the web application’s Web.config file using FedUtil so that it points
to the production issuer. Next, you need to configure the issuer so
that it recognizes requests from your web application and provides
the appropriate claims.
Appendix A of this guide walks you through the process of using
FedUtil and shows you how to change the Web.config files.

You can refer to documentation provided by your production

issuer for instructions on how to add a relying party and how to add

 


 

claims rules. Instructions for the samples included in this guide can be

found at http://claimsid.codeplex.com.

 

Enabling Internet Access

One of the benefits of outsourcing authentication to an issuer is that

existing applications can be accessed from the external Internet very

easily. The protocols for claims-based identity are Internet-friendly.

All you need to do is make the application and the issuer externally

addressable. You don’t need a VPN.

If you decide to deploy outside of the corporate firewall, be aware

that you will need certificates from a certificate authority for the

hosts that run your web application and issuer. You also need to make

sure that you configure your URLs with fully qualified host names or

static IP addresses. The ADFS 2.0 proxy role provides specific support

for publishing endpoints on the Internet.

 

Variation—Moving to Windows Azure

 

The last stage of Adatum’s plan is to move a-Expense to Windows
Azure. Windows Azure uses Microsoft data centers to provide
developers with an on-demand compute service and storage to host,
scale, and manage web applications on the Internet. This variation
shows the power and flexibility of a claims-based approach. The
a-Expense code doesn’t change at all. You only need to edit its
Web.config file.

It’s easy to move a claims-aware application to Windows Azure.

As you go through this section, you may want to download the
Visual Studio® solution from http://claimsid.codeplex.com.
Figure 6 shows what Adatum’s solution looks like.

 

[Figure 6 is a deployment diagram. John’s browser at Adatum (1)
gets a token from the Adatum issuer, which trusts Active Directory
(Kerberos), and then (2) sends the token to, and accesses, the
a-Expense application hosted in Windows Azure.]

figure 6
a-Expense on Windows Azure

 


 

From the perspective of Adatum’s users, the location of the
a-Expense application is irrelevant except that the application’s URL
might change once it is on Windows Azure, but even that can be
handled by mapping CNAMEs to a Windows Azure URL. Otherwise,
its behavior is the same as if it were located on one of Adatum’s
servers. This means that the sequence of events is exactly the same as
before, when a-Expense became claims-aware. The first time a user
accesses the application, he will not be authenticated, so the WIF
module redirects him to the configured issuer that, in this case, is the
Adatum issuer.

The issuer authenticates the user and then issues a token that
includes the claims that a-Expense requires, such as the user’s name
and cost center. The issuer then redirects the user back to the
application, where a session is established. Note that, even though it
is located on the Internet, a-Expense requires the same claims as when
it was located on the Adatum intranet.

Obviously, for any user to use an application on Windows Azure,
it must be reachable from his computer. This scenario assumes that
Adatum’s network, including its DNS server, firewalls, and proxies, is
configured to allow its employees to have access to the Internet.

Notice, however, that the issuer doesn’t need to be available to
external resources. The a-Expense application never communicates
with it directly. Instead, it uses browser redirections and follows the
protocol for passive clients. For more information about this protocol,
see Chapter 2, “Claims-Based Architectures,” and Appendix B.

 

Hosting a-Expense on Windows Azure

The following procedures describe how to configure the certificates
that you will upload to Windows Azure and the changes you must
make to the Web.config file. These procedures assume that you
already have a Windows Azure token. If you don’t, see
http://www.microsoft.com/windowsazure/getstarted/ to learn how
to do this.

 

TO CONFIGURE THE CERTIFICATES

 

1. In Visual Studio, open the Windows Azure project, such as

a-expense.cloud. Right-click the a-Expense.ClaimsAware

role, and then click Properties.

 

2. If you need a certificate’s thumbprint, click Certificates.

Along with other information, you will see the thumbprint.

 

3. Click Endpoints, and then select HTTPS:. Set the Name

field to HttpsIn. Set the Port field to the port number that

you want to use. The default is 443. Select the certificate

name from the SSL certificate name drop-down box. The

 


 

default is localhost. The name should be the same as the

name that is listed on the Certificates tab.

 

Note that the certificate that is uploaded is only used for SSL and

not for token encryption. A certificate from Adatum is only necessary

if you need to encrypt tokens.

 

Both Windows Azure and WIF can decrypt tokens. You must upload

the certificate in the Windows Azure portal and configure the web

role to deploy to the certificate store each time there is a new

instance. The WIF <serviceCertificate> section should point to

that deployed certificate.

 

The following procedure shows you how to publish the a-Expense

application to Windows Azure.

 

TO PUBLISH A-EXPENSE TO WINDOWS AZURE

 

1. In Microsoft Visual Studio 2010, open the a-expense.cloud
solution.

2. Upload the localhost.pfx certificate to the Windows Azure
project. The certificate is located at
[samples-installation-directory]\Setup\DependencyChecker\certs\localhost.pfx.
The password is "xyz".

 

3. Modify the a-Expense.ClaimsAware application’s
Web.config file by replacing the <microsoft.identityModel>
section with the following XML code. You must replace the
{service-url} element with the service URL that you
selected when you created the Windows Azure project.

 

<microsoft.identityModel>
  <service>
    <audienceUris>
      <add value="https://{service-url}.cloudapp.net/" />
    </audienceUris>
    <federatedAuthentication>
      <wsFederation passiveRedirectEnabled="true"
        issuer="https://{adatum hostname}/{issuer endpoint}"
        realm="https://{service-url}.cloudapp.net/"
        requireHttps="true" />
      <cookieHandler requireSsl="true" />
    </federatedAuthentication>
    <issuerNameRegistry
      type="Microsoft.IdentityModel.Tokens.
            ConfigurationBasedIssuerNameRegistry,
            Microsoft.IdentityModel, Version=3.5.0.0,
            Culture=neutral,
            PublicKeyToken=31bf3856ad364e35">
      <trustedIssuers>
        <!-- Adatum's identity provider -->
        <add thumbprint=
          "f260042d59e14817984c6183fbc6bfc71baf5462"
          name="adatum" />
      </trustedIssuers>
    </issuerNameRegistry>
    <certificateValidation
      certificateValidationMode="None" />
  </service>
</microsoft.identityModel>

 

4. Right-click the a-expense.cloud project, and then click

Publish. This generates a ServiceConfiguration file and the

actual package for Windows Azure.

 

5. Deploy the ServiceConfiguration file and package to the

Windows Azure project.

 

Once the a-Expense application is deployed to Windows Azure,

you can log on to http://windows.azure.com to test it.

 

If you were to run this application on more than one role instance in

Windows Azure (or in an on-premises web farm), the default cookie

encryption mechanism (which uses DPAPI) is not appropriate, since

each machine has a distinct key.

In this case, you would need to replace the default SessionSecurity

TokenHandler object and configure it with a different cookie

transformation such as RsaEncryptionCookieTransform or a

custom one. The “web farm” sample included in the WIF SDK

illustrates this in detail.
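As a sketch of that fix, the following C# fragment swaps in RSA-based cookie transforms keyed to the service certificate, which every role instance shares. The wiring follows the pattern used by the WIF SDK’s web farm sample, but this is an illustrative outline rather than the sample’s exact code.

```csharp
using System.Collections.Generic;
using Microsoft.IdentityModel.Tokens;
using Microsoft.IdentityModel.Web;
using Microsoft.IdentityModel.Web.Configuration;

// In Global.asax: wire up before the first request is processed.
void Application_Start()
{
    FederatedAuthentication.ServiceConfigurationCreated += (sender, e) =>
    {
        // Replace the default DPAPI transforms with RSA transforms keyed
        // to the service certificate, so any instance holding the
        // certificate's private key can decrypt the session cookie.
        var transforms = new List<CookieTransform>
        {
            new DeflateCookieTransform(),
            new RsaEncryptionCookieTransform(e.ServiceConfiguration.ServiceCertificate),
            new RsaSignatureCookieTransform(e.ServiceConfiguration.ServiceCertificate)
        };
        var handler = new SessionSecurityTokenHandler(transforms.AsReadOnly());
        e.ServiceConfiguration.SecurityTokenHandlers.AddOrReplace(handler);
    };
}
```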

 

———————– Page 105———————–

 

68 chapter three

 

Questions

 

1. Before Adatum updated the a-Expense and a-Order applica-

tions, why was it not possible to use single sign-on?

 

a. The applications used different sets of roles to

manage authorization.

 

b. a-Order used Windows authentication and a-Expense

used ASP.NET forms authentication.

 

c. In the a-Expense application, the access rules were

intermixed with the application’s business logic.

 

d. You cannot implement single sign-on when user

profile data is stored in multiple locations.

 

2. How does the use of claims facilitate remote web-based

access to the Adatum applications?

 

a. Using Active Directory for authentication makes it

difficult to avoid having to use VPN to access the

applications.

 

b. Using claims means that you no longer need to use

Active Directory.

 

c. Protocols such as WS-Federation transport claims in

tokens as part of standard HTTP messages.

 

d. Using claims means that you can use ASP.NET forms-

based authentication for all your applications.

 

3. In a claims-enabled ASP.NET web application, you typically

find that the authentication mode is set to None in the

Web.config file. Why is this?

 

a. The WSFederationAuthenticationModule is now

responsible for authenticating the user.

 

b. The user must have already been authenticated by an

external system before they visit the application.

 

c. Authentication is handled in the On_Authenticate

event in the global.asax file.

 

d. The WSFederationAuthenticationModule is now

responsible for managing the authentication process.

 

———————– Page 106———————–

 

claims-based single sign-on for the web and windows azure 69

 

4. Claims issuers always sign the tokens they send to a relying

party. However, although it is considered best practice, they

might not always encrypt the tokens. Why is this?

 

a. Relying parties must be sure that the claims come

from a trusted issuer.

 

b. Tokens may be transferred using SSL.

 

c. The claims issuer may not be able to encrypt the token

because it does not have access to the encryption key.

 

d. It’s up to the relying party to state whether or not it

accepts encrypted tokens.

 

5. The FederatedPassiveSignInStatus control automatically

signs a user out of all the applications she signed into in the

single sign-on domain.

 

a. True.

 

b. False. You must add code to the application to per-

form the sign-out process.

 

c. It depends on the capabilities of the claims issuer. The

issuer is responsible for sending sign-out messages to

all relying parties.

 

d. If your relying party uses HTTP sessions, you must add

code to explicitly abandon the session.

 

More Information

 

Appendix A of this guide walks through the use of FedUtil and also

shows you how to edit the Web.config files and where to locate your

certificates.

MSDN® contains a number of helpful articles, including MSDN

Magazine’s “A Better Approach For Building Claims-Based WCF Ser-

vices” (http://msdn.microsoft.com/en-us/magazine/dd278426.aspx).

To learn more about Windows Azure, see the Windows Azure

Platform at http://www.microsoft.com/windowsazure/.

 

———————– Page 107———————–

 

 

———————– Page 108———————–

 

4 Federated Identity for

Web Applications

 

Many companies want to share resources with their partners, but how

can they do this when each business is a separate security realm with

independent directory services, security, and authentication? One

answer is federated identity. Federated identity helps overcome

some of the problems that arise when two or more separate security

realms use a single application. It allows employees to use their local

corporate credentials to log on to external networks that have trust

relationships with their company. For an overview, see the section

“Federating Identity across Realms” in Chapter 2, “Claims-Based

Architectures.”

Federated identity links independent security realms.

In this chapter, you’ll learn how Adatum lets one of its customers,

Litware, use the a-Order application that was introduced in Chapter

3, “Claims-Based Single Sign-On for the Web.”

 

The Premise

 

Now that Adatum has instituted single sign-on (SSO) for its employ-

ees, it’s ready to take the next step. Customers also want to use the

a-Order program to track an order’s progress from beginning to end.

They expect the program to behave as if it were an application within

their own corporate domain. For example, Litware is a longstanding

client of Adatum’s. Their sales manager, Rick, wants to be able to log

on with his Litware credentials and use the a-Order program to deter-

mine the status of all his orders with Adatum. In other words, he

wants the same single sign-on capability that Adatum’s employees

have. However, he doesn’t want separate credentials from Adatum

just to use a-Order.

Adatum does not want to maintain accounts for another company’s

users of its web application, since maintaining accounts for

third-party users can be expensive. Federated identity reduces the

cost of account maintenance.

 

71

 

———————– Page 109———————–

 

72 chapter four

 

Goals and Requirements

 

The goal of this scenario is to show how federated identity can make

the partnership between Adatum and Litware more efficient. With

federated identity, one security domain accepts an identity that

comes from another domain. This lets people in one domain access

resources located in the other domain without presenting additional

credentials. The Adatum issuer will trust Litware to authoritatively

issue claims about its employees.

In addition to the goals, this scenario has a few other require-

ments. One is that Adatum must control access to the order status

pages and the information that is displayed, based on the partner that

is requesting access to the program. In other words, Litware should

only be able to browse through its own orders and not another com-

pany’s. Furthermore, Litware allows employees like Rick, who are in

the Sales department, to track orders.

Another requirement is that, because Litware is only one of Ada-

tum’s many partners that will access the program, Adatum must be

able to find out which issuer has the user’s credentials. This is called

home realm discovery. For more information, see Chapter 2, “Claims-

Based Architectures.”

One assumption for this chapter is that Litware has already de-

ployed an issuer that uses WS-Federation, just as the Adatum issuer

does.

WS-Federation is a specification that defines how companies can

share identities across security boundaries that have their own au-

thentication and authorization systems. (For more information about

WS-Federation, see Chapter 2, “Claims-Based Architectures.”) This

can only happen when legal agreements between Litware and Adatum

that protect both sides are already in place. A second assumption is

that Litware should be able to decide which of its employees can ac-

cess the a-Order application.

Security Assertion Markup Language (SAML) is another protocol

you might consider for a scenario like this. ADFS 2.0 supports SAMLP.

 

Overview of the Solution

 

Once the solution is in place, when Rick logs on to the Litware net-

work, he will access a-Order just as he would a Litware application.

From his perspective, that’s all there is to it. He doesn’t need a special

password or user name. It’s business as usual. Figure 1 shows the

architecture that makes Rick’s experience so painless.

The application can be modified to accept claims from a partner

organization.

 

———————– Page 110———————–

 

federated identity for web applications 73

 

[Figure 1 is a diagram of the deployment. Rick’s browser at Litware

authenticates with Active Directory (using Kerberos) and obtains a

token from the Litware issuer, which acts as the identity provider (IP).

The Adatum issuer, which acts as the federation provider (FP), trusts

the Litware issuer, maps the claims, and issues an Adatum token. The

browser then presents that token to the a-Order application to get

the orders.]

figure 1

Federated identity between Adatum and Litware

 

As you can see, there have been two changes to the infrastructure

since Adatum instituted single sign-on. A trust relationship now exists

between the Adatum and Litware security domains, and the Adatum

issuer has been configured with an additional capability: it can now

act as a federation provider (FP). A federation provider grants access

to a resource, such as the a-Order application, rather than verifying an

identity. When processing a client request, the a-Order application

relies on the Adatum issuer. The Adatum issuer, in turn, relies on the

Litware issuer that, in this scenario, acts as an identity provider (IdP).

Of course, the diagram represents just one implementation choice;

separating Adatum’s identity provider and federation provider would

also be possible. Keep in mind that each step also uses HTTP redirec-

tion through the client browser but, for simplicity, this is not shown

in the diagram.

 

In the sample solution, there are two Adatum issuers: one is the

Adatum identity provider and one is the Adatum federation provider.

This makes it easier to understand how the sample works. In the real

world, a single issuer would perform both of these roles.

 

The following steps grant access to a user in another security

domain:

 

1. Rick is using a computer on Litware’s network. He is already

authenticated with Active Directory® directory service. He

opens a browser and navigates to the a-Order application.

The application is configured to trust Adatum’s issuer (the

 

———————– Page 111———————–

 

74 chapter four

 

federation provider). The application has no knowledge of

where the request comes from. It redirects Rick’s request to

the federation provider.

 

2. The federation provider presents the user with a page listing

different identity providers that it trusts. At this point, the

federation provider doesn’t know where Rick comes from.

 

3. Rick selects Litware from the list and then Adatum’s

federation provider redirects him to the Litware issuer to

verify that Rick is who he says he is.

4. Litware’s identity provider verifies Rick’s credentials and

returns a security token to Rick’s browser. The browser

sends the token back to the federation provider. The claims

in this token are configured for the Adatum federation

provider and contain information about Rick that is relevant

to Adatum. For example, the claims establish his name and

that he belongs to the sales organization. The process of

verifying the user’s credentials may include additional steps

such as presenting a logon page and querying Active

Directory or, potentially, other attribute repositories.

5. The Adatum federation provider validates and reads the

security token issued by Litware and creates a new token

that can be used by the a-Order application. Claims issued

by Litware are transformed into claims that are understood

by Adatum’s a-Order application. (The mapping rules that

translate Litware claims into Adatum claims were created

when Adatum configured its issuer to accept Litware’s

issuer as an identity provider.)

In the sample code, home realm discovery is explicit, but this

approach has caveats. For one, it discloses all of Adatum’s partners,

and some companies may not want to do this.

Notice that Adatum’s federation provider is a “relying party” to

Litware’s identity provider.

 

6. As a consequence of the claim mappings, Adatum’s issuer

removes some claims and adds others that are needed for

the a-Order application to accept Rick as a user. The

Adatum issuer uses browser redirection to send the new

token to the application. Windows® Identity Foundation

(WIF) validates the security token and extracts the claims.

It creates a ClaimsPrincipal and assigns it to HttpContext.

User. The a-Order application can then access the claims for

authorization decisions. For example, in this scenario, orders

are filtered by organization; the organization name is

provided as a claim.

You can see these steps in more detail in Appendix B, which shows

a detailed message sequence diagram for using a browser as the

client.
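To make the last step concrete, the following C# fragment sketches how a claims-aware page could read the organization claim and use it to filter orders. The claim type URI is the one issued by Adatum’s mapping rules; the repository type and its method name are illustrative, not taken from the sample.

```csharp
using System.Linq;
using System.Web;
using Microsoft.IdentityModel.Claims;

public static class OrderFilter
{
    // The claim type that the Adatum federation provider issues.
    const string OrganizationClaimType =
        "http://schemas.adatum.com/claims/2009/08/organization";

    public static string CurrentOrganization()
    {
        // WIF has already validated the token and set HttpContext.User.
        var principal = (IClaimsPrincipal)HttpContext.Current.User;
        return principal.Identities[0].Claims
            .First(c => c.ClaimType == OrganizationClaimType)
            .Value;
    }
}

// Usage in a page: show only the caller's own orders (hypothetical repository).
// var orders = repository.GetOrdersByCompanyName(OrderFilter.CurrentOrganization());
```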

 

———————– Page 112———————–

 

federated identity for web applications 75

 

The Adatum federation provider issuer mediates between the

application and the external issuer. You can think of this as a logical

role that the Adatum issuer takes on. The federation provider has two

responsibilities. First, it maintains a trust relationship with Litware’s

issuer, which means that the federation provider accepts and under-

stands Litware tokens and their claims.

Second, the federation provider needs to translate Litware claims

into claims that a-Order can understand. The a-Order application only

accepts claims from Adatum’s federation provider (this is its trusted

issuer). In this scenario, a-Order expects claims of type Role in order

to authorize operations on its website. The problem is that Litware

claims don’t come from Adatum and they don’t have roles. In the

scenario, Litware claims establish the employee’s name and organiza-

tional group. Rick’s organization, for example, is Sales. To solve this

problem, the federation provider uses mapping rules that turn a Lit-

ware claim into an Adatum claim.

Check out the setup and deployment section of the chapter to see

how to establish a trust relationship between issuers in separate

trust domains.

The following table summarizes what happens to input claims

from Litware after the Adatum federation provider transforms them

into Adatum output claims.

 

Input conditions                               Output claims

Claim issuer: Litware;                         Claim issuer: Adatum;
Claim type: Group; Claim value: Sales          Claim type: Role; Claim value: Order Tracker

Claim issuer: Litware                          Claim issuer: Adatum;
                                               Claim type: Organization; Claim value: Litware

Claim issuer: Litware;                         Claim issuer: Adatum;
Claim type: name                               Claim type: name; Claim value: copied from input

Active Directory Federation Services (ADFS) 2.0 includes a claims

rule language that lets you define the behavior of the issuer when it

creates new tokens. What all of these rules generally mean is that if a

set of conditions is true, you can issue some claims.

These are the three rules that the Adatum FP uses:

•     => issue(Type = "http://schemas.adatum.com/claims/2009/08/organization", Value = "Litware");

•     c:[Type == "http://schemas.xmlsoap.org/claims/Group", Value == "Sales"] => issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/role", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = "Order Tracker", ValueType = c.ValueType);

•     c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"] => issue(claim = c);

 

———————– Page 113———————–

 

76 chapter four

 

In all the rules, the part before the “=>” is the condition that must

be true before the rule applies. The part after the “=>” indicates the

action to take, which is usually the creation of an additional claim.

The first rule says that the federation provider will create a claim

of type Organization with the value Litware; because the rule has no

condition, it applies to every token that arrives from the Litware

issuer. The second rule specifies that if there’s a claim of type Group

with value Sales, the federation provider will create a claim of type

Role with the value Order Tracker. The third rule copies a claim of

type name.

An important part of the solution is home realm discovery. The

a-Order application needs to know which issuer to direct users to for

authentication. If Rick opens his browser and types http://www.

adatum.com/ordertracking, how does a-Order know that Rick can

be authenticated by Litware’s issuer? The fact is that it doesn’t. The

a-Order application relies on the federation provider to make that

decision. The a-Order application always redirects users to the fed-

eration provider.

There are no partner-specific details in the a-Order application.

Partner details are kept in the FP.

This approach has two potential issues: it discloses information

publicly about Litware’s relationship with Adatum, and it imposes an

extra step on users who might be confused as to which selection is

appropriate.

You can resolve these issues by giving the application a hint about

the user’s home realm. For example, Litware could send a parameter

in a query string that specifies the sender’s security domain. The ap-

plication can use this hint to determine the federation provider’s be-

havior. For more information, see “Home Realm Discovery” in Chapter

2, “Claims-Based Architectures.”

 

Asking the user to pick an identity provider also increases the risk

of a phishing attack.

An issuer can accept the whr parameter as a way to specify

someone’s home realm.
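For example, a WS-Federation passive sign-in request that carries a home realm hint might look like the following. The hostnames and paths are placeholders, and the exact URL format depends on the issuer; wa, wtrealm, and whr are the standard WS-Federation passive parameters.

```
https://{adatum-fp}/FederationPassive/
    ?wa=wsignin1.0
    &wtrealm=https://{application-url}/
    &whr=https://{litware-issuer}/trust
```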

 

———————– Page 114———————–

 

federated identity for web applications 77

 

Benefits and Limitations

 

Federated identity is an example of how claims support a flexible in-

frastructure. Adatum can easily add customers by setting up the trust

relationship in the federation provider and creating the correct claims

mappings. Thanks to WIF, dealing with claims in a-Order is straight-

forward and, because Adatum is using ADFS 2.0, creating the claim

mapping rules is also fairly simple. Notice that the a-Order application

itself didn’t change. Also, creating a federation required incremental

additions to an infrastructure that was first put in place to implement

single sign-on.

Another benefit is that the claims that Litware issues are about

things that make sense within the context of the organization: Lit-

ware’s employees and their groups. All the identity differences be-

tween Litware and Adatum are corrected on the receiving end by

Adatum’s federation provider. Litware doesn’t need to issue Adatum-

specific claims. Although this is technically possible, it can rapidly

become difficult and costly to manage as a company adds new rela-

tionships and applications.

Federated identity requires a lot less maintenance and

troubleshooting. User accounts don’t have to be copied and

maintained across security realms.

 

Inside the Implementation

 

The Microsoft® Visual Studio® development system solution named

2-Federation found at http://claimsid.codeplex.com is an example of

how to use federation. The structure of the application is very similar

to what you saw in Chapter 3, “Claims-Based Single Sign-On for the

Web.” Adding federated identity did not require recompilation or

changes to the Web.config file. Instead, the issuer was configured to

act as a federation provider and a trust relationship was established

with an issuer that acts as an identity provider. This process is de-

scribed in the next section. Also, the mock issuers were extended to

handle the federation provider role.

Adding federated identity to an existing claims-aware application

requires only a configuration change.

 

Setup and Physical Deployment

 

The Visual Studio solution named 2-Federation on CodePlex is ini-

tially configured to run on a stand-alone development machine. The

solution includes projects that implement mock issuers for both

Litware and Adatum.

 

———————– Page 115———————–

 

78 chapter four

 

Using Mock Issuers for Development

and Testing

Mock issuers are helpful for development, demonstration, and testing

because they allow the end-to-end application to run on a single host.

The WIF SDK includes a Visual Studio template that makes it easy

to create a simple issuer class that derives from the SecurityToken

Service base class. You then provide definitions for the GetScope and

GetOutputClaimsIdentity methods, as shown in the downloadable

code sample that accompanies this scenario.
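A minimal sketch of such an issuer follows. The class name and the claims issued here are illustrative; the downloadable sample’s version differs, but the overridden methods and the WIF types are those the SDK template generates.

```csharp
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Protocols.WSTrust;
using Microsoft.IdentityModel.SecurityTokenService;

public class MockIssuer : SecurityTokenService
{
    public MockIssuer(SecurityTokenServiceConfiguration configuration)
        : base(configuration) { }

    // Decide the scope of the request: where the token applies
    // and which credentials sign it.
    protected override Scope GetScope(
        IClaimsPrincipal principal, RequestSecurityToken request)
    {
        var scope = new Scope(request.AppliesTo.Uri.OriginalString,
                              SecurityTokenServiceConfiguration.SigningCredentials);
        scope.TokenEncryptionRequired = false; // mock issuer: skip token encryption
        scope.ReplyToAddress = scope.AppliesToAddress;
        return scope;
    }

    // Build the set of claims that goes into the issued token.
    protected override IClaimsIdentity GetOutputClaimsIdentity(
        IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
    {
        var outputIdentity = new ClaimsIdentity();
        outputIdentity.Claims.Add(
            new Claim(ClaimTypes.Name, principal.Identity.Name));
        // Illustrative group claim of the kind the Litware issuer sends.
        outputIdentity.Claims.Add(
            new Claim("http://schemas.xmlsoap.org/claims/Group", "Sales"));
        return outputIdentity;
    }
}
```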

When the developers at Adatum want to deploy their application,

they will modify the configuration so that it uses servers provided by

Adatum and Litware. To do this, you need to establish a trust relation-

ship between the Litware and Adatum issuers and modify the a-Order.

OrderTracking application’s Web.config file for the Adatum issuer.

Procedures for establishing trust can be automated by using

metadata. For example, in ADFS 2.0, you can use the

FederationMetadata.xml file if you prefer a more automated

approach. The mock issuers provided in the sample code do not

provide this metadata.

Establishing Trust Relationships

In the production environment, Adatum and Litware use production-

grade security token issuers such as ADFS 2.0. For the scenario to

work, you must establish a trust relationship between Adatum’s and

Litware’s issuers. Generally, there are seven steps in this process:

1. Export a public key certificate for token signing from the

Litware issuer and copy Litware’s token signing certificate

to the file system of Adatum’s issuer host.

 

2. Configure Adatum’s issuer to recognize Litware as a trusted

identity provider.

 

3. Configure Litware’s issuer to accept requests from

the Adatum issuer.

 

4. Configure the a-Order Tracking application as a

relying party within the Adatum issuer.

 

5. Edit claims rules in Litware that are specific to the

Adatum issuer.

 

6. Edit claims transformation rules in the Adatum

issuer that are specific to the Litware issuer.

 

7. Edit claims rules in the Adatum issuer that are

specific to the a-Order Tracking application.

 

You can refer to documentation provided by your production

issuer for instructions on how to perform these steps. Instructions for

the samples included in this guide can be found at http://claimsid.

codeplex.com.

 

———————– Page 116———————–

 

federated identity for web applications 79

 

Questions

 

1. Federated identity is best described as:

 

a. Two or more applications that share the same set of

users.

 

b. Two or more organizations that share the same set of

users.

 

c. Two or more organizations that share an identity

provider.

 

d. One organization trusting users from one or more

other organizations to access its applications.

 

2. In a federated security environment, claims mapping is

necessary because:

 

a. Claims issued by one organization are not necessarily

the claims recognized by another organization.

 

b. Claims issued by one organization can never be trusted

by another organization.

 

c. Claims must always be mapped to the roles used in

authorization.

 

d. Claims must be transferred to a new ClaimsPrincipal

object.

 

3. The roles of a federation provider can include:

 

a. Mapping claims from an identity provider to claims

that the relying party understands.

 

b. Authenticating users.

 

c. Redirecting users to their identity provider.

 

d. Verifying that the claims were issued by the expected

identity provider.

 

4. Must an identity provider issue claims that are specific to a

relying party?

 

a. Yes

 

b. No

 

c. It depends.

 

———————– Page 117———————–

 

80 chapter four

 

5. Which of the following best summarizes the trust relation-

ships between the various parties described in the federated

identity scenario in this chapter?

 

a. The relying party trusts the identity provider, which in

turn trusts the federation provider.

 

b. The identity provider trusts the federation provider,

which in turn trusts the relying party.

 

c. The relying party trusts the federation provider, which

in turn trusts the identity provider.

 

d. The federation provider trusts both the identity

provider and the relying party.

 

More Information

 

For more information about federation and home realm discovery, see

“Developer’s Introduction to Active Directory Federation Services” at

http://msdn.microsoft.com/en-us/magazine/cc163520.aspx. Also see

“One does not simply walk into Mordor, or Home Realm Discovery for

the Internet” at http://blogs.msdn.com/vbertocci/archive/2009/04/08/one-does-not-simply-walk-into-mordor-or-home-realm-discovery-for-the-internet.aspx.

For a tool that will help you generate WS-Federation metadata

documents, see Christian Weyer’s blog at http://blogs.thinktecture.com/cweyer/archive/2009/05/22/415362.aspx.

For more information about the ADFS 2.0 claim rule language, see

“Claim Rule Language” at http://technet.microsoft.com/en-us/library/dd807118%28WS.10%29.aspx.

For a simple tool that you can use as a test security token service

(STS) that can issue tokens via WS-Federation, see the SelfSTS tool

at http://archive.msdn.microsoft.com/SelfSTS.

 

———————– Page 118———————–

 

5 Federated Identity with

Windows Azure Access

Control Service

 

In Chapter 4, “Federated Identity for Web Applications,” you saw how

Adatum used claims to enable users at Litware to access the a-Order

application. The scenario described how Adatum could federate with

partner organizations that have their own claims-based identity infra-

structures. Adatum supported the partner organizations by establish-

ing trust relationships between the Adatum federation provider (FP)

and the partner’s identity provider (IdP).

In this chapter, the term “social identity” refers to an identity

managed by a well-known, established online identity provider.

Adatum would now like to allow individual users who are not part

of a partner’s security domain to access the a-Order application. Ada-

tum does not want to manage the user accounts for these individuals:

instead, these individuals should be able to use an existing identity

from social identity providers such as Microsoft® Windows Live®,

Google, Yahoo!, or Facebook. How can Adatum enable users to reuse

an existing social identity, such as a Facebook ID, when they access the

a-Order application? In addition to establishing trust relationships

with the social identity providers, Adatum must find solutions to

these problems:

•     Different identity providers may use different protocols and

token formats to exchange identity data.

•     Different identity providers may use different claim types.

•     The Adatum federation provider must be able to redirect users

to the correct identity provider.

•     The a-Order application must be able to implement authoriza-

tion rules based on the claims that the social identity providers

issue.

•     Adatum must be able to enroll new users with social identities

who want to use the a-Order application.

The Windows Azure™ AppFabric Access Control Service (ACS)

is a cloud-based federation provider that provides services to facili-

tate this scenario. ACS can transition between the protocols used by

 

81

 

———————– Page 119———————–

 

82 chapter five

 

different identity providers to transfer claims, perform mappings be-

tween different claim types based on configurable rules, and help lo-

cate the correct identity provider for a user when they want to access

an application. For more information, see Chapter 2, “Claims-Based

Architectures.”

 

ACS currently supports the following identity providers: Windows

Live, Google, Yahoo!, and Facebook. In addition, it can work with

ADFS 2.0 identity providers or a custom security token service (STS)

compatible with WS-Federation or WS-Trust. ACS also supports

OpenID, but you must configure this programmatically rather than

through the portal.

 

In this chapter, you’ll learn how Adatum enables individual cus-

tomers with a range of different social identity types to access the

a-Order application alongside Adatum employees and employees of

an existing enterprise partner. This chapter extends the scenario de-

scribed in Chapter 4, “Federated Identity for Web Applications,” and

shows Adatum building on its previous investments in a claims-based

identity infrastructure.

 

The Premise

Now that Adatum has enabled federated access to the a-Order ap-

plication for users at some of Adatum’s partners such as Litware,

Adatum would like to extend access to the a-Order application to

users at smaller businesses with no identity infrastructure of their

own and to individual consumer users. Fortunately, it is likely that

these users will already have some kind of social identity such as a

Google ID or a Windows Live ID. Smaller businesses want their users

to be able to track their orders, just as Rick at Litware is already able

to do. Consumer users want to be able to log on with their social

identity credentials and use the a-Order program to determine

the status of all their orders with Adatum. They don’t want to be

issued additional credentials from Adatum just to use the a-Order

application.

Consumer users will benefit from using their existing social

identities because they won’t need to remember a new set of

credentials just for accessing the a-Order application. Adatum will

benefit because it won’t have the overhead of managing these

identities: securely storing credentials, managing lost passwords,

enforcing password policies, and so on.

 

Goals and Requirements

 

The goal of this scenario is to show how federated identity can make

the partnership between Adatum and consumer users and users at

smaller businesses with no security infrastructure of their own work

more efficiently. With federated identity, one security realm can ac-

cept identities that come from another security realm. This lets people

in one domain access resources located in the other domain without

 

———————– Page 120———————–

 

federated identity with windows azure access control service 83

 

presenting additional credentials. The Adatum issuer will trust the

common social identity providers (Windows Live ID, Facebook,

Google, Yahoo!) to authenticate users on behalf of the a-Order

application.

 

Adatum trusts the social identity providers indirectly. The federation

provider at Adatum trusts the Adatum ACS instance and that in turn

trusts the social identity providers. If the federation provider at

Adatum trusted all the social identity providers directly, then it

would have to deal with the specifics of each one: the different

protocols and token formats. ACS handles all of this complexity for

Adatum and that makes it really easy for Adatum to support a

variety of social identity providers.

 

In addition to the goals, this scenario has a number of other re-

quirements. One requirement is that Adatum must control access to

the order status pages and the information that the application dis-

plays based on the identity of the partner or consumer user who is

requesting access to the a-Order application. In other words, users at

Litware should only be able to browse through Litware’s orders and

not another company’s orders. In this chapter, we introduce Mary, the

owner of a small company named “Mary Inc.” She, of course, should

only be able to browse through her orders and no one else’s.

Another requirement is that, because Adatum has several partner

organizations and many consumer users, Adatum must be able to find

out which identity provider it should use to authenticate a user’s

credentials. As mentioned in previous chapters, this process is called

home realm discovery. For more information, see Chapter 2, “Claims-

Based Architectures.”

One assumption for this chapter is that Adatum has its own iden-

tity infrastructure in place.

 

Overview of the Solution

With the goals and requirements in place, it’s time to look at the solution. As you saw in Chapter 4, “Federated Identity for Web Applications,” the solution includes the establishment of a claims-based architecture with an issuer that acts as an identity provider on the customer’s side and an issuer that acts as the federation provider on Adatum’s side. Recall that a federation provider acts as a gateway between a resource and all of the issuers that provide claims about the resource’s users.

In addition, this solution now includes an ACS instance, which handles the protocol transition and token transformation for issuers that might not be WS-Federation based. This includes many of the social identity providers mentioned earlier in this chapter.

Note: Although using ACS simplifies the implementation of the Adatum issuer, it does introduce some running costs. ACS is a subscription service, and Adatum will have to pay based on its usage of ACS (ACS charges are calculated based on the number of Access Control transactions plus the quantity of data transferred in and out of the Windows Azure datacenters).

 


 

Figure 1 shows the Adatum solution for both Litware that has its

own identity provider, and Mary who is using a social identity—

Google, in this example.

[Figure: trust relationships and sign-in flows. The social identity issuers (Windows Live ID, Facebook, Google) are trusted by ACS, which performs protocol transition and claims transformation; the Adatum issuer (FP) trusts ACS and the Litware issuer (IdP), performs claims transformation, and is trusted by the a-Order web application (RP). Shaded numbers trace Rick’s sign-in from Litware; un-shaded numbers trace Mary’s sign-in with a social identity.]

figure 1
Accessing the a-Order application from Litware and by using a social identity

 

The following two sections provide a high-level walkthrough of

the interactions between the relying party (RP), the federation pro-

vider, and the identity provider for customers with and without their

own identity provider. For a detailed description of the sequence of

messages that the parties exchange, see Appendix B.

 

Example of a Customer

with its Own Identity Provider

To recap from Chapter 4, “Federated Identity for Web Applications,”

here’s an example of how the system works for a user, Rick, at the

partner Litware, which has its own identity provider. The steps cor-

respond to the shaded numbers in the preceding illustration.

 

Step 1: Authenticate Rick

 

1. Rick is using a computer on Litware’s network. Litware’s

Active Directory® service has already authenticated him. He

opens a browser and navigates to the a-Order application.

Rick is not an authenticated user in a-Order at this time.

Adatum has configured a-Order to trust Adatum’s issuer

(the federation provider). The application has no knowledge

 


 

of where the request comes from. It redirects Rick’s request

to the Adatum federation provider.

 

2. The Adatum federation provider presents the user with a

page listing different identity providers that it trusts (the

“Home Realm Discovery” page). At this point, the federation

provider doesn’t know where Rick comes from.

 

3. Rick selects Litware from the list and then Adatum’s

federation provider redirects him to the Litware issuer that

can verify that Rick is who he says he is.

 

4. Litware’s identity provider verifies Rick’s credentials and

returns a security token to Rick’s browser. Litware’s identity

provider has configured the claims in this token for the

Adatum federation provider and they contain information

about Rick that is relevant to Adatum. For example, the

claims establish his name and that he belongs to the sales

organization in Litware.

 

Step 2: Transmit Litware’s Security Token to the Adatum Federation Provider

 

1. Rick’s browser now posts the issued token back to the

Adatum federation provider. The Adatum federation

provider validates the token issued by Litware and creates a

new token that the a-Order application can use.

 

Step 3: Transforming the Token

 

1. The federation provider transforms the claims issued by

Litware into claims that Adatum’s a-Order application

understands. (The mapping rules that translate Litware

claims into Adatum claims were determined when Adatum

configured its issuer to accept Litware’s issuer as an identity

provider.)

 

2. The claim mappings in Adatum’s issuer remove some claims

and add others that the a-Order application needs in order

to accept Rick as a user, and possibly control access to

certain resources.

 

Step 4: Transmit the Transformed Token and Perform the Requested Action

 

1. The Adatum issuer uses browser redirection to send the

new token to the application. In the a-Order application,

 


 

Windows Identity Foundation (WIF) validates the security

token and extracts the claims. It creates a ClaimsPrincipal

object and assigns it to the HttpContext.User property. The

a-Order application can then access the claims for authori-

zation decisions. For example, in this scenario, the applica-

tion filters orders by organization, which is one of the pieces

of information provided as a claim.
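The claims check in this step can be sketched in miniature. The guide’s sample uses WIF and C#; the following Python sketch is only an analogy for authorizing by claim values, and the claim types and order data are invented for illustration:

```python
# Illustrative sketch (not the guide's WIF/C# sample): after token validation,
# the application sees only a set of claims and filters data by the
# Organization claim, as a-Order filters orders. All data here is invented.

ORDERS = [
    {"id": 1, "organization": "Litware", "item": "Widgets"},
    {"id": 2, "organization": "MaryInc", "item": "Gears"},
    {"id": 3, "organization": "Litware", "item": "Sprockets"},
]

def get_claim(claims, claim_type):
    """Return the first claim value of the given type, or None."""
    return next((c["value"] for c in claims if c["type"] == claim_type), None)

def orders_for(claims):
    """Show only the orders belonging to the caller's Organization claim."""
    org = get_claim(claims, "Organization")
    return [o for o in ORDERS if o["organization"] == org]

# Claims as the Adatum federation provider might issue them for Rick.
rick = [{"type": "name", "value": "Rick"},
        {"type": "Role", "value": "OrderTracker"},
        {"type": "Organization", "value": "Litware"}]

print([o["id"] for o in orders_for(rick)])  # [1, 3]
```

In the sample application, WIF performs the equivalent lookup through the ClaimsPrincipal assigned to HttpContext.User.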

 

Example of a Customer

Using a Social Identity

Here’s an example of how the system works for a consumer user such as Mary who is using a social identity. The steps correspond to the un-shaded numbers in the preceding illustration.

Note: In the sample, the simulated issuer allows you to select between Adatum, partner organizations, and social identity providers.

Step 1: Present Credentials to the Identity Provider

1. Mary is using a computer at home. She opens a browser and

navigates to the a-Order application at Adatum. Adatum has

configured the a-Order application to trust Adatum’s issuer

(the federation provider). Mary is currently unauthenticated, so the application redirects Mary’s request to the

Adatum federation provider.

 

2. The Adatum federation provider presents Mary with a page

listing different identity providers that it trusts. At this

point, the federation provider doesn’t know which security

realm Mary belongs to, so it must ask Mary which identity

provider she wants to authenticate with.

 

3. Mary selects the option to authenticate using her social

identity and then Adatum’s federation provider redirects her

to the ACS issuer to verify that Mary is who she says she is.

Adatum’s federation provider uses the whr parameter in the

request to indicate to ACS which social identity provider to

use—in this example it is Google.

 

In this sample, the Adatum simulated issuer allows users to enter the

email address associated with their social identity provider. The

simulated issuer parses this email address to determine the value of

the whr parameter. Another option would be to let the user choose

from a list of social identity providers. You should check what

options are available with the issuer that you use; you may be able to

query your issuer for the list of identity providers that it currently

supports.
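For illustration only, the email parsing that the simulated issuer performs to pick a whr value might look like this sketch; the domain-to-provider table below is hypothetical, not the sample’s actual configuration:

```python
# Illustrative sketch: derive a home-realm (whr) hint from the email address
# the user enters, as the Adatum simulated issuer does. The mapping table and
# whr values below are hypothetical.
WHR_BY_DOMAIN = {
    "gmail.com": "Google",
    "yahoo.com": "Yahoo!",
    "live.com": "uri:WindowsLiveID",
}

def whr_for_email(email):
    """Return a whr hint for the email's domain, or None to show a chooser."""
    domain = email.rsplit("@", 1)[-1].lower()
    return WHR_BY_DOMAIN.get(domain)

print(whr_for_email("mary@gmail.com"))    # Google
print(whr_for_email("rick@litware.com"))  # None: fall back to a provider list
```

When no whr hint can be derived, the issuer can instead present the list of identity providers for the user to choose from, as described above.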

 

4. ACS automatically redirects Mary to the Google issuer.

 


 

Mary never sees an ACS page; when ACS receives the request from

the Adatum issuer, ACS uses the value of the whr parameter to

redirect Mary directly to her social identity provider. However, if the

whr parameter is missing, or does not have a valid value, then ACS

will display a page that allows the user to select the social identity

provider that she wants to use.

 

5. Google verifies Mary’s credentials and returns a security

token to Mary’s browser. The Google identity provider has

added claims to this token for ACS: the claims include basic

information about Mary. For example, the claims establish her name and her email address.

Note: Mary must give her consent before Google will pass the claims on to ACS.

Step 2: Transmit the Identity Provider’s Security Token to ACS

 

1. The Google identity provider uses HTTP redirection to

redirect the browser to ACS with the security token it has

issued.

 

2. ACS receives this token and verifies that it was issued by

the identity provider.

 

Step 3: Transform the Claims

 

1. If necessary, ACS converts the token issued by the identity

provider to the Security Assertion Markup Language (SAML)

2.0 format and copies the claims issued by Google into the

new token.

 

2. ACS returns the new token to Mary’s browser.

 

Step 4: Transmit the Identity Provider’s Security Token to the Federation Provider

 

1. Mary’s browser posts the issued token back to the Adatum

federation provider.

 

2. The Adatum federation provider receives this token and

validates it by checking that ACS issued the token.

 

Step 5: Map the Claims

 

1. Adatum’s federation provider applies token mapping rules

to the ACS security token. These rules transform the claims

into claims that the a-Order application can understand.

 

2. The Adatum federation provider returns the new claims to

Mary’s browser.

 


 

Step 6: Transmit the Mapped Claims and Perform the Requested Action

 

1. Mary’s browser posts the token issued by the Adatum

federation provider to the a-Order application. This token

contains the claims created by the mapping process.

 

2. The application validates the security token by checking

that the Adatum federation provider issued it.

 

3. The application reads the claims and creates a session for

Mary. It can use Mary’s identity information from the token

to determine which orders Mary can see in the application.

 

Because this is a web application, all interactions happen through

the browser. (See the section “Browser-Based Scenario with ACS” in

Appendix B for a detailed description of the protocol for a browser-

based client.)

The principles behind these interactions are exactly the same as

those described in Chapter 4, “Federated Identity for Web Applica-

tions.”

Adatum’s issuer, acting as a federation provider, mediates between the application and the external issuers. The federation provider has two responsibilities. First, it maintains a trust relationship with partner issuers, which means that the federation provider accepts and understands Litware tokens and their claims, ACS tokens and their claims, and tokens and their claims from any other configured partner. Second, the federation provider needs to translate claims from partners and ACS into claims that a-Order can understand. The a-Order application only accepts claims from Adatum’s federation provider (this is its trusted issuer). In this scenario, a-Order expects claims of type Role and Organization in order to authorize operations on its web site. The problem is that ACS claims don’t come from Adatum and they don’t have these claim types. In the scenario, the claims from ACS only establish that a social identity provider has authenticated the user. To solve this problem, the Adatum federation provider uses mapping rules that add a Role claim to the claims from ACS.

Note: Different social identity providers return different claims to ACS: for example, the Windows Live ID identity provider only returns a guid-like nameidentifier claim; the Google identity provider returns name and email claims in addition to the nameidentifier claim.
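A minimal sketch of what such mapping rules do follows. This is illustrative Python, not the sample’s actual WIF/C# implementation; claim-type names are simplified and the placeholder identifier is invented:

```python
# Illustrative sketch of a federation provider's mapping rules for tokens that
# arrive from ACS: pass the name claim through and add the Role and
# Organization claims that the a-Order application expects. A production
# issuer such as ADFS expresses rules like these declaratively.

def map_acs_claims(input_claims):
    """Transform claims from ACS into claims that a-Order understands."""
    output = [c for c in input_claims if c["type"] == "name"]  # pass-through
    output.append({"type": "Role", "value": "OrderTracker"})
    output.append({"type": "Organization", "value": "MaryInc"})
    return output

acs_claims = [{"type": "nameidentifier", "value": "8f3e..."},
              {"type": "name", "value": "Mary"}]
print(map_acs_claims(acs_claims))
```

The tables later in this section show the actual per-provider rules that the sample’s simulated issuer applies.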

 

Trust Relationships with

Social Identity Providers

The nature of a trust relationship between Adatum and a business

partner such as Litware is subtly different from a trust relationship

between Adatum and a social identity provider such as Google or

 


 

Windows Live. In the case of a trust relationship between Adatum and

a business partner such as Litware, the trust operates at two levels;

there is a business trust relationship characterized by business con-

tracts and agreements, and a technical trust relationship characterized

by the configuration of the Adatum federation provider to trust to-

kens issued by the Litware identity provider. In the case of a trust re-

lationship between Adatum and a social identity provider such as

Windows Live, the trust is only a technical trust; there is no business

relationship between Adatum and Windows Live. In this scenario,

Adatum establishes a business trust relationship with the owner of

the social identity when the owner enrolls to use the a-Order applica-

tion and registers his or her social identity with Adatum. A further

difference between the two scenarios is in the claims issued by the

identity providers. Adatum can trust the business partner to issue rich,

accurate claims data about its employees such as cost centers, roles,

and telephone numbers, in addition to identity claims such as name

and email. The claims issued by a social identity provider are minimal,

and may sometimes be just an identifier. Because there is no business

trust relationship with the social identity provider, the only thing that

Adatum knows for sure is that each individual with a social identity

has a unique, unchanging identifier that Adatum can use to recognize

that it’s the same person returning to the a-Order application.

 

An individual’s unique identifier is unique to that instance of ACS: if

Adatum creates a new ACS instance, each individual will have a new

unique identifier. This is important to be aware of if you’re using the

unique identifier to map to other user data stored elsewhere.

 

Description of Mapping Rules

in a Federation Provider

The claims that ACS returns from the social identity provider to the

Adatum federation provider do not include the role or organization

claims that the a-Order application uses to authorize access to order

data. In some cases, the only claim from the social identity provider is

the nameidentifier, which is a guid-like string. The mapping rules in

the Adatum federation provider must add the role and organization

claims to the token. In the sample, the mapping rules simply add the

OrderTracker role, and “Mary Inc.” as an organization.

The following table summarizes the mapping rules that the Ada-

tum federation provider applies when it receives a token from ACS

when the user has authenticated with Google.

 


 

Input claim        Output claim   Notes

nameidentifier                    A unique id allocated by Google.

emailaddress                      The user’s registered email address with Google. The user has agreed to share this address.

name               name           The user’s name. This is the only claim passed through to the application. The issuer property of the claim is set to adatum, and the originalissuer is set to acs\Google.

identityprovider                  Google.

                   Role           The simulated issuer adds this claim with a value of “Order Tracker.”

                   Organization   The simulated issuer adds this claim with a value of “MaryInc.”

 

The following table summarizes the mapping rules that the simu-

lated issuer applies when it receives a token from ACS when the user

has authenticated with Windows Live ID.

 

Input claim        Output claim   Notes

nameidentifier                    A unique id allocated by Windows Live ID.

identityprovider                  uri:WindowsLiveID.

                   name           The simulated issuer copies the value of the nameidentifier claim to the name claim. The issuer property of the claim is set to adatum, and the originalissuer is set to acs\LiveID.

                   Role           The simulated issuer adds this claim with a value of “Order Tracker.”

                   Organization   The simulated issuer adds this claim with a value of “MaryInc.”

 


 

The following table summarizes the mapping rules that the simu-

lated issuer applies when it receives a token from ACS when the user

has been authenticated by a Facebook application.

 

Input claim        Output claim   Notes

nameidentifier                    A unique id allocated by the Facebook application.

identityprovider                  Facebook-194130697287302. The number here uniquely identifies your Facebook application.

name               name           The user’s name. This is the only claim passed through to the application. The issuer property of the claim is set to adatum, and the originalissuer is set to acs\Facebook.

                   Role           The simulated issuer adds this claim with a value of “Order Tracker.”

                   Organization   The simulated issuer adds this claim with a value of “MaryInc.”

Note: These mappings are, of course, an example and for demonstration purposes only. Notice that as they stand, anyone authenticated by Google or Windows Live ID has access to the “Mary Inc.” orders in the a-Order application. A real federation provider would probably check that the combination of identityprovider and nameidentifier claims is from a registered, valid user and look up in a local database their name, role, and organization.

In the scenario described in this chapter, because of the small numbers of users involved, Adatum expects to manage the enrollment as a manual process. For a description of how this might be automated, see Chapter 7, “Federated Identity with Multiple Partners and Windows Azure Access Control Service.”

 

Alternative Solutions

 

Of course, the solution we’ve just described illustrates just one imple-

mentation choice; another possibility would be to separate Adatum’s

identity provider and federation provider and let ACS manage the

federation and the claims transformation. Figure 2 shows the trust

relationships that Adatum would need to configure for this solution.

 


 

[Figure: trust relationships for the alternative solution. ACS trusts the social identity issuers (Windows Live ID, Facebook, Google) and the Adatum and Litware issuers (IdPs), and performs protocol transition and claims transformation; the a-Order web application (RP) trusts ACS directly.]

figure 2
Using ACS to manage the federation with Adatum’s partners

Note: Adatum has already invested in its own identity infrastructure and has an existing federation provider running in their own datacenter. As a rather risk-averse organization, Adatum prefers to continue to use their tried and tested solution rather than migrate the functionality to ACS.

In this alternative solution, ACS would trust the Adatum and Litware identity providers and there is no longer a trust relationship between the Litware and Adatum issuers. Adatum should also evaluate the costs of this solution because there will be additional ACS transactions as it handles sign-ins from users at partners with their own identity providers. These costs need to be compared with the cost of running and managing this service on-premises.

A second alternative solution does away with ACS, leaving all the responsibilities for protocol transition and claims transformation to the issuer at Adatum. Figure 3 shows the trust relationships that Adatum would need to configure for this solution.

 


 

[Figure: trust relationships when the Adatum issuer handles all federation tasks. The Adatum issuer (FP) trusts the social identity issuers (Windows Live ID, Facebook, Google) and the Litware issuer (IdP) directly, and performs protocol transition and claims transformation itself; the a-Order web application (RP) trusts the Adatum issuer.]

figure 3
Using the Adatum issuer for all federation tasks

Note: This alternative removes a dependency on ACS: an external, third-party service. It still relies on the social identity providers for their authentication services.

 

Although this alternative solution means that Adatum does not

need to pay any of the subscription charges associated with using

ACS, Adatum is concerned about the additional complexity of its is-

suer, which would now need to handle all of the protocol transition

and claims transformation tasks. Furthermore, implementing this

scenario would probably take some time (weeks or months), while

Adatum could probably configure the solution with ACS in a matter

of hours. The question becomes one of business efficiency: would

Adatum get a better return by investing in creating and maintaining

infrastructure services, or by focusing on their core business services?

 

Inside the Implementation

 

The Visual Studio solution named 6-FederationWithAcs found at

http://claimsid.codeplex.com is an example of how to use federation

with ACS. The structure of the application is very similar to what you

saw in Chapter 4, “Federated Identity for Web Applications.” There

are no changes to the a-Order application: it continues to trust the

Adatum simulated issuer that provides it with the claims required to

authorize access to the application’s data.

 


 

The example solution extends the Adatum simulated issuer to

handle federation with ACS, and uses an ACS instance that is config-

ured to trust the social identity providers. The next section describes

these changes.

 

Setup and Physical Deployment

 

You can run the Visual Studio solution named 6-FederationWithAcs

found at http://claimsid.codeplex.com on a stand-alone development

machine. As with the solutions described in the previous chapters, this

solution uses mock issuers for both Adatum and Litware. There are no

changes to the Litware mock issuer, but the Adatum mock issuer now

has a trust relationship with ACS in addition to the existing trust re-

lationship with Litware, and offers the user a choice of authenticating

with the Adatum identity provider, the Litware identity provider, or ACS.

You can see the entry for ACS (https://federationwithacs-dev.accesscontrol.windows.net/) in the issuerNameRegistry section of the Web.config file in the Adatum.SimulatedIssuer.6 project. This entry includes the thumbprint used to verify the token that the Adatum federation provider receives from ACS. This is the address of the ACS instance created for the sample.

Note: You can select the certificate that ACS uses to sign the token it issues to the Adatum federation provider in the Windows Azure AppFabric portal.

When the developers at Adatum want to deploy their application,

they will modify the configuration so that it uses the Adatum federa-

tion provider. They will also modify the configuration of the Adatum

federation provider by adding a trust relationship with the production

ACS instance.

 

Establishing a Trust Relationship

with ACS

Establishing a trust relationship with ACS is very similar to establish-

ing a trust relationship with any other issuer. Generally, there are six

steps in this process:

 

1. Configure Adatum’s issuer to recognize your ACS instance

as a trusted identity provider.

 

You may be able to configure the Adatum issuer automatically

by providing a link to the FederationMetadata.xml file for the

ACS namespace. However, this FederationMetadata.xml will not

include details of all the claims that your ACS namespace offers,

it only includes the nameidentifier and identityprovider

claims. You will need to configure details of other claim types

offered by ACS manually in the Adatum issuer.

 


 

2. Configure the social identity providers that you want to

support in ACS.

 

3. Configure your ACS instance to accept requests from the

Adatum issuer (the Adatum issuer is a relying party as far as

ACS is concerned.)

 

4. Edit the claims rules in ACS to pass the claims from the

social identity provider through to the Adatum issuer.

 

5. If necessary, edit the claims transformation rules in the

Adatum issuer that are specific to the social identity provid-

ers.

 

6. If necessary, edit the claims rules in the Adatum issuer that

are specific to the a-Order application.

 

You can refer to documentation provided by your production is-

suer for instructions on how to perform these steps. You can find

detailed instructions for the ACS configuration in Appendix E of this

guide.

 

Reporting Errors from ACS

You can specify a URL that points to an error page for each relying

party that you define in ACS. In the sample, this page is called

ErrorPage.aspx and you can find it in the Adatum.FederationProvider.6

project. If ACS detects an error during processing, it can post

JavaScript Object Notation (JSON) encoded error information to this

page. The code-behind for this page illustrates a simple approach for

displaying this error information; in practice, you may want to log

these errors and take different actions depending on the specific error

that occurs.

Note: It’s important to mark ErrorPage.aspx as un-authenticated in the web.config file to avoid the risk of recursive redirects.

An easy way to generate an error in the sample so that you can see how the error processing works is to try to authenticate using a Google ID, but to decline to give consent for ACS to access your data by clicking on No thanks after you have logged into Google.
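As a sketch only, a handler for such a posted error might parse the JSON and decide what to display or log. The field names below are assumptions made for illustration, not the documented ACS error schema; consult the ACS documentation and the sample’s ErrorPage.aspx code-behind for the real shape:

```python
# Illustrative sketch: parse JSON-encoded error information posted to an
# error page and build a message for display or logging. The field names
# here are assumed for the example, not the documented ACS schema.
import json

def handle_acs_error(posted_json):
    """Turn posted JSON error info into a displayable message."""
    info = json.loads(posted_json)
    provider = info.get("identityProvider", "unknown provider")
    message = info.get("message", "unknown error")
    return "Sign-in via {0} failed: {1}".format(provider, message)

sample = json.dumps({"identityProvider": "Google",
                     "message": "The user declined to grant consent."})
print(handle_acs_error(sample))
```

In practice, as the text notes, you would log these errors and branch on the specific error rather than simply display them.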

 

Initializing ACS

The sample application includes a set of pre-configured partners for

Fabrikam Shipping, both with and without their own identity provid-

ers. These partners require identity providers, relying parties, and

claims-mapping rules in ACS in order to function. The ACS.Setup.6

project in the solution is a basic console application that you can run

to add the necessary configuration data for the pre-configured part-

ners to your ACS instance. It uses the ACS Management API and the

wrapper classes in the ACS.ServiceManagementWrapper project.

 


 

You will still need to perform some manual configuration steps; the

ACS Management API does not enable you to create a new service

namespace. You must perform this operation in the ACS manage-

ment portal.

 

For more information on working with ACS, see Appendix E.

 

Working with Social Identity Providers

 

The solution described in this chapter enables Adatum to support

users with identities from trusted partners such as Litware, and with

identities from social identity providers such as Google or Windows

Live ID. Implementing this scenario in the real world would require

solutions to two additional problems.

First, there is the question of managing how we define the set of

identities (authenticated by one of the social identity providers) that

are members of the same organization. For example, which set of us-

ers with Windows Live IDs and Google IDs are associated with the

organization Mary Inc? With a partner such as Litware with its own

identity provider, Adatum trusts Litware to decide which users at

Litware should be able to view the order data that belongs to Litware.

Second, there are differences between the claims returned from

the social identity providers. In particular, Windows Live ID only re-

turns the nameidentifier claim. This is a guid-like string that Windows

Live guarantees to remain unchanged for any particular Windows Live

ID within the current ACS namespace. All we can tell from this claim

is that this instance of ACS and Windows Live have authenticated the

same person, provided we get the same nameidentifier value returned.

There are no claims that give us the user’s email address or name.

The following potential solutions make these assumptions about

Adatum.

•     Adatum does not want to make any changes to the a-Order

application to accommodate the requirements of a particular

partner.

•     Adatum wants to do all of its claims processing in the Adatum

federation provider. Adatum is using ACS just for protocol

transition, passing through any claims from the social identity

providers directly to the Adatum federation provider.

 

Managing Users with Social Identities

Taking Litware as an example, let’s recap how the relationship with a

partner organization works.

•     Adatum configures the Adatum federation provider to trust the

Litware identity provider. This is a one-time, manual configura-

tion step in this scenario.

 


 

•     Adatum adds a set of claims-mapping rules to the Adatum

federation provider, to convert claims from Litware into claims

that the Adatum a-Order application understands. In this

scenario, the relevant claims that the a-Order application

expects to see are name, Role and Organization.

•     Litware can authorize any of its employees to access the

Adatum a-Order application by ensuring that Litware’s identity

provider gives the user the correct claim. In other words, Litware

controls who has access to Litware’s data in the Adatum a-

Order application.

The situation for a smaller partner organization without its own identity provider is a little different. Let’s take MaryInc, which wants to use Windows Live IDs and Google IDs, as an example.

Note: The Adatum federation provider should generate the Organization claim rather than pass it through from Litware. This mitigates the risk that a malicious administrator at Litware could configure the Litware identity provider to issue a claim using another organization’s identity.

•     Unlike a partner with its own identity provider, there is no need to set up a new trust relationship because Adatum already trusts ACS. From the perspective of the Adatum federation provider, ACS is where the MaryInc employee claims will originate.
•     The Adatum federation provider cannot identify the partner organization of the authenticated user from the claims it receives from ACS. Therefore, Adatum must configure a set of mapping rules in the federation provider that map a user’s unique claim from ACS (such as the nameidentifier claim) to appropriate values for the name, Role and Organization claims that the a-Order application expects to see.

•     If MaryInc wants to allow multiple employees to access MaryInc

data in the a-Order application, then Adatum must manually

configure additional mapping rules in its federation provider.

This last point highlights the significant difference between the

partner with its own identity provider and the partner without. The

partner with its own identity provider can manage who has access to

its data in the a-Order application; the partner without its own iden-

tity provider must rely on Adatum to make changes in the Adatum

federation provider if it wants to change who has access to its data.
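The registration check described above can be sketched as a lookup keyed by the identityprovider and nameidentifier claims, so that name, Role, and Organization claims are issued only for enrolled users. Everything below (identifiers, names, values) is invented for illustration and is not the sample’s implementation:

```python
# Illustrative sketch: issue claims only for users enrolled with Adatum,
# looked up by (identityprovider, nameidentifier). All data is invented.
REGISTERED_USERS = {
    ("uri:WindowsLiveID", "wlid-guid-001"): ("Mary", "OrderTracker", "MaryInc"),
    ("Google", "google-id-002"): ("Mary", "OrderTracker", "MaryInc"),
}

def issue_claims(identity_provider, name_identifier):
    """Return a-Order claims for an enrolled user, or refuse the sign-in."""
    entry = REGISTERED_USERS.get((identity_provider, name_identifier))
    if entry is None:
        raise PermissionError("not enrolled with Adatum")
    name, role, org = entry
    return [{"type": "name", "value": name},
            {"type": "Role", "value": role},
            {"type": "Organization", "value": org}]
```

Adding or removing entries from such a registry corresponds to the manual mapping-rule changes that Adatum must make on behalf of partners without their own identity provider.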

 

Working with Windows Live IDs

Unlike the other social identity providers supported by ACS that all

return name and emailaddress claims, Windows Live ID only returns

a nameidentifier claim. This means that the Adatum federation pro-

vider must use some additional logic to determine appropriate values

for the name, Role and Organization claims that the a-Order applica-

tion expects to see.

This means that when someone with a Windows Live ID enrolls

to use the Adatum a-Order application, Adatum must capture values

 


 

for the nameidentifier, name, Role and Organization claims to use in

the mapping rules in the federation provider (as well as any other data

that Adatum requires). The only way to discover the nameidentifier

value is to capture the claim that Windows Live returns after the user

signs in, so part of the enrollment process at Adatum must include the

user authenticating with Windows Live.

 

It is possible to access data in the user’s Windows Live ID profile,

such as the user’s name and email address, programmatically by

using Windows Live Messenger Connect. This would eliminate

the requirement that the user manually enter information such as his

name and email address when he enrolled to use the a-Order

application. However, the benefits to the users may not outweigh the

costs of implementing this solution. Furthermore, not all users will

understand the implications of temporarily giving consent to Adatum

to access their Windows Live ID profile data.

 

With ADFS you can create custom claims transformation mod-

ules that, for example, allow you to implement a mapping rule based

on data retrieved from a relational database. With this in mind, the

enrollment process for new users of the Adatum a-Order application

could populate a database table with the values required for a user’s

set of claims.
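For example, an ADFS claim rule on the federation provider could query such a table through a SQL attribute store. The store name, table, and claim type URIs below are illustrative assumptions, not the guide’s actual configuration:

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"]
  => issue(
       store = "EnrollmentStore",
       types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
                "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
                "http://schemas.adatum.com/claims/organization"),
       query = "SELECT Name, Role, Organization FROM Users WHERE NameId = {0}",
       param = c.Value);

ADFS substitutes the incoming nameidentifier value for {0} and issues one claim of each listed type from the returned row.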

 

Working with Facebook

The sample application enables you to use Facebook as one of the

supported social identity providers. Adding support for Facebook did

not require any changes to the a-Order web application. However,

there are differences in the way the Adatum federation provider sup-

ports Facebook as compared to the other social identity providers,

and differences in the ACS configuration.

Configuring Facebook as an identity provider in ACS requires

some additional data; an Application ID that identifies your Facebook

application, an Application secret to authenticate with your Facebook

application, and a list of claims that ACS will request from Facebook.

The additional configuration values enable you to configure multiple

Facebook applications as identity providers for your relying party.

Each set of Facebook application credentials is treated as a separate identity provider in ACS.

The implication for the Adatum federation provider is that it must be able to identify the Facebook application to use for authentication in the whr parameter that it passes to ACS. The following code sample from the FederationIssuers class in the Adatum federation provider shows how the Facebook application ID is included in the whr value.

 


 

// Facebook
homeRealmIdentifier = "facebook.com";
issuerLocation = Federation.AcsIssuerEndpoint;
whr = "Facebook-194130697287302";
this.issuers.Add(homeRealmIdentifier,
  new IssuerInfo(homeRealmIdentifier, issuerLocation, whr));

 

Questions

 

1. Which of the following issues must you address if you want

to allow users of your application to authenticate with a

social identity provider such as Google or Windows Live® network of Internet services?

 

a. Social identity providers may use protocols other than

WS-Federation to exchange claims tokens.

 

b. You must register your application with the social

identity provider.

 

c. Different social identity providers issue different claim

types.

 

d. You must provide a mechanism to enroll users using

social identities with your application.

 

2. What are the advantages of allowing users to authenticate

to use your application with a social identity?

 

a. The user doesn’t need to remember yet another

username and password.

 

b. It reduces the features that you must implement in

your application.

 

c. Social identity providers all use the same protocol to

transfer tokens and claims.

 

d. It puts the user in control of their password manage-

ment. For example, a user can recover a forgotten

password without calling your helpdesk.

 

3. What are the potential disadvantages of using ACS as your

federation provider?

 

a. It adds to the complexity of your relying party

application.

 


 

b. It adds an extra step to the authentication process,

which negatively impacts the user experience.

 

c. It is a metered service, so you must pay for each token

that it issues.

 

d. Your application now relies on an external service that

is outside of its control.

 

4. How can your federation provider determine which identity

provider to use (perform home realm discovery) when an

unauthenticated user accesses the application?

 

a. Present the user with a list of identity providers to

choose from.

 

b. Analyze the IP address of the originating request.

 

c. Prompt the user for an email address, and then parse it

to determine the user’s security domain.

 

d. Examine the ClaimsPrincipal object for the user’s

current session.

 

5. In the scenario described in this chapter, the Adatum

federation provider trusts ACS, which in turn trusts the

social identity providers such as Windows Live and Google.

Why does the Adatum federation provider not trust the

social identity providers directly?

 

a. It’s not possible to configure the Adatum federation

provider to trust the social identity providers because

the social identity providers do not make the certifi-

cates required for a trust relationship available.

 

b. ACS automatically performs the protocol transition.

 

c. ACS is necessary to perform the claims mapping.

 

d. Without ACS, it’s not possible to allow Adatum

employees to access the application over the web.

 

More Information

 

Appendix E of this guide provides a detailed description of ACS and

its features.

You can find the MSDN® documentation for ACS 2.0 at http://msdn.microsoft.com/en-us/library/gg429786.aspx.

 


 

6 Federated Identity with Multiple Partners

 

In this chapter, you’ll learn about the special considerations that apply

to applications that establish many trust relationships. Here you will

also see how to use the ASP.NET Model View Controller (MVC) framework to build a claims-aware application.

Although the basic building blocks of federated identity—issuers, trust, security tokens and claims—are the same as what you saw in the previous chapter, there are some identity and authorization requirements that are particular to the case of multiple trust relationships.

Special considerations apply when there are many trust relationships.

In some web applications, such as the one shown in this chapter,

users and customers represent distinct concepts. A customer of an

application can be an organization, and each customer can have many

individual users, such as employees. If the application is meant to scale

to large numbers of customers, the enrollment process for new cus-

tomers must be as streamlined as possible. It may even be automated.

As with the other chapters, it is easiest to explain these concepts in

the context of a scenario.

 

The Premise

 

Fabrikam is a company that provides shipping services. As part of its

offering, it has a web application named Fabrikam Shipping that al-

lows its customers to perform such tasks as creating shipping orders

and tracking them. Fabrikam Shipping is an ASP.NET MVC application

that runs in Fabrikam’s data center. Fabrikam’s customers want their

employees to use a browser to access the shipping application.

Fabrikam has made its new shipping application claims-based.

Like many design choices, this one was customer-driven. In this case,

Fabrikam signed a deal with a major customer, Adatum. Adatum’s

corporate IT strategy (as discussed in chapter 3, “Claims-Based Single

Sign-On for the Web”) calls for the eventual elimination of identity

silos. Adatum wants its users to access Fabrikam Shipping without

 


 

presenting separate user names and passwords. Fabrikam also signed

agreements with Litware that had similar requirements. However,

Fabrikam also wants to support smaller customers, such as Contoso,

that do not have the infrastructure in place to support federated

identity.

 

Goals and Requirements

 

Larger customers such as Adatum and Litware have some particular

concerns. These include the following:

•     Usability. They would prefer if their employees didn’t need to

learn new passwords and user names for Fabrikam Shipping.

These employees shouldn’t need any credentials other than the

ones they already have, and they shouldn’t have to enter creden-

tials a second time when they access Fabrikam Shipping from

within their security domain.

•     Support. It is easier for Adatum and Litware to manage issues

such as forgotten passwords than to have employees interact

with Fabrikam.

•     Liability. There are reasons why Adatum and Litware have the

authentication and authorization policies that they do. They

want to control who has access to resources, no matter where

those resources are deployed, and Fabrikam Shipping is no

exception. If an employee leaves the company, he or she should

no longer have access to the application.

 

Fabrikam has its own goals, which are the following:

 

•     To delegate the responsibility for maintaining user identities

to its customers, when possible. This avoids a number of

problems, such as having to synchronize data between Fabrikam

and its customers. The contact information for a package’s

sender is an example of this kind of information. Its accuracy

should be the customer’s responsibility because it could quickly

become costly for Fabrikam to keep this information up to date.

•     To bill customers by cost center if one is supplied. Cost

centers should be provided by the customers. This is also

another example of information that is the customer’s responsi-

bility.

•     To sell its services to a large number of customers. This means

that the process of enrolling a new company must be stream-

lined. Fabrikam would also prefer that its customers self-manage

the application whenever possible.

 


 

•     To provide the infrastructure for federation if a customer

cannot. Fabrikam wants to minimize the impact on the applica-

tion code that might arise from having more than one authenti-

cation mechanism for customers.

 

Overview of the Solution

 

With the goals and requirements in place, it’s time to look at the solu-

tion. As you saw in Chapter 4, “Federated Identity for Web Applica-

tions,” the solution includes the establishment of a claims-based archi-

tecture with an issuer that acts as an identity provider (IdP) on the

customers’ side. In addition, the solution includes an issuer that acts

as the federation provider (FP) on Fabrikam’s side. Recall that a fed-

eration provider acts as a gateway between a resource and all of the

issuers that provide claims about the resource’s users.

Figure 1 shows Fabrikam’s solution for customers that have their

own identity provider.

 

figure 1
Fabrikam Shipping for customers with an identity provider

[Diagram: John’s browser at Adatum and Rick’s browser at Litware each (1) get a token from their own issuer (IP), backed by Active Directory, using Kerberos; (2) get a Fabrikam Shipping token from the Fabrikam issuer (FP), which (3) maps the claims; and (4) access Fabrikam Shipping at /Shipments/Adatum or /Shipments/Litware. Fabrikam Shipping trusts the Fabrikam issuer (FP), which in turn trusts the Adatum and Litware issuers.]

 


 

Here’s an example of how the system works. The steps corre-

spond to the numbers in the preceding illustration.

 

Step 1: Present Credentials to the Identity Provider

 

In this scenario, discovering the home realm is automated. There’s no need for the user to provide it, as was shown in Chapter 4, “Federated Identity for Web Applications.”

1. When John from Adatum attempts to use Fabrikam Shipping for the first time (that is, when he first navigates to https://{fabrikam host}/f-shipping/adatum), there’s no session established yet. In other words, from Fabrikam’s point of view, John is unauthenticated. The URL provides the Fabrikam Shipping application with a hint about the customer that is requesting access (the hint is “adatum” at the end of the URL).

2. The application redirects John’s browser to Fabrikam’s issuer (the federation provider). That is because Fabrikam’s federation provider is the application’s trusted issuer. As part of the redirection URL, the application includes the whr parameter that provides a hint to the federation provider about the customer’s home realm. The value of the whr parameter is http://adatum/trust.

 

3. Fabrikam’s federation provider uses the whr parameter to

look up the customer’s identity provider and redirect John’s

browser back to the Adatum issuer.

 

4. Assuming that John uses a computer that is already a part of

the domain and in the corporate network, he will already

have valid network credentials that can be presented to

Adatum’s identity provider.

 

5. Adatum’s identity provider uses John’s credentials to authen-

ticate him and then issue a security token with a set of

Adatum’s claims. These claims are the employee name, the

employee address, the cost center, and the department.  

 

Step 2: Transmit the Identity Provider’s Security Token to the Federation Provider

 

1. The identity provider uses HTTP redirection to redirect

the security token it has issued to Fabrikam’s federation

provider.

 

2. Fabrikam’s federation provider receives this token and

validates it.

 


 

Step 3: Map the Claims

 

1. Fabrikam’s federation provider applies token mapping rules

to the identity provider’s security token. The claims are

transformed into something that Fabrikam Shipping under-

stands.

 

2. The federation provider uses HTTP redirection to submit

the claims to John’s browser.

 

Step 4: Transmit the Mapped Claims and Perform the Requested Action

 

1. The browser sends the federation provider’s security token,

which contains the transformed claims, to the Fabrikam

Shipping application.

 

2. The application validates the security token.

 

3. The application reads the claims and creates a session for

John.

 

Because this is a web application, all interactions happen through

the browser. (See Appendix B for a detailed description of the proto-

col for a browser-based client.)

The principles behind these interactions are exactly the same as those described in Chapter 4, “Federated Identity for Web Applications.” One notable exception is Fabrikam’s automation of the home realm discovery process. In this case, there’s no user intervention necessary. The home realm is entirely derived from information passed first in the URL and then in the whr parameter.

Automated home realm discovery is important when there are many trust relationships.

Litware follows the same steps as Adatum. The only differences

are the URLs used (http://{fabrikam host}/f-shipping/litware and the

Litware identity provider’s address) and the claims mapping rules,

because the claims issued by Litware are different from those issued

by Adatum. Notice that Fabrikam Shipping trusts the Fabrikam fed-

eration provider, not the individual issuers of Litware or Adatum. This

level of indirection isolates Fabrikam Shipping from individual differ-

ences between Litware and Adatum.

Fabrikam also provides identity services on behalf of customers

such as Contoso that do not have issuers of their own. Figure 2 shows

how Fabrikam implemented this.

 


 

figure 2
Fabrikam Shipping for customers without an identity provider

[Diagram: Bill’s browser at Contoso (1) sends a user name and password to the Fabrikam-hosted issuer (IP) to get a token; (2) gets a Fabrikam Shipping token from the Fabrikam issuer (FP), which (3) maps the claims; and (4) accesses Fabrikam Shipping at /Shipments/Contoso. Fabrikam Shipping trusts the issuer (FP), which in turn trusts the Fabrikam-hosted issuer (IP).]

 

Smaller organizations may not have their own issuers.

Contoso is a small business with no identity infrastructure of its own. It doesn’t have an issuer that Fabrikam can trust to authenticate Contoso’s users. It also doesn’t care if its employees need a separate set of credentials to access the application.

Fabrikam has deployed its own identity provider to support

smaller customers such as Contoso. Notice, however, that even

though Fabrikam owns this issuer, it’s treated as if it were an external

identity provider, just as those that belong to Adatum and Litware. In

a sense, Fabrikam “federates with itself.”

Because the identity provider is treated as an external issuer, the

process is the same as that used by Adatum and Litware. The only

differences are the URLs and the claim mappings.

 

By deploying an identity provider for customers such as Contoso,

Fabrikam accepts the costs associated with maintaining accounts for

remote users (for example, handling password resets). The benefit is

that Fabrikam only has to do this for customers that don’t have their

own federation infrastructure. Over time, Fabrikam expects to have

fewer customers that need this support.

 


 

It would also be possible for Fabrikam to support third-party

identity providers such as LiveID or OpenID as a way to reduce the

cost of maintaining passwords for external users.

 

Using Claims in Fabrikam Shipping

Fabrikam Shipping uses claims for two purposes. It uses role claims to control access and it also uses claims to retrieve user profile information.

Fabrikam uses claims for access control and for user profiles.

Access control to Fabrikam Shipping is based on one of three roles:

•     Shipment Creator. Anyone in this role can create new orders.

•     Shipment Manager. Anyone in this role can create and modify

existing shipment orders.

•     Administrator. Anyone in this role can configure the system.

For example, they can set shipping preferences or change the

application’s appearance and behavior (“look and feel”).
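Because these roles arrive as standard role claims, application code can gate actions through the familiar IPrincipal interface. This is a sketch of the idea, not code from the sample:

// Illustrative only: role claims issued by the federation provider
// surface through IPrincipal, so ordinary role checks work.
if (!User.IsInRole("Shipment Manager"))
{
    throw new InvalidOperationException(
        "Only shipment managers can modify existing shipment orders.");
}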

 

The sender’s address and the sender’s cost center for billing are

the pieces of profile information that Fabrikam Shipping expects as

claims. The cost center allows Fabrikam to provide more detailed in-

voices. For example, two employees from Adatum who belong to two

different departments would get two different bills.

Fabrikam configures claims mappings for every new customer

that uses Fabrikam Shipping. This is necessary because the application

logic within Fabrikam Shipping only understands one set of role

claims, which includes Shipment Creator, Shipment Manager, and

Administrator. By providing these mappings, Fabrikam decouples the

application from the many different claim types that customers pro-

vide.

The following table shows the claims mappings for each customer.

Claims that represent cost centers, user names, and sender addresses

are simply copied. They are omitted from the table for brevity.

 


 

Partner: Adatum
Input conditions: Claim issuer: Adatum; Claim type: Group; Claim value: Customer Service
Output claims: Claim issuer: Fabrikam; Claim type: Role; Claim value: Shipment Creator

Input conditions: Claim issuer: Adatum; Claim type: Group; Claim value: Order Fulfillments
Output claims: Claim issuer: Fabrikam; Claim type: Role; Claim value: Shipment Creator
Claim issuer: Fabrikam; Claim type: Role; Claim value: Shipment Manager

Input conditions: Claim issuer: Adatum; Claim type: Group; Claim value: Admins
Output claims: Claim issuer: Fabrikam; Claim type: Role; Claim value: Administrator

Input conditions: Claim issuer: Adatum
Output claims: Claim issuer: Fabrikam; Claim type: Organization; Claim value: Adatum

Partner: Litware
Input conditions: Claim issuer: Litware; Claim type: Group; Claim value: Sales
Output claims: Claim issuer: Fabrikam; Claim type: Role; Claim value: Shipment Creator

Input conditions: Claim issuer: Litware
Output claims: Claim issuer: Fabrikam; Claim type: Organization; Claim value: Litware

Partner: Contoso
Input conditions: Claim issuer: Fabrikam identity provider; Claim type: e-mail; Claim value: bill@contoso.com
Output claims: Claim issuer: Fabrikam; Claim type: Role; Claim value: Shipment Creator
Claim issuer: Fabrikam; Claim type: Role; Claim value: Shipment Manager
Claim issuer: Fabrikam; Claim type: Role; Claim value: Administrator
Claim issuer: Fabrikam; Claim type: Organization; Claim value: Contoso

 

As in Chapter 4, “Federated Identity for Web Applications,” Adatum

could issue Fabrikam-specific claims, but it would not be a best practice

to clutter Adatum’s issuer with Fabrikam-specific concepts such as

Fabrikam roles. Fabrikam allows Adatum to issue any claims it wants,

and then it configures its federation provider to map these Adatum

claims to Fabrikam claims.
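On an ADFS-based federation provider, the first Adatum mapping in the preceding table could be expressed as a claim rule along these lines. The Group and Role claim type URIs are assumptions for illustration:

c:[Type == "http://schemas.xmlsoap.org/claims/Group",
   Value == "Customer Service"]
  => issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
           Value = "Shipment Creator");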

 


 

Inside the Implementation

 

Now is a good time to walk through some of the details of the solu-

tion. As you go through this section, you may want to download the

Microsoft® Visual Studio® development system solution 3FederationWithMultiplePartners from http://claimsid.codeplex.com. If you

are not interested in the mechanics, you should skip to the next sec-

tion.

The Fabrikam Shipping application uses the ASP.NET MVC framework in conjunction with the Windows® Identity Foundation (WIF). The application’s Web.config file contains the configuration information, as shown in the following XML code. The <system.webServer> section of the Web.config file references WIF-provided modules and the ASP.NET MVC HTTP handler class. The WIF information is the same as it was in the previous scenarios. The MVC HTTP handler is in the <handlers> section.

Fabrikam Shipping is an ASP.NET MVC application that uses claims.

 

<system.webServer>
  <modules runAllManagedModulesForAllRequests="true">
    <add name="WSFederationAuthenticationModule"
         preCondition="integratedMode"
         type="Microsoft.IdentityModel.Web.WSFederationAuthenticationModule, …" />

    <add name="SessionAuthenticationModule"
         preCondition="integratedMode"
         type="Microsoft.IdentityModel.Web.SessionAuthenticationModule, …" />
  </modules>
  <handlers>
    <add name="MvcHttpHandler"
         preCondition="integratedMode"
         verb="*"
         path="*.mvc"
         type="System.Web.Mvc.MvcHttpHandler, …"/>
    …
  </handlers>
</system.webServer>

Fabrikam chose ASP.NET MVC because it wanted improved testability and a modular architecture. They considered these qualities important for an application with many customers and complex federation relationships.

 


 

Fabrikam Shipping is an example of the finer-grained control that’s

available with the WIF API. Although Fabrikam Shipping demon-

strates how to use MVC with WIF, it’s not the only possible ap-

proach. Also, WIF-supplied tools, such as FedUtil.exe, are not

currently fully integrated with MVC applications. For now, you can

edit sections of the configuration files after applying the FedUtil

program to an MVC application. This is what the developers at

Fabrikam did with Fabrikam Shipping.

 

Fabrikam Shipping needs to customize the redirection of HTTP

requests to issuers in order to take advantage of the ASP.NET MVC

architecture. It does this by turning off automatic redirection from

within WIF’s federated authentication module. This is shown in the

following XML code:

 

<federatedAuthentication>
  <wsFederation passiveRedirectEnabled="false"
                issuer="https://{fabrikam host}/{issuer endpoint}/"
                realm="https://{fabrikam host}/f-Shipping/FederationResult"
                requireHttps="true" />
  <cookieHandler requireSsl="true" path="/f-Shipping/" />
</federatedAuthentication>

If you set passiveRedirectEnabled to false, WIF will no longer be responsible for the redirections to your issuers. You will have complete control of these interactions.

 

By setting the passiveRedirectEnabled attribute to false, you

instruct WIF’s federated authentication module not to perform its

built-in redirection of unauthenticated sessions to the issuer. Instead, Fabrikam Shipping uses the WIF API to perform this redirection under programmatic control.

ASP.NET MVC applications include the concept of route mappings

and controllers that implement handlers. A route mapping enables you

to define URL mapping rules that automatically dispatch incoming

URLs to application-provided action methods that process them.

(Outgoing URLs are also processed.)

The following code shows how Fabrikam Shipping establishes a

routing table for incoming requests such as “http://{fabrikam host}/f-

shipping/adatum”. The last part of the URL is the name of the organi-

zation (that is, the customer). This code is located in Fabrikam Ship-

ping’s Global.asax file.

 

public class MvcApplication : System.Web.HttpApplication

{

// …

public static void RegisterRoutes(RouteCollection routes)

{

// …

routes.MapRoute(

 


 

"OrganizationDefault",
"{organization}/",
new { controller = "Shipment", action = "Index" });

}

// …

}

 

The RegisterRoutes method allows the application to tell the

ASP.NET MVC framework how URIs should be mapped and handled

in code. This is known as a routing rule.

When an incoming request such as “http://{fabrikam host}/f-Shipping/adatum” is received, the MVC framework evaluates the routing rules to determine the appropriate controller object that should handle the request. The incoming URL is tested against each route rule. The first matching rule is then used to process the request. In the case of the “f-Shipping/adatum” URL, an instance of the application’s ShipmentController class will be used as the controller, and its Index method will be the action method.

There’s a lot of good information about ASP.NET MVC concepts at http://www.asp.net.

 

[AuthenticateAndAuthorize(Roles = "Shipment Creator")]

public class ShipmentController : BaseController

{

public ActionResult Index()

{

// …

}

}

 

The ShipmentController class has been decorated with a custom

attribute named AuthenticateAndAuthorize. This attribute is imple-

mented by the Fabrikam Shipping application. Here is the declaration

of the attribute class.

 

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]

public sealed class AuthenticateAndAuthorizeAttribute :

FilterAttribute, IAuthorizationFilter

{

// …

 

public void OnAuthorization(AuthorizationContext filterContext)

{

if (!filterContext.HttpContext.Request.IsSecureConnection)

{

throw /* … */

}

 

if (!filterContext.HttpContext.User.Identity.IsAuthenticated)

 


 

{

AuthenticateUser(filterContext);

}

else

{

this.AuthorizeUser(filterContext);

}

 

// …

}

 

The AuthenticateAndAuthorizeAttribute class derives from the

FilterAttribute class and implements the IAuthorizationFilter inter-

face. Both these types are provided by ASP.NET MVC. The MVC

framework recognizes these attribute types when they are applied to

controller classes and it calls the OnAuthorization method before

each controller method is invoked. The OnAuthorization method

detects whether or not authentication has been performed already,

and if it hasn’t, it invokes the AuthenticateUser helper method to

contact the application’s federation provider by HTTP redirection.

The following code shows how this happens.

 

private static void AuthenticateUser(AuthorizationContext context)

{

var organizationName =

(string)context.RouteData.Values["organization"];

 

if (!string.IsNullOrEmpty(organizationName))

{

if (!IsValidTenant(organizationName)) { throw /* … */ }

 

var returnUrl = GetReturnUrl(context.RequestContext);

 

var fam =

FederatedAuthentication.WSFederationAuthenticationModule;

 

var signIn =

new SignInRequestMessage(new Uri(fam.Issuer), fam.Realm)

{

Context = returnUrl.ToString(),

HomeRealm = RetrieveHomeRealmForTenant(organizationName)

};

 

context.Result =

new RedirectResult(signIn.WriteQueryString());

}

}

 


 

The AuthenticateUser method takes the customer’s name from

the route table. (The code refers to a customer as an organization.) In

this example, “adatum” is the customer. Next, the method checks to

see if the customer has been enrolled in the Fabrikam Shipping ap-

plication. If not, it raises an exception.

Then, the AuthenticateUser method looks up the information it

needs to create a federated sign-in request. This includes the URI of

the issuer (that is, Fabrikam’s federation provider), the application’s

realm (the address where the issuer will eventually return the security

token), the URL that the user is trying to access, and the home realm

designation of the customer. The method uses this information to create an instance of WIF’s SignInRequestMessage class. An instance of this class represents a new request to an issuer to authenticate the current user.

To keep your app secure and avoid attacks such as SQL injection, you should verify all values from an incoming URL.

In the underlying WS-Federation protocol, these pieces of information correspond to the parameters of the request message that will be directed to Fabrikam’s federation provider. The following table shows this correspondence.

 

Parameter: wrealm (Realm)
Contents: This identifies the Fabrikam Shipping application to the federation provider. This parameter comes from the Web.config file and is the address to which a token should be sent.

Parameter: wctx (Context)
Contents: This parameter is set to the address of the original URL requested by the user. This parameter is not used by the issuer, but all issuers in the chain preserve it for the Fabrikam Shipping application, allowing it to send the user to his or her original destination.

Parameter: whr (Home realm)
Contents: This parameter tells Fabrikam’s federation provider that it should use Adatum’s issuer as the identity provider for this request.
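Put together, the redirect that AuthenticateUser issues looks roughly like the following WS-Federation sign-in request (values URL-encoded; on the wire the realm parameter is named wtrealm, and wa=wsignin1.0 identifies the sign-in action). The exact query string is illustrative:

GET https://{fabrikam host}/{issuer endpoint}/?wa=wsignin1.0
    &wtrealm=https%3A%2F%2F{fabrikam host}%2Ff-Shipping%2FFederationResult
    &wctx=https%3A%2F%2F{fabrikam host}%2Ff-shipping%2Fadatum%2Fshipment%2Fnew
    &whr=http%3A%2F%2Fadatum%2Ftrust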

 

The GetReturnUrl method is a locally defined helper method

that gives the URL that the user is trying to access. An example is

http://{fabrikam host}/f-shipping/adatum/shipment/new.

After using the WIF API to construct the sign-on request mes-

sage, the method configures the result for redirection.

At this point, ASP.NET will redirect the user’s browser to the

federation provider. In response, the federation provider will use the

steps described in the Chapter 3, “Claims-Based Single Sign-On for

the Web,” and Chapter 4, “Federated Identity for Web Applications,”

to authenticate the user. This will include additional HTTP redirection

to the identity provider specified as the home realm. Unlike the previ-

ous examples in this guide, the federation provider in this example

 


 

uses the whr parameter sent by the application to infer the address of

the customer’s identity provider. After the federation provider re-

ceives a security token from the identity provider and transforms it

into a token with the claim types expected by Fabrikam Shipping, it

will POST it to the wrealm address that was originally specified. This

is a special URL configured with the SignInRequestMessage class in

the AuthenticateAndAuthorizeAttribute filter. In the example, the

URL will be f-shipping/FederationResult.

The MVC routing table is configured to dispatch the POST mes-

sage to the FederationResult action handler defined in the Home

Controller class of the Fabrikam Shipping application. This method is

shown in the following code.

 

[ValidateInput(false)]
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult FederationResult(string wresult)
{
    var fam =
        FederatedAuthentication.WSFederationAuthenticationModule;
    if (fam.CanReadSignInResponse(
        System.Web.HttpContext.Current.Request, true))
    {
        string returnUrl = this.GetReturnUrlFromCtx();

        return new RedirectResult(returnUrl);
    }

    // …
}

 

Notice that this controller does not have the AuthenticateAnd

Authorize attribute applied. However, the token POSTed to this ad-

dress is still processed by the WIF Federation Authentication Module

because of the explicit redirection of the return URL.

The FederationResult action handler uses the helper method

GetReturnUrlFromCtx to read the wctx parameter that contains the

original URL requested by the user. This is simply a property lookup

operation: this.HttpContext.Request.Form[“wctx”]. Finally, it issues

a redirect request to this URL.

The ValidateInput custom attribute is required for this scenario

because the body of the POST contains a security token serialized

as XML. If this custom attribute were not present, ASP.NET MVC

would consider the content of the body unsafe and therefore raise an

exception.

 


 

The application then processes the request a second time, but in

this pass, there is an authenticated user. The OnAuthorization

method described earlier will again be invoked, except this time it will

pass control to the AuthorizeUser helper method instead of the

AuthenticateUser method as it did in the first pass. The definition of

the AuthorizeUser method is shown in the following code.

 

private void AuthorizeUser(AuthorizationContext context)
{
    var organizationRequested =
        (string)context.RouteData.Values["organization"];
    var userOrganization =
        ClaimHelper.GetCurrentUserClaim(
            Fabrikam.ClaimTypes.Organization).Value;

    if (!organizationRequested.Equals(userOrganization,
        StringComparison.OrdinalIgnoreCase))
    {
        context.Result = new HttpUnauthorizedResult();
        return;
    }

    var authorizedRoles = this.Roles.Split(new[] { "," },
        StringSplitOptions.RemoveEmptyEntries);
    bool hasValidRole = false;
    foreach (var role in authorizedRoles)
    {
        if (context.HttpContext.User.IsInRole(role.Trim()))
        {
            hasValidRole = true;
            break;
        }
    }

    if (!hasValidRole)
    {
        context.Result = new HttpUnauthorizedResult();
        return;
    }
}

 

The AuthorizeUser method checks the claims that are present

for the current user. It makes sure that the customer identification in

the security token matches the requested customer as given by the

URL. It then checks that the current user has one of the roles required

to run this application.
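Stripped of the MVC plumbing, the authorization logic amounts to two checks: an organization match followed by a role-membership test. The following Python sketch restates that logic in a language-neutral way (the helper and its inputs are hypothetical, for illustration only):

```python
def authorize(requested_org, user_org, user_roles, permitted_roles):
    """Returns True only if the user's organization claim matches the
    organization named in the URL and the user holds at least one of
    the roles the controller permits."""
    if requested_org.lower() != user_org.lower():
        return False  # the token was issued for a different customer
    return any(role.strip() in user_roles for role in permitted_roles)

# A user from Adatum in the "Shipment Creator" role may view Adatum data,
# but the same token does not grant access to Litware's data.
allowed = authorize(
    "adatum", "Adatum", {"Shipment Creator"}, ["Shipment Creator"])
denied = authorize(
    "litware", "Adatum", {"Shipment Creator"}, ["Shipment Creator"])
```

The ordering matters: checking the organization claim first ensures a token issued for one customer can never be replayed against another customer's data, regardless of roles.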

 


 

Because this is a claims-aware application, you know that the user

object will be of type IClaimsPrincipal even though its static type

is IPrincipal. However, no run-time type conversion is needed in this

case. The reason is that the code only checks for role claims, and

these operations are available to instances that implement the

IPrincipal interface.

If you want to extract any other claims from the principal, you

will need to cast the User property to IClaimsPrincipal first. This

is shown in the following code.

 

var claimsprincipal =

context.HttpContext.User as IClaimsPrincipal;

 

If the user has a claim that corresponds to one of the permitted

roles (defined in the AuthenticateAndAuthorizeAttribute class), the

AuthorizeUser method will return without setting a result in the

context. This allows the original action request method to run.

In the scenario, the original action method is the Index method

of the ShipmentController class. The method’s definition is given by

the following code example.

 

[AuthenticateAndAuthorize(Roles = "Shipment Creator")]
public class ShipmentController : BaseController
{
    public ActionResult Index()
    {
        var repository = new ShipmentRepository();

        IEnumerable<Shipment> shipments;
        var organization =
            ClaimHelper.GetCurrentUserClaim(
                Fabrikam.ClaimTypes.Organization).Value;

        if (this.User.IsInRole(Fabrikam.Roles.ShipmentManager))
        {
            shipments =
                repository.GetShipmentsByOrganization(organization);
        }
        else
        {
            var userName = this.User.Identity.Name;
            shipments =
                repository.GetShipmentsByOrganizationAndUserName(
                    organization, userName);
        }

        var model =
            new ShipmentListViewModel { Shipments = shipments };

        return View(model);
    }

    // …
}

 

The Index action handler retrieves the data that is needed to

satisfy the request from the application’s data store. Its behavior de-

pends on the user’s role, which it extracts from the current claims

context. It passes control to the controller’s View method for render-

ing the information from the repository into HTML.

 

Setup and Physical Deployment

 

Applications such as Fabrikam Shipping that use federated identity
with multiple partners sometimes rely on automated provisioning and
may allow for customer-configurable claims mapping. The Fabrikam
Shipping example does not implement automated provisioning, but it
includes a prototype of a web interface as a demonstration of the
concepts.

(Automated provisioning may be needed when there are many partners.)

 

Establishing the Trust Relationship

If you were to implement automated provisioning, you could provide

a web form that allows an administrator from a customer’s site to

specify a URI of an XML document that contains federation meta-

data for ADFS 2.0. Alternatively, the administrator could provide the

necessary data elements individually.

If your application’s federation provider is an ADFS 2.0 server, you

can use Windows PowerShell® scripts to automate the configuration

steps. For example, the ADFSRelyingParty command allows you to

programmatically configure ADFS to issue security tokens to particu-

lar applications and federation providers. Look on MSDN® for the

ADFS 2.0 commands that you can use in your PowerShell scripts.

 

Processing a federation request might initiate a workflow process

that includes manual steps such as verifying that a contract has

been signed. Both manual and automated steps are possible, and

of course, you would first need to authenticate the request for

provisioning.

 


 

If you automate provisioning with a federation metadata XML

file, this file would be provided by a customer’s issuer. In the following

example, you’ll see the federation metadata file that is provided by

Adatum. The file contains all the information that Fabrikam Shipping

would need to configure and deploy its federation provider to com-

municate with Adatum’s issuer. Here are the important sections of the

file.

 

Organization Section

The organization section contains the organization name.

 

<Organization>
  <OrganizationDisplayName xml:lang="">
    Adatum
  </OrganizationDisplayName>
  <OrganizationName xml:lang="">Adatum</OrganizationName>
  <OrganizationURL xml:lang="">
    http://{adatum host}/Adatum.Portal/
  </OrganizationURL>
</Organization>

 

Issuer Section

The issuer section contains the issuer’s URI.

 

<fed:SecurityTokenServiceEndpoint>
  <EndpointReference
      xmlns="http://www.w3.org/2005/08/addressing">
    <Address>
      https://{adatum host}/{issuer endpoint}/
    </Address>
  </EndpointReference>
</fed:SecurityTokenServiceEndpoint>

 

Certificate Section

The certificate section contains the certificate (encoded in base64)

that is used by the issuer to sign the tokens.

 

<RoleDescriptor …>
  <KeyDescriptor use="signing">
    <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
      <X509Data>
        <X509Certificate>
          MIIB5TCCAV … Ukyey2pjD/R4LO2B3AO
        </X509Certificate>
      </X509Data>
    </KeyInfo>
  </KeyDescriptor>
</RoleDescriptor>
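A provisioning tool that consumes such a document needs only to pull out these elements. Here is a minimal Python sketch; note that the root element and the fed namespace URI are assumptions for illustration (a real FederationMetadata.xml is rooted in a SAML EntityDescriptor), so treat this as a skeleton rather than a parser for actual ADFS output:

```python
import xml.etree.ElementTree as ET

# The addressing and xmldsig URIs appear in the excerpts above;
# the WS-Federation URI is an assumption here.
NS = {
    "fed": "http://docs.oasis-open.org/wsfed/federation/200706",
    "wsa": "http://www.w3.org/2005/08/addressing",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

def read_partner_metadata(xml_text):
    """Extracts the issuer address and the base64 signing certificate
    from a (simplified) federation metadata document."""
    root = ET.fromstring(xml_text)
    address = root.find(
        ".//fed:SecurityTokenServiceEndpoint/"
        "wsa:EndpointReference/wsa:Address", NS)
    cert = root.find(".//ds:X509Certificate", NS)
    return address.text.strip(), cert.text.strip()

# A skeleton document with the same three kinds of elements.
sample = """\
<fed:FederationMetadata
    xmlns:fed="http://docs.oasis-open.org/wsfed/federation/200706"
    xmlns:wsa="http://www.w3.org/2005/08/addressing"
    xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
  <fed:SecurityTokenServiceEndpoint>
    <wsa:EndpointReference>
      <wsa:Address>https://issuer.adatum.example/</wsa:Address>
    </wsa:EndpointReference>
  </fed:SecurityTokenServiceEndpoint>
  <ds:KeyInfo><ds:X509Data>
    <ds:X509Certificate>MIIB5TCCAV</ds:X509Certificate>
  </ds:X509Data></ds:KeyInfo>
</fed:FederationMetadata>"""

address, cert = read_partner_metadata(sample)
```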

 

After Adatum registers as a customer of Fabrikam Shipping, the

customer’s systems administrators must also configure their issuer to

respond to requests from Fabrikam’s federation provider. For ADFS

2.0, this process is identical to what you saw in Chapter 4, “Federated

Identity for Web Applications,” when the Litware issuer began to

provide claims for the a-Order application.

 

User-Configurable Claims Transformation Rules

It’s possible for applications to let customers configure the claims
mapping rules that will be used by the application’s federation
provider. You would do this to make it as easy as possible for an
application’s customers to use their existing issuers without asking
them to produce new claim types. If a customer already has roles or
groups, perhaps from Microsoft Active Directory, that are ready to
use, it is convenient to reuse them. However, these roles would need
to be mapped to roles that are understood by the application.

(An application with many partners may require user-configurable
claims transformation rules.)

If the federation provider is an ADFS 2.0 server, you can use

Windows PowerShell scripts to set up the role mapping rules. The

claims mapping rules would be different for each customer.
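In essence, such a rule set is a per-customer lookup table from the role or group claims a customer's issuer emits to the roles the application understands. The sketch below shows the idea in Python; the rule contents are invented for illustration, and in practice ADFS expresses them as claim rules configured per customer (for example, via PowerShell) rather than as code:

```python
# Hypothetical per-customer role-mapping rules. Keys are role/group
# claims issued by the customer's own issuer; values are the roles
# Fabrikam Shipping understands.
MAPPING_RULES = {
    "adatum":  {"Order Tracker": "Shipment Creator"},
    "litware": {"Sales": "Shipment Creator"},
}

def map_roles(customer, incoming_roles):
    """Applies the customer's rules; claims with no mapping are dropped."""
    rules = MAPPING_RULES.get(customer, {})
    return [rules[r] for r in incoming_roles if r in rules]

# An Adatum "Order Tracker" becomes a "Shipment Creator"; the unmapped
# "Employee" claim is discarded rather than passed through.
mapped = map_roles("adatum", ["Order Tracker", "Employee"])
```

Dropping unmapped claims, rather than forwarding them, keeps customer-specific claim types from leaking into the application.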

 

Questions

 

1. In the scenario described in this chapter, who should take

what action when an employee leaves one of the partner

organizations such as Litware?

 

a. Fabrikam Shipping must remove the user from its user

database.

 

b. Litware must remove the user from its user database.

 

c. Fabrikam must amend the claims-mapping rules in its

federation provider.

 

d. Litware must ensure that its identity provider no

longer issues any of the claims that get mapped to

Fabrikam Shipping claims.

 


 

2. In the scenario described in this chapter, how does Fabrikam

Shipping perform home realm discovery?

 

a. Fabrikam Shipping presents unauthenticated users

with a list of federation partners to choose from.

 

b. Fabrikam Shipping prompts unauthenticated users for

their email addresses. It parses this address to deter-

mine which organization the user belongs to.

 

c. Fabrikam Shipping does not need to perform home

realm discovery because users will have already

authenticated with their organizations’ identity

providers.

 

d. Each partner organization has its own landing page in

Fabrikam Shipping. Visiting that page will automati-

cally redirect unauthenticated users to that organiza-

tion’s identity provider.

 

3. Fabrikam Shipping provides an identity provider for its

smaller customers who do not have their own identity

provider. What are the disadvantages of this?

 

a. Fabrikam must bear the costs of providing this service.

 

b. Users at smaller customers will need to remember

another username and password.

 

c. Smaller customers must rely on Fabrikam to manage
their users’ access to Fabrikam Shipping.

 

d. Fabrikam Shipping must set up a trust relationship

with all of its smaller customers.

 

4. How does Fabrikam Shipping ensure that only users at a

particular partner can view that partner’s shipping data?

 

a. The Fabrikam Shipping application examines the email

address of the user to determine the organization they

belong to.

 

b. Fabrikam Shipping uses separate databases for each

partner. Each database uses different credentials to

control access.

 


 

c. Fabrikam Shipping uses the role claim from the
partner’s identity provider to determine whether the
user should be able to access the data.

d. Fabrikam Shipping uses the organization claim from
its federation provider to determine whether the user
should be able to access the data.

 

5. The developers at Fabrikam set the wsFederation
passiveRedirectEnabled attribute to false. Why?

 

a. This scenario uses active redirection, not passive

redirection.

 

b. They wanted more control over the redirection

process.

 

c. Fabrikam Shipping is an MVC application.

 

d. They needed to be able to redirect to external identity

providers.

 


 

7 Federated Identity with Multiple Partners and
Windows Azure Access Control Service

 

In Chapter 6, “Federated Identity with Multiple Partners,” you saw
how Fabrikam used claims to enable access to the Fabrikam Shipping
application for multiple partners. The scenario described how
Fabrikam supported users at large partner organizations with their own

claims-based identity infrastructure, and users from smaller organiza-

tions with no claims-based infrastructure of their own. Fabrikam

provided support for the larger partner organizations by establishing

trust relationships between the Fabrikam federation provider (FP) and

the partner’s identity provider (IdP). To support the smaller organiza-

tions, it was necessary for Fabrikam to implement its own identity

provider and manage the collection of enrolled employees from

smaller partners. This scenario also demonstrated how Fabrikam had

taken steps to automate the enrollment process for new partners.

Users at smaller partners had to create new accounts at Fabrikam,

adding to the list of credentials they have to remember. Many indi-

viduals would prefer to reuse an existing identity rather than create a

new one just to use the Fabrikam Shipping application. How can

Fabrikam enable users to reuse existing identities such as Facebook

IDs, Google IDs, or Windows Live® IDs? In addition to establishing

trust relationships with the social identity providers, Fabrikam must

find solutions to these problems:

•     Other identity providers may use different protocols to
exchange claims data.

•     Other identity providers may use different claim types.

•     Fabrikam Shipping must be able to use the claims data it
receives to implement authorization rules.

•     The federation provider must be able to redirect users to
the correct identity provider.

•     Fabrikam must be able to enroll new users who want to use
the Fabrikam Shipping application.

 


 

In Chapter 5, “Federated Identity with Windows Azure Access
Control Services,” you saw how Adatum extended access to the
a-Order application to include users who wanted to use their social
identity to authenticate with the a-Order application. In this chapter,
you’ll see how Fabrikam replaced its on-premises federation provider
with Windows Azure™ AppFabric Access Control Service (ACS) to
enable users at smaller organizations without their own identity
infrastructure to access Fabrikam Shipping.

(You can use ACS to manage multiple trust relationships.)

Unlike the scenario described in Chapter 5, “Federated Identity

with Windows Azure Access Control Services,” users from smaller

partners who use social identity providers will be able to enroll them-

selves with the Fabrikam Shipping application. They will access the

Fabrikam Shipping application alongside employees of existing enter-

prise partners. This chapter extends the scenario described in Chapter

6, “Federated Identity with Multiple Partners.”

 

The Premise

 

Fabrikam is a company that provides shipping services. As part of its

offering, it has a web application named Fabrikam Shipping that al-

lows its customers to perform such tasks as creating shipping orders

and tracking them. Fabrikam Shipping is an ASP.NET MVC application

that runs in the Fabrikam data center.

Fabrikam has already claims-enabled the Fabrikam Shipping web

application, allowing employees from Adatum and Litware to access

the application without having to present separate usernames and

passwords. Users at Contoso, a smaller partner, can also access Fabri-

kam Shipping, but they must log in using credentials that the Fabrikam

identity provider, Active Directory® Federation Services (ADFS) 2.0,

authenticates. Users at Contoso have complained about the fact that
they must remember a set of credentials specifically for accessing the
Fabrikam Shipping application. All of Contoso’s employees have either
Windows® Live IDs or Google accounts, and they would prefer to use
these credentials to gain access to the application. Users at other
Fabrikam customers have echoed this request, mentioning Facebook
IDs and Yahoo! IDs as additional credential types they would like to
be able to use.

(Managing the accounts for users at organizations such as Contoso
adds to the complexity of the Fabrikam ADFS implementation.)

 


 

Goals and Requirements

 

The primary goal of this scenario is to show how Fabrikam can use
ACS as a federation provider to enable both employees of large
partners such as Adatum and Litware, and smaller partners whose
employees use identities from social identity providers, to access the
Fabrikam Shipping application.

To recap from Chapter 6, “Federated Identity with Multiple Part-

ners,” larger customers such as Adatum and Litware have some par-

ticular concerns. These include the following:

•     Usability. They would prefer it if their employees didn’t need to
learn new passwords and user names for Fabrikam Shipping.

These employees shouldn’t need any credentials other than the

ones they already have, and they shouldn’t have to enter creden-

tials a second time when they access Fabrikam Shipping from

within their security domain. The solution described in Chapter

6, “Federated Identity with Multiple Partners,” addresses this

concern and introducing ACS as a federation provider must not

change the user experience for the employees of these custom-

ers.

•     Support. It is easier for Adatum and Litware to manage issues

such as forgotten passwords than to have their employees

interact with Fabrikam. The solution described in Chapter 6,

“Federated Identity with Multiple Partners,” addresses this

concern and introducing ACS as a federation provider must not

change the user experience for the security administrators of

these customers.

•     Liability. There are reasons why Adatum and Litware have the

authentication and authorization policies that they have. They

want to control who has access to their resources, no matter

where those resources are deployed, and Fabrikam Shipping is

no exception. If an employee leaves the company, he or she

should no longer have access to the application. Again, the

solution described in Chapter 6, “Federated Identity with

Multiple Partners,” addresses this concern.

•     Confidentiality. Partners of Fabrikam, such as Adatum, do not

want other partners, such as Litware, to know that they are

using the Fabrikam Shipping service. When a user accesses the

Fabrikam Shipping site, they should not have to choose from a

list of available authentication partners; rather, the site should

automatically redirect them to the correct identity provider

without revealing a list of partners.

 


 

Fabrikam has its own goals, which are the following:

•     To delegate the responsibility for maintaining user identities

to its customers, when possible. This avoids a number of

problems, such as having to synchronize data between Fabrikam

and its customers. The contact information for a package’s

sender is an example of this kind of data. Its accuracy should be

the customer’s responsibility because it could quickly become

costly for Fabrikam to keep this information up to date. The

solution described in Chapter 6, “Federated Identity with

Multiple Partners,” addresses this concern.

•     To bill customers by cost center if one is supplied. Customers

should provide the cost center information. This is another

example of information that is the customer’s responsibility. The

solution described in Chapter 6, “Federated Identity with

Multiple Partners,” addresses this concern.

•     To sell its services to a large number of customers. This means

that the process of enrolling a new company must be stream-

lined. Fabrikam would also prefer that its customers self-manage

the application whenever possible. The automated enrollment

process must be able to support both large organizations with

their own identity infrastructure, and smaller organizations

whose employees use a social identity provider. Furthermore,

Fabrikam would like to support the widest possible range of

social identity providers.

•     To provide the infrastructure for federation if a customer

cannot. Fabrikam wants to minimize the impact on the applica-

tion code that might arise from having more than one authenti-

cation mechanism for customers. However, Fabrikam would

prefer not to have to maintain an on-premises identity provider

for smaller customers. Instead, it would like users at smaller

customers to use existing social identities.

 

Smaller customers and individual users have some particular concerns.

These include the following:

•     Usability. Individual users would prefer to use existing identities

such as Windows Live IDs or Google account credentials to

access the Fabrikam Shipping website instead of having to

create a new user ID and password just to access this site.

 


 

•     Support. If individual users forget their passwords, they would

like to be able to use the password recovery tools provided by

their social identity provider rather than interacting with

Fabrikam.

•     Privacy. Individual users do not want their social identity

provider to reveal to Fabrikam private information maintained

by the social identity provider that is not relevant to the Fabri-

kam shipping application.

 

Overview of the Solution

With the goals and requirements in place, it’s time to look at the
solution. As you saw in Chapter 6, “Federated Identity with Multiple
Partners,” the solution includes the establishment of a claims-based
architecture with issuers that act as identity providers on the
customers’ side. In addition, the solution includes an issuer that acts
as the federation provider on the Fabrikam side. Recall that a
federation provider acts as a gateway between a resource and all of
the issuers that provide claims about the resource’s users. In this
chapter, Fabrikam replaces the on-premises federation provider with
ACS in order to support authenticating users with social identities.
This change also means that Fabrikam no longer has to host and
manage a federation provider in its own datacenter.

(Fabrikam must be careful to explain to individual users the
implications of allowing their social identity provider to release
details to ACS and be clear about exactly what information Fabrikam
Shipping and ACS will be able to access.)

 

Although this solution brings the benefits of easy support for users

who want to use their social identities, and a simplification of the

implementation of the on-premises Fabrikam issuer, there are some

trade-offs that Fabrikam evaluated.

This solution relies on access to ACS for all access to Fabrikam

Shipping. Fabrikam is satisfied by the SLAs in place with the ACS

subscription.

Using ADFS on-premises meant that Fabrikam could support

federation with organizations using the SAMLP protocol. ACS does

not currently support this protocol, but Fabrikam anticipates that all

of its federation partners will support the WS-Federation protocol.

 

Figure 1 shows the Fabrikam Shipping solution using ACS.

 


 

[Figure 1 (diagram): Fabrikam Shipping using ACS. Social identity
providers such as OpenID, Facebook, and Windows Live ID, together
with the issuer (IdP) at Adatum, are trusted by ACS, which acts as the
federation provider (FP), transforming and mapping claims before
users such as John and Mary reach the Fabrikam Shipping application,
the relying party (RP).]

(In the solution described in Chapter 6, “Federated Identity with
Multiple Partners,” Fabrikam used an on-premises federation provider
(FP). Now Fabrikam is using ACS in the cloud instead.)

Here’s an example of how the system works for a user at an
organization such as Adatum with its own identity provider. This
process is similar, but not identical, to the process described in
Chapter 6, “Federated Identity with Multiple Partners.” The steps
correspond to the shaded numbers in the preceding illustration.

 

Step 1: Present Credentials to the Identity Provider

 

1. When John from Adatum attempts to use Fabrikam
Shipping for the first time (that is, when he first navigates
to https://{fabrikam host}/f-shipping/adatum), there’s no
session established yet. In other words, from Fabrikam’s
point of view, John is unauthenticated. The URL provides
the Fabrikam Shipping application with a hint about the
customer that is requesting access (the hint is “adatum”
at the end of the URL).

 

2. The application redirects John’s browser to the Fabrikam

ACS instance in the cloud (the federation provider). That’s

because the Fabrikam ACS instance is the application’s

trusted issuer. As part of the redirection URL, the applica-

tion includes the whr parameter that provides a hint to ACS

about the customer’s home realm. The value of the whr

parameter is https://localhost/Adatum.SimulatedIssuer.7/.

 


 

It’s important to use the entityID value from the identity

provider’s FederationMetadata.xml file as the whr value if you

want ACS to automatically redirect the user to the partner’s

identity provider. entityID is an attribute in the issuer’s

federation metadata: ACS uses this attribute value to uniquely

identify identity providers that it trusts.

 

3. ACS uses the whr parameter to look up the customer’s

identity provider and redirect John’s browser to the Adatum

issuer.

4. Assuming that John uses a computer that is already part of
the domain and on the corporate network, he will already
have valid network credentials that his browser can present
to the Adatum identity provider.

5. The Adatum identity provider uses John’s credentials to
authenticate him and then issue a security token with a set
of Adatum claims. These claims are the employee name, the
employee address, the cost center, the role, and the group.

(This scenario automates home realm discovery. There’s no need
for the user to provide his home realm details, as was the case in
Chapter 4, “Federated Identity for Web Applications.”)

 

Although the identity provider may also issue an organization

claim, Fabrikam will always generate the organization claim

value in ACS. This prevents a malicious administrator at a

partner organization from impersonating a user from another

partner.  
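Conceptually, the home realm lookup ACS performs in step 3 is a table lookup keyed on the whr value: the hint must exactly match the entityID registered for a trusted identity provider, or ACS cannot route the request. A minimal sketch of that routing decision (the table entries are the placeholder values from this scenario, and the provider labels are invented):

```python
# whr values must equal the entityID values taken from each partner's
# FederationMetadata.xml (placeholders from the sample scenario).
IDENTITY_PROVIDERS = {
    "https://localhost/Adatum.SimulatedIssuer.7/": "adatum-issuer",
    "uri:WindowsLiveID": "windows-live",
}

def route_home_realm(whr):
    """Returns the identity provider registered under this whr value,
    or None when the hint doesn't match any trusted provider."""
    return IDENTITY_PROVIDERS.get(whr)
```

If the hint doesn't match, an issuer would typically fall back to asking the user to choose a provider, which is exactly the list Fabrikam wants to avoid showing.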

 

Step 2: Transmit the Identity Provider’s Security Token to the
Federation Provider

 

1. The Adatum identity provider uses HTTP redirection

to redirect the browser to the Fabrikam ACS instance,

delivering the security token issued by the Adatum

identity provider to the Fabrikam ACS instance.

 

2. The Fabrikam ACS instance receives this token and

validates it.

 

Step 3: Map the Claims

 

1. The Fabrikam ACS instance applies claim-mapping rules to

the claims in the identity provider’s security token. ACS

transforms the claims into claims that Fabrikam Shipping

expects and understands.

 

2. ACS returns a new token with the claims to John’s browser
and uses HTTP redirection to return John’s browser to the
Fabrikam Shipping application.

 


 

The redirection should be to a secure HTTP address (HTTPS) to

prevent the possibility of session hijacking.

 

Step 4: Transmit the Mapped Claims and Perform the
Requested Action

 

1. The browser sends the security token from ACS, which

contains the transformed claims, to the Fabrikam Shipping

application.

 

2. The application validates the security token.

 

3. The application reads the claims and creates a session for

John.

 

Because this is a web application, all interactions happen through

the browser. (See Appendix B for a detailed description of the proto-

col for a browser-based client.)

Litware follows the same steps as Adatum. The only differences

are the URLs used (https://{fabrikam host}/f-shipping/litware and the

Litware identity provider’s address) and the claims-mapping rules,

because the claims issued by the Litware identity provider are differ-

ent from those issued by the Adatum identity provider. Notice that

the Fabrikam Shipping web application trusts the Fabrikam ACS in-

stance, not the individual issuers at Litware or Adatum; this level of

indirection isolates Fabrikam Shipping from individual differences

between Litware and Adatum.

In the scenario described in Chapter 6, “Federated Identity with

Multiple Partners,” Fabrikam managed and hosted an identity pro-

vider for smaller customers such as Contoso to enable users from

these customers to authenticate before accessing the Fabrikam Ship-

ping application. Users at organizations such as Contoso would now

prefer to reuse an existing social identity rather than maintaining a

separate set of credentials just for use with Fabrikam Shipping.

Here’s an example of how the system works for a user at an orga-

nization such as Contoso where the users authenticate with an online

social identity provider. The steps correspond to the un-shaded num-

bers in the preceding illustration. ACS treats the online social identity

providers in almost the same way it treats the Adatum and Litware

identity providers. However, it will use a different set of claims-map-

ping rules for the social identity providers and, if necessary, perform

protocol transition as well. Fabrikam didn’t need to change the Fabri-

kam Shipping application in order to support users with social identi-

ties; the application continues to trust ACS and ACS continues to

deliver the same types of claims to Fabrikam Shipping.

 


 

Step 1: Present Credentials to the Identity Provider

 

1. When Mary from Contoso attempts to use Fabrikam
Shipping for the first time (that is, when she first navigates

to https://{fabrikam host}/f-shipping/Contoso), there’s no

session established yet. In other words, from Fabrikam’s

point of view, Mary is unauthenticated. The URL provides

the Fabrikam Shipping application with a hint about the

customer that is requesting access (the hint is “Contoso” at

the end of the URL).

 

2. The application redirects Mary’s browser to the Fabrikam

ACS instance in the cloud (the federation provider). That’s

because the Fabrikam ACS instance is the application’s

trusted issuer. As part of the redirection URL, the applica-

tion includes the whr parameter that provides a hint to the

federation provider about the customer’s home realm. The

value of the whr parameter is uri:WindowsLiveID.

 

In the current implementation, this means that all the employees

at a small partner must use the same social identity provider. In

this example, all Contoso employees must have a Windows Live

ID to be able to access Fabrikam Shipping. You could extend the

sample to enable users at partners such as Contoso to each use

different social identity providers.

 

3. ACS uses the whr parameter to look up the customer’s

preferred social identity provider and redirect Mary’s

browser to the social identity issuer; in this example,

Windows Live.

 

4. The social identity provider, Windows Live in this example,

uses Mary’s credentials to authenticate her and then returns

a security token with a basic set of claims to Mary’s brows-

er. In the case of Windows Live ID, the only claim returned

is nameidentifier.  

 

STEP 2: Transmit the Social Identity Provider’s Security Token to ACS

 

1. The social identity provider uses HTTP redirection to

redirect Mary’s browser with the security token it has issued

to the Fabrikam ACS instance.

 

2. The Fabrikam ACS instance receives this token and

validates it.

 


 

132 chapter seven

 

STEP 3: Map the Claims

 

1. The Fabrikam ACS instance applies token mapping rules to

the social identity provider’s security token. It transforms

the claims into claims that Fabrikam Shipping understands.

In this example, it adds new claims: name, organization,

role, and costcenter.
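
For example, the transformation for Mary might look like the following. The claim values shown here are invented for illustration:

```
Input claim (from Windows Live ID):
    nameidentifier = "zH6y...Q="     (an opaque, provider-specific identifier)

Output claims (issued by ACS to Fabrikam Shipping):
    name         = "mary"
    organization = "Contoso"
    role         = "Shipment Manager"
    costcenter   = "123"
```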

 

2. If necessary, ACS transitions the protocol that the social

identity provider uses to the WS-Federation protocol.

 

3. ACS returns a new token with the claims to Mary’s browser.

The types of claims that ACS sends to Fabrikam Shipping for a user with a social identity are the same claim types that it sends for users at Adatum and Litware.

STEP 4: Transmit the Mapped Claims and Perform the Requested Action

1. ACS uses HTTP redirection to redirect Mary’s browser, with the security token from ACS that contains the claims, to the Fabrikam Shipping application.

 

2. The application validates the security token.

 

3. The application reads the claims and creates a session for

Mary.

 

Enrolling a New Partner Organization

One of Fabrikam’s goals was to enable partner organizations to enroll themselves with the Fabrikam Shipping application, and to enable them to manage their own users. Both larger partners with their own identity providers and smaller partners whose employees use identities from social identity providers should be able to perform these operations.

Partners, both with and without their own identity providers, can enroll themselves with Fabrikam Shipping.

The enrollment process must perform three key configuration steps:

•     Update the Fabrikam Shipping list of registered partners. The

registration data for each partner should include its name, the

URL of a logo image, and an identifier for the partner’s home

realm.

•     For partners using their own identity provider, create a trust

relationship so that the Fabrikam ACS instance trusts the

partner’s identity provider.

•     Create suitable claims-mapping rules in the Fabrikam ACS

instance that transform the claims from the partner’s identity

provider to the claims that Fabrikam Shipping expects to see.

 

Fabrikam uses the partner name and logo that it stores in its list

of registered partners to customize the UI of Fabrikam Shipping when

an employee from the partner visits the site. The partner’s home realm

 


is important because when Fabrikam Shipping redirects a user to ACS

for authentication, it includes the home realm as a value for the whr

parameter in the request’s querystring. To enable ACS to automati-

cally redirect the user to the correct identity provider, the partner’s

home realm value should be the value of the entityID in the partner

identity provider’s FederationMetadata.xml.
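
For example, the top of a partner’s FederationMetadata.xml might look like the following fragment; the entityID value here is illustrative:

```xml
<EntityDescriptor
    xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="http://adatum/trust">
  <!-- role descriptors, endpoints, and signing certificates appear here -->
</EntityDescriptor>
```

It is this entityID value that the enrollment process stores as the partner’s home realm, and that Fabrikam Shipping later sends in the whr parameter.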

Partners without their own identity provider use one of the pre-configured social identity providers in ACS; enrolling a new partner in this scenario does not require Fabrikam to configure a new identity provider in ACS. For partners with their own identity provider, the enrollment process must configure a new identity provider in ACS.

We can use ACS to handle the differences in the tokens and protocols that the various social identity providers use.

Partners with their own identity provider must configure their identity provider; a configuration example might be defining a relying party realm. The details of this will be specific to the type of identity provider that the partner uses.

 

Different identity providers return different claims. For example,

Windows Live only returns a nameidentifier claim, while a custom

provider might include name, organization, costcenter, and role

claims. Regardless of the claims that the identity provider issues, the

rules that the enrollment process creates in ACS must be sufficient to

return costcenter, name, organization, and role claims, all of which

the Fabrikam Shipping application requires. ACS can issue these claims

to Fabrikam Shipping either by transforming a claim from the identity

provider, by passing a claim from the identity provider through un-

changed, or by creating a new claim.

 

Managing Multiple Partners

with a Single Identity

A user, such as Paul, may work for two or more partners of Fabrikam

Shipping. If those partners have their own identity providers, then

Paul will have two separate identities, such as paul@contoso.com and

paul@adventureworks.com, for example. However, if the partner or-

ganizations do not have their own identity providers, then it’s likely

that Paul will want to use the same social identity (paul@gmail.com)

with both partners. This raises a problem if Paul has different roles

in the two partner organizations; in Contoso, he may be in the

Shipment Manager role, and in AdventureWorks he may be in the

Administrator role. If ACS assigns roles based on Paul’s identity,

he will end up with both roles assigned to him, which means he will

be in the Administrator role in Contoso.

To handle this scenario, Fabrikam first considered using a different service namespace for each partner in ACS. To access Contoso data at Fabrikam Shipping, Paul would need a token from the Contoso namespace; to access AdventureWorks data, he would need a token from the AdventureWorks namespace. To automate the enrollment process for new partners, Fabrikam would need to be able to create new service namespaces in ACS programmatically. Unfortunately, the ACS Management API does not currently support this operation.

The solution adopted by Fabrikam was to create a different relying party (RP) in ACS for each partner. In ACS, each relying party can have its own set of claims-mapping rules, so the rule group in the Contoso relying party in ACS can assign the Shipment Manager role to Paul, while the rule group in the AdventureWorks relying party in ACS can assign him the Administrator role. If Paul signs in to Fabrikam Shipping using a token from the Contoso relying party and then tries to access AdventureWorks data, he will need to re-authenticate in order to obtain a token from the AdventureWorks relying party in ACS.

Enabling partners to manage their own users reduces the amount of work Fabrikam has to do to manage the Fabrikam Shipping application.

A single service namespace in ACS can have multiple relying parties. The wtrealm parameter passed to ACS identifies the relying party to use, and each relying party has its own set of claims-mapping rules that include a rule to add an organization claim. Fabrikam Shipping uses the organization claim to authorize access to data.

 

Managing Users at a Partner

Organization

For a partner organization with its own identity provider, the partner

can manage which employees have access to its data at Fabrikam Ship-

ping using the partner’s identity provider. By controlling which claims

its identity provider issues for individual employees, the partner can

determine what level of access the employee has in the Fabrikam

Shipping application. This approach depends on the claims-mapping

rules that the enrollment process created in ACS. For example, map-
ping the Order Tracker role in Adatum to the Shipment Manager role

in Fabrikam Shipping would give anyone at Adatum with the Order

Tracker role the ability to manage Adatum shipments at Fabrikam.

In the case of a partner without its own identity provider, such as

Contoso where employees authenticate with a social identity pro-

vider, the claims-mapping rules in ACS must include the mapping of

individuals to roles within Fabrikam. To manage these mappings, each of these organizations should have a designated administrator who can edit the organization’s claims-mapping rules. The adminis-

trator would use an administration page hosted on the Fabrikam Ship-

ping enrollment web site to manage the list of users with access to

Contoso data in Fabrikam Shipping and edit the rules that control

 


access levels. This page will use the ACS Management API to make the

necessary configuration changes in ACS.

 

The sample does not implement this feature: each partner without its

own identity provider has only a single user. The enrollment process

configures this user. The sample implementation also assumes that if

a partner did have more than one user, then all the users must use

the same social identity provider.

 

Inside the Implementation

 

Now is a good time to walk through some of the details of the solution. As you go through this section, you may want to download the Microsoft Visual Studio® solution, 7FederationWithMultiplePartnersAndAcs, from http://claimsid.codeplex.com. If you are not interested in the mechanics, you should skip to the next section.

The scenario described in this chapter is very similar to the scenario described in Chapter 6, “Federated Identity with Multiple Partners.” The key difference is that ACS, rather than an issuer at Fabrikam, now provides the federation services. The changes to the Fabrikam Shipping application all relate to the way Fabrikam Shipping interacts with ACS; in particular, how the application enrolls new partners and handles the logon process. The logic of the application and the authorization rules it applies using the claims from the identity providers are unchanged.

Modifying Fabrikam Shipping to use ACS instead of the Fabrikam federation provider was mostly a configuration task.

 

Getting a List of Identity Providers

from ACS

When a partner wants to enroll with the Fabrikam Shipping applica-

tion, part of the sign-up process requires the partner to select the

identity provider they want to use. The choice they have is either to

use their own identity provider (at this stage in the enrollment process

Fabrikam Shipping and ACS know nothing about the partner or its

identity provider), or to use one of the pre-configured social identity

providers: Google, Yahoo!, or Windows Live. It’s possible that the list

of available social identity providers might change, so it makes sense

for Fabrikam to build the list programmatically by querying the Fabri-

kam ACS instance. However, there’s no way to ask ACS for only the

list of social identity providers and exclude any custom identity pro-

viders from other partners. The following code sample shows how

Fabrikam implemented an extension method, IsSocial, to check

whether an identity provider is a social identity provider.

 


public static class SocialIdentityProviders
{
    public static readonly SocialIdentityProvider
        Google = new SocialIdentityProvider {
            DisplayName = "Google",
            HomeRealm = "Google",
            Id = "10008641" };

    public static readonly SocialIdentityProvider
        WindowsLiveId = new SocialIdentityProvider {
            DisplayName = "Windows Live ID",
            HomeRealm = "uri:WindowsLiveID",
            Id = "10007989" };

    public static readonly SocialIdentityProvider
        Yahoo = new SocialIdentityProvider {
            DisplayName = "Yahoo!",
            HomeRealm = "Yahoo!",
            Id = "10008653" };

    public static IEnumerable<SocialIdentityProvider> GetAll()
    {
        return new SocialIdentityProvider[3] {
            Google, Yahoo, WindowsLiveId };
    }

    public static string GetHomeRealm(string socialIpId)
    {
        var providers = new[] { Google, Yahoo, WindowsLiveId };
        return providers.Single(p => p.Id == socialIpId).HomeRealm;
    }

    public static bool IsSocial(this IdentityProvider ip)
    {
        if (ip.Issuer.Name.Contains(Google.HomeRealm) ||
            ip.Issuer.Name.Contains(Yahoo.HomeRealm) ||
            ip.Issuer.Name.Contains(WindowsLiveId.HomeRealm))
        {
            return true;
        }

        return false;
    }
}

A separate web application called f-Shipping.Enrollment.7 handles the enrollment tasks.

 

The solution includes an ACS.ServiceManagementWrapper proj-

ect that wraps the REST calls that perform management operations

in ACS. The enrollment process builds a list of available social identity

providers by calling the RetrieveIdentityProviders method in this

wrapper class.
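
For example, the enrollment page could combine RetrieveIdentityProviders with the IsSocial extension method shown above. The wrapper’s constructor arguments in this sketch are assumptions, not the actual API:

```csharp
// Illustrative sketch: build the list of social identity providers
// to offer on the enrollment page.
var acs = new ServiceManagementWrapper(
    serviceNamespace, adminUserName, adminPassword);

var socialProviders = acs.RetrieveIdentityProviders()
    .Where(ip => ip.IsSocial())
    .ToList();
```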

 


The ACS.ServiceManagementWrapper project uses password

authentication over HTTPS with the calls that it makes to the

ACS management API. As an alternative, you could sign the

request with a symmetric key or an X.509 certificate.

 

Adding a New Identity Provider to ACS

When a partner with its own identity provider enrolls with Fabrikam

Shipping, part of the enrollment process requires Fabrikam to add

details of the partner’s issuer to the list of identity providers in ACS.

The enrollment process automates this by using the ACS Management

API. The wrapper class in the ACS.ServiceManagementWrapper proj-

ect includes two methods, AddIdentityProvider and AddIdentity

ProviderManually for configuring a new identity provider in ACS.

During the enrollment process, if the user provides a FederationMeta-

data.xml file that contains all of the necessary information to config-

ure the trust, the EnrollmentController class uses the AddIdentity

Provider method. If the user provides details of the identity provider

manually, it uses the AddIdentityProviderManually method. The

enrollment process then adds a relying party and mapping rules to the

identity provider, again using methods in the ServiceManagement

Wrapper wrapper class.
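
The choice between the two methods might be sketched as follows; the method names come from the wrapper project, but the parameter lists are assumptions:

```csharp
// Illustrative sketch only; parameter lists are assumed.
if (federationMetadataFile != null)
{
    // The partner supplied a FederationMetadata.xml file.
    acs.AddIdentityProvider(partnerName, federationMetadataFile);
}
else
{
    // The partner entered the issuer details manually.
    acs.AddIdentityProviderManually(
        partnerName, issuerUrl, signingCertificate);
}
```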

 

Managing Claims-Mapping Rules in ACS

The automated enrollment process for both larger organizations that

have their own identity provider, and smaller partners who rely on a

social identity provider requires Fabrikam to add claims-mapping rules

to ACS programmatically. The wrapper class in the ACS.ServiceMan-

agementWrapper project includes an AddSimpleRuleToRuleGroup

method that the enrollment process uses when it adds a new claims-

mapping rule. The application also uses the AddPassthroughRule

ToRuleGroup when it needs to add a rule that passes a claim through

from the identity provider to the relying party without changing it,

and the AddSimpleRuleToRuleGroupWithoutSpecifyInputClaim

method when it needs to create a new claim that’s not derived from

any of the claims issued by the identity provider.
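
To make the three kinds of rule concrete, here is a sketch of how the enrollment process might call these methods. The method names come from the wrapper project described above, but the parameter lists and claim values are illustrative assumptions:

```csharp
// Illustrative sketch only; parameter lists and values are assumed.

// Transform: map an Adatum group claim to a role that
// Fabrikam Shipping understands.
acs.AddSimpleRuleToRuleGroup(
    ruleGroupName, identityProviderName,
    inputClaimType, "Order Tracker",        // claim from the partner
    Fabrikam.ClaimTypes.Role, "Shipment Manager");

// Pass through: forward the name claim unchanged.
acs.AddPassthroughRuleToRuleGroup(
    ruleGroupName, identityProviderName, ClaimTypes.Name);

// Create: issue an organization claim that is not derived from any
// input claim, based only on which identity provider sent the token.
acs.AddSimpleRuleToRuleGroupWithoutSpecifyInputClaim(
    ruleGroupName, identityProviderName,
    Fabrikam.ClaimTypes.Organization, "adatum");
```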

 

It’s important that the mapping rules don’t simply pass through the

organization claim, but instead create a new organization claim

derived from the identity of the identity provider. This is to prevent

the risk of a malicious administrator at the partner spoofing the

identity of another organization. When registering a new organization, the code should verify that the organization name is not already in use, so that a new registration cannot overwrite an existing organization name or add itself to an existing organization. The Fabrikam Shipping application uses the organization claim in its authorization and data access management logic (for example, when creating and listing shipments).

 

For partners without their own identity provider, the enrollment

process must also create a new relying party in ACS. The wrapper

class in the ACS.ServiceManagementWrapper project includes an

AddRelyingParty method to perform this operation.

The EnrollmentController class in the f-Shipping.Enrollment.7

project demonstrates how the Fabrikam Shipping application handles

the automated enrollment process.

Each partner without an identity provider still needs a relying party so that Fabrikam Shipping can recognize when the same user is associated with two or more different partner organizations.

Because Fabrikam uses multiple relying parties in ACS to handle the case where a user with a social identity is associated with multiple partners, the sample solution disables checking audience URIs in the Web.config file:

XML
<microsoft.identityModel>
  <service>
    <audienceUris mode="Never">
    </audienceUris>
  </service>
</microsoft.identityModel>

 

Normally, you should not set the audienceUris mode to “Never”

because this introduces a security vulnerability: the correct approach

is to add the audience URIs at run time as Fabrikam Shipping enrolls

new partners. You would also need to share the list of URIs between

the f-Shipping.Enrollment.7 web application and the f-Shipping.7 web

application. Furthermore, to avoid the possibility of one tenant imper-

sonating another, you would use a separate symmetric key for each

tenant. However, as described previously, in this solution ACS adds an

organization claim to the token that it issues that the REST service

can check.
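
As a sketch of that recommended alternative, the application could register each tenant’s audience URI at run time as part of enrollment, rather than disabling the check. The WIF property names used here (ServiceConfiguration and its AudienceRestriction collection) are assumptions to verify against the WIF version used by the sample:

```csharp
// Sketch: allow a newly enrolled tenant's audience URI at run time
// instead of setting the audienceUris mode to "Never".
FederatedAuthentication.ServiceConfiguration
    .AudienceRestriction.AllowedAudienceUris.Add(
        new Uri("https://localhost/f-shipping.7/contoso"));
```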

 

Displaying a List of Partner

Organizations

For the purposes of this sample, the home page at Fabrikam Shipping

displays a list of registered partner organizations. In a real application,

you may not want to make this information public because some

partners may not want other partners to know about their business

relationship with Fabrikam Shipping, so each partner would have their

own landing page.

 


In ACS 2.0 (the current version at the time of this writing), it’s not

possible to keep this information private because ACS publishes a

public feed of all the identity providers associated with each relying

party.

 

For this example, the Fabrikam Shipping application generates the

list of partners from a local store instead of querying ACS. Because

Fabrikam Shipping maintains this data locally, there is no need to

query ACS or use the login page that ACS can generate for you.

 

You can find the address of the feed that contains a list of all the identity providers in the ACS portal, in the “Application integration” section under “Login Page Integration.”

Authenticating a User of Fabrikam Shipping

The Fabrikam Shipping application uses the AuthenticateAndAuthorizeAttribute attribute class to intercept requests, and then asks the WSFederationAuthenticationModule class to handle the authentication and to retrieve the user’s claims from ACS. The AuthenticateUser method builds the redirect URL that passes the WS-Federation parameters to the ACS instance that Fabrikam Shipping uses. The following table describes the parameters that the application passes to ACS.

 

Parameter: wa
Example value: wsignin1.0
Notes: The WS-Federation command.

Parameter: wtrealm
Example value: https://localhost/f-Shipping.7/FederationResult
Notes: The realm value that ACS uses to identify the relying party.

Parameter: wctx
Example value: https://localhost/f-Shipping.7/Contoso
Notes: The return URL to which ACS should post the token with claims.

The Fabrikam Shipping application does not send a whr parameter

identifying the home realm because Fabrikam configures each tenant

in ACS as a relying party with only a single identity provider enabled.

 

The following code example shows the AuthenticateUser

method in the AuthenticateAndAuthorizeAttribute class.

 

private static void AuthenticateUser(AuthorizationContext context)
{
    var organizationName =
        (string)context.RouteData.Values["organization"];

    if (!string.IsNullOrEmpty(organizationName))
    {
        var returnUrl = GetReturnUrl(context.RequestContext);

        // User is not authenticated and is entering for the first time.
        var fam =
            FederatedAuthentication.WSFederationAuthenticationModule;
        var signIn = new SignInRequestMessage(
            new Uri(fam.Issuer), fam.Realm)
        {
            Context = returnUrl.ToString(),
            Realm = string.Format(
                "https://localhost/f-shipping.7/{0}", organizationName)
        };
        context.Result =
            new RedirectResult(signIn.WriteQueryString());
    }
    else
    {
        throw new ArgumentException("Tenant name missing.");
    }
}

 

Authorizing Access to Fabrikam

Shipping Data

The Fabrikam Shipping application uses the same AuthenticateAnd

Authorize attribute to handle authorization. For example, Fabrikam

Shipping only allows members of the Shipment Manager role to

cancel orders. The following code example from the Shipment

Controller class shows how this is declared:

 

[AuthenticateAndAuthorize(Roles = "Shipment Manager")]

[AcceptVerbs(HttpVerbs.Post)]

public ActionResult Cancel(string id)

{

}

 

The AuthorizeUser method in the AuthenticateAndAuthorize

Attribute class determines whether a user has the appropriate

Organization and Role claims:

 

private void AuthorizeUser(AuthorizationContext context)
{
    var organizationRequested =
        (string)context.RouteData.Values["organization"];

    var userOrganization = ClaimHelper
        .GetCurrentUserClaim(Fabrikam.ClaimTypes.Organization).Value;
    if (!organizationRequested.Equals(
        userOrganization, StringComparison.OrdinalIgnoreCase))
    {
        context.Result = new HttpUnauthorizedResult();
        return;
    }

    var authorizedRoles =
        this.Roles.Split(new[] { "," },
            StringSplitOptions.RemoveEmptyEntries);
    bool hasValidRole = false;
    foreach (var role in authorizedRoles)
    {
        if (context.HttpContext.User.IsInRole(role.Trim()))
        {
            hasValidRole = true;
            break;
        }
    }

    if (!hasValidRole)
    {
        context.Result = new HttpUnauthorizedResult();
        return;
    }
}

In a multi-tenant application such as Fabrikam Shipping, the authorization rule that checks the organization claim ensures that a tenant only has access to its own data.

For a discussion of some alternative approaches to authorization that Fabrikam Shipping could have taken, see Appendix G, “Authorization Strategies.”

Setup and Physical Deployment

 

The following sections describe the setup and physical deployment

for the Fabrikam Shipping websites, the simulated claims issuers, and

the initialization of the ACS instance.

 

Fabrikam Shipping Websites

Fabrikam has two separate websites: one for Fabrikam Shipping and one to manage the enrollment process for new partners. This enables Fabrikam to configure the two sites for the different expected usage patterns: Fabrikam expects the usage of the shipping site to be significantly higher than the usage of the enrollment site.

Using two separate sites also circumvents a problem that can occur during the enrollment process for a partner that uses a social identity provider. During the enrollment process, a user must sign in to their social identity provider so that Fabrikam can capture the claim values that prove that user’s identity. The enrollment process then creates the new claims-mapping rules in ACS for the partner. Unless the user running the enrollment process signs out and then signs in again (not a great user experience), they will not get the full set of claims that they require to access the Fabrikam Shipping application.

In the sample application, Fabrikam Shipping maintains a list of registered partner organizations using the Organization and OrganizationRepository classes. The following code sample shows the Organization class:

C#
public class Organization
{
    public string LogoPath { get; set; }
    public string Name { get; set; }
    public string DisplayName { get; set; }
    public string HomeRealm { get; set; }
}

Both the f-Shipping.Enrollment.7 and the f-Shipping.7 web applications need access to this repository, which the sample implements by using a simple file called organizations.txt stored in a folder called SharedData.

The implementation of the enrollment functionality in this sample shows only a basic outline of how you would implement this functionality in a real application.

Sample Claims Issuers

The sample comes with two pre-configured claims issuers that act as identity providers for Adatum and Litware. These simulated issuers illustrate the role that a real issuer, such as ADFS 2.0, would play in this scenario. If you want to experiment and extend the sample by enrolling additional partners with their own identity providers, you will need additional issuers. You can either create your own new STS using the WIF “WCF Security Token Service” template in Visual Studio, using either the Adatum.SimulatedIssuer.7 or Litware.SimulatedIssuer.7 projects as a model to work from, or you can use one of the simple issuers for Northwind or AdventureWorks in the Assets folder for this sample.

These simple issuers use the SelfSTS sample application, which you can read about at http://archive.msdn.microsoft.com/selfsts.

 

Initializing ACS

The sample application includes a set of pre-configured partners for

Fabrikam Shipping, both with and without their own identity provid-

ers. These partners require identity providers, relying parties, and

 


claims-mapping rules in ACS in order to function. The ACS.Setup

project in the solution is a simple console application that you can run

to add the necessary configuration data for the pre-configured part-

ners to your ACS instance. It uses the ACS Management API and the

wrapper classes in the ACS.ServiceManagementWrapper project.

 

You will still need to perform some manual configuration steps; the

ACS Management API does not enable you to create a new service

namespace. You must perform this operation in the ACS manage-

ment portal.

 

Questions

 

1. Why does Fabrikam want to use ACS in the scenario

described in this chapter?

 

a. Because it will simplify Fabrikam’s own internal

infrastructure requirements.

 

b. Because it’s the only way Fabrikam can support users

who want to use a social identity provider for authen-

tication.

 

c. Because it enables users with social identities to

access the Fabrikam Shipping application more easily.

 

d. Because ACS can authenticate users with social

identities.

 

2. In the scenario described in this chapter, why is it necessary

for Fabrikam to configure ACS to trust issuers at partners
such as Adatum and Litware?

 

a. Because Fabrikam does not have its own on-premises

federation provider.

 

b. Because Fabrikam uses ACS for all the claims-mapping

rules that convert claims to a format that Fabrikam

Shipping understands.

 

c. Because partners such as Adatum have some users

who use social identities as their primary method of

authentication.

 

d. Because a relying party such as Fabrikam Shipping can

only use a single federation provider.

 


3. How does Fabrikam Shipping manage home realm discovery

in the scenario described in this chapter?

 

a. Fabrikam Shipping presents unauthenticated users

with a list of federation partners to choose from.

 

b. Fabrikam Shipping prompts unauthenticated users

for their email addresses. It parses each address to

determine which organization the user belongs to.

 

c. ACS manages home realm discovery; Fabrikam

Shipping does not.

 

d. Each partner organization has its own landing

page in Fabrikam Shipping. Visiting that page will

automatically redirect unauthenticated users to

that organization’s identity provider.

 

4. Enrolling a new partner without its own identity provider

requires which of the following steps?

 

a. Updating the list of registered partners stored by

Fabrikam Shipping. This list includes the home realm

of the partner.

 

b. Adding a new identity provider to ACS.

 

c. Adding a new relying party to ACS.

 

d. Adding a new set of claims-mapping rules to ACS.

 

5. Why does Fabrikam use a separate web application to

handle the enrollment process?

 

a. Because the expected usage patterns of the enroll-

ment functionality are very different from the

expected usage patterns of the main Fabrikam

Shipping web site.

 

b. Because using the enrollment functionality does not

require a user to authenticate.

 

c. Because the site that handles enrolling new partners

must also act as a federation provider.

 

d. Because the site that updates ACS with new relying

parties and claims-mapping rules must have a different

identity from sites that only read data from ACS.

 

More Information

 

Appendix E of this guide provides a detailed description of ACS

and its features.

 


8 Claims Enabling Web Services

 

In Chapter 4, “Federated Identity for Web Applications,” you saw

Adatum make the a-Order application available to its partner Litware.

Rick, a salesman from Litware, used his local credentials to log onto

the a-Order website, which was hosted on Adatum’s domain.

To do this, Rick needed only a browser to access the a-Order

website. But what would happen if the request came from an applica-

tion other than a web browser? What if the information supplied by

a-Order was going to be integrated into one of Litware’s in-house
applications?

Federated identity with an active (or “smart”) client application

works differently than federated identity with a web browser. In a

browser-based scenario, the web application requests security tokens

by redirecting the user’s browser to an issuer that produces them.

(This process is shown in the earlier scenarios.) With redirection, the

browser can handle most of the authentication for you. In the active

scenario, the client application actively contacts all issuers in a trust

chain (these issuers are typically an identity provider (IdP) and a federation provider (FP)) to get and transform the required tokens.

Active clients do not need HTTP redirection.

In this chapter, you’ll see an example of a smart client that uses

federated identity. Fortunately, support for Microsoft® Windows®

Communication Foundation (WCF) is a standard feature of the Win-

dows Identity Foundation (WIF). Using WCF and WIF reduces the

amount of code needed to implement both claims-aware web ser-

vices and claims-aware smart clients.

 

The Premise

 

Litware wants to write an application that can read the status of its

orders directly from Adatum. To satisfy this request, Adatum agrees

to provide a web service called a-Order.OrderTracking.Services that

can be called by Litware over the Internet.

 


 

146 chapter eight

 

Adatum and Litware have already done the work necessary to

establish federated identity, and they both have issuers capable of

interacting with active clients. The necessary communications infra-

structure, including firewalls and proxies, is in place. To review these

elements, see Chapter 4, “Federated Identity for Web Applications.”

Now, Adatum only needs to expose a claims-aware web service

on the Internet. Litware will invoke Adatum’s web service from

within its client application. Because the client application runs in

Litware’s security realm, it can use Windows authentication to estab-

lish the identity of the user and then use this identity to obtain

a token it can pass along to Adatum’s federation provider.

If Active Directory® Federation Services (ADFS) 2.0 is used, support

for federated identity with active clients is a standard feature.

Goals and Requirements

Both Litware and Adatum see benefits in a collaboration based on

claims-aware web services. Litware wants programmatic access to

Adatum’s a-Order application. Adatum does not want to be responsible

for authenticating any people or resources that belong to another

security realm. For example, Adatum doesn’t want to keep and

maintain a database of Litware users. (Active clients use claims to

get access to remote services.)

Both Adatum and Litware want to reuse the existing infrastruc-

ture as much as possible. For example, Adatum wants to enforce

permissions for its web service with the same rules it has for the

browser-based web application. In other words, the browser-based

application and the web service will both use roles for access control.

 

Overview of the Solution

 

Figure 1 gives an overview of the proposed system.

 

[Figure: Rick uses a WPF smart client at Litware. (1) The client

requests a token from the Litware issuer (IdP); (2) it forwards that

token to the Adatum issuer (FP), which trusts the Litware issuer and

(3) maps the claims; (4) the client then calls the a-Order order

tracking WCF web service to get orders.]

figure 1

Federated identity with a smart client

 

———————– Page 184———————–

 

claims enabling web services 147

 

The diagram shows an overview of the interactions and relation-

ships among the different components. It is similar to the diagrams

you saw in the previous chapters, except that no HTTP redirection is

involved.

Litware’s client application is based on Windows Presentation

Foundation (WPF) and is deployed on Litware employees’ desktops.

Rick, the salesman at Litware, uses this application to track orders

with Adatum.

Adatum exposes a SOAP web service on the Internet. This web

service is implemented with WCF and uses standard WCF bindings

that allow it to receive Security Assertion Markup Language (SAML)

tokens for authentication and authorization. In order to access this

service, the client must present a security token from Adatum.

The sequence shown in the diagram proceeds as follows:

 

1. Litware’s WPF application uses Rick’s credentials to request

a security token from Litware’s issuer. Litware’s issuer

authenticates Rick and, if the authentication is a success,

returns a Group claim with the value Sales because Rick

is in the sales organization.

 

2. The WPF application then forwards the security token

to Adatum’s issuer, which has been configured to trust

Litware’s issuer.

 

3. Adatum’s issuer, acting as a federation provider, transforms

the claim Group:Sales into Role:Order Tracker and adds a

new claim, Organization:Litware. The transformed claims

are the ones required by Adatum’s web service, a-Order.

OrderTracking.Services. These are the same rules that were

defined in the browser-based scenario.

 

4. Finally, the WPF application sends the web service the

request to return orders. This request includes the security

token obtained in the previous step.

 

This sequence is a bit different from a browser-based web appli-

cation because the smart client application knows the requirements

of the web service in advance and also knows how to acquire the

claims that satisfy the web service’s requirements. The client applica-

tion goes to the identity provider first, the federation provider second,

and then to the web service. The smart client application actively

drives the authentication process.

 

———————– Page 185———————–

 


 

Inside the Implementation

 

Now is a good time to walk through some of the details of the solu-

tion. As you go through this section, you may want to download the

Microsoft Visual Studio® solution, 4ActiveClientFederation, from

http://claimsid.codeplex.com. If you are not interested in the mechan-

ics, you should skip to the next section.

You can implement a claims-based smart client application

using the built-in facilities of WCF, or you can code at a lower

level using the WIF API. The a-Order.OrderTracking web service uses

WCF standard bindings.

 

Implementing the Web Service

The web service’s Web.config file contains the following WCF service

configuration.

 

<services>
  <service
    name="AOrder.OrderTracking.Services.OrderTrackingService"
    behaviorConfiguration="serviceBehavior">
    <endpoint
      address=""
      binding="ws2007FederationHttpBinding"
      bindingConfiguration=
        "WS2007FederationHttpBinding_IOrderTrackingService"
      contract=
        "AOrder.OrderTracking.Contracts.IOrderTrackingService"
    />
    <endpoint address="mex" binding="mexHttpBinding"
      contract="IMetadataExchange" />
  </service>
</services>

 

If your service endpoints support metadata exchange, as a-Order

tracking does, it’s easy for clients to locate services and bind to them

using tools such as Svcutil.exe. However, some manual editing of the

configuration that is auto-generated by the tools will be necessary in

the current example because it involves two issuers: the identity

provider and the federation provider. With only one issuer, the tool

will generate a configuration file that does not need editing.
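For example, a proxy class and a baseline App.config could be generated with a command along these lines (illustrative only; the address follows the {adatum host} placeholder convention used in this chapter, and the output file names are arbitrary):

```shell
# Hypothetical svcutil invocation against the service's mex endpoint.
svcutil.exe http://{adatum host}/a-Order.OrderTracking.Services/OrderTrackingService.svc/mex /out:OrderTrackingProxy.cs /config:App.config
```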

 

The Web.config file contains binding information that matches

the binding information for the client. If they don’t match, an excep-

tion will be thrown.

 

———————– Page 186———————–

 


 

The Web.config file also contains some customizations. The fol-

lowing XML code shows the first customization.

 

<extensions>
  <behaviorExtensions>
    <add name="federatedServiceHostConfiguration"
      type="Microsoft.IdentityModel.Configuration
        .ConfigureServiceHostBehaviorExtensionElement,
        Microsoft.IdentityModel, …" />
  </behaviorExtensions>
</extensions>

 

Adding this behavior extension attaches WIF to the WCF pipe-

line. This allows WIF to verify the security token’s integrity against

the public key. (If you forget to attach WIF, you will see a run-time

exception with a message that says that a service certificate is miss-

ing.)

The service’s Web.config file uses the <microsoft.identityModel>

element to specify the configuration required for the WIF

component. This is shown in the following code example.

 

<microsoft.identityModel>
  <service>
    <issuerNameRegistry
      type=
        "Microsoft.IdentityModel.Tokens.
        ConfigurationBasedIssuerNameRegistry,
        Microsoft.IdentityModel, Version=3.5.0.0,
        Culture=neutral,
        PublicKeyToken=31bf3856ad364e35">
      <trustedIssuers>
        <add
          thumbprint="f260042d59e14817984c6183fbc6bfc71baf5462"
          name="adatum" />
      </trustedIssuers>
    </issuerNameRegistry>
    <audienceUris>
      <add value=
        "http://{adatum host}/a-Order.OrderTracking.Services/
        OrderTrackingService.svc"
      />
    </audienceUris>

 

———————– Page 187———————–

 


 

Because the Adatum issuer will encrypt its security tokens with

the web service’s X.509 certificate, the <service> element of the ser-

vice’s Web.config file also contains information about the web ser-

vice’s private key. This is shown in the following XML code.

 

<serviceCertificate>
  <certificateReference
    findValue="CN=adatum"
    storeLocation="LocalMachine"
    storeName="My"
    x509FindType="FindBySubjectDistinguishedName"/>
</serviceCertificate>

 

Implementing the Active Client

The client application, which acts as the WCF proxy, is responsible for

orchestrating the interactions. You can see this by examining the

client’s App.config file. The following XML code is in the <system.

serviceModel> section.

 

<client>
  <endpoint
    address=
      "http://{adatum host}/a-Order.OrderTracking.Services/
      OrderTrackingService.svc"
    binding="ws2007FederationHttpBinding"
    bindingConfiguration=
      "WS2007FederationHttpBinding_IOrderTrackingService"
    contract="OrderTrackingService.IOrderTrackingService"
    name="WS2007FederationHttpBinding_IOrderTrackingService">
    <identity>
      <dns value="adatum" />
    </identity>
  </endpoint>
</client>

 

The address attribute gives the Uniform Resource Identifier (URI)

of the order tracking service.

The binding attribute, ws2007FederationHttpBinding, indicates

that WCF should use the WS-Trust protocol when it creates the se-

curity context of invocations of the a-Order order tracking service.

The Domain Name System (DNS) value given in the <identity>

section is verified at run time against the service certificate’s subject

name.

The App.config file specifies three nested bindings in the

<bindings> subsection. The following XML code shows the first of

these bindings.

 

———————– Page 188———————–

 


 

<ws2007FederationHttpBinding>
  <binding
    name="WS2007FederationHttpBinding_IOrderTrackingService">
    <security mode="Message">
      <message>
        <issuer
          address="https://{adatum host}/{issuer endpoint}"
          binding="customBinding"
          bindingConfiguration="AdatumIssuerIssuedToken">
        </issuer>
      </message>
    </security>
  </binding>
</ws2007FederationHttpBinding>

 

The issuer address changes depending on how you deploy the sample.

For an issuer running on the local machine, the address attribute of

the <issuer> element will be:

 

https://localhost/Adatum.FederationProvider.4/Issuer.svc

 

For ADFS 2.0, the address will be:

 

https://{adatum host}/Trust/13/IssuedTokenMixedSymmetricBasic256

 

This binding connects the smart client application to the a-Order.

OrderTracking service. Unlike WCF bindings that do not involve

claims, this special claims-aware binding includes a message security

element that specifies the address and binding configuration of the

Adatum issuer. The address attribute represents the active endpoint

of the Adatum issuer. (The message security element identifies the

issuer.)

The nested binding configuration is labeled AdatumIssuerIssued

Token. It is the second binding, as shown here.

 

<customBinding>
  <binding name="AdatumIssuerIssuedToken">
    <security
      authenticationMode="IssuedTokenOverTransport"
      messageSecurityVersion=
        "WSSecurity11WSTrust13WSSecureConversation13
        WSSecurityPolicy12BasicSecurityProfile10"
      >
      <issuedTokenParameters>
        <issuer
          address=
            "https://{litware host}/{issuer endpoint}"

 

———————– Page 189———————–

 


 

          binding="ws2007HttpBinding"
          bindingConfiguration="LitwareIssuerUsernameMixed">
        </issuer>
      </issuedTokenParameters>
    </security>
    <httpsTransport />
  </binding>
</customBinding>

 

The issuer address changes depending on how you deploy the sample.

For an issuer running on the local machine, the address attribute of

the <issuer> element will be:

https://localhost/Litware.SimulatedIssuer.4/Issuer.svc

For ADFS 2.0, the address will be:

https://{litware host}/Trust/13/UsernameMixed

The federation binding in the Microsoft .NET Framework 3.5 provides

no way to turn off a secure conversation. (This feature is available

in version 4.0.) Because ADFS 2.0 endpoints have secure conversation

disabled, this example needs a custom binding.

The AdatumIssuerIssuedToken binding configures the connection

to the Adatum issuer that will act as the federation provider in

this scenario.

The <security> element specifies that the binding uses WS-Trust.

This binding also nests the URI of the Litware issuer, and for this rea-

son, it is sometimes known as a federation binding . The binding speci-

fies that the binding configuration labeled LitwareIssuerUsername

Mixed is used for the Litware issuer that acts as the identity provider.

The following XML code shows this.

 

<ws2007HttpBinding>
  <binding name="LitwareIssuerUsernameMixed">
    <security mode="TransportWithMessageCredential">
      <message
        clientCredentialType="UserName"
        establishSecurityContext="false"
      />
    </security>
  </binding>
</ws2007HttpBinding>

 

This binding connects to the Litware issuer that acts as an identity

provider. This is a standard WCF HTTP binding because it transmits

user credentials to the Litware issuer.

 

In a production scenario, the configuration should be changed

to clientCredentialType=”Windows” to use Windows

authentication. For simplicity, this sample uses UserName

credentials. You may want to consider using other options in

a production environment.

 

———————– Page 190———————–

 


 

When the active client starts, it must provide credentials. If the

configured credential type is UserName, a UserName property must

be set. This is shown in the following code.

 

private void ShowOrders()
{
  var client =
    new OrderTrackingService.OrderTrackingServiceClient();

  client.ClientCredentials.UserName.UserName = "LITWARE\\rick";
  client.ClientCredentials.UserName.Password =
    "thisPasswordIsNotChecked";

  var orders = client.GetOrdersFromMyOrganization();

  this.DisplayView(new OrderTrackingView()
  {
    DataContext =
      new OrderTrackingViewModel(orders)
  });
}

Using the WIF WSTrustChannel gives you more control, but it requires

a deeper understanding of WS-Trust.

 

This step would not be necessary if the application were deployed

in a production environment because it would probably use Windows

authentication.

 

WCF federation bindings can handle the negotiations between the

active client and the issuers without additional code. You can achieve

the same results with calls to the WIF WSTrustChannel class.
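As an illustration, the following sketch shows roughly how the first token request could be made explicitly with the WSTrustChannel API from WIF 3.5 (Microsoft.IdentityModel.Protocols.WSTrust). The addresses are the same placeholders used in App.config, and the exact class and constant names should be checked against the WIF documentation.

```csharp
// Sketch only: driving the WS-Trust exchange by hand instead of
// relying on the federation binding. Addresses are placeholders.
using System.IdentityModel.Tokens;
using System.ServiceModel;
using System.ServiceModel.Security;
using Microsoft.IdentityModel.Protocols.WSTrust;
using Microsoft.IdentityModel.Protocols.WSTrust.Bindings;

// Step 1: ask the Litware issuer (the identity provider) for a token.
var idpFactory = new WSTrustChannelFactory(
    new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
    "https://{litware host}/{issuer endpoint}");
idpFactory.TrustVersion = TrustVersion.WSTrust13;
idpFactory.Credentials.UserName.UserName = "LITWARE\\rick";
idpFactory.Credentials.UserName.Password = "thisPasswordIsNotChecked";

var rst = new RequestSecurityToken(WSTrust13Constants.RequestTypes.Issue)
{
    // The token is scoped to the Adatum issuer (the federation provider).
    AppliesTo = new EndpointAddress("https://{adatum host}/{issuer endpoint}")
};

var channel = (WSTrustChannel)idpFactory.CreateChannel();
SecurityToken identityToken = channel.Issue(rst);

// Step 2 (not shown): send identityToken to the Adatum issuer to get a
// token scoped to the order tracking service, and attach the result to
// the service channel (for example, with CreateChannelWithIssuedToken).
```

This buys explicit control over each leg of the trust chain at the cost of re-implementing what the federation binding does for you.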

 

Implementing the Authorization

Strategy

The Adatum web service implements its authorization strategy in the

SimpleClaimsAuthorizationManager class. The service’s Web.config

file contains a reference to this class in the

<claimsAuthorizationManager> element. (A claims authorization

manager determines which methods can be called by the current user.)

 

<claimsAuthorizationManager
  type="AOrder.OrderTracking.Services.
    SimpleClaimsAuthorizationManager,
    AOrder.OrderTracking.Services" />

 

Adding this service extension causes WCF to invoke the Check

Access method of the specified class for authorization. This occurs

before the service operation is called.

The implementation of the SimpleClaimsAuthorizationManager

class is shown in the following code.

 

———————– Page 191———————–

 


 

public class SimpleClaimsAuthorizationManager :
  ClaimsAuthorizationManager
{
  public override bool CheckAccess(AuthorizationContext context)
  {
    return context.Principal.IsInRole(Adatum.Roles.OrderTracker);
  }
}

 

WIF provides the base class, ClaimsAuthorizationManager.

Applications derive from this class in order to specify their own ways

of checking whether an authenticated user should be allowed to call

the web service methods.

The CheckAccess method in the a-Order order tracking service

ensures that the caller of any of the service’s methods must have a

role claim with the value Adatum.Roles.OrderTracker, which is de-

fined in the Samples.Web.ClaimsUtilities project elsewhere as the

string, “Order Tracker.”
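The sample applies one rule to every operation. As a purely hypothetical variation (not part of the sample), a manager could also key the decision on the WCF action being invoked; the action URI suffix below is illustrative.

```csharp
// Hypothetical variation, not part of the sample: combine the role
// check with a per-operation check based on the action claims that
// WCF places in the AuthorizationContext.
using System.Linq;
using Microsoft.IdentityModel.Claims;

public class PerOperationAuthorizationManager : ClaimsAuthorizationManager
{
    public override bool CheckAccess(AuthorizationContext context)
    {
        bool isOrderTracker =
            context.Principal.IsInRole(Adatum.Roles.OrderTracker);

        // context.Action holds claims whose values identify the
        // operation being invoked; the suffix here is illustrative.
        bool isReadOperation = context.Action.Any(claim =>
            claim.Value.EndsWith("/GetOrdersFromMyOrganization"));

        return isOrderTracker && isReadOperation;
    }
}
```

A policy like this keeps authorization logic out of the service methods themselves.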

In this scenario, the Litware issuer, acting as an identity provider,

issues a Group claim that identifies the salesman Rick as being in the

Litware sales organization (value=Sales). The Adatum issuer, acting as

a federation provider, transforms the security token it receives from

Litware. One of its transformation rules adds the role, Order Tracker,

to any Litware employee with a group claim value of Sales. The order

tracking service receives the transformed token and grants access to

the service.

 

Debugging the Application

The configuration files for the client and the web service in this

sample include settings to enable tracing and debugging messages. By

default, they are commented out so that they are not active.

If you uncomment them, make sure you update the <sharedListeners>

section so that log files are generated where you can find

them and in a location where the application has write permissions.

Here is the XML code.

 

<sharedListeners>
  <add
    initializeData="c:\temp\WCF-service.svclog"
    type="System.Diagnostics.XmlWriterTraceListener"
    name="xml">
    <filter type="" />
  </add>
  <add
    initializeData="c:\temp\wcf-service-msvg.svclog"

 

———————– Page 192———————–

 


 

    type="System.Diagnostics.XmlWriterTraceListener, System,
      Version=2.0.0.0, Culture=neutral,
      PublicKeyToken=b77a5c561934e089"
    name="ServiceModelMessageLoggingListener"
    traceOutputOptions="Timestamp">
    <filter type="" />
  </add>
</sharedListeners>

 

Setup and Physical Deployment

 

By default, the web service uses the local host for all components. In

a production environment, you would want to use separate comput-

ers for the client, the web service, the federation provider, and the

identity provider.

To deploy this application, you must substitute the mock issuer

with a production-grade component such as ADFS 2.0 that supports

active clients. You must also adjust the Web.config and App.config

settings to account for the new server names by changing the issuer

addresses. (Remove the mock issuer during deployment.)

Note that neither the client nor the web service needs to be

recompiled to be deployed to a production environment. All of the

necessary changes are in the respective .config files.

 

Configuring ADFS 2.0 for Web Services

In the case of ADFS 2.0, you enable the endpoints using the Microsoft

Management Console (MMC).

To obtain a token from Litware, the UsernameMixed or

WindowsMixed endpoint could be used. UsernameMixed requires a user name

and password to be sent across the wire, while WindowsMixed

works with the Windows credentials. Both endpoints will return a

SAML token.

 

The “Mixed” suffix indicates that the endpoint uses transport

security (based on HTTPS) for integrity and confidentiality; client

credentials are included in the header of the SOAP message.

 

To obtain a token from Adatum, the endpoint used is

IssuedTokenMixedSymmetricBasic256. This endpoint accepts a SAML token

as an input and returns a SAML token as an output. It also uses trans-

port and message security.

In addition, Litware and Adatum must establish a trust relation-

ship. Litware must configure Adatum ADFS as a relying party (RP)

and create rules to generate a token based on Lightweight Directory

 

———————– Page 193———————–

 


 

Access Protocol (LDAP) Active Directory attributes. Adatum must

configure Litware ADFS as an identity provider and create rules to

transform the group claims into role claims.
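As a rough illustration, a transformation rule of the kind Adatum would define might look like this in the ADFS 2.0 claim rule language (the claim type URIs and values shown are only indicative):

```
c:[Type == "http://schemas.xmlsoap.org/claims/Group", Value == "Sales"]
  => issue(
       Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
       Value = "Order Tracker");
```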

Finally, Adatum must configure the a-Order web service as a rely-

ing party. Adatum must enable token encryption and create rules that

pass role and name claims through.

 

Questions

 

1. Which statements describe the difference between the way

federated identity works for an active client as compared to

a passive client?

 

a. An active client uses HTTP redirects to ask each token

issuer in turn to process a set of claims.

 

b. A passive client receives HTTP redirects from a web

application that redirect it to each issuer in turn to

obtain a set of claims.

 

c. An active client generates tokens to send to claims

issuers.

 

d. A passive client generates tokens to send to claims

issuers.

 

2. A difference in behavior between an active client and a

passive client is:

 

a. An active client visits the relying party first; a passive

client visits the identity provider first.

 

b. An active client does not need to visit a federation

provider because it can perform any necessary claims

transformations by itself.

 

c. A passive client visits the relying party first; an active

client visits the identity provider first.

 

d. An active client must visit a federation provider first

to determine the identity provider it should use.

Passive clients rely on home realm discovery to

determine the identity provider to use.

 

———————– Page 194———————–

 


 

3. The active scenario described in this chapter uses which

protocol to handle the exchange of tokens between the

various parties?

 

a. WS-Trust

 

b. WS-Transactions

 

c. WS-Federation

 

d. ADFS

 

4. In the scenario described in this chapter, it’s necessary to

edit the client application’s configuration file manually,

because the Svcutil.exe tool only adds a binding for a single

issuer. Why do you need to configure multiple issuers?

 

a. The metadata from the relying party only includes

details of the Adatum identity provider.

 

b. The metadata from the relying party only includes

details of the client application’s identity provider.

 

c. The metadata from the relying party only includes

details of the client application’s federation provider.

 

d. The metadata from the relying party only includes

details of the Adatum federation provider.

 

5. The WCF service at Adatum performs authorization checks

on the requests that it receives from client applications.

How does it implement the checks?

 

a. The WCF service uses the IsInRole method to verify

that the caller is a member of the OrderTracker role.

 

b. The Adatum federation provider transforms claims

from other identity providers into Role type claims

with a value of OrderTracker.

 

c. The WCF service queries the Adatum federation

provider to determine whether a user is in the Order

Tracker role.

 

d. It does not need to implement any authorization

checks. The application automatically grants access

to anyone who has successfully authenticated.

 

———————– Page 195———————–

 

 

———————– Page 196———————–

 

9 Securing REST Services

 

In Chapter 8, “Claims Enabling Web Services,” you saw how Adatum

exposed a SOAP-based web service to a client application. The client

used the WS-Trust active federation protocol to obtain a token con-

taining the claims that it needed to access the web service. The sce-

nario that this chapter describes is similar, but differs in that the web

service is REST-based rather than SOAP-based. The client must now

send a Simple Web Token (SWT) containing the claims to the web

service using the OAuth protocol instead of a SAML token using the

WS-Trust protocol. The client will obtain an SWT token from Win-

dows Azure™ AppFabric Access Control services (ACS) v2.

Like Chapter 8, “Claims Enabling Web Services,” this chapter de-

scribes an active scenario. In an active scenario, the client application

actively contacts all issuers in a trust chain; these issuers are typically

an identity provider (IdP) and a federation provider (FP). The client The client application must

application communicates with the identity provider and federation actively call all the issuers

provider to get and transform the tokens that it requires to access the in the trust chain.

relying party (RP) application.

In this chapter, you’ll see an example of a Windows® Presentation

Foundation (WPF) smart client application that uses federated iden-

tity. In Chapter 8, “Claims Enabling Web Services,” the Windows

Communication Foundation (WCF) bindings determined how the

client application called the issuers in the trust chain; in this chapter,

you’ll see how the client must call the identity provider and federation

provider programmatically because WCF does not provide built-in

support for calling RESTful web services.

 

The Premise

 

Litware wants to write an application that can read the status of its

orders directly from Adatum. To satisfy this request, Adatum agrees

to provide a web service called a-Order.OrderTracking.Services that

 

159

 

———————– Page 197———————–

 

160 chapter nine

 

users at Litware can access by using a variety of client applications

over the Internet.

Adatum and Litware have already done the work necessary to

establish federated identity, and they both have issuers capable of

interacting with active clients. The necessary communications infra-

structure, which includes firewalls and proxies, is in place. To review

these elements, see Chapter 4, “Federated Identity for Web Applica-

tions.”

Now, Adatum only needs to expose a claims-aware web service

on the Internet. Litware will invoke Adatum’s web service from

within its client application. Because the client application runs in

Litware’s security realm, it can use Microsoft® Windows® authentica-

tion to establish the identity of the user and then use this identity to

obtain a token it can pass along to Adatum’s federation provider. In

this scenario Adatum uses ACS as its federation provider.

If Active Directory® Federation Services (ADFS) 2.0 is used, you’ll

get support for federated identity with active clients as a standard

feature.

 

Goals and Requirements

 

Both Litware and Adatum see benefits in a collaboration based on

claims-aware web services. Litware wants programmatic access to

Adatum’s a-Order application. Adatum does not want to be respon-

sible for authenticating any people or resources that belong to

another security realm. For example, Adatum doesn’t want to keep

and maintain a database of Litware users. (Active clients use claims

to get access to remote services.)

Both Adatum and Litware want to reuse the existing infrastruc-

ture as much as possible. For example, Adatum wants to enforce

permissions for its web service with the same rules it has for the

browser-based web application. In other words, the browser-based

application and the web service will both use roles for access control.

Adatum has decided to expose the a-Order order tracking data as

a RESTful web service to expand the range of clients that can access

the application. Adatum anticipates that partners will implement cli-

ent applications on mobile platforms; in these environments partners

will prefer a lightweight REST API to a SOAP-based API.

 

SWT tokens are smaller than SAML

tokens because they do not include any

XML markup. It is also much easier to

manipulate SWT tokens in JavaScript,

making SWT the preferred token

format for rich JavaScript clients.
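An SWT token is simply a set of HTML-form-encoded name/value pairs whose final HMACSHA256 pair signs everything before it with a key shared between ACS and the service. The following sketch (not the sample’s DPE.OAuth code, which performs the complete validation) shows the general shape of the signature check.

```csharp
// Minimal sketch of SWT signature validation; a real implementation
// must also check the token's Issuer, Audience, and ExpiresOn values.
using System;
using System.Security.Cryptography;
using System.Text;

public static class SwtValidator
{
    public static bool IsSignatureValid(string token, byte[] key)
    {
        const string SignaturePrefix = "&HMACSHA256=";
        int index = token.LastIndexOf(SignaturePrefix, StringComparison.Ordinal);
        if (index < 0)
        {
            return false;
        }

        // The signature covers every pair that precedes it.
        string signedPart = token.Substring(0, index);
        string signature = Uri.UnescapeDataString(
            token.Substring(index + SignaturePrefix.Length));

        using (var hmac = new HMACSHA256(key))
        {
            byte[] hash = hmac.ComputeHash(Encoding.ASCII.GetBytes(signedPart));
            return Convert.ToBase64String(hash) == signature;
        }
    }
}
```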

 

———————– Page 198———————–

 

securing rest services 161

 

Overview of the Solution

 

Figure 1 gives an overview of the proposed solution.

 

[Figure: Rick uses a WPF smart client at Litware. (1) The client gets

a SAML token from the Litware identity provider (ADFS 2.0) and (2)

sends it to ACS, the Adatum federation provider, which (3) maps the

claims and returns an SWT token. (4) The client then calls the

a-Order WCF service with the SWT token. ACS trusts the Litware

issuer, and the a-Order service trusts ACS.]

 

figure 1

Federated identity with a smart client

 

The diagram presents an overview of the interactions and rela-

tionships among the different components. It is similar to the diagrams

you saw in the previous chapters.

Litware has a single client application based on Windows Presen-

tation Foundation (WPF) deployed on Litware employees’ desktops.

Rick, a Litware employee, uses this application to track orders with

Adatum.

Adatum exposes a RESTful web service on the Internet. This web

service expects to receive Simple Web Token (SWT) tokens that it

will use to implement authorization rules in the a-Order application.

In order to access this service, the client must present an SWT token

from the Adatum ACS instance.

The sequence shown in the diagram proceeds as follows:

 

1. The Litware WPF application uses Rick’s credentials to

request a security token from the Litware issuer. The

Litware issuer authenticates Rick and, if the authentication

succeeds, it returns a Group claim with the value Sales

because Rick is in the sales organization. The Litware issuer

returns a SAML token to the client application.

 

———————– Page 199———————–

 


 

2. The WPF application then forwards the SAML token to

ACS (the Adatum federation provider), which trusts the

Litware issuer.

 

3. ACS, acting as a federation provider, transforms the claim

Group:Sales into Role:Sales and adds a new claim,

Organization:Litware. The transformed claims are the ones

required by the Adatum a-Order RESTful web service.

These are the same rules that were defined in the browser-

based scenario. ACS also transitions the incoming SAML

token to an SWT token that it returns to the client WPF

application. The interaction between the client application

and ACS uses the OAuth protocol. (It’s also possible to wrap SWT

tokens in the WS-Trust and WS-Federation protocols by using a

BinarySecurityTokenElement.)

4. Finally, the WPF application sends the web service the

request for the order tracking data. This request includes

the SWT token obtained in the previous step. The web

service uses the claims in the token to implement its

authorization rules.

This sequence is a bit different from the scenario described in

Chapter 8, “Claims Enabling Web Services.” In this scenario, the fed-

eration provider is an ACS instance that performs token format tran-

sition from SAML to SWT in addition to mapping the claims from the

identity provider into claims that the relying party expects to see.

 

Inside the Implementation

 

Now is a good time to walk through some of the details of the solu-

tion. As you go through this section, you may want to download the

Visual Studio® development system solution called 8ActiveRestCli-

entFederation from http://claimsid.codeplex.com. If you are not in-

terested in the mechanics, you should skip to the next section.

WCF does not provide built-in support for REST on the client or

for SWT on the server so this sample requires more code than you

saw in Chapter 8, “Claims Enabling Web Services.”

The following sections describe some of the key parts of the im-

plementation of the active client, the RESTful web service, and ACS.

 

The ACS Configuration

In this scenario, in addition to handling the claims mapping rules, ACS

is also responsible for transitioning the incoming token from the Lit-

ware identity provider from the SAML format to the SWT format.

This is partially a configuration task, but the active client application

must be able to receive an SWT token from ACS. For more details, see

the section, “Implementing the Active Client,” later in this chapter.

 

———————– Page 200———————–

 


 

The configuration step in ACS is to ensure that the token format

for the aOrderService relying party is set to SWT. This makes sure

that ACS issues an SWT token when it receives a token from any of

the identity providers configured for the aOrderService relying party.

 

Implementing the Web Service

In this scenario, Adatum exposes the order-tracking feature of the a-

Order application as a RESTful web service. The following snippet

from the Web.config file shows how the application defines the HTTP

endpoint for the service.

 

<services>
  <service name=
    "AOrder.OrderTracking.Services.OrderTrackingService"
    behaviorConfiguration="serviceBehavior">
    <endpoint
      address=""
      binding="webHttpBinding"
      contract=
        "AOrder.OrderTracking.Contracts.IOrderTrackingService"
      behaviorConfiguration="orders" />
  </service>
</services>
<behaviors>
  <serviceBehaviors>
    <behavior name="serviceBehavior">
      <serviceDebug includeExceptionDetailInFaults="true" />
      <serviceMetadata httpGetEnabled="true" />
    </behavior>
  </serviceBehaviors>
  <endpointBehaviors>
    <behavior name="orders">
      <webHttp />
    </behavior>
  </endpointBehaviors>
</behaviors>

In this scenario, the web service does not use Windows Identity

Foundation (WIF) to handle the incoming tokens. However, the service

does use WIF for some claims processing; for example, it uses it in

the CustomClaimsAuthorizationManager class. You will see the details

in the microsoft.identityModel section in the Web.config file.

 

The Global.asax file contains code to route requests to the service definition. The following code sample from the Global.asax.cs file shows the routing definition in the service.

 

protected void Application_Start(object sender, EventArgs e)
{
  RouteTable.Routes.Add(new ServiceRoute("orders",
    new WebServiceHostFactory(), typeof(OrderTrackingService)));
}

 


 


 

The Adatum a-Order application must also extract the claims information from the incoming SWT token. The application uses the claims to determine the identity of the caller and the roles that the caller is a member of in order to apply the authorization rules in the application. The following code sample from the OrderTrackingService class shows how the GetOrdersFromMyOrganization method retrieves the current user’s organization claim to use when it fetches a list of orders from the order repository.

 

public Order[] GetOrdersFromMyOrganization()
{
  string organization = ClaimHelper.GetClaimsFromPrincipal(
    HttpContext.Current.User,
    Adatum.ClaimTypes.Organization).Value;
  var repository = new OrderRepository();
  return repository.GetOrdersByCompanyName(organization).ToArray();
}

 

This method retrieves a claim value from the IClaimsPrincipal object. In the scenarios described in previous chapters, WIF has been responsible for populating the IClaimsPrincipal object with claims from a SAML token; in the current scenario, we are using SWT tokens and the OAuth protocol, which are not directly supported by WIF. The Visual Studio solution, 8ActiveRestClientFederation, includes a project called DPE.OAuth that implements an extension to WIF to provide support for SWT tokens and the OAuth protocol.
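An SWT token is simply a set of form-encoded name/value pairs (the claims, plus reserved parameters such as Issuer, Audience, and ExpiresOn) terminated by an HMACSHA256 signature computed over the rest of the token with the shared key. The following sketch, in Python purely for illustration, shows the kind of checks a token handler such as SimpleWebTokenHandler must perform; it is not the sample’s code, and the parameter handling here is a simplified reading of the SWT format:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import parse_qsl, unquote


def validate_swt(token, key_base64, audience):
    """Validate a Simple Web Token: check the HMACSHA256 signature,
    the Audience, and the ExpiresOn timestamp. Returns the claims as
    a dict on success, or None on any failure."""
    # The signature is carried as the final "&HMACSHA256=..." pair;
    # everything before it is what was signed.
    unsigned, sep, signature = token.rpartition("&HMACSHA256=")
    if not sep:
        return None  # no signature present
    key = base64.b64decode(key_base64)
    expected = base64.b64encode(
        hmac.new(key, unsigned.encode("utf-8"), hashlib.sha256).digest()
    ).decode("ascii")
    # The signature value is URL-encoded inside the token.
    if not hmac.compare_digest(expected, unquote(signature)):
        return None  # token was not signed with the shared key
    claims = dict(parse_qsl(unsigned))
    if claims.get("Audience") != audience:
        return None  # token was issued for a different relying party
    if int(claims.get("ExpiresOn", "0")) < time.time():
        return None  # token has expired
    return claims
```

A handler that accepts the token only after the signature, audience, and expiry checks pass can then hand the remaining pairs to the application as claims.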

The following snippet from the Web.config file in the a-Order.OrderTracking.Services.8 project shows how Adatum installed the modules for the extension to WIF.

 

In addition to the extension module, Microsoft.Samples.DPE.OAuth.ProtectedResource.ProtectedResourceModule, it’s necessary to install the standard WSFederationAuthenticationModule and SessionAuthenticationModule modules.

 

<configSections>
  <section name="microsoft.identityModel"
    type="Microsoft.IdentityModel.Configuration.MicrosoftIdentityModelSection,
      Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral,
      PublicKeyToken=31bf3856ad364e35" />
</configSections>

 


 


 

<system.webServer>
  <validation validateIntegratedModeConfiguration="false" />
  <modules runAllManagedModulesForAllRequests="true">
    <add name="UrlRoutingModule"
      type="System.Web.Routing.UrlRoutingModule,
        System.Web, Version=4.0.0.0, Culture=neutral,
        PublicKeyToken=b03f5f7f11d50a3a" />
    <add name="ProtectedResourceModule"
      type="Microsoft.Samples.DPE.OAuth.ProtectedResource.ProtectedResourceModule,
        Microsoft.Samples.DPE.OAuth, Version=1.0.0.0, Culture=neutral" />
    <add name="WSFederationAuthenticationModule"
      type="Microsoft.IdentityModel.Web.WSFederationAuthenticationModule,
        Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral,
        PublicKeyToken=31bf3856ad364e35"
      preCondition="managedHandler" />
    <add name="SessionAuthenticationModule"
      type="Microsoft.IdentityModel.Web.SessionAuthenticationModule,
        Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral,
        PublicKeyToken=31bf3856ad364e35"
      preCondition="managedHandler" />
  </modules>
</system.webServer>

 

You use the microsoft.identityModel section to configure the

extension module to handle SWT tokens and the OAuth protocol.

 

<microsoft.identityModel>
  <service name="OAuth">
    <audienceUris>
      <add value="https://localhost/a-Order.OrderTracking.Services.8" />
    </audienceUris>
    <claimsAuthorizationManager
      type="AOrder.OrderTracking.Services.CustomClaimsAuthorizationManager,
        AOrder.OrderTracking.Services.8, Culture=neutral" />
    <securityTokenHandlers>
      <add type="Microsoft.Samples.DPE.OAuth.Tokens.SimpleWebTokenHandler,
        Microsoft.Samples.DPE.OAuth" />
    </securityTokenHandlers>
    <issuerTokenResolver
      type="Microsoft.Samples.DPE.OAuth.ProtectedResource.ConfigurationBasedIssuerTokenResolver,
        Microsoft.Samples.DPE.OAuth">
      <serviceKeys>
        <add serviceName="https://localhost/a-Order.OrderTracking.Services.8"
          serviceKey="lJFL02dwy9n3rCe2YEToblDFHdZmbecmFK1QB88ax7U=" />
      </serviceKeys>
    </issuerTokenResolver>
    <issuerNameRegistry
      type="Microsoft.Samples.DPE.OAuth.ProtectedResource.SimpleWebTokenTrustedIssuersRegistry,
        Microsoft.Samples.DPE.OAuth">
      <trustedIssuers>
        <add issuerIdentifier="https://aorderrest-dev.accesscontrol.windows.net/"
          name="aOrder" />
      </trustedIssuers>
    </issuerNameRegistry>
  </service>
</microsoft.identityModel>

 

This section also configures a custom claims authorization manager that Adatum uses to apply custom authorization rules in the service. The following code example shows how the service implements the custom claims authorization manager class that checks the caller’s role membership and the resource the caller is requesting. The IOrderTrackingService interface defines the mapping from the paths “/all” and “/frommyorganization” to the service methods GetAllOrders and GetOrdersFromMyOrganization.

 

public class CustomClaimsAuthorizationManager :
  ClaimsAuthorizationManager
{
  public override bool CheckAccess(AuthorizationContext context)
  {
    Claim actionClaim =
      context.Action.Where(x => x.ClaimType == ClaimTypes.Name).
      FirstOrDefault();
    Claim resourceClaim =
      context.Resource.Where(x => x.ClaimType == ClaimTypes.Name).
      FirstOrDefault();

    IClaimsPrincipal principal = context.Principal;

    var resource = new Uri(resourceClaim.Value);
    string action = actionClaim.Value;

    if (action == "GET" && resource.PathAndQuery.Contains(
      "/frommyorganization"))
    {
      if (!principal.IsInRole(Adatum.Roles.OrderTracker))
      {
        return false;
      }
    }

    if (action == "GET" && resource.PathAndQuery.Contains("/all"))
    {
      if (!principal.IsInRole(Adatum.Roles.OrderApprover))
      {
        return false;
      }
    }

    return true;
  }
}

You can also use a custom ClaimsAuthenticationManager class to modify the set of claims attached to the IClaimsPrincipal object in the context.

To find out more about authorization strategies, take a look at Appendix G, “Authorization Strategies.”

 

Implementing the Active Client

The ACS configuration ensures that the token format for the Adatum a-Order relying party application is set to SWT. ACS issues an SWT token when it receives a token from any of the identity providers configured for the Adatum a-Order relying party (the client obtains the token from the identity provider and sends it to ACS). The client application uses a custom endpoint behavior to intercept all outgoing requests; the behavior obtains the token that the relying party requires and attaches it to the request. Figure 2 shows an overview of this process.

[Figure 2: the Litware WPF client (OrderTrackingViewModel, OrderTrackingServiceClient, and CustomHeaderMessageInspector) interacting with the Litware IP, the Adatum ACS instance (FP), and the Adatum a-Order Tracking Services RESTful web service.]

figure 2
Attaching an SWT token to the outgoing request

 

The sequence shown in Figure 2 proceeds as follows.

1. The service client, the OrderTrackingServiceClient class, attaches a new behavior to the channel endpoint. This CustomHeaderBehavior behavior class instantiates a custom message inspector that has access to every outgoing request on the channel.

2. The client application invokes the GetOrdersForMyOrganization method that sends a request to the a-Order order tracking service.

3. The CustomHeaderMessageInspector class intercepts the message before it is sent.

4. The CustomHeaderMessageInspector class requests a SAML token from the Litware identity provider.

5. The CustomHeaderMessageInspector class sends the SAML token to ACS and receives an SWT token.

6. The CustomHeaderMessageInspector class attaches the SWT token to the outgoing message header.

The inspector caches the SWT token to avoid having to revisit the identity provider and ACS for every request to the a-Order application. The sample caches the token for 30 seconds, but you should adjust this to a suitable value for your application.

 


 


 

Adatum chose to use WCF in the client to manage the call to the REST-based service rather than the WebClient or HttpWebRequest classes because it was a convenient way to attach the SWT token. For an example that uses the HttpWebRequest class (because WCF is not available on the Windows® Phone 7 platform), see Chapter 10, “Accessing REST Services from a Windows Phone Device.”

 

Although WIF does not provide full support for REST-based web services, the sample client application uses WIF to handle some of the token processing. This reduces the amount of code required to implement this sample client application. One of the reasons for using a RESTful web service is to support other client environments, and Chapter 10, “Accessing REST Services from a Windows Phone Device,” shows you how to implement a client application without using WIF.

The inspector must first obtain a SAML token from the identity provider. The following code example from the CustomHeaderMessageInspector class shows how the a-Order.OrderTracking.Client application uses WIF to perform this task. This method takes three arguments: the service endpoint, the STS endpoint, and the user’s credentials.

 

private static SecurityToken GetSamlToken(
  string realm, string stsEndpoint, ClientCredentials clientCredentials)
{
  using (var factory = new WSTrustChannelFactory(
    new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
    new EndpointAddress(new Uri(stsEndpoint))))
  {
    factory.Credentials.UserName.UserName =
      clientCredentials.UserName.UserName;
    factory.Credentials.UserName.Password =
      clientCredentials.UserName.Password;

    factory.TrustVersion = TrustVersion.WSTrust13;

    WSTrustChannel channel = null;

    try
    {
      var rst = new RequestSecurityToken
      {
        RequestType = WSTrust13Constants.RequestTypes.Issue,
        AppliesTo = new EndpointAddress(realm),
        KeyType = KeyTypes.Bearer,
      };

      channel = (WSTrustChannel)factory.CreateChannel();

      return channel.Issue(rst);
    }
    finally
    {
      if (channel != null)
      {
        channel.Abort();
      }

      factory.Abort();
    }
  }
}

 

The token request specifies a bearer token; ACS expects to receive a bearer token and not a holder-of-key token. For this reason it’s important to use Secure Sockets Layer (SSL) to secure the connections between the client application and the identity provider, and between the client application and ACS in order to mitigate the threat of a man-in-the-middle attack.

 

The inspector can then send the SAML token to ACS. The following code example from the CustomHeaderMessageInspector class shows how the client application sends the SAML token to ACS and receives the SWT token in return. The application uses the OAuth protocol to communicate with ACS.

 

private static NameValueCollection GetOAuthToken(
  string xmlSamlToken, string serviceEndpoint, string acsRelyingParty)
{
  var values = new NameValueCollection
  {
    { "grant_type", "urn:oasis:names:tc:SAML:2.0:assertion" },
    { "assertion", xmlSamlToken },
    { "scope", acsRelyingParty }
  };
  var client = new WebClient { BaseAddress = serviceEndpoint };

  byte[] acsTokenResponse = client.UploadValues("v2/Oauth2-13",
    "POST", values);
  string acsToken = Encoding.UTF8.GetString(acsTokenResponse);
  var tokens = new NameValueCollection();
  var json = new JavaScriptSerializer();
  var parsed = json.DeserializeObject(acsToken) as
    Dictionary<string, object>;

  foreach (var item in parsed)
  {
    tokens.Add(item.Key, item.Value.ToString());
  }

  return tokens;
}

 

The inspector attaches the SWT token to the Authorization header in the HTTP request message that the client application is sending to the a-Order order tracking service. The following code example shows how the client application performs this task in the BeforeSendRequest method.

 

var oauthAuthorizationHeader =
  string.Format("OAuth {0}", oauthToken["access_token"]);
httpRequestMessageProperty.Headers.Add(
  HttpRequestHeader.Authorization, oauthAuthorizationHeader);

 

The SWT token expiry time is accessible in the response from ACS, and the code in the sample checks the expiry time on the SWT token before attaching it to the outgoing request. With a SAML token, the expiry time is in the token (not part of the response); if the issuer encrypts the SAML token, the client application may not have access to the contents of this token. In this solution, the client application simply forwards the SAML token on to ACS.

You can read the expiry time of a SAML token using the following code:

 

var rst = new RequestSecurityToken

{

RequestType = WSTrust13Constants.RequestTypes.Issue,

AppliesTo = new EndpointAddress(realm),

KeyType = KeyTypes.Bearer,

};

 

channel = (WSTrustChannel)factory.CreateChannel();

RequestSecurityTokenResponse response;

var token = channel.Issue(rst, out response);

var expires = response.Lifetime.Expires.Value;
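For a cached SWT token, the corresponding check can read the token’s own ExpiresOn parameter, which is expressed in seconds since January 1, 1970. The sample performs the equivalent check in C# before attaching a cached token; the following Python sketch is only an illustration of the logic, with the 30-second margin as an assumed skew allowance:

```python
import time
from urllib.parse import parse_qsl


def swt_is_still_valid(swt, skew_seconds=30):
    """Return True if a cached SWT token is still usable.
    ExpiresOn is the SWT expiry parameter, in seconds since
    January 1, 1970; allow a margin for clock skew so a token
    about to expire is not attached to an outgoing request."""
    claims = dict(parse_qsl(swt))
    expires_on = int(claims.get("ExpiresOn", "0"))
    return expires_on > time.time() + skew_seconds
```

If the check fails, the client re-authenticates with the identity provider and requests a fresh SWT token from ACS before sending the request.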

 


 


 

Setup and Physical Deployment

 

By default, the web service uses the local host for all components. In a production environment, you would want to use separate computers for the client, the web service, the federation provider, and the identity provider.

To deploy this application, you must substitute the mock issuer with a production-grade component such as ADFS 2.0 that supports active clients. You must also adjust the settings in the client application’s App.config file to account for the new server names: the addresses for the identity provider and ACS are located in the appSettings section.

Remove the mock issuer during deployment.

Note that neither the client nor the web service needs to be recompiled to be deployed to a production environment unless you are changing the ACS service namespace that your solution uses; in this case, you must update the service namespace name and key in the CustomServiceHostFactory class in the a-Order order tracking web service.

 

Configuring ADFS 2.0 for Web Services

In the case of ADFS 2.0, you enable the endpoints using the Microsoft

Management Console (MMC).

To obtain a token from the Litware issuer, you could use the

UsernameMixed or WindowsMixed endpoint. UsernameMixed

requires a user name and password to be sent across the wire,

while WindowsMixed works with the Windows credentials. Both

endpoints will return a SAML token.

 

The “Mixed” suffix indicates that the endpoint uses transport

security (based on HTTPS). For integrity and confidentiality, client

credentials are included in the header of the SOAP message.

 

Configuring ACS

As a minimum, you should configure the aOrderService relying party

in ACS to issue name and organization claims. If you implement any

additional authorization rules, you should ensure that ACS issues

any additional claims that your rules require.

 

To avoid the risk of a partner spoofing an organization name in

a token, you should configure ACS to generate the organization

claim and not simply pass it through from the identity provider.

 


 


 

Questions

 

1. In the scenario described in this chapter, which of the

following statements best describes what happens the first

time that the smart client application tries to use the

RESTful a-Order web service?

 

a. It connects first to the ACS instance, then to the

Litware IP, and then to the a-Order web service.

 

b. It connects first to the Litware IP, then to the ACS

instance, and then to the a-Order web service.

 

c. It connects first to the a-Order web service, then

to the ACS instance, and then to the Litware IP.

 

d. It connects first to the a-Order web service, then

to the Litware IP, and then to the ACS instance.

 

2. In the scenario described in this chapter, which of the

following tasks does ACS perform?

 

a. ACS authenticates the user.

 

b. ACS redirects the client application to the relying

party.

 

c. ACS transforms incoming claims to claims that the

relying party will understand.

 

d. ACS transitions the incoming token format from

SAML to SWT.

 

3. In the scenario described in this chapter, the Web.config file in the a-Order web service does not contain a <microsoft.identity> section. Why?

 

a. Because it configures a custom ServiceAuthorizationManager class to handle the incoming SWT token in code.

 

b. Because it is not authenticating requests.

 

c. Because it is not authorizing requests.

 

d. Because it is using a routing table.

 


 


 

4. ACS expects to receive bearer tokens. What does this

suggest about the security of a solution that uses ACS?

 

a. You do not need to use SSL to secure the connection

between the client and the identity provider.

 

b. You should use SSL to secure the connection between

the client and the identity provider.

 

c. The client application must use a password to

authenticate with ACS.

 

d. The use of bearer tokens has no security implications

for your solution.

 

5. You should use a custom ClaimsAuthorizationManager class for which of the following tasks?

 

a. To attach incoming claims to the IClaimsPrincipal

object.

 

b. To verify that the claims were issued by a trusted

issuer.

 

c. To query ACS and check that the current request is

authorized.

 

d. To implement custom rules that can authorize access

to web service methods.

 

More Information

 

To learn more about proof tokens and bearer tokens, see the blog posts at: http://blogs.msdn.com/b/vbertocci/archive/2008/01/02/on-prooftokens.aspx and http://travisspencer.com/blog/2009/02/what-is-a-proof-key.html.

For more information about the DPE.OAuth project used in this solution, see: http://www.fabrikamshipping.com/.

 


 

10 Accessing REST Services from a Windows Phone Device

 

In Chapter 9, “Securing REST Services,” you saw how Adatum exposed a REST-based web service that used federated authentication and SWT tokens. The scenario described there also included a rich desktop client application that obtained a Simple Web Token (SWT) token from Windows Azure™ AppFabric Access Control services (ACS) to present to the web service. The scenario in this chapter uses the same web service, but shows how to implement a client application on the Windows® Phone platform.

Creating a Windows Phone client raises some additional security concerns. You can’t assume that the Windows Phone device is protected with a password; if the device is stolen or used without the owner’s consent, a malicious user could access all of the applications and data on the device unless you introduce some additional security measures. Such security measures could include requiring the user to enter a password or PIN to access either your application, or a feature within your application. The problem here is that any of these security measures are likely to reduce the usability of the application and degrade the overall user experience.

This chapter describes two alternative implementations of the Windows Phone client: a passive federation approach and an active federation approach. The active federation implementation shows how the client application uses the OAuth protocol and contacts all of the issuers in the trust chain in turn to acquire a valid SWT token to access the a-Order Tracking application. The passive implementation shows how to use an embedded web browser control to handle the redirect messages that are used by the WS-Federation protocol to coordinate the exchange of messages with the issuers.

The active federation implementation described in this chapter differs from the implementation shown in Chapter 9, “Securing REST Services.” Because there is no version of WIF available for Windows

 

Phone to help with the token processing, the client code in the Windows Phone application is slightly more complex than you’d typically find in a Microsoft® Windows® operating system desktop application.

The sample client application demonstrates both active and passive federation approaches.

 

The Premise

 

Litware wants a mobile application that can read the status of its orders directly from Adatum. To satisfy this request, Adatum agrees to provide a web service called a-Order.OrderTracking.Services that users at Litware can use from a variety of client applications over the Internet.

Adatum and Litware have already done the work necessary to establish federated identity; Litware has an issuer that is capable of interacting with both active and passive clients, and Adatum has configured an ACS service namespace with the necessary relying parties (RPs) and identity providers (IdPs). The necessary communications infrastructure, including firewalls and proxies, is in place. To review these elements, see Chapter 5, “Federated Identity with Windows Azure Access Control Service.”

Adatum also has a RESTful web service in place that exposes order-tracking data. This web service is claims-aware and expects to receive claims in an SWT token. For a description of how the web service handles SWT tokens, see Chapter 9, “Securing REST Services.”

If ADFS 2.0 is used, support for federated identity with both active and passive clients is a standard feature.

Goals and Requirements

Both Litware and Adatum see benefits in enabling mobile access to the a-Order tracking data, and Litware already has plans to adopt Windows Phone as its preferred mobile platform. Adatum originally decided to expose the a-Order tracking data using a RESTful web service in anticipation of developing client applications on mobile platforms.

Adatum wants to ensure that the Windows Phone client application follows best practices in terms of integration with the platform and design for optimal battery use. Adatum and Litware are concerned about addressing the possible security issues that arise from using a mobile platform—in particular, the risks associated with someone gaining unauthorized access to a device.

Adatum wants to simplify the process of configuring new identity providers for the Windows Phone application.

 


 


 

Overview of the Solution

 

The following sections describe two solutions: one that uses an active

federated authentication approach, and one that uses a passive

federated authentication approach. There is also a discussion of the

advantages and disadvantages of each.

 

Passive Federation

Figure 1 gives an overview of the proposed solution that uses a passive

federation model to obtain an SWT token from ACS.

 

[Figure 1: the Windows Phone application and its embedded browser on the Windows Phone device interacting with the Litware issuer (IdP), the Adatum ACS instance (FP), and the a-Order tracking RESTful web service (RP).]

figure 1
Windows Phone using passive federation

 


 


 

The diagram presents an overview of the interactions and relationships among the different components. It is similar to the diagrams you saw in previous chapters.

Litware has a Windows Phone client application deployed on Litware employees’ phones. Rick, a Litware employee, uses this application to track orders with Adatum.

Adatum exposes a RESTful web service on the Internet. The a-Order tracking web service expects to receive SWT tokens that contain the claims it will use for authorization. In order to access this service, the client must present an SWT token from the Adatum ACS instance.

Adatum has configured the a-Order tracking web service to trust the Adatum ACS instance.

The sequence shown in the diagram proceeds as follows:

1. The Windows Phone application connects to a service namespace in ACS. It obtains a list of configured identity providers for the relying party (RP) application (Adatum a-Order tracking) as a JavaScript Object Notation (JSON) formatted list. Each entry in this list includes the identity provider’s name and the address of the sign-in page at the identity provider. You can find the URL for this list on the ACS Application Management page.

 

2. The Windows Phone application displays this list for Rick to select the identity provider he wants to use to authenticate.

In the sample, there is only one identity provider (Litware), so Rick has only one choice.

 

3. When Rick selects an identity provider, the Windows Phone

application uses an embedded web browser control to

navigate to the identity provider’s sign-in page (based on

the information retrieved in step 1).

 

4. Because the client application initiates the sign-in passively,

after the Litware identity provider authenticates Rick it

automatically redirects the embedded web browser control

back to ACS, passing it the Security Assertion Markup

Language (SAML) token from the Litware identity provider.

 

5. ACS transforms the tokens based on the rules in the service

namespace, and transitions the incoming SAML token to an

SWT token. ACS returns the SWT token to the embedded

browser.

 

6. The Windows Phone application retrieves the SWT token

from the embedded web browser control and then caches it

on the Windows Phone device.

 


 


 

7. The Windows Phone application then makes a REST call to the a-Order tracking web service, including the SWT token in the request header.

8. The a-Order tracking web service extracts the SWT token from the request. It uses the claims in the token to implement authorization rules in the a-Order tracking web service.

9. The service returns the order tracking data to the Windows Phone application.

 

This scenario uses the passive WS-Federation protocol; the interaction between the identity provider and ACS (the federation provider) is passive and uses an embedded web browser control on the phone to handle the redirects. The Windows Phone application invokes the RESTful web service directly, sending the SWT token to the web service (the relying party) along with the request for tracking data.

The sample application installs a self-issued certificate on the Windows Phone device so that it can use SSL when it communicates with the Litware identity provider and the a-Order tracking application. In a real-world scenario, the Litware identity provider and the a-Order tracking applications will be protected by certificates from a trusted third-party issuer.

The only configuration data that the Windows Phone application needs is:

•     The URL the phone uses to access the list of identity providers in JSON format from ACS. The Windows Phone application uses this URL in step 1 in the sequence shown in Figure 1.

•     The URL the phone uses to access the a-Order tracking RESTful web service. This happens in step 7 in the sequence shown in Figure 1.
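The identity provider list that the phone downloads in step 1 is simple to consume. Each entry in the ACS feed carries, among other fields, the provider’s display name and sign-in page address. The following Python sketch shows the idea; the Name and LoginUrl field names and the abbreviated sample feed are illustrative, so verify them against the feed your own service namespace returns:

```python
import json


def parse_identity_providers(feed_text):
    """Turn the JSON identity provider feed from ACS into a list of
    (display name, sign-in page URL) pairs for a selection screen."""
    return [(entry["Name"], entry["LoginUrl"])
            for entry in json.loads(feed_text)]


# Abbreviated feed content with illustrative values:
sample_feed = '[{"Name": "Litware", "LoginUrl": "https://litware-ip/sign-in"}]'
```

The phone can rebuild its sign-in list from this feed on every launch, which is what makes newly configured identity providers appear without redeploying the application.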

 

This scenario uses Secure Sockets Layer (SSL) to protect all the

interactions from the Windows Phone device including accessing the

Litware identity provider, the ACS instance, and calling the Adatum

web service.

To improve its usability, the Windows Phone application caches

the SWT token so that for subsequent requests it can simply forward

the cached SWT token instead of re-authenticating with the identity

provider, and obtaining a new SWT token from ACS.

 

Active Federation

Figure 2 shows an alternative solution for the Windows Phone client

application that uses a pure active federation approach.

 


 


 

[Figure 2: the Windows Phone application on the Windows Phone device interacting directly with the Litware issuer (IdP), ACS (FP), and the a-Order tracking RESTful web service (RP).]

figure 2
Windows Phone using active federation

 

The diagram presents an overview of the interactions and relationships among the different components in the active federation solution.

Litware has a Windows Phone client application deployed on Litware employees’ phones. Rick, a Litware employee, uses this application to track orders with Adatum.

Adatum exposes a RESTful web service on the Internet. This web service expects to receive Simple Web Token (SWT) tokens that it will use to implement authorization rules in the a-Order application. In order to access this service, the client application must present an SWT token from the Adatum ACS instance.

 


 


 

The sequence shown in the diagram proceeds as follows:

1. The Windows Phone application connects to the Litware identity provider. It sends Rick’s credentials and receives a SAML token in response. This SAML token includes the claims that the Litware identity provider issues for Rick.

2. The Windows Phone application sends the SAML token from the Litware issuer to ACS.

3. The ACS service instance applies the mapping rules for the Litware identity provider to the incoming claims and transitions the incoming SAML token to an SWT token. ACS returns the new SWT token to the Windows Phone client application.

4. The Windows Phone application caches the SWT token so it can use it for future requests. The Windows Phone application then makes a REST call to the a-Order tracking web service, including the SWT token in the request header.

5. The a-Order tracking web service extracts the SWT token from the request. It uses the claims in the token to implement authorization rules in the a-Order tracking web service.

6. The service returns the order tracking data to the Windows Phone application.

 

In this solution, the Windows Phone application controls the

process of obtaining the SWT token and invoking the web service

directly. The application code includes logic to visit all of the issuers

in the trust chain in the correct order. It uses the WS-Trust protocol

when it communicates with the Litware identity provider to obtain a

SAML token, and the OAuth protocol to communicate with ACS and

the a-Order tracking service.

As in the passive solution, all the interactions from the Windows

Phone device are secured using SSL.

 

Comparing the Solutions

The passive federation solution that leverages an embedded browser control offers a simpler approach to obtaining an SWT token because the embedded web browser control in combination with the WS-Federation protocol handles most of the logic to visit the issuers and obtain the SWT token that the application needs to access the a-Order tracking service. In the active federation solution, the Windows Phone application must include code to control the interactions with the issuers explicitly. Furthermore, the active solution must include

 


 


 

code to handle the request for a SAML token from the Litware issuer;

this is more complex on the Windows Phone platform than on the

desktop because there is not currently a version of WIF for Windows

Phone. The sample described in Chapter 9, “Securing REST Services,”

shows you how to do this in a Windows Presentation Foundation

(WPF) application.

However, there is some complexity in the passive solution in the
way that the application must interact with an embedded web browser
control to initiate the sign-in with the Litware identity provider
and retrieve the SWT token issued by ACS from the browser control.

In a WPF application, you can use Windows Identity Foundation (WIF)
to perform some of the token handling, even though WIF does not
provide full support for RESTful web services.

For some scenarios, an advantage of the passive federation approach
is that it enables the Windows Phone application to dynamically
build the list of identity providers for the user to choose from.
If you add an additional identity provider to your ACS
configuration, the Windows Phone client application will detect
this the next time it requests the list of identity providers from
ACS. You could use this to quickly and easily add support for
additional social identity providers to an already deployed Windows
Phone application. In the active federation solution, the
application is responsible for choosing the identity provider to
use, and although you could design the application to dynamically
build a list of identity providers, this would add considerably to
the complexity of the solution. The active federation solution is
much better suited to scenarios where you have a fixed, known
identity provider for the Windows Phone application to use.

If you compare Figures 1 and 2, you can see that the passive
solution requires more round trips to obtain an SWT token, which
will make this approach slower than the active approach. You should
bear in mind that this applies only to the initial federated
authentication. If the application caches the SWT token, it can
reuse it for subsequent requests to the a-Order tracking web
service.

Another potential disadvantage of the active solution is that it

only works with a WS-Trust compliant Security Token Service (STS).

If the Windows Phone device needs to authenticate with a different

protocol, then you’ll have to implement that protocol on the phone.

You must explicitly add any SWT token caching behavior to the
Windows Phone application for both the active and passive
federation solutions; there is no automatic caching provided in
either solution. However, in the passive federation solution, the
embedded web browser control will automatically cache the SAML
token it receives from the Litware identity provider; after the
initial authentication with the Litware identity provider, the
application will not prompt the user to re-enter their credentials
for as long as the cached SAML token remains valid.

The lifetime of the SAML token is determined by the token issuer.
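A minimal sketch of what such explicit caching might look like
(this class is not part of the sample; the expiration value would
come from the token response, such as the
RequestSecurityTokenResponse shown later in this chapter):

```csharp
// Illustrative sketch only: reuse a cached SWT token until shortly
// before it expires, so the token does not expire in flight.
public class SwtTokenCache
{
    private string token;
    private DateTime expiration;

    public void Store(string swtToken, DateTime expiresOn)
    {
        this.token = swtToken;
        this.expiration = expiresOn;
    }

    public bool TryGet(out string swtToken)
    {
        // Leave a one-minute safety margin before the expiration time.
        if (this.token != null &&
            DateTime.UtcNow < this.expiration.AddMinutes(-1))
        {
            swtToken = this.token;
            return true;
        }

        swtToken = null;
        return false;
    }
}
```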

 

accessing rest services from a windows phone device 183

 

Inside the Implementation

 

Now is a good time to walk through some of the details of the
solution. As you go through this section, you may want to download
the Microsoft Visual Studio® development system solution called
9WindowsPhoneClientFederation from http://claimsid.codeplex.com.
The following sections describe some of the key parts of the
implementation; some of these are specific to either the active or
passive federation solution.

 

For details about the implementation of the a-Order tracking web
service, see Chapter 9, “Securing REST Services.”

ADFS 2 does not support the OAuth protocol, so the Windows Phone
application must use the WS-Trust protocol to obtain a SAML token.

Active SAML Token Handling
The active federation solution must handle the request for a SAML
token that the Windows Phone application sends to the Litware
identity provider. There is no version of WIF available for the
Windows Phone platform, so the application must create the SAML
sign-in request programmatically. In the sample application, the
GetSamlTokenRequest method in the HttpWebRequestExtensions class
illustrates a technique for requesting a SAML token when WIF is not
available to perform this task for you.

 

See Chapter 9, “Securing REST Services,” for an example of an
active client that can use WIF to request a SAML token.

 

The following code sample from the HttpWebRequestExtensions

class shows how the Windows Phone application creates the SAML

token request to send to the identity provider.

 

private static string GetSamlTokenRequest(
    string samlEndpoint, string realm)
{
    var tokenRequest =
        string.Format(
            CultureInfo.InvariantCulture,
            samlSignInRequestFormat,
            Guid.NewGuid().ToString(),
            samlEndpoint,
            DateTime.UtcNow.ToString(
                "yyyy'-'MM'-'ddTHH':'mm':'ss'.'fff'Z'"),
            DateTime.UtcNow.AddMinutes(15).ToString(
                "yyyy'-'MM'-'ddTHH':'mm':'ss'.'fff'Z'"),
            "LITWARE\\rick",
            "PasswordIsNotChecked",
            "https://aorderphone-dev.accesscontrol.windows.net/");

    return tokenRequest;
}

 

/// Format:
/// {0}: Message Id - Guid
/// {1}: To - https://localhost/Litware.SimulatedIssuer.9/Issuer.svc
/// {2}: Created - 2011-03-11T01:49:29.395Z
/// {3}: Expires - 2011-03-11T01:54:29.395Z
/// {4}: Username - LITWARE\rick
/// {5}: Password - password
/// {6}: Applies To - https://{project}.accesscontrol.windows.net/
private const string samlSignInRequestFormat =
    @"<s:Envelope xmlns:s=""http://www.w3.org/2003/05/soap-envelope""
        xmlns:a=""http://www.w3.org/2005/08/addressing""
        xmlns:u=""http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd""> … </s:Envelope>";

 

The following code example shows how the client posts the

SAML token request to the identity provider and retrieves the SAML

token from the response.

 

public static IObservable<string> PostSamlTokenRequest(
    this HttpWebRequest request, string tokenRequest)
{
    request.Method = "POST";
    request.ContentType = "application/soap+xml; charset=utf-8";

    return
        Observable
        .FromAsyncPattern<Stream>(request.BeginGetRequestStream,
            request.EndGetRequestStream)()
        .SelectMany(
            requestStream =>
            {
                using (requestStream)
                {
                    var buffer =
                        System.Text.Encoding.UTF8.GetBytes(tokenRequest);
                    requestStream.Write(buffer, 0, buffer.Length);
                    requestStream.Close();
                }

                return
                    Observable.FromAsyncPattern<WebResponse>(
                        request.BeginGetResponse,
                        request.EndGetResponse)();
            },
            (requestStream, webResponse) =>
            {
                string res = new StreamReader(
                    webResponse.GetResponseStream(),
                    Encoding.UTF8).ReadToEnd();
                var startIndex = res.IndexOf("<Assertion ");
                var endIndex = res.IndexOf("</Assertion>");
                var token = res.Substring(
                    startIndex,
                    endIndex + "</Assertion>".Length - startIndex);
                return token;
            });
}

 

Web Browser Control

The passive federation solution uses an embedded web browser
control to handle the passive WS-Federation interactions between
the client application and the issuers. The application wraps the
web browser control in a custom control that you can find in the
SL.Phone.Federation project. The Windows Phone application passes
the address of the JSON-encoded list of identity providers into
this control, and then retrieves the SWT token from the control
when the federated authentication process is complete. The
following code sample from the MainPage.xaml.cs file shows how the
application interacts with the custom sign-in control.

 

private void OnGetMyOrdersPassiveButtonClicked(
    object sender, RoutedEventArgs e)
{
    var acsJsonEndpoint = "https://aorderphone-dev.accesscontrol.windows.net/v2/metadata/IdentityProviders.js?protocol=wsfederation&realm=https%3A%2F%2Flocalhost%2Fa-Order.OrderTracking.Services.9&context=&version=1.0";
    SignInControl.RequestSecurityTokenResponseCompleted +=
        new EventHandler<SL.Phone.Federation.Controls
            .RequestSecurityTokenResponseCompletedEventArgs>(
            SignInControl_RequestSecurityTokenResponseCompleted);
    SignInControl.GetSecurityToken(new Uri(acsJsonEndpoint));
}

 

void SignInControl_RequestSecurityTokenResponseCompleted(
    object sender,
    SL.Phone.Federation.Controls
        .RequestSecurityTokenResponseCompletedEventArgs e)
{
    this.GetOrdersWithToken(
        e.RequestSecurityTokenResponse.TokenString)
        .ObserveOnDispatcher()
        .Catch((WebException ex) =>
        {
            // Error handling elided in this listing. Catch must return
            // a fallback observable (compare the GetOrders method).
            return Observable.Return(default(Order[]));
        })
        .Subscribe(orders =>
        {
            // UI update elided in this listing.
        });
}

 

The Catch handler in the
SignInControl_RequestSecurityTokenResponseCompleted method enables
the client to trap errors such as “401 Unauthorized” responses from
the REST service.

The custom control that contains the embedded web browser control
must raise the RequestSecurityTokenResponseCompleted event after
the control receives the SWT token from ACS. The control recognizes
when it has received the SWT token because ACS sends a redirect
message to a special URL: https://break_here. The ACS configuration
for the aOrderService relying party includes this value for the
“Return URL” setting. The following code sample shows how the
Navigating event handler in the custom control traps this
navigation request, extracts the SWT token, and raises the
RequestSecurityTokenResponseCompleted event to notify the Windows
Phone application that the SWT token is now available.

 

private void SignInWebBrowserControl_Navigating(object sender,
    NavigatingEventArgs e)
{
    if (e.Uri == new Uri("https://break_here"))
    {
        e.Cancel = true;

        var acsReply = this.BrowserSigninControl.SaveToString();

        Regex tagRegex = CreateRegexForHtmlTag("BinarySecurityToken");
        var acsBinaryToken = tagRegex.Match(acsReply).Groups[1].Value;
        var acsTokenBytes = Convert.FromBase64String(acsBinaryToken);
        var acsToken = System.Text.Encoding.UTF8.GetString(
            acsTokenBytes, 0, acsTokenBytes.Length);

        tagRegex = CreateRegexForHtmlTag("Expires");
        var expires = DateTime.Parse(
            tagRegex.Match(acsReply).Groups[1].Value);

        tagRegex = CreateRegexForHtmlTag("TokenType");
        var tokenType = tagRegex.Match(acsReply).Groups[1].Value;

        if (null != RequestSecurityTokenResponseCompleted)
        {
            var rstr = new RequestSecurityTokenResponse();
            rstr.TokenString = acsToken;
            rstr.Expiration = expires;
            rstr.TokenType = tokenType;
            RequestSecurityTokenResponseCompleted(this,
                new RequestSecurityTokenResponseCompletedEventArgs(
                    rstr, null));
        }
    }
}

 

You must also explicitly enable JavaScript in the embedded web
browser control on the phone; otherwise the automatic redirections
will fail. The following snippet from the
AccessControlServiceSignIn.xaml file shows how to do this.

 

<phone:WebBrowser x:Name="BrowserSigninControl"
    IsScriptEnabled="True" Visibility="Collapsed" />

 

Asynchronous Behavior
Both the active and passive scenarios make extensive use of the
Reactive Extensions (Rx) for the Windows Phone platform to interact
with issuers and the a-Order tracking web service asynchronously.
For example, the active federation solution uses Rx to orchestrate
the interactions with the issuers and ensure that they are visited
in the correct sequence. The GetOrders method in the
MainPage.xaml.cs file shows how the client application adds the SWT
token to the request header that it sends to the a-Order tracking
web service, sends the request, and traps any errors such as “401
Unauthorized” messages, all asynchronously.

 

public IObservable<Order[]> GetOrders()
{
    var stsEndpoint =
        "https://localhost/Litware.SimulatedIssuer.9/Issue.svc";
    var acsEndpoint =
        "https://aorderphone-dev.accesscontrol.windows.net/v2/OAuth2-13";

    var serviceEndpoint =
        "https://localhost/a-Order.OrderTracking.Services.9";
    var ordersServiceUri = new Uri(
        serviceEndpoint + "/orders/frommyorganization");

    return
        HttpClient.RequestTo(ordersServiceUri)
        .AddAuthorizationHeader(
            stsEndpoint, acsEndpoint, serviceEndpoint)
        .SelectMany(request =>
        {
            return request.Get<Order[]>();
        },
        (request, orders) =>
        {
            return orders;
        })
        .ObserveOnDispatcher()
        .Catch((WebException ex) =>
        {
            var message = GetMessageForException(ex);
            MessageBox.Show(message);
            return Observable.Return(default(Order[]));
        });
}

 

This example uses the SelectMany method instead of the simple
Select method because the call to the Get method itself returns an
IObservable<Order[]> instance; using Select would then return an
IObservable<IObservable<Order[]>> instance. The SelectMany method
flattens the IObservable<IObservable<Order[]>> instance to an
IObservable<Order[]> instance.
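The difference is easiest to see in the types. As an illustrative
fragment (assuming a source variable of type
IObservable<HttpWebRequest>):

```csharp
// Projecting with Select nests the observable returned by Get:
IObservable<IObservable<Order[]>> nested =
    source.Select(request => request.Get<Order[]>());

// SelectMany subscribes to each inner observable and flattens the result:
IObservable<Order[]> flattened =
    source.SelectMany(request => request.Get<Order[]>());
```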

 

The following list outlines the nested sequence of calls in the

active federated authentication scenario. The process starts when the

application calls the MainPage.GetMyOrdersButton_Click method,

and uses Rx to manage the nested sequence of asynchronous calls.

 

1. Call the MainPage.GetOrders method asynchronously on

a background thread.

 

a. Create an HttpWebRequest object to send to the

a-Orders tracking web service.

 

b. Call the
HttpWebRequestExtensions.AddAuthorizationHeader
method to add the SWT token to the HttpWebRequest
object asynchronously.

 

i. Create a SAML token request.

 

ii. Call the
HttpWebRequestExtensions.PostSamlTokenRequest
method to send the SAML request asynchronously
to the Litware identity provider.

 

a. Send the SAML request to the Litware

identity provider.

 

b. Extract the SAML token in the response

from the Litware identity provider.

 

c. Return the SAML token.

 

iii. Call the
HttpWebRequestExtensions.PostSwtTokenRequest
method to send the SAML token to ACS
asynchronously.

 

a. Create an SWT token request that contains

the SAML token.

 

b. Send the SWT token request to ACS.

 

c. Extract the SWT token in the response

from ACS.

 

d. Return the SWT token.

 

iv. Add the SWT token to the HttpWebRequest

object.

 

v. Return the HttpWebRequest object.

 


 

c. Invoke the a-Orders tracking web service by calling

the HttpWebRequest.Get method asynchronously.

 

i. Send the web request to the a-Orders tracking

web service.

 

ii. Use the BeginGetResponse and EndGetResponse
methods to capture the response data.

 

iii. Deserialize the response data to an Order[]

instance.

 

iv. Return the Order[] instance.

 

d. Return the results as an Order[] instance.

 

2. Update the UI with the result of the call to
MainPage.GetOrders.

 

The following list outlines the nested sequence of calls in the
passive federated authentication scenario. The process starts when
the application calls the
MainPage.OnGetMyOrdersPassiveButtonClicked method, and uses Rx to
manage the nested sequence of asynchronous calls.

 

1. Call the AccessControlServiceSignIn.GetSecurityToken

method to obtain an SWT token.

 

2. Handle the
AccessControlServiceSignIn.RequestSecurityTokenResponseCompleted
event.

 

a. Call the MainPage.GetOrdersWithToken method

asynchronously. The SWT token is available in the

EventArgs parameter.

 

i. Create an HTTP request to send to the a-Order

tracking web service.

 

ii. Call the
HttpWebRequestExtensions.AddAuthorizationHeader
method asynchronously to add the SWT token to
the request.

 

iii. Invoke the a-Orders tracking web service by

calling the HttpWebRequest.Get method

asynchronously.

 

a. Send the web request to the a-Orders

tracking web service.

 

b. Use the BeginGetResponse and EndGetResponse
methods to capture the response data.

 


 

c. Deserialize the response data to an Order[]

instance.

 

d. Return the Order[] instance.

 

iv. Return the Order[] instance.

 

b. From the background thread, update the UI with the

Order[] instance data by calling the UpdateOrders

method.

 

Setup and Physical Deployment

 

For the sample Windows Phone application to be able to use SSL when
it communicates with the sample Litware issuer and Adatum a-Order
tracking applications on localhost, it’s necessary to install the
localhost root certificate on the Windows Phone device. To do this,
the Litware sample issuer includes a page that has a link to the
required certificate:
http://localhost/Litware.simulatedIssuer.9/rootCert/default.aspx.
If you navigate to this address on the Windows Phone device, you
can install the root certificate that enables SSL. In a production
environment, you should secure your web service and issuer with a
certificate from a trusted third-party certificate provider rather
than a self-issued certificate; if you do this, it won’t be
necessary to install a certificate on the Windows Phone device in
order to access your issuer and web service using SSL.

In the passive federation scenario, the Windows Phone application
uses an embedded web browser control to navigate to the Litware
identity provider so that the user can enter her credentials. It’s
important that the sign-in page at the issuer is “mobile friendly”
and displays clearly on the Windows Phone device. You should verify
that your issuer renders a suitable sign-in page if you are
planning to use a Windows Phone client in a passive federated
authentication scenario.

 

Questions

 

1. Which of the following are issues in developing a
claims-aware application that accesses a web service for the
Windows Phone 7™ platform?

 

a. It’s not possible to implement a solution that uses

SAML tokens on the phone.

 

b. You cannot install custom SSL certificates on the

phone.

 


 

c. There is no secure storage on the phone.

 

d. There is no implementation of WIF available for the

phone.

 

2. Why does the sample application use an embedded web

browser control?

 

a. To handle the passive federated authentication

process.

 

b. To handle the active federated authentication process.

 

c. To access the RESTful web service.

 

d. To enable the client application to use SSL.

 

3. Of the two solutions (active and passive) described in the

chapter, which requires the most round trips for the initial

request to the web service?

 

a. They both require the same number.

 

b. The passive solution requires fewer than the active

solution.

 

c. The active solution requires fewer than the passive

solution.

 

d. It depends on the number of claims configured for the

relying party in ACS.

 

4. Which of the following are advantages of the passive

solution over the active solution?

 

a. The passive solution can easily build a dynamic list of

identity providers.

 

b. It’s simpler to create code to handle SWT tokens in

the passive solution.

 

c. It’s simpler to create code to handle SAML tokens in

the passive solution.

 

d. Better performance.

 


 

5. In the sample solution for this chapter, how does the

Windows Phone 7 client application add the SWT token to

the outgoing request?

 

a. It uses a Windows Communication Foundation (WCF)

behavior.

 

b. It uses Rx to orchestrate the acquisition of the SWT

token and add it to the header.

 

c. It uses the embedded web browser control to add the

header.

 

d. It uses WIF.

 

More Information

 

To learn more about developing for Windows Phone 7, see the
“Windows Phone 7 Developer Guide” at:
http://msdn.microsoft.com/en-us/library/gg490765.aspx.

 


 

11 Claims-Based Single Sign-On

for Microsoft SharePoint

2010

 

This chapter walks you through an example of integrating two
Microsoft® SharePoint® services web applications into a single
sign-on (SSO) environment for intranet and extranet web users who
all belong to a single security realm. These users can already
access other ASP.NET web applications in the SSO environment.
You’ll see examples of SharePoint applications that Adatum has made
claims-aware so that Adatum employees can access the SharePoint
applications from the company intranet or from the web.

This basic scenario doesn’t show how to establish a trust
relationship between enterprises that would allow users from
another company to access the SharePoint site; that is discussed in
Chapter 12, “Federated Identity for SharePoint Applications.”
Instead, this chapter focuses on how to implement single sign-on
and single sign-off within a security domain as a preparation for
sharing resources with other security domains, and how to configure
SharePoint to use claims-based authentication and authorization. In
short, this scenario contains the commonly used elements that will
appear in all claims-aware SharePoint applications. For further
information about integrating ASP.NET web applications into an SSO
environment and about making them claims-aware, you should read
Chapter 3, “Claims-Based Single Sign-On for the Web.”

Most of what you’ll see described in this chapter about SharePoint
and claims could be achieved without needing to claims-enable
SharePoint. However, the claims-based infrastructure that this
chapter introduces forms the basis of more advanced scenarios, such
as the federated scenario described in the next chapter, which can
only be implemented using claims.

For additional information about SharePoint and claims-based
identity, see Appendix F, “SharePoint 2010 Authentication
Architecture and Considerations.”

 


 

The Premise

 

Adatum is a medium-sized company that uses Microsoft Active
Directory® to authenticate the employees in its corporate network.
Adatum is planning to implement two applications as SharePoint 2010
web applications that employees will access from both the intranet
and the Internet:

 

1. One application is a portal, named a-Portal, where Adatum

stores the product documentation that’s used by its sales

force when they engage with customers. This SharePoint

web application consists of a single site collection based on

the “Team Site” template.

 

2. The other is a web application, named a-Techs, where field

staff access scheduling information, tasks, and technical

data. It also includes a blog where field technicians can

capture tips and techniques to share with other team

members (and possibly partners in the future). This
SharePoint web application consists of two site collections; one

based on the “Team Site” template, and one based on the

“Blog” template. This web application also uses SharePoint

user profile data.

 

Adatum has already established an SSO environment that includes
existing ASP.NET web applications such as the a-Order and a-Expense
applications. As part of this environment, Adatum has configured
Active Directory Federation Services (ADFS) to act as an identity
provider (IdP).

 

Goals and Requirements

 

The goals of this scenario are to show how to configure a SharePoint

environment to use a claims-based identity model to control access,

and how to customize SharePoint to provide a way for a SharePoint

farm administrator to effectively manage access to the claims-enabled

SharePoint applications.

Configuring a SharePoint environment to use claims includes

configuring the trust relationship between SharePoint and ADFS and

configuring which claims ADFS passes to SharePoint.

Users must be able to access the SharePoint web applications from
both the intranet and Internet as part of an SSO realm that
includes other ASP.NET web applications. The environment should also

support single sign-out, so that logging out from any ASP.NET or

SharePoint web application logs the user out from all applications that

are part of the SSO domain.

 


 

SharePoint site collection administrators should be able to control
access to site collections and sites based on role memberships
defined in AD. For example, only users in the Sales role should
have access to the a-Portal web application and only users in the
Team Leader role should be able to post to the blog in the a-Techs
application.

 

Overview of the Solution

 

Adatum has created two claims-enabled SharePoint web applications:
one for salespersons and one for field technical employees. These
applications are available on the intranet and Internet. The
following diagram shows the main components of the solution
suggested by Adatum.

In SharePoint, you configure an STS by creating a SharePoint
trusted identity token issuer.

[Figure 1 is a diagram showing ADFS trusted by the SharePoint STS,
with John’s browser at Adatum and at home (via the Internet)
accessing the a-Portal (Team Site) and a-Techs (Team Site and Blog)
SharePoint web applications and the a-Order application, each using
a FedAuth cookie.]

figure 1
Claims-enabled SharePoint applications at Adatum

 

Authentication Mechanism
Adatum has configured both SharePoint web applications to use ADFS
as a Trusted Identity Provider. Adatum has also configured ADFS to
use different authentication types depending on where the user is
accessing the applications from: intranet users will sign in
automatically using Integrated Windows Authentication, and Internet
users will enter their Adatum Windows credentials into a web form.
In this way, all users authenticate with Active Directory through
ADFS.

During development, it’s useful to be able to see the set of claims
that a user has. See the section “Displaying Claims in a Web Part”
for one way to do this.

 


 

An alternative approach that Adatum considered was to configure two
authentication types in each web application in SharePoint.
SharePoint 2010 allows you to configure multiple authentication
mechanisms for a single web application; for example, you could
configure a SharePoint web application to use both Windows
Authentication and a trusted identity provider. Figure 2 shows the
two alternative routes by which user attributes from Active
Directory become claims belonging to a SharePoint user in this
alternative scenario. The SharePoint security token service (STS)
is an instance of a SharePoint trusted identity token issuer; the
custom claims providers are optional components.

 

[Figure 2 is a diagram showing two routes from Active Directory to
the claims collection of the IClaimsPrincipal instance: one through
ADFS (with claims mapping rules) and then the SharePoint STS (with
claims mapping rules and an optional custom claims provider for
claims augmentation), and one directly through the SharePoint STS
with an optional custom claims provider.]

figure 2
Building a user’s claims collection

 

The difficulty with this approach is that although both
authentication mechanisms result in a set of claims for the
IClaimsPrincipal instance associated with the user, without
additional code they are unlikely to generate the same types of
claims. For example, the claims from Windows authentication will
include groupsid claims, while the claims from the trusted identity
provider will include role claims. An additional complexity of this
approach is that you’ll probably want to customize the page that
SharePoint displays, offering users a choice of authentication
provider.

You can use the claims augmentation offered by the custom claims
providers to programmatically add additional claims to a user’s
claims set.

For an example of how a custom claims provider converts SIDs to
group names, see this blog post:
http://blogs.technet.com/b/speschka/archive/2010/09/12/a-sharepoint-2010-claims-provider-to-convert-role-sids-to-group-names.aspx.
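As a sketch of what claims augmentation looks like in code, a
custom claims provider derives from the SharePoint 2010
SPClaimProvider class and overrides its FillClaimsForEntity method;
the role claim added below is an invented example:

```csharp
// Illustrative sketch of claims augmentation in a custom claims provider.
// SPClaimProvider, FillClaimsForEntity, and CreateClaim are part of the
// SharePoint 2010 object model; the claim value here is hypothetical.
protected override void FillClaimsForEntity(
    Uri context, SPClaim entity, List<SPClaim> claims)
{
    // Add an extra role claim to every authenticated user.
    claims.Add(CreateClaim(
        "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
        "AllEmployees",
        "http://www.w3.org/2001/XMLSchema#string"));
}
```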

 


 

For an example of how to customize the default SharePoint page that
presents a choice of authentication providers to the user, see this
blog post:
http://blogs.msdn.com/b/brporter/archive/2010/05/10/temp.aspx.

 

For these reasons, Adatum selected the first approach that uses a
single trusted identity provider in SharePoint so that they can use
the claims-mapping rules in ADFS and ensure that a consistent set
of claims reaches SharePoint.

 

End-to-End Walkthroughs

The following sections outline two scenarios for a user who
accesses a claims-enabled SharePoint environment: the first
scenario describes what happens when a user accesses two different
site collections in the same SharePoint web application; the second
scenario describes what happens when a user accesses two SharePoint
web applications hosted in the same domain.

The walkthroughs below describe the experience of Internet users
who must provide their username and password to ADFS in order to
authenticate. ADFS will not prompt intranet users (inside the
corporate firewall) for their credentials, but will authenticate
them using Integrated Windows Authentication: intranet users will
not see the sign-in page for ADFS.

 

Visiting Two Site Collections in a SharePoint

Web Application

In this walkthrough, John visits the Team site and then the Blog
site in the a-Techs SharePoint web application.

 

1. John browses to the Team site in the a-Techs SharePoint

web application.

 

2. John has not yet been authenticated, so SharePoint
redirects his browser to ADFS. There are several intermediate

steps—the SharePoint authentication endpoint and the

SharePoint sign-in endpoint—before it arrives at ADFS.

 

3. John enters his Adatum domain credentials; ADFS validates

the credentials, creates a token that contains John’s claims,

and redirects the browser to the SharePoint STS (the
“/_trust/” endpoint in the SharePoint web application
references the trusted identity token issuer).

 

4. The SharePoint STS validates the token from ADFS and

issues a FedAuth cookie for the a-Techs SharePoint web

application. This cookie contains a reference to the token

that contains John’s claims; the token itself is stored in the

SharePoint token cache.

 


 

5. SharePoint checks that John has adequate permissions to

access the Team site collection, and redirects his browser

to the site (the “/_layouts/Authenticate.aspx” endpoint in

the SharePoint web application performs the permissions

check).

 

6. John browses to the Blog site in the a-Techs SharePoint web
application. He does not require a new token for this site

collection because it is part of the same SharePoint web

application.

 

In Chapter 12, “Federated Identity for SharePoint Applications,”

you can see a sequence diagram that illustrates this process in

relation to sliding sessions.

 

Visiting Two SharePoint Web Applications

In this walkthrough, John visits the a-Portal SharePoint web applica-

tion and then visits the a-Techs SharePoint web application.

 

1. John visits the a-Portal SharePoint web application.

 

a. John browses to the Team site in the a-Portal
SharePoint web application.

 

b. John has not yet been authenticated, so SharePoint

redirects his browser to ADFS.

 

c. John enters his Adatum domain credentials; ADFS

validates the credentials, issues a SAML token that

contains his claims, and redirects the browser to the

SharePoint STS (the “/_trust/” endpoint in the
SharePoint web application). ADFS also creates an SSO

cookie so that it can recognize if it has already

authenticated John.

 

d. The SharePoint STS validates the token from ADFS

and issues a FedAuth cookie for the a-Portal Share-

Point web application that contains a reference to

John’s claims in the SharePoint token cache.

 

e. SharePoint checks that John has access to the Team

site collection, and redirects his browser to the site.

 

2. John visits the a-Techs SharePoint web application.

 

a. John browses to the Team site in the a-Techs SharePoint
web application.

 

b. John has not yet been authenticated for this SharePoint
web application, so SharePoint redirects his

browser to ADFS.

 

claims-based single sign-on for microsoft sharepoint 2010 201

 

c. ADFS detects the SSO cookie that it issued in step

1-c, and redirects the browser with a new SAML token

to the SharePoint STS.

 

d. The SharePoint STS validates the token from ADFS

and issues a FedAuth cookie for the a-Techs Share-

Point web application that contains a reference to

John’s claims in the SharePoint token cache.

 

e. SharePoint checks that John has sufficient permissions
to access the Team site collection, and redirects his

browser to the site.

 

In this example, it's important to ensure that each SharePoint web
application uses its own FedAuth cookie. If the web applications have
different host names, this will happen automatically. However, if in a
test environment the web applications share the same host name, the
second web application will try to use the existing FedAuth cookie,
which will not be valid for that web application. Each web application
must have its own FedAuth cookie. See the section, "Setup and Physical
Deployment," in this chapter for more details.

 

Authorization in SharePoint

This scenario uses standard SharePoint groups to control access to the

sites in the two SharePoint web applications. The following table

summarizes the permissions.

 

Site                 SharePoint Group    Permission level   Role Claim

a-Portal Team site   SalesSite Members   Contribute         sales

a-Techs Team site    TechSite Members    Contribute         techleaders

a-Techs Team site    TechSite Members    Contribute         techs

a-Techs Blog site    TechBlog Members    Contribute         techleaders

a-Techs Blog site    TechBlog Visitors   Read               techs

 

In SharePoint, a site administrator can add users to a SharePoint

group to grant those users the permissions associated with the group.

In a claims-based environment, a site administrator can add users to a

SharePoint group based on the users’ claims; for example, a site admin-

istrator could add all authenticated users in the sales role to the

SharePoint Site Members group by using the Site Permissions Tools.

 

Mapping claims to SharePoint groups simplifies the administration

tasks in SharePoint. There is no need to add individual users to

SharePoint groups.
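To make the claims-to-groups mapping concrete, the following Python sketch models the permissions table above as a simple lookup from role-claim values to group names. This is purely illustrative, not SharePoint code; the dictionary mirrors the table, and the function name is invented.

```python
# Hypothetical sketch: map role-claim values to SharePoint groups,
# mirroring the permissions table above (not SharePoint's real API).
CLAIM_TO_GROUPS = {
    "sales": {"SalesSite Members"},
    "techleaders": {"TechSite Members", "TechBlog Members"},
    "techs": {"TechSite Members", "TechBlog Visitors"},
}

def groups_for_claims(role_claims):
    """Collect the SharePoint groups granted by a user's role claims."""
    groups = set()
    for role in role_claims:
        # Unknown roles simply grant no group membership.
        groups |= CLAIM_TO_GROUPS.get(role, set())
    return groups
```

Because membership is derived from claims, adding a user to a role in Active Directory is all that is needed; no per-user administration happens in SharePoint.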

 


 

Adatum has modified the SharePoint People Picker to make it

easier for site administrators to map role and organization claims to

SharePoint groups.

If your identity provider does not provide the claims that you

need to implement your authorization rules, you can use claims aug-

mentation in the SharePoint STS to modify existing claim values or to

add additional claims to an authenticated user.

 

The People Picker

It is difficult for site administrators at Adatum to use the default

people picker to reliably assign permissions in the a-Portal and a-Techs

web applications. The default behavior of the people picker is to allow

the user to enter part of a user name or group name and then use the

search function to locate the user or group. In a claims-enabled
SharePoint web application this does not work as expected because
there is no repository of users and groups for the people picker to
search; the only information SharePoint has is the claims data
associated with the current user. The default people picker
implementation works around this by always finding a match and
resolving the name even if the name is incorrect, which makes it easy
for an administrator to make a mistake. For example, let's say the
site administrator would like to assign a permission to anyone in the
techs role. If he makes a typing mistake and searches for techz in the
people picker, he will get a match and be able to assign a permission
to a non-existent role.

To prevent this type of error, Adatum implemented a custom
SPClaimProvider component that can search for role and organization
values in a pre-defined list of valid values. Figure 3 shows the
overall architecture of the solution that Adatum adopted. There is a
central store of valid role and organization names that both ADFS and
the SharePoint people picker use: this way Adatum can configure ADFS
to issue role and organization claims that the SharePoint people
picker will recognize.

In a claims-enabled application, the application receives a set of
claims from a trusted issuer about the person accessing the
application. This contrasts with the approach whereby the application
queries a directory service to discover information about the user.
The claims-based approach is much more flexible: the claims can come
from many different issuers and be used in a federated identity
environment. However, in a claims-based scenario the application may
not have direct access to lists of users in a directory.
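The behavior Adatum wants can be sketched as a lookup against the shared store of permissible values. This is a hypothetical Python illustration (the claim values mirror this chapter's examples); the real solution is a custom claim provider, described later in this chapter.

```python
# Hypothetical central store of valid claim values, shared with ADFS.
# Names mirror the chapter's examples; the store itself is illustrative.
VALID_CLAIM_VALUES = {"sales", "techs", "techleaders", "adatum"}

def resolve(search_text):
    """Return only genuine matches from the store. A typo such as
    'techz' resolves to nothing, unlike the default picker, which
    resolves any input the administrator types."""
    text = search_text.lower()
    return sorted(v for v in VALID_CLAIM_VALUES if v.startswith(text))
```

With this approach, a search for "techz" returns no match, so the administrator cannot grant permissions to a non-existent role.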

 


 

[Figure: inside the Adatum network, the people picker in SharePoint
and ADFS both look up predefined roles and organizations in a shared
store; the SharePoint site administrator searches the store for valid
roles and organizations.]

figure 3
Architecture of the Adatum people picker solution

 

SharePoint and ADFS both run inside the Adatum corporate net-

work. If SharePoint is running in a separate network from ADFS and

the store, then a slightly more complex solution is needed. This might

arise if SharePoint is running in the cloud, or if SharePoint needs to

resolve values used by a partner’s directory services. In this case, the

architecture might include a lookup service as shown in Figure 4; in

SharePoint you can use Business Connectivity Services to make the

call to the lookup service, which introduces a useful layer of indirec-

tion into the architecture.

 


 

[Figure: SharePoint running in the cloud hosts the people picker,
which calls a query claims service to look up roles and organizations;
the service and ADFS use the store of predefined roles and
organizations inside the Adatum network, where the SharePoint site
administrator searches for valid roles and organizations.]

figure 4
People picker solution architecture including a query claims lookup
service

 

Adatum plans to use role and organization claims to assign per-

missions in SharePoint, and wants to avoid assigning permissions to

individual users. However, some organizations may prefer to use

names or email addresses to assign permissions in some circumstances.

It is still possible to do this in a claims-enabled SharePoint site, but

with the standard people picker component, site administrators will

face the same problem whereby the people picker resolves both valid

and invalid names. To work around this problem you can again create

a custom people picker component that resolves name and email

address claim values against your directory service.

 

In the long run, it's more maintainable to manage permissions based on
roles (and organizations) rather than on individuals in SharePoint.
You can use Active Directory and ADFS to manage an individual's role
and organization membership, while in SharePoint you can focus on
mapping roles and organizations to SharePoint groups.

Single Sign-Out

For a SharePoint web application to participate in the single sign-out
process, it must be able to handle the following scenarios. For more
information about single sign-out and the WS-Federation protocol, see
Chapter 3, "Claims-Based Single Sign-On for the Web and Windows
Azure."

1. The user should be able to initiate the single sign-out from
within the SharePoint web application. Adatum modified the behavior
of the standard sign-out process to send the WS-Federation wsignout
message to the token issuer. In the Adatum scenario, this token
issuer is ADFS.

 


 

2. SharePoint web applications should handle WS-Federation

wsignoutcleanup messages from the issuer and invalidate

any security tokens for the application. For this to work in

SharePoint you must configure the SharePoint security

token service to use session cookies rather than persistent

cookies.
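As a rough illustration of what initiating sign-out involves, the following Python sketch builds the WS-Federation sign-out redirect. The wa=wsignout1.0 and wreply parameters come from the WS-Federation protocol; the function itself and the URLs in the usage note are hypothetical.

```python
from urllib.parse import urlencode

def build_wsignout_url(issuer_url, return_url):
    """Build the redirect that sends the WS-Federation wsignout
    message to the token issuer (ADFS in the Adatum scenario).
    'wreply' tells the issuer where to send the browser afterwards."""
    sep = "&" if "?" in issuer_url else "?"
    return issuer_url + sep + urlencode(
        {"wa": "wsignout1.0", "wreply": return_url})
```

For example, `build_wsignout_url("https://DC-adatum/adfs/ls/", "https://a-portal/_layouts/SignOut.aspx")` produces the redirect that asks ADFS to sign the user out and then return to the SharePoint sign-out page.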

 

If the user is signing in using Windows authentication in ADFS, then

revisits the web application after having signed out, he or she will be

signed in automatically and silently. Although the single sign-out has

happened, the user won’t be aware of it.

 

By default, SharePoint uses persistent cookies to store the session
token, which means that a user can close the browser, re-open it, and
get back to the SharePoint web application as long as the cookie has
not expired. The consequence of changing to session cookies is that if
a user closes the browser, she will always be required to authenticate
again when she next visits the SharePoint web application. Adatum
prefers this behavior because it provides better security.

The default name for the session cookie is FedAuth.
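The difference between the two cookie kinds comes down to whether an expiry attribute is emitted. The following sketch uses Python's standard http.cookies module to illustrate general cookie mechanics; it is not SharePoint's actual cookie handling, and the function name is invented.

```python
from http.cookies import SimpleCookie

def fedauth_cookie(token_ref, persistent, max_age=3600):
    """Emit a Set-Cookie value. A persistent cookie carries Max-Age
    and survives a browser restart; a session cookie omits it and is
    discarded when the browser closes."""
    c = SimpleCookie()
    c["FedAuth"] = token_ref
    c["FedAuth"]["path"] = "/"
    c["FedAuth"]["httponly"] = True  # keep the cookie away from script
    if persistent:
        c["FedAuth"]["max-age"] = max_age
    return c["FedAuth"].OutputString()
```

Adatum's choice of session cookies corresponds to the `persistent=False` branch: no expiry attribute, so closing the browser ends the session.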

 

Inside the Implementation

 

The following sections describe the key configuration steps that Ada-

tum performed in order to implement the scenario that this chapter

describes.

 

Relying Party Configuration in ADFS

 

Each SharePoint web application is a separate relying party (RP) from

the perspective of ADFS. Adatum has configured each of the relying

parties to use the WS-Federation protocol and to issue the emailad-

dress and role claims for users that it authenticates, passing the values

of these claims through from Active Directory. The following table

shows the mapping rules that Adatum configured for each relying

party in ADFS.

 

LDAP Attribute Outgoing claim type

 

E-Mail-Addresses E-Mail Address

 

Token-Groups – Unqualified Names Role

 

It’s important that the claims issued to SharePoint by ADFS (or

any other claims issuer) are SAML 1.x compliant. For a description of

the correct name format for claims that will be consumed by Share-

Point, see this blog post: http://social.technet.microsoft.com/wiki/

contents/articles/ad-fs-2-0-the-admin-event-log-shows-error-

111-with-system-argumentexception-id4216.aspx.

 


 

ADFS must be able to identify which relying party a request

comes from so that it can issue the correct set of rules. The sample

scenario uses the identifiers shown in the following table:

 

Relying Party Identifiers

 

a-Portal SharePoint web application urn:adatum-portal:sharepoint

 

a-Techs SharePoint web application urn:adatum-techs:sharepoint
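Conceptually, the issuer's dispatch on the relying party identifier can be sketched as a dictionary lookup keyed by the wtrealm value. This is an illustration only; real ADFS issuance rule sets are far richer than the strings used here.

```python
# Hypothetical rule-set lookup keyed by the relying party identifier
# (the wtrealm value) that SharePoint sends with each token request.
ISSUANCE_RULES = {
    "urn:adatum-portal:sharepoint": "a-Portal rules",
    "urn:adatum-techs:sharepoint": "a-Techs rules",
}

def select_rules(wtrealm):
    """Pick the issuance rules for the requesting relying party;
    an unrecognized identifier means the request cannot be served."""
    if wtrealm not in ISSUANCE_RULES:
        raise KeyError("unknown relying party: " + wtrealm)
    return ISSUANCE_RULES[wtrealm]
```

This is why the identifiers configured in ADFS must match the realms configured in SharePoint exactly: a mismatched wtrealm simply fails to select any rule set.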

 

As part of the configuration in ADFS, you must specify the URL
of the relying party WS-Federation protocol endpoint: this URL will
be the "/_trust/" path in your SharePoint web application.

SharePoint will send these identifier values in the wtrealm
parameter. It's important to make sure that these identifiers match
the configuration in SharePoint. These examples show the recommended
format for these identifiers; however, there is no specific required
format.

You must enter the required information in ADFS manually (or
create Windows® PowerShell® command-line interface scripts);
SharePoint does not expose a FederationMetadata.xml document
that you can use to automate the configuration.

SharePoint STS Configuration

You must configure the SharePoint STS to trust the ADFS issuer, and
map the incoming claims from ADFS to claims that your SharePoint
applications will use. The following sections describe the steps you
must perform to complete this configuration.

Remember to install the SharePoint PowerShell snap-in before
attempting to run any SharePoint PowerShell scripts. You can
do this with the following PowerShell command:

 

Add-PSSnapin Microsoft.Sharepoint.Powershell

 

Create a New SharePoint Trusted Root Authority

ADFS signs the tokens that it issues with a token signing certificate.

You must import into SharePoint a certificate that it can use to vali-

date the token from ADFS. You can use the following PowerShell

commands to import a certificate from the adfs.cer file:

 

$cert = New-Object System.Security.Cryptography.X509Certificates.
X509Certificate2("C:\adfs.cer")
New-SPTrustedRootAuthority
-Name "Token Signing Cert"
-Certificate $cert

 

You can export this certificate from ADFS using the certificates

node in the ADFS 2.0 Management console.

 


 

If the signing certificate from ADFS has one or more parent certifi-

cates in its certificate chain, you must add these to SharePoint as

well. You can use the same SharePoint command to do this.

Notice that you must import any certificates that SharePoint

uses into SharePoint; SharePoint does not use the trusted root

authorities in the certificate store on the local machine.

 

Create the Claims Mappings in SharePoint

To map the incoming claims from ADFS to claims that SharePoint

uses, you must create some mapping rules. The following PowerShell

commands show how to create rules to pass through the incoming

emailaddress and role claims.

 

$map = New-SPClaimTypeMapping
-IncomingClaimType
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
-IncomingClaimTypeDisplayName "EmailAddress"
-SameAsIncoming

$map2 = New-SPClaimTypeMapping
-IncomingClaimType
"http://schemas.microsoft.com/ws/2008/06/identity/claims/role"
-IncomingClaimTypeDisplayName "Role"
-SameAsIncoming

 

You can choose to perform your claims mapping either as a part

of the relying party definition in ADFS, or in the SharePoint STS.

However, the rules-mapping language in ADFS is the more flexible of

the two.

For an example of how to add additional claim types, see the

“People Picker Customizations” section later in this chapter.

 

Create a New SharePoint Trusted Identity Token Issuer

A SharePoint trusted identity token issuer binds together the details

of the identity provider and the mapping rules to associate them with

a specific SharePoint web application. The following PowerShell com-

mands show how to add the configuration settings for the scenario

that this chapter describes. This script uses the $cert, $map, and

$map2 variables from the previous script snippets.

 

$ap = New-SPTrustedIdentityTokenIssuer
-Name "SAML Provider"
-Description "Uses Adatum ADFS as an identity provider"
-Realm "urn:adatum-portal:sharepoint"
-ImportTrustCertificate $cert
-ClaimsMappings $map,$map2
-SignInUrl "https://DC-adatum/adfs/ls/"
-IdentifierClaim
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"

$uri = New-Object System.Uri("https://adatum-sp:31242/")

$ap.ProviderRealms.Add($uri, "urn:adatum-techs:sharepoint")
$ap.Update()

 

The following table describes the key parameters in the PowerShell
commands.

Don't forget to call the Update method to save the changes that the
ProviderRealms.Add method makes.

Parameter/command Notes

-Realm The realm is the value of the relying party identifier in
ADFS. In this example, the realm parameter identifies the a-Portal
SharePoint web application. The Add method of the ProviderRealms
object adds the identifier for the a-Techs SharePoint web application.
The URI is the address of the SharePoint web application.

 

-ImportTrustCertificate This associates the token-signing certificate from

ADFS with the token issuer.

 

-ClaimsMappings This associates the claims-mapping rules with the

token issuer.

 

-SignInUrl This identifies the URL where the user can authenti-

cate with ADFS.

 

-IdentifierClaim This identifies which claim from the identity provider

uniquely identifies the user.

 

This example uses the email address as the identifier. You may want
to consider an alternative unique identifier because email addresses
can change.

Figure 5 summarizes how the SharePoint trusted identity token

issuer uses the configuration data to issue a SAML token to the Share-

Point web application.

 


 

[Figure: the SharePoint trusted identity token issuer receives a token
request and uses its provider realms to look up the relying party
identifier for the web application requesting a token; it forwards the
request to ADFS, which authenticates the user and issues a SAML token,
using the identifier to determine which rules to run; the trusted
identity token issuer then verifies the signature on the token with
the token signing certificate, applies the mapping rules, and issues a
SAML token to the web application.]

figure 5
The SharePoint trusted identity token issuer

 

When a SharePoint web application requests a token from a

trusted identity provider, the SharePoint trusted token issuer first

looks up the unique identifier of the web application. It passes this

identifier to the external token issuer in the wtrealm parameter of the

request. When the external token issuer returns a SAML token, the

SharePoint trusted identity token issuer verifies the signature, applies

any mapping rules, and places the new SAML token in the SharePoint

token cache. It also creates a FedAuth cookie that contains a refer-

ence to the SAML token in the cache. Whenever the user accesses a

page in the SharePoint web application, SharePoint first checks if

a valid SAML token exists for the user, and then uses the claims in the

token to perform any authorization checks.
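The cache-plus-reference pattern described above can be sketched in a few lines of Python. This is purely illustrative; SharePoint's token cache and FedAuth cookie format are internal, and the function names here are invented.

```python
import uuid

# Minimal sketch: the cache keeps the SAML token server-side, and the
# FedAuth cookie carries only a reference to the cached entry.
_token_cache = {}

def cache_token(saml_token):
    """Store a validated token and return the opaque reference that
    goes into the FedAuth cookie."""
    ref = str(uuid.uuid4())
    _token_cache[ref] = saml_token
    return ref

def claims_for(fedauth_ref):
    """Look up the user's claims from the cookie reference; None
    means there is no valid token and the user must re-authenticate."""
    token = _token_cache.get(fedauth_ref)
    return None if token is None else token["claims"]
```

Keeping the token server-side means the cookie stays small and the (potentially large) claim set never round-trips with every request.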

There is a one-to-one mapping between SharePoint trusted iden-

tity token issuers and trust certificates from the external token issuer.

You cannot configure a new SharePoint trusted identity token issuer

using a token-signing certificate that an existing SharePoint trusted

identity token issuer uses.

 

SharePoint Web Application

Configuration

Each web application in SharePoint defines which authentication

mechanisms it can use. In the scenario described in this chapter, Ada-

tum has configured both SharePoint web applications to use a SAML-

based trusted identity provider. Both intranet and internet users use

the SAML-based trusted identity provider.

 


 

People Picker Customizations

To customize the behavior of the standard people picker to enable

site administrators to reliably select role and organization claims,

Adatum created a custom claim provider to deploy to SharePoint. The

Microsoft Visual Studio® development system solution, SampleClaim-

sProvider, in the 10SharePoint folder from http://claimsid.codeplex.

com includes a custom claim provider that demonstrates how Adatum

extended the behavior of the people picker. For reasons of simplicity,

this sample does not use a store to maintain the list of role and
organization claims that Adatum uses; the lists of valid claims are
maintained in memory. In a production-quality claims provider you
should read the permissible claim values from a store shared with the
identity provider. For more information, see the section "The People
Picker" earlier in this chapter.

You can configure the authentication methods that SharePoint will use
for a web application in the "SharePoint 2010 Central Administration"
site. Just navigate to Manage Web Applications, select the
application you want to change, and click on Authentication Providers.

Use a custom SPClaimProvider class to override the default people
picker behavior.

The SampleClaimsProvider class extends the abstract SPClaimProvider
class and overrides the methods FillHierarchy, FillResolve, and
FillSearch. The SPTrustedClaimsIssuer class, which derives from the
SPClaimProvider class, implements the default UI behavior in the
people picker.

The GetPickerEntry method is responsible for building an entry

that will display in the people picker. The following code sample

shows this method.

 

private PickerEntity GetPickerEntity(string ClaimValue, string

claimType, string GroupName)

{

PickerEntity pe = CreatePickerEntity();

 

var issuer = SPOriginalIssuers.Format(

SPOriginalIssuerType.TrustedProvider, TrustedProviderName);

pe.Claim = new SPClaim(claimType, ClaimValue,

Microsoft.IdentityModel.Claims.ClaimValueTypes.String,

issuer);

pe.Description = claimType + "/" + ClaimValue;

pe.DisplayText = ClaimValue;

pe.EntityData[PeopleEditorEntityDataKeys.DisplayName] =

ClaimValue;

pe.EntityType = SPClaimEntityTypes.Trusted;

pe.IsResolved = true;

pe.EntityGroupName = GroupName;

 

return pe;

}

 


 

This method uses the ClaimValue, claimType, and GroupName

strings to create a claim that the people picker can display. The Trusted

ProviderName variable refers to the name of the SharePoint trusted

identity token issuer that you are using: the SPOriginal

Issuers.Format method returns a string with the full name of the

original valid issuer that you must use when you create a new claim.

 

Notice that a claim definition includes the claim issuer as well as the

claim type and value. SharePoint will check the source of a claim as

a part of any authorization rules.

If you are creating an identity claim, you must ensure that the

claimType that you pass to the SPClaim constructor matches the

identity claim type of your trusted identity token issuer, and that you

set the EntityType property to SPClaimEntityTypes.User.

 

The people picker uses the value of the Description property to

display a tooltip in the UI when a user hovers the mouse over a

resolved claim.

If you deploy this solution to SharePoint, then the people picker

will display search results from this custom claim provider in addition

to results from the default, built-in claim provider. This means that if

a site administrator searches for a non-existent role or organization

claim, then the default claim provider will continue to resolve this

non-existent claim value. To prevent this behavior, you can make your

custom claim provider the default claim provider. If the name of the
trusted identity token issuer is "SAML Provider" and the name of the
custom claim provider is "ADFSClaimsProvider," then the following
PowerShell script will make the custom claim provider the default.

Adatum made the custom claim provider the default claim provider in
the SharePoint web applications.

 

$ti = Get-SPTrustedIdentityTokenIssuer "SAML Provider"
$ti.ClaimProviderName = "ADFSClaimsProvider"
$ti.Update()

 

It’s also important to ensure that the claim types that the site

administrator will use in the custom people picker exist in the trusted

identity token issuer. You can use the following PowerShell script to

list the claims that are present in the configuration.

 

$i = Get-SPTrustedIdentityTokenIssuer "SAML Provider"
$i.ClaimTypes

 

You can add claim types to an existing trusted identity token is-

suer using the technique shown in the following PowerShell script.

 

$map = New-SPClaimTypeMapping -IncomingClaimType
"http://schemas.microsoft.com/ws/2008/06/identity/claims/organization"
-IncomingClaimTypeDisplayName "Organization" -LocalClaimType
"http://schemas.microsoft.com/ws/2008/06/identity/claims/organization"
$ti = Get-SPTrustedIdentityTokenIssuer "SAML Provider"
$ti.ClaimTypes.Add(
"http://schemas.microsoft.com/ws/2008/06/identity/claims/organization")
Add-SPClaimTypeMapping -Identity $map
-TrustedIdentityTokenIssuer $ti

 

This script maps an incoming claim and defines the new claim

type in the trusted identity token issuer.

 

Single Sign-Out Control

To implement single sign-out behavior, you must be able to send the

WS-Federation wsignout message to the token issuer when the user

clicks either the “Sign out” or “Sign in with a different user” link on any

page in the a-Portal or a-Techs SharePoint web applications. Adatum

implemented the single sign-out logic in the SessionAuthentication

Module’s SignedIn and SigningOut events. The Visual Studio solu-

tion, SingleSignOutModule in the 10SharePoint folder from http://

claimsid.codeplex.com, includes a custom HTTP module to deploy to

your SharePoint web application that includes this functionality.

The following code sample shows the DoFederatedSignOut

method that the SigningOut event handler invokes to perform the

sign-out.

 

private void DoFederatedSignOut()

{

string providerName = GetProviderNameFromCookie();

SPTrustedLoginProvider loginProvider = null;

SPSecurity.RunWithElevatedPrivileges(delegate()

{

loginProvider = GetLoginProvider(providerName);

});

 

if (loginProvider != null)

{

string returnUrl = string.Format(

System.Globalization.CultureInfo.InvariantCulture,

"{0}://{1}/_layouts/SignOut.aspx",

HttpContext.Current.Request.Url.Scheme,

HttpContext.Current.Request.Url.Host);

HttpCookie signOutExpiredCookie =

new HttpCookie(SignOutCookieName, string.Empty);

signOutExpiredCookie.Expires = new DateTime(1970, 1, 1);

HttpContext.Current.Response.Cookies.

 


 

Remove(SignOutCookieName);

HttpContext.Current.Response.Cookies.

Add(signOutExpiredCookie);

WSFederationAuthenticationModule.FederatedSignOut(

loginProvider.ProviderUri, new Uri(returnUrl));

}

}

 

This method performs the sign-out by calling the SharePoint
SPFederationAuthenticationModule.FederatedSignOut method, passing
the address of the claims provider and the address of the SharePoint
web application's sign-out page as parameters. To discover the
address of the claims provider, it uses an SPTrustedLoginProvider
object; however, to get a reference to the SPTrustedLoginProvider
object it needs its name, and it discovers the name by reading the
custom sign-out cookie.

This method reads the provider name from a custom sign-out cookie
rather than from the IClaimsIdentity object associated with the
current user: this is because if the user's session has expired,
there will be no IClaimsIdentity object. Also, it's not safe to read
the provider name from the FedAuth cookie.

This method uses the SPSecurity.RunWithElevatedPrivileges method to
invoke the GetLoginProvider method with "Full Control" permissions.

The following code sample shows how Adatum creates the custom
sign-out cookie in the Session_SignedIn event.

private const string SignOutCookieName = "SPSignOut";

void WSFederationAuthenticationModule_SignedIn(object sender,

EventArgs e)

{

IClaimsIdentity identity =

HttpContext.Current.User.Identity as IClaimsIdentity;

 

if (identity != null)

{

foreach (Claim claim in identity.Claims)

{

if (claim.ClaimType == SPClaimTypes.IdentityProvider)

{

int index = claim.Value.IndexOf(':');
string loginProviderName = claim.Value.Substring(
index + 1, claim.Value.Length - index - 1);

HttpCookie signOutCookie = new HttpCookie(

SignOutCookieName,

Convert.ToBase64String(

System.Text.Encoding.UTF8.

GetBytes(loginProviderName)));

signOutCookie.Secure = FederatedAuthentication

.SessionAuthenticationModule

.CookieHandler.RequireSsl;

 


 

signOutCookie.HttpOnly = FederatedAuthentication

.SessionAuthenticationModule.CookieHandler

.HideFromClientScript;

signOutCookie.Domain = FederatedAuthentication

.SessionAuthenticationModule.CookieHandler

.Domain;

HttpContext.Current.Response.Cookies.Add(signOutCookie);

break;

}

}

}

}

One of the key reasons that Adatum selected this approach for
handling single sign-out was its compatibility with the
sliding-sessions implementation that Adatum chose to use. The
sign-out process must be initiated when the user is inactive for more
than the defined period of inactivity and when the user's SAML token
ValidTo time is reached. For details about how Adatum implemented
sliding sessions in the a-Portal web application, see Chapter 12,
"Federated Identity for SharePoint Applications."

The custom sign-out cookie is not encrypted or signed. It is
transported using SSL, and only contains the name of the
user's login provider.

You can find a complete listing of the global.asax file that Adatum
uses in the a-Portal web application at the end of this chapter.

Displaying Claims in a Web Part

When you're developing a claims-enabled SharePoint solution, it's
useful to be able to view the set of claims that a user has when he
visits a SharePoint web application. The Visual Studio solution called
DisplayClaimsWebPart in the 10SharePoint folder from
http://claimsid.codeplex.com includes a SharePoint Web Part that
displays claims data for the current user. The Web Part displays the
following claims data:

•     The claim type.

 

•     The claim issuer (this is typically SharePoint).

•     The original claim issuer (this might be a trusted provider

or the SharePoint STS).

•     The claim value.

 

This is a standard Web Part that you can deploy to a SharePoint

web application directly from Visual Studio or through the SharePoint

UI. After the Web Part is deployed to SharePoint you can add it to any

SharePoint web page. It does not require any further configuration.

 

User Profile Synchronization

A claims-enabled SharePoint environment can synchronize user pro-

file data stored in the SharePoint profile store with profile data that

is stored in directory services and other business systems in the enter-

prise. The important difference in the way that user profiles work in

a claims-enabled web application such as the Adatum a-Techs Share-

 


 

Point application is how SharePoint identifies the correct user profile

from the claims data associated with an SPUser instance.

To make sure that SharePoint can match up a user profile from the

current SPUser instance, you must ensure that three user properties

are correctly configured.

 

Property name Property value

 

Claim User Identifier This is the unique identifier for a user. For
Adatum, this is the value it used for the IdentifierClaim parameter
when it configured the SharePoint trusted identity token issuer:
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress.

Claim Provider Identifier This identifies the trusted identity token
issuer. For Adatum this value is "SAML Provider." This value is set
automatically when you configure the user profile synchronization
service.

Claim Provider Type This specifies the token provider type. For
Adatum this value is "Trusted Claims Provider Authentication." This
value is set automatically when you configure the user profile
synchronization service.

To test this, you must have SharePoint 2010 Server (not Foundation)
installed in Farm (not Standalone) mode.

 

Setup and Physical Deployment

 

To run this scenario in a lab environment you may want to change

some of the default configuration options in SharePoint and ADFS.

 

FedAuth Tokens

Each SharePoint web application must have its own FedAuth cookie

if it is to function correctly in a single sign-on environment. In a

production environment, this is not normally an issue because each

SharePoint web application has a separate host name: for example,

a-portal.adatum.com, and a-techs.adatum.com. However, in a lab en-

vironment you may not want to configure the necessary DNS infra-

structure to support this; if your SharePoint web applications share

the same host name, for example lab-sp.adatum.com:31242 and lab-

sp.adatum.com:40197, then you must make a configuration change to

make sure that each application uses a different name for the FedAuth

cookie. You can change the name of the FedAuth cookie in the micro-

soft.IdentityModel section of the Web.config file. The following

snippet shows how to change the token name to “techsFedAuth”

from its default name of “FedAuth.”

 

<federatedAuthentication>
  <cookieHandler mode="Custom" path="/" name="techsFedAuth" />
</federatedAuthentication>

 


ADFS Default Authentication Method

By default, an Active Directory Federation Services (ADFS) server

installation uses Integrated Windows Authentication, and an ADFS

proxy installation uses an ASP.NET form to collect credentials. In a lab

environment, if you do not have an ADFS proxy installation, you may

want to change the default behavior of the ADFS server to use an

ASP.NET form. To change this, edit the Web.config file in the /adfs/ls

folder. The following snippet shows “Forms” at the top of the list,

making it the default. This means that in a simple lab environment you

will always need to sign in explicitly.

 

<microsoft.identityServer.web>
  <localAuthenticationTypes>
    <add name="Forms" page="FormsSignIn.aspx" />
    <add name="Integrated" page="auth/integrated/" />
    <add name="TlsClient" page="auth/sslclient/" />
    <add name="Basic" page="auth/basic/" />
  </localAuthenticationTypes>
</microsoft.identityServer.web>

 

Server Deployment

ADFS enables you to deploy proxy instances that are intended to handle authentication requests from the web; requests from the internal corporate network are handled by the main ADFS server instances. This provides an additional layer of security because the main ADFS server instances can be kept inside the corporate firewall. For more information about deploying ADFS servers and ADFS server proxies, see this section on the TechNet website: http://technet.microsoft.com/en-us/library/gg982491(ws.10).aspx. You will also need to ensure that your SharePoint web application is exposed to the internet to allow Adatum employees to access it remotely.

 

Questions

 

1. Which of the following roles can the embedded STS

in SharePoint perform?

 

a. Authenticating users.

 

b. Issuing FedAuth tokens that contain the claims

associated with a user.

 

c. Requesting claims from an external STS such as ADFS.

 


d. Requesting claims from Active Directory through

Windows Authentication.

 

2. Custom claim providers use claims augmentation to perform

which function?

 

a. Enhancing claims by verifying them against an external

provider.

 

b. Enhancing claims by adding additional metadata to

them.

 

c. Adding claims data to the identity information in the

SPUser object if the SharePoint web application is in

“legacy” authentication mode.

 

d. Adding additional claims to the set of claims from the

identity provider.

 

3. Which of the following statements about the FedAuth

cookie in SharePoint are correct?

 

a. The FedAuth cookie contains the user’s claim data.

 

b. Each SharePoint web application has its own FedAuth

cookie.

 

c. Each site collection has its own FedAuth cookie.

 

d. The FedAuth cookie is always a persistent cookie.

 

4. In the scenario described in this chapter, why did Adatum

choose to customize the people picker?

 

a. Adatum wanted the people picker to resolve role

and organization claims.

 

b. Adatum wanted the people picker to resolve name

and emailaddress claims from ADFS.

 

c. Adatum wanted to use claims augmentation.

 

d. Adatum wanted to make it easier for site

administrators to set permissions reliably.

 

5. In order to implement single sign-out behavior in SharePoint, which of the following changes did Adatum make?

 

a. Adatum modified the standard signout.aspx page to

send a wsignoutcleanup message to ADFS.

 

b. Adatum uses the SessionAuthenticationModule

SigningOut event to customize the standard sign-out

process.

 


c. Adatum added custom code to invalidate the FedAuth

cookie.

 

d. Adatum configured SharePoint to use a session-based

FedAuth cookie.

 

More Information

 

For more information about SharePoint and claims-based identity,

see Appendix F, “SharePoint 2010 Authentication Architecture and

Considerations.”

For a detailed, end-to-end walkthrough that describes how to configure SharePoint and ADFS, see this blog post: http://blogs.technet.com/b/speschka/archive/2010/07/30/configuring-sharepoint-2010-and-adfs-v2-end-to-end.aspx.

The following resources are useful if you are planning to create a custom people picker component for your SharePoint environment:

•     People Picker overview (SharePoint Server 2010): http://technet.microsoft.com/en-us/library/gg602068.aspx

•     Custom claims providers for People Picker (SharePoint Server 2010): http://technet.microsoft.com/en-us/library/gg602072.aspx

•     Creating Custom Claims Providers in SharePoint 2010: http://msdn.microsoft.com/library/gg615945.aspx

•     Claims Walkthrough: Writing Claims Providers for SharePoint 2010: http://msdn.microsoft.com/en-us/library/ff699494.aspx

•     How to Override the Default Name Resolution and Claims Provider in SharePoint 2010: http://blogs.technet.com/b/speschka/archive/2010/04/28/how-to-override-the-default-name-resolution-and-claims-provider-in-sharepoint-2010.aspx

For further information about using profiles in a claims-enabled SharePoint environment, see this blog post: http://blogs.msdn.com/b/brporter/archive/2010/07/19/trusted-identity-providers-amp-user-profile-synchronization.aspx.

 

12 Federated Identity for SharePoint Applications

 

In previous chapters, you saw ways that federated identity can help companies share resources with their partners. The scenarios have included small numbers of partners as well as large numbers of constantly changing partners, sharing web applications and web services, and supporting multiple client platforms. These scenarios share an important feature: they all use claims.

In Chapter 11, “Claims-Based Single Sign-On for Microsoft SharePoint 2010,” you saw how Adatum could expand its single sign-on domain to include Microsoft® SharePoint® services web applications. The SharePoint web applications at Adatum used claims-based authentication, with claims from an external token issuer, Microsoft Active Directory® Federation Services (ADFS).

In this chapter, you’ll learn how Adatum lets employees at one of its customers, Litware, use the a-Portal SharePoint application that was introduced in Chapter 11, “Claims-Based Single Sign-On for Microsoft SharePoint 2010.”

Adatum wants to allow selected partners access to its SharePoint a-Portal web application.

 

The Premise

 

The a-Portal SharePoint application has given Adatum sales personnel

access to up-to-date and accurate product information during the

sales process, which has resulted in improved customer satisfaction.

However, there have been complaints from customers who make purchases through Adatum’s partners that some of the product information has been out of date. This is because Adatum’s partners are responsible for keeping the product information that they use up to date. One of these sales partners is Litware. Rick, the CIO of Litware, has seen the a-Portal SharePoint application and he is keen for his sales staff to use a-Portal instead of their own copy of the product information. Adatum has already claims-enabled the a-Portal SharePoint application (for further information, see Chapter 11, “Claims-Based Single Sign-On for Microsoft SharePoint 2010”) and made it available to Adatum employees who work remotely on the Internet. Litware has already deployed ADFS, so most of the required federation infrastructure is already available.

 

Goals and Requirements

 

The primary goal of this scenario is to show how to create a SharePoint site that uses federated identities, so that users from Litware can access the Adatum a-Portal SharePoint application without having to sign in again to the Adatum security realm. The types of claims issued by Litware are not the same types as the claims used by a-Portal at Adatum, so it’s necessary to include some claims transformation logic to convert the claims issued by Litware. Adatum anticipates that other sales partners will also want to use the a-Portal application, so the solution must be able to accommodate multiple identity providers.

The solution should also ensure that partners are kept isolated. For example, there may be some product information that only Adatum, and not Litware, sales personnel should see.

For security, Adatum wants to have SharePoint automatically sign

users out of the a-Portal application after a period of inactivity. In

addition, because users will be accessing the a-Portal application on

computers outside the Adatum corporate network, when a user

closes the browser and then re-opens it, the user must re-authenticate

to gain access to the a-Portal web application.

 

Overview of the Solution

Figure 1 shows an overview of the solution adopted by Adatum and Litware. It shows a new trust relationship between Adatum’s issuer and Litware’s issuer. In addition to acting as an identity provider (IdP) for Adatum employees, the Adatum ADFS instance now functions as a federation provider (FP) for partners such as Litware.

Adatum has deployed an ADFS proxy to support authenticating users outside of the Adatum corporate network.

 

figure 1
Federating identity with Litware
(Diagram: Rick’s browser at Litware; the Litware identity provider, an ADFS issuer; the Adatum federation provider, also ADFS; and the a-Portal SharePoint team site with its FedAuth cookie. SharePoint trusts the Adatum FP, which in turn trusts the Litware IP. The numbered steps 1 to 3 are described below.)

 

When Rick, a user from Litware, browses to the a-Portal SharePoint web application, SharePoint detects that Rick is not authenticated, and redirects his browser to the Adatum federation provider. The Adatum federation provider then redirects Rick’s browser to the Litware issuer.

 

For details about how to customize the way that SharePoint redirects a user to a token issuer, see the section “The Sign-In Page” in Chapter 11, “Claims-Based Single Sign-On for Microsoft SharePoint 2010.”

 

The numbers in the following list correspond to the numbers in

Figure 1.

 

1. Rick authenticates with the Litware identity provider and

obtains a SAML token with claims issued by Litware.

 

2. Rick’s browser redirects back to the Adatum issuer. This

federation provider can apply some custom claims mapping

rules to the set of claims from Litware to create a set of

claims suitable for the a-Portal web application. The

federation provider issues this new set of claims as a SAML

token.

 


3. Rick’s browser redirects back to SharePoint. SharePoint

validates the token, checks any authorization rules that

apply to the page that Rick requested, and if Rick has

permission, displays the page.

Adatum considered two alternative models for federating with

partners. The first, which is the one that Adatum selected, is shown in

Figure 2.

 

figure 2
The hub model
(Diagram: SharePoint and its a-Portal team site trust only the Adatum federation provider, an ADFS issuer; the Adatum FP in turn trusts the Litware, Fabrikam, and Contoso identity providers.)

 

In the hub model, SharePoint has a single trust relationship with the Adatum federation provider. The Adatum federation provider then trusts the partners’ issuers. The Adatum federation provider can apply rules to the claims from the different identity providers to create claims that the SharePoint web application understands.
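The claims-mapping idea behind the hub model can be sketched as a small lookup, one mapping table per partner. This is an illustrative Python model only; the claim types, values, and function names below are invented for the sketch and are not Adatum’s actual ADFS rules or any ADFS API.

```python
# Hypothetical per-partner mapping tables: (claim type, value) pairs
# issued by a partner map to the common claim set a-Portal understands.
MAPPINGS = {
    "litware": {
        ("http://litware/role", "Sales"): ("http://adatum/role", "Partner Sales"),
    },
    "fabrikam": {
        ("http://fabrikam/group", "Sellers"): ("http://adatum/role", "Partner Sales"),
    },
}

def transform(partner, claims):
    """Map partner-issued claims to the common claim set; drop anything
    that has no mapping rule, so unknown claims never reach SharePoint."""
    table = MAPPINGS[partner]
    return [table[claim] for claim in claims if claim in table]

print(transform("litware", [("http://litware/role", "Sales")]))
```

Adding a new partner in this model means adding one mapping table at the federation provider, without touching SharePoint.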

Figure 3 shows the alternative model.

 

figure 3
The direct trust model
(Diagram: SharePoint manages separate trust relationships directly with the Adatum issuer and with the Litware, Fabrikam, and Contoso identity providers.)

 

In the direct trust model, SharePoint manages a trust relationship with each issuer directly, and uses custom claims providers to manipulate the incoming claims to a common set of claims that the a-Portal web application understands.

The advantages of the hub model adopted by Adatum are that:

1.     It’s easier to manage multiple trust relationships in ADFS rather than in SharePoint.

2.     It’s simpler to manage a single trust relationship in SharePoint, and it avoids the requirement for multiple custom claims providers.

3.     You can reuse the trust relationships in the federation provider with other relying parties.

4.     You can leverage ADFS features such as integration with auditing tools to track token issuing.

5.     ADFS supports the Security Assertion Markup Language protocol (SAMLP) in addition to WS-Federation.

An advantage of the SAMLP protocol over WS-Federation is that it supports initializing the authentication process from the identity provider instead of the relying party, which avoids the requirement for either the relying party (RP) or the federation provider to perform home-realm discovery.

 


The disadvantage of the hub approach is its performance: it requires more hops to acquire a valid SAML token. With this in mind, Adatum made some changes to the token caching policy in the a-Portal web application to reduce the frequency at which it’s necessary to refresh the SAML token. However, Adatum is using session cookies rather than persistent cookies to store the SAML token references, so that if the user closes his browser, he will be forced to re-authenticate when he next visits the a-Portal web application.

Adatum implemented sliding sessions for users of the a-Portal web application: after a token issuer authenticates a user, the user can continue using the a-Portal web application without having to re-authenticate if he remains active. If a user becomes inactive in the web application for more than a defined period, then he must re-authenticate with the claims issuer and obtain a new SAML token. With the sliding-sessions solution in place:

•     Provided a user remains active in the a-Portal web application, SharePoint will not interrupt the user and require him to re-authenticate with the SAML token issuer.

•     The a-Portal web application remains secure because users who become inactive must re-authenticate when they start using the application again.

Strictly speaking, the session cookie doesn’t contain the SAML token; it contains a reference to the SAML token in the SharePoint token cache.

It’s important that the sliding-session implementation is compatible with the single sign-out solution that Chapter 11, “Claims-Based Single Sign-On for Microsoft SharePoint 2010,” describes.

Inside the Implementation

The following sections describe the key configuration steps that Adatum performed in order to implement the scenario that this chapter describes. The hub model that Adatum selected meant that the changes in SharePoint were minimal: there is still a single trust relationship with the Adatum issuer.

The following sections describe the changes Adatum made to the a-Portal web application in SharePoint to support access from partner organizations.

 

Token Expiration and Sliding Sessions

One of the Adatum requirements was that the a-Portal application automatically sign users out after a defined period of inactivity, but allow them to continue working with the application without re-authenticating so long as they remain active. To achieve this, Adatum implemented a sliding-session mechanism in SharePoint that can renew the SharePoint session token. For performance reasons, Adatum wanted to be able to extend the lifetime of the session token without having to revisit ADFS (the federation provider) or the partner’s token issuer.

The main configuration changes were in ADFS: adding the trust relationship with Litware and adding the rules to convert Litware claims to Adatum claims.

 


A cookie (usually named FedAuth) that can exist either as a persistent or in-memory cookie represents the SharePoint session token. This cookie contains a reference to the SAML token that SharePoint stores in its token cache. The SAML token contains the claims issued to the user by any external identity and federation providers, and by the internal SharePoint security token service (STS).

Before showing the details of how Adatum implemented sliding sessions, it will be useful to understand how token expiration works by default in SharePoint.

 

SAML Token Expiration in SharePoint

This section describes the standard behavior in SharePoint as it relates

to token expiration.

When Rick from Litware first tries to access the a-Portal web

application, his browser performs all of the following steps in order to

obtain a valid SAML token:

 

1. Rick requests a page in the a-Portal web application.

 

2. Rick’s browser redirects to the SharePoint STS.

 

3. Because Rick is not yet authenticated, the SharePoint STS

redirects Rick’s browser to the Adatum issuer to request a

token.

 

4. The Adatum issuer redirects Rick’s browser to the Litware

issuer to authenticate and obtain a Litware token.

 

5. Rick’s browser returns to the Adatum issuer to transform

the Litware token into an Adatum token.

 

6. Rick’s browser returns to the a-Portal web application to

sign in to SharePoint.

 

7. Rick’s browser returns to the page that Rick originally requested in the a-Portal web application.

All SAML tokens have a fixed lifetime that the token issuer specifies when it issues the token; in the Adatum scenario, it is the Adatum ADFS that sets this value. Once a token has expired, the user must request a new SAML token from the token issuer. For Rick at Litware, this means repeating the steps listed above. Because this takes time, Adatum does not want users such as Rick to have to reauthenticate too frequently. However, using a token with a long lifetime can be a security risk because someone else could use Rick’s computer while he wasn’t there and access the a-Portal web application with Rick’s cached token.

When Rick’s SAML token expires he may, or may not, need to re-enter his credentials at the token issuer (ADFS): this depends on the configuration of the issuer. In ADFS, you can specify the web single sign-on (SSO) lifetime that determines the lifetime of the ADFS SSO cookie.

 


The following table describes the two configuration options that

directly affect when SharePoint requires a user to get a new SAML

token from the issuer.

 

•     SAML token lifetime. The token issuer sets this value. In ADFS, you can configure this separately for each relying party by using the Set-ADFSRelyingPartyTrust PowerShell command. Once the SAML token expires, the SharePoint session expires, and the user must re-authenticate with the token issuer to obtain a new SAML token. By default, SharePoint sets the session lifetime to be the same as the SAML token lifetime.

•     LogonTokenCacheExpirationWindow. This SharePoint configuration value controls when SharePoint will consider that the SAML token has expired and ask the user to re-authenticate with the issuer and obtain a new token. SharePoint checks whether the SAML token has expired at the start of every request. For example, if ADFS sets the SAML token lifetime to ten minutes, and the LogonTokenCacheExpirationWindow property in SharePoint is set to two minutes, then the session in SharePoint will be valid for eight minutes. If the user requests a page from SharePoint seven minutes after signing in, then SharePoint checks whether the session is set to expire during the time in minutes represented by LogonTokenCacheExpirationWindow: in this case the answer is no, because seven plus two is less than ten. If the user requests a page from SharePoint nine minutes after signing in, the answer is yes, because nine plus two is greater than ten.

Make sure that the value of the LogonTokenCacheExpirationWindow property is always less than the SAML token lifetime; otherwise, you’ll see a loop whenever a user tries to access your SharePoint web application and keeps being redirected back to the token issuer.

 

The following script example shows you how to change the lifetime of the SAML token issued by the “SharePoint Adatum Portal” relying party in ADFS to 10 minutes.

 

Add-PSSnapin Microsoft.ADFS.PowerShell
Set-AdfsRelyingPartyTrust -TargetName "SharePoint Adatum Portal" -TokenLifeTime 10

 

The following script example shows you how to change the

LogonTokenCacheExpirationWindow in SharePoint to two minutes.

 

$ap = Get-SPSecurityTokenServiceConfig
$ap.LogonTokenCacheExpirationWindow = (New-TimeSpan -minutes 2)
$ap.Update()
iisreset

 


These two configuration settings will cause SharePoint to redirect the user to the issuer to sign in again eight minutes after the user last authenticated with ADFS.
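The expiration check described above can be sketched in a few lines of Python. This is only an illustrative model of the arithmetic; the function name is invented and is not a SharePoint API.

```python
def needs_new_token(minutes_since_issue, token_lifetime, expiration_window):
    """Model of SharePoint's per-request check: the SAML token is treated
    as expired once the session would expire within the
    LogonTokenCacheExpirationWindow."""
    return minutes_since_issue + expiration_window >= token_lifetime

# With a 10-minute token lifetime and a 2-minute expiration window:
print(needs_new_token(7, 10, 2))   # 7 + 2 < 10, session still valid
print(needs_new_token(9, 10, 2))   # 9 + 2 > 10, redirect to the issuer
```

The boundary case confirms the eight-minute figure: at exactly eight minutes, 8 + 2 equals 10, so the user is redirected.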

The sequence diagram in Figure 4 shows how SharePoint manages its session lifetime and the SAML token that it receives from the token issuer.

figure 4
Standard token expiration in SharePoint
(Sequence diagram: the browser, the a-Portal web application, the SharePoint sign-in pages at /_layouts/Authenticate.aspx, /_login/default.aspx, and /_trust/default.aspx, and the SAML token issuer, ADFS. On the first request no session exists, so the browser is redirected through the sign-in pages to ADFS; the SAML token is posted to the SharePoint STS at /_trust/, saved in the SharePoint token cache, a session is created, and the browser is redirected to the originally requested page. After the interval TR the session has expired and the whole sign-in sequence repeats. The numbered annotations 1 to 4 are described below.)

 

Figure 4 shows a simplified view of the sequence of interactions. In

reality, SharePoint and the WS-Federation protocol use browser

redirects and automatic posts to manage the interactions between

the various components so that all of the requests go through the

browser.

 

In the sequence diagram, TR represents the time from when ADFS issues the SAML token to when SharePoint will try to renew the token. Based on the configuration settings described above, TR is set to eight minutes.

 


The following notes refer to the numbers on the sequence diagram:

 

1. This is the first time that the user visits the a-Portal web

application; there is no valid session so SharePoint redirects

the user to begin the sign-in process.

 

2. SharePoint creates a session for the user. The lifetime of the

session is the same as the lifetime of the SAML token issued

by ADFS.

 

3. SharePoint uses the session lifetime and the LogonTokenCacheExpirationWindow property to determine when the user must sign in again. At this point, the session is still valid. While the session is valid, the user can continue to visit pages in the SharePoint web application.

4. SharePoint uses the session lifetime and the LogonTokenCacheExpirationWindow property to determine when the user must sign in again. At this point, SharePoint determines that the session has expired, so it begins the sign-in process again. If the ADFS SSO cookie has expired, Rick will have to enter his credentials to obtain a new SAML token.

 

To force users to re-enter their credentials whenever they are redirected back to ADFS, you should set the web SSO lifetime in ADFS to be less than or equal to the SAML token lifetime minus the value of LogonTokenCacheExpirationWindow. In the Adatum scenario, the web SSO lifetime in ADFS should be set to eight minutes to force users to re-authenticate when SharePoint redirects them to ADFS.

 

Sliding Sessions in SharePoint

Adatum wanted to implement sliding sessions so that SharePoint can extend the lifetime of the session if the user remains active. Adatum wanted to be able to define an inactivity period, after which SharePoint forces the user to re-authenticate with ADFS. In other words, a user will only need to sign in again if the session is allowed to expire or if the SAML token expires. In this scenario, the session lifetime will be less than the SAML token lifetime.

To implement this behavior, Adatum first configured ADFS to issue SAML tokens with a lifetime of eight hours. The following Microsoft Windows® PowerShell® command-line interface script shows how you can configure this setting in ADFS for the SharePoint Adatum Portal relying party.

 


Add-PSSnapin Microsoft.ADFS.PowerShell
Set-AdfsRelyingPartyTrust -TargetName "SharePoint Adatum Portal" -TokenLifeTime 480

 

By setting the LogonTokenCacheExpirationWindow value to

470 minutes, Adatum can effectively set the session duration to 10

minutes.

 

$ap = Get-SPSecurityTokenServiceConfig
$ap.LogonTokenCacheExpirationWindow = (New-TimeSpan -minutes 470)
$ap.Update()
iisreset
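The arithmetic behind this pairing of settings can be sketched as follows. This is an illustrative Python model only; the function name is invented and is not a SharePoint API.

```python
def effective_session_minutes(saml_token_lifetime, expiration_window):
    # SharePoint treats the session as expired once it comes within the
    # LogonTokenCacheExpirationWindow of the SAML token's expiry, so the
    # effective session is the difference between the two values.
    return saml_token_lifetime - expiration_window

print(effective_session_minutes(480, 470))  # Adatum: a 10-minute session
print(effective_session_minutes(10, 2))     # the earlier example: 8 minutes
```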

 

Remember: a reference to the SAML token in the SharePoint token cache is stored in the session. The session is represented by the FedAuth cookie.

Adatum then modified the way that SharePoint manages its sessions. SharePoint now recreates a new session before the existing session expires (as long as the user visits the SharePoint web application before the existing session expires). A user can continue to recreate sessions up to the time that the SAML token finally expires; in this scenario, the user could continue using the a-Portal web application for eight hours without having to re-authenticate. If the user doesn’t visit the web application before the session expires, then on the next visit he must sign in again. The Microsoft Visual Studio® development system solution, SlidingSessionModule, found in the 10SharePoint folder from http://claimsid.codeplex.com, includes a custom HTTP module to deploy to your SharePoint web application that provides this functionality. The following code sample from the Adatum custom HTTP module shows the implementation.

 

public void Init(HttpApplication context)
{
    // Sliding session
    FederatedAuthentication.SessionAuthenticationModule
        .SessionSecurityTokenReceived +=
            SessionAuthenticationModule_SessionSecurityTokenReceived;
}

private void SessionAuthenticationModule_SessionSecurityTokenReceived(
    object sender,
    SessionSecurityTokenReceivedEventArgs e)
{
    double sessionLifetimeInMinutes =
        (e.SessionToken.ValidTo - e.SessionToken.ValidFrom).TotalMinutes;

    var logonTokenCacheExpirationWindow = TimeSpan.FromSeconds(1);
    SPSecurity.RunWithElevatedPrivileges(delegate()
    {
        logonTokenCacheExpirationWindow =
            Microsoft.SharePoint.Administration.Claims
                .SPSecurityTokenServiceManager
                .Local.LogonTokenCacheExpirationWindow;
    });

    DateTime now = DateTime.UtcNow;
    DateTime validTo = e.SessionToken.ValidTo
        - logonTokenCacheExpirationWindow;
    DateTime validFrom = e.SessionToken.ValidFrom;

    if ((now < validTo) &&
        (now > validFrom.AddMinutes(
            (validTo - validFrom).TotalMinutes / 2)))
    {
        SessionAuthenticationModule sam =
            FederatedAuthentication.SessionAuthenticationModule;
        e.SessionToken = sam.CreateSessionSecurityToken(
            e.SessionToken.ClaimsPrincipal,
            e.SessionToken.Context,
            now,
            now.AddMinutes(sessionLifetimeInMinutes),
            e.SessionToken.IsPersistent);
        e.ReissueCookie = true;
    }
}

 

This method first determines the valid from time and valid to time of the existing session, taking into account the value of the LogonTokenCacheExpirationWindow property. Then, if the existing session is more than halfway through its lifetime, the method uses the SessionAuthenticationModule instance to extend the session. It does this by creating a new session that has the same lifetime as the original, but which has a ValidFrom property set to the current time.
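The renewal decision in the HTTP module can be modeled in a few lines of Python. This is a minimal sketch of the same logic; the function name, dates, and variables are illustrative and are not SharePoint APIs.

```python
from datetime import datetime, timedelta

def should_renew(now, valid_from, valid_to, expiration_window):
    """Mirror of the module's check: renew while the session is still
    valid (outside the expiration window) but past its halfway point."""
    effective_valid_to = valid_to - expiration_window
    halfway = valid_from + (effective_valid_to - valid_from) / 2
    return halfway < now < effective_valid_to

issued = datetime(2011, 1, 1, 9, 0)
expires = issued + timedelta(minutes=480)   # 8-hour SAML token lifetime
window = timedelta(minutes=470)             # LogonTokenCacheExpirationWindow

# The effective 10-minute session renews only after its 5-minute midpoint:
print(should_renew(issued + timedelta(minutes=3), issued, expires, window))
print(should_renew(issued + timedelta(minutes=8), issued, expires, window))
```

Renewing only after the midpoint avoids reissuing the cookie on every single request while the session is young.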

The sequence diagram in Figure 5 shows how SharePoint handles

Adatum’s sliding-sessions implementation.

 

figure 5
Sliding sessions in the a-Portal web application
(Sequence diagram: as in Figure 4, on the first request the browser is redirected through the SharePoint sign-in pages to ADFS, the SAML token is saved in the SharePoint token cache, and a session represented by the FedAuth cookie is created. While the user remains active, SharePoint reissues the session before the interval TF elapses; if the user is inactive for longer than TF, the session has expired and the sign-in sequence starts again. The numbered annotations 1 to 5 are described below.)

 

The sequence diagram shows a simplified view of the sequence of interactions. In reality, SharePoint and the WS-Federation protocol use browser redirects and automatic posts to manage the interactions between the various components so all of the requests go through the browser.

 

In the sequence diagram, TF represents the session lifetime. The

session lifetime also defines the inactivity period, after which a user

must re-authenticate with ADFS.

The following notes refer to the numbers on the sequence diagram:

 

1. This is the first time that the user visits the a-Portal web

application; there is no valid session so SharePoint redirects

the user to begin the sign-in process.

 

2. SharePoint creates a session for the user. The effective

lifetime of the session is the difference between the

lifetime of the SAML token issued by ADFS and the value

of the LogonTokenCacheExpirationWindow property.

For Adatum, the lifetime of the session is 10 minutes:

 

———————– Page 269———————–

 

232 chapter twelve

 

the lifetime of the SAML token is 480 minutes, and the

value of the LogonTokenCacheExpirationWindow

property is 470 minutes.

 

3. SharePoint checks the age of the session. At this point,

although the session is still valid, it is nearing the end of

its lifetime so SharePoint creates a new session, copying

data from the existing session.

 

4. SharePoint checks the age of the session. At this point,

it is still near the beginning of its lifetime so SharePoint

continues to use this session.

 

5. SharePoint checks the age of the session. At this point,

the session has expired so SharePoint initiates the process

of re-authenticating with the identity provider.
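
For reference, the two settings behind the 10-minute effective lifetime
could be configured with scripts like the following. This is a sketch;
the relying party name "SharePoint a-Portal" is a placeholder, and the
commands assume the SharePoint 2010 and ADFS 2.0 PowerShell snap-ins
are loaded on the respective servers.

```powershell
# On the SharePoint server: set the token cache expiration
# window to 470 minutes.
$sts = Get-SPSecurityTokenServiceConfig
$sts.LogonTokenCacheExpirationWindow = (New-TimeSpan -Minutes 470)
$sts.Update()
iisreset

# On the ADFS server: set the SAML token lifetime for the
# SharePoint relying party to 480 minutes.
Set-ADFSRelyingPartyTrust -TargetName "SharePoint a-Portal" `
  -TokenLifetime 480
```

The effective session lifetime is the difference between the two
values: 480 − 470 = 10 minutes.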

 

Closing the Browser

The default behavior for SharePoint is to use persistent session cook-

ies. This enables a user to close the browser, re-open the browser, and

re-visit a SharePoint web application without signing in again. Adatum

wants users to always re-authenticate if they close the browser and

then re-open it and revisit the a-Portal web application. To enforce

this behavior, Adatum configured SharePoint to use an in-memory

instead of a persistent session cookie. You can use the following Pow-

erShell script to do this.

 

$sts = Get-SPSecurityTokenServiceConfig

$sts.UseSessionCookies = $true

$sts.Update()

iisreset

 

Authorization Rules

With multiple partners having access to the a-Portal SharePoint web

application, Adatum wants to have the ability to restrict access to

documents in the SharePoint document library based on the organiza-

tion that the user belongs to. Adatum wants to be able to use the

standard SharePoint groups mechanism for assigning and managing

permissions, so it needs some way to identify the organization a user

belongs to.

Adatum achieves this objective by using claims. Adatum has con-

figured ADFS to add an organization claim to the SAML token it is-

sues based on the federated identity provider that originally authen-

ticated the user. You should not rely on the identity provider to issue

the organization claim because a malicious administrator at a partner

 

———————– Page 270———————–

 

federated identity for sharepoint applications 233

 

organization could add an organization claim with another partner’s

value and gain access to confidential data.
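
One way to realize this in ADFS is an issuance rule attached to each
claims provider trust that stamps a fixed organization value,
discarding anything the partner asserts. The following is a sketch in
the ADFS claim rule language; the claim type URI and the partner value
are placeholders, not values taken from the scenario.

```
=> issue(Type = "http://schemas.adatum.com/claims/2009/08/organization",
         Value = "Litware");
```

Because the rule is attached to the Litware claims provider trust in
ADFS, a Litware user always receives the Litware organization value,
regardless of what the Litware issuer put in the incoming token.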

Chapter 11, “Claims-Based Single Sign-On for Microsoft Share-

Point 2010,” describes how to add the organization claim to the Share-

Point people picker to make it easy for site administrators to set

permissions based on the value of the organization claim.

 

Home Realm Discovery

If Adatum shares the a-Portal web application with multiple partners,

each of those partners will have its own identity provider, as shown in

Figure 2 earlier in this chapter. With multiple identity providers in

place, there must be some mechanism for selecting the correct iden-

tity provider for a user to use for authentication, and that’s the home-

realm discovery process.

Adatum decided to customize the home-realm discovery page

that ADFS provides. The default page in ADFS (/adfs/ls/HomeRealm-

Discovery.aspx) displays a drop-down list of the claims provider trusts

configured in ADFS (claims provider trusts represent identity provid-

ers in ADFS) for the user to select an identity provider. ADFS then

redirects the user to the sign-in page at the identity provider. It’s easy

to customize this page with partner logos to make it easier for users

to select the correct identity provider. In addition, this page in ADFS

has access to the relying party identifier in the wtrealm parameter so

it can customize the list of identity providers based on the identity of

the SharePoint relying party web application. After a user has selected

an identity provider for the first time, ADFS can remember the choice
so that in the future, the user bypasses the home-realm discovery page
and ADFS redirects the browser directly to the identity provider's
sign-in page.

 

For details about how to customize the ADFS home-realm discovery

page and configure how long ADFS will remember a user’s selection,

see this page on the MSDN® web site:
http://msdn.microsoft.com/en-us/library/bb625464(vs.85).aspx.

 

Adatum also considered the following options related to the

home-realm discovery page.

•     Automatically determine a user’s home realm based on the user’s

IP address. This would remove the requirement for the user to

specify her home realm when she first visits ADFS; however, this

approach is not very reliable, especially with mobile and home

workers and does not provide any additional security because IP

addresses can be spoofed.

 

———————– Page 271———————–

 

234 chapter twelve

 

•     Perform the home-realm discovery in SharePoint instead of

ADFS. Adatum could customize the standard SharePoint login

page (usually located at C:\Program Files\Common Files\

Microsoft Shared\Web Server Extensions\14\template\identity-

model\login\default.aspx) to display the list of identity provid-

ers, and then append a whr parameter identifying the user’s

home realm to the address of the ADFS sign-in page. However,

the SharePoint login page only displays to the user if multiple

authentication types are configured in SharePoint; Adatum only

has a single authentication type configured for the a-Portal web

application so Adatum would need to override the behavior of

the standard login page so that it always displays. By default, all

SharePoint web applications share this login page, so SharePoint

You should be sure to would display the same list of identity providers regardless of

keep your SharePoint the SharePoint web application the user is accessing. You can

environment up to override this behavior and display a separate login page for each

date with the latest

patches from SharePoint web application.

 

Microsoft.

 

Questions

 

1. In the scenario described in this chapter, Adatum prefers to

maintain a single trust relationship between SharePoint and

ADFS, and to maintain the trust relationships with the

multiple partners in ADFS. Which of the following are valid

reasons for adopting this model?

 

a. It enables Adatum to collect audit data relating to

external sign-ins from ADFS.

 

b. It allows for the potential reuse of the trust relation-

ships with partners in other Adatum applications.

 

c. It allows Adatum to implement automatic home realm

discovery.

 

d. It makes it easier for Adatum to ensure that Share-

Point receives a consistent set of claim types.

 

2. When must a SharePoint user reauthenticate with the

claims issuer (ADFS in the Adatum scenario)?

 

a. Whenever the user closes and then reopens the

browser.

 

———————– Page 272———————–

 

federated identity for sharepoint applications 235

 

b. Whenever the ADFS web SSO cookie expires.

 

c. Whenever the SharePoint FedAuth cookie that

contains the SAML token expires.

 

d. Every ten minutes.

 

3. Which of the following statements are true with regard to

the Adatum sliding session implementation?

 

a. SharePoint tries to renew the session cookie before it

expires.

 

b. SharePoint waits for the session cookie to expire and

then creates a new one.

 

c. When SharePoint renews the session cookie, it always

requests a new SAML token from ADFS.

 

d. SharePoint relies on sliding sessions in ADFS.

 

4. Where is the organization claim that SharePoint uses to

authorize access to certain documents in the a-Portal web

application generated?

 

a. In the SharePoint STS.

 

b. In the identity provider’s STS; for example in the

Litware issuer.

 

c. In ADFS.

 

d. Any of the above.

 

5. Why does Adatum rely on ADFS to perform home realm

discovery?

 

a. It’s easier to implement in ADFS than in SharePoint.

 

b. You can customize the list of identity providers for

each SharePoint web application in ADFS.

 

c. You cannot perform home realm discovery in Share-

Point.

 

d. You can configure ADFS to remember a user’s choice

of identity provider.

 

———————– Page 273———————–

 

236 chapter twelve

 

More Information

 

For information about Windows Identity Foundation (WIF) and

sliding sessions see this post: http://blogs.msdn.com/b/vbertocci/

archive/2010/06/16/warning-sliding-sessions-are-closer-than-they-

appear.aspx.

For more information about automated home-realm discovery,

see Chapter 6, “Federated Identity with Multiple Partners,” and

Chapter 7, “Federated Identity with Multiple Partners and Windows

Azure Access Control Service.”

 

———————– Page 274———————–

 

Appendix A Using Fedutil

 

This appendix shows you how to use the FedUtil wizard for the sce-

narios in this book. Note that a Security Token Service (STS) is

equivalent to an issuer.

 

Using FedUtil to Make an Application

Claims-Aware

 

This procedure shows how to use FedUtil to make an application

claims-aware. In this example, the application is a-Order.

First you’ll need to open the FedUtil tool. There are two ways to

do so. One way is to go to the Windows Identity Foundation (WIF)

SDK directory and run FedUtil.exe. The other is to open the single

sign-on (SSO) solution in Microsoft® Visual Studio® development

system, right-click the a-Order.ClaimsAware project, and then click

Add STS Reference. In either case, the FedUtil wizard opens.

 

TO MAKE AN APPLICATION CLAIMS-AWARE

 

1. In the Application configuration location box, enter the
location of the a-Order Web.config file or browse to it. In
the Application URI box, enter the Uniform Resource
Identifier (URI) for a-Order, and then click Next.

 

2. In the Security Token Service dialog box, select Use an

Existing STS. Alternatively, you can select Create a new

STS project in the current solution to create a custom

STS that you can modify.

 

3. In the STS federation metadata location box, enter the

URI of the federation metadata or browse to it, and then

click Next.

 

237

 

———————– Page 275———————–

 

238 appendix a

 

4. In the Security token encryption dialog box, select No

encryption, and then click Next.

 

5. In the Offered claims dialog box, click Next.

 

6. On the Summary page, click Finish.

 

Along with using FedUtil, you must also make the following

changes:

•     In the a-Expense Web.config file, change the name of Trusted

Issuer to Adatum. This is necessary because a-Expense uses a

custom data store for users and roles mapping. Names should

be formatted as Adatum\name. For example, Adatum\mary is

correctly formatted.

•     Place the ADFS token signing certificate into the Trusted People

store of the local machine.
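
In WIF terms, the trusted issuer name lives in the issuerNameRegistry
section that FedUtil generates in Web.config. The following sketch
shows what the relevant fragment might look like; the thumbprint value
is a placeholder for the thumbprint of the ADFS token signing
certificate.

```xml
<microsoft.identityModel>
  <service>
    <issuerNameRegistry
      type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry,
            Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral,
            PublicKeyToken=31bf3856ad364e35">
      <trustedIssuers>
        <!-- Placeholder thumbprint; substitute the ADFS token
             signing certificate's thumbprint. -->
        <add thumbprint="0000000000000000000000000000000000000000"
             name="Adatum" />
      </trustedIssuers>
    </issuerNameRegistry>
  </service>
</microsoft.identityModel>
```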

 

———————– Page 276———————–

 

Appendix B Message Sequences

 

Appendix B shows in detail the message sequences for the passive

(browser-based) and active (smart) client scenarios. It also includes

information about what the HTTP and, where applicable, Kerberos,

traffic looks like as the browser or client, the application, the issuer,

and Microsoft® Active Directory® directory service communicate

with each other.

 

239

 

———————– Page 277———————–

 

240 appendix b

 

The Browser-Based Scenario

 

Figure 1 shows the message sequence for the browser-based scenario.

 

[Figure 1 is a sequence diagram with four participants: Rick (browser),
App1 (relying party), ADFS (issuer), and Active Directory.

1. The browser sends GET /App1. The relying party detects an anonymous
user and replies with HTTP 302, redirecting the browser to the issuer.

2. The browser sends GET /FederationPassive?wtrealm=App1. The issuer
checks whether Integrated Windows Authentication is enabled and
redirects the browser to the integrated sign-on page.

3. The browser sends GET /FederationPassive/Integrated?wtrealm=App1
and receives HTTP 401 with the WWW-Authenticate: Negotiate header.

4. The browser requests a Kerberos ticket from Active Directory and
receives the ticket response.

5. The browser repeats GET /FederationPassive/Integrated?wtrealm=App1
with the Kerberos ticket in the Authorization header. The issuer looks
up the rules for App1, queries Active Directory for user attributes,
such as the email name and cost center, and creates a SAML token with
the Active Directory attributes as claims. (ADFS allows you to
configure transformation rules for each application.) The issuer
replies with HTTP 200 and <form action="https://../App1">, and the
browser posts the form: POST /App1 with
wresult=<RequestSecurityTokenResponse….

6. WIF validates the token (the signature, expiration date, target
audience, and trusted issuer). This is coordinated by the
WS-Federation Authentication Module (FAM). The relying party replies
with HTTP 302 to /Default.aspx and the FAM cookie, which is encrypted,
chunked, and encoded in base64.

7. The browser sends GET /SomePage.aspx with the FedAuth cookie
chunks. This is coordinated by the Session Authentication Module
(SAM). WIF decrypts the cookie and populates the ClaimsPrincipal
object, and the relying party replies with HTTP 200 for
/SomePage.aspx.]

figure 1
Message sequence for the browser-based scenario

 

———————– Page 278———————–

 

message sequences 241

 

Figure 2 shows the traffic generated by the browser.

 

figure 2

HTTP traffic

 

The numbers in the screenshot correspond to the steps in the

message diagram. In this example, the name of the application is a-

Expense.ClaimsAware. For example, step 1 in the screen shot shows

the initial HTTP redirect to the issuer that is shown in the message

diagram. The following table explains the symbols in the “#” column.

 

Symbol Meaning

 

Arrow An arrow indicates an HTTP 302 redirect.

 

Key A key indicates a Kerberos ticket transaction (the 401 code indicates

that authentication is required).

 

Globe A globe indicates a response to a successful request, which means

that the user can access a website.

 

STEP 1

The anonymous user browses to a-Expense and the Federation Au-

thentication Module (FAM), WSFederatedAuthenticationModule,

redirects the user to the issuer which, in this example, is located at

https://login.adatumpharma.com/FederationPassive. As part of the

request URL, there are four query string parameters: wa (the action to

execute, which is wsignin1.0), wtrealm (the relying party that this

token applies to, which is a-Expense), wctx (context data such as a

return URL that will be propagated among the different parties), and

wct (a time stamp).
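
Put together, the redirect in step 1 looks roughly like the following.
These parameter values are illustrative, consistent with the scenario,
and not captured from a real trace.

```
GET https://login.adatumpharma.com/FederationPassive/?
    wa=wsignin1.0
    &wtrealm=https%3a%2f%2fwww.adatumpharma.com%2fa-Expense.ClaimsAware%2f
    &wctx=rm%3d0%26id%3dpassive%26ru%3d%252fa-Expense.ClaimsAware%252fdefault.aspx
    &wct=2009-10-22T14%3a40%3a00Z
```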

Figure 3 shows the response headers for step 1.

 

———————– Page 279———————–

 

242 appendix b

 

figure 3

Response headers for step 1

The FAM on a-Expense redirects the anonymous user to the issuer.

Figure 4 shows the parameters that are sent to the issuer with the

query string.

 

figure 4

Query string parameters

 

STEP 2

The issuer is Active Directory Federation Services (ADFS) 2.0 config-

ured with Integrated Windows® Authentication only. Figure 5 shows

that ADFS redirects the user to the integrated sign-on page.

 

ADFS can be configured to allow Integrated Windows Authentication
and/or client certificate authentication and/or forms-based

authentication. In either case, credentials are mapped to an Active

Directory account.

 

———————– Page 280———————–

 

message sequences 243

 

figure 5

ADFS redirecting the user to the Integrated Windows Authentication page

 

STEP 3

The IntegratedSignIn.aspx page is configured to use Integrated Win-

dows Authentication on Microsoft Internet Information Services (IIS).

This means that the page will reply with an HTTP 401 status code and

the “WWW-Authenticate: Negotiate” HTTP header. This is shown in

Figure 6.

 

figure 6

ADFS returning WWW-Authenticate: Negotiate header

 

IIS returns the WWW-Authenticate:Negotiate header to let the

browser know that it supports Kerberos or NTLM.

 

———————– Page 281———————–

 

244 appendix b

 

STEP 4

At this point, the user authenticates with Microsoft Windows creden-

tials, using either Kerberos or NTLM. Figure 7 shows the HTTP traffic

for NTLM, not Kerberos.

 

If the infrastructure, such as the browser and the service principal
names, is correctly configured, the client can avoid step 4 by

requesting a service ticket from the key distribution center that is

hosted on the domain controller. It can then use this ticket together

with the authenticator in the next HTTP request.

 

figure 7

NTLM handshake on the ADFS website

 

The Cookies/Login node for the request headers shows the

NTLM handshake process. This process has nothing to do with claims,

WS-Federation, Security Assertion Markup Language (SAML), or WS-

Trust. The same thing would happen for any site that is configured

 

———————– Page 282———————–

 

message sequences 245

 

with Integrated Windows Authentication. Note that this step does

not occur for Kerberos.

 

STEP 5

Now that the user has been successfully authenticated with Micro-

soft Windows credentials, ADFS can generate a SAML token based

on the Windows identity. ADFS looks up the claims mapping rules

associated with the application using the wtrealm parameter mentioned
in step 1 and executes them. The result of those rules is a set of
claims that will be included in a SAML assertion and sent to the
user's browser.

The RequestSecurityTokenResponse is defined in the WS-Trust
specification. It's the shell that will enclose a token of any kind.
The most common implementation of the token is SAML (version 1.1 or
2.0). The shell contains the lifetime and the endpoint address for
this token.

The following XML code shows the token that was generated (some
attributes and namespaces were deleted for clarity).

<t:RequestSecurityTokenResponse
  xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">
  <t:Lifetime>
    <wsu:Created>2009-10-22T14:40:07.978Z</wsu:Created>
    <!-- The token expiration date (for WS-Fed). -->
    <wsu:Expires>2009-10-22T00:40:07.978Z</wsu:Expires>
  </t:Lifetime>
  <wsp:AppliesTo>
    <EndpointReference>
      <!-- The token audience (for WS-Fed). -->
      <Address>
        https://www.adatumpharma.com/a-Expense.ClaimsAware/
      </Address>
    </EndpointReference>
  </wsp:AppliesTo>
  <t:RequestedSecurityToken>
    <!-- The SAML token is represented by an assertion that contains
         certain conditions, such as the expiration time and audience
         restrictions. -->
    <saml:Assertion
      MinorVersion="1"
      AssertionID="_9f68…" Issuer="http://…/Trust">
      <saml:Conditions
        NotBefore="2009-10-22T14:40:07.978Z"
        NotOnOrAfter="2009-10-22T00:40:07.978Z">
        <saml:AudienceRestrictionCondition>
          <!-- The token audience (for SAML). -->
          <saml:Audience>
            https://www.adatumpharma.com/a-Expense.ClaimsAware/
          </saml:Audience>
        </saml:AudienceRestrictionCondition>
      </saml:Conditions>
      <saml:AttributeStatement>
        <saml:Subject>
          <saml:SubjectConfirmation>
            <!-- Because the browser does not hold a key that can prove
                 its identity, the token generated is of type bearer. In
                 this scenario, enabling HTTPS is critical to avoid
                 potential attacks. -->
            <saml:ConfirmationMethod>
              urn:oasis:names:tc:SAML:1.0:cm:bearer
            </saml:ConfirmationMethod>
          </saml:SubjectConfirmation>
        </saml:Subject>
        <!-- The claims are represented by the SAML attributes, where
             ClaimType equals the AttributeNamespace and the
             AttributeName. The ClaimValue equals the AttributeValue. -->
        <saml:Attribute
          AttributeName="name"
          AttributeNamespace="http://…/ws/2005/05/identity/claims">
          <saml:AttributeValue>mary</saml:AttributeValue>
        </saml:Attribute>
        <saml:Attribute
          AttributeName="CostCenter"
          AttributeNamespace="http://schemas.adatumpharma.com/2009/08/claims">
          <saml:AttributeValue>394002</saml:AttributeValue>
        </saml:Attribute>
      </saml:AttributeStatement>
      <!-- The signature and the public key (an X.509 certificate that
           is encoded in base64) that will be used to verify the
           signature on the website. If the verification was successful,
           you have to ensure that the certificate is the one you trust
           (either by checking its thumbprint or its serial number). -->
      <ds:Signature>
        <ds:SignedInfo>
          …
        </ds:SignedInfo>
        <ds:SignatureValue>
          dCHtoNUbvVyz8…n0XEA6BI=
        </ds:SignatureValue>
        <KeyInfo>
          <X509Data>
            <X509Certificate>
              MIIB6DCC…gUitvS6JhHdg
            </X509Certificate>
          </X509Data>
        </KeyInfo>
      </ds:Signature>
    </saml:Assertion>
  </t:RequestedSecurityToken>
  <!-- The token generated is SAML 1.1. -->
  <t:TokenType>
    http://docs.oasis-open.org/wss/
    oasis-wss-saml-token-profile-1.1#SAMLV1.1
  </t:TokenType>
  <t:RequestType>
    http://schemas.xmlsoap.org/ws/2005/02/trust/Issue
  </t:RequestType>
  <t:KeyType>
    http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey
  </t:KeyType>
</t:RequestSecurityTokenResponse>

 

———————– Page 284———————–

 

message sequences 247

 

STEP 6

Once ADFS generates a token, it needs to send it back to the applica-

tion. A standard HTTP redirect can’t be used because the token may

be 4 KB long, which is larger than most browsers’ size limit for a URL.

Instead, issuers generate a <form> that includes a POST method. The

token is in a hidden field. A script auto-submits the form once the

page loads. The following HTML code shows the issuer’s response.

 

<html>
  <head>
    <title>Working...</title>
  </head>
  <body>
    <form
      method="POST"
      name="hiddenform"
      action="https://www.adatumpharma.com/a-Expense.ClaimsAware/">
      <input type="hidden" name="wa" value="wsignin1.0" />
      <input
        type="hidden"
        name="wresult"
        value="&lt;t:RequestSecurityTokenResponse
          xmlns…&lt;/t:RequestSecurityTokenResponse>"
      />
      <input
        type="hidden"
        name="wctx"
        value="rm=0&amp;id=passive&amp;
          ru=%2fa-Expense.ClaimsAware%2fdefault.aspx"
      />
      <noscript>
        <p>Script is disabled. Click Submit to continue.</p>
        <input type="submit" value="Submit" />
      </noscript>
    </form>
    <script language="javascript">
      window.setTimeout('document.forms[0].submit()', 0);
    </script>
  </body>
</html>

 

———————– Page 285———————–

 

248 appendix b

 

When the application receives the POST, the FAM takes control

of the process. It listens for the AuthenticateRequest event. Figure

8 shows the sequence of steps that occur in the handler of the

AuthenticateRequest event.

 

[Figure 8 flowchart:

1. Event: SessionSecurityTokenReceived. Arguments: raw security token.
Validate the token with the corresponding security token handler, such
as SAML 1.1, SAML 2.0, encrypted, or custom.

2. Create the ClaimsPrincipal object with the claims inside.

3. Use the ClaimsAuthenticationManager class to enrich the
ClaimsPrincipal object.

4. Event: SessionSecurityTokenValidated. Arguments: ClaimsPrincipal.

5. Create the SessionSecurityToken:
Encode(Chunk(Encrypt(ClaimsPrincipal + lifetime + [original token]))).

6. Set the HttpContext.User property to the ClaimsPrincipal object.
Convert the session token into a set of chunked cookies.

7. Redirect to the original return URL, if it exists.]

figure 8
Logic for an initial request to an application

 

———————– Page 286———————–

 

message sequences 249

 

Notice that one of the steps that the FAM performs is to create

the session security token. In terms of network traffic, this token is a

set of cookies named FedAuth[n] that is the result of compressing,

encrypting, and encoding the ClaimsPrincipal object. The cookies are

chunked to avoid exceeding any cookie size limitations. Figure 9

shows the HTTP response, where a session token is chunked into

three cookies.

 

figure 9

HTTP response from the website with a session token chunked into three

cookies
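
As an illustration, the chunked response takes roughly the following
shape; the cookie values here are invented placeholders, not real
session tokens.

```
HTTP/1.1 302 Found
Location: /a-Expense.ClaimsAware/default.aspx
Set-Cookie: FedAuth=77u/PD94bWwgdmVyc2lvbj0iMS4...; path=/; secure; HttpOnly
Set-Cookie: FedAuth1=RXhhbXBsZSBjaHVuayB0d28...; path=/; secure; HttpOnly
Set-Cookie: FedAuth2=U2FtcGxlIGNodW5rIHRocmVl...; path=/; secure; HttpOnly
```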

 

———————– Page 287———————–

 

250 appendix b

 

STEP 7

The session security token (the FedAuth cookies) is sent on
subsequent requests to the application. In the same way that the FAM
handles the AuthenticateRequest event, the SAM executes the

logic shown in Figure 10.

 

[Figure 10 flowchart:

1. Check that the cookie is present. If it is, recreate the
SessionSecurityToken by decoding, decrypting, and decompressing the
cookie.

2. Event: SessionSecurityTokenReceived. Arguments: session token.

3. Check the SessionSecurityToken expiration date.

4. Create the ClaimsPrincipal object with the claims inside.

5. Set the HttpContext.User property to the ClaimsPrincipal object.]

figure 10
Logic for subsequent requests to the application

 

———————– Page 288———————–

 

message sequences 251

 

The FedAuth cookies are sent on each request. Figure 11 shows

the network traffic.

 

figure 11

Traffic for a second HTTP request

 

———————– Page 289———————–

 

252 appendix b

 

The Active Client Scenario

 

The following section shows the interactions between an active client

and a web service that is configured to trust tokens generated by an

ADFS issuer. Figure 12 shows a detailed message sequence diagram.

 

[Figure 12 is a sequence diagram with four participants: Rick (desktop
application), Orders (web service), ADFS (issuer), and Active
Directory.

1. The client sends the RequestSecurityToken message with a
UserNamePasswordToken in the security header. ADFS uses LDAP to
validate the user name and password credentials, looks up the claim
mapping rules for the Orders web service, and queries Active Directory
for user attributes such as the email name and cost center. ADFS
creates the SAML token with the user attributes as claims, signs the
token, and encrypts it, then sends the RequestSecurityTokenResponse
message with the signed SAML token. These interactions are
orchestrated by the WCF federation bindings; the client proxy obtains
a token the first time it contacts the web service. (ADFS allows you
to extract attributes from stores other than Active Directory. For
example, you can use a database, a web service, or a file.)

2. The client sends the Orders.GetOrders message and includes the
signed SAML token in the security header. WIF validates the token (the
signature, expiration date, target audience, and trusted issuer) and
allows or denies access depending on the result from the
ClaimsAuthorizationManager object. The web service executes the
operation and sends the Orders.GetOrders response. If the user makes
another call to the web service, the token is reused unless you create
a new proxy.]

figure 12
Active client scenario message diagram

 

———————– Page 290———————–

 

message sequences 253

 

Figure 13 shows the corresponding HTTP traffic for the active

client message sequence.

 

figure 13

HTTP traffic

 

Following are the two steps, explained in detail.

 

STEP 1

The Orders web service is configured with the wsFederationHttp-

Binding. This binding specifies a web service policy that requires the

client to add a SAML token to the SOAP security header in order to

successfully invoke the web service. This means that the client must

first contact the issuer with a set of credentials (the user name and

password) to get the SAML token. The following message represents

a RequestSecurityToken (RST) message sent to the ADFS issuer

hosted at https://login.adatumpharma.com/adfs/services/trust/13/

usernamemixed. (Note that the XML code is abridged for clarity.

Some of the namespaces and elements have been omitted.)

 

<s:Envelope>
  <s:Header>
    <a:Action>
      http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue
    </a:Action>
    <!-- This is the endpoint of the issuer that accepts a
         UsernameToken. -->
    <a:To>
      https://login.adatumpharma.com/adfs/
      services/trust/13/usernamemixed
    </a:To>
    <o:Security>
      <o:UsernameToken
        u:Id="uuid-bffe89aa-e6fa-404d-9365-d078d73beca5-1">
        <!-- These are the credentials that are sent to the issuer. -->
        <o:Username>
          <!-- Removed -->
        </o:Username>
        <o:Password>
          <!-- Removed -->
        </o:Password>
      </o:UsernameToken>
    </o:Security>
  </s:Header>
  <s:Body>
    <trust:RequestSecurityToken
      xmlns:trust=
        "http://docs.oasis-open.org/ws-sx/ws-trust/200512">
      <wsp:AppliesTo>
        <EndpointReference>
          <!-- The client specifies the intended recipient of the
               token. In this case, it is the Orders web service. -->
          <Address>
            https://orders.adatumpharma.com/Orders.svc
          </Address>
        </EndpointReference>
      </wsp:AppliesTo>
      <!-- The issuer expects a SAML 1.1 token. -->
      <trust:TokenType>
        http://docs.oasis-open.org/wss/
        oasis-wss-saml-token-profile-1.1#SAMLV1.1
      </trust:TokenType>
      <trust:KeyType>
        http://docs.oasis-open.org/ws-sx/
        ws-trust/200512/SymmetricKey
      </trust:KeyType>
    </trust:RequestSecurityToken>
  </s:Body>
</s:Envelope>

 

The issuer uses the credentials to authenticate the user and exe-

cutes the corresponding rules to obtain user attributes from Active

Directory (or any other attributes store it is configured to contact).

 

<s:Envelope>
  <s:Header>
    <a:Action>
      http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal
    </a:Action>
  </s:Header>
  <s:Body>
    <trust:RequestSecurityTokenResponseCollection
      xmlns:trust="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
      <trust:RequestSecurityTokenResponse>
        <!-- The issuer specifies the lifetime of the token. -->
        <trust:Lifetime>
          <wsu:Created>2009-10-22T21:15:19.010Z</wsu:Created>
          <wsu:Expires>2009-10-22T22:15:19.010Z</wsu:Expires>
        </trust:Lifetime>
        <!-- The issuer specifies the intended recipient of the token.
             In this case, it is the Orders web service. -->
        <wsp:AppliesTo>
          <a:EndpointReference>
            <a:Address>
              https://orders.adatumpharma.com/Orders.svc
            </a:Address>
          </a:EndpointReference>
        </wsp:AppliesTo>
        <trust:RequestedSecurityToken>
          <xenc:EncryptedData>
            <xenc:EncryptionMethod
              Algorithm=
                "http://www.w3.org/2001/04/xmlenc#aes256-cbc" />
            <KeyInfo>
              <e:EncryptedKey>
                <!-- The token was encrypted using an X.509 certificate
                     (public key). The web service must have the
                     corresponding private key to decrypt it. This
                     section acts as a hint to help the web service
                     select the correct key. -->
                <KeyInfo>
                  <o:SecurityTokenReference>
                    <X509Data>
                      <X509IssuerSerial>
                        <X509IssuerName>
                          CN=localhost
                        </X509IssuerName>
                        <X509SerialNumber>
                          -124594669148411034902102654305925584353
                        </X509SerialNumber>
                      </X509IssuerSerial>
                    </X509Data>
                  </o:SecurityTokenReference>
                </KeyInfo>
                <e:CipherData>
                  <e:CipherValue>
                    WayfmLM9DA5….u17QC+MWdZVCA2ikXwBc=
                  </e:CipherValue>
                </e:CipherData>
              </e:EncryptedKey>
            </KeyInfo>
            <!-- This is the encrypted token. The token is a SAML
                 assertion that represents claims about the user. It's
                 signed with the issuer's private signing key (see below
                 for the decrypted SAML assertion). -->
            <xenc:CipherData>
              <xenc:CipherValue>
                U6TLBMVR/M4Ia2Su……/oV+qg/VU=
              </xenc:CipherValue>
            </xenc:CipherData>
          </xenc:EncryptedData>
        </trust:RequestedSecurityToken>
        <trust:RequestedProofToken>
          <trust:ComputedKey>
            http://docs.oasis-open.org/ws-sx/
            ws-trust/200512/CK/PSHA1
          </trust:ComputedKey>
        </trust:RequestedProofToken>
        <!-- The token that is generated is a SAML 1.1 token. -->
        <trust:TokenType>
          http://docs.oasis-open.org/wss/
          oasis-wss-saml-token-profile-1.1#SAMLV1.1
        </trust:TokenType>
        <trust:KeyType>
          http://docs.oasis-open.org/ws-sx/
          ws-trust/200512/SymmetricKey
        </trust:KeyType>
      </trust:RequestSecurityTokenResponse>
    </trust:RequestSecurityTokenResponseCollection>
  </s:Body>
</s:Envelope>

 

If you had the private key to decrypt the token (shown above as
"<e:CipherValue>U6TLBMVR/M4Ia2Su..."), this is what you would see.

 

<!-- The Issuer attribute is the issuer identifier (it's a URI). It is
     different than the actual issuer sign-on URL. -->
<saml:Assertion
  MajorVersion="1"
  MinorVersion="1"
  AssertionID="_a5c22af0-b7b2-4dbf-ac10-326845a1c6df"
  Issuer="http://login.adatumpharma.com/Trust"
  IssueInstant="2009-10-22T21:15:19.010Z">
  <saml:Conditions
    NotBefore="2009-10-22T21:15:19.010Z"
    NotOnOrAfter="2009-10-22T22:15:19.010Z">
    <saml:AudienceRestrictionCondition>
      <saml:Audience>
        https://orders.adatumpharma.com/Orders.svc
      </saml:Audience>
    </saml:AudienceRestrictionCondition>
  </saml:Conditions>
  <saml:AttributeStatement>
    <saml:Subject>
      <!-- The holder-of-key confirmation method provides proof of
           ownership of a signed SAML token. SOAP clients often use
           this approach to prove that an incoming request is valid.
           Note that a browser can't access a key store the way a
           smart client can. -->
      <saml:SubjectConfirmation>
        <saml:ConfirmationMethod>
          urn:..:SAML:1.0:cm:holder-of-key
        </saml:ConfirmationMethod>
        <KeyInfo>
          <trust:BinarySecret>
            ztGzs3I...VW+6Th38o=
          </trust:BinarySecret>
        </KeyInfo>
      </saml:SubjectConfirmation>
    </saml:Subject>
    <!-- The claims are represented by the SAML attributes. The
         ClaimType equals the AttributeNamespace and the
         AttributeName. The ClaimValue equals the AttributeValue. -->
    <saml:Attribute
      AttributeName="name"
      AttributeNamespace=
        "http://schemas.xmlsoap.org/ws/2005/05/identity/claims">
      <saml:AttributeValue>rick</saml:AttributeValue>
    </saml:Attribute>
    <saml:Attribute
      AttributeName="role"
      AttributeNamespace=
        "http://schemas.xmlsoap.org/ws/2005/05/identity/claims">
      <saml:AttributeValue>OrderTracker</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
  <!-- This is the signature and public key (an X.509 certificate
       encoded in base64) that will be used to verify the signature on
       the web service. If the verification is successful, you must
       ensure that the certificate is the one you trust, by checking
       either its thumbprint or its serial number. -->
  <ds:Signature>
    <ds:SignedInfo> ... </ds:SignedInfo>
    <ds:SignatureValue>
      dCHtoNUbvVyz8...n0XEA6BI=
    </ds:SignatureValue>
    <KeyInfo>
      <X509Data>
        <X509Certificate>
          MIIB6DCC...gUitvS6JhHdg
        </X509Certificate>
      </X509Data>
    </KeyInfo>
  </ds:Signature>
</saml:Assertion>

 

STEP 2

Once the client obtains a token from the issuer, it can attach the
token to the SOAP security header and call the web service. This is
the SOAP message that is sent to the Orders web service.

 

<s:Envelope>
  <s:Header>
    <!-- Here are the SOAP action and the URL of the web service. -->
    <a:Action>http://tempuri.org/GetOrders</a:Action>
    <a:To>https://orders.adatumpharma.com/Orders.svc</a:To>
    <o:Security>
      <u:Timestamp u:Id="_0">
        <u:Created>2009-10-22T21:15:19.123Z</u:Created>
        <u:Expires>2009-10-22T21:20:19.123Z</u:Expires>
      </u:Timestamp>
      <!-- This is the token from step 1, but encrypted. -->
      <xenc:EncryptedData>
        ... the token we've got in step 1 ...
      </xenc:EncryptedData>
      <!-- This is the signature of the message generated using the
           SAML assertion. This is a different signature from the
           token signature. This signature is generated for any
           security token (not just a SAML token) to protect the
           message content and source verification. -->
      <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
        ...
        <SignatureValue>
          oaZFLr+1y/I2kYcAvyQv6WSkPYk=
        </SignatureValue>
        <KeyInfo>
          <o:SecurityTokenReference>
            <o:KeyIdentifier
              ValueType=
                "http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.0#SAMLAssertionID">
              _a5c22af0-b7b2-4dbf-ac10-326845a1c6df
            </o:KeyIdentifier>
          </o:SecurityTokenReference>
        </KeyInfo>
      </Signature>
    </o:Security>
  </s:Header>
  <s:Body>
    <GetOrders xmlns="http://tempuri.org/">
      <customerId>1231</customerId>
    </GetOrders>
  </s:Body>
</s:Envelope>

 

Windows Identity Foundation (WIF) and Windows Communication
Foundation (WCF) will take care of decrypting and validating the
SAML token. The claims will be added to the ClaimsPrincipal object
and the principal will be added to the WCF security context. The
authorization manager then uses the WCF security context to check
the incoming claims against the operation the client wants to call.
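As a rough illustration of this pipeline (not the actual WIF/WCF API), the attribute-to-claim mapping and the authorization check can be sketched in Python; the function and dictionary names here are hypothetical:

```python
# Sketch of the claims pipeline described above (illustrative only;
# WIF/WCF perform these steps for you in the .NET stack).
NS = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims"

def attributes_to_claims(attributes):
    """Each SAML attribute (namespace, name, value) becomes a claim
    whose type is the AttributeNamespace plus the AttributeName."""
    return [{"type": f"{ns}/{name}", "value": value}
            for ns, name, value in attributes]

def is_authorized(claims, operation, required_roles):
    """Check the caller's role claims against the operation called."""
    roles = {c["value"] for c in claims if c["type"] == f"{NS}/role"}
    return bool(roles & required_roles.get(operation, set()))

claims = attributes_to_claims(
    [(NS, "name", "rick"), (NS, "role", "OrderTracker")])
print(is_authorized(claims, "GetOrders",
                    {"GetOrders": {"OrderTracker"}}))  # True
```

The point of the sketch is the shape of the check: authorization never looks at how the user authenticated, only at the claims the issuer vouched for.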

 

The Browser-Based Scenario with

Access Control Service (ACS)

 

Figure 14 shows the message sequence for the browser-based scenario
that authenticates with a social identity provider and uses ACS for
protocol transition.

 


Participants: Mary : Browser; a-Order (RP); Adatum Simulated Issuer
(FP); ACS (FP); Google (IdP).

1. GET /a-Order.OrderTracking.6/ as an anonymous user. Response:
   HTTP 302 (redirect to the issuer).
2. GET /Adatum.FederationProvider.6. The issuer checks whether the
   user is already authenticated. Response: HTTP 302 (redirect to
   HomeRealmDiscovery.aspx).
3. GET /Adatum.FederationProvider.6/HomeRealmDiscovery.aspx.
   Response: HTTP 200.
4. POST /Adatum.FederationProvider.6/HomeRealmDiscovery.aspx.
   Response: HTTP 302 (redirect to
   Adatum.FederationProvider.6/Federation.aspx).
5. POST Adatum.FederationProvider.6/Federation.aspx. The issuer
   determines the identity provider. Response: HTTP 302 (redirect to
   federationwithacs-dev.accesscontrol.windows.net).
6. GET federationwithacs-dev.accesscontrol.windows.net. ACS verifies
   the RP. Response: HTTP 302 (redirect to
   http://www.google.com/accounts).
7. GET http://www.google.com/accounts/ServiceLogin. Response:
   HTTP 200.
8. POST http://www.google.com/accounts/ServiceLogin (passing the
   Google ID and password). Response: HTTP 302 (redirect to
   federationwithacs-dev.accesscontrol.windows.net, including the
   token from Google). The diagram skips a number of steps here where
   the user gives consent for Google to release her email address.
9. GET federationwithacs-dev.accesscontrol.windows.net. Protocol
   transition. Response: HTTP 200 (uses JavaScript to trigger a POST
   to the Adatum.FederationProvider.6 issuer).
10. POST /Adatum.FederationProvider.6/Federation.aspx. Claims
    mapping. Response: HTTP 200 (uses JavaScript to trigger a POST to
    a-Order.OrderTracking.6).
11. POST a-Order.OrderTracking.6. WIF verifies the token. Response:
    HTTP 302.
12. POST a-Order.OrderTracking.6. WIF decrypts the cookie and
    populates the claims principal object. Response: HTTP 200.

figure 14
Message sequence for the browser-based scenario with ACS and
authentication with a social identity provider

 


Figure 15 shows the key traffic generated by the browser. For

reasons of clarity, we have removed some messages from the list.

 

figure 15

HTTP traffic

The numbers in the screenshot correspond to the steps in the message
diagram. In this sample, the name of the application is
a-Order.OrderTracking.6 and it is running on the local machine. The
name of the mock issuer that takes the place of ADFS is
Adatum.FederationProvider.6 and it is also running locally, and the
name of the ACS instance is
federationwithacs-dev.accesscontrol.windows.net. The sample
illustrates a user authenticating with a Google identity.

 

STEP 1

The anonymous user browses to a-Order.OrderTracking.6, and because
there is no established security session, the
WSFederatedAuthenticationModule (FAM) redirects the browser to the
issuer which, in this example, is located at
https://localhost/Adatum.FederationProvider.6/. As part of the
request URL, there are four query string parameters: wa (the action
to execute, which is wsignin1.0), wtrealm (the relying party that
this token applies to, which is a-Order.OrderTracking), wctx
(context data, such as a return URL that will be propagated among
the different parties), and wct (a time stamp).
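The way the sign-in redirect URL is assembled from these four parameters can be sketched as follows (a simplified illustration; the values shown are examples from this walkthrough):

```python
# Sketch of composing a WS-Federation sign-in redirect URL from the
# four query string parameters described above (illustrative only).
from urllib.parse import urlencode

def signin_url(issuer, realm, context, timestamp):
    params = {
        "wa": "wsignin1.0",  # the action to execute
        "wtrealm": realm,    # the relying party the token applies to
        "wctx": context,     # context data propagated among parties
        "wct": timestamp,    # a time stamp
    }
    return issuer + "?" + urlencode(params)

url = signin_url(
    "https://localhost/Adatum.FederationProvider.6/",
    "https://localhost/a-Order.OrderTracking/",
    "rm=0&id=passive&ru=%2fa-Order.OrderTracking%2f",
    "2011-02-09T15:05:17Z")
print(url)
```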

Figure 16 shows the response headers for step 1.

 


figure 16

Response headers for step 1

Figure 17 shows the parameters that are sent to the issuer with

the query string.

 

figure 17

Query string parameters

 


STEP 2

The issuer is a simulated issuer that takes the place of ADFS for this

sample. Figure 18 shows that the simulated issuer redirects the user to

the home realm discovery page where the user can select the identity

provider she wants to use.

 

The simulated issuer is built using the WIF SDK.

 

figure 18

Simulated issuer redirecting the user to the HomeRealmDiscovery page

 

STEP 3

On the home-realm discovery page, the user can elect to sign in
using the Adatum provider, the Litware provider, or a social
identity provider. In this walkthrough, the user opts to use a
social identity provider and provides an email address. When the
user submits the form, the simulated issuer parses the email address
to determine which social identity provider to use.
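The parsing step can be sketched like this; the domain-to-provider table is hypothetical and only illustrates the idea:

```python
# Sketch of home realm discovery: pick a social identity provider
# from the domain of the email address the user entered
# (hypothetical mapping; illustrative only).
def choose_identity_provider(email):
    domain = email.rsplit("@", 1)[-1].lower()
    known = {"gmail.com": "Google", "live.com": "Windows Live ID"}
    # Unknown domain: fall back to letting the user pick a provider.
    return known.get(domain)

print(choose_identity_provider("mary@gmail.com"))  # Google
```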

 

STEP 4

The home-realm discovery page redirects the browser to the
Federation.aspx page.

 

STEP 5

The Federation.aspx page at the simulated issuer returns a cookie to
the browser that stores the original wa, wtrealm, wctx, and wct
querystring parameters, as was shown in Figure 17. The simulated
issuer redirects the user to the ACS instance, passing new values
for these parameters. The simulated issuer also sends a whr
querystring parameter; this is a hint to ACS about which social
identity provider it should use to authenticate the user. Figure 19
shows that the simulated issuer redirects the user to ACS.

 


figure 19

The simulated issuer redirects the user to ACS

Figure 20 shows the new values of the querystring parameters that
the simulated issuer sends to ACS. This includes the value "Google"
for the whr parameter. The value of the wctx parameter refers to the
cookie that contains the original values of the wa, wtrealm, wctx,
and wct querystring parameters that came from the relying party,
a-Order.OrderTracking.

 

figure 20

Querystring parameters sent to ACS from the simulated issuer

 

STEP 6

ACS verifies that the wtrealm parameter value,
https://localhost/Adatum.FederationProvider.6, is a configured
relying party application. ACS then examines the whr parameter value
to determine which identity provider to redirect the user to. If
there is no valid whr value, ACS displays a page listing the
available identity providers. ACS forwards the wtrealm parameter
value to Google in the openid.return_to parameter, so that when
Google returns a token to ACS, it can tell ACS the address of the
relying party (for ACS, the relying party is
https://localhost/Adatum.FederationProvider.6).
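The whr handling can be sketched as a table lookup with a fallback (a simplified illustration; in reality the table is driven by the identity providers configured in ACS):

```python
# Sketch of resolving the whr hint to an identity provider sign-in
# address (hypothetical table; illustrative only).
PROVIDERS = {
    "Google": "http://www.google.com/accounts/ServiceLogin",
}

def resolve_identity_provider(whr):
    # With no valid hint, ACS instead displays a page that lists the
    # available identity providers.
    return PROVIDERS.get(whr, "show-provider-selection-page")

print(resolve_identity_provider("Google"))
```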

 


STEP 7

Google displays a login form that prompts the user to provide
credentials. This form also indicates to the user that the request
came from ACS.

 

STEP 8

After Google has authenticated the user and obtained consent to
return the user's email address to the relying party (ACS), Google
redirects the browser back to ACS.

Figure 21 shows the querystring parameters that Google uses to

pass the claims back to ACS.

 

figure 21

Querystring parameters sent from Google to ACS

 

In addition to the claims data, there is also a context parameter
that enables ACS to associate this claim data with the original
request from a-Order.OrderTracking.6. This context parameter
includes the address of the Adatum simulated issuer, which sent the
original request to ACS.

 

STEP 9

ACS transitions the token from Google to create a new SAML 1.1
token, which contains a copy of the claims that Google issued. ACS
uses the information in the context parameter to identify the
relying party application (Adatum.FederationProvider.6) and the rule
group to apply. In this sample, the rule group copies all of the
claims from Google through to the new SAML token.
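A pass-through rule group of this kind can be sketched as follows (illustrative only; this is not the ACS rule engine):

```python
# Sketch of a pass-through rule group: copy every input claim from
# the identity provider to the output token, and add an
# identityprovider claim recording where the claims came from
# (illustrative only).
def apply_rule_group(input_claims, identity_provider):
    output = list(input_claims)  # pass-through: copy all claims
    output.append(("identityprovider", identity_provider))
    return output

claims = [("emailaddress", "mary@gmail.com"), ("name", "Mary")]
print(apply_rule_group(claims, "Google"))
```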

The following XML code shows the token that ACS generates

(some attributes and namespaces were deleted for clarity).

 


<!-- The RequestSecurityTokenResponse is defined in the WS-Trust
     specification. It's the envelope that encloses a token of any
     kind. The most common implementation of the token is SAML
     (version 1.1 or 2.0). The envelope contains the lifetime and the
     endpoint address for this token. -->
<t:RequestSecurityTokenResponse
  Context="6d67cfce-9797-4958-ae3c-1eb489b04801"
  xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">
  <!-- The token expiration date and time (for WS-Fed). -->
  <t:Lifetime>
    <wsu:Created>2011-02-09T15:05:17.355Z</wsu:Created>
    <wsu:Expires>2011-02-09T15:15:17.355Z</wsu:Expires>
  </t:Lifetime>
  <!-- The token audience (for WS-Fed). -->
  <wsp:AppliesTo>
    <EndpointReference>
      <Address>
        https://localhost/Adatum.FederationProvider.6/
      </Address>
    </EndpointReference>
  </wsp:AppliesTo>
  <t:RequestedSecurityToken>
    <saml:Assertion
      AssertionID="_592d..."
      Issuer="https://federationwithacs-dev.accesscontrol.windows.net/">
      <saml:Conditions
        NotBefore="2011-02-09T15:05:17.355Z"
        NotOnOrAfter="2011-02-09T15:15:17.355Z">
        <!-- The token audience (for SAML). -->
        <saml:AudienceRestrictionCondition>
          <saml:Audience>
            https://localhost/Adatum.FederationProvider.6/
          </saml:Audience>
        </saml:AudienceRestrictionCondition>
      </saml:Conditions>
      <saml:AttributeStatement>
        <saml:Subject>
          <saml:NameIdentifier>
            https://www.google.com/accounts/o8/id?id=AItOawnvknktThEaScLj34MPreTLfOKqrQazL20
          </saml:NameIdentifier>
          <!-- Because the browser does not hold a key that can prove
               its identity, the token generated is of type bearer.
               In this scenario, enabling HTTPS is critical to avoid
               potential attacks. -->
          <saml:SubjectConfirmation>
            <saml:ConfirmationMethod>
              urn:oasis:names:tc:SAML:1.0:cm:bearer
            </saml:ConfirmationMethod>
          </saml:SubjectConfirmation>
        </saml:Subject>
        <!-- The claims are represented by the SAML attributes, where
             ClaimType equals the AttributeNamespace and the
             AttributeName. The ClaimValue equals the
             AttributeValue. -->
        <saml:Attribute
          AttributeName="emailaddress"
          AttributeNamespace=
            "http://schemas.xmlsoap.org/ws/2005/05/identity/claims">
          <saml:AttributeValue>mary@gmail.com</saml:AttributeValue>
        </saml:Attribute>
        <saml:Attribute
          AttributeName="name"
          AttributeNamespace=
            "http://schemas.xmlsoap.org/ws/2005/05/identity/claims">
          <saml:AttributeValue>Mary</saml:AttributeValue>
        </saml:Attribute>
        <saml:Attribute
          AttributeName="identityprovider"
          AttributeNamespace="...">
          <saml:AttributeValue>Google</saml:AttributeValue>
        </saml:Attribute>
      </saml:AttributeStatement>
      <!-- The signature and the public key (an X.509 certificate
           that is encoded in base64) that will be used to verify the
           signature on the website. If the verification was
           successful, you have to ensure that the certificate is the
           one you trust (by checking either its thumbprint or its
           serial number). -->
      <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:SignedInfo>
        </ds:SignedInfo>
        <ds:SignatureValue>
          euicdW...UGM7rA==
        </ds:SignatureValue>
        <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
          <X509Data>
            <X509Certificate>
              MIIDO...jVSbv/3
            </X509Certificate>
          </X509Data>
        </KeyInfo>
      </ds:Signature>
    </saml:Assertion>
  </t:RequestedSecurityToken>
  <t:RequestedAttachedReference>
    <o:SecurityTokenReference>
      <o:KeyIdentifier
        ValueType=
          "http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.0#SAMLAssertionID">
        _592d8e3a-8f42-4f14-9552-4617959dbd77
      </o:KeyIdentifier>
    </o:SecurityTokenReference>
  </t:RequestedAttachedReference>
  <t:RequestedUnattachedReference>
    <o:SecurityTokenReference>
      <o:KeyIdentifier
        ValueType=
          "http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.0#SAMLAssertionID">
        _592d8e3a-8f42-4f14-9552-4617959dbd77
      </o:KeyIdentifier>
    </o:SecurityTokenReference>
  </t:RequestedUnattachedReference>
  <t:TokenType>
    urn:oasis:names:tc:SAML:1.0:assertion
  </t:TokenType>
  <t:RequestType>
    http://schemas.xmlsoap.org/ws/2005/02/trust/Issue
  </t:RequestType>
  <t:KeyType>
    http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey
  </t:KeyType>
</t:RequestSecurityTokenResponse>

 

This step returns a form to the browser with an HTTP 200 status
message. The user does not see this form because a JavaScript timer
automatically submits the form, posting the new token to the Adatum
simulated issuer. ACS obtains the address of the simulated issuer
from the Return URL setting in the Adatum.SimulatedIssuer relying
party definition in ACS. The token data is contained in the hidden
wresult field. The following HTML code shows the form that ACS
returns to the browser. Some elements have been abbreviated for
clarity.

 

<html>
  <head>
    <title>Working...</title>
  </head>
  <body>
    <form method="POST"
          name="hiddenform"
          action="https://localhost/Adatum.FederationProvider.6/Federation.aspx">
      <input type="hidden" name="wa" value="wsignin1.0" />
      <input type="hidden" name="wresult"
        value="&lt;t:RequestSecurityTokenResponse
          Context=&quot;..." />
      <input type="hidden" name="wctx"
        value="6d67cfce-9797-4958-ae3c-1eb489b04801" />
      <noscript>
        <p>
          Script is disabled. Click Submit to continue.
        </p>
        <input type="submit" value="Submit" />
      </noscript>
    </form>
    <script language="javascript">
      window.setTimeout('document.forms[0].submit()', 0);
    </script>
  </body>
</html>

 

STEP 10

The Adatum simulated issuer applies the claims mapping rules to the
claims that it received from ACS. The following XML code shows the
token that the simulated issuer generates (some attributes and
namespaces were deleted for clarity).

 

<trust:RequestSecurityTokenResponseCollection
    xmlns:trust="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
  <trust:RequestSecurityTokenResponse
    Context="rm=0&amp;id=passive&amp;ru=%2fa-Order.OrderTracking%2f">
    <!-- The token expiration date and time (for WS-Fed). -->
    <trust:Lifetime>
      <wsu:Created>2011-02-09T15:05:17.776Z</wsu:Created>
      <wsu:Expires>2011-02-09T16:05:17.776Z</wsu:Expires>
    </trust:Lifetime>
    <wsp:AppliesTo>
      <EndpointReference>
        <Address>
          https://localhost/a-Order.OrderTracking.6/
        </Address>
      </EndpointReference>
    </wsp:AppliesTo>
    <trust:RequestedSecurityToken>
      <saml:Assertion
        AssertionID="_3770..."
        Issuer="adatum"
        IssueInstant="2011-02-09T15:05:17.776Z"
        xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion">
        <saml:Conditions
          NotBefore="2011-02-09T15:05:17.776Z"
          NotOnOrAfter="2011-02-09T16:05:17.776Z">
          <!-- The token audience (for SAML). -->
          <saml:AudienceRestrictionCondition>
            <saml:Audience>
              https://localhost/a-Order.OrderTracking.6/
            </saml:Audience>
          </saml:AudienceRestrictionCondition>
        </saml:Conditions>
        <saml:AttributeStatement>
          <saml:Subject>
            <saml:SubjectConfirmation>
              <saml:ConfirmationMethod>
                urn:oasis:names:tc:SAML:1.0:cm:bearer
              </saml:ConfirmationMethod>
            </saml:SubjectConfirmation>
          </saml:Subject>
          <!-- The claims are represented by the SAML attributes,
               where ClaimType equals the AttributeNamespace and the
               AttributeName. The ClaimValue equals the
               AttributeValue. These claims also have an
               OriginalIssuer attribute showing where the claim came
               from. -->
          <saml:Attribute
            AttributeName="name"
            AttributeNamespace="..."
            a:OriginalIssuer="acs\Google">
            <saml:AttributeValue>
              Mary
            </saml:AttributeValue>
          </saml:Attribute>
          <saml:Attribute
            AttributeName="role"
            AttributeNamespace=
              "http://schemas.microsoft.com/ws/2008/06/identity/claims">
            <saml:AttributeValue>
              Order Tracker
            </saml:AttributeValue>
          </saml:Attribute>
          <saml:Attribute
            AttributeName="organization"
            AttributeNamespace=
              "http://schemas.adatum.com/claims/2009/08">
            <saml:AttributeValue>
              Contoso
            </saml:AttributeValue>
          </saml:Attribute>
        </saml:AttributeStatement>
        <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
          <ds:SignedInfo>
          </ds:SignedInfo>
          <ds:SignatureValue>ZxLyG...2uU=</ds:SignatureValue>
          <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
            <X509Data>
              <X509Certificate>MIIB5...2B3AO</X509Certificate>
            </X509Data>
          </KeyInfo>
        </ds:Signature>
      </saml:Assertion>
    </trust:RequestedSecurityToken>
    <trust:RequestedAttachedReference>
      <o:SecurityTokenReference
        k:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1">
        <o:KeyIdentifier
          ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.0#SAMLAssertionID">
          _377035cf-c44a-4495-a69c-c4b4951af18b
        </o:KeyIdentifier>
      </o:SecurityTokenReference>
    </trust:RequestedAttachedReference>
    <trust:RequestedUnattachedReference>
      <o:SecurityTokenReference
        k:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1">
        <o:KeyIdentifier
          ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.0#SAMLAssertionID">
          _377035cf-c44a-4495-a69c-c4b4951af18b
        </o:KeyIdentifier>
      </o:SecurityTokenReference>
    </trust:RequestedUnattachedReference>
    <trust:TokenType>
      urn:oasis:names:tc:SAML:1.0:assertion
    </trust:TokenType>
    <trust:RequestType>
      http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue
    </trust:RequestType>
    <trust:KeyType>
      http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer
    </trust:KeyType>
  </trust:RequestSecurityTokenResponse>
</trust:RequestSecurityTokenResponseCollection>

 


This step returns a form to the browser with an HTTP 200 status
message. The user does not see this form because a JavaScript timer
automatically submits the form, posting the new token to the
a-Order.OrderTracking.6 application. The token with the new claims
is contained in the wresult field. The following HTML code shows the
form that the simulated issuer returns to the browser. Some elements
have been abbreviated for clarity.

 

<html>
  <head>
    <title>Working...</title>
  </head>
  <body>
    <form method="POST" name="hiddenform"
          action="https://localhost/a-Order.OrderTracking.6/">
      <input type="hidden" name="wa" value="wsignin1.0" />
      <input type="hidden" name="wresult"
        value="&lt;trust:RequestSecurityTokenResponseCollection..." />
      <input type="hidden" name="wctx"
        value="rm=0&amp;id=passive&amp;ru=%2fa-Order.OrderTracking%2f" />
      <noscript>
        <p>
          Script is disabled. Click Submit to continue.
        </p>
        <input type="submit" value="Submit" />
      </noscript>
    </form>
    <script language="javascript">
      window.setTimeout('document.forms[0].submit()', 0);
    </script>
  </body>
</html>

 

The simulated issuer determines the address to post the token to

(https://localhost/a-Order.OrderTracking.6/) by reading the original

value of the wtrealm parameter that the simulated issuer saved in a

cookie in step 4.

 

STEP 11

The Federation Authentication Module (FAM) validates the security

token from the simulated issuer, and creates a ClaimsPrincipal object

using the claim values from the token. This is compressed, encrypted,

and encoded to create a session security token which the application

returns to the browser as a set of FedAuth[n] cookies. The cookies

are chunked to avoid exceeding any cookie size limitations.
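The chunking idea can be sketched like this (illustrative only; WIF also compresses and encrypts the session token, which this sketch omits):

```python
# Sketch of splitting a session token across FedAuth, FedAuth1, ...
# cookies so no single cookie exceeds a size limit (illustrative).
def chunk_token(token, max_len=4000):
    cookies = {}
    for i in range(0, len(token), max_len):
        name = "FedAuth" if i == 0 else f"FedAuth{i // max_len}"
        cookies[name] = token[i:i + max_len]
    return cookies

cookies = chunk_token("x" * 9000)
print(sorted(cookies))  # ['FedAuth', 'FedAuth1', 'FedAuth2']
```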

 


Figure 22 shows the response headers, which include the FedAuth
cookies.

 

figure 22

Response headers, including the FedAuth cookies

 

STEP 12

On subsequent requests to the a-Order.OrderTracking.6 application,
the browser returns the security session data to the application.
Figure 23 shows the FedAuth cookie in the request headers.

 

figure 23

FedAuth cookies in the request header

 

The WSFederatedAuthenticationModule (FAM) decodes, decrypts, and
decompresses the cookie and verifies the security session data
before recreating the ClaimsPrincipal object.
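Before it can decode the session data, the module must reassemble the FedAuth[n] chunks in order. A sketch (illustrative only; the real module also decrypts and decompresses the result):

```python
# Sketch of reassembling chunked FedAuth[n] cookies into the original
# session token before decoding (illustrative only).
def reassemble(cookies):
    parts = [cookies["FedAuth"]]
    n = 1
    while f"FedAuth{n}" in cookies:
        parts.append(cookies[f"FedAuth{n}"])
        n += 1
    return "".join(parts)

print(reassemble({"FedAuth": "abc", "FedAuth1": "def"}))  # abcdef
```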

 


Single Sign-Out

 

Figure 24 shows the single sign-out message sequence for the
browser-based scenario.

 

Participants: John : Browser; a-Expense (RP); a-Order (RP); Adatum
Simulated Issuer (IdP).

1. GET /a-Expense as an anonymous user. Response: HTTP 302 (redirect
   to the issuer).
2. GET /Adatum.SimulatedIssuer – wsignin1.0. Response: HTTP 200 –
   display the login page.
3. POST Adatum.SimulatedIssuer. The issuer creates a WS-Federation
   token. Response: add a-Expense to the AdatumClaimsRPStsSiteCookie
   – HTTP 200.
4. POST the WS-Federation token to a-Expense. The RP creates a
   ClaimsPrincipal. Response: return the FedAuth cookie.
5. GET /a-Expense. The RP authorizes the request. Response: HTTP 200
   – display data.
6. Click the link to visit a-Order – GET /a-Order as an anonymous
   user. Response: HTTP 302 (redirect to the issuer).
7. GET Adatum.SimulatedIssuer – wsignin1.0. The user is already
   authenticated, so the issuer returns a WS-Federation token.
   Response: add a-Order to the AdatumClaimsRPStsSiteCookie –
   HTTP 200.
8. POST the WS-Federation token to a-Order. The RP creates a
   ClaimsPrincipal. Response: return the FedAuth cookie.
9. GET a-Order. The RP authorizes the request. Response: HTTP 200 –
   display data.
10. Click the Logout link – POST /a-Order. Response: delete the
    FedAuth cookie – HTTP 302.
11. GET Adatum.SimulatedIssuer – wsignout1.0. Response: HTTP 302 –
    redirect to the sign-out page.
12. GET /Adatum.SimulatedIssuer/SignOut.aspx – wsignout1.0. The
    issuer signs out from any IdPs. Response: delete the
    AdatumClaimsRPStsSiteCookie – HTTP 200.
13. GET /a-Expense – wsignoutcleanup1.0. Response: delete the
    FedAuth cookie – HTTP 200.
14. GET /a-Order – wsignoutcleanup1.0. Response: HTTP 200 – the
    FedAuth cookie was deleted in step 10.

In steps 13 and 14, the URLs are invoked from IMG tags in the page
returned from the issuer in step 12.

figure 24
Message sequence for single sign-out in the browser-based scenario

 


Figure 25 shows the key traffic generated by the browser. For

reasons of clarity, we have removed some messages from the list.

 

figure 25

HTTP traffic

 

The numbers in the screenshot correspond to the steps in the message
diagram. In this sample, the names of the two relying party
applications are a-Expense.ClaimsAware and a-Order.ClaimsAware and
they are running on the local machine. The name of the mock issuer
that takes the place of ADFS is Adatum.SimulatedIssuer.1 and it is
also running locally. The sample illustrates a user signing in first
to a-Expense.ClaimsAware, then accessing the a-Order.ClaimsAware
application, and then initiating single sign-out from a link in the
a-Order.ClaimsAware application.

 


STEP 1

The anonymous user browses to a-Expense.ClaimsAware, and because
there is no established security session, the
WSFederatedAuthenticationModule (FAM) redirects the browser to the
issuer which, in this example, is located at
https://localhost/Adatum.SimulatedIssuer.1/.

 

figure 26

Redirect to the issuer

 

As part of the request URL, there are four query string parameters:
wa (the action to execute, which is wsignin1.0), wtrealm (the
relying party that this token applies to, which is
a-Expense.ClaimsAware), wctx (context data, such as a return URL
that will be propagated among the different parties), and wct (a
time stamp).

 

figure 27

WS-Federation data sent to the issuer

 

STEP 2

The simulated issuer allows the user to select a User to sign in as for

the session; in this example the user chooses to sign in as John.

 


STEP 3

The simulated issuer stores the name of the relying party (which it
can use in the log-out process) in a cookie named
AdatumClaimsRPStsSiteCookie, and details of the user in the
.WINAUTH cookie.
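The issuer's bookkeeping can be sketched as follows; tracking each relying party lets the issuer later send every one of them a wsignoutcleanup1.0 request during single sign-out (the helper names are hypothetical):

```python
# Sketch of tracking relying parties in a cookie so single sign-out
# can later notify each one with wsignoutcleanup1.0 (illustrative).
def add_relying_party(rp_cookie, rp):
    rps = rp_cookie.split(";") if rp_cookie else []
    if rp not in rps:
        rps.append(rp)
    return ";".join(rps)

def signout_cleanup_urls(rp_cookie):
    return [f"{rp}?wa=wsignoutcleanup1.0"
            for rp in rp_cookie.split(";") if rp]

cookie = add_relying_party("", "https://localhost/a-Expense/")
cookie = add_relying_party(cookie, "https://localhost/a-Order/")
print(signout_cleanup_urls(cookie))
```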

 

figure 28

Cookies containing the user ID and a list of relying parties

 

The simulated issuer then posts the token back to the
a-Expense.ClaimsAware application using a JavaScript timer, passing
the WS-Federation token in the wresult field.

 

figure 29

Sending the WS-Federation token to the relying party

 


STEP 4

The relying party verifies the token, instantiates a ClaimsPrincipal
object, and saves the claim data in a cookie named FedAuth. The
application sends an HTTP 302 to redirect the browser to the
a-Expense.ClaimsAware website.

 

figure 30

Creating the FedAuth cookie in the a-Expense.ClaimsAware application

 

STEP 5

The a-Expense.ClaimsAware application uses the claims data stored in

the FedAuth cookie to apply the authorization rules that determine

which records John is permitted to view.
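As an illustration of this last step, the sketch below applies a simple claims-driven rule. The claim types, values, and records are hypothetical, not taken from the sample application:

```python
# Hypothetical claims cached in the FedAuth cookie after sign-in.
claims = {"name": "john", "role": "Employee", "costcenter": "31023"}

# Hypothetical expense records held by the application.
expense_reports = [
    {"id": 1, "owner": "john", "costcenter": "31023"},
    {"id": 2, "owner": "mary", "costcenter": "31023"},
    {"id": 3, "owner": "rick", "costcenter": "88991"},
]

def visible_reports(claims, reports):
    """Managers see every record in their cost center;
    other users see only the records they own."""
    if claims["role"] == "Manager":
        return [r for r in reports if r["costcenter"] == claims["costcenter"]]
    return [r for r in reports if r["owner"] == claims["name"]]

johns_view = visible_reports(claims, expense_reports)  # only record 1
```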

 


 

STEP 6

John clicks on the link to visit the a-Order.ClaimsAware application.

From the perspective of the application, the request is from an

anonymous user, so it redirects the browser to the simulated issuer.

 

figure 31

Redirecting to the issuer

 

As part of the request URL, there are four query string parameters:

wa (the action to execute, which is wsignin1.0), wtrealm (the relying

party that this token applies to, which is a-Order.ClaimsAware), wctx

(context data, such as a return URL that will be propagated among the

different parties), and wct (a time stamp).

 

figure 32

WS-Federation data sent to the issuer

 


 

STEP 7

The simulated issuer recognizes that John is already authenticated

because the browser sends the .WINAUTH cookie.

 

figure 33

The browser sends the .WINAUTH cookie to the issuer

 

The application updates the AdatumClaimsRPStsSiteCookie with details of the new relying party application, and posts a WS-Federation token back to the relying party.

 

figure 34

The browser updates the cookie with the new relying party

 


 

figure 35

The issuer posts the WS-Federation token to the relying party

 

STEP 8

The relying party verifies the token, instantiates a ClaimsPrincipal

object, and saves the claim data in a cookie named FedAuth. The ap-

plication sends an HTTP 302 to redirect the browser to the a-Order.

ClaimsAware website.

 

figure 36

The a-Order.ClaimsAware site creates a FedAuth cookie

 

STEP 9

The a-Order.ClaimsAware application uses the claims data stored in

the FedAuth cookie to apply the authorization rules that determine

which records John is permitted to view.

 


 

STEP 10

John clicks on the Logout link in the a-Order.ClaimsAware applica-

tion. The application deletes the FedAuth cookie and redirects the

browser to the simulated issuer to complete the sign-out process.

 

figure 37

Deleting the FedAuth cookie and redirecting to the issuer

 

STEP 11

The simulated issuer redirects the browser to itself, sending a WS-

Federation wsignout1.0 command.

 

figure 38

Sending the wsignout1.0 command

 


 

STEP 12

The simulated issuer signs out from any identity providers and deletes

the contents of the AdatumClaimsRPStsSiteCookie cookie.

 

figure 39

Clearing the cookie with the list of relying parties

 

STEPS 13 AND 14

The simulated issuer uses the list of relying parties from the Adatum-

ClaimsRPStsSiteCookie cookie to construct a list of image URLs:

 

<img src='https://localhost/a-Expense.ClaimsAware/?wa=wsignoutcleanup1.0' />
<img src='https://localhost/a-Order.ClaimsAware/?wa=wsignoutcleanup1.0' />

 

These URLs pass the WS-Federation wsignoutcleanup1.0 com-

mand to each of the relying party applications, giving them the op-

portunity to complete the sign-out process in the application and

perform any other necessary cleanup.
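A minimal sketch of how an issuer might render these cleanup URLs from its list of relying parties (the function name is invented for illustration):

```python
def signout_cleanup_tags(relying_parties):
    """Render one <img> element per relying party; requesting each image
    delivers the wsignoutcleanup1.0 command to that application so it can
    delete its own session (FedAuth) cookie."""
    return ["<img src='{0}?wa=wsignoutcleanup1.0' />".format(rp)
            for rp in relying_parties]

tags = signout_cleanup_tags([
    "https://localhost/a-Expense.ClaimsAware/",
    "https://localhost/a-Order.ClaimsAware/",
])
```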

 


 

figure 40

Clearing the FedAuth cookie in the a-Expense.ClaimsAware application

 

figure 41

The FedAuth cookie was cleared for the a-Order.ClaimsAware application in step 10

 


 

Appendix C Industry Standards

 

This appendix lists the industry standards that are discussed in this

book.

 

Security Assertion Markup Language (SAML)

 

For more information about SAML, see the following:

•     The OASIS Standard specification, “Assertions and Protocol for

the OASIS Security Assertion Markup Language (SAML) V1.1”

http://www.oasis-open.org/committees/download.php/3406/

oasis-sstc-saml-core-1.1.pdf

 

(Chapter 1, “An Introduction to Claims,” and Chapter 2, “Claims-

Based Architectures,” cover SAML assertions.)

 

Security Association Management Protocol

(SAMP) and Internet Security Association and

Key Management Protocol (ISAKMP)

 

For more information about these protocols, see the following:

•     The IETF specification, RFC 2408, “Internet Security Association and Key Management Protocol (ISAKMP)”
http://tools.ietf.org/html/rfc2408

 

WS-Federation

 

For more information about WS-Federation, see the following:

•     The OASIS Standard specification,

http://docs.oasis-open.org/wsfed/federation/v1.2/

•     “Understanding WS-Federation” on MSDN®

http://msdn.microsoft.com/en-us/library/bb498017.aspx

 


 


 

WS-Federation: Passive Requestor Profile

 

For more information about WS-Federation Passive Requestor

Profile, see the following:

•     Section 13 of the OASIS Standard specification, “Web Services

Federation Language (WS-Federation) Version 1.2”
http://docs.oasis-open.org/wsfed/federation/v1.2/os/ws-federation-1.2-spec-os.html#_Toc223175002

•     “WS-Federation: Passive Requestor Profile” on MSDN

http://msdn.microsoft.com/en-us/library/bb608217.aspx

 

WS-Security

 

For more information about WS-Security, see the following:

•     The OASIS Standard specification, “Web Services Security:

SOAP Message Security 1.1 (WS-Security 2004)”

http://docs.oasis-open.org/wss/v1.1/wss-v1.1-spec-os-SOAPMessageSecurity.pdf

 

WS-SecureConversation

 

For more information about WS-SecureConversation, see the following:

•     The OASIS Standard specification, “WS-SecureConversation 1.3”

http://docs.oasis-open.org/ws-sx/ws-secureconversation/v1.3/

ws-secureconversation.pdf

 

WS-Trust

 

For more information about WS-Trust, see the following:

•     The OASIS Standard specification, “WS-Trust 1.3”

http://docs.oasis-open.org/ws-sx/ws-trust/200512/ws-trust-

1.3-os.html

 

XML Encryption

 

For more information about XML Encryption (used to encrypt XML content such as security tokens), see the following:
•     The W3C Recommendation, “XML Encryption Syntax and Processing”
http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/

 


 

Appendix D Certificates

 

This appendix lists the digital certificates that are used in claims-based

applications. To see this in table form, see “Claims Based Identity &

Access Control Guide” on CodePlex (http://claimsid.codeplex.com).

 

Certificates for Browser-Based Applications

 

In browser-based scenarios, you will find certificates used on the is-

suer and on the computer that hosts the web application. The client

computer does not store certificates.

 

On the Issuer (Browser Scenario)

In browser-based scenarios, you will find the following certificates on

the issuer.

 

Certificate for TLS/SSL (Issuer, Browser Scenario)

The Transport Layer Security protocol/Secure Sockets Layer protocol

(TLS/SSL) uses a certificate to protect the communication with the

issuer—for example, for the credentials transmitted to it. The purpose

is to prevent man-in-the-middle attacks, eavesdropping, and replay

attacks.

Requirements: The subject name in the certificate must match

the Domain Name System (DNS) name of the host that provides the

certificate. Browsers will generally check that the certificate has a

chain of trust to one of the root authorities trusted by the browser.

Recommended certificate store: LocalMachine\My

Example: CN=login.adatumpharma.com

 

Certificate for Token Signing (Issuer, Browser Scenario)

The issuer’s certificate for token signing is used to generate an XML

digital signature to ensure token integrity and source verification.

 


 


 

Requirements: The worker process account that runs the issuer

needs access to the private key of the certificate.

Recommended certificate store: LocalMachine\My and if Micro-

soft® Active Directory® Federation Services (ADFS) 2.0 is the issuer,

the ADFS 2.0 database will keep a copy.

Example: CN=adatumpharma-tokensign.com

 

The subject name on the certificate does not need to match a DNS

name. It’s a recommended practice to name the certificate in a way

that describes its purpose.

 

Optional Certificate for Token Encryption

(Issuer, Browser Scenario)

The certificate for token encryption secures the SAML token. Encrypt-

ing tokens is optional, but it is recommended. You may opt to rely on

TLS/SSL, which will secure the whole channel.

Requirements: Only the public key is required. The private key is

owned by the relying party for decrypting.

Recommended certificate store: LocalMachine\TrustedPeople,

LocalMachine\AddressBook or if ADFS 2.0 is the issuer, the ADFS 2.0

database will keep it.

Example: CN=a-expense.adatumpharma-tokenencrypt.com

 

Encrypting the token is optional, but it is generally recommended.

Using TLS/SSL is already a measure to ensure the confidentiality of

the token in transit. This is an extra security measure that could be

used in cases where claim values are confidential.

 

On the Web Application Server

In browser-based scenarios, you will find the following certificates on

the web application server.

 

Certificate for TLS/SSL (Web Server, Browser Scenario)

TLS/SSL uses a certificate to protect the communication with the

web application server—for example, for the SAML token posted to

it. The purpose is to prevent man-in-the-middle attacks, eavesdrop-

ping, and replay attacks.

Requirements: The subject name in the certificate must match

the DNS name of the host that provides the certificate. Browsers will

generally check that the certificate has a chain of trust to one of the

root authorities trusted by the browser.

Recommended certificate store: LocalMachine\My

Example: CN=a-expense.adatumpharma.com

 


 

Token Signature Verification

(Web Server, Browser Scenario)

The web application server has the thumbprint of the certificate that

is used to verify the SAML token signature. The issuer embeds the

certificate in each digitally signed security token. The web application

server checks that the digital signature’s thumbprint (a hash code)

matches that of the signing certificate. Windows® Identity Founda-

tion (WIF) and ADFS embed the public key in the token by default.

Requirements: The thumbprint of the issuer’s certificate should

be present in the <issuerNameRegistry> section of the application’s

Web.config file.

Recommended certificate store: None

Example: d2316a731b59683e744109278c80e2614503b17e (This is the thumbprint of the certificate with CN=adatumpharma-tokensign.com.)
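As a sketch, a WIF Web.config fragment that registers this thumbprint might look like the following; the assembly version and issuer name shown are illustrative:

```xml
<microsoft.identityModel>
  <service>
    <issuerNameRegistry
        type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry,
              Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral,
              PublicKeyToken=31bf3856ad364e35">
      <trustedIssuers>
        <!-- Thumbprint of the certificate with CN=adatumpharma-tokensign.com -->
        <add thumbprint="d2316a731b59683e744109278c80e2614503b17e"
             name="adatumpharma-tokensign.com" />
      </trustedIssuers>
    </issuerNameRegistry>
  </service>
</microsoft.identityModel>
```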

 

If the certificate (issuer public key) is embedded in the token, the

signature verification is done automatically by WIF. If not, an

IssuerTokenResolver needs to be configured to find the public key.

This is common in interop scenarios; however, WIF and ADFS will

always embed the full public key.

 

Token Signature Chain of Trust Verification

(Web Server, Browser Scenario)

The web application server has a certificate that is used to verify the

trusted certificate chain for the issuer’s token signing certificate.

Requirements: The public key of the issuer certificate should be

installed in LocalMachine\TrustedPeople certificate store unless the

certificate was issued by a trusted root authority.

Recommended certificate store: LocalMachine\TrustedPeople

only if the certificate was not issued by a trusted root authority.

 

The chain-of-trust verification is controlled by an attribute of the

<certificateValidation> element of the WIF configuration section

of the application’s Web.config file. WIF has this setting turned on

by default.

 

Optional Token Decryption

(Web Server, Browser Scenario)

The web application has a certificate that it uses to decrypt the SAML

token that it receives from an issuer (if it was encrypted). The web

application has both public and private keys. The issuer has only the

public key.

 


 

Requirements: The certificate used to decrypt the SAML token

should be configured in the <serviceCertificate> element of the

<microsoft.identityModel> section of the application’s Web.config

file. Also, the App Pool account of the website should have permission

to read the private key of the certificate.

Recommended certificate store: LocalMachine\My

Example: CN=a-expense.adatumpharma-tokenencrypt.com

 

Cookie Encryption/Decryption

(Web Server, Browser Scenario)

The web application server has a certificate that it uses to ensure the

confidentiality of the session cookie created to cache the token claims

for the whole user session.

Requirements: The default WIF mechanism uses the Data Pro-

tection API (DPAPI) to encrypt the cookie. This requires access to a

private key stored in the profile of the App Pool account. You must

ensure that the account has the profile loaded by setting the Load

User Profile to true in the App Pool configuration.

Recommended certificate store: None

 

A more web farm-friendly option is to use a different cookie transform to encrypt/decrypt the token (such as RsaEncryptionCookieTransform) that uses X.509 certificates instead of DPAPI.

 

Certificates for Active Clients

 

In scenarios with active clients that interact with web services, you

will find certificates used on the issuer, on the machine that hosts the

web service, and on the client machine.

 

On the Issuer (Active Scenario)

In active client scenarios, you will find the following certificates on

the issuer.

 

Certificate for Transport Security (TLS/SSL)

(Issuer, Active Scenario)

TLS/SSL uses a certificate to protect the communication with the

issuer—for example, for the credentials transmitted to it. The purpose

is to avoid man-in-the-middle attacks, eavesdropping, and replay at-

tacks.

Requirements: The subject name in the certificate must match

the DNS name of the host that provides the certificate. Browsers will

generally check that the certificate has a chain of trust to one of the

root authorities trusted by the browser.

 


 

Recommended certificate store: LocalMachine\My

Example: CN=login.adatumpharma.com

 

Certificate for Message Security (Issuer, Active Scenario)

A certificate will be used to protect the communication between the

client and the issuer at the message level.

Requirements: For a custom issuer that you implement, the ser-

vice credentials are configured in the Windows Communication

Foundation (WCF) issuer—for example, through the <service

Certificate> section of the issuer’s Web.config file.

For an ADFS 2.0 issuer, this is configured using the Microsoft

Management Console (MMC).

Recommended certificate store: LocalMachine\My or ADFS

database

Example: CN=login.adatumpharma.com

 

Certificate for Token Signing (Issuer, Active Scenario)

The issuer’s certificate for token signing is used to generate an XML

digital signature to ensure token integrity and source verification.

Requirements: The worker process account that runs the issuer

needs access to the private key of the certificate.

Recommended certificate store: LocalMachine\My and the

ADFS 2.0 database

Example: CN=adatumpharma-tokensign.com

 

The subject name on the certificate does not need to match a DNS

name. It’s a recommended practice to name the certificate in a way

that describes its purpose.

 

Certificate for Token Encryption (Issuer, Active Scenario)

The certificate for token encryption secures the SAML token. This

certificate is required when an active client is used.

Requirements: Only the public key is required on the client. The

relying party owns the private key, which it uses to decrypt the SAML

token.

Recommended certificate store: LocalMachine\TrustedPeople,

LocalMachine\AddressBook or the ADFS 2.0 database

Example: CN=a-expense.adatumpharma-tokenencrypt.com

 

Encrypting the token is optional, but it is generally recommended.

The use of TLS/SSL is already a measure to ensure the confidentiality

of the token in transit. This is an extra security measure that could

be used in cases where claim values must be kept confidential.

 


 

On the Web Service Host

These are the certificates used on the machine that hosts the web

service.

 

Certificate for Transport Security (TLS/SSL)

(Web Service Host, Active Scenario)

TLS/SSL uses a certificate to protect the communication with the

web service—for example, for the SAML token sent to it by an issuer.

The purpose is to mitigate and prevent man-in-the-middle attacks,

eavesdropping, and replay attacks.

Requirements: The subject name in the certificate must match

the DNS name of the host that provides the certificate. Active clients

will generally check that the certificate has a chain of trust to one of

the root authorities trusted by that client.

Recommended certificate store: LocalMachine\My

Example: CN=a-expense-svc.adatumpharma.com

 

Certificate for Message Security

(Web Service Host, Active Scenario)

A certificate will be used to protect the communication between the

client and the web service at the message level.

Requirements: The service credentials are configured in the WCF

web service—for example, through the <serviceCertificate> section

of the web service’s Web.config file.

Recommended certificate store: LocalMachine\My

Example: CN=a-expense-svc.adatumpharma.com

 

Token Signature Verification

(Web Service Host, Active Scenario)

The web service host has the thumbprint of the certificate that is

used to verify the SAML token signature. The issuer embeds the cer-

tificate in each digitally signed security token. The web service host

checks that the digital signature’s thumbprint (a hash code) matches

that of the signing certificate. WIF and ADFS embed the public key in

the token by default.

Requirements: The thumbprint of the issuer’s certificate should

be present in the <issuerNameRegistry> section of the web service’s

Web.config file.

Recommended certificate store: None

Example: d2316a731b59683e744109278c80e2614503b17e (This is the thumbprint of the certificate with CN=adatumpharma-tokensign.com.)

 

If the certificate (issuer public key) is embedded in the token, the

signature verification is done automatically by WIF. If not, an

 


 

IssuerTokenResolver needs to be configured to find the public key.

This is common in interop scenarios; however, WIF and ADFS will

always embed the full public key.

 

Token Decryption (Web Service Host, Active Scenario)

The web service host has a certificate that it uses to decrypt the

SAML token that it receives from an issuer. The web application has

both public and private keys. The issuer has only the public key.

Requirements: The certificate used to decrypt the SAML token

should be configured in the <serviceCertificate> element of the

<microsoft.identityModel> section of the web service’s Web.config

file. Also, the App Pool account of the web server should have permis-

sion to read the private key of the certificate.

Recommended certificate store: LocalMachine\My

Example: CN=a-expense-svc.adatumpharma-tokenencrypt.com

 

Token Signature Chain Trust Verification (Web Service

Host, Active Scenario)

The web service host has a certificate that is used to verify the

trusted certificate chain for the issuer’s token signing certificate.

Requirements: The public key of the issuer certificate should be

installed in LocalMachine\TrustedPeople certificate store unless the

certificate was issued by a trusted root authority.

Recommended certificate store: LocalMachine\TrustedPeople

only if the certificate was not issued by a trusted root authority.

 

The chain-of-trust verification is controlled by an attribute of the

<certificateValidation> element of the WIF configuration section

of the web service’s Web.config file. WIF has this setting turned on

by default.

 

On the Active Client Host

These are the certificates that are used on the active client computer.

 

Certificate for Message Security (Active Client Host)

A certificate will be used to protect the communication between the

client and the web service or issuer at the message level.

Requirements: If negotiateServiceCredentials is enabled, the

client will obtain the public key of the web service or issuer at run

time. If not, the certificate for message security is configured in the

WCF client by setting the ClientCredentials.ServiceCertificate

property at run time or configuring the <serviceCertificate> element

of the active client’s App.config file. The service credentials are con-

figured in the WCF web service—for example, through the <service

 


 

Certificate> section of the web service’s Web.config file.

Recommended certificate store: LocalMachine\TrustedPeople or

LocalMachine\AddressBook

Example: CN=a-expense-svc.adatumpharma.com

 


 

Appendix E Windows Azure

AppFabric Access

Control Service

 

This appendix provides background information about ACS and

shows you how to obtain and configure a Windows Azure™ AppFabric Access Control Service (ACS) account. ACS makes it easy to

authenticate and authorize website, application, and service users and

is compatible with popular programming and runtime environments.

It allows authentication to take place against many popular web and

enterprise identity providers. Users are presented with a configurable

page listing the identity providers that are configured for the applica-

tion, which assists in the home realm discovery (HRD) process by

permitting the user to select the appropriate identity provider.

ACS also integrates with Windows Identity Foundation (WIF)

tools and environments and Microsoft Active Directory® Federation

Services (ADFS) 2.0. It can accept SAML 1.1, SAML 2.0, and Simple

Web Token (SWT) formatted tokens, and will issue a SAML 1.1,

SAML 2.0, or SWT token. ACS supports a range of protocols that

includes OAuth, OpenID, WS-Federation, and WS-Trust. Rules con-

figured within ACS can perform protocol transition and claims trans-

formation as required by the website, application, or service.

ACS is configured through the service interface using an OData-

based management API, or through the web portal that provides a

graphical and interactive administration experience.

This appendix discusses the ways that ACS can be used by show-

ing several scenarios and the corresponding message sequences. It also

contains information about creating an ACS issuer service instance,

configuring applications to use this service instance, creating custom

home realm discovery pages, error handling, integrating with ADFS,

security considerations, and troubleshooting ACS operations.

 


 


 

What Does ACS Do?

 

ACS can be used to implement federated authentication and authori-

zation by acting as a token issuer that authenticates users by trusting

one or more identity providers. The following list contains definitions

of the important entities and concepts involved in this process:

•     Realm or Domain: an area or scope for which a specific identity

provider is authoritative. It is not limited to only an Active

Directory directory service domain or any similar enterprise

mechanism. For example, the Google identity provider service is

authoritative for all users in the Google realm or domain (users

who have an account with Google); but it is not authoritative

for users in the Windows Live® realm or domain (users with an

account on the Windows Live network of Internet services).

•     Home Realm Discovery: the process whereby the realm or

domain of a user is identified so that the request for authentica-

tion can be forwarded to the appropriate identity provider. This

may be accomplished by displaying a list of available identity

providers and allowing the user to choose the appropriate one

(one that will be able to authenticate the user). Alternatively, it

may be achieved by asking the user to provide an email address,

and then using the domain of that address to identify the home

realm or domain of that user for authentication purposes.

•     Identity Provider: a service or site that accepts credentials from

a user. These credentials prove that the user has a valid account

or identity. ACS redirects users to the appropriate identity

provider that can authenticate that user and issue a token

containing the claims (a specific set of information) about that

user. The claims may include only a user identifier, or may

include other details such as the user name, email address, and

any other information that the user agrees to share. An identity

provider is authoritative when the authentication takes place

for a user within the provider’s realm or domain.

•     Security Token Service (STS) or Token Issuer: a service that

issues tokens containing claims. ACS is an STS in that it issues

tokens to relying parties that use ACS to perform authentica-

tion. The STS must trust the identity provider(s) it uses.

•     Relying Party: an application, website, or service that uses a

token issuer or STS to authenticate a user. The relying party

trusts the STS to issue the token it needs. There might be

several trust relationships in a chain. For example, an application

trusts STS A, which in turn trusts another STS B. The applica-

tion is a relying party to STS A, and STS A is a relying party to

STS B.

 


 

•     Trust Relationship: a configuration whereby one party trusts

another party to the extent that it accepts the claims for users

that the other party has authenticated. For example, in the

scope of this appendix, ACS must trust the identity providers it

uses and the relying party must trust ACS.

•     Transformation Rules: operations that are performed on the

claims in a token received from an STS when generating the

token that this entity will issue. ACS includes a rules engine that

can perform a range of operations on the claims in the source

token received from an identity provider or another STS. The

rules can copy, process, filter, or add claims before inserting

them into a token that is issued to the relying party.

•     Protocol Transition: the process in an STS of issuing a token for

a relying party when the original token came from another STS

that implements different token negotiation protocols. For

example, ACS may receive a token from an identity provider

using OpenID, but issue the token to the relying party using the

WS-Federation protocol.
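As a small illustration of email-based home realm discovery, the sketch below maps an email domain to an identity provider; the provider table is hypothetical and would come from the issuer's configuration in practice:

```python
# Hypothetical mapping from email domains to configured identity providers.
IDENTITY_PROVIDERS = {
    "gmail.com": "Google",
    "hotmail.com": "Windows Live",
    "adatumpharma.com": "Adatum ADFS",
}

def discover_home_realm(email):
    """Return the identity provider for the user's email domain,
    or None when the user must choose from a list instead."""
    domain = email.rsplit("@", 1)[-1].lower()
    return IDENTITY_PROVIDERS.get(domain)
```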

 

In essence, when the user is requesting authentication in a web

browser, ACS receives a request for authentication from a relying

party and presents a home realm discovery page. The user selects an

identity provider, and ACS redirects the user to that identity provider’s

login page. The user logs in and is returned to ACS with a token con-

taining the claims this user has agreed to share in that particular iden-

tity provider.

ACS then applies the appropriate rules to transform the claims,

and creates a new token containing the transformed claims. It then

redirects the user back to the relying party with the ACS token. The

relying party can use the claims in this token to apply authorization

rules appropriate for this user.

The process for service authentication is different because there

is no user interaction. Instead, the service must first obtain a suitable

token from an identity provider, present this token to ACS for trans-

formation, and then present the token that ACS issues to the relying

party. The following sections of this chapter describe the message

sequence in more detail, and explain how you can configure ACS to

perform federated authentication.

 

Message Sequences for ACS

 

ACS can be used as a stand-alone claims issuer, but the typical sce-

nario is to combine it with one or more local issuers such as ADFS or

custom issuers. The sequence of messages and redirections varies

 


 

depending on the specific scenario; however, the following are some

of the more common scenarios for ACS.

 

ACS Authenticating Users of a Website

ACS can be used to authenticate visitors to a website when these

visitors wish to use a social identity provider or another type of iden-

tity provider that ACS supports. Figure 1 shows a simplified view of

the sequence of requests that occur.

 

[Diagram: the web browser, the claims-aware website, the issuer (ACS), and the identity providers (Windows Live, Google, Facebook, and others), connected by the numbered requests and redirections described below.]

 

figure 1

ACS authenticating users of a website

 

On accessing the application (1), the visitor’s web browser is redi-

rected to ACS, the trusted source of security tokens (2 and 3). ACS

displays the home realm discovery page (4) containing a list of iden-

tity providers configured for the website or web application. The user

selects an identity provider and ACS redirects the visitor’s web

browser to that identity provider’s login page (5).

After entering the required credentials, the visitor’s browser is

eventually redirected back to ACS (6) with the identity provider’s

token in the request (7). ACS performs any necessary transformation

of the claims in the identity provider’s token using rules configured for

the website or application, and then returns a token containing these

claims (8). The visitor’s browser is then redirected to the claims-aware

website that was originally accessed (9).

This scenario is demonstrated in Chapter 7, “Federated Identity

with Multiple Partners and Windows Azure Access Control Service.”

 


 

ACS Authenticating Services, Smart

Clients, and Mobile Devices

ACS can be used to authenticate service requests for web services,

smart clients, and mobile devices such as Windows Phone when the

service uses a social identity provider or another type of identity

provider that ACS supports. Figure 2 shows a simplified view of the

sequence of requests that occur.

 

[Diagram: the smart client or service, the claims-aware service, the issuer (ACS), and the identity providers (Windows Live, Google, Facebook, and others), connected by the numbered requests described below.]

 

figure 2

ACS authenticating services, smart clients, and SharePoint BCS

 

Because the service cannot use the ACS home realm discovery

web page, it must be pre-configured to use the required identity pro-

vider or may query ACS to discover the trusted STS to use. The service

first authenticates with the appropriate identity provider (1), which

returns a token (2) that the service sends to ACS (3). ACS performs

any necessary transformation of the claims in the identity provider’s

token using rules configured for the service, and then returns a token

containing these claims (4). The service then sends the token received

from ACS to the relying party service or resource (5).
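The token exchange in steps 3 to 5 can be sketched as a simple HTTP form post. The following is an illustrative Python sketch (the guide's own samples are in C#); the WRAP endpoint path and parameter names follow the OAuth WRAP profile that ACS supports, but treat them as assumptions to verify against your own service namespace.

```python
import urllib.parse

# Hypothetical namespace; the WRAP endpoint path is an assumption
# based on the ACS v2 convention.
ACS_TOKEN_ENDPOINT = "https://fabrikam.accesscontrol.appfabric.com/WRAPv0.9/"

def build_wrap_request(idp_token: str, scope: str) -> str:
    """Form-encoded body for exchanging an identity provider token
    for an ACS token (step 3 in the sequence)."""
    return urllib.parse.urlencode({
        "wrap_scope": scope,             # realm of the relying party service
        "wrap_assertion_format": "SWT",  # format of the incoming token
        "wrap_assertion": idp_token,     # token from the identity provider
    })

def parse_wrap_response(body: str) -> str:
    """Extract the ACS token from the WRAP response (step 4)."""
    fields = urllib.parse.parse_qs(body)
    return fields["wrap_access_token"][0]

def authorization_header(acs_token: str) -> str:
    """Header the client attaches when calling the relying party (step 5)."""
    return 'WRAP access_token="%s"' % acs_token
```

The body returned by build_wrap_request would be posted to the token endpoint; the ACS token extracted from the response is then carried in the Authorization header of the call to the relying party service.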

This scenario is demonstrated in Chapter 8, “Claims Enabling Web

Services.”

 

———————– Page 337———————–

 

300 appendix e

 

Combining ACS and ADFS for Users

of a Website

ACS can be used to authenticate visitors to a website when those

visitors first access an ADFS STS to establish their identity; the

ADFS STS trusts ACS so that visitors who wish to use a social

identity provider, or another type of identity provider, can be

authenticated. Figure 3 shows a simplified view of the sequence of requests

that occur.

 

[Figure: the web browser accesses the claims-aware website (1) and
is redirected to the ADFS issuer (2, 3), which redirects it to the
ACS issuer (4, 5). The visitor selects an identity provider such as
Windows Live, Google, or Facebook on the home realm discovery page
(6) and authenticates with it (7). The provider issues a token and
redirects the browser back to ACS (8, 9), which redirects it to
ADFS with an ACS token (10, 11). ADFS returns its own token (12),
which the browser sends to the website (13).]

 

figure 3

ACS and ADFS authenticating users of a website

 

Upon accessing the application (1), the visitor’s web browser is

redirected to ADFS (2 and 3). ADFS contains preconfigured rules

that redirect the visitor to ACS (4 and 5), which displays the home

realm discovery page (6) containing a list of identity providers config-

ured for the website or web application. The user selects an identity

provider and ACS redirects the visitor’s web browser to that identity

provider’s login page (7).

After entering the required credentials, the visitor’s browser is

redirected back to ACS with the identity provider’s token in the re-

quest (8 and 9). ACS performs any necessary transformation of the

claims in the identity provider’s token using rules configured for the

website or application, and then redirects the browser to ADFS with

a token containing these claims (10). ADFS receives the token (11),

 

———————– Page 338———————–

 


 

performs any additional transformations on the claims, and returns a

token (12). The visitor’s browser is then redirected to the claims-aware

website that was originally accessed (13).

 

Combining ACS and ADFS for Services,

Smart Clients, and SharePoint BCS

ACS can be used to authenticate service requests for web services,

smart clients, and Microsoft SharePoint® Business Connectivity Ser-

vices (BCS) applications when the service uses an ADFS STS as the

token issuer, but the service requires a token provided by ACS. Figure

4 shows a simplified view of the sequence of requests that occur.

 

[Figure: the smart client or service authenticates through the ADFS
issuer, backed by Active Directory (1), receives an ADFS token (2),
sends that token to the ACS issuer, which trusts ADFS (3), receives
an ACS token in return (4), and sends the ACS token to the
claims-aware service (5).]

figure 4

ACS authenticating services, smart

clients, and SharePoint BCS

 

The service is preconfigured to use ADFS and Active Directory as

the identity provider. The service first authenticates through ADFS

(1), which returns a token (2) that the service sends to ACS (3). ACS

trusts ADFS, and performs any necessary transformation of the claims

in the token using rules configured for the service; then it returns a

token containing these claims (4). The service then sends the token

received from ACS to the relying party service or resource (5).

This scenario is demonstrated in Chapter 9, “Securing REST

Services.”

 

———————– Page 339———————–

 


 

Creating, Configuring, and Using an ACS Issuer

 

The complete process for creating and configuring an ACS account to

implement a token issuer requires the following steps:

 

1. Access the ACS web portal.

 

2. Create a namespace for the issuer service instance.

 

3. Add the required identity providers to the namespace.

 

4. Configure one or more relying party applications.

 

5. Create claims transformations and pass-through rules.

 

6. Obtain the URIs for the service namespace.

 

7. Configure relying party applications to use ACS.

 

The following sections explain each of these steps in more detail.

 

Step 1: Access the ACS Web Portal

The initial configuration of ACS must be done using the web portal.

This is a Microsoft Silverlight® browser plug-in application that pro-

vides access to the access control, service bus, and cache features of

the Azure AppFabric for your account. You must log into the portal

using a Windows Live ID associated with your Windows Azure ac-

count. If you do not have a Windows Live ID, you must create and

register one first at http://www.live.com. If you do not have a Win-

dows Azure account, you must create one at http://www.microsoft.

com/windowsazure/account/ before you can use ACS.

ACS is a subscription-based service, and you will be charged for

the use of ACS. The cost depends on the type of subscription you

take out. At the time of writing, the standard consumption charge was

$1.99 per 100,000 transactions.

 

Step 2: Create a Namespace for the Issuer Service Instance

After you sign into the ACS web portal, you can begin to configure

your service instance. The first step is to define a namespace for your

service. This is prepended to the ACS URI to provide a unique base

URI for your service instance. You must choose a namespace that is

not already in use anywhere else by ACS (you can check the avail-

ability before you confirm your choice), and choose a country/region

where the service will be hosted.

For example, if you choose the namespace fabrikam, the base URI

for your ACS service instance will be https://fabrikam.accesscontrol.

appfabric.com. Endpoints for applications to use for authentication

will be paths based on this unique URI.
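The construction of the base URI from the namespace can be expressed as a one-line helper; this is an illustrative Python sketch of the naming rule described above.

```python
def acs_base_uri(namespace: str) -> str:
    """Base URI for an ACS service instance, formed by prepending the
    chosen namespace to the ACS domain. Authentication endpoints are
    paths under this URI."""
    return "https://%s.accesscontrol.appfabric.com" % namespace
```

For example, acs_base_uri("fabrikam") yields the base URI shown above for the fabrikam namespace.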

 

———————– Page 340———————–

 


 

After you have created the namespace, you see the main service

management page for your new service instance. This provides quick

access to the configuration settings for the trust relationships (the

relying party applications, identity providers, and rule groups), the

service settings (certificates, keys, and service identities), administra-

tion, and application integration.

You must use the Certificates and Keys page either to upload an

X.509 certificate with a private key for use by ACS when encrypting

tokens, or specify a 256-bit symmetric key. You can also upload a dif-

ferent certificate for signing the tokens if required. You can use cer-

tificates generated by a local certificate authority such as Active Di-

rectory Certificate Services, a certificate obtained from a commercial

certification authority, or (for testing and proof of concept purposes)

self-signed certificates.

 

Step 3: Add the Required Identity Providers to the Namespace

Next, you must specify the identity providers that you will trust to

authenticate requests sent to ACS from applications and users. By

default, Windows Live ID is preconfigured as an identity provider. You

can add additional providers such as Google, Yahoo!, Facebook, your

own or other ADFS issuers, and more. For each one, you can specify

the URL of the login page and an image to display for the identity

provider when the user is presented with a list of trusted providers.

For known identity providers (such as Google, Yahoo!, and Face-

book) these settings are preconfigured and you should consider using

the default settings. If you want to trust another identity provider,

such as a Security Token Service (STS) based at an associated site such

as a partner company, you must enter the login page URL, and option-

ally specify an image to display.

 

By default, ACS uses Windows Live ID as the identity provider to

determine the accounts of ACS administrators. You configure a rule

that identifies administrators through the claims returned by their

identity provider (claim transformation and filtering rules are

described in step 5). You can also use any of the other configured

identity providers to determine which accounts have administrative

rights for this ACS service instance.

 

Step 4: Configure One or More Relying Party Applications

You can now configure the ACS service to recognize and respond to

relying parties. Typically these are the applications and web services

that will send authentication requests to this ACS service instance.

For each relying party you specify:

 

———————– Page 341———————–

 


 

•     A name for the application or service as it will appear in the

authentication portal page where users select an identity

provider.

•     The URIs applicable to this application or service. These include

the realm (URI) for which tokens issued by ACS will be valid,

the URI to redirect the request to after authentication and,

optionally, a different URI to redirect the request to if an

authentication error occurs.

 

It is good practice to always configure the redirection addresses,

even though ACS mandates that they be in the same realm as the

token it delivers, in order to mitigate interception attacks that

reroute the posted token.

 

•     The format, encryption policy, and validity lifetime (in seconds)

for the tokens returned from ACS. By default the format is a

SAML 2.0 token, but other formats such as SAML 1.1 and SWT

are available. SAML 1.1 and SAML 2.0 tokens can be encrypted,

but SWT tokens cannot. If you want to return tokens to a web

service that implements the WS-Trust protocol you must select

a policy that encrypts the token.

•     The binding between this relying party and the identity provid-

ers you previously configured for the service namespace. Each

relying party can trust a different subset of the identity provid-

ers you have configured for the service namespace.

•     The token signing options. By default, tokens are signed using

the certificate for the service namespace, and all relying parties

will use the same certificate. However, if you wish, you can

upload more certificates and allocate them to individual relying

parties.

Each option in the configuration page has a “Learn more” link that

provides more information on that setting. You can, as an alternative,

upload a WS-Federation metadata document that contains the re-

quired settings instead of entering them manually into the portal.

 

If you only configure a single identity provider for a relying party,

ACS will not display the Home Realm Discovery page that shows a

list of configured identity providers. It will just use the identity

provider you configured.

 

———————– Page 342———————–

 


 

Step 5: Create Claims Transformations and Pass-Through Rules

By default, ACS does not include any of the claims it receives from an

identity provider in the token it issues. This ensures that, by default,

claims that might contain sensitive information are not automatically

sent in response to authentication requests. You must create rules

that pass the values in the appropriate claims through to the token

that will be returned to the relying party. These rules can apply a

transformation to the claim values, or simply pass the value received

from the identity provider into the token.

The rules are stored in rule groups. You create a rule group and

give it a name, then create individual rules for this group. The portal

provides a Generate feature that will automatically generate a pass-

through rule for every claim for every configured identity provider. If

you do not require any transformations to take place, (which is typi-

cally the case if the relying party application or service will access ACS

through another STS such as a local ADFS instance), this set of gener-

ated rules will probably suffice as all of the claims mapping and trans-

formations will take place on the local STS issuer.

If you want to perform transformation of claims within ACS, you

must create custom rules. For a custom rule, you specify:

•     The rule conditions that match an input claim from an identity

provider. You can specify that the rule will match on the claim

type, the claim value, or both.

•     For the claim type, you can specify that it will match any claim

type, one of the standard claim types exposed by this identity

provider, or a specific claim type you enter (as a full XML

namespace value for that claim type).

•     For the claim value, you can specify that it will match any value

or a specific value you enter. Claim types and values are case-

sensitive.

•     The rule actions when generating the output claim. You can

specify the output claim type, the output claim value, or both.

•     For the output claim type you can specify a pass-through action

that generates a claim of the same type as the input claim, select

one of the standard claim types exposed by this identity pro-

vider, or enter a specific claim type (as a full XML namespace

value for that claim type).

•     For the output claim value you can choose to pass through the

original value of the claim, or enter a value.

•     The rule description (optional) that helps you to identify the

rule when you come to apply it.
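The matching and output behavior described in the bullets above can be modeled in a few lines. This is an illustrative Python sketch with invented type names, not the schema used by the ACS portal or management API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    claim_type: str
    value: str

@dataclass
class Rule:
    match_type: Optional[str]    # None matches any claim type
    match_value: Optional[str]   # None matches any value
    output_type: Optional[str]   # None passes the input type through
    output_value: Optional[str]  # None passes the input value through

def apply_rule(rule: Rule, claim: Claim) -> Optional[Claim]:
    """Return the output claim if the rule's conditions match the
    input claim, else None. Matching is case-sensitive, as described
    in the text above."""
    if rule.match_type is not None and rule.match_type != claim.claim_type:
        return None
    if rule.match_value is not None and rule.match_value != claim.value:
        return None
    return Claim(rule.output_type or claim.claim_type,
                 rule.output_value or claim.value)
```

A pass-through rule is one where both output fields are left unset, so the input claim is copied unchanged; a transformation rule names a different output type or value.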

 

———————– Page 343———————–

 


 

Step 6: Obtain the URIs for the Service Namespace

After you configure your ACS service instance, you use the Applica-

tion Integration page of the portal to obtain the endpoints to which

relying parties will connect to authenticate requests. This page also

lists the endpoints for the management service (the API for configur-

ing ACS without using the web portal), the OAuth WRAP URI, and

the URIs of the WS-Federation and WS-Metadata Exchange docu-

ments.

 

Step 7: Configure Relying Party Applications to Use ACS

To add an ACS service reference to an application in Microsoft Visual

Studio® development system, you must download and install the

Windows Identity Foundation SDK. This adds a new option to the

Visual Studio menus to allow you to add an STS reference to a project.

It starts a wizard (the FedUtil utility) that asks for the URI of the

WS-Federation document for your ACS service instance, which can be

obtained from the application integration page of the portal in the

previous step. The wizard adds a reference to the Microsoft.IdentityModel

assembly to the project and updates the application configura-

tion file.

If the application is a web application, users will be redirected to

ACS when they access the application, and will see the ACS home

realm discovery page that lists the configured identity providers for

the application that are available. After authenticating with their

chosen identity provider, users will be returned to the application,

which can use the claims in the token returned by ACS to modify its

behavior as appropriate for each user.

For information and links to other resources that describe tech-

niques for using claims and tokens to apply authorization in applica-

tions and services, see “Authorization In Claims Aware Applications

– Role Based and Claims Based Access Control” at http://blogs.msdn.

com/b/alikl/archive/2011/01/21/authorization-in-claims-aware-ap-

plications-role-based-and-claims-based-access-control.aspx.

 

Custom Home Realm Discovery Pages

 

By default, ACS displays a home realm discovery page when a user is

redirected to ACS for authentication. This page contains links to the

identity providers configured for the relying party application, and is

hosted within ACS. If you have configured an ADFS instance as an

identity provider, you can specify email suffixes that are valid for this

ADFS instance, and ACS will display a text box where users can enter

an email address that has one of the valid suffixes. This enables ACS

to determine the home realm for the authenticated user.

 

———————– Page 344———————–

 


 

As an alternative to using the default ACS-hosted login page, you

can create a custom page and host it with your application (or else-

where). The custom page uses the Home Realm Discovery Metadata

Feed exposed by ACS to get the list and details of the supported

identity providers. To make this easier, you can download the example

login page (which is the same as the default page) from ACS and

modify it as required.

If you are integrating ACS with ADFS, the home realm discovery

page will contain a text box where users can enter an email address

that is valid within trusted ADFS domains. ACS will use this to deter-

mine the user’s home realm. You can create a custom page that con-

tains only a text box and does not include the list of configured

identity providers if this is appropriate for your scenario.
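As a sketch of how a custom page might resolve an email suffix to a home realm, the following Python fragment parses a sample of the kind of JSON the metadata feed returns. The field names shown (Name, LoginUrl, EmailAddressSuffixes) are assumptions to verify against the actual feed for your namespace.

```python
import json

# Illustrative sample of a home realm discovery metadata feed entry
# list; the URLs here are placeholders, not real endpoints.
sample_feed = """[
  {"Name": "Windows Live ID",
   "LoginUrl": "https://login.live.com/example",
   "EmailAddressSuffixes": []},
  {"Name": "Fabrikam ADFS",
   "LoginUrl": "https://adfs.fabrikam.com/example",
   "EmailAddressSuffixes": ["fabrikam.com"]}
]"""

def provider_for_email(feed_json: str, email: str):
    """Pick the identity provider whose registered email suffix
    matches the address typed into the home realm discovery text box,
    or None when no suffix matches."""
    suffix = email.rsplit("@", 1)[-1]
    for provider in json.loads(feed_json):
        if suffix in provider.get("EmailAddressSuffixes", []):
            return provider["Name"]
    return None
```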

 

Configuration with the Management Service

API

 

Windows Azure AppFabric exposes a REST-based service API in At-

omPub format that uses X.509 client certificates for authentication.

The URI of the management service for your ACS instance is shown

in the application integration page of the web portal after you con-

figure the instance. You can upload any valid X.509 certificate (in .cer

format) to the ACS portal and then use it as a client certificate when

making API requests.

The Windows Azure management API supports all of the opera-

tions available through the web portal with the exception of creating

a namespace. You can use the management API to configure identity

providers, relying parties, rules, and other settings for your namespac-

es. To create new namespaces, you must use the web portal.
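Requests to the management service address the entity sets (identity providers, relying parties, rules, and so on) as paths under the service URI shown in the portal. The helper below is an illustrative Python sketch; the /v2/mgmt/service/ path and entity set names are assumptions to check against the application integration page for your namespace, and real requests must also present the registered X.509 client certificate.

```python
def management_uri(namespace: str, entity_set: str) -> str:
    """URI of an AtomPub entity set (for example 'IdentityProvider',
    'RelyingParty', or 'Rule') in the ACS management service. The
    path segment is assumed, not taken from official documentation."""
    return ("https://%s.accesscontrol.appfabric.com/v2/mgmt/service/%s"
            % (namespace, entity_set))
```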

Chapter 7, “Federated Identity with Multiple Partners and Win-

dows Azure Access Control Service,” and the ACS wrapper used in

that chapter’s sample to configure ACS, demonstrate how you can use

the management API to configure identity providers, relying parties,

and rules.

For more information, see “Access Control Service Samples and

Documentation” at http://acs.codeplex.com/releases/view/57595.

For examples of adding identity providers such as ADFS, OpenID,

and Facebook using the management API, see the following resources:

•     “Adding Identity Provider Using Management Service” at http://

blogs.msdn.com/b/alikl/archive/2011/01/08/windows-azure-

appfabric-access-control-service-v2-adding-identity-provider-

using-management-service.aspx.

 

———————– Page 345———————–

 


 

•     “Programmatically Adding OpenID as an Identity Provider Using

Management Service” at http://blogs.msdn.com/b/alikl/ar-

chive/2011/02/08/windows-azure-appfabric-access-control-

service-acs-v2-programmatically-adding-openid-as-an-identity-

provider-using-management-service.aspx.

•     “Programmatically Adding Facebook as an Identity Provider

Using Management Service” at http://blogs.msdn.com/b/alikl/

archive/2011/01/14/windows-azure-appfabric-access-control-

service-acs-v2-programmatically-adding-facebook-as-an-

identity-provider-using-management-service.aspx.

 

Managing Errors

 

One of the configuration settings for a relying party that can be pro-

vided is the URI where ACS will send error messages. ACS sends de-

tails of the error as a JavaScript Object Notation (JSON)-encoded

object in the response body when the original request was an OAuth

request; or a SOAP fault message if the original request was a WS-

Trust request. The response includes a TraceId value that is useful in

identifying failed requests if you need to contact the ACS support

team.
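A relying party that receives such an error might log a summary that includes the TraceId for the support team. This is an illustrative Python sketch; only TraceId is named in the text above, and the other field name used here is a placeholder to verify against an actual ACS error response.

```python
import json

def summarize_acs_error(body: str) -> str:
    """Produce a log-friendly summary of a JSON-encoded ACS error
    response. 'ErrorCode' is an assumed field name; 'TraceId' is the
    value ACS includes to help identify failed requests."""
    error = json.loads(body)
    return "ACS error %s (TraceId=%s)" % (
        error.get("ErrorCode", "unknown"), error.get("TraceId", "n/a"))
```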

For information about handling JSON-encoded responses, see

“How To: Use Error URL” at http://acs.codeplex.com/wikipage?title=

how%20to%3a%20use%20Error%20urL and “Returning Friendly

Error Messages Using Error URL Feature” at http://blogs.msdn.com/b/

alikl/archive/2011/01/15/windows-azure-appfabric-access-control-

service-acs-v2-returning-friendly-error-messages-using-error-url-

feature.aspx.

Errors that arise when processing management API requests

throw an exception of type DataServiceRequestException.

A list of error codes for ACS is available from “ACS Error Codes”

at http://acs.codeplex.com/wikipage?title=ACs%20Error%20Codes&amp;

version=8.

 

Integration of ACS and a Local ADFS Issuer

 

You can configure ACS to use an ADFS issuer as a trusted identity

provider. This is useful in scenarios where you want users of a local

application to be able to authenticate against an Active Directory

installation (typically within your own organization) when they access

the local application, and then access other services that require a

SAML or other type of claims token. For example, a locally installed

customer management application may use a partner’s externally

hosted service to obtain credit rating information for customers.

 

———————– Page 346———————–

 


 

The procedure for configuring this scenario is to use the WS-

Federation document that can be created by ADFS to configure ACS

so that it can use the ADFS service as a trusted identity provider. ACS

can accept encrypted tokens from ADFS identity providers as long as

the appropriate X.509 certificate with a private key is hosted by ACS.

The ADFS identity provider receives the public key it will use to en-

crypt tokens when it imports the WS-Federation metadata from ACS.

Afterwards, when users first access the local application they are

authenticated by the local ADFS STS. When the application must

access the externally hosted service, it queries ACS. ACS then authen-

ticates the user against their local ADFS STS and issues a token that

is valid for the remote service. The customer management application

then passes this token to the remote service when it makes the call to

retrieve rating information (see Figure 5).

 

[Figure: ACS is configured using WS-Federation metadata from ADFS
(1). The local application gets a SAML token from ADFS, backed by
Active Directory (2), exchanges the SAML token for an ACS token (3),
and sends the ACS token with its payload to the externally hosted
service, the relying party (4).]

 

figure 5

ACS using an ADFS issuer as a trusted identity provider

 

An alternative (reverse) scenario is to configure ADFS to use ACS

as a trusted issuer. In this scenario, ADFS can authenticate users that

do not have an account in the local Active Directory used by ADFS.

When users log into the application and are authenticated by ADFS,

they can choose an identity provider supported by ACS. ADFS then

accepts the token generated by ACS, optionally maps the claims it

contains, and issues a suitable token (such as a Kerberos ticket) to the

user that is valid in the local domain (see Figure 6).

 

———————– Page 347———————–

 


 

[Figure: ADFS is configured to trust ACS (1). The local application
gets a SAML token through ADFS, which accepts the token generated
by ACS (2). ADFS issues a Kerberos ticket or other appropriate
token (3), and the application sends the token obtained from ADFS
with its payload to the internally hosted service, the relying
party (4).]

 

figure 6

ADFS using ACS as a trusted issuer

 

Security Considerations with ACS

 

You must ensure that your applications and services that make use of

ACS for authentication and claims issuance maintain the requisite

levels of security. Although your applications do not have access to

users’ login credentials, ACS does expose claims for the user that your

application must manage securely.

You must ensure that credentials and certificates used by applica-

tions and services, and for access to ACS, are stored and handled in a

secure manner. Always consider using SSL when passing credentials

over networks. Other issues you should consider are those that apply

to all applications, such as protection from spoofing, tampering, repu-

diation, and information disclosure.

For more information and links to related resources that describe

security threats and the relevant techniques available to counter them

see the following resources:

“Windows Azure AppFabric Access Control Service (ACS) v2 –

Threats & Countermeasures” at http://blogs.msdn.com/b/alikl/ar-

chive/2011/02/03/windows-azure-appfabric-access-control-service-

acs-v2-threats-amp-countermeasures.aspx

 

———————– Page 348———————–

 


 

“Windows Identity Foundation (WIF) Security for ASP.NET Web

Applications – Threats & Countermeasures” at http://blogs.msdn.

com/b/alikl/archive/2010/12/02/windows-identity-foundation-wif-

security-for-asp-net-web-applications-threats-amp-countermea-

sures.aspx

“Microsoft Application Architecture Guide, 2nd Edition” at

http://msdn.microsoft.com/en-us/library/ff650706.aspx

 

Tips for Using ACS

 

The following advice may be useful in resolving issues encountered

when using claims authentication with ACS.

 

ACS and STSs Generated in Visual Studio

2010

Custom STSs created from the Visual Studio 2010 template assume

that the ReplyToAddress is the same as the AppliesToAddress. You

can see this in the GetScope method of the CustomSecurityTokenService

class, which sets scope.ReplyToAddress = scope.AppliesToAddress. In

the case of ACS, the ReplyToAddress and the AppliesToAddress are

different. The STS generates a redirection to the

wrong place and an error occurs when an application accesses ACS to

perform authentication.

To resolve this, replace the line that sets the ReplyToAddress

with the following code.

 

C#
// Use the ReplyTo address supplied in the request when one is
// present; otherwise fall back to the AppliesTo address.
if (request.ReplyTo != null)
{
    scope.ReplyToAddress = request.ReplyTo.ToString();
}
else
{
    scope.ReplyToAddress = scope.AppliesToAddress;
}

 

Error When Uploading a Federation

Metadata Document

When adding a new ADFS token issuer as a trusted identity provider

to ACS, you may receive an error such as “ACS20009: An error oc-

curred reading the WS-Federation metadata document” when up-

 

———————– Page 349———————–

 


 

loading the federation metadata file. ACS validates the signature of

the file, and if you have modified the file that was generated by Vi-

sual Studio this error will occur. If you need to modify the metadata

file, you must re-sign it. A useful tool for this can be found at “WIF

Custom STS Metadata File Editor” (http://botsikas.blogspot.

com/2010/06/wif-custom-sts-metadata-file-editor.html).

 

Avoiding Use of the Default ACS Home

Realm Discovery Page

When using ACS with multiple identity providers, ACS will display a

page with the list of identity providers that are configured the first

time you attempt to sign in. You can avoid this by sending the ap-

propriate whr parameter with the authentication request. The follow-

ing table lists the different values for this parameter for each of the

identity providers.

 

Identity provider     whr parameter value

Windows Live ID       urn:WindowsLiveID

Google                Google

Yahoo!                Yahoo!

Facebook              Facebook-<application-ID>

Custom STS            The value should match the entityID declared
                      in the FederationMetadata file of the identity
                      provider.
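A sign-in request that carries the whr hint might be constructed as follows. This is an illustrative Python sketch; wa, wtrealm, and whr are standard WS-Federation query parameters, but the /v2/wsfederation path is an assumption to verify against the endpoints listed for your namespace.

```python
import urllib.parse

def signin_url(namespace: str, realm: str, whr: str) -> str:
    """WS-Federation sign-in URL that bypasses the home realm
    discovery page by naming the identity provider in whr."""
    query = urllib.parse.urlencode({
        "wa": "wsignin1.0",  # WS-Federation sign-in action
        "wtrealm": realm,    # realm (URI) of the relying party
        "whr": whr,          # home realm hint, e.g. urn:WindowsLiveID
    })
    return "https://%s.accesscontrol.appfabric.com/v2/wsfederation?%s" % (
        namespace, query)
```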

 

More Information

 

For more information about setting up and using ACS, see the follow-

ing resources:

•     “Windows Azure AppFabric” at http://www.microsoft.com/

windowsazure/Appfabric/overview/default.asp

•     “Access Control Service Samples and Documentation” at

http://acs.codeplex.com/documentation

•     “Windows Azure Team Blog” at http://blogs.msdn.com/

windowsazure/

 

———————– Page 350———————–

 

Appendix F SharePoint 2010

Authentication

Architecture and

Considerations

 

This appendix provides background information about the way that

Microsoft® SharePoint® 2010 implements claims-based authentica-

tion. This is a major change to the authentication architecture com-

pared to previous versions, and makes it easier to take advantage

of federated authentication approaches for SharePoint websites, ap-

plications, and services. It also contains information on some of the

important factors you should consider when creating and deploying

claims-aware SharePoint applications and services.

Versions prior to SharePoint 2010 use the techniques for authen-

tication provided by the Microsoft Windows® operating system and

ASP.NET. Applications can use Windows authentication (with the

credentials validated by Microsoft Active Directory® directory

service), ASP.NET forms authentication (with credentials typically

validated from a database), or a combination of these techniques.

To make claims-based and federated authentication easier, Share-

Point 2010 can use a claims-based model for authentication. This

model still fully supports Windows and forms authentication, but

does so by integrating these approaches with the claims-based

authentication mechanism. This appendix provides background

information that will help you to understand how this model is

implemented within SharePoint 2010.

 

Benefits of a Claims-Based Architecture

 

Users often require access to a number of different applications to

perform daily tasks, and increasingly these applications are remotely

located so that users access them over the Internet. ASP.NET forms

authentication typically requires the maintenance of a separate user

database at each location. Users must have accounts registered with

all of the Active Directory domains, or in all of the ASP.NET forms

authentication databases.

 


 

———————– Page 351———————–

 

314 appendix f

 

The use of tokens and claims can simplify authentication by al-

lowing the use of federated identity—users are authenticated by an

identity provider that the organization and application trusts. This

may be an identity provider within the organization, such as Active

Directory Federation Services (ADFS), or a third party (a business

partner or a social identity provider such as Windows Live® or

Google).

As well as simplifying administration tasks, claims-based authen-

tication also assists users because it makes it possible for users to use

the same account (the same credentials) to access multiple applica-

tions and services in remote locations, hosted by different organiza-

tions. This allows for single sign-on (SSO) in that users can move from

one application to another, or make use of other services, without

needing to log on each time.

The integration of claims-based authentication with the existing

architecture of SharePoint provides the following benefits:

•     Support for single sign-on over the Internet in addition to the

existing location-dependent mechanisms such as Active Direc-

tory, LDAP, and databases.

•     Automatic and secure delegation of identity between applica-

tions and between machines in a server farm.

•     Support for multiple different authentication mechanisms in a

single web application without requiring the use of zones.

•     Access to external web services without requiring the user to

provide additional credentials.

•     Support for standard authentication methods increasingly being

used on the web.

•     Support for accessing non-claims-based services that use only

Windows authentication.

 

Windows Identity Foundation

SharePoint 2010 uses the Windows Identity Foundation (WIF) for

authentication irrespective of the authentication approach used by

the individual applications and services. This is a fundamental change

in the architecture of SharePoint in comparison to previous versions.

WIF is a standards-based technology for working with authentication

tokens and claims, and for interacting with security token services

(STSs). It provides a unified programming model that supports both

the Passive and Active authentication sequences.

 

The Passive authentication sequence uses the WS-Federation

protocol for authentication in web browser-based scenarios, such

as ASP.NET applications. It depends on redirection of the browser

between the relying party, token issuers, and identity providers.

 


sharepoint 2010 authentication architecture and considerations

 

The Active authentication sequence uses the WS-Trust protocol

for authentication in web service scenarios, such as Windows

Communication Foundation (WCF) services. The service "knows"

(typically through configuration) how to obtain the tokens it requires

to access other services.

 

Implementation of the Claims-Based

Architecture

 

The claims-based architecture in SharePoint 2010 comprises three

main themes and the associated framework implementations. These

three themes correspond to the three main factors in the flow of

identity through SharePoint applications and services.

•     The first theme is the extension of authentication to enable

the use of multiple authentication mechanisms. Authentication

is possible using tokens containing claims, ASP.NET forms

authentication, and Windows authentication. External authentication is implemented through an STS within SharePoint 2010.

•     The second theme is identity normalization, where the identity

verified by the different authentication mechanisms is con-

verted to an IClaimsPrincipal instance that the Windows

Identity Foundation authorization mechanism can use to

implement access control.

•     The third theme is supporting existing identity infrastructure,

where the identity can be used to access external services and

applications that may or may not be claims-aware. For non-

claims-aware applications and services, WIF can generate an

IPrincipal instance to allow other authentication methods (such

as Windows authentication) to be used even when the original

identity was validated using claims. This is implemented through

the Services Application Framework within SharePoint 2010.

 

Figure 1 shows a conceptual overview of these three themes, and

the mechanisms that implement them in SharePoint 2010.

 


 

[Figure content omitted. The diagram shows three columns: externalized authentication (authentication methods implemented by WIF and the SharePoint STS, accepting Windows, ASP.NET, and SAML/SSO sign-in from clients); identity normalization (access control implemented by WIF, using IClaimsPrincipal); and support for existing identity infrastructure (the Services Application Framework, using WIF to produce an IPrincipal for services and the content database).]

 

figure 1

The three authentication themes in SharePoint 2010

 

SharePoint 2010 User Identity

Internally, SharePoint 2010 uses the SPUser type to manage and flow

identity through applications and services. The fundamental change

in this version of SharePoint compared to previous versions is the

provision of an external authentication mechanism that supports

identity verification using claims, as well as ASP.NET forms and Win-

dows authentication. The external authentication mechanism con-

verts claims it receives into a SAML token, then maps this token to an

instance of the SPUser type.

The claims may be received in the form of a SAML token from

ADFS, Windows Azure™ AppFabric Access Control Service (ACS),

or another STS; as a Windows NT token; or from the ASP.NET forms

authentication mechanism. An application can be configured to use

more than one authentication mechanism, allowing users to be au-

thenticated by any of the configured mechanisms.

 

Prior to SharePoint 2010, supporting more than one authentication method for a SharePoint application typically required the use of zones, each of which is effectively a separate Microsoft Internet

Information Services (IIS) web site and URL pointing to the applica-

tion. Zones are no longer required in SharePoint 2010 because

applications can be configured to use a combination of authentica-

tion methods, although they can still be used if required (for

example, if a different application URL is required for each authen-

tication method).

 

It is also possible to configure the SharePoint 2010 authentication

mechanism in “Classic” mode for existing applications or services that

are not claims-aware, and which present a Windows NT token that

SharePoint can map to an instance of the SPUser type. If you select

classic mode, you can use Windows authentication in the same

way as in previous versions of SharePoint, and the user accounts

are treated as Active Directory Domain Services (AD DS) accounts.

 


 

However, services and service applications will use claims-based iden-

tities for inter-farm communication regardless of the mode that is

selected for web applications and users.

Figure 2 shows an overview of the sign-in methods supported by

the SharePoint 2010 authentication mechanism.

 

[Figure content omitted. The diagram shows the claims-based sign-in path, in which a SAML 1.1+ token (web SSO), ASP.NET forms authentication, or a Windows NT token is converted to a SAML claims-based identity token; and the classic sign-in path, which uses a Windows NT token directly. Both paths produce a SharePoint SPUser instance.]

figure 2

Authentication methods in SharePoint 2010

 

Windows certificate-based authentication is not supported by the

SharePoint 2010 claims-based authentication mechanism.

 

The SharePoint 2010 Security Token

Service

The conversion of claims received in the different formats is carried

out by an STS within SharePoint 2010. This is a fairly simple STS that

can map incoming claims to the claims required by the relying party

(the target application or service). It can also be configured to trust

external STSs such as ADFS, ACS, and other token issuers.

Applications and services running within SharePoint 2010 access

the local SharePoint STS to discover the claims for each user or pro-

cess that requires authorization. For example, applications can verify

that the current user is a member of one of the required Active Direc-

tory groups. This is done using the WIF authorization mechanisms,

and works in much the same way as (non-SharePoint) applications

that access ADFS or ACS directly to obtain a token containing the

claims.
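A claims check of the kind described above can be sketched in C# using the WIF object model. This is an illustrative sketch only; the class and the group name "Contoso Approvers" are placeholders, not from the guide, and in a real application the role claim would be one augmented by the SharePoint STS or a custom claims provider.

```csharp
// Sketch: authorizing an action from the claims WIF attaches to the
// current principal after sign-in.
using System.Threading;
using Microsoft.IdentityModel.Claims;   // WIF

public static class ApprovalGuard
{
    public static bool CanApprove()
    {
        // WIF places an IClaimsPrincipal on the thread after authentication.
        IClaimsPrincipal principal =
            Thread.CurrentPrincipal as IClaimsPrincipal;

        // IsInRole matches role claims carried in the augmented token.
        return principal != null && principal.IsInRole("Contoso Approvers");
    }
}
```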

For example, when a user accesses a SharePoint-hosted ASP.NET

application that requires authentication, the request is redirected to

 


 

an identity provider such as ADFS or ACS. The token received from

the identity provider is then posted to the SharePoint STS (which the

application must be configured to trust). The SharePoint STS authen-

ticates the user and augments (transforms) the claims in the token or

request. It then redirects the request back to the application with the

augmented claims. Inside the application, the claims can be used to

authorize the user for specific actions. Figure 3 shows this process.

 

[Figure content omitted. The sequence is: (1) the web browser accesses the SharePoint-hosted web application; (2) the request is redirected to the identity provider (ADFS, ACS, or another provider); (3) the user authenticates; (4) the identity provider returns a token; (5) the token is sent to the SharePoint STS; (6) the STS authenticates the user and (7) augments the claims; (8) the STS returns the token; (9) access is granted.]

 

figure 3

Claims authentication sequence in SharePoint 2010

 

The SharePoint 2010 Services Application

Framework

One of the typical scenarios for a SharePoint application is to access

both internal (SharePoint hosted) and external services to obtain data

required by the application. Internal services include the Search Ser-

vice, Secure Store Service, Excel Services, and others. External ser-

vices may be any that expose data, either on a corporate network or

from a remote location over the Internet.

Accessing these services will, in most cases, require the applica-

tion to present appropriate credentials to the services. In some cases,

the services will be claims-aware and the application can present a

SAML token containing the required claims. As long as the external

service trusts the SharePoint STS, it can verify the claims.

Some services may not, however, be claims-aware. A typical ex-

ample is when accessing a Microsoft SQL Server® database. In these

cases, the SharePoint Services Application Framework can be used to

generate a Windows token that the application can present to the

service. This is done using the Claims to Windows Token Service

(C2WTS), which can create a suitable Windows NT token.

 


 

The Microsoft Visual Studio® development system provides support and features to make building and deploying SharePoint 2010 applications easier. This support is included in all full editions of Visual Studio 2010 (Professional, Premium, and Ultimate); it is not available in the Express versions of Visual Studio.

 

Considerations When Using Claims

with SharePoint

 

The following sections provide specific guidance on topics related to

using claims authentication in SharePoint 2010 applications and ser-

vices.

 

Choosing an Authentication Mode

Claims-based authentication is now the recommended mechanism for

SharePoint, and all new SharePoint applications should use claims-

based authentication; even if the operating environment will include

only Windows accounts. SharePoint 2010 implements Windows au-

thentication in the same way regardless of the mode that is selected,

and there are no additional steps required to implement Windows

authentication with the claims-based authentication mode.

If you are upgrading an application that uses ASP.NET forms-

based authentication, you must use claims-based authentication. Clas-

sic mode authentication cannot support ASP.NET forms-based au-

thentication.

The only scenario where choosing classic mode authentication is

valid is when upgrading to SharePoint 2010 and existing accounts use

only Windows authentication. This allows existing applications to

continue to operate in the same way as in previous versions of Share-

Point.

 

The default authentication mode for a new SharePoint application is

classic mode. You must specifically select claims-based authentication

mode when creating a new application or website.

 

Supported Standards

SharePoint 2010 supports the following claims-related standards:

•     WS-Federation version 1.1

•     WS-Trust version 1.4

•     SAML Tokens version 1.1

 

SharePoint 2010 does not support SAML Protocol (SAMLP).

 


 

Using Multiple Authentication

Mechanisms

When multiple authentication methods are configured for an applica-

tion, SharePoint displays a home realm discovery page that lists the

authentication methods available. The user must select the method to

use. This adds an extra step into the authentication process for the

user. It is possible to create a custom home realm discovery (sign-in)

page if required.

SharePoint 2010 does not discriminate between user accounts

when different authentication methods are used. Users that success-

fully authenticate with SharePoint 2010 using any of the claims-based

authentication methods have access to the same resources and ser-

vices as would be available if that user was authenticated using classic

Windows authentication.

However, if a user has two different accounts configured (in other words, accounts in two different repositories, such as a social identity provider and ASP.NET forms authentication), and the authentication method used validates the user against one of these accounts, the user will have access only to the resources and services configured for the account that was validated.

You can configure multiple SAML authentication providers for a

server farm and application. However, you can configure only a single

instance of an ASP.NET forms-based authentication provider. If you

configure Windows authentication when using claims-based authen-

tication mode, you can use both a Windows integrated method and a

Basic method, but you cannot configure any additional Windows au-

thentication methods.
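The provider combination described above can be configured with Windows PowerShell when the web application is created. The following is a sketch only; the application name, port, pool, account, and membership/role provider names are placeholder values, and the cmdlets must run in the SharePoint 2010 Management Shell against an existing farm:

```powershell
# Sketch: create one claims-mode web application that accepts both
# Windows (NTLM) and ASP.NET forms-based sign-in. The membership and
# role provider names are placeholders for providers already defined
# in the farm's configuration.
$winAp   = New-SPAuthenticationProvider -UseWindowsIntegratedAuthentication
$formsAp = New-SPAuthenticationProvider -ASPNETMembershipProvider "FbaMembers" `
                                        -ASPNETRoleProviderName "FbaRoles"

# A single application can list several providers; users choose a
# method on the home realm discovery page at sign-in time.
New-SPWebApplication -Name "Claims Demo App" -Port 443 -SecureSocketsLayer `
    -ApplicationPool "ClaimsDemoPool" `
    -ApplicationPoolAccount (Get-SPManagedAccount "CONTOSO\spAppPool") `
    -AuthenticationProvider $winAp, $formsAp
```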

 

For a discussion on the merits of using multiple identity providers for

authentication, see Chapter 12 of this guide, “Federated Identity

for SharePoint Applications.”

 

SharePoint Groups with Claims

Authentication

To simplify administration tasks when setting authorization permis-

sions on resources, it is recommended that you use SharePoint Groups.

Setting permissions for resources based on membership of a specific

group means that it is not necessary to continually update the permis-

sions as new users are added to groups or existing users are removed

from groups.

The SharePoint STS automatically augments the claims in the

tokens it issues to include group membership for users when Win-

 


 

dows authentication is used. It also automatically augments the to-

kens to include the roles specified for users when ASP.NET forms-

based authentication is used; with each role translated into a

SharePoint group name (the SharePoint groups are not created auto-

matically).

When using SAML tokens issued by external identity providers

and STSs, you must create a custom claims provider that augments the

tokens to include the relevant roles. For example, you could create a

custom claims provider that augments the SharePoint STS token with

specific roles based on a test of the claims in the token received from

the identity provider. This may be done by checking the incoming

token to see if it was issued by a specific partner’s STS, or that the

email address is within a specific domain.

 

SharePoint Profiles and Audiences with

Claims Authentication

SharePoint user profiles are not populated automatically when using

claims-based authentication methods. You must create and populate

these profiles yourself, typically in code. Users that map to existing

accounts when you migrate to claims-based authentication will use

any existing profile information, but other users and new users will

not have profile information. For information about how you can

populate user profiles when using claims-based authentication, see

“Trusted Identity Providers & User Profile Synchronization” at http://

blogs.msdn.com/b/brporter/archive/2010/07/19/trusted-identity-

providers-amp-user-profile-synchronization.aspx.

The same limitation occurs when using SharePoint Audiences.

You cannot use user-based audiences directly unless you create cus-

tom code to support this, but you can use property-based audiences

that make use of claims values. For information, see “Using Audiences

with Claims Auth Sites in SharePoint 2010″ at http://blogs.technet.

com/b/speschka/archive/2010/06/12/using-audiences-with-claims-

auth-sites-in-sharepoint-2010.aspx.

 

Rich Client, Office, and Reporting

Applications with Claims Authentication

Claims-based authentication methods in SharePoint support almost

all of the capabilities for integration with Office client applications

and services. Office 2007 clients can use forms-based authentication to access SharePoint 2010 applications that are configured to use it, and Office 2010 clients can use claims to access SharePoint 2010 applications that are configured to use claims-based authentication. However, there are some limitations when using claims-based authentication:

 


 

•     Microsoft Office Excel® Services can use only the Classic or the

Windows claims-based authentication methods. When using

other claims-based authentication methods you must use the

Secure Store Service for external data connections and unat-

tended data refresh.

•     Microsoft Visio® Services can use the Secure Store Service, but

only for drawings that use an Office Data Connection (ODC)

file to specify the connection. The Unattended Service Account

option can also be used with the same limitation.

•     PowerPivot can be used in workbooks with embedded data, but

data refresh from a data source is not possible when using any

of the claims-based authentication methods.

•     SQL Server 2008 R2 Reporting Services integration is only

possible when using classic Windows Authentication. It cannot

use the Claims to Windows Token Service (C2WTS), which is a

feature of Windows Identity Foundation.

•     PerformancePoint must use the Unattended Service Account

option in conjunction with Secure Store Service when using

claims-based authentication.

•     Project Server maintains a separate user database containing

logon information, and so migrating users when using claims-

based authentication is not sufficient.

 

Other Trade-offs and Limitations for

Claims Authentication

When upgrading existing applications to SharePoint 2010, be aware

of the following factors that may affect your choice of authentication

type:

•     Claims-based authentication requires communication over

HTTPS with a token issuer and identity provider. It typically also

requires multiple redirects for clients that are using a web

browser. These are likely to be slower than Windows Authenti-

cation or ASP.NET forms-based authentication lookup. Even

after initial authentication, as users move between applications

taking advantage of single sign-on, the applications and services

must make calls over HTTPS to validate the authentication

tokens.

•     Web Parts or custom code that relies on or uses Windows

identities must be modified if you choose claims-based

authentication. Consider choosing classic mode authentication

until all custom code is updated.

•     When you upgrade a web application from classic mode to

claims-based authentication, you must use Windows Power-

Shell® command-line interface to convert Windows identities

 


 

to claims identities. This can take time, and you must factor in

time for this operation as part of the upgrade process.

•     Search alerts are currently not supported with claims-based

authentication.

•     You cannot use custom ISAPI extensions or HTTP modules with

the forms-based authentication method because the SharePoint

STS communicates directly with the forms authentication

provider by calling its ValidateUser method.

•     Some client-hosted applications may attempt to authenticate

with the server when displaying content linked from SharePoint

application web pages. If you are using claims-based authentica-

tion and the client-hosted application is not claims-aware (as in

the case of Windows Media Player), this content might not be

displayed.

•     Managing the session lifetime is not a trivial exercise when using

claims-based authentication. For details of how you can manage

session lifetime, see Chapter 11, “Claims-Based Single Sign-On

for Microsoft SharePoint 2010.”

•     The External Content Type Designer in SharePoint Designer

2010 cannot discover claims aware WSDL endpoints. For more

information, see MSDN® Knowledge Base article 982268 at

http://support.microsoft.com/default.aspx?scid=kb;En-

us;982268.

 

Applications are not changed to claims-based authentication mode

automatically when you upgrade to SharePoint 2010. If you later

convert an application from classic authentication mode to claims-

based authentication mode, you cannot convert it back to classic

authentication mode.

 

Claims-based authentication validates users from a variety of realms

and domains, some of which do not provide the wealth of information

about users that is available from Windows authentication against

Active Directory. This has some impact on the usability of SharePoint

in terms of administration and development.

Primarily, the People Picker user experience is different when us-

ing claims-based authentication. It does not provide the same level of

support, such as browsing repositories (lists of accounts are not

stored or available in the people picker). This means that locating ac-

counts involves using the search feature against known attributes.

However, the people picker UI does provide some assistance using

pop-up tool tips. Alternatively, you can create a custom implementa-

tion of the SPClaimProvider class to extend the people picker and

provide an enhanced user experience.

 


 

Administrators must also configure and manage the SharePoint

STS to implement the appropriate trust relationships and the rules for

augmenting claims. This can only be done using PowerShell. In addi-

tion, the order for configuring items and the provision of the correct

claim types is critical.

The SharePoint STS is fairly simple compared to an STS such as

ADFS or ACS, and basically implements only rules for copying claims.

It requires the STSs and identity providers it trusts to implement the

appropriate claims. It also runs in the same domain as SharePoint, and

the FedAuth cookies it exposes are scoped to that domain. It does

provide a token cache.

You may need to configure a SharePoint server farm to use affin-

ity for web applications to ensure that users are directed to the

server on which they were authenticated. If users are authenticated

on more than one server, the token may be rejected in a subsequent

request, and the continual re-authentication requests may resemble a

denial-of-service attack that causes the identity provider or STS to

block authentication requests.

 

Configuring SharePoint to Use Claims

 

Many configuration tasks in SharePoint, especially when configuring

a SharePoint server farm, are performed using Windows PowerShell

commands and scripts. Many of the required scripts are provided with

SharePoint or are available for download from the SharePoint resource

sites listed at the end of this appendix.

The main tasks for configuring SharePoint web applications to

use claims are the following:

•     Configure an identity provider STS web application using

PowerShell

•     Configure a relying party STS web application

•     Establish a trust relationship with an identity provider STS using

PowerShell

•     Export the trusted identity provider STS certificate using

PowerShell

•     Define a unique identifier for claims mapping using PowerShell

•     Create a new SharePoint web application and configure it to use

SAML sign-in
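The trust-related tasks in the list above can be sketched in Windows PowerShell along the following lines. This is an illustrative outline rather than a complete procedure; the certificate path, provider name, realm, sign-in URL, and display name are placeholder values, and the emailaddress claim is shown only as one common choice of identifier claim:

```powershell
# Sketch: register an external STS (for example, ADFS) as a trusted
# identity token issuer. All names, paths, and URLs are placeholders.
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2(
            "C:\Certs\adfs-token-signing.cer")
New-SPTrustedRootAuthority -Name "ADFS Token Signing" -Certificate $cert

# Map an incoming claim and nominate it as the unique identifier.
$emailMap = New-SPClaimTypeMapping `
    -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" `
    -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming

New-SPTrustedIdentityTokenIssuer -Name "ADFS Provider" `
    -Description "Sketch of a trust with an external STS" `
    -Realm "urn:sharepoint:claimsdemo" `
    -ImportTrustCertificate $cert `
    -ClaimsMappings $emailMap `
    -SignInUrl "https://adfs.contoso.com/adfs/ls/" `
    -IdentifierClaim $emailMap.InputClaimType
```

The resulting trusted identity provider can then be selected as an authentication method when creating or configuring a claims-mode web application.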

 

The following resources provide step-by-step guides to setting up

claims authentication for a web application in SharePoint:

•     “Claims-based authentication Cheat Sheet Part 1” at http://

blogs.msdn.com/b/spidentity/archive/2010/01/04/claims-

based-authentication-cheat-sheet-part-1.aspx.

 


 

•     “Claims-based authentication Cheat Sheet Part 2” at http://

blogs.msdn.com/b/spidentity/archive/2010/01/23/claims-based-

authentication-cheat-sheet-part-2.aspx.

•     “Claims-Based Identity in SharePoint 2010” at http://blogs.

technet.com/b/wbaer/archive/2010/04/14/claims-based-

identity-in-sharepoint-2010.aspx.

•     “Configure authentication using a SAML security token (Share-

Point Server 2010)” at http://technet.microsoft.com/en-us/

library/ff607753.aspx.

•     “Configure the security token service (SharePoint Server 2010)”

at http://technet.microsoft.com/en-us/library/ee806864.aspx.

 

Tips for Configuring Claims in SharePoint

 

The following advice may be useful in resolving issues encountered

when configuring a SharePoint application to use claims authentica-

tion:

•     The SharePoint PowerShell snap-in requires developers and

administrators to have special permissions in the SharePoint

database. It is not sufficient just to be an administrator or a

domain administrator. For information on how to configure the

SharePoint database for the PowerShell snap-in, see “The local

farm is not accessible Cmdlets with FeatureDependencyId are

not registered” at http://www.sharepointassist.

com/2010/01/29/the-local-farm-is-not-accessible-cmdlets-with-

featuredependencyid-are-not-registered/.

•     When you use ADFS 2.0, the setting for enabling single sign-on

(SSO) is not available in the ADFS management interface. By

default SSO is enabled. You can change the setting by editing

the Web.config file for the ADFS website. The element to

modify is <singleSignOn enabled="true" />. It is located in the

microsoft.identityserver.web section.

•     When you create a new web application and configure it to

work over HTTPS, you must edit the website bindings. This

cannot be done in the SharePoint management tools. Instead,

you must select the SSL certificate to use for the website in the

IIS Manager Microsoft Management Console (MMC) snap-in.

•     It is possible to create more than one SharePoint application

with the same alias, although this is uncommon. However,

the authentication cookie served by the application uses the

alias as the cookie name. The result is that single sign-on

authentication will fail when users access one of the applica-

tions if they have previously accessed another application with

 


 

the same alias because the authentication cookie is not valid for

the second application. To resolve this, create each application

under a different domain name and use DNS to point to the

SharePoint application, or modify the cookieHandler element in

the federatedAuthentication section of Web.config for each

application to specify a different cookie name.
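For the second workaround in the last bullet, the cookieHandler element in each application's Web.config can name its cookie explicitly. The fragment below follows the WIF configuration schema that SharePoint 2010 uses; the cookie name itself is an arbitrary placeholder:

```xml
<microsoft.identityModel>
  <service>
    <federatedAuthentication>
      <!-- Give this application its own cookie name so that the
           default alias-based name cannot collide with another
           application that shares the same alias. -->
      <cookieHandler name="FedAuthAppTwo" />
    </federatedAuthentication>
  </service>
</microsoft.identityModel>
```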

 

More Information

 

For more information about SharePoint 2010, see the following

resources:

•     “Getting Started with Security and Claims-Based Identity

Model” at http://msdn.microsoft.com/en-us/library/ee536164.

aspx.

•     “Using the New SharePoint 2010 Security Model – Part 2” at

http://technet.microsoft.com/en-us/sharepoint/ff678022.

aspx#lesson2.

•     “Plan Authentication Methods (SharePoint Server 2010)” at

http://technet.microsoft.com/en-us/library/cc262350.aspx.

•     “Claims Tips 1: Learning About Claims-Based Authentication in

SharePoint 2010″ at http://msdn.microsoft.com/en-us/library/

ff953202.aspx.

•     “Claims-Based Identity in SharePoint 2010” at http://blogs.

technet.com/b/wbaer/archive/2010/04/14/claims-based-

identity-in-sharepoint-2010.aspx.

•     “Replace the Default SharePoint People Picker with a Custom

People Picker” at http://www.sharepointsecurity.com/share-

point/sharepoint-security/replace-the-default-sharepoint-

people-picker-with-a-custom-people-picker/.

•     “Understanding People Picker and Custom Claims Providers”

at http://blogs.technet.com/b/tothesharepoint/archive/

2011/02/03/new-understanding-people-picker-and-custom-

claims-providers.aspx.

 


 

Glossary

 

access control. The process of making authorization decisions for a

given resource.

access control rule. A statement that is used to transform one set

of claims into another set of claims. An example of an access

control rule might be: any subject that possesses the claim

“Role=Contributor” should also have the claim

“CanAddDocuments=True”. Each access control system will have

its own rule syntax and method for applying rules to input claims.

access control system (ACS). The aspect of a software system

responsible for authorization decisions.

account management. The process of maintaining user identities.

ActAs. A delegation role that allows a third party to perform

operations on behalf of a subject via impersonation.

active client. A claims-based application component that makes

calls directly to the claims provider. Compare with passive client.

Active Directory Federation Services (ADFS). An issuer that is a

component of the Microsoft® Windows® operating system. It

issues and transforms claims, enables federations, and manages

user access.

active federation. A technique for accessing a claims provider that

does not involve the redirection feature of the HTTP protocol.

With active federation, both endpoints of a message exchange

are claims-aware. Compare with passive federation.

assertion. Within a closed-domain model of security, a statement

about a user that is inherently trusted. Assertions, with inherent

trust, may be contrasted with claims, which are only trusted if a

trust relationship exists with the issuer of the claim.

authentication. The process of verifying an identity.

authority. The trusted possessor of a private key.

 


 


 

authorization. See authorization decision.

authorization decision. The determination of whether a subject

with a given identity can gain access to a given resource.

back-end server. A computing resource that is not exposed to the

Internet or that does not interact directly with the user.

blind credential. A trusted fact about a user that does not reveal

the identity of the user but is relevant for making an

authorization decision. For example, an assertion that the user is

over the age of 21 may be used to grant access.

bootstrap token. A security token that is passed to a claims provider

as part of a request for identity delegation. This is part of the

ActAs delegation scenario.

certificate. A digitally signed statement of identity.

certificate authority. An entity that issues X.509 certificates.

claim. A statement, such as a name, identity, key, group, permission,

or capability made by one subject about itself or another subject.

Claims are given one or more values and then packaged in

security tokens that are distributed by the issuer.

claims model. The vocabulary of claims chosen for a given

application. The claims provider and claims-based application

must agree on this vocabulary of claims. When developing a

claims-based application, you should code to the claims model

instead of calling directly into platform-specific security APIs.

claims processing. A software feature that enables a system to act

as a claims provider, claims requester, or claims-based application.

For example, a security token service provides claims processing

as part of its feature set.

claims producer. A claims provider.

claims provider. A software component or service that generates

security tokens upon request. Also known as the issuer of a claim.

claims requester. The client of a security token service. An identity

selector is a kind of claims requester.

claims transformer. A claims provider that accepts security tokens

as input; for example, as a way to implement federated identity or

access control.

claims type. A string, typically a URI, that identifies the kind of

claim. All claims have a claims type and a value. Example claims

types include FirstName, Role, and the private personal

identifier (PPID). The claims type provides context for the claim

value.

 


 

claims value. The value of the statement in the claim being made.

For example, if the claims type is FirstName, a value might be

Matt.
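
Taken together, the claims type and claims value (with the issuer) can be pictured as a small record. The sketch below is illustrative only; it is not the Windows Identity Foundation object model, and the issuer URL is hypothetical:

```python
from collections import namedtuple

# A claim pairs a claims type (typically a URI) with a value and records
# which issuer made the statement.
Claim = namedtuple("Claim", ["claim_type", "value", "issuer"])

first_name = Claim(
    claim_type="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname",
    value="Matt",
    issuer="https://issuer.example.com",  # hypothetical issuer
)

# The claims type gives the value its context: "Matt" is a given name.
print(first_name.value)
```

Without the claims type URI, the bare string "Matt" would carry no meaning for a relying party.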

claims-based application. A software application that uses claims as

the basis of identity and access control. This is in contrast to

applications that directly invoke platform-specific security APIs.

claims-based identity. A set of claims from a trusted issuer that

denotes user characteristics such as the user’s legal name or email

address. In an application that uses the Windows Identity

Foundation (WIF), claims-based identity is represented by

run-time objects that implement the IClaimsIdentity interface.

claims-based identity model. A way to write applications so that

the establishment of user identity is external to the application

itself. The environment provides all required user information in

a secure manner.

client. An application component that invokes web services or

issues HTTP requests on behalf of a local user.

cloud. A dynamically scalable environment such as Windows

Azure™ for hosting Internet applications.

cloud application. A software system that is designed to run in the

cloud.

cloud provider. An application hosting service.

cloud service. A web service that is exposed by a cloud application.

credentials. Data elements used to establish identity or permission,

often consisting of a user name and password.

credential provisioning. The process of establishing user identities,

such as user names and initial passwords, for an application.

cryptography. The practice of obfuscating data, typically via the use

of mathematical algorithms that make reading data dependent on

knowledge of a key.

digital signature. The output of a cryptographic algorithm that

provides evidence that the message’s originator is authentic and

that the message content has not been modified in transit.

domain. Area of control. Domains are often hierarchically

structured.

domain controller. A centralized issuer of security tokens for an

enterprise directory.

DPAPI. The Data Protection API (DPAPI) is a password-based data

protection service that uses the Triple-DES cryptographic

algorithm to provide operating system-level data protection

services to user and system processes via a pair of function calls.

 


 

enterprise directory. A centralized database of user accounts for a

domain. For example, the Microsoft Active Directory® Domain

Service allows organizations to maintain an enterprise directory.

enterprise identity backbone. The chosen mechanism for providing

identity and access control within an organization; for example,

by running Active Directory Federation Services (ADFS).

federated identity. A mechanism for authenticating a system’s users

based on trust relationships that distribute the responsibility for

authentication to a claims provider that is outside of the current

security realm.

federatedAuthentication attribute. An XML attribute used in a

Web.config file to indicate that the application being configured

is a claims-based application.

federation provider. A type of identity provider that provides single

sign-on functionality between an organization and other identity

providers (issuers) and relying parties (applications).

federation provider security token service (FP-STS). A software

component or service used by a federation provider to accept

tokens from a federation partner and then generate claims and

security tokens, based on the contents of the incoming security

token, in a format consumable by the relying party (application).

Also known as a relying party security token service (RP-STS),

an FP-STS receives security tokens from a trusted federation

partner or identity provider (IdP-STS) and, in turn, issues new

security tokens to be consumed by a local relying party

application.

FedUtil. The utility provided by Windows Identity Foundation for

the purpose of establishing federation.

forest. A collection of domains governed by a central authority.

Active Directory Federation Services (ADFS) can be used to

combine two Active Directory forests in a single domain of trust.

forward chaining logic. An algorithm used by access control

systems that determines permissions based on the application of

transitive rules such as group membership or roles. For example,

using forward chaining logic, an access control system can deduce

that user X has permission Z whenever user X has role Y and role

Y implies permission Z.
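
The deduction described in this entry can be sketched in a few lines; the user, role, and permission names below are the hypothetical X, Y, and Z from the definition:

```python
# Hypothetical rules: user "x" has role "y", and role "y" implies
# permission "z". No direct user-to-permission rule exists.
user_roles = {"x": {"y"}}
role_permissions = {"y": {"z"}}

def permissions_for(user):
    # Chain from the user's roles to the permissions those roles imply.
    perms = set()
    for role in user_roles.get(user, set()):
        perms |= role_permissions.get(role, set())
    return perms

print(permissions_for("x"))
```

The access control system deduces that "x" holds permission "z" even though no rule states that directly.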

home realm discovery. The process of determining a user’s issuer.

identity. In this book, this refers to claims-based identity. There are

other meanings of the word “identity,” so we will further qualify

the term when we intend to convey an alternate meaning.

identity delegation. Enabling a third party to act on one’s behalf.

 


 

identity model. The organizing principles used to establish the

identity of an application’s user. See claims-based identity model.

identity provider (IdP). An organization issuing claims in security

tokens. For example, a credit card provider organization might

issue a claim in a security token that enables payment if the

application requires that information to complete an authorized

transaction.

identity security token service (I-STS). An identity provider.

information card. A visual representation of an identity with

associated metadata that may be selected by a user in response to

an authentication request.

input claims. The claims given to a claims transformer such as an

access control system.

issuer. The claims provider for a security token; that is, the entity

that possesses the private key used to sign a given security token.

In the IClaimsIdentity interface, the Issuer property returns the

claims provider of the associated security token. The term may be

used more generally to mean the issuing authority of a Kerberos

ticket or X.509 certificate, but this second use is always made

clear in the text.

issuer name registry. A list of URIs of trusted issuers. You can

implement a class derived from the abstract class

IssuerNameRegistry (this is part of the Windows Identity

Foundation) in order to pick an issuer-naming scheme and also

implement custom issuer validation logic.
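
In Windows Identity Foundation this is C# code derived from IssuerNameRegistry; as a language-neutral sketch (with hypothetical certificate subjects and issuer names), the validation amounts to a lookup against a configured table:

```python
# Map the subject of a token's signing certificate to a trusted issuer
# name; tokens from anything not in the table are rejected.
TRUSTED_ISSUERS = {
    "CN=adfs.contoso.example": "https://adfs.contoso.example/trust",
}

def get_issuer_name(certificate_subject):
    if certificate_subject not in TRUSTED_ISSUERS:
        # WIF raises a security token exception here; a plain error
        # stands in for it in this sketch.
        raise ValueError("untrusted issuer: " + certificate_subject)
    return TRUSTED_ISSUERS[certificate_subject]

print(get_issuer_name("CN=adfs.contoso.example"))
```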

issuing authority. Claims provider; the issuer of a security token.

(The term has other meanings that will always be made clear with

further qualification in the text.)

Kerberos. The protocol used by Active Directory domain controllers

to allow authentication in a networked environment.

Kerberos ticket. An authenticating token used by systems that

implement the Kerberos protocol, such as domain controllers.

key. A data element, typically a number or a string, that is used by a

cryptographic algorithm when encrypting plain text or decrypting

cipher text.

key distribution center (KDC). In the Kerberos protocol, a key

distribution center is the issuer of security tickets.

Lightweight Directory Access Protocol (LDAP). A TCP/IP protocol

for querying directory services in order to find other email users

on the Internet or corporate intranet.

 


 

Local Security Authority (LSA). A component of the Windows

operating system that applications can use to authenticate and

log users on to the local system.

Local Security Authority Subsystem Service (LSASS). A

component of the Windows operating system that enforces

security policy.

managed information card. An information card provided by an

external identity provider. By using managed cards, identity

information is stored with an identity provider, which is not the

case with self-issued cards.

management APIs. Programmable interface for configuration or

maintenance of a data set. Compare with portal.

moniker. An alias used consistently by a user in multiple sessions of

an application. A user with a moniker often remains anonymous.

multiple forests. A domain model that is not hierarchically

structured.

multi-tenant architecture. A software architecture in which a single

instance of an application serves multiple customer organizations

(tenants), keeping each tenant’s data isolated from the others.

on-premises computing. Software systems that run on hardware

and network infrastructure owned and managed by the same

enterprise that owns the system being run.

output claims. The claims produced by a claims transformer such as

an access control system.

passive client. A web browser that interacts with a claims-based

application running on an HTTP server.

passive federation. A technique for accessing a claims provider that

involves the redirection feature of the HTTP protocol. Compare

with active federation.

perimeter network. A network that acts as a buffer between an

internal corporate network and the Internet.

permission. The positive outcome of an authorization decision.

Permissions are sometimes encoded as claims.

personalization. A variant of access control that causes the

application’s logic to change in the presence of particular claims.

Security trimming is a kind of personalization.

policy. A statement of addresses, bindings, and contracts structured

in accordance with the WS-Policy specification. It includes a list

of claim types that the claims-based application needs in order to

execute.

 


 

portal. Web interface that allows viewing and/or modifying data

stored in a back-end server.

principal. A run-time object that represents a subject. Claims-based

applications that use the Windows Identity Foundation expose

principals using the IClaimsPrincipal interface.

private key. In public key cryptography, the key that is not

published. Possession of the correct private key is considered to

be sufficient proof of identity.

privilege. A permission to do something such as access an

application or a resource.

proof key. A cryptographic token that prevents security tokens

from being used by anyone other than the original subject.

public key. In public key cryptography, the key that is published.

Possession of a user’s public key allows the recipient of a message

sent by the user to validate the message’s digital signature against

the contents of the message. It also allows a sender to encrypt a

message so that only the possessor of the private key can decrypt

the message.

public key cryptography. A class of cryptographic algorithms that

use one key to encrypt data and another key to decrypt this data.

public key infrastructure (PKI). Conventions for applying public

key cryptography.

realm. A security realm.

relying party (RP). An application that relies on security tokens and

claims issued by an identity provider.

relying party security token service (RP-STS). See federation

provider security token service.

resource. A capability of a software system or an element of data

contained by that system; an entity such as a file, application, or

service that is accessed via a computer network.

resource security token service (R-STS). A claims transformer.

REST protocols. Data formats and message patterns for

representational state transfer (REST), which abstracts a

distributed architecture into resources named by URIs connected

by interfaces that do not maintain connection state.

role. An element of identity that may justify the granting of

permission. For example, a claim that “role is administrator” might

imply access to all resources. The concept of role is often used by

access control systems based on the role-based access control

(RBAC) model as a convenient way of grouping users with similar

access needs.

 


 

role-based access control (RBAC). An established authorization

model based on users, roles, and permissions.

SAML 2.0. A data format used for encoding security tokens that

contain claims. Also, a protocol that uses claims in SAML format.

See Security Assertion Markup Language (SAML).

scope. In Microsoft Access Control Services, a container of access

control rules for a given application.

Security Assertion Markup Language (SAML). A data format used

for encoding security tokens that contain claims. Also, a particular

protocol that uses claims in SAML format.

security attribute. A fact that is known about a user because it

resides in the enterprise directory (thus, it is implicitly trusted).

Note that with claims-based identity, claims are used instead of

security attributes.

security context. A Microsoft .NET Framework concept that

corresponds to the IPrincipal interface. Every .NET Framework

application runs in a particular security context.

security infrastructure. A general term for the hardware and

software combination that implements authentication,

authorization, and privacy.

security policy. Rules that determine whether a claims provider will

issue security tokens.

security token. An on-the-wire representation of claims that has

been cryptographically signed by the issuer of the claims,

providing strong proof to any relying party of the integrity of the

claims and the identity of the issuer.
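
To illustrate why a signature lets a relying party trust the claims, the sketch below signs serialized claims with a keyed hash. This is a simplification: real security tokens use standard formats such as SAML, and the issuer signs with its private key rather than HMAC over JSON.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-signing-key"  # stand-in for the issuer's signing key

def issue_token(claims):
    # Serialize the claims and sign them so tampering is detectable.
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return payload, signature

def verify_token(payload, signature):
    # Recompute the signature and compare in constant time.
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

payload, sig = issue_token({"role": "manager"})
print(verify_token(payload, sig))                                 # intact token
print(verify_token(payload.replace(b"manager", b"admin"), sig))   # altered claims
```

Any change to the claims invalidates the signature, which is what gives the relying party its proof of integrity.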

security token service (STS). A claims provider implemented as a

web service that issues security tokens. Active Directory

Federation Services (ADFS) is an example of a security token

service. Also known as an issuer. A web service that issues claims

and packages them in encrypted security tokens (see WS-Security

and WS-Trust).

security trimming. (informal) The process of altering an

application’s behavior based on a subject’s available permissions.

service. A web service that adheres to the SOAP standard.

service provider. A service provider is an application. The term is

commonly used with the Security Assertion Markup Language

(SAML).

session key. A private cryptographic key shared by both ends of a

communications channel for the duration of the communications

 


 

session. The session key is negotiated at the beginning of the

communication session.

SOAP. A web standard (protocol) that governs the format of

messages used by web services.

social identity provider (social IdP). A term used in this book to

refer to identity services offered by well-known web service

providers such as Windows Live®, Facebook, Google, and Yahoo!

software as a service (SaaS). A software licensing method in which

users license software on demand for limited periods of time

rather than purchasing a license for perpetual use. The software

vendor often provides the execution environment as, for example,

a cloud-based application running as a web service.

subject. A person. In some cases, business organizations or software

components are considered to be subjects. Subjects are

represented as principals in a software system. All claims

implicitly speak of a particular subject. The Windows Identity

Foundation type, IClaimsPrincipal, represents the subject of a

claim.

System.IdentityModel.dll. A component of the .NET Framework

3.0 that includes some claims-based features, such as the Claim

and ClaimSet classes.

token. A data element or message.

trust. The acceptance of another party as being authoritative over

some domain or realm.

trust relationship. The condition of having established trust.

trusted issuer. A claims provider for which trust has been

established via the WS-Trust protocol.

user credentials. A set of identifying information belonging to a

user. An example is a user name and password.

web identity. Authenticated identifying characteristics of the

sender of an HTTP request. Often, this is an authenticated email

address.

web single sign-on (web SSO). A process that enables partnering

organizations to exchange user authentication and authorization

data. By using web SSO, users in partner organizations can

transition between secure web domains without having to

present credentials at each domain boundary.

Windows Communication Foundation (WCF). A component of

the Windows operating system that enables web services.

Windows identity. User information maintained by Active

Directory.

 


 

Windows Identity Foundation (WIF). A .NET Framework library

that enables applications to use claims-based identity and access

control.

WS-Federation. A standard that defines mechanisms that are used

to enable identity, attribute, authentication, and authorization

federation across different trust realms. This standard includes an

interoperable use of HTTP redirection in order to request

security tokens.

WS-Federation Authentication Module (FAM). A component of

the Windows Identity Foundation that performs claims

processing.

WS-Federation Passive Requestor Profile. Describes how the

cross-trust realm identity, authentication, and authorization

federation mechanisms defined in WS-Federation can be used by

passive requesters such as web browsers to provide identity

services. Passive requesters of this profile are limited to the HTTP

protocol.

WS-Policy. A web standard that specifies how web services may

advertise their capabilities and requirements to potential clients.

WS-Security. A standard that consists of a set of protocols

designed to help secure web service communication using SOAP.

WS-Trust. A standard that takes advantage of WS-Security to

provide web services with methods to build and verify trust

relationships.

X.509. A standard format for certificates.

X.509 certificate. A digitally signed statement that includes the

issuing authority’s public key.

 


 

Answers to Questions

 

Chapter 1, An Introduction to Claims

 

1. Under what circumstances should your application or

service accept a token that contains claims about the user

or requesting service?

 

a. The claims include an email address.

 

b. The token was sent over an HTTPS channel.

 

c. Your application or service trusts the token issuer.

 

d. The token is encrypted.

 

Answer: Only (c) is strictly correct. While it is good practice to

use encrypted tokens and send them over a secure channel, an

application should only accept a token if it is configured to trust

the issuer. The presence of an email address alone does not

signify that the token is valid.

 

2. What can an application or service do with a valid token

from a trusted issuer?

 

a. Find out the user’s password.

 

b. Log in to the website of the user’s identity provider.

 

c. Send emails to the user.

 

d. Use the claims it contains to authorize the user for

access to appropriate resources.

 

Answer: Only (d) is true in all cases. The claims do not include

the user’s password or other credentials. They only include the

information the user and the identity provider choose to expose.

This may or may not include an email address, depending on the

identity provider.

 


 

3. What is the meaning of the term identity federation?

 

a. It is the name of a company that issues claims about

Internet users.

 

b. It is a mechanism for authenticating users so that they

can access different applications without signing on

every time.

 

c. It is a mechanism for passing users’ credentials to

another application.

 

d. It is a mechanism for finding out which sites a user has

visited.

 

Answer: Only (b) is correct. Each application must query the

original issuer to determine if the token a user obtained when

they originally authenticated is valid. The token does not include

the users’ credentials or other information about users’ browsing

history or activity.

 

4. When would you choose to use Windows Azure™

AppFabric Access Control Service (ACS) as an issuer for an

application or service?

 

a. When the application must allow users to sign on

using a range of well-known social identity credentials.

 

b. When the application is hosted on the Windows

Azure platform.

 

c. When the application must support single sign-on

(SSO).

 

d. When the application does not have access to an alter-

native identity provider or token issuer.

 

Answer: Only (a) and (d) are correct. Applications running on

Windows Azure can use ACS if they must support federated

identity, but it is not mandatory. SSO can be implemented using

a range of mechanisms other than ACS, such as a Microsoft

Active Directory® domain server and Active Directory

Federation Services.

 

5. What are the benefits of using claims to manage authoriza-

tion in applications and services?

 

a. It avoids the need to write code specific to any one

type of authentication mechanism.

 


 

b. It decouples authentication logic from authorization

logic, making changes to authentication mechanisms

much easier.

 

c. It allows the use of more fine-grained permissions

based on specific claims compared to the granularity

achieved just using roles.

 

d. It allows secure access for users that are in a different

domain or realm from the application or service.

 

Answer: All of the answers are correct, which shows just how

powerful claims can be!

 

Chapter 2, Claims-Based Architectures

 

1. Which of the following protocols or types of claims token

are typically used for single sign-on across applications in

different domains and geographical locations?

 

a. Simple Web Token (SWT)

 

b. Kerberos ticket

 

c. Security Assertion Markup Language (SAML) token

 

d. Windows Identity

 

Answer: Only (a) and (c) are typically used across domains and

applications outside a corporate network. Kerberos tickets

cannot contain claims, and they are confined within a domain or

Active Directory forest. Windows Identities may contain role

information, but cannot carry claims between applications.

 

2. In a browser-based application, which of the following is

the typical order for browser requests during authentica-

tion?

 

a. Identity provider, token issuer, relying party

 

b. Token issuer, identity provider, token issuer, relying

party

 

c. Relying party, token issuer, identity provider, token

issuer, relying party

 

d. Relying party, identity provider, token issuer, relying

party

 

Answer: Only (c) is correct. The claims-aware application (the

relying party) redirects the browser to the token issuer, which

 


 

either redirects the browser to the appropriate identity provider

for the user to enter credentials (ACS) or obtains a token on the

user’s behalf using their correct credentials (ADFS). It then

redirects the browser back to the claims-aware application.
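
The browser hops in answer (c) can be written out as an ordered sequence; the party names below are placeholders for the actual endpoints:

```python
# The sequence of parties the browser visits during passive federation,
# per answer (c). Each hop after the first is an HTTP redirect.
def passive_signon_sequence():
    return [
        "relying party",      # 1. browser requests the application
        "token issuer",       # 2. redirected to the issuer
        "identity provider",  # 3. redirected to authenticate
        "token issuer",       # 4. returns with proof of authentication
        "relying party",      # 5. returns with the issued security token
    ]

print(" -> ".join(passive_signon_sequence()))
```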

 

3. In a service request from a non-browser-based application,

which of the following is the typical order of requests

during authentication?

 

a. Identity provider, token issuer, relying party

 

b. Token issuer, identity provider, token issuer, relying

party

 

c. Relying party, token issuer, identity provider, token

issuer, relying party

 

d. Relying party, identity provider, token issuer, relying

party

 

Answer: When authenticating using ADFS and Active Direc-

tory (or a similar technology), only (b) is correct. ADFS obtains

a token on the application’s behalf using credentials provided in

the request. When authenticating using ACS, (a) and (b) are

correct.

 

4. What are the main benefits of federated identity?

 

a. It avoids the requirement to maintain a list of valid

users, manage passwords and security, and store and

maintain lists of roles for users in the application.

 

b. It delegates user and role management to the trusted

organization responsible for the user, instead of it

being the responsibility of your application.

 

c. It allows users to log onto applications using the same

credentials, and choose an identity provider that is

appropriate for the user and the application to validate

these credentials.

 

d. It means that your applications do not need to include

authorization code.

 

Answer: Only (a), (b), and (c) are correct. Even if you com-

pletely delegate the validation of users to an external federated

system, you must still use the claims (such as role membership)

in your applications to limit access to resources to only the

appropriate users.

 


 

5. How can home realm discovery be achieved?

 

a. The token issuer can display a list of realms based on

the configured identity providers and allow the user to

select his home realm.

 

b. The token issuer can ask for the user’s email address

and use the domain to establish the home realm.

 

c. The application can use the IP address to establish the

home realm based on the user’s country/region of

residence.

 

d. The application can send a hint to the token issuer in

the form of a special request parameter that indicates

the user’s home realm.

 

Answer: Only (a), (b), and (d) are correct. Home realms are

not directly related to geographical location (although this may

have some influence). The home realm is the domain that is

authoritative for the user’s identity. It is the identity provider

that the user must be redirected to when logging in.
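
Option (b), discovering the home realm from the user's email domain, can be sketched as follows; the domain-to-issuer mapping is hypothetical:

```python
# Hypothetical mapping from email domains to identity provider endpoints.
HOME_REALMS = {
    "contoso.example": "https://adfs.contoso.example/trust",
    "fabrikam.example": "https://sts.fabrikam.example/trust",
}

def discover_home_realm(email):
    # Use the domain part of the email address to find the home realm.
    domain = email.rsplit("@", 1)[-1].lower()
    # An unknown domain returns None; the issuer would then fall back to
    # showing a list of realms for the user to choose from (option a).
    return HOME_REALMS.get(domain)

print(discover_home_realm("alice@contoso.example"))
```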

 

Chapter 3, Claims-Based Single Sign-On

for the Web and Windows Azure

 

1. Before Adatum updated the a-Expense and a-Order applica-

tions, why was it not possible to use single sign-on?

 

a. The applications used different sets of roles to

manage authorization.

 

b. a-Order used Windows authentication and a-Expense

used ASP.NET forms authentication.

 

c. In the a-Expense application, the access rules were

intermixed with the application’s business logic.

 

d. You cannot implement single sign-on when user

profile data is stored in multiple locations.

 

Answer: Only (b) is correct. The key factor blocking the

implementation of single sign-on is that the applications use

different authentication mechanisms. Once users authenticate

with a claims issuer, you can configure the applications to trust

the issuer. The applications can use the claims from the issuer to

implement any authorization rules they need.

 


 

2. How does the use of claims facilitate remote web-based

access to the Adatum applications?

 

a. Using Active Directory for authentication makes it

difficult to avoid having to use VPN to access the

applications.

 

b. Using claims means that you no longer need to use

Active Directory.

 

c. Protocols such as WS-Federation transport claims in

tokens as part of standard HTTP messages.

 

d. Using claims means that you can use ASP.NET forms-

based authentication for all your applications.

 

Answer: Only (a) and (c) are correct. Protocols that use claims

such as WS-Federation make it easy to provide web-based

access to your applications. ADFS makes it easy to continue to

use Active Directory in a claims-based environment, while using

just Active Directory on its own with the Kerberos protocol is

not well suited to providing web-based access.

 

3. In a claims-enabled ASP.NET web application, you typically

find that the authentication mode is set to None in the

Web.config file. Why is this?

 

a. The WSFederationAuthenticationModule is now

responsible for authenticating the user.

 

b. The user must have already been authenticated by an

external system before they visit the application.

 

c. Authentication is handled in the On_Authenticate

event in the global.asax file.

 

d. The WSFederationAuthenticationModule is now

responsible for managing the authentication process.

 

Answer: Only (d) is correct. The

WSFederationAuthenticationModule is responsible for

managing the authentication

process. It intercepts requests in the HTTP pipeline before they

reach the application and coordinates with an external claims

issuer to authenticate the user.

 

4. Claims issuers always sign the tokens they send to a relying

party. However, although it is considered best practice, they

might not always encrypt the tokens. Why is this?

 


 

a. Relying parties must be sure that the claims come

from a trusted issuer.

 

b. Tokens may be transferred using SSL.

 

c. The claims issuer may not be able to encrypt the token

because it does not have access to the encryption key.

 

d. It’s up to the relying party to state whether or not it

accepts encrypted tokens.

 

Answer: Only (a) and (b) are correct. A key feature of claims-

based authentication is that relying parties can trust the claims

that they receive from an issuer. A signature proves that the

claim came from a particular issuer. Using SSL helps to secure

the tokens that the issuer transmits to the relying party if the

issuer does not encrypt them.

 

5. The FederatedPassiveSignInStatus control automatically

signs a user out of all the applications she signed into in the

single sign-on domain.

 

a. True.

 

b. False. You must add code to the application to per-

form the sign-out process.

 

c. It depends on the capabilities of the claims issuer. The

issuer is responsible for sending sign-out messages to

all relying parties.

 

d. If your relying party uses HTTP sessions, you must add

code to explicitly abandon the session.

 

Answer: Only (c) and (d) are correct. It is the responsibility of

the claims issuer to notify all relying parties that the user is

signing out. Additionally, you must add any necessary code to

abandon any HTTP sessions.

 

Chapter 4, Federated Identity for Web

Applications

 

1. Federated identity is best described as:

 

a. Two or more applications that share the same set of

users.

 

b. Two or more organizations that share the same set

of users.

 


 

c. Two or more organizations that share an identity

provider.

 

d. One organization trusting users from one or more

other organizations to access its applications.

 

Answer: Only (d) is correct. Federation is about trusting the

users from another organization. Instead of creating special

accounts for external users, you trust another organization to

authenticate users on your behalf before you give them access

to your applications.

 

2. In a federated security environment, claims mapping is

necessary because:

 

a. Claims issued by one organization are not necessarily

the claims recognized by another organization.

 

b. Claims issued by one organization can never be trusted

by another organization.

 

c. Claims must always be mapped to the roles used in

authorization.

 

d. Claims must be transferred to a new ClaimsPrincipal

object.

 

Answer: Only (a) is correct. The claims used by one organiza-

tion may not be the same as the claims used by another. For

example, one organization may use a claim called role while

another organization uses a claim called group for a similar

purpose. Mapping enables you to map the claims used by one

organization to the claims used in another. Although role claims

are often used for authorization, the authorization scheme could

depend on other claims such as organization or cost center.
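
A claims mapping such as the role/group example above can be sketched as a small rule table at the federation provider; the rule shown is illustrative, not taken from the book's scenario:

```python
# Illustrative mapping rules: the identity provider issues "group" claims,
# while the relying party authorizes against "role" claims.
MAPPING_RULES = {
    ("group", "Order Approvers"): ("role", "approver"),
}

def map_claims(input_claims):
    output_claims = []
    for claim in input_claims:
        mapped = MAPPING_RULES.get(claim)
        if mapped:
            output_claims.append(mapped)
        # Claims with no matching rule are not forwarded to the
        # relying party.
    return output_claims

print(map_claims([("group", "Order Approvers"), ("group", "Staff")]))
```

This is the transformation a federation provider performs between accepting an incoming token and issuing a new one.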

 

3. The roles of a federation provider can include:

 

a. Mapping claims from an identity provider to claims

that the relying party understands.

 

b. Authenticating users.

 

c. Redirecting users to their identity provider.

 

d. Verifying that the claims were issued by the expected

identity provider.

 

Answer: Only (a), (c) and (d) are correct. A federation provider

can map claims, redirect users to the correct identity provider,

 


 

and verify that the claims were issued by the correct identity

provider.

 

4. Must an identity provider issue claims that are specific to

a relying party?

 

a. Yes

 

b. No

 

c. It depends.

 

Answer: Only (b) is correct. It is the job of the federation

provider to map the claims issued by the identity provider to

claims recognized by the relying party. Therefore, the identity

provider’s issuer should not issue claims specific to the relying

party. Using a federation provider helps to decouple the identity

provider from the relying party.

 

5. Which of the following best summarizes the trust relation-

ships between the various parties described in the federated

identity scenario in this chapter?

 

a. The relying party trusts the identity provider, which in

turn trusts the federation provider.

 

b. The identity provider trusts the federation provider,

which in turn trusts the relying party.

 

c. The relying party trusts the federation provider, which

in turn trusts the identity provider.

 

d. The federation provider trusts both the identity

provider and the relying party.

 

Answer: Only (c) is correct. The trust relationships described

in this chapter have the relying party trusting the federation

provider that trusts the identity provider.

 

Chapter 5, Federated Identity with

Windows Azure Access Control Service

 

1. Which of the following issues must you address if you want

to allow users of your application to authenticate with a

social identity provider such as Google or Windows Live®

network of Internet services?

 

a. Social identity providers may use protocols other than

WS-Federation to exchange claims tokens.

 

———————– Page 383———————–

 


 

b. You must register your application with the social

identity provider.

 

c. Different social identity providers issue different claim

types.

 

d. You must provide a mechanism to enroll users using

social identities with your application.

 

Answer: Only (a), (c) and (d) are correct. Your solution must

be able to transition protocols; the solution described in this

chapter uses ACS to perform this task. The scenario described

in this chapter also uses ACS to map the different claim types

issued by the social identity providers to claim types that

Adatum understands. You must provide a mechanism to enroll

users with social identities.

 

2. What are the advantages of allowing users to authenticate

to use your application with a social identity?

 

a. The user doesn’t need to remember yet another

username and password.

 

b. It reduces the features that you must implement in

your application.

 

c. Social identity providers all use the same protocol

to transfer tokens and claims.

 

d. It puts the user in control of their password manage-

ment. For example, a user can recover a forgotten

password without calling your helpdesk.

 

Answer: Only (a), (b), and (d) are correct. Reusing a social

identity does mean that the user doesn’t need to remember a

new set of credentials. Also, the authentication and user account

management are now handled by the social identity provider.

 

3. What are the potential disadvantages of using ACS as your

federation provider?

 

a. It adds to the complexity of your relying party

application.

 

b. It adds an extra step to the authentication process,

which negatively impacts the user experience.

 

c. It is a metered service, so you must pay for each token

that it issues.

 

———————– Page 384———————–

 


 

d. Your application now relies on an external service that

is outside of its control.

 

Answer: Only (c) and (d) are correct. Although ACS is a

metered service, you should compare its running costs to the

costs of implementing and running your own federation provider.

ACS is a third-party application outside of your control; again,

you should evaluate the service-level agreement (SLA)

associated with ACS against the SLA your IT department offers for

on-premises services.

 

4. How can your federation provider determine which identity

provider to use (perform home realm discovery) when an

unauthenticated user accesses the application?

 

a. Present the user with a list of identity providers to

choose from.

 

b. Analyze the IP address of the originating request.

 

c. Prompt the user for an email address, and then parse

it to determine the user’s security domain.

 

d. Examine the ClaimsPrincipal object for the user’s

current session.

 

Answer: Only (a) and (c) are correct. The scenario described in

this chapter lets the user select from a list of identity providers.

It’s also possible to analyze the user’s email address; for example,

if the email address were paul@gmail.com, the federation

provider would determine that the user has a Google identity.
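The email-parsing approach in (c) amounts to mapping the address's domain to a registered identity provider. A minimal sketch, with illustrative provider names and domains:

```python
# Map email domains to configured identity providers (illustrative data).
IDENTITY_PROVIDERS = {
    "gmail.com": "Google",
    "live.com": "Windows Live ID",
}

def discover_home_realm(email_address):
    """Guess the user's identity provider from an email address.

    Returns None when the domain is unknown, in which case the
    application can fall back to showing a list of providers.
    """
    domain = email_address.rsplit("@", 1)[-1].lower()
    return IDENTITY_PROVIDERS.get(domain)

print(discover_home_realm("paul@gmail.com"))  # Google
```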

 

5. In the scenario described in this chapter, the Adatum

federation provider trusts ACS, which in turn trusts the

social identity providers such as Windows Live and Google.

Why does the Adatum federation provider not trust the

social identity providers directly?

 

a. It’s not possible to configure the Adatum federation

provider to trust the social identity providers because

the social identity providers do not make the certifi-

cates required for a trust relationship available.

 

b. ACS automatically performs the protocol transition.

 

c. ACS is necessary to perform the claims mapping.

 

d. Without ACS, it’s not possible to allow Adatum

employees to access the application over the web.

 

———————– Page 385———————–

 


 

Answer: Only (b) is correct. Using ACS simplifies the Adatum

federation provider, especially because ACS performs any

protocol transitioning automatically. It is possible to configure

the Adatum federation provider to trust the social identity

providers directly and perform the claims mapping; however,

this is likely to be complex to implement.

 

Chapter 6, Federated Identity

with Multiple Partners

 

1. In the scenario described in this chapter, who should take

what action when an employee leaves one of the partner

organizations such as Litware?

 

a. Fabrikam Shipping must remove the user from its user

database.

 

b. Litware must remove the user from its user database.

 

c. Fabrikam must amend the claims-mapping rules in its

federation provider.

 

d. Litware must ensure that its identity provider no

longer issues any of the claims that get mapped to

Fabrikam Shipping claims.

 

Answer: Only (b) is correct. If the employee leaves Litware, the

simplest and safest action is to remove the employee from its

user database. This means that the ex-employee can no longer

authenticate with Litware or be issued any claims.

 

2. In the scenario described in this chapter, how does Fabrikam

Shipping perform home realm discovery?

 

a. Fabrikam Shipping presents unauthenticated users

with a list of federation partners to choose from.

 

b. Fabrikam Shipping prompts unauthenticated users

for their email addresses. It parses this address to

determine which organization the user belongs to.

 

c. Fabrikam Shipping does not need to perform home

realm discovery because users will have already

authenticated with their organizations’ identity

providers.

 

d. Each partner organization has its own landing page

in Fabrikam Shipping. Visiting that page will

 

———————– Page 386———————–

 

automatically redirect unauthenticated users to that organiza-

tion’s identity provider.

 

Answer: Only (d) is correct. Each organization has its own

landing page in Fabrikam Shipping. For example, Adatum

employees should navigate to https://{fabrikam

host}/f-shipping/adatum.
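The per-partner landing page works because the organization is encoded in the URL itself; a sketch of extracting it, using the path layout from the example above:

```python
def partner_from_path(path):
    """Extract the partner name from a landing-page path such as
    /f-shipping/adatum, so the application knows which identity
    provider to redirect an unauthenticated user to."""
    segments = [s for s in path.split("/") if s]
    if len(segments) == 2 and segments[0] == "f-shipping":
        return segments[1]
    return None  # not a partner landing page

print(partner_from_path("/f-shipping/adatum"))  # adatum
```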

 

3. Fabrikam Shipping provides an identity provider for its

smaller customers who do not have their own identity

provider. What are the disadvantages of this?

 

a. Fabrikam must bear the costs of providing this service.

 

b. Users at smaller customers will need to remember

another username and password.

 

c. Smaller customers must rely on Fabrikam to manage

their users’ access to Fabrikam Shipping.

 

d. Fabrikam Shipping must set up a trust relationship

with all of its smaller customers.

 

Answer: Only (a), (b) and (c) are correct. Unless Fabrikam

Shipping charges for the service, they must bear the costs.

It does mean that users will have to remember a new set

of credentials. All of the user management takes place at

Fabrikam, unless Fabrikam implements a web interface for

smaller customers to manage their users.

 

4. How does Fabrikam Shipping ensure that only users at a

particular partner can view that partner’s shipping data?

 

a. The Fabrikam Shipping application examines the email

address of the user to determine the organization

they belong to.

 

b. Fabrikam Shipping uses separate databases for each

partner. Each database uses different credentials to

control access.

 

c. Fabrikam shipping uses the role claim from the

partner’s identity provider to determine whether the

user should be able to access the data.

 

d. Fabrikam shipping uses the organization claim from

its federation provider to determine whether the user

should be able to access the data.

 

———————– Page 387———————–

 


 

Answer: Only (d) is correct. It’s the organization claim that

Fabrikam Shipping uses to control access.

 

5. The developers at Fabrikam set the wsFederation

passiveRedirectEnabled attribute to false. Why?

 

a. This scenario uses active redirection, not passive

redirection.

 

b. They wanted more control over the redirection

process.

 

c. Fabrikam Shipping is an MVC application.

 

d. They needed to be able to redirect to external identity

providers.

 

Answer: Only (b) is correct. For this scenario, they needed

more control over the passive redirection process.

 

Chapter 7, Federated Identity with

Multiple Partners and Windows Azure

Access Control Service

 

1. Why does Fabrikam want to use ACS in the scenario

described in this chapter?

 

a. Because it will simplify Fabrikam’s own internal

infrastructure requirements.

 

b. Because it’s the only way Fabrikam can support

users who want to use a social identity provider

for authentication.

 

c. Because it enables users with social identities to

access the Fabrikam Shipping application more easily.

 

d. Because ACS can authenticate users with social

identities.

 

Answer: Only (a) and (c) are correct. Using ACS means that

Fabrikam Shipping no longer requires its own federation

provider. Also, ACS handles all of the necessary protocol

transition for the tokens that the social identity providers issue.

ACS does not perform the authentication; this task is handled

by the social identity provider.

 

———————– Page 388———————–

 


 

2. In the scenario described in this chapter, why is it necessary

for Fabrikam to configure ACS to trust issuers at partners

such as Adatum and Litware?

 

a. Because Fabrikam does not have its own on-premises

federation provider.

 

b. Because Fabrikam uses ACS for all the claims-mapping

rules that convert claims to a format that Fabrikam

Shipping understands.

 

c. Because partners such as Adatum have some users

who use social identities as their primary method of

authentication.

 

d. Because a relying party such as Fabrikam Shipping

can only use a single federation provider.

 

Answer: Only (a) and (b) are correct. In this scenario,

Fabrikam decided to use ACS as its federation provider,

so ACS holds all of its claims-mapping rules.

 

3. How does Fabrikam Shipping manage home realm discovery

in the scenario described in this chapter?

 

a. Fabrikam Shipping presents unauthenticated users

with a list of federation partners to choose from.

 

b. Fabrikam Shipping prompts unauthenticated users

for their email addresses. It parses each address to

determine which organization the user belongs to.

 

c. ACS manages home realm discovery; Fabrikam

Shipping does not.

 

d. Each partner organization has its own landing page

in Fabrikam Shipping. Visiting that page will automati-

cally redirect unauthenticated users to that organiza-

tion’s identity provider.

 

Answer: Only (d) is correct. Although the sample application

does have a page that displays a list of partners, this is just to

simplify the use of the sample. In practice, each partner would

use its own landing page that would redirect the user to ACS,

passing the correct value in the whr parameter.
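Passing the whr value means building a WS-Federation sign-in URL that carries the home-realm hint. A sketch of that URL construction; the endpoint and realm values below are invented:

```python
from urllib.parse import urlencode

def acs_signin_url(acs_endpoint, relying_party_realm, home_realm):
    """Build a WS-Federation passive sign-in request that includes a
    whr hint, so the issuer can skip asking the user for a realm."""
    query = urlencode({
        "wa": "wsignin1.0",           # sign-in action
        "wtrealm": relying_party_realm,
        "whr": home_realm,            # home realm hint
    })
    return f"{acs_endpoint}?{query}"

url = acs_signin_url(
    "https://fabrikam.accesscontrol.windows.net/v2/wsfederation",
    "https://fabrikam-shipping/",
    "http://adatum/trust",
)
print(url)
```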

 

4. Enrolling a new partner without its own identity provider

requires which of the following steps?

 

———————– Page 389———————–

 


 

a. Updating the list of registered partners stored by

Fabrikam Shipping. This list includes the home realm

of the partner.

 

b. Adding a new identity provider to ACS.

 

c. Adding a new relying party to ACS.

 

d. Adding a new set of claims-mapping rules to ACS.

 

Answer: Only (a), (c) and (d) are correct. A partner without

its own identity provider will use one of the pre-configured

social identity providers in ACS.

 

5. Why does Fabrikam use a separate web application to

handle the enrollment process?

 

a. Because the expected usage patterns of the enroll-

ment functionality are very different from the expect-

ed usage patterns of the main Fabrikam Shipping web

site.

 

b. Because using the enrollment functionality does not

require a user to authenticate.

 

c. Because the site that handles enrolling new partners

must also act as a federation provider.

 

d. Because the site that updates ACS with new relying

parties and claims-mapping rules must have a different

identity from sites that only read data from ACS.

 

Answer: Only (a) is correct. The number of new enrollments

may be only one or two a day, while Fabrikam expects thousands

of visits to the Shipping application. Using separate web sites

enables Fabrikam to tune the two sites differently.

 

Chapter 8, Claims Enabling Web Services

 

1. Which statements describe the difference between the way

federated identity works for an active client as compared to

a passive client:

 

a. An active client uses HTTP redirects to ask each token

issuer in turn to process a set of claims.

 

b. A passive client receives HTTP redirects from a web

application that redirect it to each issuer in turn to

obtain a set of claims.

 

———————– Page 390———————–

 


 

c. An active client generates tokens to send to claims

issuers.

 

d. A passive client generates tokens to send to claims

issuers.

 

Answer: Only (b) is correct. The relying party, federation

provider, and identity provider communicate with each other

through the client browser by using HTTP redirects that send

the browser, along with any tokens, to the next step in the process.

 

2. A difference in behavior between an active client and a

passive client is:

 

a. An active client visits the relying party first; a passive

client visits the identity provider first.

 

b. An active client does not need to visit a federation

provider because it can perform any necessary claims

transformations by itself.

 

c. A passive client visits the relying party first; an active

client visits the identity provider first.

 

d. An active client must visit a federation provider first

to determine the identity provider it should use.

Passive clients rely on home realm discovery to

determine the identity provider to use.

 

Answer: Only (c) is correct. A passive client visits the relying

party first; the relying party redirects the client to an issuer.

Active clients know how to obtain the necessary claims so they

can visit the identity provider first.

 

3. The active scenario described in this chapter uses which

protocol to handle the exchange of tokens between the

various parties?

 

a. WS-Trust

 

b. WS-Transactions

 

c. WS-Federation

 

d. ADFS

 

Answer: Only (a) is correct. WS-Trust is the protocol that WIF

and Windows Communication Foundation (WCF) use for active

clients.

 

———————– Page 391———————–

 


 

4. In the scenario described in this chapter, it’s necessary to

edit the client application’s configuration file manually,

because the Svcutil.exe tool only adds a binding for a single

issuer. Why do you need to configure multiple issuers?

 

a. The metadata from the relying party only includes

details of the Adatum identity provider.

 

b. The metadata from the relying party only includes

details of the client application’s identity provider.

 

c. The metadata from the relying party only includes

details of the client application’s federation provider.

 

d. The metadata from the relying party only includes

details of the Adatum federation provider.

 

Answer: Only (c) is correct. The metadata from the relying

party only includes details of the Adatum federation provider,

and the client application also needs the metadata from its

identity provider.

 

5. The WCF service at Adatum performs authorization checks

on the requests that it receives from client applications.

How does it implement the checks?

 

a. The WCF service uses the IsInRole method to verify

that the caller is a member of the OrderTracker role.

 

b. The Adatum federation provider transforms claims

from other identity providers into Role type claims

with a value of OrderTracker.

 

c. The WCF service queries the Adatum federation

provider to determine whether a user is in the

OrderTracker role.

 

d. It does not need to implement any authorization

checks. The application automatically grants access

to anyone who has successfully authenticated.

 

Answer: Only (a) and (b) are correct. The WCF service checks

the role membership of the caller. The role value is created from

the claims received from the federation provider.
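The check described above pairs claims transformation with a role test. A language-agnostic sketch of the IsInRole-style check, with claim types simplified to short strings:

```python
class ClaimsPrincipal:
    """Minimal stand-in for a claims principal: a bag of (type, value)
    claims with an IsInRole-style membership test."""

    def __init__(self, claims):
        self.claims = set(claims)

    def is_in_role(self, role):
        return ("role", role) in self.claims

# The federation provider has already transformed the partner's claims
# into the role claim the service checks before processing a request.
caller = ClaimsPrincipal([("role", "OrderTracker")])
print(caller.is_in_role("OrderTracker"))  # True
```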

 

———————– Page 392———————–

 


 

Chapter 9, Securing REST Services

 

1. In the scenario described in this chapter, which of the

following statements best describes what happens the first

time that the smart client application tries to use the

RESTful a-Order web service?

 

a. It connects first to the ACS instance, then to the

Litware IP, and then to the a-Order web service.

 

b. It connects first to the Litware IP, then to the ACS

instance, and then to the a-Order web service.

 

c. It connects first to the a-Order web service, then

to the ACS instance, and then to the Litware IP.

 

d. It connects first to the a-Order web service, then

to the Litware IP, and then to the ACS instance.

 

Answer: Only (b) is correct. The active client first obtains

a SAML token from the Litware IP; it then sends the SAML

token to ACS, where it is transitioned to an SWT token; finally,

it attaches the SWT token to the request that it sends to

the web service.
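The three hops can be sketched as a pipeline. Every endpoint and token value below is a stand-in: the real scenario uses WS-Trust for step 1 and the ACS token endpoint for step 2, and SWT tokens are commonly carried in an OAuth WRAP-style Authorization header.

```python
def get_saml_token(username, password):
    # 1. Authenticate with the Litware identity provider and receive
    #    a SAML token (fake token string for illustration).
    return f"SAML({username})"

def exchange_for_swt(saml_token):
    # 2. Send the SAML token to ACS, which returns an SWT token.
    return f"SWT({saml_token})"

def build_request_headers(swt_token):
    # 3. Attach the SWT token to the web service request.
    return {"Authorization": f'WRAP access_token="{swt_token}"'}

headers = build_request_headers(exchange_for_swt(get_saml_token("rick", "pwd")))
print(headers["Authorization"])
```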

 

2. In the scenario described in this chapter, which of the

following tasks does ACS perform?

 

a. ACS authenticates the user.

 

b. ACS redirects the client application to the relying

party.

 

c. ACS transforms incoming claims to claims that the

relying party will understand.

 

d. ACS transitions the incoming token format from

SAML to SWT.

 

Answer: Only (c) and (d) are correct. The only tasks that ACS

performs in this scenario are claims transformation and claims

transitioning.

 

3. In the scenario described in this chapter, the Web.config

file in the a-Order web service does not contain a

<microsoft.identity> section. Why?

 

a. Because it configures a custom ServiceAuthorization

Manager class to handle the incoming SWT token in

code.

 

———————– Page 393———————–

 


 

b. Because it is not authenticating requests.

 

c. Because it is not authorizing requests.

 

d. Because it is using a routing table.

 

Answer: Only (a) is correct. The incoming tokens are handled

by the custom SWTAuthorizationManager class that is

instantiated in the CustomServiceHostFactory class.

 

4. ACS expects to receive bearer tokens. What does this

suggest about the security of a solution that uses ACS?

 

a. You do not need to use SSL to secure the connection

between the client and the identity provider.

 

b. You should use SSL to secure the connection between

the client and the identity provider.

 

c. The client application must use a password to authen-

ticate with ACS.

 

d. The use of bearer tokens has no security implications

for your solution.

 

Answer: Only (b) is correct. A solution that uses bearer tokens

is susceptible to man-in-the-middle attacks; using SSL mitigates

this risk.

 

5. You should use a custom ClaimsAuthorizationManager

class for which of the following tasks.

 

a. To attach incoming claims to the IClaimsPrincipal

object.

 

b. To verify that the claims were issued by a trusted

issuer.

 

c. To query ACS and check that the current request is

authorized.

 

d. To implement custom rules that can authorize access

to web service methods.

 

Answer: Only (d) is correct. The CheckAccess method in a

custom ClaimsAuthorizationManager class has access to the

IClaimsPrincipal object and URL associated with the current

request. It can use this information to implement authorization

rules.
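A CheckAccess-style method receives the principal's claims together with the resource and action of the current request. A hypothetical rule in that spirit; the resource URL and role value are invented:

```python
def check_access(claims, resource, action):
    """Hypothetical authorization rule in the spirit of
    ClaimsAuthorizationManager.CheckAccess: only order trackers may
    read the orders resource; everything else is denied."""
    if resource.endswith("/orders") and action == "GET":
        return ("role", "OrderTracker") in claims
    return False

print(check_access([("role", "OrderTracker")], "https://a-order/orders", "GET"))
```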

 

———————– Page 394———————–

 


 

Chapter 10, Accessing REST Services

from a Windows Phone 7 Device

 

1. Which of the following are issues in developing a

claims-aware application that accesses a web service for

the Windows Phone® 7 platform?

 

a. It’s not possible to implement a solution that uses

SAML tokens on the phone.

 

b. You cannot install custom SSL certificates on the

phone.

 

c. There is no secure storage on the phone.

 

d. There is no implementation of WIF available for the

phone.

 

Answer: Only (c) and (d) are correct. Because there is no

secure storage on the phone, you cannot securely store any

credentials on the phone; either the user enters his credentials

whenever he uses the application, or you accept the risk of the

phone being used by an unauthorized person who will be able to

use any cached credentials. There is no version of WIF available

for the phone, so you must manually implement any token

handling that your application requires.

 

2. Why does the sample application use an embedded web

browser control?

 

a. To handle the passive federated authentication

process.

 

b. To handle the active federated authentication process.

 

c. To access the RESTful web service.

 

d. To enable the client application to use SSL.

 

Answer: Only (a) is correct. The embedded web browser control

handles the passive federated authentication process, enabling

redirects between ACS and the Litware IP.

 

3. Of the two solutions (active and passive) described in the

chapter, which requires the most round trips for the initial

request to the web service?

 

a. They both require the same number.

 

b. The passive solution requires fewer than the active

solution.

 

———————– Page 395———————–

 


 

c. The active solution requires fewer than the passive

solution.

 

d. It depends on the number of claims configured for the

relying party in ACS.

 

Answer: Only (c) is correct. For the initial request to the web

service, the active solution requires fewer round trips: the active

solution first calls the Litware identity provider, then ACS, and

finally the web service; the passive solution first calls ACS, the

Litware identity provider, then goes back to ACS, and finally

calls the web service.

 

4. Which of the following are advantages of the passive

solution over the active solution?

 

a. The passive solution can easily build a dynamic list of

identity providers.

 

b. It’s simpler to create code to handle SWT tokens in

the passive solution.

 

c. It’s simpler to create code to handle SAML tokens in

the passive solution.

 

d. Better performance.

 

Answer: Only (a) and (c) are correct. The passive solution can

retrieve a list of configured identity providers from ACS to

display to the user. In the passive solution, the embedded web

browser manages the SAML token as part of the automatic

redirects between the identity provider and the federation

provider.

 

5. In the sample solution for this chapter, how does the

Windows Phone 7 client application add the SWT token to

the outgoing request?

 

a. It uses a Windows Communication Foundation (WCF)

behavior.

 

b. It uses Rx to orchestrate the acquisition of the SWT

token and add it to the header.

 

c. It uses the embedded web browser control to add the

header.

 

d. It uses WIF.

 

———————– Page 396———————–

 


 

Answer: Only (b) is correct. The sample solution makes

extensive use of Rx to orchestrate asynchronous operations.

Both the active and passive solutions use Rx to add the

authorization header at the right time.

 

Chapter 11, Claims-Based Single Sign-On

for Microsoft SharePoint 2010

 

1. Which of the following roles can the embedded STS in

SharePoint perform?

 

a. Authenticating users.

 

b. Issuing FedAuth tokens that contain the claims

associated with a user.

 

c. Requesting claims from an external STS such as ADFS.

 

d. Requesting claims from Active Directory through

Windows Authentication.

 

Answer: Only (b), (c) and (d) are correct. The embedded

STS does not perform any authentication itself, but it can

request that external token issuers such as ADFS or Windows

Authentication issue tokens. The claims are then added to

the user’s FedAuth token.

 

2. Custom claim providers use claims augmentation to perform

which function?

 

a. Enhancing claims by verifying them against an external

provider.

 

b. Enhancing claims by adding additional metadata to

them.

 

c. Adding claims data to the identity information in the

SPUser object if the SharePoint web application is in

“legacy” authentication mode.

 

d. Adding additional claims to the set of claims from

the identity provider.

 

Answer: Only (d) is correct. Claims augmentation is the

function of a custom claims provider that adds to the set of

claims from an identity provider.

 

———————– Page 397———————–

 


 

3. Which of the following statements about the FedAuth

cookie in SharePoint are correct?

 

a. The FedAuth cookie contains the user’s claim data.

 

b. Each SharePoint web application has its own FedAuth

cookie.

 

c. Each site collection has its own FedAuth cookie.

 

d. The FedAuth cookie is always a persistent cookie.

 

Answer: Only (a) and (b) are correct. Each SharePoint web

application has its own FedAuth token because you can

configure each SharePoint web application to have a different

token provider. By default, the FedAuth cookie is persistent,

but you can configure it to be a session cookie.

 

4. In the scenario described in this chapter, why did Adatum

choose to customize the people picker?

 

a. Adatum wanted the people picker to resolve role and

organization claims.

 

b. Adatum wanted the people picker to resolve name

and emailaddress claims from ADFS.

 

c. Adatum wanted to use claims augmentation.

 

d. Adatum wanted to make it easier for site administra-

tors to set permissions reliably.

 

Answer: Only (a) and (d) are correct. Adatum wanted the

people picker to correctly resolve role and organization claims

so that site administrators could assign permissions based on

these values.

 

5. In order to implement single sign-out behavior in Share-

Point, which of the following changes did Adatum make?

 

a. Adatum modified the standard signout.aspx page to

send a wsignoutcleanup message to ADFS.

 

b. Adatum uses the SessionAuthenticationModule

SigningOut event to customize the standard sign-out

process.

 

———————– Page 398———————–

 


 

c. Adatum added custom code to invalidate the FedAuth

cookie.

 

d. Adatum configured SharePoint to use a session-based

FedAuth cookie.

 

Answer: Only (b) and (d) are correct. The relying party must

send a wsignout message to its identity provider; the identity

provider sends wsignoutcleanup messages to all of the

currently logged-in relying parties. If the FedAuth cookie is

session-based, SharePoint will automatically invalidate it.

 

Chapter 12, Federated Identity

for SharePoint Applications

 

1. In the scenario described in this chapter, Adatum prefers to

maintain a single trust relationship between SharePoint and

ADFS, and to maintain the trust relationships with the

multiple partners in ADFS. Which of the following are valid

reasons for adopting this model?

 

a. It enables Adatum to collect audit data relating to

external sign-ins from ADFS.

 

b. It allows for the potential reuse of the trust relation-

ships with partners in other Adatum applications.

 

c. It allows Adatum to implement automatic home realm

discovery.

 

d. It makes it easier for Adatum to ensure that Share-

Point receives a consistent set of claim types.

 

Answer: Only (a), (b), and (d) are correct. There is nothing in

the model chosen by Adatum that specifically enables home

realm discovery, though it may be easier to implement by

customizing the pages in ADFS. It is easier for Adatum to

manage the authentication and claims issuing in ADFS.

 

2. When must a SharePoint user reauthenticate with the

claims issuer (ADFS in the Adatum scenario)?

 

a. Whenever the user closes and then reopens the

browser.

 

b. Whenever the ADFS web SSO cookie expires.

 

———————– Page 399———————–

 


 

c. Whenever the SharePoint FedAuth cookie that

contains the SAML token expires.

 

d. Every ten minutes.

 

Answer: Only (a) and (c) are correct. Whether or not a user

must re-authenticate after closing and re-opening the browser

depends on whether the SAML token is stored in a persistent

cookie; the Adatum single sign-out implementation requires

session cookies to be enabled. The ADFS web SSO cookie

determines when a user must reauthenticate with ADFS, not

with SharePoint. The time period between authentications will

depend on the lifetime of the SAML token as specified by

ADFS and whether sliding sessions are in use.

 

3. Which of the following statements are true with regard to

the Adatum sliding session implementation?

 

a. SharePoint tries to renew the session cookie before it

expires.

 

b. SharePoint waits for the session cookie to expire and

then creates a new one.

 

c. When SharePoint renews the session cookie, it always

requests a new SAML token from ADFS.

 

d. SharePoint relies on sliding sessions in ADFS.

 

Answer: Only (a) and (c) are correct. SharePoint tries to renew

the session cookie before it expires. If the cookie expires, then

SharePoint will request a new SAML token from ADFS.
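Renewing the cookie before it expires is what makes the session "sliding." A sketch of the renewal test; the halfway threshold is an assumption for illustration, not SharePoint's exact rule:

```python
from datetime import datetime, timedelta

def should_renew(valid_from, valid_to, now, threshold=0.5):
    """Return True once the session cookie is past a fraction of its
    lifetime, so an active user's cookie is reissued before it expires."""
    lifetime = valid_to - valid_from
    return now >= valid_from + lifetime * threshold

start = datetime(2011, 1, 1, 9, 0)
print(should_renew(start, start + timedelta(minutes=10), start + timedelta(minutes=6)))
```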

 

4. Where is the organization claim that SharePoint uses to

authorize access to certain documents in the a-Portal web

application generated?

 

a. In the SharePoint STS.

 

b. In the identity provider’s STS; for example in the

Litware issuer.

 

c. In ADFS.

 

d. Any of the above.

 

Answer: Only (c) is correct. The solution relies on ADFS to

generate the organization claim. It’s important not to rely on

the partner’s identity provider because a malicious administrator

could spoof another partner’s identity.

 

———————– Page 400———————–

 


 

5. Why does Adatum rely on ADFS to perform home realm discovery?

a. It’s easier to implement in ADFS than in SharePoint.

b. You can customize the list of identity providers for each SharePoint web application in ADFS.

c. You cannot perform home realm discovery in SharePoint.

d. You can configure ADFS to remember a user’s choice of identity provider.

Answer: Only (a), (b), and (d) are correct; each is a reason for Adatum to implement home realm discovery in ADFS. It is possible to implement home realm discovery in SharePoint, so (c) is false.

 

———————– Page 401———————–

 

 



Accessibility, A Guide for Educators

 

Accessibility

A Guide for Educators

Empower students with accessible technology
that enables personalized learning

 


 

Revision 4: Windows 8, Office 2013, Internet Explorer 10, Office 365, Lync 2013, Kinect for Xbox 360, and Kinect for Windows

 

Published by Microsoft Corporation    
Trustworthy Computing
One Microsoft Way
Redmond, Washington 98052-6399

Managing editors: Carla Hurd, Microsoft Education; and Dan Hubbell, Trustworthy Computing, Accessibility Outreach

Edition 4: Revised and published in 2013    

This document is provided “as-is.” Information and views expressed in this document, including URLs and other Internet website references, may change without notice.

This document does not provide you with any legal rights to any intellectual property in any Microsoft product.

Permission for Reuse: This guide may be used for non-profit educational and training purposes only. These materials may be printed and duplicated when used for educational or training purposes and not for resale. If you or your organization wants to use these materials for any other purpose, you may submit a request to and obtain written permission from Microsoft (www.microsoft.com/About/Legal/EN/US/IntellectualProperty/Permissions/Default.aspx). Requests will be considered on a case-by-case basis.
Terms of use: www.microsoft.com/About/Legal/EN/US/IntellectualProperty/Permissions/Default.aspx
Trademarks: www.microsoft.com/about/legal/en/us/IntellectualProperty/Trademarks/Default.aspx

To download a copy of this guide, visit: www.microsoft.com/enable/education/, and www.microsoft.com/education/enable/

Copyright © 2013 Microsoft Corporation. All rights reserved.

Microsoft, Windows, Internet Explorer, Access, Excel, InfoPath, OneNote, Outlook, PowerPoint, SharePoint, Lync, Office 365, SmartArt, Surface, Kinect, Xbox, Visio, SkyDrive, Skype, Natural, Backstage are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. All other trademarks are property of their respective owners.

    

Table of Contents

About This Guide

Purpose of This Guide

Chapter 1: Personalized Learning & Accessibility

What is Accessibility and Accessible Technology?

The Need for Accessible Technology in Schools

The Challenge: Inclusive Classrooms with Equal Access for All Students

Chapter 2: Impairment Types & Technology Solutions

Defining Disability and Impairment

Vision Impairments

Learning Impairments

Mobility and Dexterity Impairments

Hearing Impairments and Deafness

Language Impairments

Chapter 3: Accessibility in Microsoft Products

Accessibility in Windows 8

Accessibility in Internet Explorer 10

Accessibility in Microsoft Office 2013

Accessibility in Microsoft Office 365

Accessibility in Microsoft Lync 2013

Kinect in the Classroom: Engaging Students in New Ways

Chapter 4: Selecting Accessible Technology

Accessibility Consultants

Assistive Technology Decision Tree

Assistive Technology Product Starter Guide

Resources

Resources from Microsoft

Additional Resources and Annual Conferences

Glossary of Terms

Links

 

About This Guide        

Purpose of This Guide

In the era of personalized learning, the focus has shifted from what is being taught to what is being learned. The student’s needs and style are now key. Personalized learning requires attention to the unique learning abilities of all students—including students with learning or physical disabilities. As teachers urge students to take more responsibility for their learning, and require students to use technology to acquire new skills, schools have to provide accessible technology that is appropriate for each student’s needs.

This guide provides information about accessibility and accessible technology to help educators worldwide ensure that all students have equal access to learning with technology. For educators new to accessibility and working with students with disabilities, accessibility can seem overwhelming. To help educators teach students with all types of abilities, you will find specific information about each type of impairment and accessible technology solutions to use in the classroom. Educators can also visit the Partners in Learning Network (www.pil-network.com) for further information, community discussions, learning activities and other resources to support teaching and learning for all students.

How to Use This Guide

Chapter 1 provides an overview of accessibility, defines accessible technology, and discusses the importance of providing students with accessible technology in the era of personalized learning.

Chapter 2 provides an overview of types of disabilities and impairments organized by vision, learning, mobility and dexterity, hearing, and language. Each type of impairment is defined. A section on how to access built-in accessibility features and options in Windows 8 is provided, as well as descriptions of assistive technology products teachers and their students may find useful in relation to specific impairments.

Chapter 3 provides an overview of accessibility features and options in Windows 8, Internet Explorer 10, Office 2013, Office 365, Lync 2013, Kinect for Xbox 360, and Kinect for Windows. Brief descriptions and links to further information are provided.

Chapter 4 provides guidance on selecting accessible technology including how to identify the right mix of accessibility solutions, an assistive technology product starter guide, and an assistive technology decision tree, as well as additional resources available to educators through associations and disability advocacy organizations.

Resources provides Microsoft, accessibility association, and international
disability advocacy group contact information.

Glossary provides definitions of words and terms used in this document.

Links provides the full URL (web address) for some of the hyperlinks (blue underlined text) in this document when the URL is too long to include alongside the text it references.

Download

This guide is available for download on the Microsoft Accessibility Website: www.microsoft.com/enable/education/, and the Microsoft Accessibility in the classroom webpage: www.microsoft.com/education/enable/

Chapter 1:
Personalized Learning & Accessibility

Education leaders around the world are focused on preparing students in primary and secondary schools for tomorrow’s world, with the objective of helping each one meet his or her maximum potential. This focus, combined with the realization that every child learns in a unique way, is at the heart of “personalized learning.” As educators strive to reach this goal, technology emerges as a key component in making personalized learning a reality.

“I have long believed in the power of technology to make a profound impact in education and I’ve been fortunate enough to see some amazing examples around the world where teachers are truly making magic happen for their students. The examples that often most stand out and illustrate the transformative potential of technology are those that use accessible technology integration to empower and enrich the world of students that otherwise might have had an extremely difficult time communicating, collaborating, or socializing with their peers.”

― Anthony Salcito, Worldwide Vice President of Education, Microsoft Corporation

Personalized learning requires attention to the unique needs of all students—including students with learning or physical impairments and disabilities. As students are encouraged to take greater responsibility for their learning and for using technology to acquire new skills, schools have a responsibility to provide accessible technology that can be personalized for each student’s needs. Providing accessible technology in the classroom to students with a wide range of disabilities and impairments—from mild to severe, and from temporary to permanent—enables all students to have equal educational opportunities.

At Microsoft, we embrace our role and responsibility in helping to ensure students of all abilities have opportunities to learn 21st century skills. Microsoft has a long history of commitment to accessibility (www.microsoft.com/enable/microsoft/default.aspx), and we support the personalized learning vision by providing technology that is accessible to every student—regardless of ability.

 

 

What is Accessibility and Accessible Technology?

In this guide, accessible technology is defined as computer technology that enables individuals to adjust a computer to meet their vision, hearing, dexterity and mobility, learning, and language needs. For many, accessibility is what makes computer use possible in the first place. Moreover, accessibility makes it easier for all students to see, hear, and use a computer, and to personalize their computers to meet their own needs and preferences.

Although many people believe that accessibility is just for computer users with disabilities, in reality, the majority of people benefit from accessibility features. For example, most people want to adjust colors, font styles and sizes, background images, and sounds to make it easier and more comfortable to use a computer. Using voice control to create a text message on a mobile phone lets users choose the way they want to access information.

Accessible technology encompasses:

  • Accessibility features or settings built into the operating system and other software programs. These features can be adjusted to meet vision, hearing, dexterity and mobility, language, and learning needs. For example, in Windows 8, you can change the font size and color, and mouse pointer size, color, and movement options. Microsoft Windows, Microsoft Office, Microsoft Office 365, and Microsoft Internet Explorer include additional accessibility features and settings that can be adjusted to make the computer easier to see, hear, and use.
  • Assistive technology products (specialty hardware and software products) that accommodate an individual’s impairment, disability, or multiple disabilities. Examples include a screen magnification program for a computer user who has low vision or an ergonomic keyboard for a computer user with wrist pain. The products are usually add-ons to a computer system and are available from independent technology companies (www.microsoft.com/enable/at/).

Note: Windows RT only supports the installation of apps through the Windows Store. Windows 8 or Windows 8 Pro is required for individuals using assistive technology software or devices. Also, be sure to check with the assistive technology manufacturer regarding compatibility with Windows 8 before purchasing a new device.

The Need for Accessible Technology in Schools

Accessible technology in schools is important for several reasons. First and foremost, many countries require schools, by law, to provide equal access to technologies for students with disabilities. Among the many reasons for legislating equal access is the inclusion of students with disabilities in mainstream classrooms.

In many countries, students with special needs are being integrated into mainstream classrooms, rather than isolated in schools that focus solely on students with disabilities. This trend makes it especially important for schools and educators to understand how accessible technology benefits all students.

 

Prevalence of Adults and Students with Disabilities Across the Globe

According to the World Health Organization’s 2011 World Report on Disability, based on 2010 world population estimates, more than one billion people live with some form of disability—about 15% of the world’s population.

The number of children aged 0–14 living with disabilities is estimated at between 93 million and 150 million. UNESCO (citing WHO data, 2008) and UNICEF (2006) use the figure of 150 million children with disabilities worldwide.

The definition of disability varies by research organization and ranges from mental disability or developmental delay to impairments in seeing, hearing, speaking, and walking.

A significant number of individuals need educational aids such as accessible and assistive technology during their learning years. Meanwhile, overall student use of computers is increasing. This increase drives the requirement to provide assistive technology for those with disabilities.

Educational Technology in Schools and the Workforce of the Future

The use of computers and other forms of technology in education, as well as in the home and virtually all phases of modern life, is rising. In many countries, almost all students have access to a computer at school.

Students with and without disabilities are our future workforce. Proficiency in computer technology is an important and powerful skill, and increases employment opportunities for people with disabilities. Integrating accessible technology into schools, and introducing it to students with disabilities early in their educational lives, not only enhances their learning, but their future employment options as well.

The Challenge: Inclusive Classrooms with Equal Access for All Students

With the increased use of computers in schools, and the increased number of students with disabilities included in general education classrooms, it is even more important to make sure that all students have equal access to computer technology and the educational opportunities it provides.

Fortunately, personal productivity software publishers and educational software developers today include children with disabilities in their target audiences. As an educator, you can help ensure that students with disabilities have the same access to technology as their peers by seeking out solutions that are accessible to all. Accessibility benefits everyone.

 

 

Chapter 2:
Impairment Types & Technology Solutions

This chapter discusses the term “disability” and outlines the different types of impairments. This includes vision, learning, mobility and dexterity, hearing and deafness, and language impairments. Specific examples of accessible technology solutions are provided for each type of impairment or disability.

Defining Disability and Impairment

A quick Internet search on the question “What is the definition of disability?” is likely to net thousands of matches. Each person who tackles the question does so from a particular perspective and bias. In fact, most of us already have our own definition of what disability means, based on our own frame of reference. In many cases, the definition is all about legal contracts and insurance benefits.

The definition of “disability” is relevant in this discussion only because we discuss accessible technology solutions for different types of disabilities and impairments. Later in the guide, we use the term “impairment” to cover the wide range of impairments and disabilities, from mild to severe.

Before determining how accessible technology can benefit your students, it is beneficial to understand the types of impairments and how those impairments impact computer use.

Following are descriptions of impairment types and suggested accessibility features and assistive technology products for:

  • Vision impairments
  • Learning impairments
  • Mobility and dexterity impairments
  • Hearing impairments and deafness
  • Language impairments

 

Vision Impairments

According to UNICEF, there are an estimated 150 million children with disabilities in the world. The 2011 American Community Survey found that out of an estimated U.S. population of 306.6 million people, more than 37.5 million live with some type of disability, and more than 6.6 million have vision difficulties.

Vision impairments include:

  • Low vision. Students with low vision do not have clear vision even with the use of eyeglasses, contact lenses, or intraocular lens implants. For students with vision impairments and low vision, the computer monitor, appearance, text and icon size, and resolution can all be modified to make text and images more legible and easier to see. For students who still have difficulty seeing things on the screen, Magnifier (as well as sound and touch options) is available through Windows and compatible assistive technology products to make computing possible.
  • Colorblindness. Students who are colorblind have difficulty seeing particular colors or distinguishing between certain color combinations. Software that allows users to choose the display’s color combinations and adjust screen contrast is helpful for people who are colorblind. Individuals with a variety of vision impairments often find it easier to read white text on a black background instead of black on white. Windows provides High Contrast color schemes, and you can also select your own scheme using the colors that are easiest for you to read.
  • Blindness. Blindness occurs in a variety of degrees, and many people who are considered blind do have some measure of sight. For example, a person whose level of sight is equal to or less than 20/200—even with corrective glasses or lenses—is “legally blind.” A person who is sightless is referred to as “blind.” Many diseases and conditions contribute to, or cause, blindness, including cataracts, cerebral palsy, diabetes, glaucoma, multiple sclerosis, macular degeneration, and accidents.

    Students who are blind can interact with a computer through screen readers, keyboards, Braille devices, and audio/voice rather than a traditional monitor and mouse. The use of sophisticated assistive technology provides for both computer input and output, and is critical for people who are blind.

    Students who are both deaf and blind can also interact with computers using assistive technology products. To someone who is both deaf and blind, captioning and other sound options are of no use, but Braille assistive technology products are critical. People who are both deaf and blind can use computers with refreshable Braille displays and Braille embossers, discussed below.

 

Accessibility Features in Windows for Students with Vision Impairments

Windows includes numerous features and options for students who have difficulty seeing the screen, or for students who are blind and need to use the computer without a display. This section describes the features and options available in Windows 8 and how to access them. See Chapter 3 for more information on these features as well as accessibility features in other Microsoft products that also support Windows accessibility options.

Make the Computer Easier to See

For students who have vision impairments and low vision, turn on or adjust settings to Make the computer easier to see in the Ease of Access Center in Windows 8.

  1. In Windows 8, open the Ease of Access Center by pressing the Windows logo key + U. Under Explore all settings, select Make the computer easier to see.
  2. On the Make the computer easier to see screen, you can select the options that you want to use:
  • Choose a High Contrast theme. Use this option to set a high-contrast color scheme (such as white on black) that heightens the color contrast of some text and images on your computer screen, making those items more distinct and easier to identify.
  • Turn High Contrast on or off when Left Alt+Left Shift+Print Screen is pressed. Use this option to toggle a high-contrast theme on or off by pressing the Left Alt+Left Shift+Print Screen keys.
  • Turn on Narrator. Use this option to set Narrator (the basic built-in Windows screen reader) to run when you log on to your computer. Narrator reads aloud on-screen text and describes some on-screen events (such as error messages appearing) while you’re using the computer. For more information about using Narrator, see Hear text read aloud with Narrator (http://windows.microsoft.com/en-US/windows-8/hear-text-read-aloud-with-narrator/).
  • Turn on Audio Description. Use this option to set Audio Descriptions to run when you log on to your computer. Audio Descriptions describe what’s happening in videos (when available).
  • Change the size of text and icons. Use this option to make text and other items on your screen appear larger, so they’re easier to see. For more information, see Make the text on your screen larger or smaller (http://windows.microsoft.com/en-US/windows-8/make-text-screen-larger-smaller/).
  • Turn on Magnifier. One of the most common accessibility solutions for a computer user with low vision is a screen magnifier. Microsoft Windows includes a screen magnification tool called Magnifier that enlarges portions of the screen, making text and images easier to view. Magnifier in Windows 8 includes full-screen mode, lens mode (Figure 2-1), and docked mode. The magnification quality is improved, you can set the magnification level up to 16 times the original size, and you can choose to track the magnified area by mouse movement, keyboard focus, or text editing. For more information about using Magnifier, see Use Magnifier to see items on the screen (http://windows.microsoft.com/en-US/windows-8/use-magnifier-to-see-items/).

Figure 2-1. Magnifier in lens mode

  • Adjust the color and transparency of the window borders. Use this option to change the appearance of window borders to make them easier to see.
  • Fine tune display effects. Use this option to customize how certain items appear on your desktop.
  • Make the focus rectangle thicker. Use this option to make the rectangle around the currently selected item in dialog boxes thicker, which makes it easier to see.
  • Set the thickness of the blinking cursor. Use this option to make the blinking cursor in dialog boxes and programs thicker and easier to see.
  • Turn off all unnecessary animations. Use this option to turn off animation effects, such as fading effects, when you close windows and other elements.
  • Remove background images. Use this option to turn off unimportant, overlapped content and background images to help make the screen easier to see.

For additional information about how to use accessibility features in Windows and other Microsoft products, see the Microsoft Accessibility Tutorials available online at: www.microsoft.com/enable/training/

 

Use the Computer Without a Display

For students who are blind or partially sighted, accessibility options and assistive technology products are critical for productive computer use. To get started, Windows has many features that enable students to use the computer without a display. For example, you can have screen text read aloud by using Narrator, or you can have Windows describe screen activity to you.

For students who are blind and cannot use a monitor you can turn on or adjust settings to Use the computer without a display in the Ease of Access Center.

  1. In Windows 8, open the Ease of Access Center by pressing the Windows logo key + U. Under Explore all settings, select Use the computer without a display.
  2. On the Use the computer without a display screen, select the options that you want to use:
  • Narrator. Narrator is a basic screen reader that reads aloud the text that appears on screen and describes events such as error messages. It has been redesigned in Windows 8 to be substantially faster and to support many new features. Whether you are blind, have low vision, or are fully sighted, you can use Narrator from the first time you start your device. For more information about Narrator, see Hear text read aloud with Narrator (http://windows.microsoft.com/en-US/windows-8/hear-text-read-aloud-with-narrator/).
  • Turn on Audio Description. Use this option to set Audio Descriptions to run when you log on to Windows. Audio descriptions describe what’s happening in videos (when available).
  • Turn off all unnecessary animations. Use this option to turn off animation effects, such as fading effects when windows and other elements are closed, which can be distracting to some users.
  • How long should Windows notification dialog boxes stay open? Use this option to set how long notifications are displayed on the screen before they close, allowing enough time to read them.

Keyboard Shortcuts

Keyboard shortcuts are combinations of two or more keys that, when pressed, initiate a command that typically requires a mouse or other pointing device. For example, you can use the key combination Ctrl+C to copy text, and then Ctrl+V to paste it in your document. Keyboard shortcuts can make it easier for students with all kinds of impairments, particularly vision and mobility/dexterity impairments, to interact with their computers. Memorizing a few keyboard shortcuts makes it easier for some students who have difficulty seeing the monitor or keyboard to quickly accomplish tasks.

A list of keyboard shortcuts for Windows 8 is available at http://windows.microsoft.com/en-US/windows-8/keyboard-shortcuts/.
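Beyond shortcuts, the accessibility tools discussed in this chapter can also be started by typing a program name into the Run dialog (Windows logo key + R) or at a command prompt. The sketch below summarizes the program names as commonly documented for Windows 8; treat the list as an informal aid rather than official documentation, and verify availability on your edition.

```shell
# Informal quick-launch names for Windows 8 accessibility tools.
# On a Windows machine, type a name into the Run dialog (Windows logo key + R),
# or at a command prompt run, for example:  start narrator
#
#   magnify    - Magnifier (screen magnification)
#   narrator   - Narrator (basic built-in screen reader)
#   osk        - On-Screen Keyboard
#   utilman    - Ease of Access Center
#
# This portable sketch simply prints the list of names.
for tool in magnify narrator osk utilman; do
  echo "$tool"
done
```

Keeping these names handy lets a student (or an educator setting up a machine) reach a tool quickly even when the mouse or display settings make menu navigation difficult.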

Assistive Technology for Students with Vision Impairments

For the operating system or an application to be accessible to someone who is blind, it must provide information about its interactions with the user. Then, assistive technology can present the information in an alternative format such as text-to-speech or Braille. For example, if the computer displays a list box that contains several selections to choose from, the assistive technology product must inform a computer user who is blind that he or she needs to choose from a list of selections. The list of selections might be spoken or presented in a tactile fashion with a Braille display. A common assistive technology product used by people who are blind is a screen reader. Screen readers are software programs that present graphics and text as speech. Computer users who are blind also may use Braille displays and Braille printers—in fact, a combination of assistive technology products is quite common.

Assistive Technology Product Guide

Chapter 4 includes a table with details about specific assistive technology products.

Assistive Technology Products for Students with Vision Impairments

Assistive technology products with different capabilities are available to help people with vision impairments. Some assistive technology products provide a combination of capabilities that help specific individuals. Some of the assistive technology products available from independent technology companies (www.microsoft.com/enable/at/) helpful to students and adults with vision impairments are:

  • Screen magnifiers, which work like a magnifying glass. They enlarge a portion of the screen as the user moves the focus, increasing legibility for some users. Some screen magnifiers (software or hardware) allow a user to zoom in and out on a particular area of the screen. An example of a screen magnifier program is ZoomText by AiSquared. See also the mice (www.microsoft.com/hardware/en-us/mice) and keyboards (http://www.microsoft.com/hardware/en-us/keyboards) pages on the Microsoft Hardware website.
  • Screen readers, software programs that present graphics and text as speech. A screen reader is used to verbalize, or “speak,” everything on the screen, including names and descriptions of control buttons, menus, text, and punctuation. An example of a screen reader is Window-Eyes.
  • Braille printers (or embossers) that transfer computer-generated text into embossed Braille output. Braille translation programs convert text scanned in or generated via standard word processing programs into Braille, which can be printed into raised Braille. The Tiger Cub Jr. is an example of a Braille printer.
  • Braille displays (as shown in Figure 2-2) that provide tactile output of information represented on the computer screen. The user reads the Braille letters with his or her fingers, and then, after a line is read, refreshes the display to read the next line. The Seika Braille Display is an example.

    Figure 2-2. Braille display

  • Braille notetakers that enable a student who is blind to capture notes and then transfer them to a PC. Braille notetakers take advantage of refreshable Braille technology. In some cases, Braille notetakers replace or supplement a standard keyboard. An example of such a notetaker is the Eurobraille Esys.
  • Book readers. Students with low vision may need assistance reading books. Magnification devices are available as desktop magnification aids (such as Desktop SenseView DSV) or portable magnification aids (such as SmartView Nano Magnifier). A student may also use a PC configuration for book reading assistance (for example, Cicero Text Reader), or a dedicated reading device (such as the Victor Reader Wave).
    A student’s ability to read classroom materials depends upon the format in which the material is available and what accessibility needs the student has. For example, students with low vision can use a desktop or portable magnification aid to read printed materials and books. A student who is blind can have printed material scanned and read aloud through a text-to-speech software program on the PC. In addition, books are available in digital formats through organizations such as Bookshare (www.benetech.org/literacy/bookshare.shtml) and Learning Ally (https://www.learningally.org/) (formerly, Recording for the Blind and Dyslexic).

Learning Impairments

According to UNICEF, there are an estimated 150 million children with disabilities in the world. The 2011 American Community Survey found that out of an estimated U.S. population of 306.6 million people, more than 37.5 million live with some type of disability, and more than 14 million have cognitive difficulties.

Learning impairments range from conditions such as dyslexia and attention deficit disorder to Down syndrome. Processing problems are the most common and have the greatest impact on a person’s ability to use a computer, because these conditions interfere with the learning process.

Many students with these types of impairments are perfectly able to learn when information is presented to them in a form, and at a pace, that is appropriate for them. For example, some students find it easier to understand information that is presented in short, discrete units. In addition, many individuals with learning disabilities learn more efficiently using visual rather than auditory senses or vice versa. To provide a good learning experience, control over the individual learner’s single- or multi-sensory experience is critical.

Accessibility Features in Windows for Students with Learning Impairments

Windows includes numerous features and options for students who have learning impairments. This section describes the features and options available in Windows 8 and how to access them. See Chapter 3 for more information on these features as well as accessibility features in other Microsoft products.

Make it Easier to Focus on Reading and Typing Tasks

You can use the settings on the Make it easier to focus on tasks screen in the Ease of Access Center in Windows 8 to reduce the amount of information on the screen and to help students focus on reading and typing tasks.

  1. In Windows 8, open the Ease of Access Center by pressing the Windows logo key + U. Under Explore all settings, select Make it easier to focus on tasks.
  2. On the Make it easier to focus on tasks screen, select the options that you want to use:
  • Turn on Narrator. Windows comes with a built-in basic screen reader called Narrator, which reads text on the screen aloud and describes some events (such as error messages) that happen while you’re using the computer. This option sets Narrator to run when you log on to Windows. For more information about Narrator, see Hear text read aloud with Narrator (http://windows.microsoft.com/en-US/windows-8/hear-text-read-aloud-with-narrator/).
  • Remove background images. Use this option to turn off all unimportant, overlapped content and background images to help make the screen easier to see and less cluttered.
  • Turn on Sticky Keys. Use this option to set Sticky Keys to run when you log on to Windows. With Sticky Keys turned on, instead of having to press three keys at once (such as when you must press the Ctrl, Alt, and Delete keys together to log on to Windows), you can use one key at a time. Then, you can press a modifier key (a key that modifies the normal action of another key when the two are pressed in combination, such as the Alt key) and have it remain active until another key is pressed.
  • Turn on Toggle Keys. Use this option to set Toggle Keys to run when you log on to Windows. With Toggle Keys turned on, you can receive an alert each time you press the Caps Lock, Num Lock, or Scroll Lock keys. These alerts can help prevent the frustration of inadvertently pressing a key and not realizing it.
  • Turn on Filter Keys. Use this option to set Filter Keys to run when you log on to Windows. With Filter Keys turned on, Windows will ignore keystrokes that occur in rapid succession, or keystrokes that are held down for several seconds unintentionally.
  • Turn off all unnecessary animations. Use this option to turn off animation effects, such as fading, when windows and other elements are closed.
  • Choose how long Windows notification dialog boxes stay open. Use this option to choose how long notifications are displayed on the screen before they close—allowing adequate time to read them.

Assistive Technology Products for Students with Learning Impairments

Some of the assistive technology products available from independent technology companies (www.microsoft.com/enable/at/) used with computers by people with learning impairments are:

  • Word prediction programs. These allow the user to select a desired word from an on-screen list located in the prediction window. The program predicts words from the first one or two letters typed by the user. The word can then be selected from the list and inserted into the text by typing a number, clicking the mouse, or scanning with a switch. These programs help support literacy, increase written productivity and accuracy, and increase vocabulary skills through word prompting. ClaroRead Standard and TextHelp Read & Write Standard are just two examples of such programs.
  • Reading tools and learning disabilities programs. These include software designed to make text-based materials more accessible for people who struggle with reading. Options can include scanning, reformatting, navigating, or speaking text out loud. These programs help people who have difficulty seeing or manipulating conventional print materials; people who are developing new literacy skills or are learning English as a foreign language; and people who comprehend better when they hear and see text highlighted simultaneously. The Universal Reader is an example of assistive technology that can make reading easier.
  • Speech synthesizers. Also known as text-to-speech, these programs speak information aloud in a computerized voice. Speech synthesizers can be helpful for students with learning, language, or vision impairments. Products such as Scan and Read Pro produce natural sounding speech synthesis that can support reading skills development.
  • Speech recognition programs. These allow computer navigation by voice rather than entering data with a keyboard or mouse. You can use voice as well as a mouse and keyboard to enter data, write text, and navigate applications. Students who have difficulty typing or reading text because of a learning, language, or mobility impairment can often work successfully on a computer with the use of speech recognition. Speech Recognition is available in Windows 8 (http://windows.microsoft.com/en-US/windows-8/using-speech-recognition/). Some students may prefer or require a more robust speech recognition program, such as Dragon NaturallySpeaking.

 

Mobility and Dexterity Impairments

Mobility and dexterity impairments can be caused by a wide range of conditions and injuries, such as cerebral palsy, multiple sclerosis, loss of limbs or digits, spinal cord injuries, and repetitive stress injuries, among others. As a result, students might be unable to use arms or fingers to interact with their computers using a standard keyboard or mouse. Temporary mobility impairments, which might occur with a broken arm, for example, are also included in this category.

Others who have dexterity impairments or pain in their hands, arms, and wrists might need to adjust settings to make it more comfortable to use a keyboard or mouse. For example, some people cannot press multiple keys simultaneously (like Ctrl+Alt+Delete). Still others might strike multiple keys or repeat keys unintentionally. Some students might have use of their hands and arms but have a limited range of motion. All of these conditions can make using a standard mouse or keyboard difficult, if not impossible.

Mobility and dexterity impairments need to be addressed individually to set up the right mix of accessibility features in Windows and assistive technology hardware and software solutions.

There are many types of products available that allow students to use a computer, even if the students can move only their eyes. Outlined below are accessibility features in Windows to make the mouse and keyboard more comfortable. In addition, you can set up a computer for a student who needs to use an on-screen keyboard and other alternative input options rather than a standard keyboard or mouse.

Accessibility Features in Windows for Students with Mobility and Dexterity Impairments

Windows includes numerous features and options for students with mobility and dexterity impairments. This section describes the features and options available in Windows 8 and how to access them. See Chapter 3 for more information on these features as well as accessibility features in other Microsoft products.

Make the Mouse Easier to Use

For students who have pain or discomfort when using the mouse, or other dexterity impairments, consider a different style of mouse (options discussed below), and try changing the size of the mouse cursor and the mouse button options to make the mouse easier to use. Start by exploring the mouse options available on the Make the mouse easier to use screen in the Ease of Access Center.

  1. In Windows 8, open the Ease of Access Center by pressing the Windows logo key + U. Under Explore all settings, select Make the mouse easier to use.
  2. On the Make the mouse easier to use screen, select the options that you want to use:
  • Change the color and size of mouse pointers. Use this option to make the mouse pointer larger, or change the colors to make it easier to see.
  • Turn on Mouse Keys. Use this option to control the movement of the mouse pointer by using the numeric keypad.
  • Activate a window by hovering over it with the mouse. Use this option to make it easier to select and activate a window by pointing at it with the mouse rather than by clicking it.
  • Prevent windows from being automatically arranged when moved to the edge of the screen. Use this option to prevent windows from automatically resizing and docking along the sides of your screen when you move them there.
  • You can also customize the mouse in a variety of ways, such as reversing the functions of your mouse buttons, making the mouse pointer more visible, and altering the scroll wheel speed. In Windows 8, open the Mouse Control Panel by typing mouse in the Search box, clicking Settings, and then clicking Mouse.

Make the Keyboard Easier to Use

For a student who has pain or discomfort when using the keyboard, consider a different style of keyboard (options discussed below). You can also adjust the keyboard controls on the Make the keyboard easier to use screen in the Ease of Access Center.

  1. In Windows 8, open the Ease of Access Center by pressing the Windows logo key + U. Under Explore all settings, select Make the keyboard easier to use.
  2. On the Make the keyboard easier to use screen, select the options that you want to use:
  • Turn on Mouse Keys. Use this option to set Mouse Keys to run when you log on to Windows. With Mouse Keys turned on, instead of using the mouse, you can use the arrow keys on your keyboard or the numeric keypad to move the pointer.
  • Turn on Sticky Keys. Use this option to set Sticky Keys to run when you log on to Windows. With Sticky Keys turned on, instead of having to press three keys at once (such as when you must press the Ctrl, Alt, and Delete keys together to log on to Windows), you can use one key at a time. Then, you can press a modifier key (a key that modifies the normal action of another key when the two are pressed in combination, such as the Alt key) and have it remain active until another key is pressed.
  • Turn on Toggle Keys. Use this option to set Toggle Keys to run when you log on to Windows. With Toggle Keys turned on, you can receive an alert each time you press the Caps Lock, Num Lock, or Scroll Lock keys. These alerts can help prevent the frustration of inadvertently pressing a key and not realizing it.
  • Turn on Filter Keys. Use this option to set Filter Keys to run when you log on to Windows. With Filter Keys turned on, Windows will ignore keystrokes that occur in rapid succession, or keystrokes that are held down for several seconds unintentionally.
  • Underline keyboard shortcuts and access keys. Use this option to make keyboard access in dialog boxes easier by highlighting access keys for the controls in them. (For more information about keyboard shortcuts, see below).
  • Prevent windows from being automatically arranged when moved to the edge of the screen. Use this option to prevent windows from automatically resizing and docking along the sides of your screen when you move them there.
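The Filter Keys behavior described above (ignoring keystrokes that occur in rapid succession) can be sketched in a few lines of code. This is an illustrative model only, not the actual Windows implementation; the function name and the 500-millisecond threshold are assumptions made for the example.

```python
# Illustrative sketch of the Filter Keys idea: ignore repeated presses
# of the same key that arrive in rapid succession. The threshold and
# function name are assumptions for this example, not the Windows code.

def filter_keys(events, threshold_ms=500):
    """Keep a keystroke only if the same key was not already accepted
    within the last threshold_ms milliseconds."""
    accepted = []
    last_accepted = {}  # key -> timestamp (ms) of last accepted press
    for timestamp_ms, key in events:
        previous = last_accepted.get(key)
        if previous is None or timestamp_ms - previous >= threshold_ms:
            accepted.append((timestamp_ms, key))
            last_accepted[key] = timestamp_ms
    return accepted

# An accidental double-press of "a" 100 ms apart collapses to one press.
events = [(0, "a"), (100, "a"), (700, "b"), (750, "b"), (1400, "a")]
print(filter_keys(events))  # -> [(0, 'a'), (700, 'b'), (1400, 'a')]
```

A real implementation works on live keyboard events rather than a prepared list, but the filtering rule is the same.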

Keyboard Shortcuts

Keyboard shortcuts are combinations of two or more keys that, when pressed, initiate a command that would typically require a mouse or other pointing device. Keyboard shortcuts can make computer use easier for students with all kinds of impairments, particularly those with dexterity issues who might find using the mouse difficult or tiring. Memorizing a few keyboard shortcuts also makes navigating the computer faster for students.

A list of keyboard shortcuts for Windows 8 is available at: http://windows.microsoft.com/en-US/windows-8/keyboard-shortcuts/.

Here are a few keyboard shortcuts for the features mentioned in this section:

  • Right Shift for eight seconds: Turn Filter Keys on and off
  • Left Alt+Left Shift+Print Screen: Turn High Contrast on or off
  • Left Alt+Left Shift+Num Lock: Turn Mouse Keys on or off
  • Shift five times: Turn Sticky Keys on or off
  • Num Lock for five seconds: Turn Toggle Keys on or off
  • Windows logo key +U: Open the Ease of Access Center

Use the Computer Without the Mouse or Keyboard

Windows 8 includes features that make it possible to use the computer without a mouse or keyboard. Windows Speech Recognition lets you use voice commands to navigate your computer screen. On-Screen Keyboard lets you enter text by selecting keys on a visual keyboard displayed on the computer screen. Touchscreen-enabled Windows 8 computers and tablets also let you navigate the screen without a mouse or keyboard. See “Touchscreens” in the assistive technology section of this chapter.

You can turn on or adjust settings for these features on the Use the computer without a mouse or keyboard screen in the Ease of Access Center.

Assistive Technology Products for Students with Mobility and Dexterity Impairments

Some of the assistive technology products available from independent technology companies (www.microsoft.com/enable/at/) that people with mobility and dexterity impairments use with computers include:

  • Ergonomic keyboards and mice. Ergonomic keyboards and mice are designed to be more comfortable than a standard keyboard and mouse. To improve the quality and health of your PC experience, Microsoft designers and ergonomists created industry-leading keyboard and mouse products to encourage healthier hand and wrist positions. Microsoft Natural keyboards and mice have set the industry standard for comfort, and can reduce carpal tunnel syndrome symptoms. Microsoft keyboards and mice (http://www.microsoft.com/hardware/) also have built-in zoom and magnifier options.
  • Joysticks can be plugged into the computer’s mouse port and used to control the cursor on the screen. Joysticks benefit users who need to operate a computer without the use of their hands. For example, some people might operate the joystick with their feet or with the use of a cup on top of the joystick that can be manipulated with their chin. An example of a joystick is the SAM-Joystick.
  • Trackballs look like a mouse with a movable ball on top of a stationary base. An example of a trackball is shown in Figure 2-4. The ball can be rotated with a pointing device or a hand. People who have fine motor skills but lack gross motor skills can use these devices more easily and comfortably than a traditional mouse. BigTrack is an example of a trackball style mouse that is more comfortable for many people with dexterity issues—as well as young children and seniors.


    Figure 2-4. Trackball

  • On-screen keyboard programs provide an image of a standard or modified keyboard on the computer screen. The user selects the keys with a mouse, touchscreen, trackball, joystick, switch, or electronic pointing device. On-screen keyboards often have a scanning option. With the scanning capability turned on, the individual keys on the on-screen keyboard are highlighted. When a desired key is highlighted, the user is able to select it by using a switch positioned near a body part that he or she has under voluntary control. An example is ScreenDoors 2000, an on-screen keyboard product that can be helpful for some students.
    See also the built-in On-Screen Keyboard in Windows 8 (http://windows.microsoft.com/en-US/windows-8/type-with-the-on-screen-keyboard/)
  • Keyboard filters include typing aids such as word prediction utilities and add-on spelling checkers. These products can often be used to reduce the number of keystrokes. As an example, imagine you have to type the letter “G.” However, in order to type the letter, you first have to move your finger over the entire first row of your keyboard and halfway across the second row. Along the way, you might accidentally depress “R,” “P,” or “D” keys, but you only want the letter “G.” Keyboard filters enable users to quickly access the letters they need and to avoid inadvertently selecting keys they don’t want. SoothSayer Word Prediction is an example of a keyboard filter.
  • Touchscreens are monitors, or devices placed on top of computer monitors, which allow direct selection or activation of the computer by touching the screen. These devices can benefit some users with mobility impairments because they present a more accessible target. It is easier for some people to select an option directly rather than through a mouse movement or keyboard. Moving the mouse or using the keyboard for some might require greater fine motor skills than simply touching the screen to make a selection. Other users might make their selections with assistive technology such as mouth sticks. With Windows 8 and a touchscreen monitor, you can just touch your computer screen for a more direct and natural way to work. Use your fingers to scroll, resize windows, play media, and pan and zoom. Learn about Microsoft touchscreen technologies such as Microsoft Surface (www.microsoft.com/Surface/en-US).
  • Alternative PC hardware and all-access workstations. In some cases, alternative PC hardware is needed. Some individuals with mobility impairments find it challenging to open the monitor of a laptop because the laptop latch isn’t accessible for them. Or, some students might need a laptop to be mounted to a wheelchair. Assistive technology solutions such as these are referred to as “all-access workstations.” The Desktop SenseView DSV is an alternative PC workstation that is easy to control with dexterity impairments and enlarges text for students with vision impairments.
  • Alternative input devices allow users to control their computers through means other than a standard keyboard or pointing device.

    Alternative input devices include:

    • Alternative keyboards available in different sizes with different keypad arrangements and angles. Larger keyboards (one example is BigKeys LX) are available with enlarged keys (see the example shown in Figure 2-5, below), which are easier to access by people with limited motor skills. Smaller keyboards are available with smaller keys (or keys placed closer together) to allow someone with a limited range of motion to reach all the keys. Many other keyboards are also available: one-handed keyboards, keyboards with keypads located at various angles, and split keyboards where the keypad is split into sections.


      Figure 2-5. Alternative keyboard with large keys and ABC layout

    • Electronic pointing devices used to control the cursor on the screen using ultrasound, an infrared beam, eye movements, nerve signals, or brain waves. When used with an on-screen keyboard, electronic pointing devices also allow the user to enter text or data. The assistive technology product HeadMouse Extreme is an example of a pointing device.
    • Sip-and-puff device, shown in Figure 2-6, refers to just one of many different types of switch access. In typical configurations, a dental saliva extractor is attached to a switch. An individual uses his or her breath to activate the switch. For example, a puff generates the equivalent of a keystroke, the pressing of a key, a mouse click, and so on. Maintaining constant “pressure” on the switch (more like sucking than sipping) is the equivalent of holding a key down. With an on-screen keyboard, the user “puffs” out the letters. Moving the cursor over a document’s title bar and “sipping” enables the user to drag items around on the screen just as you would with a mouse. This technology is often used with on-screen keyboards. The Jouse 2 is an example of a sip-and-puff device.


      Figure 2-6. Sip-and-puff device

 

  • Wands and sticks are typing aids used to strike keys on the keyboard. They are most commonly worn on the head, held in the mouth, strapped to the chin, or held in the hand. They are useful for people who need to operate their computers without the use of their hands or who have difficulty generating fine movements. The majority of these devices are customized for a user by adapting a pencil or wooden dowel, which can be purchased at most hardware stores.
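The “scanning” option described for on-screen keyboards in the list above can be modeled simply: the highlight steps through the keys at a fixed pace, and a single switch press selects whichever key is highlighted at that moment. The sketch below uses illustrative names and is not any product’s actual code.

```python
# Minimal sketch of switch-access "scanning" on an on-screen keyboard:
# the highlight cycles through the keys, and one switch press selects
# the key highlighted at that step. Names here are illustrative only.

def scan_and_select(keys, switch_pressed_at_step):
    """Return the key highlighted when the switch is pressed.
    The highlight wraps around the key list as scanning continues."""
    return keys[switch_pressed_at_step % len(keys)]

row = ["a", "b", "c", "d", "e"]
print(scan_and_select(row, 2))  # -> c
print(scan_and_select(row, 7))  # -> c (the highlight wrapped around)
```

A real scanning keyboard adds a configurable dwell time per key and usually scans by row first, then by key within the row, but the selection rule is the same.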

 

Hearing Impairments and Deafness

According to the World Health Organization, over five percent of the world’s population (360 million people) has disabling hearing loss: 328 million adults and 32 million children. Hearing impairments encompass a range of conditions, from slight hearing loss to deafness. Hearing impairments include:

  • Hearing loss and hard-of-hearing. Students who have hearing loss or are hard-of-hearing may be able to hear some sound, but might not be able to distinguish words. Often, people with this type of hearing impairment can use an amplifying device to provide functional hearing. On the computer, adjusting sounds, using alternatives for sounds such as visual indicators and captions, and headphones to eliminate background noise can be helpful options.
  • Deafness. Students who are deaf may not be able to hear any sounds or words spoken. It is helpful to adjust the computer to turn on visual alternatives for sounds (http://windows.microsoft.com/en-US/windows-8/use-visual-alternatives-to-sounds/).

Computer Use by People Who Are Both Deaf and Blind

People who are both deaf and blind can, and do, use computers with the aid of assistive technology. To someone who is both deaf and blind, captioning and other sound options are of no use, but Braille assistive technology products, such as refreshable Braille displays and Braille embossers, are critical.

Accessibility Features in Windows for Students with Hearing Impairments

Accessibility features in Windows 8 for those with hearing impairments include changing notifications from sound to visual notifications, volume control, and captioning. Visual notifications and captions allow users to choose to receive visual warnings and text captions, rather than sound messages, for system events such as a new email message arriving.

Accessibility features helpful for students who have hearing impairments include:

  • Adjusting volume
  • Changing computer sounds
  • Using text or visual alternatives for sounds

More Information

See Chapter 3 for more information on these features as well as accessibility features in other Microsoft products.

Adjust volume

Although most speakers and many keyboards have volume control buttons, you can also control speaker volume in Windows. One of the easiest ways to adjust it is to click the Speakers button in the notification area of the taskbar when using desktop view, and then move the slider up or down to increase or decrease the speaker volume. Or, from the Start screen, swipe in from the right edge of the screen and click Settings. Click the volume control icon and move the slider up or down to increase or decrease the speaker volume.

To adjust overall sound volume in Windows 8:

  • Swipe in from the right edge of the screen, and then tap Search.
    (If you’re using a mouse, point to the upper-right corner of the screen, move the mouse pointer down, and then click Search.) Enter Adjust system volume in the search box, and tap or click Settings, and then tap or click Adjust system volume.
  • Move the slider up to increase the volume.

Make sure the Mute button isn’t turned on. If muting is turned on, tap or click the Mute button to turn it off.

Change computer sounds

You can select the sounds that play when certain events occur on screen. This is helpful for students who have trouble hearing some sounds—high or low-pitched sounds, for example, or sounds associated with other devices. To change sounds in Windows 8:

  • Open Personalization by swiping in from the right edge of the screen, tapping Search (or if you’re using a mouse, pointing to the upper-right corner of the screen, moving the mouse pointer down, and then clicking Search), entering Personalization in the search box, tapping or clicking Settings, and then tapping or clicking Personalization.

To change the sounds you hear when something happens on your computer, tap or click Sounds, tap or click an item in the Sound Scheme list, and then tap or click OK.


Figure 2-7. Sound options dialog box with Sounds tab open

Use Text or Visual Alternatives to Sounds

Windows 8 provides settings for using visual cues to replace sounds in many programs. You can adjust these settings on the Use text or visual alternatives for sounds screen in the Ease of Access Center.

  1. Swipe in from the right edge of the screen, and then tap Search.
    (If you’re using a mouse, point to the upper-right corner of the screen, move the mouse pointer down, and then click Search.)
  2. In the search box, enter Replace sounds with visual cues, tap or click Settings, and then tap or click Replace sounds with visual cues. Select the options that you want to use:
  • Turn on visual notifications for sounds. Use this option to set sound notifications to run when you log on to Windows. Sound notifications replace system sounds with visual cues, such as a flash on the screen, so that system alerts are noticeable even when they’re not heard. You can also choose how you want sound notifications to warn you.
  • Turn on text captions for spoken dialog. Use this option (when available) to display text captions in place of sounds to indicate that activity is happening on your computer (for example, when a document starts or finishes printing).


Figure 2-8. Use text or visual alternatives for sounds screen in the Ease of Access Center

Assistive Technology Products for Students with Hearing Impairments

Individuals with hearing impairments may need a classroom sign language interpreter or other accessibility solutions to be able to communicate actively in their classroom.

Personal listening devices and personal amplifying products can also be helpful for students with some hearing.

One product that may be useful for schools is iCommunicator—a graphical sign language translator that converts speech to sign language in real time to enable people who are deaf to communicate more easily with hearing people.

Depending on the learning environment, students may be able to use several Microsoft programs and apps to communicate. Microsoft Outlook, for example, can be used to transmit textual conversations. Instant messaging programs such as Microsoft Lync in Office 365 provide a real-time conversational environment for students who are deaf. The Skype app for Windows 8 and Windows RT allows users to communicate by video using a webcam so students who communicate by sign language can readily interact.

Language Impairments

Language impairments include conditions such as aphasia (loss or impairment of the power to use or comprehend words, often as a result of brain damage), delayed speech (a symptom of cognitive impairment), and other conditions resulting in difficulties remembering, solving problems, or perceiving sensory information. For students who have these impairments, complex or inconsistent visual displays or word choices can make using computers more difficult. For most computer users, in fact, software that is designed to minimize clutter and competing objects on the screen is easier to use, more inviting, and more useful.

Some individuals with language impairments do not have the ability to communicate orally. These individuals can use augmentative and assistive communication devices to “speak” for them. To communicate, these individuals either type out words and phrases they wish to “say” or select from a series of images that, when arranged in a particular way, generate a phrase. For example, an individual could use the combination of a picture of an apple, a sandwich, and a carton of milk plus a lunch pail to communicate what she wants her mom to pack for lunch tomorrow.
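The lunch-packing example above can be sketched as a tiny symbol-to-phrase mapping, which is the core idea behind many augmentative and assistive communication devices. The symbol names and phrase template below are assumptions invented for this illustration.

```python
# Illustrative sketch of an AAC device's core idea: a sequence of
# selected picture symbols is arranged into a spoken phrase. The symbol
# names and phrase template are assumptions made for this example.

symbol_phrases = {
    "apple": "an apple",
    "sandwich": "a sandwich",
    "milk": "a carton of milk",
    "lunch_pail": "pack my lunch with",
}

def compose_phrase(selected_symbols):
    """Arrange the selected symbols into a simple request phrase."""
    action = [symbol_phrases[s] for s in selected_symbols if s == "lunch_pail"]
    items = [symbol_phrases[s] for s in selected_symbols if s != "lunch_pail"]
    return "Please " + " ".join(action) + " " + ", ".join(items) + "."

# Matches the example in the text: apple + sandwich + milk + lunch pail.
print(compose_phrase(["lunch_pail", "apple", "sandwich", "milk"]))
# -> Please pack my lunch with an apple, a sandwich, a carton of milk.
```

A real device would then hand the composed phrase to a speech synthesizer so it is spoken aloud for the user.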

Accessibility Features in Windows for Students with Language Impairments

Windows includes numerous features and options for students with language impairments. This section describes the features and options available in Windows 8 and how to access them. See Chapter 3 for more information on these features as well as accessibility features in other Microsoft products.

Make it Easier to Focus on Reading and Typing Tasks

You can use the settings on the Make it easier to focus on tasks screen in the Ease of Access Center in Windows 8 to reduce the amount of information on the screen and to help students focus.

  1. In Windows 8, open the Ease of Access Center by pressing the Windows logo key +U. Under Explore all settings, select Make it easier to focus on tasks.
  2. Then, select the options that are most helpful:
  • Turn on Narrator. Use this option to set Narrator to run when you log on to Windows. Narrator reads aloud on-screen text and describes some events (such as error messages appearing) while you’re using the computer. For more information about using Narrator, see Hear text read aloud with Narrator (http://windows.microsoft.com/en-US/windows-8/hear-text-read-aloud-with-narrator/).
  • Remove background images. Use this option to turn off all unimportant, overlapped content and background images to help make the screen easier to see.
  • Turn on Sticky Keys. Use this option to set Sticky Keys to run when you log on to Windows. Instead of having to press three keys at once (such as when you must press the Ctrl, Alt, and Delete keys together to log on to Windows), you can press one key at a time by turning on Sticky Keys and adjusting the settings. Then, you can press a modifier key and have it remain active until another key is pressed.
  • Turn on Toggle Keys. Use this option to set Toggle Keys to run when you log on to Windows. Toggle Keys can play an alert each time you press the Caps Lock, Num Lock, or Scroll Lock keys. These alerts can help prevent the frustration of inadvertently pressing a key.
  • Turn on Filter Keys. Use this option to set Filter Keys to run when you log on to Windows. You can set Windows to ignore keystrokes that occur in rapid succession, or keystrokes that are held down for several seconds unintentionally.
  • Turn off all unnecessary animations. Use this option to turn off animation effects, such as fading, when windows and other elements are closed.
  • Choose how long Windows notification dialog boxes stay open. Use this option to choose how long notifications are displayed on the screen before they close—so you have adequate time to read them.
  • Prevent windows from being automatically arranged when moved to the edge of the screen. Use this option to prevent windows from automatically resizing and docking along the sides of your screen when you move them there.

Assistive Technology Products for Students with Language Impairments

Some of the assistive technology products available from independent technology companies (www.microsoft.com/enable/at/) that people with language impairments use with computers include:

  • Augmentative and assistive communication (AAC) devices. These are used by individuals who cannot speak or who find speaking difficult. The user types in a word, phrase, or sentence to communicate—or selects a series of symbols or pictures on the device—and the device “speaks” aloud for the user. Often these devices are used to replace a PC keyboard. One example of an augmentative communication device is QualiSPEAK Pro.

    Some apps, found in the Windows Store and available for download, provide augmentative communication capabilities. Mozzaz TalkingTILES (www.mozzaz.com) is an example. It’s an assistive communication and learning app that is fully customizable and can progress with a student’s development from K to 12. It delivers an integrated and coordinated learning environment accessible from any device. Teachers can remotely collaborate with other teaching professionals and support staff to share data and teaching lessons, and to monitor the progress of special needs students through instant reports and dashboards.



    Figure 2-9. Screen shot of Mozzaz TalkingTILES showing an example of tiles that can be selected to communicate simple phrases or whole conversations

     

  • Touchscreens. These are monitors, or devices placed on top of computer monitors, that allow direct selection or activation of the computer by touching the screen. Touchscreens benefit people with mobility impairments, as well as people with language impairments. The ability to touch the computer screen to make a selection is advantageous for people with language and learning impairments because it is a simpler, more direct, and intuitive process than making a selection using a mouse or keyboard. With Windows 8 and a touchscreen monitor or tablet, such as Microsoft Surface (www.microsoft.com/Surface/), you can just touch your computer screen for a more direct and natural way to work. Use your fingers to scroll, resize windows, play media, and pan and zoom. Additional touchscreen technologies are available for Windows. The Gus Communicator PC10 Touch Screen Tablet PC is an example of an assistive technology product that can be used via touch to communicate.
  • Speech synthesizers. Defined earlier, these programs provide the user with information through a computer voice. Also known as text-to-speech (TTS), the speech synthesizer receives information in the form of letters, numbers, and punctuation marks, and then “speaks” it out loud to the user in a computer voice. Scan and Read Pro is an example of an assistive technology product that produces more natural sounding speech synthesis.


Chapter 3:
Accessibility in Microsoft Products

This chapter lists important accessibility features and options built into Microsoft products along with a brief description and links to further information. Products included are:

  • Windows 8
  • Internet Explorer 10
  • Office 2013
  • Office 365
  • Lync 2013
  • Microsoft Kinect for Xbox 360 and Kinect for Windows

More Information

Find product accessibility information, demos, tutorials, and more on the Microsoft Accessibility Website (www.microsoft.com/enable/).

Accessibility in Windows 8

Windows 8 includes accessibility options and programs that make it easier to see, hear, and use your computer including ways to personalize your PC.

Magnifier now includes a lens mode and full-screen mode. On-Screen Keyboard can be resized to make it easier to see and includes text prediction. Windows 8 also gives you more ways to interact with your PC by taking advantage of new strides in speech recognition and touch technology.

Find more information about Windows 8 accessibility (www.microsoft.com/enable/products/windows8/).



Figure 3-1. Windows 8 Ease of Access Center screen in Control Panel


Overview of Accessibility Features in Windows 8


Ease of Access Center

A central location to explore accessibility settings and programs to make your computer easier to use. The Ease of Access Center in Control Panel can be opened by pressing the Windows logo key +U after you log on to Windows.

The Ease of Access Center includes:

  • Quick access to common tools. Start Magnifier, On-Screen Keyboard, Narrator, and High Contrast quickly.
  • Get recommendations to make your computer easier to use. An optional questionnaire provides a personalized list of recommended settings based on your answers to questions about your eyesight, dexterity, hearing, and more, so you can choose which options you want to try.
  • Explore all settings by category. Instead of looking for accessibility settings in various places, settings are organized so you can explore how to:
    • Make the computer easier to see
    • Use the computer without a display
    • Make the mouse easier to use
    • Make the keyboard easier to use
    • Use the computer without a mouse or keyboard
    • Use text or visual alternatives for sounds
    • Make it easier to focus on tasks

Magnifier

Enlarges portions of the screen, making it easier to view text and images. Magnifier in Windows 8 now includes full-screen mode, lens mode, and docked mode.

The magnification quality is improved and you can set the magnification level up to 16 times the original size and choose to track what you magnify by movement of your mouse, the keyboard, or text editing. Options include:

  • Choose where Magnifier focuses so that it follows the movement of the mouse cursor, keyboard focus, or text editing
  • Change the zoom level
  • Set the zoom increment
  • Set the lens size
  • Turn on color inversion for better screen legibility
  • Display the Magnifier toolbar

Make the text on your screen larger or smaller

Make the text and other items on your screen, such as icons, easier to see by making them larger. You can do this without changing the screen resolution of your monitor or laptop screen, so text and other items grow or shrink while your display stays at its optimal resolution.

On-Screen Keyboard

Displays a visual keyboard with all the standard keys. Instead of relying on the physical keyboard to type and enter data, you can use On-Screen Keyboard to select keys using the mouse or another pointing device.

On-Screen Keyboard in Windows 8 can be resized and customized to make it easier to see and use. On-Screen Keyboard now also includes text prediction in eight languages. When text prediction is enabled, as you type, On-Screen Keyboard displays a list of words that you might be typing. Options include:

  • Change how information is entered
  • Set On-Screen Keyboard to use audible clicks
  • Use a numeric keypad
  • Enable text prediction
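The text prediction option above can be illustrated with a simple prefix match against a vocabulary: as the student types, the keyboard offers likely completions so fewer keys need to be selected. The vocabulary and function name below are assumptions for the sketch; On-Screen Keyboard’s actual predictor is far more sophisticated.

```python
# Illustrative sketch of text prediction: offer completions for the
# prefix typed so far. The vocabulary and function name are assumptions;
# the real On-Screen Keyboard predictor is far more sophisticated.

def predict(prefix, vocabulary, limit=3):
    """Return up to `limit` vocabulary words starting with `prefix`."""
    prefix = prefix.lower()
    return sorted(w for w in vocabulary if w.startswith(prefix))[:limit]

vocabulary = ["great", "green", "guide", "key", "keyboard"]
print(predict("g", vocabulary))   # -> ['great', 'green', 'guide']
print(predict("ke", vocabulary))  # -> ['key', 'keyboard']
```

Real predictors also rank candidates by how frequently and how recently each word has been used, rather than alphabetically.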

Speech Recognition

Command your PC with your voice, including the capability to dictate into almost any application. You can dictate documents and email, and surf the web by saying what you see. An easy setup process and an interactive tutorial are available to familiarize you with the speech commands and train your computer to better understand you. Options include:

  • Dictate text using Speech Recognition
  • Use the dictation scratchpad
  • Add or edit words in the Speech Dictionary
  • Use common commands

Windows Touch

With Windows 8 and a touchscreen monitor you can use your fingers to scroll, resize windows, play media, and pan and zoom.

Narrator

Windows comes with a basic screen reader called Narrator that reads text on the screen aloud and describes some events (such as an error message appearing) that happen while you’re using the computer. You can find Narrator in the Ease of Access Center. Options include:

  • Choose which text Narrator reads aloud
  • Change the Narrator voice
  • Start Narrator minimized

Keyboard shortcuts

Combinations of two or more keys that, when pressed, can be used to perform a task that would typically require a mouse or other pointing device. Keyboard shortcuts can make it easier to interact with your computer, saving you time and effort.

Mouse Keys

Instead of using the mouse, you can use the arrow keys on the numeric keypad to move the pointer.

Sticky Keys

Instead of having to press three keys at once (such as when you must press the Ctrl, Alt, and Delete keys simultaneously to log on to Windows), you can press one key at a time when Sticky Keys is turned on.

Filter Keys

Ignore keystrokes that occur in rapid succession and keystrokes that are held down for several seconds unintentionally.

Visual notifications

Replace system sounds with visual cues, such as a flash on the screen, so system alerts are announced with visual notifications instead of sounds.

Captions

Get information via animations and video that some programs use to indicate that activity is happening on your computer.


Accessibility Through Windows Store Apps

In addition to the built-in accessibility features and options of Windows 8, you can download apps (http://windows.microsoft.com/en-us/windows-8/apps/) from the Windows Store (http://windows.microsoft.com/en-us/windows-8/windows-store/), including many apps that either provide accessibility for people with disabilities (for example, apps for augmentative and assistive communication, or AAC) or have been designed to be compatible with assistive technology. You can search specifically for apps that have been marked as accessible.

Note: Windows RT only supports the installation of apps through the Windows Store. Windows 8, or Windows 8 Professional, is required for individuals using assistive technology software or devices. Also, be sure to check with the assistive technology manufacturer (www.microsoft.com/enable/at/) regarding compatibility with Windows 8 before purchasing a new device.


Accessibility in Internet Explorer 10

The Internet is easier to see and explore with accessibility features and options in Internet Explorer 10. Internet Explorer 10 lets you select text and move around a webpage with the keyboard, makes it easier to copy and paste text from a webpage, and lets you zoom in on a webpage. Enhanced keyboard access can also be found in the toolbar buttons, search box items, address bar, and tabs. Find more information on using these features at: www.microsoft.com/enable/products/ie10/.

Overview of Accessibility Features in Internet Explorer 10

Feature

Description

Zoom in on a webpage

Zoom lets you enlarge or reduce your view of a webpage. Unlike changing font size, zoom enlarges or reduces everything on the page, including text and images.

Make text larger or smaller

You can increase or decrease the font size on a webpage to make it more legible in Internet Explorer for the desktop.

Use the keyboard to surf the web

Press the Tab key to move forward, and Shift+Tab to move backward, through screen elements such as links that are text or images, text fields on website forms, hotspots on image maps, the address bar, the tabs bar, and more.

Change the font, formatting, and colors on webpages

Make webpages easier to see by changing the text, background, link, and hover colors. Internet Explorer 10 supports the system link color, so the High Contrast mode and color preferences you have chosen in Windows will work in Internet Explorer too.

Customize Internet Explorer 10
to work with a screen reader or voice recognition software

Some Internet Explorer 10 features can cause screen readers to give confusing or incorrect information, but you can customize these settings so they work more smoothly.

Select text and move around a
webpage with the keyboard

Rather than using a mouse to select text and move around within a webpage, you can use standard navigation keys on your keyboard: Home, End, Page Up, Page Down, and the arrow keys. This feature is called Caret Browsing, named after the caret, or cursor. It makes it easier to select, copy, and paste text to another document with a keyboard instead of a mouse. To turn Caret Browsing on or off, press F7 when Internet Explorer is open on your desktop, or tap or click the Tools menu, tap or click File, and then tap or click Caret Browsing.

Simplify common tasks
with Accelerators

Make tasks like copying, navigating, and pasting easier by using Accelerators to save time and keystrokes. Accelerators help you quickly perform tasks without navigating to other websites to get things done. Highlight text on any webpage, and then click the blue Accelerator icon that appears above your selection to obtain driving directions, translate and define words, email content, or search.

Learn more about Internet Explorer 10 accessibility options

Visit: www.microsoft.com/enable/products/ie10/.

 

 

Accessibility in Microsoft Office 2013

Microsoft Office 2013 makes it easier to create documents, spreadsheets, and presentations with rich content, and finding the commands you need is easier with the redesigned user interface. Find more information at www.microsoft.com/enable/products/office2013/.

Overview of Accessibility Features in Office 2013

Feature

Description

Accessibility Checker

With the click of a button in Word 2013, Excel 2013, and PowerPoint 2013, you can scan a document, spreadsheet, or presentation to identify areas that may be problematic for users with disabilities. The feature, called the Accessibility Checker, helps you create more accessible content. It highlights and explains accessibility issues so you can fix them before the content is finalized. This is a great tool for educators to use before handing out digital files to their students.

Get quick access to frequently used commands in Backstage view

When you want to do things to a whole file like print, save, or open a different file, click the File tab (Alt+F) to go to the Microsoft Office Backstage view. This large view provides more detail about available commands and how to use them. This organization reduces keystrokes and searching, and makes navigation easier.

Zoom in or out of a document, presentation, or worksheet for better visibility on screen

You can zoom in to get a close-up view of your file or zoom out to see more of the page at a reduced size. You can zoom either by dragging the slider in the zoom area of the status bar at the bottom of your document, or, on the View tab, in the Zoom group, by clicking Zoom and then entering a percentage.

Use the keyboard to work with ribbon programs

The menus and toolbars in all Office 2013 programs use the ribbon, as in Office 2010. The ribbon contains all of the commands used in the program on a series of tabs across the top of the program. To move through the ribbon with a keyboard instead of a mouse, press F10 and then press Ctrl+Right Arrow or Ctrl+Left Arrow to move to the ribbon tab you want. You can also access any command in a few keystrokes by using keyboard shortcuts: press F10 until the KeyTips appear, then press the number or letter next to the command you want.

Command your computer by voice

Speech Recognition, which comes with Windows 8, enables you to control your computer by using voice commands instead of the keyboard or mouse. To dictate text and control your computer by just saying what you see, open Control Panel, type speech in the search box, and then click Windows Speech Recognition. As soon as Speech Recognition is set up, you can start it by saying "Start listening."

Use Read Mode for a clearer view

Use the new Read Mode in Word 2013 for a beautiful, distraction-free reading experience. Read Mode hides most of the buttons and tools so you can get absorbed in your reading without distractions. Press Alt+W, and then press the F key to open Read Mode. While in Read Mode, you can also double-click a picture to get an enlarged view; click outside the image to return to reading.

Use Spelling and Grammar checker to verify your work

All Microsoft Office programs can check the spelling and grammar of your files. In Microsoft Word 2013, start the Spelling and Grammar checker by clicking Review, then clicking Spelling and Grammar.

Automatically correct spelling errors as you type

Correct typos and misspelled words as you compose by using the AutoCorrect feature in Office 2013. You can insert symbols and other pieces of text automatically as well. AutoCorrect automatically includes a list of typical misspellings and symbols, but you can modify the list to suit your needs.

Hear foreign text read aloud with
Mini Translator

For those who receive email messages or documents that contain words in different languages, Microsoft Office 2013 features a Mini Translator that lets you point to a word or selected phrase with your mouse to display a translation in a small window. The Mini Translator also includes a Play button so you can hear an audio pronunciation of the word or phrase, and a Copy button so you can paste the translation into another document.
The list of languages available in the Mini Translator depends on the language version of Office you are using. To learn more about translation language pairs, see Translate text in a different language.

Use the Speak text-to-speech feature

Text-to-speech (TTS) is the ability of your computer to play back written text as spoken words. Depending upon your configuration and installed TTS engines, you can hear most text that appears on your screen in Word 2013, Outlook 2013, PowerPoint 2013, and OneNote 2013. Just highlight the text you want to hear, then click the Speak selected text icon (or press Alt plus the access key number).

Use the keyboard to work with SmartArt graphics

A SmartArt graphic is a visual representation of information—like a diagram—that you can use to enhance your documents and presentation. You can create SmartArt graphics in Excel, Outlook, PowerPoint, and Word, and you can copy and paste SmartArt graphics as images into other Office programs.

Add alternative text descriptions to shapes, pictures, tables, and graphics

For people who cannot see shapes, pictures, tables, and other objects in your documents, you should add a description to each using alternative text, or Alt text. People who use screen readers will then hear a description of the picture or object as they scan your document. The location for adding Alt text has changed slightly: it was in the Format Object dialog box in Office 2010, but in Office 2013 it is in the Format Object task pane. After inserting a photo, for example, the Format Picture Tools menu opens. On the right side under Format Picture, select the Layout and Properties icon and click ALT TEXT to display the text boxes used to describe the picture.

Create accessible web portals

Using SharePoint 2013, you can set up websites to share information with others, manage documents from start to finish, and publish reports to help everyone make better decisions. SharePoint products include features that make the software easier for more people to use, including people who have low vision, limited dexterity, or other impairments. For example, SharePoint has keyboard shortcuts and access keys that let you do many things without a mouse. And, for people who use assistive technologies such as screen readers, SharePoint offers More Accessible Mode, a special feature that can create a different version of software elements, such as customized forms, if a screen reader can’t handle the original element.

Create accessible Office files

Learn to create more accessible Word documents: add alternative text to images and objects, organize content so that it's easy for screen readers to follow, and include captions for audio and video files. Also, learn how to create accessible Excel files by including alternative text for images and objects and by specifying table headers. In PowerPoint, you can even add closed captions for audio and video.

Create accessible PDFs

Learn to tag PDF files so that screen readers and other assistive technologies can determine a logical reading order and navigation, including for large-type displays, personal digital assistants (PDAs), and mobile phones. The Microsoft Office 2013 versions of Word, Excel, PowerPoint, and Visio all enable you to tag PDF files automatically when you save a file in PDF format.

Ideas for Educators

Create a visual bank to help students learn to count and manipulate objects on screen. By inserting an image or clip art into a Word document, then replicating it with copy and paste, students can learn to add and subtract by moving objects around on screen rather than drawing, writing, or cutting and pasting on a hard copy. This activity can help students with learning and dexterity impairments.

 

Figure 3-2. Illustration of coins on screen that can be manipulated by students to practice dexterity and counting skills

PowerPoint hyperlinks. PowerPoint slides can be constructed with hyperlinks so that when text, graphics, or action buttons are hovered over or clicked, another slide within the presentation, a website, or another document is displayed. This can be useful in allowing students to work at their own pace to study and learn the assigned material.

Figure 3-3. Screen shot of a PowerPoint slide with three shapes giving examples of what new slide would appear when a triangle, circle, or square was selected in answer to the instruction: “Look at the shapes below. Which shape is a square? Click on the square.”

Enhance and embellish your documents. Students and teachers can quickly enhance and embellish documents with customizable charts, animations, sounds, and more. These enhancements provide a visual representation of your information and ideas, such as relationships and processes. Inserting a SmartArt object in a Word 2013 document is as easy as clicking Insert and choosing pictures, shapes, SmartArt, charts, screen shots, and even audio and video: anything that supports your point. In the example below, text can be added in each shape to replace the word "text."

 

Figure 3-4. Diagram showing 6 text box shapes surrounding one in the center. Text can be inserted and sized within each of the shapes

 

Accessibility in Microsoft Office 365

Microsoft Office 365 is an online subscription service that provides email, shared calendars, the ability to create and edit documents online, instant messaging, web conferencing, a public website for your organization, and internal team sites. Learn more about Office 365 Education and Office Professional Plus.

Office 365 combines Office services including:

  • Ability to access and edit Office files online with Office Web Apps
  • Instant messaging, calls, and meetings with Lync Online
  • Team document sharing and websites with SharePoint Online
  • Email and calendaring with Exchange Online

Microsoft Office Web Apps, available in Office 365, allow you to access and edit files online in your web browser. Office Web Apps include a number of accessibility features and provide screen reader support, keyboard accessibility, and high contrast modes.

Overview of Accessibility Features in Microsoft Office 365 and Office Web Apps

Feature

Description

Support for assistive technology products

The Word Web App and PowerPoint Web App have display modes that make them accessible to screen readers. Those who use assistive technologies, such as screen readers or speech recognition software, will have the best experience in Office Web Apps if the assistive technology that you use supports WAI-ARIA (Web Accessibility Initiative-Accessible Rich Internet Applications).

Use familiar keyboard shortcuts

Keyboard shortcuts from the Office desktop applications such as Ctrl+B, Ctrl+S, and Ctrl+C all work as they do in Office on your desktop. You can also press the Tab key and Shift+Tab to move back and forth between elements on any page.

Read and edit Office files in your browser

Office Web Apps are web versions of Word, Excel, PowerPoint, and OneNote that let you read and edit documents right in your browser, and easily share those documents with others.

Use accessibility features in your web browser to improve accessibility of Office Web Apps

Office Web Apps run in a web browser so you can use your web browser’s accessibility features to improve the readability and accessibility of Office Web Apps.

Work on and share Office documents with SkyDrive

Office Web Apps and SkyDrive let you access files from anywhere and share files with others. Office Web Apps are available for personal use in SkyDrive, for organizations that have installed and configured Office Web Apps on their SharePoint website, and for professionals and organizations that subscribe to select Office 365 services.

 


Change how your screen appears

Most Office 365 components are viewed in a web browser so the accessibility features in Windows, Internet Explorer, and other browsers are utilized when you are using Office Web Apps. Learn how to make the computer easier to use to improve your Office 365 experience.

Use accessibility features of your browser to improve accessibility of Office Web Apps

Use accessibility features in Internet Explorer 10 to zoom in on a webpage and change the color and fonts used on webpages. If you’re using a different browser, look for information in that browser’s Help about how to customize your display to the size, fonts, and colors you prefer.

 

 

Accessibility in Microsoft Lync 2013

Microsoft Lync is an enterprise-ready unified communications platform. Lync connects people everywhere, on Windows 8 and other devices, as part of their everyday productivity experience. Lync provides a consistent, single client experience for presence, instant messaging, voice, video, and a great meeting experience. Lync 2013 users can connect to anyone on Skype, enabling rich communication with hundreds of millions of people around the world. Lync provides many accessibility features, including keyboard navigation, high contrast, keyboard shortcuts, sharing notifications, and screen reader support. Learn more about accessibility in Microsoft Lync 2013, and read this TechNet blog describing what's new in Lync 2013.

Overview of Accessibility Features in Microsoft Lync 2013

Feature

Description

Hear incoming messages read aloud

Incoming instant messages and “toast” notifications can be read aloud by screen readers. You’re also notified if your screen is being shared, and will be told the keyboard combination to access the sharing toolbar.

Expanded keyboard support

Lync now offers more than 100 keyboard shortcuts for important functions, giving you direct access without a mouse. For example, you can now press Windows logo key+A to accept a call, or Windows logo key+Esc to decline an invite notification. You can also use your keyboard to end a call (Alt+Q), start OneNote (Ctrl+N), and open the Tools menu (Alt+T).


Lync includes several frequently used keyboard shortcuts that make it easier to navigate and move between active windows. For example, press Ctrl+1 to go to the Contact List tab in the main window, or press Ctrl+F to send a file from a conversation window.

High Contrast support

Microsoft Lync provides support for the high contrast color schemes you select in Windows. For more information, see the High Contrast section of the Ease of Access page (http://windows.microsoft.com/en-US/windows-8/make-pc-easier-use/).

Support for text and graphics scaling

Lync provides high-DPI support, enabling you to scale text and graphics at 125% and 150% dots per inch. A Full Screen icon lets you expand your Lync conversation window to fill the screen for better readability.

Enhanced screen reader support

Extensive screen reader support in Lync 2013 ensures that all notifications, incoming requests, and instant messages are read aloud so you’re always kept in the loop.

Magnification support

To view a portion of the Lync window larger, use magnification tools like Windows Magnifier (http://windows.microsoft.com/en-US/windows-8/use-magnifier-to-see-items/).

TTY support

Lync supports TTY (teletypewriter) communication. Once TTY mode is turned on (Lync > Options > Phone), Lync can be used with a peripheral TTY device to communicate with a TTY-enabled PSTN (public switched telephone network) endpoint.

Ideas for Educators

 

Microsoft Lync Aids Distance Learning
Microsoft Lync is helping students at the Washington State School for the Blind learn algebra and software programming remotely.

Read the article: School for the Blind Bridges Distances with Microsoft Lync
(www.microsoft.com/en-us/news/features/2011/dec11/12-08Lync.aspx)

View the video: Distance Math Classes for the Blind and Visually Impaired on the Partners in Learning Website
(www.pil-network.com/Resources/LearningActivities/Details/81BB7C33-1C4B-4EDF-9A5B-4C807CB39C07)

Read the article: School for Blind Leads the Way in Distance Learning
(http://thejournal.com/Articles/2012/08/15/School-for-Blind-Leads-the-Way-in-Distance-Learning.aspx)

Read the article: Educators Win Awards for Cutting-Edge Use of Technology at Partners in Learning Global Forum 2012 (www.microsoft.com/en-us/news/press/2012/dec12/12-03GlobalForumPR.aspx)

Microsoft Lync Aids Communication for Australian Organization
Learn about the deployment of Microsoft Lync for the Victorian Deaf Society. Incorporating Lync into their operations allowed for easier communication in Auslan (Australian sign language) all across Australia.

View the video:
VicDeaf Microsoft Lync Case Study (www.youtube.com/watch?v=XNnYIlF83bc)

 

 

Kinect in the Classroom: Engaging Students in New Ways

Teachers work hard to make their classroom a place where kids are actively involved in learning, instead of watching the clock and waiting for the bell to ring. They know that engagement is the key to unlocking the magic that lies within each student.

With either an Xbox 360 console and a Kinect for Xbox 360 sensor, or a computer and a Kinect for Windows sensor, educators are enhancing traditional lesson plans and after-school programs with attention-grabbing, body-moving experiences that help students get engaged and stay on task—while keeping instruction fun and rewarding for everyone. With either Kinect for Xbox 360, or Kinect for Windows, educators can:

  • Create an interactive learning environment that connects students with subjects in exciting new ways
  • Transform lesson plans into powerful, memorable experiences
  • Break through learning barriers with fun, energetic, and easy-to-follow classroom activities
  • Promote physical activity using the entire body as part of the learning process
  • Promote an inclusive learning environment where students with impairments and disabilities can fully and enjoyably participate

Overview of Education Opportunities Using Kinect for Xbox 360 in the Classroom

All that’s needed is an Xbox 360 console, a Kinect for Xbox 360 sensor, and a game that’s been developed for this platform.

Feature

Description

Kinect for Xbox 360 lets students take center stage

Of all the challenges teachers face, motivating students to learn—truly capturing their attention and interest—ranks at the top of the list. With Kinect for Xbox 360, which applies full body engagement to standards-based content, teachers can put students in the center of the learning experience to make concepts come alive.

 


Kinect for Xbox 360 can activate learning in the classroom and beyond

Kinect for Xbox 360 is a highly versatile and valuable learning tool with numerous applications. Teachers and program coordinators can tap a fast-growing portfolio of educational and entertainment titles that span academic disciplines, sports, and adventure scenarios to energize classroom and after-school activities. Students of varying abilities are enthusiastic participants—learning while having fun.

Avatar Kinect

Educators can also take advantage of Avatar Kinect (www.xbox.com/en-US/live/avatars/) to pursue unique opportunities for intra-school competitions, distance learning, and collaboration with colleagues, students, and parents. Students who are unable to be present in the classroom because of a permanent or temporary disability can participate in this way. And, because Kinect works with existing audio-visual equipment, such as televisions, projectors, and Smart Board systems, setup is fast and easy.

   

Overview of Education Opportunities Using Kinect for Windows in the Classroom

Kinect for Windows gives educational organizations the ability to develop and deploy classroom-based solutions that are designed for specific developmental needs with specific learning goals in mind. Several companies are building rich, customized applications that target students and educators. All that’s needed is a computer, a Kinect for Windows sensor, and a Kinect for Windows application that’s been created especially for students.

Feature

Description

With Kinect for Windows, solutions are being developed explicitly for classroom learning

Kinect for Windows puts the power of creation in the hands of Windows developers—giving education companies the freedom to create content that teaches traditional standards-based course work in new, immersive ways. This lets educators enhance their traditional learning with full-body experiences. Kinect for Windows educational solutions are built by educators for education.

Kinect for Windows is versatile, mobile, and affordable

All a school needs to take advantage of Kinect for Windows learning applications is a computer, a monitor, and a Kinect for Windows sensor—no extra equipment is necessary. Additionally, this equipment can be easily transported from classroom to classroom. This makes it affordable for many classrooms to benefit from this technology throughout the school day.

Kinect for Windows puts people first

Kinect for Windows gives computers eyes, ears, and the capacity to use them. With Kinect for Windows, students of all ages can naturally communicate with a computer by simply moving and speaking. Students can use their whole bodies to engage and learn—making concepts come alive and putting students at the center of the educational experience.

Ideas for Educators

 

Blended Learning with Kinect. See how teachers are using Kinect for Xbox 360 in their classrooms to engage students in learning through the use of a tool kids understand and welcome. From reading to math to physical education, teachers make lessons come alive through active participation. Special education teachers find that students with social communication issues, such as autism and emotional disabilities, respond positively to the Avatar Kinect environment.
View the video:
www.youtube.com/watch?v=QRnOycG2WuI

The Down Syndrome Corporation adopted Microsoft Kinect for Xbox 360, which offers students the capability to interact with educational gaming content in a natural way—using body gestures and voice commands. Now, students with Down syndrome are developing math and reading skills, as well as hand-eye coordination, by using Kinect learning activities.
Read the case study:
www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=710000000282

View the video:
http://mediadl.microsoft.com/mediadl/www/c/casestudies/Files/710000000282/Down_Syndrome_Corporation_128kbps.wmv

Lagoa Secondary School in Portugal is enriching classroom instruction while making learning activities more accessible for students with disabilities. Incorporating Kinect for Xbox 360 into its curriculum gives teachers exciting new ways to encourage learning, promote class cohesion, and empower students of all abilities to strengthen social skills and boost subject-matter proficiency—all while students learn and have fun.
Read the case study:

www.microsoft.com/casestudies/Xbox-360-Kinect-Sensor/Lagoa-Secondary-School/Students-with-Disabilities-Use-Innovative-Gaming-System-to-Interact-with-Curriculum/710000000393

View the video:
http://mediadl.microsoft.com/mediadl/www/c/casestudies/Files/710000000393/Lagoa_Kinect_CS.wmv

Alex’s Place: Unique Cancer Treatment Center Uses Kinect for Windows to Help Put Kids at Ease


View the video:
http://www.youtube.com/watch?v=_kXTlqPulb0&feature=player_embedded

 

Do more with Kinect: Kinect for Windows is a new vehicle to engage kids of all ages in learning new concepts and skills. This platform gives organizations the ability to develop and deploy classroom-based solutions that are designed for specific users with specific educational goals in mind—everything from early childhood education to adult training and simulation.

Several companies are building Kinect for Windows applications that target students and educators, which give people the power to communicate with computers simply by gesturing and speaking naturally.

 

See more examples of Kinect for Windows:

http://www.microsoft.com/en-us/kinectforwindows/discover/gallery.aspx

 

 

Chapter 4:
Selecting Accessible Technology

This chapter provides guidance on identifying accessibility solutions, including a sample needs assessment tool, an assistive technology starter guide, and a list of accessibility consultants and other resources available to educators.

Improving the learning experience can mean different things to different individuals: having a multisensory experience of audio paired with a visual representation may benefit one student, while reducing visual and auditory distractions may be better for another. There are hundreds of types of accessibility solutions available, both built-in operating system and program features and assistive technology hardware and software products, so it is important to take the time to identify the right mix of accessibility solutions for each student.

Identifying the best assistive technology solution often requires an in-depth needs assessment to understand how a difficulty or impairment affects computer use. You can evaluate a student's needs through an assessment tool or arrange a consultation with an assistive technology (AT) expert before deciding which product or products to purchase. Ideas for both approaches follow.

Accessibility Consultants

Many schools and districts have accessibility and special education staff for student assessment. If your school doesn’t have such resources available, some resources that may be helpful to you are listed here.

Assistive technology centers and occupational therapists often have accessibility consultants to help individuals identify the right mix of accessibility features and products. Some centers offer computer training and many organizations have lending libraries, so you can try a product before committing to purchase it.

In the United States

  • Microsoft Accessibility Resource Centers (www.microsoft.com/enable/centers/marc.aspx) are available in the United States. These centers provide expert consultation on assistive technology and accessibility features built into Microsoft products.
  • The Alliance for Technology Access (www.ataccess.org) and the Assistive Technology Act Programs (www.ataporg.org/) are other U.S. national networks dedicated to providing information and technology support services to children and adults with disabilities.
  • The Rehabilitation Engineering and Assistive Technology Society of North America, known as RESNA, (www.resna.org) offers certification programs for assistive technology practitioners. RESNA is another source for identifying AT experts who can assist schools in North America.
  • The Assistive Technology Industry Association (www.atia.org) provides online training and web seminars for learning specific types of assistive technology products.
  • HP’s Guide to Selecting Assistive Technology (www.hp.com/hpinfo/abouthp/accessibility/atproduct.html)

 

  • Dell, in collaboration with Intel, provides access to, integration with, and support for assistive solutions.
    • Access: The Dell Assistive Technology Configuration Tool, (www.dell-at.com) developed by Dell and Electronic Vision Access Solutions (EVAS), helps you select best-in-class software and hardware aligned to the needs of your students.
    • Integration: The team will install, configure, and test assistive technology hardware, software, and peripherals.
    • Support: Dell’s AT support staff and system engineers are available weekdays, 8 a.m.–8 p.m. ET via a dedicated toll-free service and have access to all assistive technology device specifications.
    • With its diverse, select group of assistive technology partners (www.dell.com/spredir.ashx/k12/k12-ats-partner), Dell offers a single source for your assistive technology needs.

In Asia

In Latin America

In Europe

  • AbilityNet in the UK (www.abilitynet.org.uk/) ensures people with disabilities in the UK, whether as individuals or through supporting organizations, have accessible IT that enables and improves their lives. AbilityNet is the leading UK charity for computing and disability, and has a network of centers. A range of free resources are available from their website. AbilityNet also offers advice on web and software accessibility including user testing.
  • ONCE, the Spanish National Organization for the Blind (www.once.es), and its foundation, the ONCE Foundation for Cooperation and the Social Integration of People with Disabilities in Spain, provide work-related training and employment for people with disabilities and promote universal accessibility through the creation of universally accessible environments, products, and services.
  • Enable Ireland (www.enableireland.ie/) works in partnership with those who use its services to achieve maximum independence, choice, and inclusion in their communities. Enable Ireland runs a national assistive technology training service specializing in electronic assistive technology, providing advice and training on AT products to Enable Ireland service users and staff.
  • Charta 77/PCs without Barriers in the Czech Republic (en.kontobariery.cz/) provides technology knowledge and support to those living with disabilities through 16 PC centers across the Czech Republic.
  • The Organization of People with Disabilities and Their Friends APEIRONS in Latvia (www.apeirons.lv/) has a goal to integrate people with disabilities into society as well as creating more accepting attitudes towards them from the general public. The organization operates a lab in downtown Riga where people can take classes, learn about accessible technologies, and test various options.
  • The eCentrum project in Poland (www.idn.org.pl/) specializes in the use of modern technologies in education and mobilization for those living with disabilities, creating e-learning and blended learning educational programs as well as desktop training at several locations throughout Poland.

Assistive Technology Decision Tree

The following Assistive Technology Decision Tree, by Unum (http://www.unum.com/), helps you select assistive technology by leading you through a process: first select the impairment type, then the level of functionality, and finally review the suggested technology. The tables have been adapted from the original flowchart with permission from Unum. Download the original Assistive Technology Decision Tree flowchart by Unum.

Table 4-1. Assistive Technology Decision Tree – Range of Motion

Good range of motion – grasp and point:
  • Alternative pointing devices
  • Ergonomic keyboard
  • Movable numeric keypad
  • Electric office equipment
  • Touchscreen

Point, but no grasp:
  • Alternative pointing devices
  • Over/undersized keyboard
  • Word prediction software
  • Voice integrated software
  • Wireless headset
  • Large button phone
  • Touchscreen

No dexterity:
  • Foot mouse
  • Over/undersized keyboard
  • Movable numeric keypad
  • Electric office equipment
  • Macro writing software
  • Voice integrated software
  • Wireless headset
  • Telephone foot switch
  • Hands-free telephone

Moderate range of motion – grasp and point:
  • Auto-adjust workstation
  • Over/undersized keyboard
  • Movable numeric keypad
  • Electric office equipment
  • Arm/wrist supports
  • Articulated keyboard/mouse tray

Point but no grasp:
  • Multiple mice
  • Over/undersized keyboard
  • Word prediction software
  • Voice integrated software
  • Arm/elbow supports
  • Articulated keyboard/mouse tray
  • Large button telephone

Minimal range of motion or no dexterity:
  • Voice integrated software
  • Hands-free telephone
  • Wireless headset
  • Over/undersized keyboard
  • Foot mouse

 

Table 4-2. Assistive Technology Decision Tree – Quadriplegia

Some upper extremity range of motion – grasp and point:
  • Alternative pointing devices
  • Ergonomic keyboard
  • Movable numeric keypad
  • Electric office equipment
  • Touchscreen

Point but no grasp:
  • Alternative pointing devices
  • Over/undersized keyboard
  • Word prediction software
  • Voice integrated software
  • Wireless headset
  • Large button phone
  • Touchscreen
  • Tape recorder – notes
  • Scanner & software
  • OCR page reader
  • Tape recorder – phone

Some lower extremity range of motion (ROM):
  • Foot mouse
  • Phone foot switch
  • Macro writing software
  • Voice integrated software
  • Scanner & software
  • OCR page reader
  • Wireless headset
  • Hands-free telephone
  • Tape recorder – notes
  • Tape recorder – phone

No upper or lower ROM:
  • Voice integrated software
  • Wireless headset
  • Hands-free telephone
  • Over/undersized keyboard
  • Scanner & software
  • OCR page reader
  • Tape recorder – phone
  • Tape recorder – notes
  • Alternative pointing devices

 

Table 4-3. Assistive Technology Decision Tree – Back Impairment

Static position preferred:
  • Foot mouse
  • Ergonomic keyboard
  • Movable numeric keypad
  • Electric office equipment
  • Articulated keyboard/mouse tray

Static position to be avoided:
  • Auto-adjust workstation
  • Ergonomic keyboard
  • Movable numeric keypad
  • Electric office equipment
  • Articulated keyboard/mouse tray
  • Alternative pointing devices

 

 

Table 4-4. Assistive Technology Decision Tree – Visual Impairment

Can see clearly:
  • High-resolution monitor
  • Glare guard

Can see monitor up close:
  • High-resolution monitor
  • Oversized monitor
  • Glare guard
  • Talking calculator
  • Telephone LED reader
  • Closed Circuit TV

Can see with enlarged type:
  • High-resolution monitor
  • Oversized monitor
  • Screen magnifier
  • Glare guard
  • Talking calculator
  • Telephone LED reader
  • Closed Circuit TV
  • Oversized keyboard
  • Large button phone

Uses other senses:
  • Screen reader
  • Braille display
  • Telephone LED reader
  • Talking calculator
  • OCR system
  • Tape recorder – telephone
  • Tape recorder – notes
  • Personal reader

 

Table 4-5. Assistive Technology Decision Tree – Auditory Impairment

Some audition:
  • Amplified telephone
  • Computer-aided note taking

Little or no audition – deaf:
  • TTY/TDD
  • Light ringer
  • Kinetic beeper
  • Real time captioning
  • Computer-aided note taking
  • Personal translator

 

 

Table 4-6. Assistive Technology Decision Tree – Speech Impairment

Some speech:
  • Assisted communication device
  • Voice synthesizer – computer
  • Voice synthesizer – larynx

Little or no speech:
  • TTY/TDD
  • Assisted communication device
  • Voice synthesizer – computer
  • Voice synthesizer – larynx

 

Table 4-7. Assistive Technology Decision Tree – Psych Impairment

Inability to focus/maintain concentration:
  • Oversized monitor
  • Task organization software
  • Graphical idea tree
  • Palm Pilot
  • White noise generator

Cognitive impairment:
  • Alternate pointing devices
  • Word prediction software
  • Tape recorder – phone
  • White noise generator
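Conceptually, the decision-tree tables above reduce to a two-level lookup: impairment type first, then level of functionality. As a minimal sketch in Python, populated with just two rows from Table 4-1 for illustration:

```python
# Sketch of the decision tree as a two-level lookup:
# impairment type -> level of functionality -> suggested technology.
# Only two rows from Table 4-1 are included here as illustration.
DECISION_TREE = {
    "Range of Motion": {
        "Good range of motion - grasp and point": [
            "Alternative pointing devices",
            "Ergonomic keyboard",
            "Movable numeric keypad",
            "Electric office equipment",
            "Touchscreen",
        ],
        "No dexterity": [
            "Foot mouse",
            "Over/undersized keyboard",
            "Movable numeric keypad",
            "Electric office equipment",
            "Macro writing software",
            "Voice integrated software",
            "Wireless headset",
            "Telephone foot switch",
            "Hands-free telephone",
        ],
    },
}

def suggest(impairment: str, level: str) -> list:
    """Return the suggested technologies for an impairment/level pair."""
    return DECISION_TREE.get(impairment, {}).get(level, [])
```

Extending the dictionary with the remaining rows of Tables 4-1 through 4-7 turns the flowchart into a small searchable tool.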

 

Assistive Technology Product Starter Guide

The following tables provide lists of assistive technology hardware and software products by category, with specific examples of each product type. The tables are by no means exhaustive, nor are they an endorsement of these products; they are provided as a sampling of what is available today. These types of assistive technology products are also referenced in the Assistive Technology Decision Tree above.

Purchasing Assistive Technology

The Enablemart website (www.enablemart.com/) is one source where you can purchase assistive technology products for schools. Another is Boundless Assistive Technology (www.boundlessat.com/). Education and government customers may be eligible for volume pricing. See also Assistive Technology Products for Windows (www.microsoft.com/enable/at/matvplist.aspx) for links to assistive technology manufacturers.

 

Table 4-8. Assistive Technology – Hardware Products

  • Amplified phone: Clarity JV35
  • Augmentative communication device: Accent 700SB
  • Augmentative communication app for Windows 8: Mozzaz TalkingTILES
  • Alternative input: HeadMouse Extreme or Tracker Pro
  • Alternate keyboard: ZoomText Large-Print Keyboard and Orbitouch Keyless Keyboard
  • Alternative mouse or pointing device: BigTrack
  • Braille display: Eurobraille Esys 12 Braille Display
  • Braille printer: Emprint™ SpotDot
  • DAISY reader: Victor Reader Stream
  • Ergonomic keyboard: Microsoft Natural Ergonomic Keyboard 4000
  • Foot mouse: Footime™ Foot Mouse
  • Hands-free telephone: QualiPHONE*
  • High-resolution monitor: HP w1858 18.5″ Diagonal Monitor
  • Large-button phone: Clarity JV35
  • Listening aid: Motiva™ Personal FM System
  • Monitor glare guard: Fellowes Anti-glare Screen
  • Movable numeric keypad: Maxim Low-Force Keypad (USB)
  • Notetaker: Livescribe Pulse Smartpen and GW Sense Navigation
  • One-handed keyboard: Maltron One-Handed Keyboard
  • Optical character recognition system: Scan and Read Pro**
  • Oversized or undersized keyboard: BigKeys LX and WinMini
  • Oversized monitor: HP w2338h 23″ Diagonal
  • Scanner: Scan N Talk Ultra
  • Screen magnifier: Merlin Desktop LCD CCTV
  • Switch: Big Red Twist Switch
  • Talking calculator: Sci-Plus 300 Large Display Talking Calculator
  • Touchscreen: Dell S2340T 23″ Multi-Touch Monitor
  • TTY: Compact/C
  • Voice synthesizer/speech to text: TextSpeak TS Wireless AAC Speech Generator and iCommunicator***

* Requires PC to phone line connection    ** Requires flatbed scanner (e.g. Scan N Talk Ultra)
*** Requires installation on a PC

 

Table 4-9. Assistive Technology – Software Products

  • Braille translator: Duxbury Braille Translator
  • Communication aid: Overboard
  • Graphical idea trees: Read & Write GOLD
  • Macro writing software: ClaroRead Standard
  • On-screen keyboard: ScreenDoors and SofType
  • Reading aid: gh Player 2.2
  • Scanner software: Scan N Talk Ultra
  • Screen magnifier: ZoomText v9.1
  • Screen reader: Window-Eyes v8.1
  • Speech recognition/voice dictation: Dragon Naturally Speaking Preferred
  • Talking calculator: Sci-Plus 300 Large Display Talking Calculator*
  • Task organizer software: MyLifeOrganized
  • Voice synthesizer/speech to text: TextSpeak TS Wireless AAC Speech Generator and iCommunicator
  • Word prediction software: Read & Write GOLD and ClaroRead Standard

* Includes a large-display calculator

 

 

Resources

Resources from Microsoft

Microsoft’s mission is to enable people and businesses throughout the world to realize their full potential. Computer technology is an important and powerful tool that enables and empowers individuals of all abilities. At Microsoft, we strive to develop technology that is accessible and usable by everyone, including individuals who experience the world in different ways because of impairments or disabilities.

For two decades, we have been exploring and evolving accessibility solutions that are integrated with our products. Microsoft’s accessibility work is a part of Microsoft Trustworthy Computing (www.microsoft.com/mscorp/twc/) business practices, which focus on integrity and responsibility.

Microsoft Accessibility Website
www.microsoft.com/enable/

Microsoft Education Web Resources

 

Additional Resources and Annual Conferences

Teaching Children with Disabilities in Inclusive Settings
www2.unescobkk.org/elib/publications/243_244/

This toolkit published by UNESCO provides activities for embracing diversity in the classroom.

Annual Conferences About Accessible Technology
The following organizations host annual accessible technology conferences.

 

Glossary of Terms

Accessible technology—software and hardware that is flexible and adjustable to a person’s visual, mobility and dexterity, hearing, language, and learning needs, and can therefore be accessed by persons regardless of their abilities. Accessibility encompasses three elements: built-in accessibility features, assistive technology products that provide access to computers for people with specific disabilities, and compatibility between the operating system, software, and assistive technology products.

App—abbreviated form of “application,” a type of software program that typically interacts with the end user, such as a calendar program, game, or a chat program. Apps differ from other software programs such as device drivers, which are mostly invisible to the user.

Assistive technology—hardware and software programs and devices (such as screen readers and voice recognition products) that are chosen specifically to accommodate an individual’s impairment or disability. They are “added onto,” or used with, a computer’s operating system such as Windows 8.

Disability—used in this guide to refer to a physical, cognitive, mental, sensory, emotional, or developmental condition (or some combination) that makes it more difficult for a person to see, hear, or use a computer. (See also “impairment.”)

High Contrast color scheme—an accessibility option that heightens the color contrast of some text and images on a computer screen. Particular color schemes make items more distinct and easier to see by certain individuals with vision impairments, and can help reduce eye strain for some computer users.

Impairment—a physiological, psychological, or environmental condition, either permanent or temporary, which makes it more difficult for a person to see, hear, or use a computer. (See also “disability”)

Inclusive learning—designing the learning environment so that the individual needs of students are met so they can effectively participate with their peers.

Magnifier—an accessibility feature included in Windows 8 and earlier versions of Windows. It makes the computer screen more readable by people who have low vision by enlarging a portion or all of the screen. The Magnified window can be positioned and otherwise adjusted to meet individual needs.

Microsoft Accessibility—design features and options in Microsoft products that enable all persons to effectively use computers. It also encompasses the teams throughout Microsoft that focus on the accessibility needs of individuals of all ages and abilities and on making Microsoft products easier to use for all. Also refers to the Microsoft Accessibility website (www.microsoft.com/enable/).

Microsoft accessibility features—the features and options included in Microsoft Windows and products such as Microsoft Office, Internet Explorer, and Lync that enable all people to use them effectively and comfortably. Computer technology that enables individuals to adjust a computer to meet their vision, hearing, dexterity and mobility, learning, and language needs.

Microsoft Kinect for Xbox 360—Kinect for Xbox 360 is a motion sensing input device by Microsoft for the Xbox 360 video game console. It enables users to control and interact with the Xbox 360 without the need to touch a game controller, through a natural user interface using gestures and spoken commands.

Microsoft Kinect for Windows—Kinect for Windows is a motion sensing input device by Microsoft. It enables users to communicate naturally with computers by simply gesturing and speaking, making it possible to interact with computers without the need to touch the device.

Microsoft Lync 2013—Microsoft Lync 2013 is an enterprise-ready unified communications platform. Lync connects people everywhere, on Windows 8 and other devices, as part of their everyday productivity experience. Lync provides a consistent, single client experience for presence, instant messaging, voice, video and a great meeting experience.

Microsoft Office—Microsoft Office is a collection of applications (programs) used at home, school, and in businesses to produce documents, spreadsheets, and presentations; to communicate through email; to collaborate with others; to publish websites and print materials, and more. Office includes these and other programs: Word, Excel, PowerPoint, Outlook, Access, Lync, SharePoint Server, Office 365, Project, OneNote, Publisher, and Visio.

Microsoft Office 365—Microsoft Office 365 is an online subscription service that provides email, shared calendars, the ability to create and edit documents online, instant messaging, web conferencing, a public website for your organization, and internal team sites.

Microsoft Windows—the latest operating system from Microsoft. It comes in several versions:

  • Windows 8—the desktop that you’re used to—with its taskbar, folders, and icons—is still here and better than ever, with a new taskbar and streamlined file management. Windows 8 starts up faster, switches between apps faster, and uses power more efficiently than Windows 7.
  • Windows RT—a version of Windows that runs on some tablets and PCs. Windows RT comes with Microsoft Office Home & Student 2013 RT. This version of Office is optimized for touchscreens and automatically updates so you always have the latest version. Note: You can’t install Windows RT on your current PC. You can only get it by buying a Windows RT PC.
  • Microsoft Surface—a touchscreen tablet made by Microsoft available in these versions:
    • Surface Pro—a powerful PC in tablet form compatible with the broadest range of peripherals and software. It includes a full USB 3.0 port. It can run the full Office suite and desktop apps, such as Quicken and Adobe Photoshop, and can be connected to some assistive technology.
    • Surface RT—loaded with Office Home & Student 2013 RT, it includes versions of Word, PowerPoint, Excel, and OneNote optimized for touch. Note: It is not compatible with third-party assistive technology software.

Narrator—a text-to-speech accessibility feature in Windows 8 and earlier versions of Windows designed for people who are blind or have low vision. Narrator reads the text displayed on the screen, the contents of the active window, menu options, or text that has been typed.

On-Screen Keyboard—an accessibility feature included in Windows 8 and earlier versions of Windows. It displays a visual keyboard on screen. Letters can be typed by selecting keys using a mouse or another pointing device—rather than a physical keyboard. On-Screen Keyboard can be resized, moved around the computer desktop, and otherwise customized to meet individual needs. It includes text prediction in eight languages.

Operating system—software that manages computer hardware resources and provides common services for the computer programs that operate on that system (e.g. Microsoft Windows 8).

Partners in Learning Network—a global community of educators dedicated to improving student learning worldwide.

Personalized learning—the personalized design of curriculum, teaching tools, accessible and assistive technology to help individual students achieve their maximum potential.

TTY (Telephone typewriter)—a series of devices developed to allow hearing-impaired individuals to use PSTN (public switched telephone network) networks to exchange text information. These devices, called TTY (telephone typewriter) or TDD (telecommunications device for the deaf) devices, operate by converting user-entered text to a series of tones that are passed over the PSTN and interpreted and displayed as text by the receiver. TTY devices work either by attaching to the handset and generating/receiving tones acoustically, or by replacing the handset. In a VoIP (voice over Internet Protocol) solution, these TTY tones are carried as an in-band audio payload type, unlike DTMF (dual-tone multi-frequency), which is usually sent in a separate payload type.

Links

The following list of hyperlinks used within this document is current at the time of publication. If subsequently changed, you may be able to find the updated link by using the title phrase below in your Internet search.

Windows 8

 

Office 2013

Office 365

Lync 2013

Internet Explorer 10


Improving the Reach and Manageability of Microsoft® Access 2010 Database Applications with Microsoft® Access Services


January, 2010

 

 

Contents

Introduction

Empowering End Users with Access

Benefits

Manageability Challenges

Meeting Manageability Challenges by Centralizing Storage

Managing Split Access Applications

Moving Data to SQL Server

Using Terminal Services to Deploy Access Applications

Increasing Manageability with SharePoint

Publishing Access 2010 Databases to Access Services

Web Access, Better Performance and Enhanced Manageability

Web Databases, Web Objects, and Client Objects

Deploying Access databases in SharePoint

Storing databases into SharePoint Document Libraries

Publishing an Access Services application

Publishing Client only Applications

Hosted SharePoint Options

Migrating Legacy Data to Web Tables

Using the Web Compatibility Checker

Handling Compatibility Issues

Creating New Compatible Tables and Importing Legacy Data

Synchronizing data between web tables and external sources

Migrating Legacy Application Objects

Handling Publishing, Compilation, and Runtime Errors

Publishing Issues

Compilation Issues

Runtime Issues

Upgrading Databases to Access 2010

64-bit VBA Issues

Summary

Appendix

Access 2010 Features by Object Type

Tables

Forms

Reports

Queries

Macros

Expressions

 

Introduction

Microsoft Access empowers end users in business units to develop applications quickly and at low cost. This agility is valued by large organizations around the world. However, some of these organizations struggle with managing the myriad Access databases in use. With Microsoft Access 2010 and Microsoft SharePoint 2010 Products working together, the best of both worlds is possible: you can satisfy the need for agile development from your business units and still rest assured that the data is secured, backed up and managed appropriately.

Access Services is a new feature of Microsoft® SharePoint Server 2010 Enterprise Edition that supports close integration with Microsoft® Access 2010. This integration enables users to extend the ease of Access application development to the creation of Web forms and reports, and it enables IT managers to extend the ease of SharePoint 2010 Products administration to the management of Access data, application objects, and application behavior. This paper explains the benefits and architecture of this new level of integration, and it provides technical details that will be helpful in implementing successful migration of existing Access applications to this new architecture.

 

Empowering End Users with Access

Access has long been one of the most popular desktop tools for building database applications, because it empowers end users to accomplish tasks that would otherwise require the services of IT professionals.

The easy-to-use graphical interfaces and wizards in Access enable users to create related tables for data storage, as well as links to external data, and to build forms and reports for entering, manipulating, and analyzing data. Unlike Microsoft Excel, Access is built on a relational engine and is therefore inherently optimized for data validation, referential integrity, and quick, versatile queries.

Benefits

The ever-evolving data management needs of an organization cannot all be met by trained programmers and database administrators, whose time is costly and limited. End users frequently can’t wait for these resources to become available or can’t justify the expense. So they turn to alternatives, from manual notes to spreadsheets, often limiting their productivity and effectiveness. When users discover Access and invest just a short time learning to use it, they are usually delighted to see how much they can accomplish.

Access empowers users to gather information from many disparate sources, including spreadsheets, HTML tables, SharePoint lists, structured text files, XML, Web services, or any ODBC data source such as Microsoft SQL Server, Oracle, or MySQL. Depending on the source and the user’s permissions, this data can even be updated from Access, and can be combined in distributed queries.

As project requirements and user skills evolve, Access applications can become increasingly complex and full-featured. Users go online and easily find a rich ecosystem of help and other resources available, including tips and samples that have accumulated since Access was first released in 1992.

By satisfying user needs without burdening IT resources, Access often plays a valued role in meeting an organization’s data management needs.

Manageability Challenges

The vast majority of Access applications never come to the attention of the IT department. However, a percentage of Access databases do create problems that draw the attention of IT managers. Access data and application files may be lost or corrupted. Long-running queries can burden network or server resources. Sensitive data can inadvertently be left unprotected. Performance can degrade as more data and users are added. And departments can come to depend on applications developed by people who are no longer available or whose skills are inadequate to meet current requirements.

IT managers sometimes take the position that Access should be prohibited to avoid these issues. However, users either defy the ban or revert to alternatives such as spreadsheets that are no more manageable or secure. The needs that drive Access adoption don’t go away and can’t all be met by IT-mediated application development.

Most IT departments eventually conclude that the best approach is to provide guidance and management that encourages users to leverage the capabilities of Access safely and productively. This management and guidance can take several forms, including distribution of templates and samples that encourage best practices.

In addition, centralizing storage of Access data and application files can help improve reliability, security, and manageability, without unduly inconveniencing users. Centralizing data storage can also enable Access applications to scale to serve many users. The remainder of this article discusses several options for centralizing storage of Access data and applications, with an emphasis on the new capabilities provided by integrating Access 2010 and SharePoint Server 2010.

Meeting Manageability Challenges by Centralizing Storage

As summarized below, a range of options has emerged over the years for centralizing the storage of Access data and application objects, including storage of files on managed network shares or on terminal servers, migration of data to database servers such as SQL Server, migration of data to SharePoint lists, and storage of applications in SharePoint document libraries. Access Services introduces a new set of options that not only extend Access applications to the Web, but also significantly enhance manageability.

Managing Split Access Applications

Access is distinctive in its ability to combine a query processor and storage engine with an application development environment and an application runtime framework. A single Access database file can contain both data tables and application objects stored in system tables. However, Access users commonly split the storage of application objects, such as forms and reports, from the storage of user data.

Especially in applications that are shared by multiple users, the most manageable pattern is to place only the user data tables in one Access database, usually called the back end, on a file share. All the other objects, including forms, reports, VBA code modules, Access macros, and saved queries, are stored in a separate database file, which also contains links to the data tables. Multiple copies of this front-end file are distributed to users for storage on their local drives. This allows front-end objects to be updated without disturbing the data, and local storage of these objects provides better performance than sharing a single remote copy of the front end.

Microsoft has encouraged this pattern of deployment by providing a wizard that automatically creates a second database file, exports all local tables to the new database, and creates links to the exported tables. Any application object that had worked with the local tables then automatically works with the new links to the exported tables.

However, the manageability of these database files is limited. Users must have full permissions to create and delete files on the data share, and database files can proliferate uncontrollably. It is not uncommon for enterprises to discover that tens of thousands of Access database files are scattered across their networks.

In addition, collaborative design work is difficult for users to manage. When multiple users have their own copies of a front-end application file, it is difficult for them to receive a new version without losing any customizations they may have made.

Moving Data to SQL Server

Once data tables have been separated from application objects, it requires little work to migrate those tables to a SQL Server database and change the links to ones that use ODBC to access the data. Access includes an “upsizing” wizard for exporting tables to SQL Server and linking to them, and SQL Server also provides a tool called the SQL Server Migration Assistant for Access (http://www.microsoft.com/downloads/details.aspx?FamilyID=d842f8b4-c914-4ac7-b2f3-d25fff4e24fb&DisplayLang=en).

This improves reliability, scalability, and security. However, users require privileges and training to create, maintain, and administer SQL Server databases. Some IT organizations restrict the number of users who have such privileges.
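Once the tables live on SQL Server, an Access front end (or any other ODBC client) reaches them through an ODBC connection string instead of a file path. As a rough sketch of that idea in Python, assuming the third-party pyodbc package; the server, database, and table names below are hypothetical placeholders:

```python
# Sketch: querying a table that the upsizing wizard moved to SQL Server,
# over the same ODBC layer that relinked Access front ends rely on.
# "SQLSRV01", "NorthwindUpsized", and "Customers" are hypothetical names.

def build_conn_str(server: str, database: str) -> str:
    """Assemble an ODBC connection string for the migrated database."""
    return (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={server};DATABASE={database};Trusted_Connection=yes;"
    )

def fetch_customers(conn_str: str):
    """Read all rows from the migrated Customers table."""
    import pyodbc  # third-party: pip install pyodbc
    with pyodbc.connect(conn_str) as conn:
        return conn.cursor().execute("SELECT * FROM Customers").fetchall()

conn_str = build_conn_str("SQLSRV01", "NorthwindUpsized")
```

The driver name and authentication mode vary by environment; the point is that only the link definition changes, while the forms, reports, and queries keep working against the same table names.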

Using Terminal Services to Deploy Access Applications

Another option is to centralize Access applications on terminal servers. This provides the significant benefit of allowing users to access their applications across a wide area network or the Internet, while maintaining good performance. IT managers have better control over backup, reliability, and security for those applications.

However, users can’t get to their applications from any browser on any device. Using Terminal Services works well for intranet deployments of canned Access applications, but it is less useful for supporting ad hoc user-generated solutions.

 

Increasing Manageability with SharePoint 2010 Products

SharePoint 2010 Products are architected to support safe and scalable creation of thousands of sites and lists by minimally privileged and minimally trained users. In addition, SharePoint Server is highly manageable. It has a security model that is tightly integrated with Active Directory. Data backups are assured, and a multi-level recycle bin provides easy recovery of deleted data items. A highly scalable architecture supports handling increased load by adding servers. Plus, SharePoint 2010 Products are engineered to provide highly configurable activity throttles to protect server and network resources, while still supporting end-user creation of new applications and new content.

As a Web-based platform that employs standard Internet protocols, SharePoint 2010 Products enable users to access their applications from any browser on any device. Users are often delighted to learn how easy it is for them to collaborate over the Web with SharePoint 2010 Products. This ease of use, combined with IT-friendly manageability, has made SharePoint the fastest growing server product in the history of Microsoft.

For all these reasons, the case for integrating Access and SharePoint 2010 Products is strong, and with each of the last three versions of the products that integration has deepened.

 

A Brief History of Access/SharePoint Products and Technologies Integration

Access 2003 introduced integration with Microsoft SharePoint Portal Server 2003 and Windows SharePoint Services by adding an Installable ISAM driver that enabled the Jet database engine, the engine used by Access 2003, to connect to SharePoint lists. This allowed Access users to view and edit SharePoint data and to create queries that join SharePoint data to data from other sources.

Access 2007 added significant new support for Microsoft Office SharePoint Server 2007 and Windows SharePoint Services by enabling users to take list data offline and then synchronize with the server when they reconnect. To accomplish this, the Access team branched off a proprietary version of the Jet database engine, renamed ACE. They added new data engine features to provide parity with SharePoint data types, including support for file attachments and complex multi-valued columns. New Access UI made it easier to move data to SharePoint lists, and SharePoint Products and Technologies also added UI features to support working with Access applications stored in document libraries.

With Access 2007, users had a more seamless experience of working with SharePoint lists, but those lists still lacked full suitability for use in most Access applications. Performance was often slow and important database features, such as referential integrity and enforcement of validation rules, couldn’t be implemented without resorting to complex Workflow programming on the computer that is running Office SharePoint Server or Windows SharePoint Services.

Access 2010 and SharePoint 2010 Products address these shortcomings. Performance issues have been eliminated through server-side and client-side data caching, as explained below. Referential integrity and the basic expected forms of data validation are now enabled on the server without requiring any custom programming. For more advanced validation, users can easily use Access macros to create server-based Workflow customizations.

In addition, Access 2010 offers exciting new ways of integrating with SharePoint 2010 Products that allow users to run Access applications using only a Web browser. These new capabilities, based on Access Services running on SharePoint Server 2010, require Enterprise CAL licensing, but economical hosted solutions are or will soon be made available from Microsoft and third parties for organizations that don’t have in-house installations.

 

Publishing Access 2010 Databases to Access Services

Access 2010 introduces the ability to publish a database to Access Services, which creates a SharePoint site for the application. Any local tables are moved to SharePoint lists. If any of the data can’t be moved into a list, then publishing cannot happen until the data is exported to a separate database or changed to be compatible. A compatibility checker supports this by listing any problems that would prevent publishing.

After a database is published, it becomes a Web database, meaning that users can add Web forms, reports, queries and macros that can execute on the computer that is running SharePoint Server when the application runs in a browser or on the client when the application runs in Access. Users can browse to Web forms and reports over the Internet, or they can run them in the Access client.

Published databases can also contain objects, with a fuller feature set, that only run in the Access client. Tables linked to external data, such as data in other Access databases, Excel spreadsheets, SQL Server tables, or even in SharePoint lists on other sites, are only available in the Access client, not to Web forms and reports. All design work occurs in the Access client.

Publishing Access databases to Access Services on SharePoint Server, rather than simply saving them in SharePoint document libraries, provides three key advantages:

●    Published applications can contain forms and reports that are enabled to run in a browser as well as in the Access client

●    Published applications are stored and synchronized with greater granularity and efficiency than applications in document libraries

●    Published applications are more manageable than applications stored in document libraries

Web Access, Better Performance and Enhanced Manageability

Making forms and reports available over the Web provides a key advantage. In today’s distributed workforce, being able to collaborate with colleagues all around the world is critical. Increasingly, users are looking for a no-install solution for collaboration that can work with varied bandwidth and varied proximity to data. Web applications also enable users to work with Access applications without being distracted by tools for customizing the application, a benefit that has in the past often required professional programming services.

In published databases, individual Access objects are serialized and saved in a hidden SharePoint list. This is similar to the way that programming objects are saved in source control systems. When users choose to open published applications in the Access client, rather than in a browser, the local version of the database is synchronized with the version on the server. Synchronization affects all the objects in the database, not just the data. Because objects are stored as individual data items, SharePoint Server maintains the user identity and date for each modification, as it does for all data changes.

As in source control, the Access client downloads the entire database only when a user doesn’t already have a local copy. Subsequently, Access fetches only objects and data items that have changed. This arrangement is much more efficient than working with applications saved monolithically in document libraries and provides noticeably faster performance. Different users can make changes to different objects or different data items in a list without causing conflicts. When data conflicts do occur, a conflict resolution wizard enables the user to choose which version of data items to preserve. For object conflicts, Access provides the name of the user who made the saved change and creates a renamed copy of the local object before downloading the other user’s changes. This fosters collaboration on resolving any design conflicts and ensures that no design work is lost.

Publishing also enables greater administrative control. Permissions can limit some users from being able to modify, delete, or create objects in a site, while still allowing them to run the application. In addition, because application objects are stored individually as list items rather than monolithically in document libraries, the throttles in SharePoint Server for limiting list traffic and user activity can apply to individual types of application objects.

Because of the many advantages of using published databases rather than document libraries to centralize application storage on a computer that is running SharePoint Server, document libraries should only be considered for storing legacy Access databases that cannot be upgraded to Access 2010, or when a server supporting Access Services isn’t available.

 

Web Databases, Web Objects, and Client Objects

To create an Access 2010 Web database, you can choose Blank Web Database when creating a new one or you can publish an Access 2010 database that wasn’t originally created as a Web database. If it publishes successfully, it automatically becomes a Web database.

In a Web database, you can create two types of objects: Web objects that can run either in a browser or in the Access client, and non-Web objects that can only run in the Access client. All design changes for all types of objects must be made in the Access client.

When you are working on a Web database in the Access client, the Create ribbon refers to non-Web objects as Client objects, specifically Client Forms, Client Reports, Client Queries, and Client Macros. That terminology might be confusing, because these so-called “client” objects actually do get published to the server along with your Web objects. Any design changes you make to them are propagated to the server when you synchronize, and you also receive any synchronized changes made to them by other users, just like with Web objects. What makes non-Web objects special is that they can only run in the Access client, not in a browser. They use the Web for synchronizing design changes but not for execution. All linked tables are client tables, and only client objects can see them, but the definitions of the links get synchronized with the server like other client objects. Synchronization provides a useful way to collaborate and deploy applications, even if all the objects in the database are client objects and all the tables are client linked tables.

To create Web objects, you must be working in a Web database. To add Web objects to a database that wasn't created as a Web database, you must first publish it; in a Web database created from scratch, you can add Web objects even before you've published. When you create local tables in a Web database that you haven't yet published, the table schema is guaranteed to be compatible with SharePoint lists. So it is very useful to create a Web database when you start a new application, even if you have no immediate plans to publish it.

The only data available to Web objects is the data contained in the application’s Web tables. Only client objects can work with linked tables. In the Access client, when connected to the computer that is running SharePoint Server, data and design changes to Web tables automatically synchronize with the server, and Access works against local copies of the tables. When disconnected, Access seamlessly continues to work against the local copy, and doesn’t allow design changes to the tables. When reconnected, Access notifies users that they can now synchronize and resolve any conflicts. Design changes to objects other than Web tables synchronize only when users explicitly request synchronization by clicking the Sync All button in the Info section of Backstage view. Backstage view is the name for the view displayed in the File menu.

Web objects support the same feature set whether running in a browser or in the Access client, but some features of client objects are not supported by the corresponding Web object types. For example, VBA code only runs in client forms, not Web forms. Web forms rely on macros for programmability, but this is less of a restriction than you might expect, because Access 2010 macro capabilities are significantly enhanced. Separate sections below, under the heading Access 2010 Features by Object Type, provide more details on how the different types of Web objects differ from their client counterparts.

 

Deploying Access databases in SharePoint Technologies

The following sections provide an overview of several different topologies that are supported for integrating Access and SharePoint Technologies.

Storing databases into SharePoint Document Libraries

Access 2007 supports the use of SharePoint document libraries for centralizing storage and deployment of Access applications. This includes support for Access forms and reports in databases stored in document libraries. When users open these, the databases automatically open in the Access client. Users can move entire applications to SharePoint libraries. The applications always run in the Access client, and they are downloaded only when a user first opens the application or when the server version is updated. One restriction is that the design objects in these applications are read-only. To make design changes, a user must work on another local copy and upload the new version, replacing the old one on the server. These applications can work with local Access tables, data in SharePoint lists via linked tables, or any other supported external data source.

Moving forward, document libraries should only be used as legacy support for users who have not upgraded to Access 2010 or for SharePoint Technology installations that do not support Access Services. When Access Services are available, they provide many advantages over using traditional Access applications stored in document libraries. Access 2010 applications published to SharePoint Server using Access Services support design changes and allow multiple users to collaborate on design. Design changes are tracked per object rather than per project, resulting in fewer conflicts. In addition, Access Services supports the addition of Web objects that users can access through a browser, without depending exclusively on the Access client.

Publishing an Access Services application

Access 2010 combined with Access Services introduces the ability to publish a database to SharePoint Server. A site is created for the database, and tables are stored as SharePoint lists. Web forms become available from within a browser, the site and data are backed up, and permission levels can be maintained by SharePoint Server. The publish process moves the data from the database to SharePoint Server and converts the tables to linked tables. Users then have the option to use the Web objects in the browser or to open the database from the website in the Access client, enabling access to the client objects that are also stored inside the site.

Users also can link to SharePoint lists from databases that are not Web databases and that will never be published to SharePoint Server. For example, a user might create a simple Web database to collect data from other users over the Web. In a separate application, the user can link to that data and create reports that combine the data with other data sources.
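As a sketch of this scenario, a client query in the separate reporting database could join the linked list to local data. The table and column names here (SurveyResponses, Regions, RegionID) are hypothetical, not part of any shipped sample:

```sql
SELECT r.RegionName, Count(s.ID) AS ResponseCount
FROM Regions AS r INNER JOIN SurveyResponses AS s
  ON r.RegionID = s.RegionID
GROUP BY r.RegionName;
```

Because the query runs in the Access client against a linked list, it can freely combine the Web-collected data with any other supported data source.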

Linked SharePoint lists in Access 2010 also have the same support as Web tables for offline work. Disconnected users can view or modify the data offline and then synchronize with the server when they reconnect. In addition, users can work through the standard Web interfaces in SharePoint Server to work with list data, even if the data isn’t in a published Access Web database.

Publishing Client-only Applications

Even when Access 2010 applications rely exclusively on linked data from external Access databases, spreadsheets, database servers, Web services, or linked SharePoint lists, there are advantages to publishing the applications. For users, published applications support convenient deployment, versioning, and collaboration. For IT managers, published applications benefit from the backup, security, and manageability features in SharePoint Server.

By adding Web tables to these applications, users have the ability to extend their applications to include some forms and reports that run on the Web.

Hosted SharePoint Server Options

For organizations or users that do not have Enterprise CAL SharePoint Server licenses or do not want to maintain their own installation of SharePoint Server, Microsoft and third parties provide hosted options with economical monthly per-user rates. These options include multi-tenant hosting where data from multiple organizations is segregated on one server, or dedicated options that provide the added assurance of complete segregation on dedicated servers maintained by the service provider.

Migrating Legacy Data to Web Tables

Web objects in Access 2010 can only work with data in an application’s Web tables, which are implemented on the server as SharePoint lists. To create Web forms and reports that work with legacy data, users must import their data into local Access tables and publish the database, or import the data into existing Web tables in the database. Publishing succeeds only when the table schema and the data itself in local tables are compatible with SharePoint lists.

Some or all of the legacy data in an application can remain in external data sources that appear in Access as linked tables, but this data isn’t available to Web objects. Linked table data is only available to client forms, reports, queries, and macros running in the Access client.

Using the Web Compatibility Checker

The Web Compatibility Checker inspects table designs and logs each incompatibility that it finds in a table named Web Compatibility Issues. You can run the Web Compatibility Checker by right-clicking a table and selecting Check Web Compatibility, or by clicking the Run Compatibility Checker button that appears when you choose Publish to Access Services in the Save and Send section of Backstage view.

Handling Compatibility Issues

The most common compatibility issues found by the Web Compatibility Checker involve invalid names of tables or columns, compound multi-column indexes, incompatible lookup definitions, composite and text-based keys, and the use of table relationships to enforce referential integrity.

Invalid Names

Table and column naming restrictions are described in the previous section on tables. You must ensure that your names do not conflict with SharePoint reserved words and do not contain illegal characters. The Access Name AutoCorrect feature will propagate changes to dependent objects, such as queries and bound controls, but you should thoroughly inspect and test the application to ensure that no required changes were missed. VBA code and values in all expressions, for example, are not automatically corrected.

Compound Indexes

Indexes based on multiple columns are not supported in Web applications, as explained in the section on tables above.

Lookup Definitions

Access tables support queries in lookup definitions that are not supported in SharePoint lists. SharePoint Server requires the input source to be a single table with a Long Integer primary key. SQL queries in lookup definitions also must not contain the DISTINCT or DISTINCTROW keywords. When lookups use value lists, the bound column must be the first column.

 

 

Referential Integrity

Declarative referential integrity is not supported for SharePoint lists. Instead, properties have been added to SharePoint Server 2010 lookups to enforce data restrictions. Users can opt to prohibit insertions, deletions, and updates that would create “orphaned” rows in “child” lists.

For example, suppose you have a list of employees with a column showing the department of each employee. The Department column is a lookup to a separate Departments list. Using the Lookup Wizard in Access to configure the column, you can select Enable Data Integrity, as shown in the following figure. This prevents an employee being assigned to a department that doesn’t appear in the Departments list.


In some cases, you want to restrict deletions in parent tables to avoid creating orphans, as with Departments and Employees, but in other cases you want those deletions to propagate, or “cascade,” to the child list. For example, with Orders in one list and Order Items in another, you may want to allow users to delete an order and automatically delete the related line items for that order. In that case, you choose Cascade Delete rather than Restrict Delete in the Lookup Wizard.

These lookup properties are also supported in unpublished Access 2010 databases for local Access tables, but they are separate from the Relationships window that users are familiar with for configuring referential integrity in previous versions. Users must configure a lookup and set Enable Data Integrity before publishing an Access table to SharePoint Server. If referential integrity has already been configured using the Relationships window in Access, then users will have to delete the relationship before they can use the Lookup Wizard, which is invoked from the list of field types in the table designer. Tables with relationships that aren’t implemented in lookups can’t be published, and those lookups must be based on columns that have a numeric data type of Long Integer.

Primary and Foreign Keys

In non-Web local Access tables, Access supports the use of composite primary and foreign keys, which combine the values in two or more columns to create the key. Access also supports the use of a variety of data types for primary and foreign keys, including text and dates. These are not supported for published applications, because composite and string values cannot be used to create SharePoint Server lookups. String values can be displayed in lookups but the underlying relationship is always based on a numeric ID.

If users have primary and foreign keys based on multiple columns, or even on single text columns, which is quite common, they will need to change to a Long Integer numeric key before they publish, or they will receive a compatibility error.

The easiest way to achieve compatibility is to add an autonumber column to the parent table, such as Department, and to add a corresponding Long Integer column in the child table as the foreign key. You can then use an update query to place the correct foreign key values in the child table.

For example, if you have a Departments table with a text primary key named Department and you’ve used these department names as foreign keys in the Employees table, add an autonumber DepartmentID column to the Department table and a Long Integer DepartmentID column to the Employees table. Then run this Access query:

UPDATE Departments INNER JOIN Employees ON Departments.Department = Employees.Department SET Employees.DepartmentID = Departments.DepartmentID;

You would also need to delete the old relationship and create a new one using the Lookup Wizard. Additional work would be required to change existing queries, forms, reports, and VBA code to use the new DepartmentID column in the Employees table, allowing you to delete the old text-based foreign key.

 

Creating New Compatible Tables and Importing Legacy Data

In a Web database, the table designer restricts the available options to ones that are Web compatible. It is often much easier to use this designer to create new tables than to work through all the issues raised when attempting to publish an incompatible legacy table. Create Lookup columns to enforce referential integrity, as explained in the previous section covering referential integrity.

You can then create a linked table pointing to your legacy data and create a (client only) append query to append data from the legacy tables to the new Web-compatible tables.
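A minimal sketch of such a client append query, assuming a linked table named LegacyCustomers and a new Web-compatible table named Customers (both names hypothetical):

```sql
INSERT INTO Customers ( CustomerName, City )
SELECT CustomerName, City
FROM LegacyCustomers;
```

Because the query reads from a linked table, it is a client query; it runs in the Access client and populates the Web table before or after publishing.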

One disadvantage of this technique is that you cannot use Name AutoCorrect to fix up names used in legacy dependent objects such as queries and bound controls. You will need to do this manually and carefully test for errors. An alternative is to create client queries with columns aliased to the original names, which can simplify the changes required in dependent objects.
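For example, if a column had to be renamed ProductName for Web compatibility but legacy queries and bound controls expect Product Name, a client query can alias the column back to its original name (all names here are hypothetical):

```sql
SELECT ID, ProductName AS [Product Name], ListPrice AS [List Price]
FROM Products;
```

Pointing legacy objects at the aliased query instead of the renamed table keeps their field references working without editing each one.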

Synchronizing data between web tables and external sources

Web forms, web reports, and web queries cannot work with data from external data sources, such as SQL Server tables or SharePoint lists outside the current application site. To work around this limitation, you may want to build administrative applications that regularly copy data from external sources into the SharePoint lists of Web applications. This enables you to include the data in Web reports or display it read-only in Web forms. Several different strategies can support this.

One option is to create Access linked tables that connect to the external data, and to execute client queries that move the data into your Web tables. These queries can exist in the Web database you want to update or in another database that has tables linked to the SharePoint lists corresponding to your Web tables.

If the Web data that you want to maintain is not used in any lookups, then you can execute queries that first delete all the old data and then append the current data. If lookups require you to preserve existing key values in the data, then you can use a more complex process that updates existing values and handles deletions. In some cases, related child rows may also need to be deleted.
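The delete-then-append pattern can be sketched with two client queries, assuming a Web table named OrderSummary and a linked external table named LinkedOrderSummary (both hypothetical):

```sql
DELETE FROM OrderSummary;

INSERT INTO OrderSummary ( OrderDate, Region, Amount )
SELECT OrderDate, Region, Amount
FROM LinkedOrderSummary;
```

This simple refresh is only safe when no lookups reference the rows being deleted; otherwise, use the update-and-reconcile approach described above.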

Another option is to perform the data maintenance on a local disconnected copy of your Web database and to rely on Access/SharePoint Server synchronization to propagate the changes automatically when the database is reconnected to the server.

To ensure that data maintenance runs automatically on a defined schedule, you could create a SQL Server Integration Services package. Alternatively, you can use a Windows scheduled task to open an administrative Access application with an autoexec macro or a startup option that executes the data maintenance and closes the application. You can also execute an Access macro from a command line using the /x command-line switch.

Migrating Legacy Application Objects

You must recreate as Web objects any forms and reports that you want users to run in a browser. You must also recreate as Web objects any supporting queries and macros. You need to create Web-compatible macros to replace any VBA code. Controls from legacy forms and reports cannot be copied and pasted into Web forms and reports, but control formatting can be copied and pasted.

Legacy application objects can remain in the database without interfering in any way with publishing, and design changes that you make can be synchronized with the server version of the database, enabling easy versioning and deployment. However, these objects can run in the Access client only. Centralizing storage of application objects on a computer that is running SharePoint Server by publishing the application to Access Services improves manageability even for databases that don’t contain any Web objects and always run in the rich Access client.

Handling Publishing, Compilation, and Runtime Errors

The Web Compatibility Checker inspects table designs and logs each incompatibility that it finds to a local table named Web Compatibility Issues. The most common incompatibilities relate to primary keys and lookups, which were discussed in the previous section on handling compatibility issues.

However, the Web Compatibility Checker doesn’t detect certain types of incompatibilities. These cause publishing errors either when Access attempts to publish incompatible schema that the Web Compatibility Checker overlooked, or when Access attempts to populate tables with incompatible data.

After publishing succeeds, Web objects compile asynchronously and can generate compile errors. Even after successful compilation, runtime errors can result from invalid object definitions that don’t interfere with compilation or from logic errors.

Publishing Issues

Access logs publishing issues in a local table named Move to SharePoint Site Issues, and the message informing you that publishing failed provides a link to the table.

Most schema issues that cause publishing to fail after the Web Compatibility Checker reports success are related to expressions. The Expression Builder and IntelliSense guide users toward creation of valid expressions in Access, but users can easily enter invalid expressions and the Web Compatibility Checker does not evaluate them. As explained in the previous section on expressions, the expression services used on the client and on the server are different, and some expressions that are valid on the client are not valid on the server.

Another common cause of publishing errors is incompatible data, because the Web Compatibility Checker does not check data values, only data schema. Data values that are valid in Access but not in SharePoint Server will generate errors that are also logged to the local Move to SharePoint Site Issues table.

The following sections discuss several types of data incompatibility.

URLs

The Hyperlink data type in Access uses an underlying memo column to store display text and URLs. SharePoint Server also supports Hyperlink columns. However, it performs validation on Hyperlink URLs that may reject data contained in Access Hyperlink columns. For example, relative URLs are incompatible and must be replaced with ones that are fully qualified. In addition, SharePoint Server rejects URLs with more than 255 characters.
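A rough screening query for over-long URLs, assuming a table named Suppliers with a Hyperlink column named HomePage (names hypothetical; the stored value includes display text as well as the address, so review the matches manually):

```sql
SELECT SupplierID, HomePage
FROM Suppliers
WHERE Len(HomePage) > 255;
```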

Dates

Access and SharePoint Server both store date/time values using the Double data type. The integer portion of the number represents the number of days since day 0, and the fractional part of the number represents time as a fraction of a full day. However, the two systems use different timelines, and SharePoint Server does not support dates with underlying numeric values less than 1. In Access, day 0 is December 30, 1899, and prior dates are stored as negative numbers. In SharePoint Server, day 1 is January 1, 1900, and prior dates are not supported.

Many legacy Access applications contain dates that cannot convert to SharePoint Server. A common practice in Access is to use day-0 dates in columns designed to show time-only values. In addition, data entry errors in Access applications frequently result in dates prior to 1900, unless such errors have been prevented by validation rules.

To check for Access date values that are not Web compatible, you can create a query that filters for dates prior to January 1, 1900, for example:

SELECT InvoiceID, InvoiceDate FROM Invoices WHERE InvoiceDate < #1/1/1900#;
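If the flagged values are day-0, time-only values, one possible repair (assuming a table named Appointments with a time-only column named ApptTime, both hypothetical) is to shift them forward onto a supported date before publishing; December 30, 1899 plus 2 days is January 1, 1900, and the fractional time portion is preserved:

```sql
UPDATE Appointments SET ApptTime = ApptTime + 2
WHERE ApptTime < #1/1/1900#;
```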

Compilation Issues

Invalid expressions in data schema definitions, such as in validation rules and calculated columns, can cause publishing errors. However, invalid expressions in Web forms, reports, and queries surface only when the objects compile, which occurs asynchronously after publishing has succeeded.

Access Services logs compilation errors to the USysApplicationLog table, which is accessible through the View Application Log Table button in the Info section of Backstage view. Access uses the status bar to cue the user when issues are pending in the application log.

Runtime Issues

Even after publishing and compilation succeed, invalid expressions can still cause runtime errors. For example, invalid expressions in form or report control sources don’t surface until the object executes, at which point the control displays #Error.

When a macro fails at runtime, Access 2010 records the error in the application log.

Another type of runtime error relates to images. All the images in a Web application are available through a single image gallery and must be uniquely named. When synchronizing design changes, image naming conflicts may be resolved by appending “_username” to the name of a new image. This won’t generate an error, but the new or modified form or report may unexpectedly display the wrong image, because the reference is to the original name. The affected image names and control properties must be modified to correct this.

Upgrading Databases to Access 2010

Access 2010 supports the mdb file format for backward compatibility, but to use the new features, including support for Web databases, you must use the accdb format that was introduced in Access 2007. If you have a legacy database that you want to upgrade to take advantage of Access Services, you must convert it to an accdb file. You can expect a smooth upgrade for Access 2007 databases that are already in the accdb format.

For guidance on upgrading an mdb to an accdb, see the white paper, Transitioning Your Existing Access Applications to Access 2007 (http://msdn.microsoft.com/en-us/library/bb203849.aspx).

64-bit VBA Issues

Office 2010 provides 64-bit support primarily to enable Excel and Project users to work with a much larger address space. There are no advantages to running the 64-bit version of Access 2010. However, users who need 64-bit support for Excel or Project may try to run Access and could encounter some incompatibilities in applications that run fine in 32-bit mode. When 64-bit Office is installed on a machine, the user is required to uninstall any 32-bit versions of Office applications, including prior versions. 32-bit versions can be installed after 64-bit is installed, but Microsoft has not thoroughly tested these scenarios. The best practice is to run any 64-bit Office instance on a machine dedicated to that version only. 64-bit Office does not support any ActiveX controls or any COM add-ins.

All compiled VBA must be recompiled in 64-bit instances, meaning that mde and accde Access applications will not run. VBA code containing Declare statements must be rewritten before being recompiled, because pointer and handle values can no longer be contained in variables using the Long data type. Instead they must use the new LongLong or LongPtr data types. In addition, the new PtrSafe indicator must be added after the Declare keyword (Declare PtrSafe…) for the code to compile successfully. Conditional compilation, using #If, must be used in code that needs to compile under both legacy 32-bit and new 64-bit versions of Office. A convenient workaround is to give the few users who really need 64-bit support separate machines for running the applications that need the added memory space.
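The Declare rewrite described above can be sketched as follows. GetTickCount is only an illustrative Windows API; the #If VBA7 conditional-compilation branch lets the same module compile in both legacy 32-bit Office and 64-bit Office 2010:

```vba
#If VBA7 Then
    ' VBA7 (Office 2010, including 64-bit): Declare must carry the PtrSafe keyword.
    Private Declare PtrSafe Function GetTickCount Lib "kernel32" () As Long
#Else
    ' VBA6 and earlier: original Declare syntax.
    Private Declare Function GetTickCount Lib "kernel32" () As Long
#End If
```

APIs whose parameters or return values carry pointers or handles would additionally use LongPtr (or LongLong) in the VBA7 branch in place of Long.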

Summary

Access provides compelling benefits to end users, who love being able to create their own data tracking and reporting applications, and to IT departments that cannot otherwise fulfill all the application building requirements of their organizations. Access 2010 significantly extends its value proposition for users by integrating with SharePoint Server 2010 to support convenient creation of full-featured Web applications with broad reach, and client applications that are easily shared, revised, and deployed. Of equal significance are the manageability improvements provided through this deep integration with SharePoint Server.

 

Access 2010 Features by Object Type

The following sections discuss the technologies used for implementing the various types of Web objects, the differences between Web and client objects, and many of the new features introduced in Access 2010.

Tables

Web databases do not support local tables. Tables must be compatible with SharePoint lists or publishing will fail, and they are always converted to SharePoint lists when you successfully publish or synchronize the database. This is enforced in the table view used to modify tables in Access 2010, which allows you to create only schema that is compatible with SharePoint Server when you are working in a Web database. System tables, other than log tables, are not stored as tables on SharePoint Server. Any tables in a Web database that are linked to SharePoint lists outside the application’s site, or to any other type of external data, can be used only in the Access client. Linked tables can’t be seen by Web objects in the database, even when running in the Access client. Web objects use only Web tables, which transparently link to SharePoint lists in the application site.

A configurable administrative setting determines the maximum size of attachments in Web tables; an attachment that exceeds it can prevent publishing or synchronizing. The default limit is 50 MB. Web tables don’t support multiple attachment columns in one record, and you can’t add an attachment while a Web table is offline.

SharePoint Server doesn’t support certain table names that are allowed in Access client applications, and tables with illegal names will prevent publishing. The following illegal names conflict with reserved SharePoint list names: Lists, Docs, WebParts, ComMd, Webs, Workflow, WFTemp, Solutions, and ReportDefinitions. In addition, the following illegal names conflict with tables created during publishing: MSysASO, MSysLocalStatus, UserInfo, and USysApplicationLog.

To be publishable, table and column names, as well as the names of all other Access objects, cannot contain any of the following characters: / \ : * ? " < > | # { } % ~ & ; or the tab character.

SharePoint lists implement a form of indexing to speed filtering performance, and single-field indexes in Access tables propagate to the server. However, SharePoint Server doesn’t support indexes composed of multiple columns, and multi-column Access indexes prevent tables from being Web compatible. This also means that composite keys and uniqueness constraints are not Web compatible. A workaround for enforcing uniqueness based on multiple column values is to use a BeforeChange data macro, discussed in the section on data macros below.
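
To illustrate, such a uniqueness check might look like the following outline in the macro designer. The table and field names (tblDonations, ContributorID, DonationDate) are hypothetical, and the text paraphrases the designer’s blocks rather than any stored format:

```
On BeforeChange of tblDonations
    If IsInsert Then
        Look Up A Record In tblDonations
            Where Condition: [tblDonations].[ContributorID]=[ContributorID]
                         And [tblDonations].[DonationDate]=[DonationDate]
            ' If a matching record is found, cancel the change:
            RaiseError
                Error Number: 1
                Error Description: This contributor already has a donation for this date.
```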

Native Access tables limit the total number of columns (to 255) and the total number of characters in a row (4000 with Unicode compression set), but they do not limit the number of columns of a particular type. To be Web compatible, however, tables must also conform to the SharePoint Server 2010 limits for each data type, which are 5 times greater than the limits enforced in previous versions. Here are default limits for how many columns of a type you can have in a SharePoint list: date/time 40, bit 30, uniqueidentifier 5, number 60, note 160, and text 320.

The maximum number of lookup columns and multi-value columns is also limited. Published memo column values are truncated if they contain more than 8,192 characters.

Both native and Web tables support calculated columns based on expressions. This is a new feature in Access 2010 that can provide significant performance improvements compared to calculating values when queries execute. Centralizing the calculation definition in the table also improves reliability and consistency. If the application logic changes, you make the change in one place, rather than attempting to find every place the calculation was used. Because the calculation is maintained by the database engine, this storage of a calculated value does not violate normalization rules. Expressions that are not Web-compatible, which might also appear in column and table validation rules, will prevent publishing, even in Web databases that appear to meet the requirements of the compatibility checker. The section below on expressions provides more details on expression compatibility.

Forms

Web forms in Access 2010 enable users with no Web development experience to create full-featured Web pages. The design experience is familiar to anyone who has worked with Access client forms, and easy for new users to learn. On the server, Access Services does the work of translating the user’s design into ASP.NET pages that run in any standard browser (IE7, IE8, Firefox, and Safari are all explicitly supported). These pages do not use any ActiveX controls. Macros that users attach to form and control events are implemented as JavaScript code. Popup forms are implemented as floating divs. Design themes are implemented as CSS style sheets. The resulting pages are highly responsive and frequently employ asynchronous JavaScript and XML (AJAX) to refresh views rather than performing full postbacks, which keeps performance snappy. The browser Back and Forward buttons are fully supported.

Like all Access objects in Web applications, forms are serialized using the open Access Application Transfer Protocol (MS-AXL), which is documented at http://msdn.microsoft.com/en-us/library/dd927584.aspx. Forms also make use of the open standard XAML protocol used by Windows Presentation Foundation (WPF). Access uses these protocols to synchronize design changes with the server and to generate ASP.NET pages.

In the Access designer, Layout view for creating Web forms and reports is enhanced to make it much easier to control the exact positioning of controls by splitting and combining columns and cells. On the server, these layouts get implemented as hidden HTML tables. Design view is not available for Web forms.

Experienced Access users with little or no Web development experience are likely to be delighted at their ability to create attractive, highly functional pages using only the skills they’ve developed in Access. Several new form features are geared specifically toward creating “Web-like” interfaces.

For example, the new Navigation form creates Web forms (or client forms) with versatile hierarchical menus that can display other forms and reports embedded in the resulting page. Parameters can filter the record source query of the displayed object. A new Browser control supports parameterized URLs based on form control values, enabling easy creation of “mashup” interfaces that embed maps or other context-specific external content.

Web forms provide parity in the feature sets available in the browser and in the Access client, as do all Web objects. However, an application may need to behave differently based on the runtime environment. Web applications support separate Web and Client startup form properties, available in Current Database settings in Backstage view. In addition, the IsServer and IsClient expressions return True or False, allowing macros to branch based on the runtime environment. Web forms “just work” in the client with no special programming required. The figures that follow show the same form, first in the Access client and then in a browser.



One difference between Web and client forms could occasionally cause confusion: client forms allow expressions to reference all columns that appear in the record source of the form, even if those columns are not bound to controls on the form. Web forms, however, support references only to columns that are bound to controls on the form. Users may need to add invisible controls to work around this limitation. The use of such hidden controls is a common practice in Access client reports, which have always enforced a similar restriction.

Reports

Web reports use SQL Server Reporting Services. They are deployed to the server using AXL and Report Definition Language (RDL). As described below, the Access Database Service mediates all data access to the SharePoint lists of a Web database, providing the same caching behavior and performance benefits available to forms.

Web reports support exporting to PDF from the browser, which provides a great printing experience, and users can also export to Word or Excel document formats. A handy new feature in Access 2010 enables subforms to host reports, and Navigation forms display reports by using this feature.

Both forms and reports support standard and custom Office Themes for configuring display appearance. Organizations can specify preferred themes as a way of encouraging design consistency. When a user changes a database theme, all the forms and reports that use the theme are affected. To propagate theme changes in Web objects to the server, you must open the objects in layout view, save them, and then synchronize. This pattern also applies to Name AutoCorrect, an Access client feature that propagates name changes to dependent objects. The dependent objects aren’t renamed until you open and save them, and you can then synchronize to rename the corresponding objects on the server.

To publish and synchronize successfully, the names of Web forms, reports, and controls, as well as all other Access objects, cannot contain any of the following characters: / \ : * ? " < > | # { } % ~ & ; or the tab character. In addition, Web form and report controls must have names that begin with an uppercase or lowercase letter or an underscore (not a number), and the rest of the name must contain only letters, underscores, or numbers (the naming rules for C# identifiers). This is more restrictive than the set of characters allowed in names for client forms and reports.

In Web forms, record source queries that include lookup columns automatically join to the related tables to display the text values for lookups. However, in Web reports this automatic joining behavior for lookups doesn’t occur — you must explicitly add the related tables to your report queries. Design view is not supported for Web reports.

Queries

Web queries are stored and implemented by Access Services using AXL and Collaborative Application Markup Language (CAML). The query processor in Access Services caches execution plans as well as data, and pushes as much filtering to the server as possible to improve performance. On disconnected clients, Access uses the client-side ACE query processor to execute Web queries against locally cached data. Client queries can work with Web tables or linked tables from other data sources and can join the two.

SQL View is not available for Web queries in the query designer. Users must employ the Access query design grid, which enforces Web query restrictions. In the Access client, you can still use VBA to get a Web query’s SQL representation by retrieving the SQL property of a DAO QueryDef object. In the Immediate Window of the Visual Basic Editor, enter the following, replacing “QueryName” with the actual name of your query:

?CurrentDb.QueryDefs!QueryName.SQL
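
Building on the same DAO property, a short client-side routine can dump the SQL for every saved query in the database. This is an illustrative sketch; it assumes the default DAO reference that Access databases include, and it runs only in the Access client:

```vb
' Print the name and SQL text of each saved query
' to the Immediate Window.
Public Sub ListQuerySql()
    Dim qdf As DAO.QueryDef
    For Each qdf In CurrentDb.QueryDefs
        ' Names beginning with "~" belong to hidden system queries.
        If Left$(qdf.Name, 1) <> "~" Then
            Debug.Print qdf.Name & ": " & qdf.SQL
        End If
    Next qdf
End Sub
```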

Web queries support projection of selected columns from multiple joined data sources, including both inner and outer joins. The data sources for a Web query can include other saved Web queries in addition to tables. Web queries also support filtering based on multiple criteria, including expressions, sorting based on multiple columns, and calculated columns based on expressions.

However, Web queries do not support the full range of features available in client queries. They do not support aggregates, crosstabs, unions, Cartesian products (cross joins), subqueries, or any actions that modify data or create new objects.

Several client query properties are unavailable in Web queries, including Output All Fields (“SELECT *” in SQL), Top Values (TOP), Unique Values (DISTINCT), UniqueRecords (DISTINCTROW), Max Records, and Subdatasheet properties.

Although Web queries cannot calculate aggregate values, aggregation is fully supported on Web reports. In addition, as explained below, data macros can maintain aggregate values in tables.

Web queries support the use of parameters. Macros that open forms, reports, and query datasheets, and ones that set subform source object property values with the new BrowseTo macro action, all allow you to specify parameter values for the record source queries of these objects. You can use any valid expression to set the parameter, including expressions that refer to data values in form controls.

Traditional Access client queries are very flexible about how users can define parameters. A Parameters dialog allows users to specify the name and data type of each parameter, but for most queries this is optional. Users can simply enclose any text in square brackets and the Access client query processor treats the value as a parameter name unless it is the name of a column in the query. In Web queries, however, all parameters must be explicitly defined with the Parameters dialog.

Expressions that define calculated columns or criteria in Web queries must be compatible with the Excel-based expression library that Access Services uses. For more information on this, see the Expressions section below. In addition, expressions in Web queries cannot reference controls on forms or macro variables.

Configurable administrative settings limit many aspects of Web query design to protect resource usage. For example, you can limit the number of outer joins, which are resource intensive, in addition to the number of output columns, data sources, etc. See the section covering administration below for more details. Also, note that each lookup column a query uses adds an extra data source, because of the hidden join needed to retrieve the displayed value.

Macros

Web objects do not support VBA code. Programmability for Web objects relies instead on Access macros, which have been significantly enhanced to provide greater ease of use, security, resilience, and manageability.

The macro designer was completely rebuilt in Access 2010 to improve ease of use and to support more complex logic. Users select from context-sensitive options to generate readable, structured, collapsible blocks of code. Full IntelliSense guides users to appropriate syntax and argument values.

Access 2010 supports two different types of macros. UI macros, which are simply called Macros in the Access user interface, extend the capabilities of traditional Access macros to respond to user actions and to control application flow and navigation. In Web forms running in a browser, these macros are implemented as JavaScript. Data macros, which are new in Access 2010, are similar to SQL triggers. They run in response to data modifications. On the server, data macros are implemented as SharePoint Workflow actions.

These two types of macros create a clean separation between presentation tier and data tier code in Access applications. An architectural shortcoming of many traditional Access applications that rely heavily on VBA code is that they often muddy the distinction between these logical tiers. UI macros can only perform data-related actions by calling named macros, which are saved data macros that aren’t attached to specific table events. Data macros can also call named macros, supporting code reuse and maintainability. Named macros support parameters, as shown in the following figure.


In addition to parameters, data macros support the use of local variables, and UI macros support both local and global variables. Originally added in Access 2007, macros support robust error handling, and for debugging, the MacroError object provides properties, such as Number, Description, and ActionName, which you can record in the application log with the new LogEvent action. You can also nest If/Then/ElseIf/Else blocks to create complex conditional logic. Macros can serialize to XML or load from XML, to support sharing and reuse.

All macros that run in Web objects on the server are mediated by Access Services with configurable throttles to ensure safe execution. Web macros support a subset of the macro actions that client macros support, and client macros can use a sandboxed subset of actions to create applications that don’t require full trust.

 

 

Data Macros

Data Macros, introduced in Access 2010, are available for Web tables in Web databases or native tables in unpublished databases. Even tables linked to Access data in other databases support data macros, but the data macros must be defined in the database containing the native tables, not the links. Data macros use an event model similar to that of triggers in SQL Server, to enable reliable enforcement of data rules.

Once you define a data macro for a table event, that macro will run no matter how the data is accessed. This provides a significant new capability for Access that enables much more application reliability than was previously available for Access tables.

In past versions, the Access database engine was able to enforce referential integrity defined in relationships between tables, and domain integrity enforced by table-level and column-level validation rules. Users could also configure columns to enforce rules for unique and required values. (These constraints are all still supported in Web databases.) Any other rules for data, however, relied on logic in data entry forms for enforcement. Developers attempted to prevent users from circumventing the rules by hiding tables from view, but this was never foolproof. In addition, having data rules enforced in multiple forms and possibly in multiple applications invited inconsistency.

In tables that have been published to SharePoint lists, the rules are enforced on the server by SharePoint Workflow actions. In native Access tables, data macro execution is enforced locally by the Access database engine, which provides parity with the actions available to server-side data macros.

Here are a few examples of data macro scenarios you might find in a Donations Management database:

●    Validate that a contributor doesn’t have outstanding donations before accepting a new donation.

●    Keep a history of any changes to a donation record.

●    Send a “Thank You” email when a contributor makes a donation greater than $1,000.

●    Maintain a total of all donations and the last donation date in summary columns of the contributors table. (Although Web reports support aggregates, Web queries do not. So using data macros to maintain aggregate values in tables can be useful.)

You can attach data macros to the BeforeChange, BeforeDelete, AfterInsert, AfterUpdate, and AfterDelete events of tables. Data macros attached to After events, and all UI macros, can call named data macros associated with a table. The calling macros can pass in parameter values and can get back a collection of return values as well as errors. Errors are also logged to the USysApplicationLog table, which is easily discovered in Backstage view and is maintained in both Web and non-Web databases.

The BeforeChange and BeforeDelete events are designed to support fast, lightweight operations. Data macros attached to these events can inspect the old and new values in the current record, and can compare them with a record in the current table or another table by using LookupRecord. They can also use SetField to alter data in the row being changed, or prevent the change from occurring. To ensure that the operations remain lightweight, however, they cannot iterate over a collection of records. The BeforeChange event fires for both inserts and updates, but data macros can use the IsInsert expression to distinguish the type of operation.
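
As a small illustration, a BeforeChange macro could use IsInsert and SetField to stamp audit columns in the row being changed. The table and column names below are hypothetical, and the outline paraphrases the macro designer’s blocks:

```
On BeforeChange of tblDonations
    If IsInsert Then
        SetField
            Name:  [CreatedDate]
            Value: Now()
    Else
        SetField
            Name:  [ModifiedDate]
            Value: Now()
```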

The AfterUpdate, AfterInsert, and AfterDelete events can support more long-running operations that require iteration. The old and updated values are available, and macros invoked from these events can inspect and modify other records in the table or in other tables. Typically, users should not use these events to modify the current record; the BeforeChange and BeforeDelete events are more appropriate.

Occasionally, a user may need a data macro to modify the current record, potentially causing the macro to be called again recursively. Data macros are limited to 10 levels of recursion, but they can call the Updated("FieldName") function, which returns True or False, to determine which column or columns were affected by the current change. Judicious use of this function can usually prevent cyclical recursion.

A few cautions concerning data macros:

●    In some instances, when SharePoint lists are taken offline in disconnected Access applications, data macro execution is delayed until the user reconnects. Data changes made on the disconnected client are automatically propagated to the server when the connection is restored, and the data macros run on the server.

●    Unlike SQL Server triggers, data macros do not run within a transactional context. Access 2010 does not provide transactions for serial data operations; each individual operation is atomic, but a sequence of operations cannot be grouped into a single transaction.

●    Data macros cannot process data from multi-valued or attachment columns.

●    Access 2007 SP1 can read but not write data in linked Access 2010 tables with data macros, because the Access 2007 data engine can’t execute them.

Expressions

Access supports the use of expressions in form and report control sources and events, query criteria, calculated columns in queries and tables, validation rules for tables, columns, and controls, default values for columns or controls, and macro arguments.

Access expressions are similar to Excel formulas, and Access Services uses a modified version of the Excel Calculation Services library. One important modification is to support the use of database nulls. However, this library does not provide complete parity with the Access client expression service.

The Expression Builder is significantly improved in Access 2010 to show a context-sensitive list of available options and to provide IntelliSense support, which is available anywhere that users can enter expressions.

Incompatible expressions used in the validation rules or calculated columns of Access tables in unpublished client databases may not be detected by the compatibility checker and could cause compile errors when the database is published.

Access could also fail to detect an incompatible expression in the design of a Web object, causing a runtime error when the object executes. For example, a form control that displays “#Error” could indicate that its control source uses an invalid expression. A Web query containing an incompatible expression returns a runtime error indicating an invalid expression.

Access 2010 adds support for expression keywords targeted at Web applications. For example, you can use CurrentWebUser to get the email address, display name, or network name of the current user when IsServer is true.

Here are a few expression issues that can cause errors when executed on the server:

●    You must fully qualify control references: Use Forms!MyForm!MySubform.Form!MyControl, not MySubform.Form!MyControl

●    Don’t rely on type coercion. For example, If conditions must return Boolean values. Use If (15<>0), not If (15). When possible, use the Format function to convert expressions to the proper type.

●    Dates do coerce to doubles, but they use a different numbering system on the server. SharePoint Server does not recognize dates prior to 1/1/1900. Use FormatDateTime if you need to convert to strings.

●    For Booleans, use True/False, not -1/0.

●    Access Services doesn’t support the DateAdd, DatePart, and DateDiff functions. Instead, use DateSerial, Day, Month, and Year.

●    Field references in expressions in forms must refer to fields used in bound controls.

●    You can’t use the Between operator, which is commonly used in expressions in query criteria. Use >= and <= instead.
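
For instance, a client criterion such as Between #1/1/2010# And #12/31/2010# could be rewritten for a Web query as follows (the DonationDate field name is hypothetical):

```
[DonationDate] >= #1/1/2010# And [DonationDate] <= #12/31/2010#
```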

Expressions in legacy databases require scrutiny to ensure successful publishing, but the new server-based expression service is very full-featured and supports almost all the tasks that Access users have come to expect. In addition, the new Expression Builder makes it much easier to create compliant expressions in Web objects.

 

This is a preliminary document and may be changed substantially prior to final commercial release of the software described herein.

The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.

This White Paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, email address, logo, person, place or event is intended or should be inferred.

© 2009 Microsoft Corporation. All rights reserved.

Microsoft, Excel, InfoPath, MSDN, the Office logo, Outlook, PowerPoint, SharePoint, Visual Basic, Visual Studio, Win32, and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

The names of actual companies and products mentioned herein may be the trademarks of their respective owners.


Explore Microsoft SharePoint 2013

  1. Configuring the Base Configuration test lab.
  2. Installing and configuring a new server named SQL1.
  3. Installing SQL Server 2012 on the SQL1 server.
  4. Installing SharePoint Server 2013 on the APP1 server.
  5. Installing and configuring a new server named WFE1.
  6. Installing SharePoint Server 2013 on WFE1.
  7. Demonstrating the facilities of the default Contoso team site on WFE1.

  1. Setting up the SharePoint Server 2013 three-tier farm test lab.
  2. Configuring the intranet collaboration features on APP1.
  3. Demonstrating the intranet collaboration features on APP1.

  1. Setting up the SharePoint Server 2013 three-tier farm test lab.
  2. Create a My Site site collection and configure settings.
  3. Configure Following settings.
  4. Configure community sites.
  5. Configure site feeds.
  6. Demonstrate social features.

  1. Setting up the SharePoint Server 2013 three-tier farm test lab.
  2. Configuring AD FS 2.0.
  3. Configuring SAML-based claims authentication.
  4. Demonstrating SAML-based claims authentication.

  1. Setting up the SharePoint Server 2013 three-tier farm test lab.
  2. Configuring forms-based authentication.
  3. Demonstrating forms-based authentication.

Manually Back Up Team Foundation Server Visual Studio 2012


You can manually back up data for Visual Studio Team Foundation Server by using the tools that SQL Server provides. As of Cumulative Update 2, TFS includes a Scheduled Backups feature to automatically configure backups. However, you might need to configure backups manually if your deployment has security restrictions that prevent use of that tool. To manually back up Team Foundation Server, you must not only back up all databases that the deployment uses, you must also synchronize the backups to the same point in time. You can manage this synchronization most effectively if you use marked transactions. If you routinely mark related transactions in every database that Team Foundation uses, you establish a series of common recovery points in those databases. If you regularly back up those databases, you reduce the risk of losing productivity or data because of equipment failure or other unexpected events.

If your deployment uses SQL Server Reporting Services, you must back up not only the databases but also the encryption key. For more information, see Back Up the Reporting Services Encryption Key.

The procedures in this topic explain how to create maintenance plans that perform either a full or an incremental backup of the databases and how to create tables and stored procedures for marked transactions. For maximum data protection, you should schedule full backups to run daily or weekly and incremental backups to run hourly. You can also back up the transaction logs. For more information, see the following page on the Microsoft website: Creating Transaction Log Backups.

Note

Many procedures in this topic specify the use of SQL Server Management Studio. If you installed SQL Server Express Edition, you cannot use that tool unless you download SQL Server Management Studio Express. To download this tool, see the following page on the Microsoft website: Microsoft SQL Server 2008 Management Studio Express.

Required Permissions          

To perform this procedure, you must be a member of all the following groups:

  • The Administrators security group on the server that is running the administration console for Team Foundation.

  • The SQL Server System Administrator security group. Alternatively, your SQL Server Perform Back Up and Create Maintenance Plan permissions must be set to Allow on each instance of SQL Server that hosts the databases that you want to back up. 

  • The Farm Administrators group in SharePoint Foundation 2010, or an account with the permissions required to back up the farm.

        

Identify Databases              


            

Before you begin, you should take the time to identify all the databases you will need to back up if you would ever have to fully restore your deployment. This includes databases for SharePoint Foundation 2010 and SQL Server Reporting Services. These might be on the same server, or you might have databases distributed across multiple servers. For a complete table and description of TFS databases, including the default names for the databases, see Understanding Backing Up Team Foundation Server.

To identify databases

  1. Open SQL Server Management Studio, and connect to the database engine.

  2. In SQL Server Management Studio, in Object Explorer, expand the name of the server and then expand Databases.

  3. Review the list of databases and identify those used by your deployment.

    For example, Fabrikam, Inc.’s TFS deployment is a single-server configuration, and it uses the following databases:

    • the configuration database (Tfs_Configuration)

    • the collection database (Tfs_DefaultCollection)

    • the database for the data warehouse (Tfs_Warehouse)

    • the reporting databases (ReportServer and ReportServerTempDB)

    • the databases used by SharePoint Foundation 2010 (WSS_AdminContent, WSS_Config, WSS_Content, and WSS_Logging)

      Important

      Unlike the other databases in the deployment, the databases used by SharePoint Foundation 2010 should not be backed up using the tools in SQL Server. Follow the separate procedure “Create a Back Up Plan for SharePoint Foundation 2010” later in this topic for backing up these databases.
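      If your deployment uses the default database names, a quick query can also help with this review. The following is only a convenience sketch: the name patterns assume the default naming conventions, and any renamed databases will not match them.

      -- Sketch: list databases that follow the default TFS, Reporting Services,
      -- and SharePoint Foundation naming patterns. Renamed databases will not
      -- match these filters, so still review the full list in Object Explorer.
      SELECT name
      FROM sys.databases
      WHERE name LIKE 'Tfs_%'
         OR name LIKE 'ReportServer%'
         OR name LIKE 'WSS_%'
      ORDER BY name
      GO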

        

Create tables in databases              


            

To make sure that all databases are restored to the same point, you can create a table in each database to mark transactions. You can use the Query function in SQL Server Management Studio to create an appropriate table in each database.

Important

Do not create tables in any databases that SharePoint Products uses.

To create tables to mark related transactions in databases that Team Foundation uses

  1. Open SQL Server Management Studio, and connect to the database engine.

  2. In SQL Server Management Studio, highlight the name of the server, open the submenu, and then choose New Query.

    The Database Engine Query Editor window opens.

  3. On the Query menu, choose SQLCMD Mode.

    The Query Editor now executes sqlcmd statements. If the Query menu does not appear, click anywhere in the new query in the Database Engine Query Editor window.

  4. On the SQL Editor toolbar, open the Available Databases list, and then choose TFS_Configuration.

    Note

    TFS_Configuration is the default name of the configuration database. This name is customizable and might vary.

  5. In the query window, enter the following script to create a table in the configuration database:

     
    Use Tfs_Configuration
    Create Table Tbl_TransactionLogMark
    (
    logmark int
    )
    GO
    Insert into Tbl_TransactionLogMark (logmark) Values (1)
    GO
    
  6. Choose the F5 key to run the script.

    If the script is well-formed, the message “(1 row(s) affected)” appears in the Query Editor.

  7. (Optional) Save the script.

  8. Repeat steps 4–7 for every database in your deployment of TFS, except for those used by SharePoint Products. In the fictitious Fabrikam, Inc. deployment, you would repeat this process for all of the following databases:

    • Tfs_Warehouse

    • Tfs_DefaultCollection

    • ReportServer

    • ReportServerTempDB

        

            

After the tables have been created in each database that you want to back up, you must create a procedure for marking the tables.

To create a stored procedure to mark transactions in each database that Team Foundation Server uses

  1. In SQL Server Management Studio, open a query window, and make sure that SQLCMD Mode is turned on.

  2. On the SQL Editor toolbar, open the Available Databases list, and then choose TFS_Configuration.

  3. In the query window, enter the following script to create a stored procedure to mark transactions in the configuration database:

     
    Create PROCEDURE sp_SetTransactionLogMark
    @name nvarchar (128)
    AS
    BEGIN TRANSACTION @name WITH MARK
    UPDATE Tfs_Configuration.dbo.Tbl_TransactionLogMark SET logmark = 1
    COMMIT TRANSACTION
    GO
    
  4. Choose the F5 key to run the procedure.

    If the procedure is well-formed, the message “Command(s) completed successfully.” appears in the Query Editor.

  5. (Optional) Save the procedure.

  6. Repeat steps 2–5 for every database in your deployment of TFS, except for those used by SharePoint Products. In the Fabrikam, Inc. deployment, the administrator, Jill, repeats this process for all of the following databases:

    • Tfs_Warehouse

    • Tfs_DefaultCollection

    • ReportServer

    • ReportServerTempDB

    Tip

    Make sure that you select the name of the database for which you want to create the stored procedure from the Available Databases list on the SQL Editor toolbar before you create the procedure. Otherwise, when you run the script, the command will display an error stating that the stored procedure already exists.

        

            

To make sure that all databases are marked, you can create a procedure that will run all the procedures that you just created for marking the tables. Unlike the previous procedures, this procedure runs only in the configuration database.

To create a stored procedure that will run all stored procedures for marking tables

  1. In SQL Server Management Studio, open a query window, and make sure that SQLCMD Mode is turned on.

  2. On the SQL Editor toolbar, open the Available Databases list, and then choose TFS_Configuration.

  3. In the query window, create a stored procedure that executes the stored procedures that you created in each database that TFS uses. Replace ServerName with the name of the server that is running SQL Server, and replace Tfs_CollectionName with the name of the database for each team project collection.

    In the example deployment, the name of the server is FABRIKAMPRIME, and there is only one team project collection in the deployment, the default one created when she installed Team Foundation Server (DefaultCollection). With that in mind, Jill creates the following script:

     
    CREATE PROCEDURE sp_SetTransactionLogMarkAll
    @name nvarchar (128)
    AS
    BEGIN TRANSACTION
    EXEC [FABRIKAMPRIME].Tfs_Configuration.dbo.sp_SetTransactionLogMark @name
    EXEC [FABRIKAMPRIME].ReportServer.dbo.sp_SetTransactionLogMark @name
    EXEC [FABRIKAMPRIME].ReportServerTempDB.dbo.sp_SetTransactionLogMark @name
    EXEC [FABRIKAMPRIME].Tfs_DefaultCollection.dbo.sp_SetTransactionLogMark @name
    EXEC [FABRIKAMPRIME].Tfs_Warehouse.dbo.sp_SetTransactionLogMark @name
    COMMIT TRANSACTION
    GO
    
  4. Choose the F5 key to run the procedure.

    Note

    If you have not restarted SQL Server Management Studio since you created the stored procedures for marking transactions, one or more red wavy lines might underscore the name of the server and the names of the databases. However, the procedure should still run.

    If the procedure is well-formed, the message “Command(s) completed successfully.” appears in the Query Editor.

  5. (Optional) Save the procedure.

        

            

When you have a procedure that will run all stored procedures for table marking, you must create a procedure that will mark all tables with the same transaction marker. You will use this marker to restore all databases to the same point.

To create a stored procedure to mark the tables in each database that Team Foundation Server uses

  1. In SQL Server Management Studio, open a query window, and make sure that SQLCMD Mode is turned on.

  2. On the SQL Editor toolbar, open the Available Databases list, and then choose TFS_Configuration.

  3. In the query window, enter the following script to mark the tables with ‘TFSMark’:

     
    EXEC sp_SetTransactionLogMarkAll 'TFSMark'
    GO
    
    Note

    TFSMark is an example of a mark. You can use any sequence of supported letters and numbers in your mark. If you have more than one marked table in the databases, record which mark you will use to restore the databases. For more information, see the following page on the Microsoft website: Using Marked Transactions.

  4. Choose the F5 key to run the procedure.

    If the procedure is well-formed, the message “(1 row(s) affected)” appears in the Query Editor. The WITH MARK option applies only to the first “BEGIN TRAN WITH MARK” statement for each table that has been marked.

  5. Save the procedure.
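The payoff for marking comes at restore time. As a sketch only (the database name and backup file paths below are placeholders for your deployment), you would restore the full backup without recovery and then stop the log restore at the mark:

    -- Sketch: restore a database to the 'TFSMark' marked transaction.
    -- Database name and file paths are placeholders; adjust for your deployment.
    RESTORE DATABASE Tfs_DefaultCollection
        FROM DISK = 'E:\Backups\Tfs_DefaultCollection.bak'
        WITH NORECOVERY
    GO
    RESTORE LOG Tfs_DefaultCollection
        FROM DISK = 'E:\Backups\Tfs_DefaultCollection.trn'
        WITH STOPATMARK = 'TFSMark'
    GO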

        

            

Now that you have created and stored all the procedures that you need, you must schedule the table-marking procedure to run just before the scheduled backups of the databases. You should schedule this job to run approximately one minute before the maintenance plan for the databases runs.

To create a scheduled job for table marking in SQL Server Management Studio

  1. In Object Explorer, expand SQL Server Agent, open the Jobs menu, and then choose New Job.

    The New Job window opens.

  2. In Name, specify a name for the job. For example, Jill types the name “MarkTableJob” for her job name.

  3. (Optional) In Description, specify a description of the job.

  4. In Select a page, choose Steps, and then choose New.

    The New Job Step window opens.

  5. In Step Name, specify a name for the step.

  6. In Database, choose the name of the configuration database. For example, Jill’s deployment uses the default name for that database, TFS_Configuration, so she chooses that database from the drop-down list.

  7. Choose Open, browse to the procedure that you created for marking the tables, choose Open two times, and then choose OK.

    Note

    The procedure that you created for marking the tables runs the following step:

     
    EXEC sp_SetTransactionLogMarkAll 'TFSMark'
    
  8. In Select a page, choose Schedules, and then choose New.

    The New Job Schedule window opens.

  9. In Name, specify a name for the schedule.

  10. In Frequency, change the frequency to match the plan that you will create for backing up the databases. In the example deployment, Jill wants to run incremental backups daily at 2 A.M. and full backups on Sunday at 4 A.M. To mark the databases for the incremental backups, she changes the value of Occurs to Daily. When she creates another job to mark the databases for the weekly full backup, she changes the value of Occurs to Weekly and selects the Sunday check box.

  11. In Daily Frequency, change the occurrence so that the job is scheduled to run one minute before the backup of the databases, and then choose OK. In the example deployment, Jill specifies 1:59 A.M. in the job for the incremental backups and 3:59 A.M. in the job for the full backup.

  12. In New Job, choose OK to finish creating the scheduled job.
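If you prefer to script the job rather than use the New Job dialog box, a rough equivalent can be sketched with the SQL Server Agent stored procedures in the msdb database. The job name, step, and schedule below mirror the Fabrikam example and are illustrative only; adjust them for your deployment.

    USE msdb
    GO
    -- Sketch: create the marking job, its T-SQL step, and a daily 1:59 A.M.
    -- schedule, then register the job with the local server.
    EXEC sp_add_job @job_name = N'MarkTableJob'
    EXEC sp_add_jobstep @job_name = N'MarkTableJob',
        @step_name = N'Mark transaction logs',
        @subsystem = N'TSQL',
        @database_name = N'Tfs_Configuration',
        @command = N'EXEC sp_SetTransactionLogMarkAll ''TFSMark'''
    EXEC sp_add_jobschedule @job_name = N'MarkTableJob',
        @name = N'Daily before backup',
        @freq_type = 4,             -- daily
        @freq_interval = 1,
        @active_start_time = 15900  -- 1:59:00 A.M. (HHMMSS)
    EXEC sp_add_jobserver @job_name = N'MarkTableJob'
    GO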

        

Create a maintenance plan for full backups              


            

After you create a scheduled job for marking the databases, you can use the Maintenance Plan Wizard to schedule full backups of all of the databases that your deployment of TFS uses.

Important

If your deployment uses the Enterprise or Datacenter edition of SQL Server, but you think you might need to restore databases to a server running Standard edition, you must use a backup set that was made with SQL Server compression disabled. Unless you disable compression, you will not be able to restore Enterprise or Datacenter edition databases to a server running Standard edition. Turn off compression before creating your maintenance plans; to do so, follow the steps in the Microsoft Knowledge Base article.
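As a sketch, compression can be disabled as the instance-wide default with sp_configure; this assumes you are a sysadmin on an instance where backup compression is available (Enterprise or Datacenter edition).

    -- Sketch: disable backup compression by default for the whole instance.
    -- Individual BACKUP statements can still override this setting.
    EXEC sp_configure 'backup compression default', 0
    RECONFIGURE WITH OVERRIDE
    GO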

To create a maintenance plan for full backups

  1. In SQL Server Management Studio, expand the Management node, open the Maintenance Plans sub-menu, and then choose Maintenance Plan Wizard.

  2. On the welcome page for the SQL Server Maintenance Plan Wizard, choose Next.

    The Select Plan Properties page appears.

  3. In the Name box, specify a name for the maintenance plan.

    For example, Jill decides to create a plan for full backups named TfsFullDataBackup.

  4. Choose Single schedule for the entire plan or no schedule, and then choose Change.

  5. Under Frequency and Daily Frequency, specify options for your plan. For example, Jill specifies a weekly backup to occur on Sunday in Frequency, and specifies 4 A.M. in Daily Frequency.

    Under Duration, leave the default value, No end date. Choose OK, and then choose Next.

  6. On the Select Maintenance Tasks page, select the Backup Database (Full), Execute SQL Server Agent Job, and Back up Database (Transaction Log) check boxes, and then choose Next.

  7. On the Select Maintenance Task Order page, change the order so that the full backup runs first, then the Agent job, and then the transaction log backup, and then choose Next.

    For more information about this dialog box, choose the F1 key. Also, search for Maintenance Plan Wizard on the following page of the Microsoft website: SQL Server Books Online.

  8. On the Define Back Up Database (Full) Task page, choose the down arrow, choose All Databases, and then choose OK.

  9. Specify the backup options for saving the files to disk or tape, as appropriate for your deployment and resources, and then choose Next.

  10. On the Define Execute SQL Server Agent Job Task page, select the check box for the scheduled job that you created for table marking, and then choose Next.

  11. On the Define Back Up Database (Transaction Log) Task page, choose the down arrow, choose All Databases, and then choose OK.

  12. Specify the backup options for saving the files to disk or tape as appropriate for your deployment and resources, and then choose Next.

  13. On the Select Report Options page, specify report distribution options, and then choose Next two times.

  14. On the Complete the Wizard page, choose Finish.

    SQL Server creates the maintenance plan and backs up the databases that you specified based on the frequency that you specified.

        

            

You can use the Maintenance Plan Wizard to schedule differential backups for all databases that your deployment of TFS uses.

Important

SQL Server Express does not include the Maintenance Plan Wizard. You must manually script the schedule for your differential backups. For more information, see the following topic on the Microsoft website: How to: Create a Differential Database Backup (Transact-SQL).
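A manually scripted differential backup can be sketched as follows; the database name and path are placeholders, and you would repeat the statement (or loop over your databases) for each database in your deployment.

    -- Sketch: a scripted differential backup for one database.
    -- Repeat for each TFS database; the file path is a placeholder.
    BACKUP DATABASE Tfs_DefaultCollection
        TO DISK = 'E:\Backups\Tfs_DefaultCollection_diff.bak'
        WITH DIFFERENTIAL
    GO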

To create a maintenance plan for differential backups

  1. Log on to the server that is running the instance of SQL Server that contains the databases that you want to back up.

  2. Choose Start, choose All Programs, choose Microsoft SQL Server 2008, and then choose SQL Server Management Studio.

    1. In the Server type list, choose Database Engine.

    2. In the Server name and Authentication lists, choose the appropriate server and authentication scheme.

    3. If your instance of SQL Server requires it, in User name and Password, specify the credentials of an appropriate account.

    4. Choose Connect.

  3. In SQL Server Management Studio, expand the Management node, open the Maintenance Plans sub-menu, and then choose Maintenance Plan Wizard.

  4. On the welcome page for the SQL Server Maintenance Plan Wizard, choose Next.

  5. On the Select Plan Properties page, in the Name box, specify a name for the maintenance plan.

    For example, you could name a plan for differential backups TfsDifferentialBackup.

  6. Choose Single schedule for the entire plan or no schedule, and then choose Change.

  7. Under Frequency and Daily Frequency, specify options for your backup plan.

    Under Duration, leave the default value, No end date. Choose OK, and then choose Next.

  8. On the Select Maintenance Tasks page, select the Back up Database (Differential) check box, and then choose Next.

  9. On the Define Back Up Database (Differential) Task page, choose the down arrow, choose All Databases, and then choose OK.

  10. Specify the backup options for saving the files to disk or tape as appropriate for your deployment and resources, and then choose Next.

  11. On the Select Report Options page, specify report distribution options, and then choose Next two times.

  12. On the Complete the Wizard page, choose Finish.

    SQL Server creates the maintenance plan and backs up the databases that you specified based on the frequency that you specified.

        

            

You can use the Maintenance Plan Wizard to schedule transaction log backups for all databases that your deployment of TFS uses.

Important

SQL Server Express does not include the Maintenance Plan Wizard. You must manually script the schedule for transaction-log backups. For more information, see the following topic on the Microsoft website: How to: Create a Transaction Log Backup (Transact-SQL).
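A manually scripted transaction-log backup can be sketched as follows; as with the differential sketch, the database name and path are placeholders, and the statement would be repeated for each database that uses the full recovery model.

    -- Sketch: a scripted transaction-log backup for one database.
    -- The database must use the full (or bulk-logged) recovery model.
    BACKUP LOG Tfs_DefaultCollection
        TO DISK = 'E:\Backups\Tfs_DefaultCollection_log.trn'
    GO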

To create a maintenance plan for transaction log backups

  1. Log on to the server that is running the instance of SQL Server that contains the databases that you want to back up.

  2. Choose Start, choose All Programs, choose Microsoft SQL Server 2008, and then choose SQL Server Management Studio.

  3. In the Server type list, choose Database Engine.

    1. In the Server name and Authentication lists, choose the appropriate server and authentication scheme.

    2. If your instance of SQL Server requires it, in User name and Password, specify the credentials of an appropriate account.

    3. Choose Connect.

  4. In SQL Server Management Studio, expand the Management node, open the Maintenance Plans sub-menu, and then choose Maintenance Plan Wizard.

  5. On the welcome page for the SQL Server Maintenance Plan Wizard, choose Next.

    The Select Plan Properties page appears.

  6. In the Name box, specify a name for the maintenance plan.

    For example, you could name a plan to back up transaction logs TfsTransactionLogBackup.

  7. Choose Single schedule for the entire plan or no schedule, and then choose Change.

  8. Under Frequency and Daily Frequency, specify options for your plan.

    Under Duration, leave the default value, No end date.

  9. Choose OK, and then choose Next.

  10. On the Select Maintenance Tasks page, select the Execute SQL Server Agent Job and Back up Database (Transaction Log) check boxes, and then choose Next.

  11. On the Select Maintenance Task Order page, change the order so that the Agent job runs before the transaction-log backup, and then choose Next.

    For more information about this dialog box, choose the F1 key. Also, search for Maintenance Plan Wizard on the following page of the Microsoft website: SQL Server Books Online.

  12. On the Define Execute SQL Server Agent Job Task page, select the check box for the scheduled job that you created for table marking, and then choose Next.

  13. On the Define Back Up Database (Transaction Log) Task page, choose the down arrow, choose All Databases, and then choose OK.

  14. Specify the backup options for saving the files to disk or tape as appropriate for your deployment and resources, and then choose Next.

  15. On the Select Report Options page, specify report distribution options, and then choose Next two times.

  16. On the Complete the Wizard page, choose Finish.

    SQL Server creates the maintenance plan and backs up the transaction logs for the databases that you specified based on the frequency that you specified.

            

You must back up the encryption key for Reporting Services as part of backing up your system. Without this encryption key, you will not be able to restore the reporting data. For a single-server deployment of TFS, you can back up the encryption key for SQL Server Reporting Services by using the Reporting Services Configuration tool. You could also choose to use the RSKEYMGMT command-line tool, but the configuration tool is simpler. For more information about RSKEYMGMT, see the following page on the Microsoft website: RSKEYMGMT Utility.

To back up the encryption key by using the Reporting Services Configuration tool

  1. On the server that is running Reporting Services, choose Start, point to All Programs, point to Microsoft SQL Server, point to Configuration Tools, and then choose Reporting Services Configuration Manager.

    The Report Server Installation Instance Selection dialog box opens.

  2. Specify the name of the data-tier server and the database instance, and then choose Connect.

  3. In the navigation bar on the left side, choose Encryption Keys, and then choose Backup.

    The Encryption Key Information dialog box opens.

  4. In File Location, specify the location where you want to store a copy of this key.

    You should consider storing this key on a separate computer from the one that is running Reporting Services.

  5. In Password, specify a password for the file.

  6. In Confirm Password, specify the password for the file again, and then choose OK.

        

            

Unlike Team Foundation Server, which uses the scheduling tools in SQL Server Management Studio, SharePoint Foundation 2010 has no built-in scheduling system for backups, and SharePoint guidance specifically recommends against any scripting that marks or alters its databases. To schedule backups so that they occur at the same time as the backups for TFS, SharePoint Foundation 2010 guidance recommends that you create a backup script by using Windows PowerShell and then use Windows Task Scheduler to run that script at the same time as your scheduled backups of TFS databases. This will help you keep your database backups in sync.

Important

Before proceeding with the procedures below, you should review the latest guidance for SharePoint Foundation 2010. The procedures below are based on that guidance, but might have become out of date. Always follow the latest recommendations and guidance for the version of SharePoint Products you use when managing that aspect of your deployment. For more information, see the links included with each of the procedures in this section.

To create scripts to perform full and differential backups of the farm in SharePoint Foundation 2010

  1. Open a text editor, such as Notepad.

  2. In the text editor, type the following, where BackupFolder is the UNC path to a network share where you will back up your data:

     
    Backup-SPFarm -Directory BackupFolder -BackupMethod Full
    
    Tip

    There are a number of other parameters you could use when backing up the farm. For more information, see Back up a farm and Backup-SPFarm.

  3. Save the script as a .PS1 file. Consider giving the file an obvious name, such as “SharePointFarmFullBackupScript.PS1” or some meaningful equivalent.

  4. Open a new file, and create a second backup file, only this time specifying a differential backup:

     
    Backup-SPFarm -Directory BackupFolder -BackupMethod Differential
    
  5. Save the script as a .PS1 file. Consider giving the file an obvious name, such as “SharePointFarmDiffBackupScript.PS1”.

    Important

    By default, PowerShell scripts will not execute on your system unless you have changed PowerShell’s execution policy to allow scripts to run. For more information, see Running Windows PowerShell Scripts.

After you have created your scripts, you must schedule them to execute following the same schedule and frequency as the schedule you created for backing up Team Foundation Server databases. For example, if you scheduled differential backups to execute daily at 2 A.M., and full backups to occur on Sundays at 4 A.M., you will want to follow the exact same schedule for your farm backups.

To schedule your backups, you must use Windows Task Scheduler. In addition, you must configure the tasks to run using an account with sufficient permissions to read and write to the backup location, as well as permissions to execute backups in SharePoint Foundation 2010. Generally speaking, the simplest way to do this is to use a farm administrator account, but you can use any account as long as all of the following criteria are met:

  • The account specified in Windows Task Scheduler is an administrative account.

  • The account specified for the Central Administration application pool and the account you specify for running the task have read/write access to the backup location.

  • The backup location is accessible from the server running SharePoint Foundation 2010, SQL Server, and Team Foundation Server.

To schedule backups for the farm

  1. Choose Start, choose Administrative Tools, and then choose Task Scheduler.

  2. In the Actions pane, choose Create Task.

  3. On the General tab, in Name, specify a name for this task, such as “Full Farm Backup.” In Security options, specify the user account under which to run the task if it is not the account you are using. Then choose Run whether user is logged on or not, and select the Run with highest privileges check box.

  4. On the Actions tab, choose New.

    In the New Action window, in Action, choose Start a program. In Program/script, specify the full path and file name of the full farm backup .PS1 script you created, and then choose OK.

  5. On the Triggers tab, choose New.

    In the New Trigger window, in Settings, specify the schedule for performing the full backup of the farm. Make sure that this schedule exactly matches the schedule for full backups of the Team Foundation Server databases, including the recurrence schedule, and then choose OK.

  6. Review all the information in the tabs, and then choose OK to create the task for the full backup for the farm.

  7. In the Actions pane, choose Create Task.

  8. On the General tab, in Name, specify a name for this task, such as “Differential Farm Backup.” In Security options, specify the user account under which to run the task if it is not the account you are using, choose Run whether user is logged on or not, and select the Run with highest privileges check box.

  9. On the Actions tab, choose New.

    In the New Action window, in Action, choose Start a program. In Program/script, specify the full path and file name of the differential farm backup .PS1 script you created, and then choose OK.

  10. On the Triggers tab, choose New.

    In the New Trigger window, in Settings, specify the schedule for performing the differential backup of the farm. Make sure that this schedule exactly matches the schedule for differential backups of the Team Foundation Server databases, including the recurrence schedule, and then choose OK.

  11. Review all the information in the tabs, and then choose OK to create the task for the differential backup for the farm.

  12. In Active Tasks, refresh the list and make sure that your new tasks are scheduled appropriately, and then close Task Scheduler. For more information about creating and scheduling tasks in Task Scheduler, see Task Scheduler How To.


        

            

If you use Visual Studio Lab Management in your deployment of Team Foundation Server, you must also back up each machine and component that Lab Management uses. The hosts for the virtual machines and the SCVMM library servers are separate physical computers that are not backed up by default. You must specifically include them when you plan your backup and restoration strategies. The following table summarizes what you should back up whenever you back up Team Foundation Server.

 

Machine | Components

Server that is running System Center Virtual Machine Manager 2008 (SCVMM) R2

  • SQL Server database (user accounts, configuration data)

Physical host for the virtual machines

  • Virtual machines (VMs)

  • Templates

  • Host configuration data (virtual networks)

SCVMM library server

  • Virtual machines

  • Templates

  • Virtual hard disks (VHDs)

  • ISO images

The following table contains tasks and links to procedural or conceptual information about how to back up the additional machines for an installation of Lab Management. You must perform the tasks in the order shown, without skipping any tasks.

To back up the machines that are running any SCVMM components, you must be a member of the Backup Operators group on each machine.

 

Common Tasks

  1. Back up the server that is running System Center Virtual Machine Manager 2008 R2.

  2. Back up the library servers for SCVMM.

  3. Back up each physical host for the virtual machines.

See Also              


            

Tasks

Restore Data to the Same Location                           

Other Resources

Managing Data                           
Managing Team Foundation Server Data-Tier Servers                           
Managing Team Foundation Server                           
Back Up the Reporting Services Encryption Key                           

ADO.NET

Microsoft ADO .NET Step by Step

by Rebecca M. Riordan ISBN: 0735612366

Microsoft Press © 2002 (512 pages)

Learn to use the ADO.NET model to expand on data-bound Windows and Web Forms, as well as how XML and ADO.NET intermingle.

Table of Contents

Microsoft ADO.NET Step by Step

Introduction

Part I – Getting Started with ADO.NET

Chapter 1 – Getting Started with ADO.NET

Part II – Data Providers

Chapter 2 – Creating Connections

Chapter 3 – Data Commands and the DataReader

Chapter 4 – The DataAdapter

Chapter 5 – Transaction Processing in ADO.NET

Part III – Manipulating Data

Chapter 6 – The DataSet

Chapter 7 – The DataTable

Chapter 8 – The DataView

Part IV – Using the ADO.NET Objects

Chapter 9 – Editing and Updating Data

Chapter 10 – ADO.NET Data-Binding in Windows Forms

Chapter 11 – Using ADO.NET in Windows Forms

Chapter 12 – Data-Binding in Web Forms

Chapter 13 – Using ADO.NET in Web Forms

Part V – ADO.NET and XML

Chapter 14 – Using the XML Designer

Chapter 15 – Reading and Writing XML

Chapter 16 – Using ADO in the .NET Framework

Index

List of Tables

List of Sidebars

Microsoft ADO.NET Step by Step

PUBLISHED BY

Microsoft Press

A Division of Microsoft Corporation

One Microsoft Way

Redmond, Washington 98052-6399

Copyright © 2002 by Rebecca M. Riordan

All rights reserved. No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher.

Microsoft ADO.Net – Step by Step 1

Library of Congress Cataloging-in-Publication Data

Riordan, Rebecca.

Microsoft ADO.NET Step by Step / Rebecca M. Riordan.

p. cm.

Includes index.

ISBN 0-7356-1236-6

1. Database design. 2. Object oriented programming (Computer science) 3. ActiveX. I. Title.

QA76.9.D26 R56 2002

005.75’85—dc21 2001054641

Printed and bound in the United States of America.

1 2 3 4 5 6 7 8 9 QWE 7 6 5 4 3 2

Distributed in Canada by Penguin Books Canada Limited.

A CIP catalogue record for this book is available from the British Library.

Microsoft Press books are available through booksellers and distributors worldwide. For further information about international editions, contact your local Microsoft Corporation office or contact Microsoft Press International directly at fax (425) 936-7329. Visit our Web site at http://www.microsoft.com/mspress. Send comments to mspinput@microsoft.com.

ActiveX, IntelliSense, Internet Explorer, Microsoft, Microsoft Press, the .NET logo, Visual Basic, Visual C#, and Visual Studio are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Other product and company names mentioned herein may be the trademarks of their respective owners.

The example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious. No association with any real company, organization, product, domain name, e-mail address, logo, person, place, or event is intended or should be inferred.

Acquisitions Editor: Danielle Bird

Project Editor: Rebecca McKay

Body Part No. X08-05018

To my very dear friend, Stephen Jeffries

About the Author

Rebecca M. Riordan

With almost 20 years’ experience in software design, Rebecca M. Riordan has earned

an international reputation as an analyst, systems architect, and designer of database

and work-support systems.

She works as an independent consultant, providing systems design and consulting

expertise to an international client base. In 1998, she was awarded MVP status by

Microsoft in recognition of her work in Internet newsgroups. Microsoft ADO.NET Step by

Step is her third book for Microsoft Press.

Rebecca currently resides in New Mexico. She can be reached at

rebeccar@attglobal.net.

Introduction

Overview

ADO.NET is the data access component of Microsoft’s new .NET Framework. Microsoft

bills ADO.NET as “an evolutionary improvement” over previous versions of ADO, a claim

that has been hotly debated since its announcement. It is certainly true that the

ADO.NET object model bears very little relationship to earlier versions of ADO.


In fact, whether you decide to love it or hate it, one fact about the .NET Framework

seems undeniable: it levels the playing ground. Whether you’ve been at this computer

game longer than you care to talk about or you’re still sorting out your heaps and stacks,

learning the .NET Framework will require a major investment. We’re all beginners now.

So welcome to Microsoft ADO.NET Step by Step. Through the exercises in this book, I

will introduce you to the ADO.NET object model, and you’ll learn how to use that model

in developing data-bound Windows Forms and Web Forms. In later topics, we’ll look at

how ADO.NET interacts with XML and how to access older versions of ADO from the

.NET environment.

Since we’re all beginners, an exhaustive treatment would be, well, exhausting, so this

book is necessarily limited in scope. My goal is to provide you with an understanding of

the ADO.NET objects—what they are and how they work together. So fair warning: this

book will not make you an expert in ADO.NET. (How I wish it were that simple!)

What this book will give you is a road map, a fundamental understanding of the

environment, from which you will be able to build expertise. You’ll know what you need to

do to start building data applications. The rest will come with time and experience. This

book is a place to start.

Although I’ve pointed out language differences where they might be confusing, in order

to keep the book within manageable proportions I’ve assumed that you are already

familiar with Visual Basic .NET or Visual C# .NET. If you’re completely new to the .NET

environment, you might want to start with Microsoft Visual Basic .NET Step by Step by

Michael Halvorson (Microsoft Press, 2002) or Microsoft Visual C# .NET Step by Step by

John Sharp and Jon Jagger (Microsoft Press, 2002), depending on your language of

choice.

The exercises that include programming are provided in both Microsoft Visual Basic and

Microsoft C#. The two versions are identical (except for the difference between the

languages), so simply choose the exercise in the language of your choice and skip the

other version.

Conventions and Features in This Book

You’ll save time by understanding, before you start the lessons, how this book displays

instructions, keys to press, and so on. In addition, the book provides helpful features that

you might want to use.

§ Numbered lists of steps (1, 2, and so on) indicate hands-on exercises. A

rounded bullet indicates an exercise that has only one step.

§ Text that you are to type appears in bold.

§ Terms are displayed in italic the first time they are defined.

§ A plus sign (+) between two key names means that you must press those

keys at the same time. For example, “Press Alt+Tab” means that you hold down

the Alt key while you press Tab.

§ Notes labeled “tip” provide additional information or alternative methods for a

step.

§ Notes labeled “important” alert you to essential information that you should

check before continuing with the lesson.

§ Notes labeled “ADO” point out similarities and differences between ADO and

ADO.NET.

§ Notes labeled “Roadmap” refer to places where topics are discussed in depth.

§ You can learn special techniques, background information, or features related

to the information being discussed by reading the shaded sidebars that appear

throughout the lessons. These sidebars often highlight difficult terminology or

suggest future areas for exploration.

§ You can get a quick reminder of how to perform the tasks you learned by

reading the Quick Reference at the end of a lesson.


Using the ADO.NET Step by Step CD-ROM

The Microsoft ADO.NET Step by Step CD-ROM inside the back cover contains practice

files that you’ll use as you complete the exercises in the book. By using the files, you

won’t need to waste time creating databases and entering sample data. Instead, you can

concentrate on how to use ADO.NET. With the files and the step-by-step instructions in

the lessons, you’ll also learn by doing, which is an easy and effective way to acquire and

remember new skills.

System Requirements

In order to complete the exercises in this book, you will need the following software:

§ Microsoft Windows 2000 or Microsoft Windows XP

§ Microsoft Visual Studio .NET

§ Microsoft SQL Server Desktop Engine (included with Visual Studio .NET)

or Microsoft SQL Server 2000

This book and practice files were tested primarily using Windows 2000 and Visual Studio

.NET Professional; however, other editions of Visual Studio .NET, such as Visual Basic

.NET Standard and Visual C# .NET Standard, should also work.

Since Windows XP Home Edition does not include Internet Information Services (IIS),

you won’t be able to create local ASP.NET Web applications (discussed in chapters 12

and 13) using Windows XP Home Edition. Windows 2000 and Windows XP Professional

do include IIS.

Installing the Practice Files

Follow these steps to install the practice files on your computer so that you can use them

with the exercises in this book.

1. Insert the CD in your CD-ROM drive.

A Start menu should appear automatically. If this menu does not appear,

double-click StartCD.exe at the root of the CD.

2. Click the Getting Started option.

3. Follow the instructions in the Getting Started document to install the

practice files and set up SQL Server 2000 or the Microsoft SQL Server

Desktop Engine (MSDE).

Using the Practice Files

The practice files contain the projects and completed solutions for the ADO.NET Step by

Step book. Folders marked ‘Finish’ contain working solutions. Folders marked ‘Start’

contain the files needed to perform the exercises in the book.

Uninstalling the Practice Files

Follow these steps to remove the practice files from your computer.

1. Insert the CD in your CD-ROM drive.

A Start menu should appear automatically. If this menu does not appear,

double-click StartCD.exe at the root of the CD.

2. Click the Uninstall Practice Files option.

3. Follow the steps in the Uninstall Practice Files document to remove

the practice files.

Need Help with the Practice Files?

Every effort has been made to ensure the accuracy of the book and the contents of this

CD-ROM. As corrections or changes are collected for this book, they will be placed on a

Web page and any errata will also be integrated into the Microsoft online Help tool

known as the Knowledge Base. To view the list of known corrections for this book, visit

the following page:

http://support.microsoft.com/support/misc/kblookup.asp?id=Q314759


To search the Knowledge Base and review your support options for the book or CD-ROM,

visit the Microsoft Press Support site:

http://www.microsoft.com/mspress/support/

If you have comments, questions, or ideas regarding the book or this CD-ROM, or

questions that are not answered by searching the Knowledge Base, please send them to

Microsoft Press via e-mail to:

mspinput@microsoft.com

or by postal mail to:

Microsoft Press

Attn: Microsoft ADO.NET Step by Step Editor

One Microsoft Way

Redmond, WA 98052-6399

Please note that product support is not offered through the above addresses.

Part I: Getting Started with ADO.NET

Chapter List

Chapter 1: Getting Started with ADO.NET

Chapter 1: Getting Started with ADO.NET

Overview

In this chapter, you’ll learn how to:

§ Identify the primary objects that make up Microsoft ADO.NET and how

they interact

§ Create Connection and DataAdapter objects by using the DataAdapter

Configuration Wizard

§ Automatically generate a DataSet

§ Bind control properties to a DataSet

§ Load data into a DataSet at run time

Like other components of the .NET Framework, ADO.NET consists of a set of objects

that interact to provide the required functionality. Unfortunately, this can make learning to

use the object model frustrating—you feel like you need to learn all of it before you can

understand any of it.

The solution to this problem is to start by building a conceptual framework. In other

words, before you try to learn the details of how any particular object functions, you need

to have a general understanding of what each object does and how the objects interact.

That’s what we’ll do in this chapter. We’ll start by looking at the main ADO.NET objects

and how they work together to get data from a physical data store, to the user, and back

again. Then, just to whet your appetite, we’ll work through building a set of objects and

binding them to a simple data form.

On the Fundamental Interconnectedness of All Things

In later chapters in this section, we’ll examine each object in the ADO.NET object model

in turn. At least in theory. In reality, because the objects are so closely interlinked, it’s

impossible to look at any single object in isolation.


Roadmap A roadmap note like this will point you to the discussion of a

property or method that hasn’t yet been introduced.

Where it’s necessary to use a method or property that we haven’t yet examined, I’ll use

roadmap notes, like the one in the margin next to this paragraph, to point you to the

chapter where they are discussed.

The ADO.NET Object Model

The figure below shows a simplified view of the primary objects in the ADO.NET object

model. Of course, the reality of the class library is more complicated, but we’ll deal with

the intricacies later. For now, it’s enough to understand what the primary objects are and

how they typically interact.

The ADO.NET classes are divided into two components: the Data Providers (sometimes

called Managed Providers), which handle communication with a physical data store, and

the DataSet, which represents the actual data. Either component can communicate with

data consumers such as WebForms and WinForms.

Data Providers

The Data Provider components are specific to a data source. The .NET Framework

includes two Data Providers: a generic provider that can communicate with any OLE DB

data source, and a SQL Server provider that has been optimized for Microsoft SQL

Server versions 7.0 and later. Data Providers for other databases such as Oracle and

DB2 are expected to become available, or you can write your own. (You may be relieved

to know that we won’t be covering the creation of Data Providers in this book.)

The two Data Providers included in the .NET Framework contain the same objects,

although their names and some of their properties and methods are different. To

illustrate, the SQL Server provider objects begin with Sql (for example,

SqlConnection), while the OLE DB objects begin with OleDb (for example,

OleDbConnection).

The Connection object represents the physical connection to a data source. Its

properties determine the data provider (in the case of the OLE DB Data Provider), the

data source and database to which it will connect, and the string to be used during

connecting. Its methods are fairly simple: You can open and close the connection,

change the database, and manage transactions.
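Although we won't write connection code by hand until Chapter 2, the Connection object's open/close lifecycle described above can be sketched as follows. This is an illustrative sketch only; the connection string is a placeholder, not one taken from the book's exercises.

```csharp
using System;
using System.Data.SqlClient;

class ConnectionSketch
{
    static void Main()
    {
        // Placeholder connection string; substitute your own server details.
        SqlConnection connection = new SqlConnection(
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI");

        connection.Open();      // establish the physical connection
        // ... issue commands against the data source here ...
        connection.Close();     // release the connection when finished
    }
}
```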

The Command object represents a SQL statement or stored procedure to be executed at

the data source. Command objects can be created and executed independently against

a Connection object, and they are used by DataAdapter objects to handle

communications from a DataSet back to a data source. Command objects can support

SQL statements and stored procedures that return single values, one or more sets of

rows, or no values at all.


A DataReader is a fast, low-overhead object for obtaining a forward-only, read-only

stream of data from a data source. A DataReader cannot be created directly in code; it is

created only by calling the ExecuteReader method of a Command.
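To make the Command/DataReader relationship concrete, here is a minimal sketch. The connection string and query are illustrative assumptions rather than code from the book's exercises, and running it requires a live SQL Server with the Northwind database.

```csharp
using System;
using System.Data.SqlClient;

class ReaderSketch
{
    static void Main()
    {
        // Illustrative connection string and query; adjust for your environment.
        SqlConnection connection = new SqlConnection(
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI");
        SqlCommand command = new SqlCommand(
            "SELECT LastName, FirstName FROM Employees", connection);

        connection.Open();
        SqlDataReader reader = command.ExecuteReader(); // the only way to obtain a DataReader
        while (reader.Read())                           // forward-only, read-only traversal
        {
            Console.WriteLine("{0}, {1}", reader["LastName"], reader["FirstName"]);
        }
        reader.Close();
        connection.Close();
    }
}
```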

The DataAdapter is functionally the most complex object in a Data Provider. It provides

the bridge between a Connection and a DataSet. The DataAdapter contains four

Command objects: the SelectCommand, UpdateCommand, InsertCommand, and

DeleteCommand. The DataAdapter uses the SelectCommand to fill a DataSet and uses

the remaining three commands to transmit changes back to the data source, as required.

ADO In functional terms, the Connection and Command objects are roughly
equivalent to their ADO counterparts (the major difference being the lack
of support for server-side cursors), while the DataReader functions like a
firehose cursor. The DataAdapter and DataSet have no real equivalent in
ADO.

DataSets

The DataSet is a memory-resident representation of data. Its structure is shown in the

figure below. The DataSet can be considered a somewhat simplified relational database,

consisting of tables and their relations. It’s important to understand, however, that the

DataSet is always disconnected from the data source—it doesn’t “know” where the data

it contains came from, and in fact, it can contain data from multiple sources.

The DataSet is composed of two primary objects: the DataTableCollection and the

DataRelationCollection. The DataTableCollection contains zero or more DataTable

objects, which are in turn made up of three collections: Columns, Rows, and Constraints.

The DataRelationCollection contains zero or more DataRelations.

The DataTable’s Columns collection defines the columns that compose the DataTable.

In addition to ColumnName and DataType properties, a DataColumn’s properties allow

you to define such things as whether or not it allows nulls (AllowDBNull), its maximum

length (MaxLength), and even an expression that is used to calculate its value

(Expression).

The DataTable’s Rows collection, which may be empty, contains the actual data as

defined by the Columns collection. For each Row, the DataTable maintains its original,

current, and proposed values. As we’ll see, this ability greatly simplifies certain kinds of

programming tasks.
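Because the DataSet is disconnected, you can build one entirely in code without touching a data source. The following sketch uses invented table and column names, purely for illustration; it shows columns being defined and the original/current row values just described.

```csharp
using System;
using System.Data;

class DataSetSketch
{
    static void Main()
    {
        // Invented names, purely for illustration.
        DataSet dataSet = new DataSet("Sample");
        DataTable orders = dataSet.Tables.Add("Orders");

        DataColumn idColumn = orders.Columns.Add("OrderID", typeof(int));
        idColumn.AllowDBNull = false;                // column-level rule

        orders.Columns.Add("Customer", typeof(string)).MaxLength = 40;

        DataRow row = orders.Rows.Add(new object[] { 1, "Around the Horn" });
        orders.AcceptChanges();                      // current values become the "original" ones

        row["Customer"] = "Alfreds Futterkiste";     // edit the row
        // The pre-edit value survives alongside the current one:
        Console.WriteLine(row["Customer", DataRowVersion.Original]);
        Console.WriteLine(row["Customer", DataRowVersion.Current]);
    }
}
```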


ADO The ADO.NET DataTable provides essentially the same

functionality as the ADO Recordset object, although it obviously

plays a very different role in the object model.

The DataTable’s Constraints collection contains zero or more Constraints. Just as in a

relational database, Constraints are used to maintain the integrity of the data. ADO.NET

supports two types of constraints: ForeignKeyConstraints, which maintain relational

integrity (that is, they ensure that a child row cannot be orphaned), and

UniqueConstraints, which maintain data integrity (that is, they ensure that duplicate rows

cannot be added to the table). In addition, the PrimaryKey property of the DataTable

ensures entity integrity (that is, it enforces the uniqueness of each row).

Finally, the DataSet’s DataRelationCollection contains zero or more DataRelations.

DataRelations provide a simple programmatic interface for navigating from a master row

in one table to the related rows in another. For example, given an Order, a DataRelation

allows you to easily extract the related OrderDetails rows. (Note, however, that the

DataRelation itself doesn’t enforce relational integrity. A Constraint is used for that.)
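The Order/OrderDetails navigation just described can be sketched in code. The table shapes here are simplified assumptions, not the actual Northwind schema; note that adding a DataRelation also creates, by default, the ForeignKeyConstraint that does the enforcing.

```csharp
using System;
using System.Data;

class RelationSketch
{
    static void Main()
    {
        DataSet dataSet = new DataSet();

        DataTable orders = dataSet.Tables.Add("Orders");
        orders.Columns.Add("OrderID", typeof(int));
        orders.PrimaryKey = new DataColumn[] { orders.Columns["OrderID"] }; // entity integrity

        DataTable details = dataSet.Tables.Add("OrderDetails");
        details.Columns.Add("OrderID", typeof(int));
        details.Columns.Add("ProductName", typeof(string));

        // The DataRelation provides navigation; by default, adding it also
        // creates a ForeignKeyConstraint that enforces relational integrity.
        DataRelation relation = dataSet.Relations.Add("OrderDetailRows",
            orders.Columns["OrderID"], details.Columns["OrderID"]);

        orders.Rows.Add(new object[] { 1 });
        details.Rows.Add(new object[] { 1, "Chai" });
        details.Rows.Add(new object[] { 1, "Chang" });

        // Navigate from a master row to its related rows.
        DataRow[] children = orders.Rows[0].GetChildRows(relation);
        Console.WriteLine(children.Length);
    }
}
```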

Binding Data to a Simple Windows Form

The process of connecting data to a form is called data binding. Data binding can be

performed in code, but the Microsoft Visual Studio .NET designers make the process

very simple. In this chapter, we’ll use the designers and the wizards to quickly create a

simple data bound Windows form.

Important If you have not yet installed this book’s practice files, work

through “Installing and Using the Practice Files” in the

Introduction, and then return to this chapter.

Adding a Connection and DataAdapter to a Form

Roadmap We’ll examine the Connection object in Chapter 2 and the

DataAdapter in Chapter 4.

The first step in binding data is to create the Data Provider objects. Visual Studio

provides a DataAdapter Configuration Wizard to make this process simple. Once the

DataAdapter has been added, you can check that its configuration is correct by using the

DataAdapter Preview window within Visual Studio.

Add a Connection to a Windows Form

1. Open the EmployeesForm project from the Visual Studio Start Page.

2. Double-click Employees.vb (or Employees.cs if you’re using C#) in the

Solution Explorer to open the form.

Visual Studio displays the form in the form designer.

3. Drag a SqlDataAdapter onto the form from the Data tab of the

Toolbox.

Visual Studio displays the first page of the DataAdapter Configuration Wizard.


4. Click Next.

The DataAdapter Configuration Wizard displays a page asking you to choose

a connection.

5. Click New Connection.

The Data Link Properties dialog box opens.


6. Specify the name of your server, the appropriate logon information,

select the Northwind database, and then click Test Connection.

The DataAdapter Configuration Wizard displays a message indicating that the

connection was successful.

Tip If you’re unsure how to complete step 6, check with your system

administrator.

7. Click OK to close the message, click OK to close the Data Link

Properties dialog box, and then click Next to display the next page of

the DataAdapter Configuration Wizard.

The DataAdapter Configuration Wizard displays a page requesting that you

choose a query type.


8. Verify that the Use SQL statements option is selected, and then click

Next.

The DataAdapter Configuration Wizard displays a page requesting the SQL

statement(s) to be used.

9. Click Query Builder.

The DataAdapter Configuration Wizard opens the Query Builder and displays

the Add Table dialog box.


10. Select the Employees table, click Add, and then click Close.

The Add Table dialog box closes, and the Employees table is added to the

Query Builder.

11. Add the following fields to the query by selecting the check box next to

the field name in the top pane: EmployeeID, LastName, FirstName,

Title, TitleOfCourtesy, HireDate, Notes.

The Query Builder creates the SQL command.


12. Click OK to close the Query Builder, and then click Next.

The DataAdapter Configuration Wizard displays a page showing the results of

adding the Connection and DataAdapter objects to the form.

13. Click Finish to close the DataAdapter Configuration Wizard.

The DataAdapter Configuration Wizard creates and configures a

SqlDataAdapter and a SqlConnection, and then adds them to the

Component Designer.

Creating DataSets

Roadmap We’ll examine the DataSet in Chapter 6.


The Connection and DataAdapter objects handle the physical communication with the

data store, but you must also create a memory-resident representation of the actual data

that will be bound to the form. You can bind a control to almost any structure that

contains data, including arrays and collections, but you’ll typically use a DataSet.

As with the Data Provider objects, Visual Studio provides a mechanism for automating

this process. In fact, it can be done with a simple menu choice, although because Visual

Studio exposes the code it creates, you can further modify the basic DataSet

functionality that Visual Studio provides.

Create a DataSet

1. On the Data menu, choose Generate Dataset.

The Generate Dataset dialog box opens.

2. In the New text box, type dsEmployees.


3. Click OK.

Visual Studio creates the DataSet class and adds an instance of it to the

bottom pane of the forms designer.

Simple Binding Controls to a DataSet

The .NET Framework supports two kinds of binding: simple and complex. Simple binding

occurs when a single data element, such as a date, is bound to a control. Complex

binding occurs when a control is bound to multiple data values, for example, binding a

list box to a DataSet that contains a list of Order Numbers.

Roadmap We’ll examine simple and complex data binding in more

detail in Chapters 10 and 11.

Almost any property of a control can support simple binding, but only a subset of

Windows and WebForms controls (such as DataGrids and ListBoxes) can support

complex binding.
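The same simple binding that the exercise below performs through the Properties window can also be written in code. This fragment assumes the exercise's form class, with its txtTitle control and dsEmployees1 DataSet already present, and would go in the form's constructor after InitializeComponent:

```csharp
// Inside the form class; txtTitle and dsEmployees1 are the exercise's
// control and DataSet (assumed to exist on the form).
txtTitle.DataBindings.Add("Text", dsEmployees1, "Employees.Title");
```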

Bind the Text Property of a Control to a DataSet

1. Click the txtTitle text box in the forms designer to select it.

2. Click the plus sign next to DataBindings to expand the DataBindings

properties.

3. Click the drop-down arrow for the Text property.

Visual Studio displays a list of available data sources.

4. In the list of available data sources for the Text property, click the plus

sign next to the DsEmployees1 data source, and then click the plus

sign next to the Employees DataTable.


5. Click the TitleOfCourtesy column to select it.

6. Repeat steps 1 through 5 to bind the Text property of the remaining

controls to the columns of the Employees DataTable, as shown in the

following table.

Control          DataTable Column

lblEmployeeID    EmployeeID

txtGivenName     FirstName

txtSurname       LastName

txtHireDate      HireDate

txtPosition      Title

txtNotes         Notes

Loading Data into the DataSet

We now have all the components in place for manipulating the data from our data

source, but we have one task remaining: We must actually load the data into the

DataSet.

If you’re used to working with data bound forms in environments such as Microsoft

Access, it may come as a surprise that you need to do this manually. Remember,

however, that the ADO.NET architecture has been designed to operate without a

permanent connection to the database. In a disconnected environment, it’s appropriate,


and indeed necessary, that the management of the connection be under programmatic

control.

Roadmap The DataAdapter’s Fill method is discussed in Chapter 4.

The DataAdapter’s Fill method is used to load data into the DataSet. The DataAdapter

provides several versions of the Fill method. The simplest version takes the name of a

DataSet as a parameter, and that’s the one we’ll use in the exercise below.

Load Data into the DataSet

Visual Basic .NET

1. Press F7 to view the code for the form.

2. Expand the region labeled “Windows Form Designer generated code”

and navigate to the New Sub.

3. Add the following line of code just before the end of the procedure:

SqlDataAdapter1.Fill(DsEmployees1)


This line calls the DataAdapter’s Fill method, passing the name of the

DataSet to be filled.

4. Press F5 to build and run the program.

Visual Studio displays the form with the first row displayed.

5. Admire your data bound form for a few minutes (see, that wasn’t so

hard!), and then close the form.

Visual C# .NET

1. Press F7 to view the code for the form.

2. Add the following line of code to the end of the Employees procedure:

sqlDataAdapter1.Fill(dsEmployees1);

Roadmap The DataAdapter’s Fill method is discussed in Chapter 4.

This line calls the DataAdapter’s Fill method, passing the name of the

DataSet to be filled.

3. Press F5 to build and run the program.

Visual Studio displays the form with the first row displayed.


4. Admire your data bound form for a few minutes (see, that wasn’t so

hard!), and then close the form.

Chapter 1 Quick Reference

§ To add a Connection and DataAdapter to a form by using the DataAdapter
Configuration Wizard: drag a DataAdapter object onto the form and follow
the wizard instructions.

§ To use Visual Studio to automatically generate a typed DataSet: choose
Generate Dataset from the Data menu, complete the Generate Dataset dialog
box as required, and then click OK.

§ To simple bind properties of a control to a data source: in the
Properties window DataBindings section, select the data source, DataTable,
and column.

§ To load data into a DataSet: use the Fill method of the DataAdapter. For
example: myDataAdapter.Fill(myDataSet)

Part II: Data Providers

Chapter 2: Creating Connections

Chapter 3: Data Commands and the DataReader

Chapter 4: The DataAdapter

Chapter 5: Transaction Processing in ADO.NET

Chapter 2: Creating Connections

Overview

In this chapter, you’ll learn how to:

§ Add an instance of a Server Explorer Connection to a form

§ Create a Connection using code

§ Use Connection properties

§ Use an intermediary variable to reference multiple types of Connections

§ Bind Connection properties to form controls

§ Open and close Connections


§ Respond to a Connection.StateChange event

In the previous chapter, we took a brief tour through the ADO.NET object model. In this

chapter, we’ll begin to examine the objects in detail, starting with the lowest level object,

the Connection.

Understanding Connections

Connections are responsible for handling the physical communication between a data

store and a .NET application. Because the Connection object is part of a Data Provider,

each Data Provider implements its own version. The two Data Providers supported by

the .NET Framework implement the OleDbConnection in the System.Data.OleDB

namespace and the SqlConnection in the System.Data.SqlClient namespace,

respectively.

Note It’s important to understand that if you’re using a Connection

object implemented by another Data Provider, the details of the

implementation may vary from those described here.

The OleDbConnection, not surprisingly, uses OLE DB and can be used with any OLE DB

provider, including Microsoft SQL Server. The SqlConnection goes directly to SQL

Server without going through the OLE DB provider and so is more efficient.

ADO Since ADO.NET merges the ADO object model with OLE DB, it is rarely
necessary to go directly to OLE DB for performance reasons. You might still
need to use OLE DB directly if you need specific functionality that isn't
exposed by ADO.NET, but again, these situations are likely to be rarer than
when using ADO.

Creating Connections

In the previous chapter, we created a Connection object by using the DataAdapter

Configuration Wizard. The Data Form Wizard, accessed by clicking Add Windows Form

on the Project menu, also creates a Connection automatically. In this chapter, we’ll look

at several other methods for creating Connections in Microsoft Visual Studio .NET.

Design Time Connections

Visual Studio’s Server Explorer provides the ability, at design time, to view and maintain

connections to a number of different system services, including event logs, message

queues, and, most important for our purposes, data connections.

Important If you have not yet installed this book’s practice files, work

through ‘Installing and Using the Practice Files’ in the

Introduction and then return to this chapter.

Add a Design Time Connection to the Server Explorer

1. Open the Connection project from the Visual Studio start page or from

the Project menu.

2. Double-click ConnectionProperties.vb (or ConnectionProperties.cs, if

you’re using C#) in the Solution Explorer to open the form.

Visual Studio displays the form in the form designer.


3. Open the Server Explorer.

4. Click the Connect to Database button.

Visual Studio displays the Data Link Properties dialog box.

Tip You can also display the Data Link Properties dialog box by choosing

Connect to Database on the Tools menu.

5. Click the Provider tab and then select Microsoft Jet 4.0 OLE DB

Provider.


6. Click Next.

Visual Studio displays the Connection tab of the dialog box.

7. Click the ellipsis button after Select or enter a database name,

navigate to the folder containing the sample files, and then select the

nwind sample database.

8. Click Open.

Visual Studio creates a Connection string for the database.


9. Click OK.

Visual Studio adds the Connection to the Server Explorer.

10. Right-click the Connection in the Server Explorer, click Rename from

the context menu, and then rename the Connection Access nwind.


Database References

In addition to Database Connections in the Server Explorer, Visual Studio also

supports Database References. Database References are set as part of a Database

Project, which is a special type of project used to store SQL scripts, Data Commands,

and Data Connections.

Database References are created in the Solution Explorer (rather than the Server

Explorer) and, unlike Database Connections defined in the Server Explorer, they are

stored along with the project.

Data connections defined through the Server Explorer become part of your Visual

Studio environment—they will persist as you open and close projects. Database

references, on the other hand, exist as part of a specific project and are only available

as part of the project.

Design time connections aren’t automatically included in any project, but you can drag a

design time connection from the Server Explorer to a form, and Visual Studio will create

a pre-configured Connection object for you.


Add an Instance of a Design Time Connection to a Form

§ Select the Access nwind Connection in the Server Explorer and drag it

onto the Connection Properties form.

Visual Studio adds a pre-configured OleDbConnection to the Component

Designer.

Creating a Connection at Run Time

Using Visual Studio to create form-level Connections is by far the easiest method, but if

you need a Connection that isn’t attached to a form, you can create one at run time in

code.

Note You wouldn’t ordinarily create a form-level Connection object in

code because the Visual Studio designers are easier and just as

effective.

The Connection object provides two overloaded versions of its constructor, giving you

the option of passing in the ConnectionString, as shown in Table 2-1.

Table 2-1: Connection Constructors

Method                   Description

New()                    Creates a Connection with the ConnectionString
                         property set to an empty string

New(ConnectionString)    Creates a Connection with the ConnectionString
                         property specified

The ConnectionString is used by the Connection object to connect to the data source.

We’ll explore it in detail in the next section of this chapter.
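As a quick sketch of the two constructor forms from Table 2-1 (the connection string is again a placeholder, not one from the exercises):

```csharp
using System.Data.SqlClient;

class ConstructorSketch
{
    static void Main()
    {
        // Form 1: default constructor; ConnectionString starts out empty.
        SqlConnection first = new SqlConnection();
        first.ConnectionString =
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI";

        // Form 2: pass the ConnectionString to the constructor directly.
        SqlConnection second = new SqlConnection(
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI");
    }
}
```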

Create a Connection in Code

Visual Basic .NET

1. Display the code for the ConnectionProperties form by pressing F7.

2. Add the following lines after the Inherits statement:

Friend WithEvents SqlDbConnection1 As New _

    System.Data.SqlClient.SqlConnection()

This code creates the new Connection object using the default values.

Visual C# .NET

1. Display the code for the ConnectionProperties form by pressing F7.

2. Add the following lines after the opening bracket of the class

declaration:

internal System.Data.SqlClient.SqlConnection SqlDbConnection1;

This code creates the new Connection object. (For the time being, ignore the

warning that the variable is never assigned to.)

Using Connection Properties

The significant properties of the OleDbConnection and SqlDbConnection objects are

shown in Table 2-2 and Table 2-3, respectively.

Table 2-2: OleDbConnection Properties

Property            Meaning                                                              Default
ConnectionString    The string used to connect to the data source when the Open          Empty
                    method is executed
ConnectionTimeout   The maximum time the Connection object will continue attempting      15 seconds
                    to make the connection before throwing an exception
Database            The name of the database to be opened once a connection is opened    Empty
DataSource          The location and file containing the database                        Empty
Provider            The name of the OLE DB Data Provider                                 Empty
ServerVersion       The version of the server, as provided by the OLE DB Data Provider   Empty
State               A ConnectionState value indicating the current state of the          Closed
                    Connection

Table 2-3: SqlConnection Properties

Property            Meaning                                                              Default
ConnectionString    The string used to connect to the data source when the Open          Empty
                    method is executed
ConnectionTimeout   The maximum time the Connection object will continue attempting      15 seconds
                    to make the connection before throwing an exception
Database            The name of the database to be opened once a connection is opened    Empty
DataSource          The location and file containing the database                        Empty
PacketSize          The size of network packets used to communicate with SQL Server     8192 bytes
ServerVersion       The version of SQL Server being used                                 Empty
State               A ConnectionState value indicating the current state of the          Closed
                    Connection
WorkstationId       A string identifying the database client, or, if that is not         Empty
                    specified, the name of the workstation

As you can see, the two versions of the Connection object expose slightly different sets
of properties: the SqlConnection doesn’t have a Provider property, and the
OleDbConnection doesn’t expose PacketSize or WorkstationId. To make matters worse,
not all OLE DB Data Providers support all of the OleDbConnection properties, and if
you’re working with a custom Data Provider, all bets are off.

What this means in real terms is that we still can’t quite write code that is completely data

source-independent unless we’re prepared to give up the optimization of specific Data

Providers. However, as we’ll see, the problem isn’t as bad as it might at first seem, since

the .NET Framework provides a number of ways to accommodate run-time configuration.

Rather more tedious to deal with are the different names of the objects, but using an

intermediate variable can minimize the impact, as we’ll see later in this chapter.

The ConnectionString Property

The ConnectionString is the most important property of any Connection object. In fact,

the remaining properties are read-only and set by the Connection based on the value

provided for the ConnectionString.

All ConnectionStrings have the same format. They consist of a set of keywords and

values, with the pairs separated by semicolons, and the whole thing is delimited by either

single or double quotes:

“keyword = value;keyword = value;keyword = value”

Keyword names are case-insensitive, but the values may not be, depending on the data

source. The use of single or double quotes follows the normal rules for strings. For

example, if the database name is Becca’s Data, then the ConnectionString must be

delimited by double quotes: “Database=Becca’s Data”. ‘Database = Becca’s Data’ would

cause an error.

If you use the same keyword multiple times, the last instance will be used. For example,

given the ConnectionString “database=Becca’s Data; database=Northwind”, the initial

database will be set to Northwind. The use of multiple instances is perfectly legal; no

syntax error will be generated.
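The last-instance-wins rule is easy to verify with a short sketch. The parser below is not the .NET Framework’s own parsing code, just a minimal illustration of the keyword/value format described above:

```csharp
using System;
using System.Collections.Generic;

class ConnectionStringDemo
{
    // Minimal keyword=value parser illustrating the "last instance wins" rule.
    public static Dictionary<string, string> Parse(string connectionString)
    {
        // Keyword names are case-insensitive, so use an ordinal-ignore-case comparer.
        var result = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        foreach (string pair in connectionString.Split(';'))
        {
            int eq = pair.IndexOf('=');
            if (eq < 0) continue;
            string keyword = pair.Substring(0, eq).Trim();
            string value = pair.Substring(eq + 1).Trim();
            result[keyword] = value;   // a repeated keyword overwrites the earlier one
        }
        return result;
    }

    static void Main()
    {
        var settings = Parse("database=Becca's Data; database=Northwind");
        Console.WriteLine(settings["database"]);   // prints "Northwind"
    }
}
```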

ADO Unlike ADO, the ConnectionString returned by the .NET

Framework is the same as the user-set string, with the exception

that the user name and password are returned only if Persist

Security Info is set to true (it is false by default).


Unfortunately, the format of the ConnectionString is the easy part. It’s determining the

contents that can be difficult because it will always be unique to the Data Provider. You

can always cheat (a little) by creating a design time connection using the Data Link

Properties dialog box, and then copying the values.

The ConnectionString can only be set when the Connection is closed. When it is set, the

Connection object will check the syntax of the string and then set the remaining

properties (which, you’ll remember, are read-only). The ConnectionString is fully

validated when the Connection is opened. If the Connection detects an invalid or

unsupported property, it will generate an exception (either an OleDbException or a

SqlDbException, depending on the object being used).

Setting a ConnectionString Property

In this exercise, we’ll set the ConnectionString for the SqlDbConnection that we created

in the previous exercise. The ConnectionString that your system requires will be different

from the one in my installation. (I have SQL Server installed locally, and my machine

name is BUNNY, for example.)

Fortunately, the DataAdapter Configuration Wizard in Chapter 1 created a design time

Connection for you. If you select that connection in the Server Explorer, you can see the

values in the Properties window. In fact, you can copy and paste the entire

ConnectionString from the Properties window if you want. (If you didn’t do the exercise in

Chapter 1, you can create a design time connection by using the technique described in

the Add a Design Time Connection exercise in this chapter.)

Set a ConnectionString Property

Visual Basic .NET

1. Expand the region labeled “Windows Form Designer generated code”

and navigate to the New Sub.

2. Add the following line to the procedure after the InitializeComponent

call, filling in the ConnectionString values required for your

implementation:

3. Me.SqlDbConnection1.ConnectionString = “<<add your

ConnectionString here>>”

Visual C# .NET

1. Scroll down to the ConnectionProperties Sub.

2. Add the following lines to the procedure after the InitializeComponent

call, filling in the ConnectionString values required for your

implementation:

3. this.SqlDbConnection1 = new

4. System.Data.SqlClient.SqlConnection();

5. this.SqlDbConnection1.ConnectionString =

“<<add your ConnectionString here>>”;

Using Other Connection Properties

With the Connection objects in place, we can now add the code to display the

Connection properties on the sample form. But first, we need to use a little bit of
object-oriented sleight of hand in order to accommodate the two different types of objects.

One method would be to write conditional code. In Visual Basic, this would look like:

If Me.rbOleDB.Checked Then
    Me.txtConnectionString.Text = Me.OleDbConnection1.ConnectionString
    Me.txtDatabase.Text = Me.OleDbConnection1.Database
    Me.txtTimeOut.Text = Me.OleDbConnection1.ConnectionTimeout.ToString()
Else
    Me.txtConnectionString.Text = Me.SqlDbConnection1.ConnectionString
    Me.txtDatabase.Text = Me.SqlDbConnection1.Database
    Me.txtTimeOut.Text = Me.SqlDbConnection1.ConnectionTimeout.ToString()
End If

Another option would be to use compiler constants to conditionally compile code. Again,

in Visual Basic:

#Const SqlVersion = True

#If SqlVersion Then
    Me.txtConnectionString.Text = Me.SqlDbConnection1.ConnectionString
    Me.txtDatabase.Text = Me.SqlDbConnection1.Database
    Me.txtTimeOut.Text = Me.SqlDbConnection1.ConnectionTimeout.ToString()
#Else
    Me.txtConnectionString.Text = Me.OleDbConnection1.ConnectionString
    Me.txtDatabase.Text = Me.OleDbConnection1.Database
    Me.txtTimeOut.Text = Me.OleDbConnection1.ConnectionTimeout.ToString()
#End If

But either option requires a lot of typing, in a lot of places, and can become a

maintenance nightmare. If you only need to access the ConnectionString, Database, and

TimeOut properties (and these are the most common), there’s an easier way.

Connection objects, no matter the Data Provider to which they belong, must implement

the IDbConnection interface, so by declaring a variable as an IDbConnection, we can

use it as an intermediary to access a few of the shared properties.

Create an Intermediary Variable

Visual Basic .NET

1. Declare the variable by adding the following line of code at the

beginning of the class module, under the Connection declarations

we added previously:

Dim myConnection As System.Data.IDbConnection

2. Add procedures to set the value of the myConnection variable when

the user changes their choice in the Connection Type group box. Do

that by using the CheckedChanged event of the two Radio Buttons.

Select the rbOleDB control in the Class Name box of the editor and the

CheckedChanged event in the Method Name box.

Visual Studio adds the CheckedChanged event handler template to the class.

3. Add the following assignment statement to the procedure:

myConnection = Me.OleDbConnection1

4. Repeat steps 2 and 3 for the rbSql radio button, substituting the

SqlDbConnection object:

5. myConnection = Me.SqlDbConnection1


Visual C# .NET

1. Declare the variable by adding the following line of code at the

beginning of the class module, under the Connection declaration we

added previously:

private System.Data.IDbConnection myConnection;

2. Add procedures to set the value of the myConnection variable when

the user changes their choice in the Connection Type group box. Do

that by using the CheckedChanged event of the two radio buttons.

Add the following event handlers to the code window below the Dispose

procedure:

private void rbOleDB_CheckChanged(object sender, EventArgs e)

{

myConnection = this.oleDbConnection1;

}

private void rbSQL_CheckChanged (object sender, EventArgs e)

{

myConnection = this.SqlDbConnection1;

}

3. Connect the event handlers to the actual radio button events. Add the

following code to the end of the ConnectionProperties sub:

4. this.rbOleDB.CheckedChanged += new

5. EventHandler(this.rbOleDB_CheckChanged);

6. this.rbSQL.CheckedChanged += new

EventHandler(this.rbSQL_CheckChanged);

Binding Connection Properties to Form Controls

Now that we have the intermediary variable in place, we can add the code to display the

Connection (or rather, the IDbConnection properties) in the control:

Bind Connection Properties to Form Controls

Visual Basic .NET

1. Add the following procedure to the class module:

2. Private Sub RefreshValues()

3. Me.txtConnectionString.Text =

Me.myConnection.ConnectionString

4. Me.txtDatabase.Text = Me.myConnection.Database

5. Me.txtTimeOut.Text = Me.myConnection.ConnectionTimeout

6. End Sub

7. Add a call to the RefreshValues procedure to the end of each of the

CheckedChanged event handlers.

8. Save and run the program by pressing F5. Choose each of the

Connections in turn to confirm that their properties are displayed in

the text boxes.


9. Close the application.

Visual C# .NET

1. Add the following procedure to the class module below the

CheckChanged event handlers:

2. private void RefreshValues()

3. {

4. this.txtConnectionString.Text =

this.myConnection.ConnectionString;

5. this.txtDatabase.Text = this.myConnection.Database;

6. this.txtTimeOut.Text =

this.myConnection.ConnectionTimeout.ToString();

}

7. Add a call to the RefreshValues procedure to the end of each of the

CheckedChanged event handlers.

8. Save and run the program by pressing F5. Choose each of the

Connections in turn to confirm that their properties are displayed in

the text boxes.

9. Close the application.

Using Dynamic Properties

Another way to handle ConnectionString configurations is to use .NET Framework

dynamic properties. When an application is deployed, dynamic properties are stored in

an external configuration file, allowing them to be easily changed.
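For example, a deployed Windows Forms project that uses dynamic properties reads them from an app.config file next to the executable. A sketch of the shape, assuming the form and Connection names used in this chapter (Visual Studio generates the key name from your own form and object names, so yours will differ; the server name and catalog are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- Hypothetical key; the value can be edited after deployment without recompiling. -->
    <add key="ConnectionProperties.SqlDbConnection1.ConnectionString"
         value="Data Source=BUNNY;Initial Catalog=Northwind;Integrated Security=True" />
  </appSettings>
</configuration>
```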


Connection Methods

Both the SqlConnection and OleDbConnection objects expose the same set of methods,

as shown in Table 2-4.

Table 2-4: Connection Methods

Method             Description
BeginTransaction   Begins a database transaction
ChangeDatabase     Changes the current database on an open Connection
Close              Closes the connection to the data source
CreateCommand      Creates and returns a Data Command associated with the Connection
Open               Establishes a connection to the data source

Roadmap We’ll examine transaction processing in Chapter 5.

The Connection methods that you will use most often are Open and Close, which do

exactly what you would expect them to—they open and close the connection. The

BeginTransaction method begins transaction processing for a Connection, as we’ll see in

Chapter 5.

Roadmap We’ll examine Data Commands in Chapter 3.

The CreateCommand method can be used to create an ADO.NET Data Command

object. We’ll examine this method in Chapter 3.

Opening and Closing Connections

The Open and Close methods are invoked automatically by the two objects that use a

Connection, the DataAdapter and Data Command. You can also invoke them explicitly in

code, if required.

Roadmap We’ll examine the DataAdapter in Chapter 4.

If the Open method is invoked on a Connection by the DataAdapter or a Data Command,

these objects will leave the Connection in the state in which they found it. If the

Connection was open when a DataAdapter.Fill method is invoked, for example, it will

remain open when the Fill operation is complete. On the other hand, if the Connection is

closed when the Fill method is invoked, the DataAdapter will close it upon completion.

If you invoke the Open method explicitly, the data source connection will remain open

until it is explicitly closed. It will not be closed automatically, even if the Connection

object goes out of scope.


Important You must always explicitly invoke a Close method when you

have finished using a Connection object, and for scalability

and performance purposes, you should call Close as soon as

possible after you’ve completed the operations on the

Connection.
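In C#, the usual way to guarantee the Close call is a try/finally block, or, equivalently, a using statement, since the Connection classes implement IDisposable. A sketch, assuming a valid connection string for your own server (this code needs a reachable SQL Server instance to actually run):

```csharp
using System.Data.SqlClient;

class CloseDemo
{
    static void Work(string connectionString)
    {
        // Pattern 1: explicit try/finally.
        SqlConnection cn = new SqlConnection(connectionString);
        try
        {
            cn.Open();
            // ... execute commands against cn here ...
        }
        finally
        {
            // Safe to call even if Open failed; returns the connection to the pool.
            cn.Close();
        }

        // Pattern 2: equivalent and more idiomatic; Dispose calls Close automatically.
        using (SqlConnection cn2 = new SqlConnection(connectionString))
        {
            cn2.Open();
            // ... work with cn2 ...
        }
    }
}
```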

Connection Pooling

Although it’s easiest to think of Open and Close methods as discrete operations, in fact

the .NET Framework pools connections to improve performance. The specifics of the

connection pooling are determined by the Data Provider.

The OLE DB Data Provider automatically uses OLE DB connection pooling; beyond the
OLE DB Services keyword in the ConnectionString, you have little programmatic control
over the process. The SQL Server Data Provider uses implicit pooling by default, based
on an exact match in the connection string, and it supports some additional keywords in
the ConnectionString (such as Pooling, Min Pool Size, and Max Pool Size) to control
pooling. See online help for more details.
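For the SQL Server Data Provider, pooling is therefore configured directly in the ConnectionString. A sketch of the relevant keywords (server name and values are illustrative, not recommendations):

```
Data Source=BUNNY;Initial Catalog=Northwind;Integrated Security=True;
Pooling=True;Min Pool Size=1;Max Pool Size=50
```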

Open and Close a Connection

Visual Basic .NET

1. Select the btnTest control in the Class Name combo box of the editor

and the Click event in the Method Name combo box.

Visual Studio adds the click event handler template.

2. Add the following lines to the procedure to open the connection,

display its status in a message box, and then close the connection:

3. myConnection.Open()

4. MessageBox.Show(Me.myConnection.State.ToString)

myConnection.Close()

5. Press F5 to save and run the application.

6. Change the Connection Type, and then click the Test button.

The application displays the Connection state.

7. Close the application.

Visual C# .NET

1. Add the following procedure to the code window to open the

connection, display its status in a message box, and then close the

connection:


2. private void btnTest_Click(object sender, System.EventArgs e)

3. {

4. this.myConnection.Open();

5. MessageBox.Show(this.myConnection.State.ToString());

6. this.myConnection.Close();

}

7. Add the following code, which connects the event handler to the

btnTest.Click event, to the end of the ConnectionProperties sub:

this.btnTest.Click += new EventHandler(this.btnTest_Click);

8. Press F5 to save and run the application.

9. Change the Connection Type and then click the Test button.

The application displays the Connection state.

10. Close the application.

Handling Connection Events

Both the OLE DB and the SQL Server Connection objects provide two events:

StateChange and InfoMessage.

StateChange Events

Not surprisingly, the StateChange event fires whenever the state of the Connection

object changes. The event passes a StateChangeEventArgs to its handler, which, in

turn, has two properties: OriginalState and CurrentState. The possible values for

OriginalState and CurrentState are shown in Table 2-5.

Table 2-5: Connection States

State        Meaning
Broken       The Connection is open, but not functional. It may be closed and reopened
Closed       The Connection is closed
Connecting   The Connection is in the process of connecting, but has not yet been opened
Executing    The Connection is executing a command
Fetching     The Connection is retrieving data
Open         The Connection is open

Respond to a StateChange Event

Visual Basic .NET

1. Select OleDbConnection1 in the Class Name combobox of the editor

and the StateChange event in the Method Name combobox.

Visual Studio adds the event declaration to the class.

2. Add the following code to display the previous and current Connection

states:

3. Dim theMessage As String

4. theMessage = “The Connection is changing from ” & _

5. e.OriginalState.ToString & _

6. ” to ” & e.CurrentState.ToString

MessageBox.Show(theMessage)

7. Repeat steps 1 and 2 for SqlDbConnection1.

8. Save and run the program.

9. Click the Test button.

The application displays MessageBoxes as the Connection is opened and

closed.


Visual C# .NET

1. Add the following procedure code to display the previous and current

Connection states for each of the two Connection objects:

2. private void oleDbConnection1_StateChange (object sender,

3. StateChangeEventArgs e)

4. {

5. string theMessage;

6. theMessage = “The Connection State is changing from ” +

7. e.OriginalState.ToString() +

8. ” to ” + e.CurrentState.ToString();

9. MessageBox.Show(theMessage);

10. }

11. private void SqlDbConnection1_StateChange (object sender,

12. StateChangeEventArgs e)

13. {

14. string theMessage;

15. theMessage = “The Connection State is changing from ” +

16. e.OriginalState.ToString() +

17. ” to ” + e.CurrentState.ToString();

18. MessageBox.Show(theMessage);

}

19. Add the code to connect the event handlers to the

ConnectionProperties sub:

20. this.oleDbConnection1.StateChange += new

21.

System.Data.StateChangeEventHandler(this.oleDbConnection1

_StateChange);

22. this.SqlDbConnection1.StateChange += new

System.Data.StateChangeEventHandler(this.SqlDbConnection1_StateCha

nge);

23. Save and run the program.

24. Change the Connection Type and then click the Test button.

The application displays two MessageBoxes as the Connection is opened and

closed.

InfoMessage Events

The InfoMessage event is triggered when the data source returns warnings. The

information passed to the event handler depends on the Data Provider.

Chapter 2 Quick Reference

To                                            Do this
Create a Server Explorer Connection           Click the Connect to Database button in the Server
                                              Explorer, or choose Connect to Database on the
                                              Tools menu
Add an instance of a Server Explorer          Drag the Connection from the Server Explorer to
Connection to a form                          the form
Create a Connection using code                Use the New constructor. For example:
                                              Dim myConn As New OleDbConnection()
Use an intermediary variable to reference     Declare the variable as an IDbConnection. For
multiple types of Connections                 example: Dim myConn As System.Data.IDbConnection
Open a Connection                             Use the Open method. For example: myConn.Open()
Close a Connection                            Use the Close method. For example: myConn.Close()

Chapter 3: Data Commands and the DataReader

Overview

In this chapter, you’ll learn how to:

§ Add a Data Command to a form

§ Create a Data Command at run time

§ Set Command properties at run time

§ Configure the Parameters collection in Microsoft Visual Studio .NET

§ Add and configure Parameters at run time

§ Set Parameter values

§ Execute a Command

§ Create a DataReader to return Command results

The Connection object that we examined in Chapter 2 represents the physical
connection to a data source: the conduit for exchanging information between an
application and the data source. The mechanism for this exchange is the Data
Command.

Understanding Data Commands and DataReaders

Essentially, an ADO.NET data command is simply a SQL command or a reference to a

stored procedure that is executed against a Connection object. In addition to retrieving

and updating data, the Data Command can be used to execute certain types of queries

on the data source that do not return a result set and to execute data definition (DDL)

commands that change the structure of the data source.

When a Data Command does return a result set, a DataReader is used to retrieve the

data. The DataReader object returns a read-only, forward-only stream of data from a

Data Command. Because only a single row of data is in memory at a time (unlike a

DataSet, which, as we’ll see in Chapter 6, stores the entire result set), a DataReader

requires very little overhead. The Read method of the DataReader is used to retrieve a
row, and the Get<Type> methods (where <Type> is a system data type; GetString, for
example, returns a string) return the value of each column within the current row.
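Putting those pieces together, a typical read loop looks like the following sketch. The connection string, the CustomerList view from the sample database, and its CompanyName column are assumptions; substitute your own names (and note this needs a live SQL Server to execute):

```csharp
using System;
using System.Data.SqlClient;

class DataReaderDemo
{
    static void Main()
    {
        // Connection string is illustrative; use the one for your own installation.
        using (SqlConnection cn = new SqlConnection(
            "Data Source=BUNNY;Initial Catalog=Northwind;Integrated Security=True"))
        {
            SqlCommand cmd = new SqlCommand("SELECT CompanyName FROM CustomerList", cn);
            cn.Open();
            using (SqlDataReader dr = cmd.ExecuteReader())
            {
                // Read advances to the next row; GetString returns column 0 as a string.
                while (dr.Read())
                {
                    Console.WriteLine(dr.GetString(0));
                }
            }   // the DataReader, and its single-row buffer, are released here
        }
    }
}
```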

As part of the Data Provider, Data Commands and DataReaders are specific to a data

source. Each of the .NET Framework Data Providers implements a Command and a

DataReader object: OleDbCommand and OleDbDataReader in the System.Data.OleDb

namespace; and SqlCommand and SqlDataReader in the System.Data.SqlClient

namespace.

Creating Data Commands

Like most of the objects that can exist at the form level, Data Commands can either be

created and configured at design time in Visual Studio or at run time in code.

DataReaders can be created only at run time, using the ExecuteReader method of the

Data Command, as we’ll see later in this chapter.


Creating Data Commands in Visual Studio

A Command object is created in Visual Studio just like any other control—simply drag

the control off of the Data tab of the Toolbox and drop it on the form. Since the Data

Command has no user interface, like most of the objects we’ve covered, Visual Studio

will add the control to the Component Designer.

Add a Data Command to a Form at Design Time

In this exercise we’ll create and name a Data Command. We’ll configure its properties in

later lessons.

1. Open the DataCommands project from the Visual Studio start page or

from the Project menu.

2. Double-click DataCommands.vb (or DataCommands.cs, if you’re using

C#) in the Solution Explorer to open the form.

Visual Studio displays the form in the form designer.

3. Drag a SqlCommand control from the Data tab of the Toolbox to the

form.

Visual Studio adds the command to the form.

4. In the Properties window, change the name of the Command to

cmdGetEmployees.

Creating Data Commands at Run Time

Roadmap We’ll discuss the version of the Command constructor that

supports transactions in Chapter 5.


The Data Command supports four versions of its constructor, as shown in Table 3-1. The

New() version sets all the properties to their default values, while the other versions allow

you to set properties of the Command object during creation. Whichever version you

choose, of course, you can set or change property values after the Command is created.

Table 3-1: Command Constructors

Constructor                             Description
New()                                   Creates a new, default instance of the Data Command
New(Command)                            Creates a new Data Command with the CommandText property
                                        set to the string specified in Command
New(Command, Connection)                Creates a new Data Command with the CommandText property
                                        set to the string specified in Command and the Connection
                                        property set to the Connection specified in Connection
New(Command, Connection, Transaction)   Creates a new Data Command with the CommandText property
                                        set to the string specified in Command, the Connection
                                        property set to the Connection specified in Connection,
                                        and the Transaction property set to the Transaction
                                        specified in Transaction

Create a Command Object at Run Time

Once again, we will create the Command object in this exercise and set its properties

later in the chapter.

Visual Basic .NET

1. Press F7 to display the code editor window.

2. Add the following line after the Inherits statement:

Friend WithEvents cmdGetCustomers As

System.Data.SqlClient.SqlCommand

This line declares the command variable. (One variable, cmdGetOrders, has

already been declared in the exercise project.)

3. Expand the region labeled ‘Windows Form Designer generated code’.

4. Add the following line to end of the New Sub:

Me.cmdGetCustomers = New System.Data.SqlClient.SqlCommand()

This command instantiates the Command object using the default constructor.

(cmdGetOrders has already been instantiated.)

Visual C# .NET

1. Press F7 to display the code editor window.

2. Add the following line after the opening bracket of the class

declaration:

internal System.Data.SqlClient.SqlCommand cmdGetCustomers;

This line declares the command variable.

3. Scroll down to the frmDataCmds Sub.

4. Add the following line to the procedure after the InitializeComponent

call:

this.cmdGetCustomers = new System.Data.SqlClient.SqlCommand();

This command instantiates the Command object using the default constructor.

(cmdGetOrders has already been declared and instantiated.)

Command Properties

The properties exposed by the Data Command object are shown in Table 3-2. These

properties will only be checked for syntax errors when they are set. Final validation

occurs only when the Command is executed by a data source.

Table 3-2: Data Command Properties

Property           Description
CommandText        The SQL statement or stored procedure to execute
CommandTimeout     The time (in seconds) to wait for a response from the data source
CommandType        Indicates how the CommandText property is to be interpreted; defaults
                   to Text
Connection         The Connection object on which the Data Command is to be executed
Parameters         The Parameters collection
Transaction        The Transaction in which the Command will execute
UpdatedRowSource   Determines how results are applied to a DataRow when the Command is
                   used by the Update method of a DataAdapter

The CommandText property, which is a string, contains either the actual text of the

command to be executed against the connection or the name of a stored procedure in

the data source.

The CommandTimeout property determines the time that the Command will wait for a

response from the server before it generates an error. Note that this is the wait time

before the Command begins receiving results, not the time it takes the command to

execute. The data source might take ten or fifteen minutes to return all the rows of a

huge table, but provided the first row is received within the specified CommandTimeout

period, no error will be generated.

The CommandType property tells the command object how to interpret the contents of

the CommandText property. The possible values are shown in Table 3-3. TableDirect is

only supported by the OleDbCommand, not the SqlCommand, and is equivalent to

SELECT * FROM <tablename>, where the <tablename> is specified in the

CommandText property.


Table 3-3: CommandType Values

Value             Description
StoredProcedure   The name of a stored procedure
TableDirect       A table name
Text              A SQL text command

The Connection property contains a reference to the Connection object on which the

Command will be executed. The Connection object must belong to the same namespace

as the Command object, that is, a SqlCommand must contain a reference to a

SqlConnection and an OleDbCommand must contain a reference to an

OleDbConnection.

The Command object’s Parameters property contains a collection of Parameters for the

SQL command or stored procedure specified in CommandText. We’ll examine this

collection in detail later in this exercise.

Roadmap We’ll examine the Transaction property in Chapter 5.

The Transaction property contains a reference to a Transaction object and serves to

enroll the Command in that transaction. We’ll examine this property in detail in Chapter

5.

Roadmap We’ll examine the DataAdapter in Chapter 4 and the

DataRow in Chapter 7.

The UpdatedRowSource property determines how results are applied to a DataRow

when the Command is executed by the Update method of the DataAdapter. The possible

values for the UpdatedRowSource property are shown in Table 3-4.

Table 3-4: UpdatedRowSource Values

Value                 Description
Both                  Both the output parameters and the first row returned by the
                      Command will be mapped to the changed row
FirstReturnedRecord   The first row returned by the Command will be mapped to the
                      changed row
None                  Any returned parameters or rows are discarded
OutputParameters      Output parameters of the Command will be mapped to the changed row

If the Data Command is generated automatically by Visual Studio, the default value of

the UpdatedRowSource property is None. If the Command is generated at run time or

created by the user at design time, the default value is Both.

Setting Command Properties at Design Time

As might be expected, the properties of a Command control created in Visual Studio are

set using the Properties window. In specifying the CommandText property, you can

either type the value directly or use the Query Builder to generate the required SQL

statement. You must specify the Connection property before you can set the

CommandText property.

Set Command Properties in Visual Studio

1. In the form designer, select cmdGetEmployees in the Component

Designer.

2. In the Properties window, select the Connection property, expand the

Existing node in the drop-down list, and then click cnNorthwind.

3. Select the CommandText property, and then click the ellipsis button.

Visual Studio displays the Query Builder’s Add Table dialog box.


4. Click the Views tab in the Add Table dialog box, and then click

EmployeeList.

5. Click Add, and then click Close.

Visual Studio adds EmployeeList to the Query Builder.

6. Select the check box next to (All Columns) in the Diagram pane of the

Query Builder to select all columns.

Visual Studio updates the SQL text in the SQL pane.


7. Click OK.

Visual Studio generates the SQL command and sets the CommandText

property in the Properties window.

Setting Command Properties at Run Time

The majority of the properties of the Command object are set by using simple
assignment statements. The exception is the Parameters collection which, because it is
a collection, uses the Add method.

Set Command Properties at Run Time

Visual Basic .NET

1. In the Code window, add the following lines below the variable instantiations of the New Sub:

   Me.cmdGetCustomers.CommandText = "SELECT * FROM CustomerList"
   Me.cmdGetCustomers.CommandType = CommandType.Text
   Me.cmdGetCustomers.Connection = Me.cnNorthwind

2. The first line specifies the command to be executed on the Connection—it simply returns all rows from the CustomerList view. The second line specifies that the CommandText property is to be treated as a SQL command, and the third line sets the Connection on which the command is to be executed.

Visual C# .NET

1. In the Code window, add the following lines below the variable instantiation:

   this.cmdGetCustomers.CommandText = "SELECT * FROM CustomerList";
   this.cmdGetCustomers.CommandType = CommandType.Text;
   this.cmdGetCustomers.Connection = this.cnNorthwind;

2. The first line specifies the command to be executed on the Connection—it simply returns all rows from the CustomerList view. The second line specifies that the CommandText property is to be treated as a SQL command, and the third line sets the Connection on which the command is to be executed.


Using the Parameters Collection

There are three steps to using parameters in queries and stored procedures—you must

specify the parameters in the query or stored procedure, you must specify the

parameters in the Parameters collection, and finally you must set the parameter values.

If you’re using a stored procedure, the syntax for specifying parameters will be

determined by the data source when the stored procedure is created. If you are using

parameters in a SQL command specified in the CommandText property of the Command

object, the syntax requirement is determined by the .NET Data Provider.

Unfortunately, the two Data Providers supplied in the .NET Framework use different

syntax. OleDbCommand objects use a question mark (?) as a placeholder for a

parameter:

SELECT * FROM Customers WHERE CustomerID = ?

SqlCommand objects use named parameters, prefixed with the @ character:

SELECT * FROM Customers WHERE CustomerID = @custID
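The difference in placeholder syntax carries through to the code that creates the commands. A minimal sketch, assuming two already-configured connections named oleCn and sqlCn (both hypothetical names):

```csharp
// OleDb: positional placeholders -- the order of items in the Parameters
// collection must match the order of the '?' markers.
System.Data.OleDb.OleDbCommand oleCmd = new System.Data.OleDb.OleDbCommand(
    "SELECT * FROM Customers WHERE CustomerID = ?", oleCn);

// SqlClient: named placeholders -- parameters are matched by name,
// so their order in the collection doesn't matter.
System.Data.SqlClient.SqlCommand sqlCmd = new System.Data.SqlClient.SqlCommand(
    "SELECT * FROM Customers WHERE CustomerID = @custID", sqlCn);
```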

Having created the stored procedure or SQL command, you must then add each of the

parameters to the Parameters collection of the Command object. Again, if you are using

Visual Studio, it will configure the collection for you, but if you are creating or reconfiguring

the Command object at run time, you must use the Add method of the

Parameters collection to create a Parameter object for each parameter in the query or

stored procedure.

The Parameters collection provides a number of methods for configuring the collection at

run time. The most useful of these are shown in Table 3-5. Note that because the

OleDbCommand doesn’t support named parameters, the parameters will be substituted

in the order they are found in the Parameters collection. Because of this, it is important

that you configure the items in the collection correctly. (This can be a very difficult bug to

track, and yes, that is the voice of experience.)

Table 3-5: Parameters Collection Methods

Method                                Description
Add(Value)                            Adds a new parameter at the end of the collection with the specified Value
Add(Parameter)                        Adds a Parameter to the end of the collection
Add(Name, Value)                      Adds a Parameter with the name specified in the Name string and the specified Value to the end of the collection
Add(Name, Type)                       Adds a Parameter of the specified Type with the name specified in the Name string to the end of the collection
Add(Name, Type, Size)                 Adds a Parameter of the specified Type and Size with the name specified in the Name string to the end of the collection
Add(Name, Type, Size, SourceColumn)   Adds a Parameter of the specified Type and Size with the name specified in the Name string to the end of the collection, and maps it to the DataTable column specified in the SourceColumn string
Clear                                 Removes all Parameters from the collection
Insert(Index, Value)                  Inserts a new Parameter with the Value specified at the position specified by the zero-based Index into the collection
Remove(Value)                         Removes the parameter with the specified Value from the collection
RemoveAt(Index)                       Removes the parameter at the position specified by the zero-based Index into the collection
RemoveAt(Name)                        Removes the parameter with the name specified by the Name string from the collection
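To make the overloads concrete, here is a short sketch; the parameter names, types, and the "CompanyName" source column are illustrative, not taken from this chapter's exercises:

```csharp
System.Data.SqlClient.SqlCommand cmd = new System.Data.SqlClient.SqlCommand();

// Add(Name, Type): name and data type only.
cmd.Parameters.Add("@custID", System.Data.SqlDbType.VarChar);

// Add(Name, Type, Size): also fixes the parameter's width.
cmd.Parameters.Add("@region", System.Data.SqlDbType.NVarChar, 15);

// Add(Name, Type, Size, SourceColumn): additionally maps the parameter
// to a DataTable column, for use by a DataAdapter during updates.
cmd.Parameters.Add("@company", System.Data.SqlDbType.NVarChar, 40, "CompanyName");

// RemoveAt(Name) and Clear undo the configuration.
cmd.Parameters.RemoveAt("@region");
cmd.Parameters.Clear();
```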

Configure the Parameters Collection in Visual Studio

1. In the form designer, drag a SqlCommand object onto the form.

Visual Studio adds a new command to the Component Designer.

2. In the Properties window, change the new Command’s name to

cmdOrderCount.

3. In the Properties window, expand the Existing node in the Connection

property’s drop-down list, and then click cnNorthwind.

4. Select the CommandText property, and then click the ellipsis button.

Visual Studio opens the Query Builder and the Add Table dialog box.

5. Click the Views tab in the Add Table dialog box, and then click

OrderTotals.

6. Click Add, and then click Close.

Visual Studio adds OrderTotals to the Query Builder.

7. Change the SQL statement in the SQL pane to read as follows:

   SELECT Count(*) AS OrderCount
   FROM OrderTotals
   WHERE (EmployeeID = @empID) AND (CustomerID = @custID)

8. Verify that the Regenerate parameters collection for this command check box is selected, and then click OK.

   Visual Studio displays a warning message.

9. Click Yes.

   Visual Studio generates the CommandText property and the Parameters collection.

10. In the Properties window, select the Parameters property, and then click the ellipsis button.

    Visual Studio displays the SqlParameter Collection Editor. Because the Query Builder generated the parameters for us, there is nothing to do here. However, you could add, change, or remove parameters as necessary.

11. Click OK.


Add and Configure Parameters at Run Time

Visual Basic .NET

1. Press F7 to display the code editor.

2. Add the following lines to the end of the New Sub:

   Me.cmdGetOrders.Parameters.Add("@custID", SqlDbType.VarChar)
   Me.cmdGetOrders.Parameters.Add("@empID", SqlDbType.Int)

Visual C# .NET

1. Press F7 to display the code editor.

2. Add the following lines after the property instantiations:

   this.cmdGetOrders.Parameters.Add("@custID", SqlDbType.VarChar);
   this.cmdGetOrders.Parameters.Add("@empID", SqlDbType.Int);

Set Parameter Values

After you have established the Parameters collection and before you execute the

command, you must set the values for each of the Parameters. This can be done only at

run time with a simple assignment statement.

Visual Basic .NET

1. In the Code Editor window, select btnOrderCount in the Object Name list, and click in the Method Name box.

   Visual Studio adds the click event handler for the button.

2. Add the following code to the event handler:

   Dim cnt As Integer
   Dim strMsg As String

   Me.cmdOrderCount.Parameters("@empID").Value = _
       Me.lbEmployees.SelectedItem("EmployeeID")
   Me.cmdOrderCount.Parameters("@custID").Value = _
       Me.lbClients.SelectedItem("CustomerID")

Visual C# .NET

1. Add the following event handler to the code below the existing btnGetOrders_Click procedure:

   private void btnOrderCount_Click(object sender, System.EventArgs e)
   {
       int cnt;
       string strMsg;
       System.Data.DataRowView drv;

       drv = (System.Data.DataRowView) this.lbEmployees.SelectedItem;
       this.cmdOrderCount.Parameters["@empID"].Value = drv["EmployeeID"];
       drv = (System.Data.DataRowView) this.lbClients.SelectedItem;
       this.cmdOrderCount.Parameters["@custID"].Value = drv["CustomerID"];
   }

   The code first declares a couple of variables that will be used in the next exercise, and then sets the value of each of the parameters in the cmdOrderCount.Parameters collection to the value of the Employees and Clients list boxes, respectively.

2. Connect the event handler to the click event by adding the following line to the end of the frmDataCmds sub:

   this.btnOrderCount.Click += new EventHandler(this.btnOrderCount_Click);

Command Methods

The methods exposed by the Command object are shown in Table 3-6. Of these, the

most important are the four Execute methods: ExecuteNonQuery, ExecuteReader,

ExecuteScalar, and ExecuteXmlReader.

ExecuteNonQuery is used when the SQL command or stored procedure to be executed

returns no rows. An Update query, for example, would use the ExecuteNonQuery

method.

ExecuteScalar is used for SQL commands and stored procedures that return a single

value. The most common example of this sort of command is one that returns a count of

rows:

SELECT Count(*) from OrderTotals
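The two methods pair naturally. A hedged sketch, assuming an open SqlConnection named cn against the Northwind sample database (OrderTotals is this chapter's view; "ALFKI" is a standard Northwind customer ID):

```csharp
// ExecuteScalar: one value comes back -- the first column of the first row.
System.Data.SqlClient.SqlCommand cmdCount = new System.Data.SqlClient.SqlCommand(
    "SELECT Count(*) FROM OrderTotals", cn);
int orderCount = (int) cmdCount.ExecuteScalar();

// ExecuteNonQuery: no rows come back, only the number of rows affected.
System.Data.SqlClient.SqlCommand cmdUpdate = new System.Data.SqlClient.SqlCommand(
    "UPDATE Customers SET Region = Region WHERE CustomerID = @custID", cn);
cmdUpdate.Parameters.Add("@custID", System.Data.SqlDbType.VarChar).Value = "ALFKI";
int affected = cmdUpdate.ExecuteNonQuery();
```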

Table 3-6: Command Methods

Method                Description
Cancel                Cancels execution of a Data Command
CreateParameter       Creates a new parameter
ExecuteNonQuery       Executes a command against the Connection and returns the number of rows affected
ExecuteReader         Sends the CommandText to the Connection and builds a DataReader
ExecuteScalar         Executes the query and returns the first column of the first row of the result set
ExecuteXmlReader      Sends the CommandText to the Connection and builds an XmlReader
Prepare               Creates a prepared (compiled) version of the command on the data source
ResetCommandTimeout   Resets the CommandTimeout property to its default value

The ExecuteReader method is used for SQL Commands and stored procedures that

return multiple rows. The method creates a DataReader object. We’ll discuss

DataReaders in detail in the next section.

The ExecuteReader method may be executed with no parameters, or you can supply a

CommandBehavior value that allows you to control precisely how the Command will

perform. The values for CommandBehavior are shown in Table 3-7.

Table 3-7: CommandBehavior Values

Value              Description
CloseConnection    Closes the associated Connection when the DataReader is closed
KeyInfo            Indicates that the query returns column and primary key information
SchemaOnly         Returns the database schema only, without affecting any rows in the data source
SequentialAccess   The results of each column of each row will be accessed sequentially
SingleResult       Returns only a single value
SingleRow          Returns only a single row

Most of the CommandBehavior values are self-explanatory. Both KeyInfo and

SchemaOnly are useful if you cannot determine the structure of the command’s result

set prior to run time.

The SequentialAccess behavior allows the application to read large binary column

values using the GetBytes or GetChars methods of the DataReader, while the

SingleResult and SingleRow behaviors can be optimized by the Data Provider.
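CloseConnection is worth a concrete look, since it ties the Connection's lifetime to the reader. A minimal sketch, assuming a configured SqlCommand named cmd whose Connection is cn (both hypothetical names):

```csharp
cn.Open();
// Closing the reader will now close cn as well -- useful when the reader
// is handed to a caller that never sees the underlying Connection.
System.Data.SqlClient.SqlDataReader rdr =
    cmd.ExecuteReader(System.Data.CommandBehavior.CloseConnection);
while (rdr.Read())
{
    // process the current row...
}
rdr.Close();   // also closes cn
```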

Execute a Command

Visual Basic .NET

§ Add the following code to the btnOrderCount_Click event handler that we began in the last exercise:

  Me.cnNorthwind.Open()
  cnt = Me.cmdOrderCount.ExecuteScalar()
  Me.cnNorthwind.Close()

  strMsg = "There are " & cnt.ToString & " Orders for this "
  strMsg &= "Employee/Customer combination."
  MessageBox.Show(strMsg)

The first three lines of code open the cnNorthwind Connection, call the ExecuteScalar

method to return a single value from the cmdOrderCount Command, and then close

the Connection. The last three lines simply display the results in a message box.

Visual C# .NET

§ Add the following code to the btnOrderCount_Click event handler that we began in the last exercise:

  this.cnNorthwind.Open();
  cnt = (int) this.cmdOrderCount.ExecuteScalar();
  this.cnNorthwind.Close();

  strMsg = "There are " + cnt.ToString() + " Orders for this ";
  strMsg += "Employee/Customer combination.";
  MessageBox.Show(strMsg);

The first three lines of code open the cnNorthwind Connection, call the ExecuteScalar

method to return a single value from the cmdOrderCount Command, and then close

the Connection. The last three lines simply display the results in a message box.

DataReaders

The DataReader’s properties are shown in Table 3-8. The Item property supports two

versions: Item(Name), which takes a string specifying the name of the column as a

parameter, and Item(Index), which takes an Int32 as an index into the columns

collection. (As with all collections in the .NET Framework, the collection index is zero-based.)

Table 3-8: DataReader Properties

Property          Description
Depth             The depth of nesting for the current row in hierarchical result sets. SQL Server always returns zero.
FieldCount        The number of columns in the current row.
IsClosed          Indicates whether the DataReader is closed.
Item              The value of a column.
RecordsAffected   The number of rows changed, inserted, or deleted.

The methods exposed by the DataReader are shown in Table 3-9. The Close method, as

we’ve seen, closes the DataReader and, if the CloseConnection behavior has been

specified, closes the Connection as well. The GetDataTypeName, GetFieldType,

GetName, GetOrdinal and IsDbNull methods allow you to determine, at run time, the

properties of a specified column.

Note that IsDbNull is the only way to check for a null value, since the .NET Framework

doesn’t have an intrinsic Null data type.

Table 3-9: DataReader Methods

Method            Description
Close             Closes the DataReader
GetType           Gets the value of the specified column as the specified type
GetDataTypeName   Gets the name of the data source type
GetFieldType      Returns the system type of the specified column
GetName           Gets the name of the specified column
GetOrdinal        Gets the ordinal position of the column specified
GetSchemaTable    Returns a DataTable that describes the structure of the DataReader
GetValue          Gets the value of the specified column as its native type
GetValues         Gets all the columns in the current row
IsDbNull          Indicates whether the column contains a nonexistent value
NextResult        Advances the DataReader to the next result
Read              Advances the DataReader to the next row

The Read method retrieves the next row of the result set. When the DataReader is first opened, it is positioned before the first row, not at the first row. You must call Read before the first row of the result set will be returned.

The NextResult method is used when a SQL command or stored procedure returns

multiple result sets. It positions the DataReader at the beginning of the next result set.

Again, the DataReader will be positioned before the first row, and you must call Read

before accessing any results.
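The two methods combine into a standard traversal pattern. A minimal sketch, assuming a SqlCommand named cmd whose CommandText holds two SELECT statements so that the reader carries two result sets:

```csharp
System.Data.SqlClient.SqlDataReader rdr = cmd.ExecuteReader();
do
{
    // Read() must be called before any column in the current
    // result set can be accessed.
    while (rdr.Read())
    {
        object firstColumn = rdr.GetValue(0);
        // process the row...
    }
} while (rdr.NextResult());   // advance to the next result set, if any
rdr.Close();
```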


The GetValues method returns all of the columns in the current row as an object array,

while the GetValue method returns a single value as one of the .NET Framework types.

However, if you know the data type of the value to be returned in advance, it is more

efficient to use one of the GetType methods shown in Table 3-10.

Note The SqlDataReader object supports additional GetType methods

for values of System.Data.SqlType. They are detailed in online

help.

Table 3-10: GetType Methods

GetBoolean   GetFloat      GetInt16
GetByte      GetGuid       GetInt32
GetBytes     GetDateTime   GetInt64
GetChar      GetDecimal    GetString
GetChars     GetDouble     GetTimeSpan

Create a DataReader to Return Command Results

Visual Basic .NET

1. In the code editor window, select btnFillLists in the Object Name list, and click in the Method Name box.

   Visual Studio adds the click event handler to the code.

2. Add the following variable declarations to the event handler:

   Dim dr As System.Data.DataRow
   Dim rdrEmployees As System.Data.SqlClient.SqlDataReader
   Dim rdrCustomers As System.Data.SqlClient.SqlDataReader

3. Add the following code to fill the EmployeeList table:

   Me.cnNorthwind.Open()
   rdrEmployees = Me.cmdGetEmployees.ExecuteReader()

   With rdrEmployees
       While .Read
           dr = Me.dsMaster1.EmployeeList.NewRow
           dr(0) = .GetInt32(0)
           dr(1) = .GetString(1)
           dr(2) = .GetString(2)
           Me.dsMaster1.EmployeeList.Rows.Add(dr)
       End While
   End With
   rdrEmployees.Close()
   Me.cnNorthwind.Close()

   Roadmap We'll examine the DataSet in Chapter 6.

4. The code first opens the Connection, and then creates the DataReader with the ExecuteReader method. The While .Read loop first creates a new DataRow, retrieves each column from the DataReader and assigns its value to a column of the new row, and then adds the new row to the EmployeeList table. Finally, the DataReader and the Connection are closed.

5. Add the final code to the procedure:

   Me.cnNorthwind.Open()
   rdrCustomers = Me.cmdGetCustomers.ExecuteReader()

   With rdrCustomers
       While .Read
           dr = Me.dsMaster1.CustomerList.NewRow
           dr(0) = .GetString(0)
           dr(1) = .GetString(1)
           Me.dsMaster1.CustomerList.Rows.Add(dr)
       End While
   End With
   rdrCustomers.Close()
   Me.cnNorthwind.Close()

   This code is almost identical to the previous section, except that it uses the cmdGetCustomers command to fill the CustomerList table. Note that the Connection is closed and re-opened between calls to the ExecuteReader method. This is necessary because the Connection will return a status of Busy until either it or the DataReader is explicitly closed.

6. Press F5 to run the application.

7. Click Fill Lists.

8. Select different combinations of Employee and Customer, and then click Order Count, and, if you like, click Get Orders.

   The Get Orders button click event handler, which is provided for you, also calls the ExecuteReader method, but this time against the cmdGetOrders object.

Visual C# .NET

1. Create the following event handler in the code editor window:

   private void btnFillLists_Click(object sender, System.EventArgs e)
   {
       System.Data.DataRow dr;
       System.Data.SqlClient.SqlDataReader rdrEmployees;
       System.Data.SqlClient.SqlDataReader rdrCustomers;
   }

2. Add the following code to fill the EmployeeList table:

   this.cnNorthwind.Open();
   rdrEmployees = this.cmdGetEmployees.ExecuteReader();

   while (rdrEmployees.Read())
   {
       dr = this.dsMaster1.EmployeeList.NewRow();
       dr[0] = rdrEmployees.GetInt32(0);
       dr[1] = rdrEmployees.GetString(1);
       dr[2] = rdrEmployees.GetString(2);
       this.dsMaster1.EmployeeList.Rows.Add(dr);
   }

   rdrEmployees.Close();
   this.cnNorthwind.Close();

   Roadmap We'll examine the DataSet in Chapter 6.

   The code first opens the Connection, and then creates the DataReader with the ExecuteReader method. The while (rdrEmployees.Read()) loop first creates a new DataRow, retrieves each column from the DataReader and assigns its value to a column of the new row, and then adds the new row to the EmployeeList table. Finally, the DataReader and the Connection are closed.

3. Add the final code to the procedure:

   this.cnNorthwind.Open();
   rdrCustomers = this.cmdGetCustomers.ExecuteReader();

   while (rdrCustomers.Read())
   {
       dr = this.dsMaster1.CustomerList.NewRow();
       dr[0] = rdrCustomers.GetString(0);
       dr[1] = rdrCustomers.GetString(1);
       this.dsMaster1.CustomerList.Rows.Add(dr);
   }

   rdrCustomers.Close();
   this.cnNorthwind.Close();

   This code is almost identical to the previous section, except that it uses the cmdGetCustomers command to fill the CustomerList table. Note that the Connection is closed and re-opened between calls to the ExecuteReader method. This is necessary because the Connection will return a status of Busy until either it or the DataReader is explicitly closed.

4. Link the event handler to the event by adding the following line to the frmDataCmds sub:

   this.btnFillLists.Click += new EventHandler(this.btnFillLists_Click);

5. Press F5 to run the application.

6. Click Fill Lists.

7. Select different combinations of Employee and Customer, and then click Order Count, and, if you like, click Get Orders.

   The Get Orders button click event handler, which is provided for you, also calls the ExecuteReader method, but this time against the cmdGetOrders object.

Chapter 3 Quick Reference

To                                                     Do this
Add a Data Command to a form                           Drag an OleDbCommand or SqlCommand from the Data tab of the Toolbox to the form.
Create a Data Command at run time                      Use one of the New constructors. For example: Dim myCmd As New System.Data.SqlClient.SqlCommand()
Configure the Parameters collection in Visual Studio   Click the ellipsis button in the Parameters property of the Properties window.
Add and configure Parameters at run time               Use one of the Add methods of the Parameters collection. For example: mySqlCmd.Parameters.Add("@myParam", SqlDbType.Type)
Execute a Command that doesn't return a result         Use the ExecuteNonQuery method. For example: intResults = myCmd.ExecuteNonQuery()
Execute a Command that returns a single value          Use the ExecuteScalar method. For example: myResult = myCmd.ExecuteScalar()
Create a DataReader to return Command results          Use the ExecuteReader method. For example: myReader = myCmd.ExecuteReader()

Chapter 4: The DataAdapter

Overview

In this chapter, you’ll learn how to:

§ Create a DataAdapter

§ Preview the results of a DataAdapter

§ Set a DataAdapter’s properties

§ Use the Table Mappings dialog box

§ Use the DataAdapter’s methods

§ Respond to DataAdapter events

In this chapter, we’ll examine the DataAdapter, which sits between the Connection object

we looked at in the previous chapter and the DataSet, which we’ll examine in Chapter 5.

Understanding the DataAdapter

Like the Connection and Command objects, the DataAdapter is part of the Data

Provider, and there is a version of the DataAdapter specific to each Data Provider. In the

release version of the .NET Framework, this means the OleDbDataAdapter in the

System.Data.OleDb namespace and the SqlDataAdapter in the System.Data.SqlClient

namespace. Both of these objects inherit from the System.Data.DbDataAdapter, which in

turn inherits from the System.Data.DataAdapter.

DataAdapters act as the ‘glue’ between a data source and the DataSet object. In very

abstract terms, the DataAdapter receives the data from the Connection object and

passes it to the DataSet. It then passes changes back from the DataSet to the

Connection to update the data in the data source. (Remember that the data source can

be any kind of data, not just a database.)

Tip Typically, there is a one-to-one relationship between a DataAdapter

and a DataTable within a DataSet, but a SelectCommand that

returns multiple result sets may link to multiple tables in the

DataSet.

To perform updates on the data source, DataAdapters contain references to four Data

Commands, one for each possible action: SelectCommand, UpdateCommand,

InsertCommand, and DeleteCommand.

Note With the exception of some minor differences in the Fill method,

which we’ll look at later, the SqlDataAdapter and

OleDbDataAdapter have identical properties, methods, and

events. For the sake of simplicity, we’ll only use the

SqlDataAdapter in this chapter, but all of the code samples will


work equally well with OleDb if you change the class names of the

objects.

Creating DataAdapters

Microsoft Visual Studio .NET provides several different methods for creating

DataAdapters interactively. We saw one in Chapter 1, when we used the Data Adapter

Configuration Wizard, and we’ll explore a couple more in this section. Of course, if you

need to, you can create a DataAdapter manually in code, and we’ll look at that in this

section, as well.

Using the Server Explorer

If you have created a design time connection to a data source in the Server Explorer,

you can automatically create a DataAdapter by dragging the appropriate table, query, or

stored procedure onto your form. If you don’t already have a connection on the form,

Visual Studio will create a preconfigured connection as well.

Create a DataAdapter from the Server Explorer

1. Open the DataAdapters project from the Visual Studio start page or by

using the Open menu.

2. In the Solution Explorer, double-click DataAdapters.vb (or

DataAdapters.cs, if you’re using C#) to open the form.

Visual Studio displays the form in the form designer.

3. In the Server Explorer, expand the SQL Northwind connection (the

name of the Connection will depend on your system configuration),

and then expand its Tables collection.


4. Drag the Categories table onto the form.

Visual Studio adds an instance of the SqlDataAdapter and, because it didn't already exist, an instance of the SqlConnection to the component designer.

5. Select the SqlDataAdapter1 on the form, and then in the Properties

window, change its name to daCategories.

Using the Toolbox

As we saw in Chapter 1, if you drag a DataAdapter from the Toolbox (either an

SqlDataAdapter or an OleDbDataAdapter), Visual Studio will start the Data Adapter

Configuration Wizard. If you want to configure the DataAdapter manually, you can simply


cancel the wizard and set the DataAdapter’s properties using code or the Properties

window.

Create a DataAdapter Using the Toolbox

In this exercise, we’ll only create the DataAdapter. We’ll set its properties later in the

chapter.

1. In the Toolbox, drag a SqlDataAdapter from the Data tab onto the

form.

Visual Studio starts the Data Adapter Configuration Wizard.

2. Click Cancel.

Visual Studio creates an instance of the SqlDataAdapter in the component

designer.

3. In the Properties window, change the name of the DataAdapter to

daProducts.

Creating DataAdapters at Run Time

When we created ADO.NET objects in code in previous chapters, we first declared them

and then initialized them. The process is essentially the same to create a DataAdapter,

but it has a little twist—because a DataAdapter references four command objects, you

must also declare and instantiate each of the commands, and then set the DataAdapter

to reference them.

Create a DataAdapter in Code

Visual Basic .NET

1. Press F7 to display the code for the DataAdapters form.

2. Type the following statements after the Inherits statement:

   Friend WithEvents cmdSelectSuppliers As New _
       System.Data.SqlClient.SqlCommand()
   Friend WithEvents cmdInsertSuppliers As New _
       System.Data.SqlClient.SqlCommand()
   Friend WithEvents cmdUpdateSuppliers As New _
       System.Data.SqlClient.SqlCommand()
   Friend WithEvents cmdDeleteSuppliers As New _
       System.Data.SqlClient.SqlCommand()
   Friend WithEvents daSuppliers As New _
       System.Data.SqlClient.SqlDataAdapter()

   These lines declare the four command objects and the DataAdapter, and initialize each object with its default constructor.

3. Open the region labeled "Windows Form Designer generated code" and add the following lines to the New Sub after the call to InitializeComponent:

   Me.daSuppliers.DeleteCommand = Me.cmdDeleteSuppliers
   Me.daSuppliers.InsertCommand = Me.cmdInsertSuppliers
   Me.daSuppliers.SelectCommand = Me.cmdSelectSuppliers
   Me.daSuppliers.UpdateCommand = Me.cmdUpdateSuppliers

   These lines assign the four Command objects to the daSuppliers DataAdapter.


Visual C# .NET

1. Press F7 to display the code for the DataAdapters form.

2. Type the following statements at the beginning of the class definition:

   private System.Data.SqlClient.SqlCommand cmdSelectSuppliers;
   private System.Data.SqlClient.SqlCommand cmdInsertSuppliers;
   private System.Data.SqlClient.SqlCommand cmdUpdateSuppliers;
   private System.Data.SqlClient.SqlCommand cmdDeleteSuppliers;
   private System.Data.SqlClient.SqlDataAdapter daSuppliers;

   These lines declare the four Command objects and the DataAdapter.

3. Scroll down to the DataAdapters function and add the following lines after the call to InitializeComponent:

   this.cmdDeleteSuppliers = new System.Data.SqlClient.SqlCommand();
   this.cmdInsertSuppliers = new System.Data.SqlClient.SqlCommand();
   this.cmdSelectSuppliers = new System.Data.SqlClient.SqlCommand();
   this.cmdUpdateSuppliers = new System.Data.SqlClient.SqlCommand();
   this.daSuppliers = new System.Data.SqlClient.SqlDataAdapter();

   These lines instantiate each object using the default constructor.

4. Add the following lines to assign the four command objects to the daSuppliers DataAdapter:

   this.daSuppliers.DeleteCommand = this.cmdDeleteSuppliers;
   this.daSuppliers.InsertCommand = this.cmdInsertSuppliers;
   this.daSuppliers.SelectCommand = this.cmdSelectSuppliers;
   this.daSuppliers.UpdateCommand = this.cmdUpdateSuppliers;

Previewing Results

Visual Studio provides a quick and easy method to check the configuration of a form-level DataAdapter: the DataAdapter Preview dialog box.

Preview the Results of a DataAdapter

1. Make sure that daCategories is selected in the component designer.

2. Select Preview Data in the bottom portion of the Properties window.

Visual Studio opens the DataAdapter Preview window.


3. Click Fill Dataset.

Visual Studio displays the rows returned by the DataAdapter.

4. Click Close.

Visual Studio closes the DataAdapter Preview window.

DataAdapter Properties

The properties exposed by the DataAdapter are shown in Table 4-1. The

SqlDataAdapter and OleDbDataAdapter objects expose the same set of properties.

Table 4-1: DataAdapter Properties

Property                  Description
AcceptChangesDuringFill   Determines whether AcceptChanges is called on a DataRow after it is added to the DataTable
DeleteCommand             The Data Command used to delete rows in the data source
InsertCommand             The Data Command used to insert rows in the data source
MissingMappingAction      Determines the action that will be taken when incoming data cannot be matched to an existing table or column
MissingSchemaAction       Determines the action that will be taken when incoming data does not match the schema of an existing DataSet
SelectCommand             The Data Command used to retrieve rows from the data source
TableMappings             A collection of DataTableMapping objects that determine the relationship between the columns in a DataSet and the data source
UpdateCommand             The Data Command used to update rows in the data source

Roadmap We'll examine AcceptChanges in Chapter 9.

The AcceptChangesDuringFill property determines whether the AcceptChanges method

is called for each row that is added to a DataSet. The default value is true.


The MissingMappingAction property determines how the system reacts when a

SelectCommand returns columns or tables that are not found in the DataSet. The

possible values are shown in Table 4-2. The default value is Passthrough.

Table 4-2: MissingMappingAction Values

Error
    Throws a SystemException.

Ignore
    Ignores any columns or tables not found in the DataSet.

Passthrough
    The column or table that is not found is added to the DataSet, using its name in the data source.

Similarly, the MissingSchemaAction property determines how the system will respond if

a column is missing in the DataSet. The MissingSchemaAction property will be called

only if the MissingMappingAction is set to Passthrough. The possible values are shown

in Table 4-3. The default value is Add.

Table 4-3: MissingSchemaAction Values

Add
    Adds the necessary columns to the DataSet.

AddWithKey
    Adds both the necessary columns and tables and PrimaryKey constraints.

Error
    Throws a SystemException.

Ignore
    Ignores the extra columns.

In addition, the DataAdapter has two sets of properties that we’ll examine in detail: the

set of Command objects that tell it how to update the data source to reflect changes

made to the DataSet and a TableMappings property that maintains the relationship

between columns in a DataSet and columns in the data source.

DataAdapter Commands

As we’ve seen, each DataAdapter contains references to four Command objects, each of

which has a CommandText property that contains the actual SQL command to be

executed.


If you create a DataAdapter by using the Data Adapter Configuration Wizard or by

dragging a table, view, or stored procedure from the Server Explorer, Visual Studio will

attempt to automatically generate the CommandText property for each command. You

can also edit the SQL command in the Properties window, although you must first

associate the command with a Connection object.

Note Every DataAdapter command must be associated with a

Connection. In most cases, you will use a single Connection for all

of the commands, but this isn’t a requirement. You can associate a

different Connection with each command, if necessary.

You must specify the CommandText property for the SelectCommand object, but the

.NET Framework can generate the commands for update, insert, and delete if they are

not specified.

Internally, Visual Studio uses the CommandBuilder object to generate commands. You

can instantiate a CommandBuilder object in code and use it to generate commands as

required. However, you must be aware of the CommandBuilder’s limitations. It cannot

handle parameterized stored procedures, for example.
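As a sketch of this approach (the connection string and table are placeholders for your own environment; SqlCommandBuilder works the same way for OleDbCommandBuilder), a CommandBuilder derives the update commands from a DataAdapter's SelectCommand:

```csharp
using System.Data.SqlClient;

SqlConnection cn = new SqlConnection(
    "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI");
SqlDataAdapter da = new SqlDataAdapter(
    "SELECT CategoryID, CategoryName FROM Categories", cn);

// The CommandBuilder inspects the SelectCommand and generates the rest.
SqlCommandBuilder cb = new SqlCommandBuilder(da);

// Explicit assignment is optional -- the builder supplies the commands
// at Update time -- but it lets you examine the generated SQL.
da.UpdateCommand = cb.GetUpdateCommand();
da.InsertCommand = cb.GetInsertCommand();
da.DeleteCommand = cb.GetDeleteCommand();
```

Because the generated SQL is built from the SELECT statement alone, the limitations mentioned above apply: the SELECT must return enough key information to identify rows, and parameterized stored procedures are out of reach.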

Set CommandText in the Properties Window

1. Select the daProducts object in the form designer, and then in the

Properties window, expand the Select Command properties.

2. Select the SelectCommand’s Connection property, expand the

Existing node in the list, and then choose SqlConnection1.


3. Select the CommandText property, and then click the ellipsis button.

Visual Studio opens the Query Builder and the Add Table dialog box.

4. Select the Products table, click Add, and then click Close.

Visual Studio closes the Add Table dialog box and adds the table to the

Query Builder.

5. Add the CategoryID, ProductID, and ProductName columns to the

query by selecting each column’s check box.

6. Click OK.


Visual Studio generates the CommandText property.

Set CommandText in Code

Visual Basic .NET

§ In the code editor, add the following lines of code to the bottom of the New Sub:

Me.cmdSelectSuppliers.CommandText = "SELECT * FROM Suppliers"
Me.cmdSelectSuppliers.Connection = Me.SqlConnection1

Visual C# .NET

§ In the code editor, add the following lines to the bottom of the DataAdapters constructor:

this.cmdSelectSuppliers.CommandText = "SELECT * FROM Suppliers";
this.cmdSelectSuppliers.Connection = this.sqlConnection1;

The TableMappings Collection

A DataSet has no knowledge of where the data it contains comes from, and a

Connection has no knowledge of what happens to the data it retrieves. The DataAdapter

maintains the connection between the two. It does this by using the TableMappings

collection.

The structure of the TableMappings collection is shown in the following figure. At the

highest level, the TableMappings collection contains one or more DataTableMapping

objects. Typically, there is only one DataTableMapping object because most

DataAdapters return only a single record set. However, if a DataAdapter manages

multiple record sets, as might be the case with a stored procedure that returns multiple

result sets, there will be a DataTableMapping object for each record set.

The DataTableMapping object is another collection, which contains one or more

DataColumnMapping objects. The DataColumnMapping object consists of two

properties: the SourceColumn, which is the case-sensitive name of the column within the

data source, and the DataSetColumn, which is the case-insensitive name of the column

within the DataSet. There is a DataColumnMapping object for each column managed by

the DataAdapter.


By default, the .NET Framework will create a TableMappings collection (and all of the

objects it contains) with the DataSetColumn name set to the SourceColumn name. There

are times, however, when this isn’t what you want. For example, you might want to change the mappings for reasons of convenience or because you’re working with a pre-existing DataSet with different column names.
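The same mapping can also be built in code. This is a sketch using the object names assumed from this chapter's exercises (daCategories); it maps the default record set to a Categories table and renames one column on the way into the DataSet:

```csharp
using System.Data.Common;

// Map the adapter's default "Table" record set to a DataTable
// named "Categories" in the DataSet...
DataTableMapping map = daCategories.TableMappings.Add("Table", "Categories");

// ...and rename one column: SourceColumn first, DataSetColumn second.
map.ColumnMappings.Add("CategoryName", "Name");
```

After this, Fill will place the source CategoryName values into a DataSet column called Name, which is the same effect the Table Mappings dialog box produces.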

Change a DataSet Column Name Using the Table Mappings Dialog Box

1. Select the daCategories DataAdapter in the form designer.

2. In the Properties window, expand the Mapping properties.

3. Select the TableMappings property and click the ellipsis button.

Visual Studio displays the Table Mappings dialog box.


4. Change the name of the Dataset column from CategoryName to

Name.

5. Click OK.

Visual Studio updates the collection.

DataAdapter Methods

The DataAdapter supports two important methods: Fill, which loads data from the data

source into the DataSet, and Update, which transfers data the other direction—loading it

from the DataSet into the data source. We’ll examine both in this set of exercises.

Generating DataSets and Binding Data

Roadmap We’ll examine DataSets in Chapter 6.

Before we can examine the Fill and Update methods, we must create and link the

DataSets to be used to store the data. We haven’t examined DataSets yet (we’ll do that

in Chapter 6), so just follow the steps outlined and try not to worry about them.

Generate and Bind DataSets

1. Select the daCategories DataAdapter in the form designer.

2. On the Data menu, choose Generate Dataset.


Visual Studio displays the Generate Dataset dialog box.

3. In the New text box, change the name of the new DataSet to

dsCategories.

4. Click OK.

Visual Studio creates the dsCategories DataSet and adds an instance of it to

the form designer.

5. Repeat steps 1 through 4 for the daProducts DataAdapter. Name the

new DataSet dsProducts.


6. Select the dgCategories object in the drop-down list box of the

Properties window.

7. In the Properties window, expand the DataBindings section.

8. Select dsCategories1 in the DataSource list.


9. Select Categories in the DataMember list.


10. Repeat steps 6 through 9 for the dgProducts control, binding it to the

dsProducts1 DataSource and Table DataMember.

The Fill Method

The Fill method loads data from a data source into one or more tables of a DataSet by

using the command specified in the DataAdapter’s SelectCommand. The

DbDataAdapter object, from which both the OleDbDataAdapter and the SqlDataAdapter inherit, supports several variations of the Fill method, as shown in Table 4-4.

Table 4-4: DbDataAdapter Fill Methods

Fill(DataSet)
    Creates a DataTable named Table and populates it with the rows returned from the data source.

Fill(DataTable)
    Fills the specified DataTable with the rows returned from the data source.

Fill(DataSet, tableName)
    Fills the DataTable named in the tableName string, within the DataSet specified, with the rows returned from the data source.

Fill(DataTable, DataReader)
    Fills the DataTable using the specified DataReader. (Because DataReader is declared as an IDataReader, either an OleDbDataReader or a SqlDataReader can be used.)

Fill(DataTable, command, CommandBehavior)
    Fills the DataTable using the SQL string passed in command and the specified CommandBehavior.

Fill(DataSet, startRecord, maxRecords, tableName)
    Fills the DataTable specified in the tableName string, beginning at the zero-based startRecord and continuing for maxRecords or until the end of the result set.

Fill(DataSet, tableName, DataReader, startRecord, maxRecords)
    Fills the DataTable specified in the tableName string, beginning at the zero-based startRecord and continuing for maxRecords or until the end of the result set, using the specified DataReader. (Because DataReader is declared as an IDataReader, either an OleDbDataReader or a SqlDataReader can be used.)

Fill(DataSet, startRecord, maxRecords, tableName, command, CommandBehavior)
    Fills the DataTable specified in the tableName string, beginning at the zero-based startRecord and continuing for maxRecords or until the end of the result set, using the SQL text contained in command and the specified CommandBehavior.

In addition, the OleDbDataAdapter supports the two additional versions of the Fill

method shown in Table 4-5, which are used to load data from Microsoft ActiveX Data

Objects (ADO).

Table 4-5: OleDbDataAdapter Fill Methods

Fill(DataTable, adoObject)
    Fills the specified DataTable with rows from the ADO Recordset or Record object specified in adoObject.

Fill(DataSet, adoObject, tableName)
    Fills the specified DataSet with rows from the ADO Recordset or Record object specified in adoObject, using the DataTable specified in the tableName string to determine the TableMappings.

The SqlDataAdapter supports only the methods provided by the DbDataAdapter.

DataAdapters included in other Data Providers can, of course, support additional

versions of the Fill method.

Important The Microsoft SQL Server decimal data type allows a maximum of 38 significant digits, while the .NET Framework decimal type allows a maximum of only 28. If a row in a SQL table contains a decimal field with more than 28 significant digits, the row will not be added to the DataSet and a FillError will be raised.
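To make the most common overloads concrete, here is a minimal sketch (the connection object cn and the table names are assumptions for illustration):

```csharp
using System.Data;
using System.Data.SqlClient;

SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM Products", cn);
DataSet ds = new DataSet();

// Fill(DataSet): creates a DataTable literally named "Table".
da.Fill(ds);

// Fill(DataSet, tableName): creates (or refreshes) a table named "Products".
da.Fill(ds, "Products");

// Fill(DataSet, startRecord, maxRecords, tableName): a page of 10 rows,
// starting at the zero-based record 0. Fill returns the number of rows added.
int rowsAdded = da.Fill(ds, 0, 10, "FirstTen");
```

Note that Fill opens the connection if it is closed and restores it to its original state afterward, so you rarely need to call Open and Close around it yourself.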

Use the Fill Method

Visual Basic .NET

1. Press F7 to display the code editor for the DataAdapters form.

2. Select btnFill in the ClassName list, and then select Click in the

MethodName list.

Visual Studio displays the Click event handler template.

3. Add the following lines of code to the sub to clear each DataSet:

Me.dsCategories1.Clear()
Me.dsProducts1.Clear()

4. Add the following code to fill each DataSet from the DataAdapters:

Me.daCategories.Fill(Me.dsCategories1.Categories)
Me.daProducts.Fill(Me.dsProducts1.Table)

5. Press F5 to run the program.

6. Click Fill.

7. Verify that each of the data grids has been filled correctly, and then close the application.

Visual C# .NET

1. Double-click the Fill button.

Visual Studio adds a Click event handler to the code window.

2. Add the following code to the event handler:

private void btnFill_Click(object sender, System.EventArgs e)
{
    this.dsCategories1.Clear();
    this.dsProducts1.Clear();
}

These lines clear the contents of each DataSet.

3. Add the following code to fill each DataSet from the DataAdapters:

this.daCategories.Fill(this.dsCategories1.Categories);
this.daProducts.Fill(this.dsProducts1._Table);

4. Press F5 to run the program.

5. Click Fill.

6. Verify that each of the data grids has been filled correctly, and then close the application.

The Update Method

Remember that the DataSet doesn’t retain any knowledge about the source of the data it

contains, and that the changes you make to DataSet rows aren’t automatically

propagated back to the data source. You must use the DataAdapter’s Update method to

do this. The Update method calls the DataAdapter’s InsertCommand, DeleteCommand,

or UpdateCommand, as appropriate, for each row in a DataSet that has changed.

The System.Data.Common.DbDataAdapter, which you will recall is the DataAdapter

class from which relational database Data Providers inherit their DataAdapters, supports

a number of versions of the Update method, as shown in Table 4-6. Neither the SqlDataAdapter nor the OleDbDataAdapter adds any additional versions.

Table 4-6: DbDataAdapter Update Methods

Update(DataSet)
    Updates the data source from a DataTable named Table in the specified DataSet.

Update(dataRows)
    Updates the data source from the specified array of dataRows.

Update(DataTable)
    Updates the data source from the specified DataTable.

Update(dataRows, DataTableMapping)
    Updates the data source from the specified array of dataRows, using the specified DataTableMapping.

Update(DataSet, sourceTable)
    Updates the data source from the DataTable specified in sourceTable in the specified DataSet.
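A typical round trip, sketched here with assumed object names, pairs Fill and Update around an in-memory edit:

```csharp
using System.Data;
using System.Data.SqlClient;

da.Fill(ds, "Categories");

// Edit a row in memory; nothing reaches the server yet.
ds.Tables["Categories"].Rows[0]["CategoryName"] = "Old Beverages";

// Update examines each row's RowState and runs the matching
// InsertCommand, UpdateCommand, or DeleteCommand against the
// data source. It returns the number of rows processed.
int rowsChanged = da.Update(ds, "Categories");
```

This is exactly what the Update button does in the exercise that follows.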

Update a Data Source Using the Update Method

Visual Basic .NET

1. In the code editor, select the btnUpdate control in the ControlName

list, and then select the Click event in the MethodName list.

Visual Studio displays the Click event handler template.

2. Add the following code to call the Update method:

Me.daCategories.Update(Me.dsCategories1.Categories)

3. Press F5 to run the application.

4. Click Fill.

The application fills the data grids.

Tip You can drag the data grid’s column headings to widen them.

5. Click the CategoryName of the first row, and then change its value

from Beverages to Old Beverages.


6. Click Update.

The application updates the data source.

7. Click Fill to ensure that the change has been propagated to the data

source.

8. Close the application.

Visual C# .NET

1. Add the following event handler in the code editor, below the btnFill_Click handler we added in the previous exercise:

private void btnUpdate_Click(object sender, System.EventArgs e)
{
    this.daCategories.Update(this.dsCategories1.Categories);
}

2. Add the following code to connect the event handler in the class definition:

this.btnUpdate.Click += new EventHandler(this.btnUpdate_Click);

3. Press F5 to run the application.

4. Click Fill.

The application fills the data grids.

Tip You can drag the data grid’s column headings to widen them.

5. Click the CategoryName of the first row, and then change its value from Beverages to Old Beverages.

6. Click Update.

The application updates the data source.

7. Click Fill to ensure that the change has been propagated to the data source.

8. Close the application.

Handling DataAdapter Events

Other than the events caused by errors, the DataAdapter supports only two events:

OnRowUpdating and OnRowUpdated. These two events occur on either side of the

actual dataset update, providing fine control of the process.

OnRowUpdating Event

The OnRowUpdating event is raised after the Update method has set the parameter

values of the command to be executed but before the command is executed. The event

handler for this event receives an argument whose properties provide essential

information about the command that is about to be executed.

The class of the event arguments is defined by the Data Provider, so it will be either

OleDbRowUpdatingEventArgs or SqlRowUpdatingEventArgs if one of the .NET

Framework Data Providers is used. The properties of RowUpdatingEventArgs are shown

in Table 4-7.

Table 4-7: RowUpdatingEventArgs Properties

Command
    The Data Command to be executed.

Errors
    The errors generated by the .NET Data Provider.

Row
    The DataRow to be updated.

StatementType
    The type of Command to be executed. The possible values are Select, Insert, Delete, and Update.

Status
    The UpdateStatus of the Command.

TableMapping
    The DataTableMapping used by the update.

The Command property contains a reference to the actual Command object that will be

used to update the data source. Using this reference, you can, for example, examine the

Command’s CommandText property to determine the SQL that will be executed and

change it if necessary.

The StatementType property of the event argument defines the action that is to be

performed. The property is an enumeration that can evaluate to Select, Insert, Update, or

Delete. The StatementType property is read-only, so you cannot use it to change the

type of action to be performed.

The Row property contains a read-only reference to the DataRow to be propagated to

the data source, while the TableMapping property contains a reference to the

DataTableMapping that is being used for the update.

When the event handler is first called, the Status property, which is an UpdateStatus

enumeration, defines the status of the event. If it is ErrorsOccurred, the Errors property

will contain a collection of Errors.

You can set the Status property within the event handler to determine what action the

system is to take. In addition to ErrorsOccurred, which causes an exception to be thrown,

the possible exit status values are Continue, SkipAllRemainingRows, and

SkipCurrentRow. Continue, which is the default value, does exactly what you would

expect—it instructs the system to continue processing. SkipAllRemainingRows actually

discards the update to the current row, as well as any remaining unprocessed rows,

while SkipCurrentRow only cancels processing for the current row.
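As an illustrative sketch (the handler name and wiring follow the exercise below; the delete-veto rule is a hypothetical policy), a RowUpdating handler can cancel individual rows by setting Status:

```csharp
using System.Data;
using System.Data.SqlClient;

private void daCategories_RowUpdating(object sender,
    SqlRowUpdatingEventArgs e)
{
    // Veto deletes only; inserts and updates proceed normally.
    if (e.StatementType == StatementType.Delete)
    {
        e.Status = UpdateStatus.SkipCurrentRow;
    }
}
```

Because SkipCurrentRow is set before the command executes, the row never reaches the data source, but processing continues with the remaining changed rows.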

Respond to an OnRowUpdating Event

Visual Basic .NET

1. In the code editor, select daCategories in the ControlName list and

then select RowUpdating in the MethodName list.

Visual Studio displays the RowUpdating event handler template.

2. Add the following text to the Messages control to indicate that the

event has been triggered:

Me.txtMessages.Text &= vbCrLf & "Beginning Update..."

3. Press F5 to run the application, and then click Fill to fill the data grids.

4. Change the CategoryName for Category 1, which we changed to Old

Beverages in the previous exercise, back to Beverages.


5. Click Update.

The application updates the text in the Messages control.

6. Close the application.

Visual C# .NET

1. Add the following event handler in the code editor:

private void daCategories_RowUpdating(object sender,
    System.Data.SqlClient.SqlRowUpdatingEventArgs e)
{
    string strMsg;

    strMsg = "\nBeginning update...";
    this.txtMessages.Text += strMsg;
}

The code adds text to the Messages control to indicate that the event has been triggered.

2. Add the following code to connect the event handler in the class definition:

this.daCategories.RowUpdating += new
    System.Data.SqlClient.SqlRowUpdatingEventHandler(this.daCategories_RowUpdating);

3. Press F5 to run the application, and then click Fill to fill the data grids.

4. Change the CategoryName for Category 1, which we changed to Old Beverages in the previous exercise, back to Beverages.

5. Click Update.

The application updates the text in the Messages control.

6. Close the application.

Examine the RowUpdatingEventArgs Properties

Visual Basic .NET

1. Add the following lines to the daCategories_RowUpdating event handler that you created in the previous exercise:

Me.txtMessages.Text &= vbCrLf & ("Executing a command of type " _
    & e.StatementType.ToString)

2. Press F5 to run the application, and then click Fill.

3. Change the CategoryName of Category 1 to New Beverages, and then click Update.

The application updates the text in the Messages control.

4. Close the application.

Visual C# .NET

1. Change the daCategories_RowUpdating event handler that you created in the previous exercise so that it reads:

string strMsg;

strMsg = "\nExecuting a command of type ";
strMsg += e.StatementType.ToString();
this.txtMessages.Text += strMsg;

2. Press F5 to run the application, and then click Fill.

3. Change the CategoryName of Category 1 to New Beverages, and then click Update.

The application updates the text in the Messages control.

4. Close the application.


OnRowUpdated Event

The OnRowUpdated event is raised after the Update method executes the appropriate

command against the data source. The event handler for this event is either passed an

SqlRowUpdatedEventArgs or an OleDbRowUpdatedEventArgs argument, depending on

the Data Provider.

Either way, the event argument contains all of the same properties as the RowUpdatingEvent argument, plus an additional property, a read-only RecordsAffected property that indicates the number of rows that were changed, inserted, or deleted by the SQL command that was executed.
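A handler might use RecordsAffected to detect rows the command failed to touch. This is a sketch (names follow the exercises; the zero-rows check is one common optimistic-concurrency test, not part of the exercises themselves):

```csharp
using System.Data;
using System.Data.SqlClient;

private void daCategories_RowUpdated(object sender,
    SqlRowUpdatedEventArgs e)
{
    // Zero rows affected usually means another user changed or
    // deleted the row after we read it; skip it rather than fail.
    if (e.RecordsAffected == 0)
    {
        e.Status = UpdateStatus.SkipCurrentRow;
    }
}
```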

Respond to an OnRowUpdated Event

Visual Basic .NET

1. Select daCategories in the ControlName list and then select

RowUpdated in the MethodName list.

Visual Studio displays the RowUpdated event handler template.

2. Add the following text to the Messages control to indicate that the

event has been triggered:

Me.txtMessages.Text &= vbCrLf & "Update completed"

3. Press F5 to run the application, and then click Fill to fill the data grids.

4. Change the CategoryName for Category 1, which we changed to New

Beverages in the previous exercise, back to Beverages.

5. Click Update.

The application updates the text in the Messages control.


6. Close the application.

Visual C# .NET

1. Add the following code to add the RowUpdated event template to the code editor:

private void daCategories_RowUpdated(object sender,
    System.Data.SqlClient.SqlRowUpdatedEventArgs e)
{
    string strMsg;

    strMsg = "\nUpdate Completed.";
    this.txtMessages.Text += strMsg;
}

2. Add the following code to connect the event handler in the class definition:

this.daCategories.RowUpdated +=
    new System.Data.SqlClient.SqlRowUpdatedEventHandler(this.daCategories_RowUpdated);

3. Press F5 to run the application, and then click Fill to fill the data grids.

4. Change the CategoryName for Category 1, which we changed to New Beverages in the previous exercise, back to Beverages.

5. Click Update.

The application updates the text in the Messages control.

6. Close the application.

Examine the RowUpdatedEventArgs Properties

Visual Basic .NET

1. Add the following lines to the daCategories_RowUpdated event handler that you created in the previous exercise:

Me.txtMessages.Text &= ", " & e.RecordsAffected.ToString & " record(s) updated."

2. Press F5 to run the application, and then click Fill.

3. Change the CategoryName of Category 1 to Beverages 2, and then click Update.

The application updates the text in the Messages control.

4. Close the application.

Visual C# .NET

1. Change the daCategories_RowUpdated event handler that you created in the previous exercise so that it reads:

string strMsg;

strMsg = "\nUpdate Completed.";
strMsg += ", " + e.RecordsAffected.ToString();
strMsg += " record(s) updated.";
this.txtMessages.Text += strMsg;

2. Press F5 to run the application, and then click Fill.

3. Change the CategoryName of Category 1 to Beverages 2, and then click Update.

The application updates the text in the Messages control.

4. Close the application.


Chapter 4 Quick Reference

Create a DataAdapter in the Server Explorer
    Drag a table into the form designer.

Create a DataAdapter using the Toolbox
    Drag an OleDbDataAdapter or an SqlDataAdapter onto the form designer. Cancel the Data Adapter Configuration Wizard if you wish to configure the DataAdapter manually.

Create a DataAdapter in code
    Declare the DataAdapter variable and the four Command object variables, and then instantiate them and assign the Command objects to the DataAdapter.

Preview the results of a DataAdapter
    Select the DataAdapter in the form designer, and then click Preview Dataset in the Properties window.

Chapter 5: Transaction Processing in ADO.NET

Overview

In this chapter, you’ll learn how to:

§ Create a transaction

§ Create a nested transaction

§ Commit a transaction

§ Roll back a transaction


In the last few chapters, we’ve seen how ADO.NET data provider objects interact in the

process of editing and updating. In this chapter, we’ll complete our examination of data

providers in ADO.NET with an exploration of transaction processing.

Understanding Transactions

A transaction is a series of actions that must be treated as a single unit of work—either

they must all succeed, or they must all fail. The classic example of a transaction is the

transfer of funds from one bank account to another. To transfer the funds, an amount,

say $100, is withdrawn from one account and deposited in the other. If the withdrawal

were to succeed while the deposit failed, money would be lost into cyberspace. If the

withdrawal were to fail and the deposit succeed, money would be invented. Clearly, if

either action fails, they must both fail.

ADO.NET supports transactions through the Transaction object, which is created against

an open connection. Commands that are executed against the connection while the

transaction is pending must be enrolled in the transaction by assigning a reference to the

Transaction object to their Transaction property. Commands cannot be executed against

the Connection outside the transaction while it is pending.

If the transaction is committed, all of the commands that form a part of that transaction

will be permanently written to the data source. If the transaction is rolled back, all of the

commands will be discarded at the data source.
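In outline (the connection object and the UPDATE statements are placeholders, not working SQL), the pattern is: begin on the connection, enroll each command, then commit or roll back:

```csharp
using System.Data.SqlClient;

cn.Open();
SqlTransaction trn = cn.BeginTransaction();
try
{
    // Enrolling via the constructor sets the command's Transaction property.
    SqlCommand cmdDebit = new SqlCommand(
        "UPDATE Accounts SET Balance = Balance - 100 WHERE ID = 1", cn, trn);
    SqlCommand cmdCredit = new SqlCommand(
        "UPDATE Accounts SET Balance = Balance + 100 WHERE ID = 2", cn, trn);

    cmdDebit.ExecuteNonQuery();
    cmdCredit.ExecuteNonQuery();

    trn.Commit();      // both changes become permanent together
}
catch
{
    trn.Rollback();    // both changes are discarded together
    throw;
}
finally
{
    cn.Close();
}
```

The try/catch structure mirrors the all-or-nothing rule above: any failure before Commit sends both commands through Rollback.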

Creating Transactions

The Transaction object is implemented as part of the data provider. There is a version for

each of the intrinsic data providers: OleDbTransaction in the System.Data.OleDb

namespace and SqlTransaction in the System.Data.SqlClient namespace.

The SqlTransaction object is implemented using Microsoft SQL Server transactions—

creating a SqlTransaction maps directly to the BeginTransaction statement. The

OleDbTransaction is implemented within OLE DB. No matter which data provider you

use, you shouldn’t explicitly issue BeginTransaction commands on the database.

Creating New Transactions

Transactions are created by calling the BeginTransaction method of the Connection

object, which returns a reference to a Transaction object. BeginTransaction is

overloaded, allowing an IsolationLevel to optionally be specified, as shown in Table 5-1.

The Connection must be valid and open when BeginTransaction is called.

Table 5-1: Connection BeginTransaction Methods

BeginTransaction()
    Begins a transaction.

BeginTransaction(IsolationLevel)
    Begins a transaction at the specified IsolationLevel.

Because SQL Server supports named transactions, the SqlClient data provider exposes

two additional versions of BeginTransaction, as shown in Table 5-2.

Table 5-2: Additional SQL BeginTransaction Methods

BeginTransaction(TransactionName)
    Begins a transaction with the name specified in the TransactionName string.

BeginTransaction(IsolationLevel, TransactionName)
    Begins a transaction at the specified IsolationLevel with the name specified in the TransactionName string.

ADO Unlike ADO, the ADO.NET Commit and Rollback methods are exposed on the Transaction object, not the Connection object.

The optional IsolationLevel parameter to the BeginTransaction method specifies the

connection’s locking behavior. The possible values for IsolationLevel are shown in Table

5-3.

Table 5-3: Isolation Levels

Chaos
    Pending changes from more highly ranked transactions cannot be overwritten.

ReadCommitted
    Shared locks are held while the data is being read, but data can be changed before the end of the transaction.

ReadUncommitted
    No shared locks are issued and no exclusive locks are honored.

RepeatableRead
    Exclusive locks are placed on all data used in the query.

Serializable
    A range lock is placed on the DataSet.

Unspecified
    An existing isolation level cannot be determined.

Create a New Transaction

Visual Basic .NET

1. Open the Transactions project from the Microsoft Visual Studio .NET

Start Page or by using the File menu.

2. Double-click Transactions.vb to display the form in the form designer.

3. Double-click Create.

Visual Studio opens the code editor window and adds the Click event handler.

4. Add the following code to the procedure:

Dim strMsg As String
Dim trnNew As System.Data.OleDb.OleDbTransaction

Me.cnAccessNwind.Open()
trnNew = Me.cnAccessNwind.BeginTransaction()
strMsg = "Isolation Level: "
strMsg &= trnNew.IsolationLevel.ToString
MessageBox.Show(strMsg)
Me.cnAccessNwind.Close()

The code creates a new Transaction using the default method, and then displays its IsolationLevel in a message box.

5. Press F5 to run the application.

6. Click Load Data.

The application fills the DataSet and displays the Customers and Orders lists.

7. Click Create.

The application displays the transaction’s IsolationLevel in a message box.

8. Click OK in the message box, and then close the application.

Visual C# .NET

1. Open the Transactions project from the Visual Studio Start Page or by

using the File menu.

2. Double-click Transactions.cs to display the form in the form designer.

3. Double-click Create.

Visual Studio opens the code editor window and adds the Click event handler.

4. Add the following code to the procedure:

5. string strMsg;

6. System.Data.OleDb.OleDbTransaction trnNew;

7.

8. this.cnAccessNwind.Open();

9. trnNew = this.cnAccessNwind.BeginTransaction();

10. strMsg = “Isolation Level: “;

11. strMsg += trnNew.IsolationLevel.ToString();

12. MessageBox.Show(strMsg);

this.cnAccessNwind.Close();

The code creates a new Transaction using the default method, and then

displays its IsolationLevel in a message box.

13. Press F5 to run the application.


14. Click Load Data.

The application fills the DataSet and displays the Customers and Orders lists.

15. Click Create.

The application displays the transaction’s IsolationLevel in a message box.

16. Click OK in the message box, and then close the application.


Creating Nested Transactions

Although it isn’t possible to have two transactions on a single Connection, the

OleDbTransaction object supports nested transactions. (They aren’t supported on SQL

Server.)

ADO Multiple transactions on a single Connection, which were

supported in ADO, are no longer supported in ADO.NET.

The syntax for creating a nested transaction parallels that for creating a first-level

transaction, as shown in Table 5-4. The difference is that nested transactions are

created by calling the Begin method on the OleDbTransaction object itself, not

BeginTransaction on the Connection.

All nested transactions must be committed or rolled back before the transaction

containing them is committed; however, if the parent (containing) transaction is rolled

back, the nested transactions will also be rolled back, even if they have previously been

committed.
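These nesting rules can be sketched with a rough analogy in Python's sqlite3 module. This is not ADO.NET, and SQLite SAVEPOINTs are not OleDbTransaction nesting, but they show the same behavior described above: an inner unit of work must be released or rolled back before the outer transaction completes, and even a released (committed) child is discarded when the parent rolls back.

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode so we can issue
# BEGIN/SAVEPOINT/ROLLBACK statements ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (v INTEGER)")

conn.execute("BEGIN")                  # outer ("master") transaction
conn.execute("INSERT INTO t VALUES (1)")

conn.execute("SAVEPOINT child")        # nested ("child") transaction
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("RELEASE child")          # child is committed...

conn.execute("ROLLBACK")               # ...but the outer rollback discards it too
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # prints 0
```

As in the ADO.NET case, committing the child alone guarantees nothing until the containing transaction commits.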

Table 5-4: OleDbTransaction Begin Methods

Method                   Description
Begin()                  Begins a nested transaction
Begin(IsolationLevel)    Begins a nested transaction at the specified IsolationLevel

Create a Nested Transaction

Visual Basic .NET

1. Select btnNested in the code editor’s ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the procedure:

3. Dim strMsg As String

4. Dim trnMaster As System.Data.OleDb.OleDbTransaction

5. Dim trnChild As System.Data.OleDb.OleDbTransaction

6.

7. Me.cnAccessNwind.Open()

8.

9. trnMaster = Me.cnAccessNwind.BeginTransaction

10.

11. trnChild = trnMaster.Begin()

12. strMsg = “Child Isolation Level: ”

13. strMsg &= trnChild.IsolationLevel.ToString

14. MessageBox.Show(strMsg)

15.

Me.cnAccessNwind.Close()

The code first creates a transaction, trnMaster, on the Connection object. It

then creates a second, nested transaction, trnChild, on the trnMaster

transaction, and displays its IsolationLevel in a message box.


16. Press F5 to run the application.

17. Click Load Data.

18. Click Create Nested.

The application displays the child transaction’s IsolationLevel in a message

box.

19. Click OK in the message box, and then close the application.

Visual C# .NET

1. Add the following procedure to the code:

2. private void btnNested_Click(object sender, System.EventArgs e)

3. {

4. string strMsg;

5. System.Data.OleDb.OleDbTransaction trnMaster;

6. System.Data.OleDb.OleDbTransaction trnChild;

7.

8. this.cnAccessNwind.Open();

9.

10. trnMaster =

this.cnAccessNwind.BeginTransaction();

11.

12. trnChild = trnMaster.Begin();

13. strMsg = “Child Isolation Level: “;

14. strMsg += trnChild.IsolationLevel.ToString();

15. MessageBox.Show(strMsg);

16.

17. this.cnAccessNwind.Close();

}

The code first creates a transaction, trnMaster, on the Connection object. It

then creates a second, nested transaction, trnChild, on the trnMaster

transaction, and displays its IsolationLevel in a message box.

18. Add the code to bind the click handler to the top of the

frmTransactions() sub:

19. this.btnNested.Click += new

EventHandler(this.btnNested_Click);

20. Press F5 to run the application.

21. Click Load Data.

22. Click Create Nested.

The application displays the child transaction’s IsolationLevel in a message

box.


23. Click OK in the message box, and then close the application.

Using Transactions

There are three steps to using transactions after they are created. First they are

assigned to the commands that will participate in them, then the commands are

executed, and finally the transaction is closed by either committing it or rolling it back.

Assigning Transactions to a Command

Once a transaction has been begun on a connection, all commands executed against

that connection must participate in that transaction. Unfortunately, this doesn’t happen

automatically—you must set the Transaction property of the command to reference the

transaction.

However, once the transaction is committed or rolled back, the transaction reference in

any commands that participated in the transaction will be reset to Nothing, so it isn’t

necessary to do this step manually.

Committing and Rolling Back Transactions

The final step in transaction processing is to commit or roll back the changes that were

made by the commands participating in the transaction. If the transaction is committed,

all of the changes will be accepted in the data source. If it is rolled back, all of the

changes will be discarded, and the data source will be returned to the state it was in

before the transaction began.

Transactions are committed using the transaction’s Commit method and rolled back

using the transaction’s Rollback method, neither of which takes any parameters. The

actions are typically wrapped in a Try…Catch block.
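The same commit-or-rollback pattern can be sketched in Python with the sqlite3 module (an analogy only; the table and the "AAAA1"-style customer IDs are hypothetical, echoing the exercises below, not part of the book's sample database):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE customers (id TEXT PRIMARY KEY)")

def add_customer(customer_id):
    conn.execute("BEGIN")              # begin the transaction
    try:
        conn.execute("INSERT INTO customers VALUES (?)", (customer_id,))
        conn.execute("COMMIT")         # accept the changes in the data source
        return "Transaction Committed"
    except sqlite3.DatabaseError:
        conn.execute("ROLLBACK")       # discard the changes
        return "Transaction Rolled Back"

print(add_customer("AAAA1"))   # prints "Transaction Committed"
print(add_customer("AAAA1"))   # duplicate key -> prints "Transaction Rolled Back"
```

The try/except block plays the role of the Try…Catch block in the exercises that follow: commit on success, roll back on error, and in either case the connection is left clean.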

Commit a Transaction

Visual Basic .NET

1. Select btnCommit in the ControlName list, and then select Click in the

MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following lines to the procedure:

3. Dim trnNew As System.Data.OleDb.OleDbTransaction

4.

5. AddRows(“AAAA1”)


6.

7. Me.cnAccessNwind.Open()

8. trnNew = Me.cnAccessNwind.BeginTransaction()

9. Me.daCustomers.InsertCommand.Transaction = trnNew

10. Me.daOrders.InsertCommand.Transaction = trnNew

11. Try

12.

Me.daCustomers.Update(Me.dsCustomerOrders1.CustomerList)

13. Me.daOrders.Update(Me.dsCustomerOrders1.Orders)

14. trnNew.Commit()

15. MessageBox.Show(“Transaction Committed”)

16. Catch err As System.Data.OleDb.OleDbException

17. trnNew.Rollback()

18. MessageBox.Show(err.Message.ToString)

19. Finally

20. Me.cnAccessNwind.Close()

End Try

The AddRows procedure, which is provided in Chapter 1, adds a Customer

row and an Order for that Customer.

Within a Try…Catch block, the code commits the two Update commands if

they succeed, and then displays a message confirming that the transaction

has completed without errors.

21. Press F5 to run the application.

22. Click Load Data.

The application fills the DataSet, and then displays the Customers and Orders

lists.

23. Click Commit.

The application displays a message box confirming the updates.

24. Click OK in the message box, and then click Load Data to confirm

that the rows have been added.


25. Close the application.

Visual C# .NET

1. Add the following procedure to the code:

2. private void btnCommit_Click(object sender, System.EventArgs

e)

3. {

4. System.Data.OleDb.OleDbTransaction trnNew;

5.

6. AddRows(“AAAA1”);

7.

8. this.cnAccessNwind.Open();

9. trnNew = this.cnAccessNwind.BeginTransaction();

10. this.daCustomers.InsertCommand.Transaction =

trnNew;

11. this.daOrders.InsertCommand.Transaction =

trnNew;

12. try

13. {

this.daCustomers.Update(this.dsCustomerOrders1.CustomerList);

14.

this.daOrders.Update(this.dsCustomerOrders1.Orders);

15. trnNew.Commit();

16. MessageBox.Show(“Transaction Committed”);

17. }

18. catch (System.Data.OleDb.OleDbException err)

19. {

20. trnNew.Rollback();

21. MessageBox.Show(err.Message.ToString());

22. }

23. finally

24. {

25. this.cnAccessNwind.Close();

26. }

}

The AddRows procedure, which is provided in Chapter 1, adds a Customer

row and an Order for that Customer.

Within a Try…Catch block, the code commits the two Update commands if

they succeed, and then displays a message confirming that the transaction

has completed without errors.

27. Add the code to bind the click handler to the top of the

frmTransactions() sub:


this.btnCommit.Click += new EventHandler(this.btnCommit_Click);

28. Press F5 to run the application.

29. Click Load Data.

The application fills the DataSet, and then displays the Customers and Orders

lists.

30. Click Commit.

The application displays a message box confirming the updates.

31. Click OK in the message box, and then click Load Data to confirm

that the rows have been added.

32. Close the application.

Rollback a Transaction

Visual Basic .NET

1. Select btnRollback in the ControlName list, and then select Click in

the MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following lines to the procedure:

3. Dim trnNew As System.Data.OleDb.OleDbTransaction

4.

5. AddRows(“AAAA2”)

6.

7. Me.cnAccessNwind.Open()


8. trnNew = Me.cnAccessNwind.BeginTransaction()

9. Me.daCustomers.InsertCommand.Transaction = trnNew

10. Me.daOrders.InsertCommand.Transaction = trnNew

11. Try

12. Me.daOrders.Update(Me.dsCustomerOrders1.Orders)

13.

Me.daCustomers.Update(Me.dsCustomerOrders1.CustomerList)

14. trnNew.Commit()

15. MessageBox.Show(“Transaction Committed”)

16. Catch err As System.Data.OleDb.OleDbException

17. trnNew.Rollback()

18. MessageBox.Show(err.Message.ToString)

19. Finally

20. Me.cnAccessNwind.Close()

End Try

This procedure is almost identical to the Commit procedure in the previous

exercise. However, because the order of the Updates is reversed so that the

Order is added before the Customer, the first Update will fail and a message

box will display the error.

21. Press F5 to run the application.

22. Click Load Data.

The application fills the DataSet, and then displays the Customers and Orders

lists.

23. Click Rollback.

The application displays a message box explaining the error.


24. Click OK to close the message box, and then click Load Data to

confirm that the rows have not been added.

25. Close the application.

Visual C# .NET

1. Add the following procedure to the code:

2. private void btnRollback_Click(object sender, System.EventArgs

e)

3. {

4. System.Data.OleDb.OleDbTransaction trnNew;

5.

6. AddRows(“AAAA2”);

7.

8. this.cnAccessNwind.Open();

9. trnNew = this.cnAccessNwind.BeginTransaction();

10. this.daCustomers.InsertCommand.Transaction =

trnNew;

11. this.daOrders.InsertCommand.Transaction =

trnNew;

12. try

13. {

14.

this.daOrders.Update(this.dsCustomerOrders1.Orders);

15.

this.daCustomers.Update(this.dsCustomerOrders1.CustomerList);

16. trnNew.Commit();

17. MessageBox.Show(“Transaction Committed”);

18. }

19. catch (System.Data.OleDb.OleDbException err)

20. {

21. trnNew.Rollback();

22. MessageBox.Show(err.Message.ToString());

23. }

24. finally

25. {

26. this.cnAccessNwind.Close();

27. }

}

This procedure is almost identical to the Commit procedure in the previous

exercise. However, because the order of the Updates is reversed so that the

Order is added before the Customer, the first Update will fail and a message

box will display the error.


28. Add the code to bind the click handler to the top of the

frmTransactions() sub:

this.btnRollback.Click += new EventHandler(this.btnRollback_Click);

29. Press F5 to run the application.

30. Click Load Data.

The application fills the DataSet, and then displays the Customers and Orders

lists.

31. Click Rollback.

The application displays a message box explaining the error.

32. Click OK to close the message box, and then click Load Data to

confirm that the rows have not been added.

33. Close the application.


Chapter 5 Quick Reference

To                            Do this
Create a transaction          Call the BeginTransaction method of the Connection object: myTrans = myConn.BeginTransaction()
Create a nested transaction   Call the Begin method of the Transaction object: nestedTrans = myTrans.Begin()
Commit a transaction          Call the Commit method of the Transaction: myTrans.Commit()
Rollback a transaction        Call the Rollback method of the Transaction: myTrans.Rollback()

Part III: Manipulating Data

Chapter 6: The DataSet

Chapter 7: The DataTable

Chapter 8: The DataView

Chapter 6: The DataSet

Overview

In this chapter, you’ll learn how to:

§ Create Typed and Untyped DataSets

§ Add DataTables to DataSets

§ Add DataRelations to DataSets

§ Clone and copy DataSets

Beginning with this chapter, we’ll move away from the ADO.NET Data Providers to

examine the objects that support the manipulation of data in your applications. We’ll start

with the DataSet, the memory-resident structure that represents relational data.

Note In this chapter, we’ll begin an application that we’ll continue to

work with in subsequent chapters.

Understanding DataSets

The structure of the DataSet is shown in the following figure.


ADO.NET supports two distinct kinds of DataSets: Typed and Untyped. Architecturally,

an Untyped DataSet is a direct instantiation of the System.Data.DataSet object, while a

Typed DataSet is a distinct class that inherits from System.Data.DataSet.

In functional terms, a Typed DataSet exposes its tables, and the columns within them, as

object properties. This makes manipulating the DataSet far simpler syntactically because

you can reference tables and columns directly by their names.

For example, given a Typed DataSet called dsOrders that contains a DataTable called

OrderHeaders, you can reference the value of the OrderID column in the first row as:

Me.dsOrders.OrderHeaders(0).OrderID

If you were working with an Untyped DataSet with the same structure, however, you

would need to reference the OrderHeaders DataTable and OrderID Column through the

Tables and Item collections, respectively:

Me.dsOrders.Tables("OrderHeaders").Rows(0).Item("OrderID")

If you’re working in Microsoft Visual Studio, the Visual Studio code editor supports a

Typed DataSet’s tables and columns through IntelliSense, which makes the reference

even easier.

The Typed DataSet provides another important benefit: it allows compile-time type

checking of data values, which is referred to as strong typing. For example, assuming

that OrderTotal is numeric, the compiler would generate an error in the following line:

Me.dsOrders.OrderHeaders(0).OrderTotal = "Hello, world"

But if you were working with an Untyped DataSet, the following line would compile

without error:

Me.dsOrders.Tables("OrderHeaders").Rows(0).Item("OrderTotal") = "Hello, world"

Despite the advantages of the Typed DataSet, there are times when you’ll need an

Untyped DataSet. For example, your application may receive a DataSet from a middle-tier

component or a Web service, and you won’t know the structure of the DataSet until

run time. Or you may need to reconfigure a DataSet’s schema at run time, in which case

regenerating a Typed DataSet would be an unnecessary overhead.
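The typed-versus-untyped trade-off has a loose parallel in Python (this is an analogy, not ADO.NET; the OrderHeader names echo the book's hypothetical example): a "typed" row exposes columns as declared attributes that tools can check ahead of time, while an "untyped" row is a plain dict indexed by string keys and validated only when the code runs, if at all.

```python
from dataclasses import dataclass

# "Typed" row: columns are declared attributes, so misuse is visible to
# static checkers and editors (the analogue of strong typing + IntelliSense).
@dataclass
class OrderHeader:
    OrderID: int
    OrderTotal: float

typed_row = OrderHeader(OrderID=1, OrderTotal=99.5)
print(typed_row.OrderID)          # direct, name-checked access -> 1

# "Untyped" row: string-keyed lookup, nothing checked until run time --
# assigning "Hello, world" to OrderTotal here would go unnoticed by tooling.
untyped_row = {"OrderID": 1, "OrderTotal": 99.5}
print(untyped_row["OrderID"])     # -> 1
```

As with DataSets, the flexible form earns its keep when the structure is not known until run time.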

Creating DataSets

As always, Visual Studio provides several different methods for creating DataSets, both

interactively and programmatically.


Creating Typed DataSets

Roadmap We’ll explore the XML Schema Designer in Chapter 13.

In previous chapters, we created Typed DataSets from DataAdapters by using the

Generate Dataset command. In this chapter, we’ll use the Component Designer. You

can also create them programmatically and by using the XML Schema Designer. We’ll

examine both of those techniques in detail in Part V. We will, however, use the Schema

Designer in this chapter to confirm our changes.

Create a Typed DataSet Using the Component Designer

1. Open the DataSets project from the Start page or the Project menu.

2. Double-click DataSets.vb (or DataSets.cs, if you’re using C#) in the

Solution Explorer.

Visual Studio opens the form in the form designer.

3. Select the daCustomers DataAdapter in the Component Designer.

4. Choose Generate Dataset from the Data menu.

The Generate Dataset dialog box opens.


5. In the New text box, change the name of the new DataSet to

dsMaster.

6. Click OK.

Visual Studio creates a Typed DataSet and adds an instance of it to the

Component Designer.

The DataSet object’s Tables collection can contain multiple DataTables, and the Visual

Studio Generate Dataset dialog box allows you to add the result sets returned by a

DataAdapter to an existing DataSet.

Because all of the result sets returned by the defined DataAdapters are displayed in the

Generate Dataset dialog box, you can add them all in a single operation by selecting the

check boxes next to their names.

Add a DataTable to an Existing Typed DataSet

1. Select daOrders in the Component Designer.


2. Choose Generate dataset from the Data Menu.

Visual Studio displays the Generate dataset dialog.

3. Verify that the default option to add the DataTable to the existing

dsMaster DataSet is selected, and then click OK.

Visual Studio adds the DataTable to dsMaster.

4. Select dsMaster in the Component Designer, and then click View

Schema at the bottom of the Properties window.

Visual Studio opens the XML Schema Designer.

5. Verify that the DataSet contains both DataTables, and then close the

XML Schema Designer.

Creating Untyped DataSets

You can create Untyped DataSets both interactively in Visual Studio and

programmatically at run time. Within Visual Studio, you can create both Typed and

Untyped DataSets by dragging the DataSet control from the Toolbox.

Create an Untyped DataSet Using Visual Studio

1. Drag a DataSet control from the Data tab of the Toolbox onto the form.

Visual Studio displays the Add Dataset dialog.


2. Select the Untyped dataset option, and then click OK.

Visual Studio adds the DataSet to the Component Designer.

3. In the Properties window, change both the DataSetName property and

the Name property to dsUntyped.

The DataSet object supports three versions of the usual New constructor to create an

Untyped DataSet in code, as shown in Table 6-1. Only the first two are typically used in

application programs.

Table 6-1: DataSet Constructors

Method                                     Description
New()                                      Creates an Untyped DataSet with the default name NewDataSet
New(dsName)                                Creates an Untyped DataSet with the name passed in the dsName string
New(SerializationInfo, StreamingContext)   Used internally by the .NET Framework

Create an Untyped DataSet at Run Time

Visual Basic .NET

1. Press F7 to open the code editor.

2. Expand the region labeled Windows Form Designer generated code,

and then scroll to the bottom of the class-level declarations.

3. Add the following declaration to the end of the section:

Dim dsEmployees As New System.Data.DataSet(“dsEmployees”)

Visual C# .NET

1. Press F7 to open the code editor.

2. Add the following declaration to the beginning of the class declaration:

private System.Data.DataSet dsEmployees;

3. Add the following instantiation to the frmDataSets sub, after the call to

InitializeComponent:

dsEmployees = new System.Data.DataSet(“dsEmployees”);

DataSet Properties

The properties exposed by the DataSet object are shown in Table 6-2.

Table 6-2: DataSet Properties

Property             Value
CaseSensitive        Determines whether comparisons are case-sensitive
DataSetName          The name used to reference the DataSet in code
DefaultViewManager   Defines the default filtering and sorting order of the DataSet
EnforceConstraints   Determines whether constraint rules are followed during changes
ExtendedProperties   Custom user information
HasErrors            Indicates whether any of the DataRows in the DataSet contain errors
Locale               The locale information to be used when comparing strings
Namespace            The namespace used when reading or writing an XML document
Prefix               An XML prefix used as an alias for the namespace
Relations            A collection of DataRelation objects that define the relationships of the DataTables within the DataSet
Tables               The collection of DataTables contained in the DataSet

Roadmap We’ll examine the DataSet’s XML-related methods in Chapter

14.


The majority of properties supported by the DataSet are related to its interaction with

XML. We’ll examine these properties in Chapter 14. Of the non-XML properties, the two

most important are the Tables and Relations collections, which contain and define the

data maintained within the DataSet.

The DataSet Tables Collection

Roadmap We’ll examine the properties and methods of DataTables in

detail in Chapter 7.

For Typed DataSets, the contents of the DataSet’s Tables collection are defined by the

DataSet schema. For Untyped DataSets, you can create the tables and their columns

either programmatically or through the Visual Studio designers.

Add a DataTable to an Untyped DataSet Using Visual Studio

1. Select the dsUntyped DataSet in the form designer.

2. In the Properties window, select the Tables property, and then click

the ellipsis button.

The Tables Collection Editor opens.

3. Click Add.

Visual Studio adds a new table called Table1 to the DataSet.

4. Change both the Name and TableName properties to dtMaster.


5. Select the Columns property, and then click the ellipsis button.

The Columns Collection Editor opens.

6. Click Add.

Visual Studio adds a column named Column1 to the DataTable.

7. Set the column’s properties to the values shown in the following table.

Property Value

AllowDbNull False

AutoIncrement True

Caption MasterID

ColumnName MasterID

DataType System.Int32

Name MasterID


8.

9. Click Add again, and then set the new column’s properties to the

values shown in the following table.

Property Value

Caption MasterValue

ColumnName MasterValue

Name MasterValue

10.

11. Click Close.

The Columns Collection Editor closes.

12. In the Tables Collection Editor, click Add to add a second table to the

DataSet.

13. Change both the Name and TableName properties to dtChild.


14. Click the Columns property, and then click the ellipsis button.

The Columns Collection Editor opens.

15. Click Add.

Visual Studio adds a column named Column1 to the DataTable.

16. Set the column’s properties to the values shown in the following table.

Property Value

AllowDbNull False

AutoIncrement True

Caption ChildID

ColumnName ChildID

DataType System.Int32

Name ChildID

17.


18. Click Add again, and then set the column’s properties to the values

shown in the following table.

Property Value

AllowDbNull False

Caption MasterLink

ColumnName MasterLink

DataType System.Int32

Name MasterLink

19.

20. Click Add again, and then set the new column’s properties to the

values shown in the following table.

Property Value

Caption ChildValue

ColumnName ChildValue

Name ChildValue


21.

22. Click Close.

The Columns Collection Editor closes.

23. Click Close on the Tables Collection Editor.

Add a DataTable to an Untyped DataSet at Run Time

Visual Basic .NET

1. In the code editor window, select btnTable in the ControlName list,

and then select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to create the Employees table and its columns:

3. Dim strMessage as String

4.

5. ‘Create the table

6. Dim dtEmployees As System.Data.DataTable

7. dtEmployees = Me.dsEmployees.Tables.Add(“Employees”)

8.

9. ‘Add the columns

10. dtEmployees.Columns.Add(“EmployeeID”, _

11. Type.GetType(“System.Int32”))

12. dtEmployees.Columns.Add(“FirstName”, _

13. Type.GetType(“System.String”))

14. dtEmployees.Columns.Add(“LastName”, _

15. Type.GetType(“System.String”))

16.

17. ‘Fill the DataSet

18. Me.daEmployees.Fill(Me.dsEmployees.Tables("Employees"))

19. strMessage = “The first employee is ”

20. strMessage &= _


21.

Me.dsEmployees.Tables("Employees").Rows(0).Item("LastName")

MessageBox.Show(strMessage)

22. Press F5 to run the application.

23. Click CreateTable.

The application displays a message box containing the last name of the first

employee.

24. Click OK to close the message box, and then close the application.

Visual C# .NET

1. In the form designer, double-click the Create Table button.

Visual Studio adds the Click event handler to the code.

2. Add the following code to create the Employees table and its columns:

3. string strMessage;

4.

5. // Create the table

6. System.Data.DataTable dtEmployees;

7. dtEmployees = this.dsEmployees.Tables.Add("Employees");

8.

9. //Add the columns


10. dtEmployees.Columns.Add(“EmployeeID”,

Type.GetType(“System.Int32”));

11. dtEmployees.Columns.Add(“FirstName”,

Type.GetType(“System.String”));

12. dtEmployees.Columns.Add(“LastName”,

Type.GetType(“System.String”));

13.

14. //Fill the dataset

15. this.daEmployees.Fill(this.dsEmployees.Tables["Employees"]);

16.

17. strMessage = “The first employee is “;

18. strMessage +=

this.dsEmployees.Tables[“Employees”].Rows[0][“LastName”];

MessageBox.Show(strMessage);

19. Press F5 to run the application.

20. Click CreateTable.

The application displays a message box containing the last name of the first

employee.

21. Click OK to close the message box, and then close the application.


DataSet Relations

While the DataSet’s Tables collection defines the structure of the data stored in a

DataSet, the Relations collection defines the relationships between the DataTables. The

Relations collection contains zero or more DataRelation objects, each one representing

the relationship between two tables.

As we’ll see in the next chapter, the DataRelation object allows you to easily move

between parent and child rows—given a parent, you can find all the related children, or

given a child, you can find its parent row. DataRelation objects also provide a

mechanism for enforcing relational integrity through their ChildKeyConstraint and

ParentKeyConstraint properties.

Important Even if constraints are established in the DataRelation

object, they will be enforced only if the DataSet’s

EnforceConstraints property is True.
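What a DataRelation buys you can be sketched in plain Python (no ADO.NET here; the MasterID/MasterLink column names are the hypothetical ones used in this chapter's MasterChild example): given a parent row, find all its children, and given a child, find its parent.

```python
# Two "tables" as lists of dict rows, linked by MasterID -> MasterLink,
# standing in for the dtMaster/dtChild DataTables built in this chapter.
master = [{"MasterID": 1, "MasterValue": "A"},
          {"MasterID": 2, "MasterValue": "B"}]
child = [{"ChildID": 10, "MasterLink": 1},
         {"ChildID": 11, "MasterLink": 1},
         {"ChildID": 12, "MasterLink": 2}]

def get_child_rows(parent_row):
    """All child rows whose foreign key matches the parent's key."""
    return [c for c in child if c["MasterLink"] == parent_row["MasterID"]]

def get_parent_row(child_row):
    """The single parent row a child's foreign key points to."""
    return next(m for m in master if m["MasterID"] == child_row["MasterLink"])

print(len(get_child_rows(master[0])))           # parent 1 has 2 children
print(get_parent_row(child[2])["MasterValue"])  # child 12's parent is "B"
```

A real DataRelation adds what this sketch lacks: the constraint checking (via ChildKeyConstraint and ParentKeyConstraint) that rejects a child whose MasterLink points at no parent.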

Add a DataRelation to an Untyped DataSet Using Visual Studio

1. Select the dsUntyped DataSet in the Component Designer.

2. In the Properties window, select the Relations property, and then click

the ellipsis button.

The Relations Collection Editor opens.

3. Click Add.

The Relation dialog box opens.


4. Change the name of the relation to MasterChild, the Key Column to

MasterID, and the Foreign Key Column to MasterLink.

5. Click OK.

Visual Studio adds the DataRelation to the DataSet.

6. Click Close.

Roadmap We’ll discuss the XML Schema Designer in Chapter 13.

The Visual Studio Relations Collection Editor is available only for Untyped DataSets. For

Typed DataSets, you can use the XML Schema Designer, which we’ll examine in

Chapter 13, or you can add DataRelations programmatically. You can, of course, also

add DataRelations to Untyped DataSets at run time.

Add a DataRelation to a Dataset at Run Time

Visual Basic .NET

1. In the code editor, select btnRelation in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.


2. Add the following code to create the DataRelation:

3. Dim strMessage As String

4.

5. ‘Add a new relation

6. Me.dsMaster1.Relations.Add(“CustomerOrders”, _

7. Me.dsMaster1.CustomerList.CustomerIDColumn, _

8. Me.dsMaster1.OrderTotals.CustomerIDColumn)

9.

10. strMessage = “The name of the DataRelation is ”

11. strMessage &=

Me.dsMaster1.Relations(0).RelationName.ToString

12. MessageBox.Show(strMessage)

13. Press F5 to run the application.

14. Click Create Relation.

The application adds the DataRelation, and then displays a message box

containing the name of the DataRelation.

15. Click OK to close the message box, and then close the application.

Visual C# .NET

1. In the form designer, double-click the Create Relation button.

Visual Studio adds the Click event handler to the code.

2. Add the following code to create the DataRelation:


3. string strMessage;

4.

5. //Add a new relation

6. this.dsMaster1.Relations.Add(“CustomerOrders”,

7. this.dsMaster1.CustomerList.CustomerIDColumn,

8. this.dsMaster1.OrderTotals.CustomerIDColumn);

9.

10. strMessage = “The name of the DataRelation is “;

11. strMessage+=

this.dsMaster1.Relations[0].RelationName.ToString();

MessageBox.Show(strMessage);

12. Press F5 to run the application.

13. Click Create Relation.

The application adds the DataRelation, and then displays a message box

containing the name of the DataRelation.

14. Click OK to close the message box, and then close the application.

DataSet Methods

The primary methods supported by the DataSet object are listed in Table 6-3. Like the

DataSet’s properties, the majority of its methods are related to its interaction with XML

and will be examined in Part V.

Roadmap We’ll examine the relationship between ADO.NET and XML

in Part V.

Table 6-3: Primary DataSet Methods

Method           Description
AcceptChanges    Commits all pending changes to the DataSet
Clear            Empties all the tables in the DataSet
Clone            Copies the structure of a DataSet
Copy             Copies the structure and contents of a DataSet
GetChanges       Returns a DataSet containing only the changed rows in each of its tables
GetXml           Returns an XML representation of the DataSet
GetXmlSchema     Returns an XSD representation of the DataSet’s schema
HasChanges       Returns a Boolean value indicating whether the DataSet has pending changes
InferXmlSchema   Infers a schema from an XML TextReader or file
Merge            Combines two DataSets
ReadXml          Reads an XML schema and data into the DataSet
ReadXmlSchema    Reads an XML schema into the DataSet
RejectChanges    Rolls back all changes pending in the DataSet
Reset            Returns the DataSet to its original state
WriteXml         Writes an XML schema and data from the DataSet
WriteXmlSchema   Writes the DataSet structure as an XML schema

Roadmap We’ll examine the HasChanges, GetChanges,

AcceptChanges and RejectChanges methods in Chapter 9.

HasChanges, GetChanges, AcceptChanges, RejectChanges, and Merge are used when

updating the DataSet’s Tables collection, and we’ll examine them in Chapter 9.

That leaves only three methods: Clear, which we’ve used extensively already; Clone,

which creates an empty copy of the DataSet; and Copy, which creates a complete copy

of the DataSet and its data.

Cloning a DataSet

The Clone method creates an exact structural duplicate of a DataSet, including its

Tables, Relations, and constraints, but none of its data.

Clone a DataSet

Visual Basic .NET

1. In the code editor, select btnClone in the ControlName list, and then select Click in the MethodName list.

Visual Studio adds the Click event handler template.

2. Add the following code to clone the record set:

    Dim strMessage As String
    Dim dsClone As System.Data.DataSet

    dsClone = Me.dsMaster1.Clone()
    strMessage = "The cloned dataset has "
    strMessage &= dsClone.Tables.Count.ToString
    strMessage &= " Tables."
    MessageBox.Show(strMessage)

3. Press F5 to run the application.

4. Click Clone DataSet.

The application displays a message box containing the number of tables in the new DataSet.

5. Close the application.

Visual C# .NET

1. In the form designer, double-click the Clone DataSet button.

Visual Studio adds the Click event handler to the code.

2. Add the following code to clone the record set:

    string strMessage;
    System.Data.DataSet dsClone;

    dsClone = this.dsMaster1.Clone();

    strMessage = "The cloned dataset has ";
    strMessage += dsClone.Tables.Count.ToString();
    strMessage += " tables.";
    MessageBox.Show(strMessage);

3. Press F5 to run the application.

4. Click Clone DataSet.

The application displays a message box containing the number of tables in the new DataSet.

5. Close the application.


Copying a DataSet

Unlike the Clone method, which duplicates only the structure of a DataSet, the Copy

method copies both its structure and its data.

Copy a DataSet

Visual Basic .NET

1. In the code editor, select btnCopy in the ControlName list, and then select Click in the MethodName list.

Visual Studio adds the Click event handler template.

2. Add the following code to copy the DataSet:

    Dim strMessage As String
    Dim dsCopy As System.Data.DataSet

    'Fill the original dataset
    Me.daCustomers.Fill(Me.dsMaster1.CustomerList)

    dsCopy = Me.dsMaster1.Copy
    strMessage = "The copied dataset has "
    strMessage &= _
        dsCopy.Tables("CustomerList").Rows.Count.ToString
    strMessage &= " rows in the CustomerList."
    MessageBox.Show(strMessage)

3. Press F5 to run the application.

4. Click Copy DataSet.

Visual Studio displays a message box containing the number of rows in the CustomerList table.

5. Click OK to close the message box, and then close the application.

Visual C# .NET

1. In the form designer, double-click the Copy DataSet button.

Visual Studio adds the Click event handler to the code.

2. Add the following code to copy the DataSet:

    string strMessage;
    System.Data.DataSet dsCopy;

    //Fill the original dataset
    this.daCustomers.Fill(this.dsMaster1.CustomerList);

    dsCopy = this.dsMaster1.Copy();
    strMessage = "The copied dataset has ";
    strMessage +=
        dsCopy.Tables["CustomerList"].Rows.Count.ToString();
    strMessage += " rows in the CustomerList.";
    MessageBox.Show(strMessage);

3. Press F5 to run the application.

4. Click Copy DataSet.

Visual Studio displays a message box containing the number of rows in the CustomerList table.

5. Click OK to close the message box, and then close the application.

Chapter 6 Quick Reference

To: Create a Typed DataSet using the Component Designer
Do this: Select a DataAdapter, and then choose Generate Dataset from the Data menu.

To: Create an Untyped DataSet using Visual Studio
Do this: Drag a DataSet control from the Data tab of the Toolbox onto the form.

To: Create an Untyped DataSet at run time
Do this: Use the New method of the DataSet object: myDs = New System.Data.DataSet()

To: Add a DataTable to an Untyped DataSet using Visual Studio
Do this: In the Properties window for the DataSet, click the Tables property, and then click the ellipsis button.

To: Add a DataTable to an Untyped DataSet at run time
Do this: Use the Add method of the DataSet's Tables collection: myDS.Tables.Add("Name")

To: Add a DataRelation to an Untyped DataSet using Visual Studio
Do this: In the Properties window, click the Relations property, and then click the ellipsis button.

To: Add a DataRelation to a DataSet at run time
Do this: Use the Add method of the DataSet's Relations collection: myDS.Relations.Add("Name", ParentCol, ChildCol)

To: Clone a DataSet
Do this: Use the Clone method: newDS = myDS.Clone()

To: Copy a DataSet
Do this: Use the Copy method: newDS = myDS.Copy()

Chapter 7: The DataTable

Overview

In this chapter, you’ll learn how to:

§ Create an independent DataTable at run time

§ Add a DataTable to an existing DataSet

§ Add a PrimaryKey constraint by using the FillSchema method

§ Create a calculated column in a DataTable

§ Add a new row to the Rows collection

§ Display the RowState of a DataRow

§ Add a ForeignKey constraint to a DataTable

§ Add a UniqueConstraint to a DataTable

§ Display a subset of rows within a DataTable

§ Retrieve data related to the current DataRow

We’ve been working with DataTables in the previous several chapters, but in this

chapter, we’ll take a detailed look at their structure, properties, and methods.

Understanding DataTables

Remember that we defined the DataSet as an in-memory representation of relational data.

DataTables contain the actual data. They can exist as part of the DataSet’s Tables

collection or can be created independently.

As we’ll see, although the DataTable has properties of its own, it functions primarily as a

container for three collections: the Columns collection, which defines the structure of the
table; the Rows collection, which contains the data itself; and the Constraints collection,

which works in conjunction with the DataTable’s PrimaryKey property to enforce integrity

rules on the data.
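The three collections can be seen together in a short sketch. This fragment is illustrative (the table and column names are invented for the example):

```csharp
// A standalone DataTable and its three collections.
System.Data.DataTable dt = new System.Data.DataTable("Employees");

// Columns defines the structure of the table.
dt.Columns.Add("EmployeeID", System.Type.GetType("System.Int32"));
dt.Columns.Add("LastName", System.Type.GetType("System.String"));

// Setting PrimaryKey adds a UniqueConstraint to the Constraints collection,
// which enforces integrity rules on the data.
dt.PrimaryKey = new System.Data.DataColumn[] { dt.Columns["EmployeeID"] };

// Rows holds the data itself.
dt.Rows.Add(new object[] { 1, "Davolio" });
```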

Creating DataTables

In previous chapters, we used a number of techniques to create DataTables as part of a

DataSet—we used the Fill method of the DataAdapter, the Add method of the DataSet,

and the Table Collection Editor that’s part of Microsoft Visual Studio .NET. Tables can

also be created for Typed DataSets by using the XML Schema Designer in Visual

Studio, as we’ll see in Part V.

In this chapter, we’ll concentrate on creating DataTables at run time, using the DataSet’s

Add method and the DataTable’s New constructor.

Roadmap Run-time DataTables can also be created by using the

DataSet's ReadXml, ReadXmlSchema, and

InferXmlSchema methods. We’ll examine those in Chapter

14.

Creating Independent DataTables

Although DataTables are most often used as part of a DataSet, they can be created

independently. You might want to create an independent DataTable to provide data for a

bound control, for example, or simply so that it can be configured before being added to

the DataSet.

The three forms of the DataTable’s New constructor are shown in Table 7-1. Of these,

only the first two are typically used in application programs.

Table 7-1: DataTable Constructors

Method                                      Description
New()                                       Creates a new DataTable
New(TableName)                              Creates a new DataTable with the name specified in the TableName string
New(SerializationInfo, StreamingContext)    Used internally by the .NET Framework

Create an Independent DataTable Object at Run Time

Visual Basic .NET

1. Open the DataTables project on the Start Page or from the Project menu.

2. Double-click DataTables.vb in the Solution Explorer.

Visual Studio opens the form designer.

3. On the form, double-click Add Table.

Visual Studio adds the Click event handler template to the code.

4. Add the following code to create a DataTable, and then set its name to Employees:

    Dim strMessage As String

    'Create the table
    Dim dtEmployees As New System.Data.DataTable("Employees")

    strMessage = "The table name is "
    strMessage &= dtEmployees.TableName.ToString
    MessageBox.Show(strMessage)

This code uses the New(tableName) version of the constructor to create a DataTable named dtEmployees, and then displays the table name in a message box.

5. Press F5 to run the application.

6. Click Add Table.

The application displays a message box containing the name of the table.

7. Close the application.

Visual C# .NET

1. Open the DataTables project on the Start Page or from the Project menu.

2. Double-click DataTables.cs in the Solution Explorer.

Visual Studio opens the form designer.

3. In the form designer, double-click Add Table.

Visual Studio adds the Click event handler template to the code.

4. Add the following code to create a DataTable, and then set its name to Employees:

    string strMessage;

    //Create the table
    System.Data.DataTable dtEmployees;
    dtEmployees = new System.Data.DataTable("Employees");

    strMessage = "The table name is ";
    strMessage += dtEmployees.TableName.ToString();

    MessageBox.Show(strMessage);

This code uses the New(tableName) version of the constructor to create a DataTable named dtEmployees, and then displays the table name in a message box.

5. Press F5 to run the application.

6. Click Add Table.

The application displays a message box containing the name of the table.

7. Close the application.

Creating DataSet Tables

Table 7-2 shows the four methods that can be used to add a table to the DataSet’s

Tables collection. These methods are called on the Tables collection, not the DataSet
itself; for example, myDataSet.Tables.Add(), not myDataSet.Add().

Table 7-2: DataSet Add Table Methods

Method                         Description
Tables.Add()                   Creates a new DataTable within the DataSet with the name TableN, where N is a sequential number
Tables.Add(TableName)          Creates a new DataTable with the name specified in the TableName string
Tables.Add(DataTable)          Adds the specified DataTable to the DataSet
Tables.AddRange(TableArray)    Adds the DataTables included in the TableArray array to the DataSet

The first version of the Add method creates a DataTable with the name TableN, where N

is a sequential number. Note that this behavior is different from creating an independent

DataTable without passing a table name to the constructor. In the latter case, the

TableName property will be an empty string.

We used the second version of the Add method, Add(TableName), in the previous

chapter. This version creates the new table and sets its TableName property to the string

supplied as a parameter.

You can add an independent DataTable that you’ve created at run time, or add a

DataTable that exists in another DataSet, by using the Add(DataTable) version, while the

AddRange method allows you to add an array of DataTables (again, either DataTables

that you’ve created at run time or DataTables belonging to another DataSet).
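The Add(DataTable) and AddRange forms can be sketched in a few lines (table names here are illustrative, not from the chapter's sample project):

```csharp
System.Data.DataSet ds = new System.Data.DataSet();

// An independent table, which can be configured before joining the DataSet.
System.Data.DataTable dtOrders = new System.Data.DataTable("Orders");
ds.Tables.Add(dtOrders);                         // Add(DataTable)

// AddRange accepts an array of tables in one call.
ds.Tables.AddRange(new System.Data.DataTable[] {
    new System.Data.DataTable("OrderDetails"),
    new System.Data.DataTable("Products") });

// ds.Tables now contains Orders, OrderDetails, and Products.
```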

Create a DataTable Using the Tables.Add Method

Visual Basic .NET

1. In the code editor, select btnDataSet in the ControlName list, and then select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to add a DataTable with a default name to the DataSet:

    Dim strMessage As String

    'Create the table
    Me.dsEmployees.Tables.Add()

    strMessage = "The table name is "
    strMessage &= Me.dsEmployees.Tables(0).TableName.ToString
    MessageBox.Show(strMessage)

The code uses the version of the Add method that creates a new table with the default name of TableN.

3. Press F5 to run the application.

4. Click DataSet Table.

The application displays a message box containing the name of the table.

5. Close the application.

Visual C# .NET

1. In the form designer, double-click the Dataset Table button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to add a DataTable with a default name to the DataSet:

    string strMessage;

    //Create the table
    this.dsEmployees.Tables.Add();

    strMessage = "The table name is ";
    strMessage +=
        this.dsEmployees.Tables[0].TableName.ToString();
    MessageBox.Show(strMessage);

The code uses the version of the Add method that creates a new table with the default name of TableN.

3. Press F5 to run the application.

4. Click DataSet Table.

The application displays a message box containing the name of the table.

5. Close the application.

DataTable Properties

The primary properties of the DataTable are shown in Table 7-3. The most important of

these are the three collections that control the data—Columns, Rows, and Constraints.

We’ll look at each of these in detail later in this chapter.

Table 7-3: DataTable Properties

Property             Description
CaseSensitive        Determines how string comparisons will be performed.
ChildRelations       A collection of DataRelation objects that have this DataTable as the parent table.
Columns              The collection of DataColumn objects within the DataTable.
Constraints          The collection of constraints maintained by the DataTable.
DataSet              The DataSet of which this DataTable is a member.
DisplayExpression    An expression used to represent the table name in the user interface (UI).
HasErrors            Indicates whether there are errors in any of the rows belonging to the DataTable.
ParentRelations      A collection of DataRelation objects that have this DataTable as the child table.
PrimaryKey           An array of columns that function as the primary key of the table.
Rows                 The collection of rows belonging to the table.
TableName            The name of the DataTable in the DataSet. This is the name by which the DataTable is referenced in code.

If the DataTable belongs to a DataSet, the CaseSensitive property will default to the

value of the corresponding DataSet.CaseSensitive property. Otherwise, the default value

will be False.


The ChildRelations and ParentRelations collections contain references to the

DataRelations that reference the table as a child or parent, respectively. For most

independent DataTables, these collections will be Null, but it is theoretically possible to

add a relation to the ChildRelations and ParentRelations collections if, for example, the

DataTable is related to itself.

The DisplayExpression property is similar to the Caption property of a column in that it

determines how the name of the table will be displayed to the user at run time, but unlike

the Caption property, DisplayExpression uses an expression to determine the value at

run time. One of the uses of the DisplayExpression property is to calculate the way the
table is displayed based on the contents of the table.

Using DataTable Properties

Most DataTable properties are set just like the properties of any other object—by a

simple assignment, or if the property is a collection, by calling the collection’s Add

method. Additionally, the structure of a DataTable based on a table in a data source can

be established using the FillSchema method of the DataAdapter. In Chapter 6, we used

FillSchema to load the entire structure of a DataTable. It can also be used to load

DataTable constraints such as the primary key.

Add a PrimaryKey Constraint Using the DataAdapter’s FillSchema Method

Visual Basic .NET

1. In the code editor, select btnSchema in the ControlName list, and then select Click in the MethodName list.

Visual Studio adds the Click event handler template.

2. Add the following code to create the table and its PrimaryKey constraint by using FillSchema:

    Dim strMessage As String

    Me.dsEmployees.Tables.Add("Employees")
    Me.daEmployees.FillSchema(Me.dsEmployees.Tables("Employees"), _
        SchemaType.Source)

    With Me.dsEmployees.Tables("Employees")
        strMessage = "Primary Key: "
        strMessage &= .PrimaryKey(0).ColumnName.ToString
        strMessage &= vbCrLf & "Constraint Name: "
        strMessage &= .Constraints(0).ConstraintName.ToString
        MessageBox.Show(strMessage)
    End With

3. Press F5 to run the application.

4. Click FillSchema.

The application displays a message box showing the column of the primary key and the name of the constraint.

5. Close the application.

Visual C# .NET

1. In the form designer, double-click the FillSchema button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to create the table and its PrimaryKey constraint by using FillSchema:

    string strMessage;
    System.Data.DataTable dt;

    dt = this.dsEmployees.Tables.Add("Employees");

    this.daEmployees.FillSchema(dt,
        SchemaType.Source);

    strMessage = "Primary Key: ";
    strMessage += dt.PrimaryKey[0].ColumnName.ToString();
    strMessage += "\nConstraint Name: ";
    strMessage += dt.Constraints[0].ConstraintName.ToString();
    MessageBox.Show(strMessage);

3. Press F5 to run the application.

4. Click FillSchema.

The application displays a message box showing the column of the primary key and the name of the constraint.

5. Close the application.

The Columns Collection

The DataTable’s Columns collection contains zero or more DataColumn objects that

define the structure of the table. If the DataTable is created by a DataAdapter’s Fill or

FillSchema method, the Columns collection will be generated automatically.

If you’re creating a DataColumn in code, you can use one of the New constructors

shown in Table 7-4.

Table 7-4: DataColumn Constructors

Method                                                  Description
New()                                                   Creates a new DataColumn with no ColumnName or Caption
New(columnName)                                         Creates a new DataColumn with the name specified in the columnName string
New(columnName, dataType)                               Creates a new DataColumn with the name specified in the columnName string and the data type specified by the dataType parameter
New(columnName, dataType, expression)                   Creates a new DataColumn with the name specified in the columnName string and the specified DataType and Expression
New(columnName, dataType, expression, columnMapping)    Creates a new DataColumn with the name specified in the columnName string and the specified DataType, Expression, and ColumnMapping

The primary properties of the DataColumn are shown in Table 7-5. They correspond

closely to the properties of data columns in most relational databases.

Table 7-5: DataColumn Properties

Property             Description
AllowDBNull          Determines whether the column can be left empty
AutoIncrement        Determines whether the system will automatically increment the value of the column
AutoIncrementSeed    The starting value for an AutoIncrement column
AutoIncrementStep    The increment by which an AutoIncrement column will be increased. For example, if the AutoIncrementSeed is 1, and the AutoIncrementStep is 3, the first value will be 1, the second 4, the third 7, and so on
Caption              The name of the column displayed in some controls, such as the DataGrid. The default value is the ColumnName
ColumnName           The name of the column in the table's Columns collection. This is the name by which the column can be referenced in code
DataType             The .NET Framework data type of the column
DefaultValue         The value of the column provided by the system if no other value is provided
Expression           The expression used to calculate the value of the column
MaxLength            The maximum length of a text column
ReadOnly             Determines whether the value of the column can be changed after the row containing it has been added to the table
Unique               Determines whether each row in the table must have a unique value for this column

Important There is an incompatibility between the .NET Framework

decimal data type and the Microsoft SQL Server decimal

data type. The .NET Framework decimal data type allows a

maximum of 28 significant digits, while the SQL Server

decimal data type allows 38 significant digits. If a

DataColumn is defined as System.Decimal and it is filled

from a SQL Server table, any rows containing more than 28

significant digits will cause an exception.
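The seed-and-step arithmetic described in Table 7-5 can be confirmed with a short sketch (the table and column names are illustrative):

```csharp
System.Data.DataTable dt = new System.Data.DataTable("Log");
System.Data.DataColumn dcID =
    dt.Columns.Add("ID", System.Type.GetType("System.Int32"));
dcID.AutoIncrement = true;
dcID.AutoIncrementSeed = 1;
dcID.AutoIncrementStep = 3;

dt.Rows.Add(dt.NewRow());
dt.Rows.Add(dt.NewRow());
dt.Rows.Add(dt.NewRow());
// The three rows receive ID values 1, 4, and 7.
```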

Create a Calculated Column

Visual Basic .NET

1. Select btnCalculate in the ControlName list, and then select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code, which first adds an Employees table to the dsEmployees DataSet and then uses the daEmployees DataAdapter to create the pre-existing columns and fill them with data:

    Dim dcName As System.Data.DataColumn

    'Create the table
    Me.dsEmployees.Tables.Add("Employees")

    'Fill the table from daEmployees
    Me.daEmployees.Fill(Me.dsEmployees.Tables(0))

3. Add the following code to create the column and then add it to the table:

    'Create the column
    dcName = New System.Data.DataColumn("Name")
    dcName.DataType = System.Type.GetType("System.String")
    dcName.Expression = "FirstName + ' ' + LastName"

    'Add the calculated column
    Me.dsEmployees.Tables("Employees").Columns.Add(dcName)

4. Add the following code to bind the lbEmployees list box to the calculated column so that we can see the results:

Important Make sure that you choose the lbEmployees list box, not the lblEmployees label.

    'Bind to the listbox
    Me.lbEmployees.DataSource = Me.dsEmployees.Tables("Employees")
    Me.lbEmployees.DisplayMember = "Name"

5. Press F5 to run the application.

6. Click Calculate.

The application displays the full name of the employees in the list box.

7. Close the application.

Visual C# .NET

1. In the form designer, double-click the Calculate button.

Visual Studio adds the Click event handler to the code window.

2. Add the following procedure, which first adds an Employees table to the dsEmployees DataSet, and then uses the daEmployees DataAdapter to create the pre-existing columns and fill them with data:

    System.Data.DataColumn dcName;

    //Create the table
    this.dsEmployees.Tables.Add("Employees");

    //Fill the table from daEmployees
    this.daEmployees.Fill(this.dsEmployees.Tables[0]);

3. Add the following code to create the column and then add it to the table:

    //Create the column
    dcName = new System.Data.DataColumn("Name");
    dcName.DataType = System.Type.GetType("System.String");
    dcName.Expression = "FirstName + ' ' + LastName";

    //Add the calculated column
    this.dsEmployees.Tables["Employees"].Columns.Add(dcName);

4. Add the following code to bind the lbEmployees list box to the calculated column so that we can see the results:

Important Make sure that you choose the lbEmployees list box, not the lblEmployees label.

    //Bind to the listbox
    this.lbEmployees.DataSource = this.dsEmployees.Tables["Employees"];
    this.lbEmployees.DisplayMember = "Name";

5. Press F5 to run the application.

6. Click Calculate.

The application displays the full name of the employees in the list box.

7. Close the application.

Rows

As we've seen previously, the DataTable's Rows collection contains the table's actual
data, in the form of zero or more DataRow objects. The structure of the DataRow is
shown in Table 7-6.

Table 7-6: DataRow Properties

Property     Description
HasErrors    Indicates whether there are any errors in the row
Item         The value of a column in the DataRow
ItemArray    The value of all columns in the DataRow represented as an array
RowError     The custom error description for a row
RowState     The DataRowState of a row
Table        The DataTable to which the DataRow belongs

Because the Rows property is a collection, you can add new data to the DataTable by

using the Add method, which is available in two forms, as shown in Table 7-7.

Table 7-7: Rows.Add Methods

Method               Description
Add(DataRow)         Adds the specified DataRow to the table
Add(dataValues())    Creates a new DataRow in the table and sets its Item values as specified in the dataValues object array
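The two forms can be compared side by side in a sketch (the table and column names are illustrative):

```csharp
System.Data.DataTable dt = new System.Data.DataTable("CustomerList");
dt.Columns.Add("CustomerID", System.Type.GetType("System.String"));
dt.Columns.Add("CompanyName", System.Type.GetType("System.String"));

// Add(dataValues()): the values are matched to columns by position.
dt.Rows.Add(new object[] { "ANEWR", "A New Row" });

// Add(DataRow): build the row first, then add it.
System.Data.DataRow drNew = dt.NewRow();
drNew["CustomerID"] = "OTHER";
drNew["CompanyName"] = "Another Row";
dt.Rows.Add(drNew);
```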

Add a New Row to the Rows Collection

Visual Basic .NET

1. Select btnAddRow in the ControlName list, and then select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to create a new DataRow, and add it to the Customers table:

    Dim drNew As System.Data.DataRow

    'Create the new row
    drNew = Me.dsMaster1.CustomerList.NewRow
    drNew.Item("CustomerID") = "ANEWR"
    drNew.Item("CompanyName") = "A New Row"

    'Add row to table
    Me.dsMaster1.CustomerList.Rows.Add(drNew)

    'Refresh the display
    Me.lbClients.Refresh()

3. Press F5 to run the application.

4. Click Add DataRow.

The application adds the new row to the table.

5. Scroll to the bottom of the Clients list box to confirm the addition.

6. Close the application.

Visual C# .NET

1. In the form designer, double-click the Add DataRow button.

Visual Studio adds the Click event handler to the code window.

2. Add the following procedure to create a new DataRow, and add it to the Customers table:

    System.Data.DataRow drNew;

    //Create the new row
    drNew = this.dsMaster1.CustomerList.NewRow();
    drNew["CustomerID"] = "ANEWR";
    drNew["CompanyName"] = "A New Row";

    //Add row to table
    this.dsMaster1.CustomerList.Rows.Add(drNew);

    //Refresh the display
    this.lbClients.Refresh();

3. Press F5 to run the application.

4. Click Add DataRow.

The application adds the new row to the table.

5. Scroll to the bottom of the Clients list box to confirm the addition.

6. Close the application.

The RowState property of the DataRow reflects the actions that have been taken since

the DataTable was created or since the last time the AcceptChanges method was called.

The possible values for the RowState property are shown in Table 7-8.

Table 7-8: DataRowState Values

Value        Description
Added        The DataRow is new.
Deleted      The DataRow has been deleted from the table.
Detached     The DataRow has not yet been added to a table.
Modified     The contents of the DataRow have been changed.
Unchanged    The DataRow has not been modified.
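The lifecycle of these states can be traced in a short sketch (the table and column names are illustrative):

```csharp
System.Data.DataTable dt = new System.Data.DataTable("Customers");
dt.Columns.Add("CustomerID", System.Type.GetType("System.String"));

System.Data.DataRow dr = dt.NewRow();  // RowState: Detached
dt.Rows.Add(dr);                       // RowState: Added
dt.AcceptChanges();                    // RowState: Unchanged
dr["CustomerID"] = "NEWVAL";           // RowState: Modified
dr.Delete();                           // RowState: Deleted
```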

Display the Row State

Visual Basic .NET

1. Select btnVersion in the ControlName list, and then select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to edit a row and display its properties:

    Dim strMessage As String

    With Me.dsMaster1.CustomerList.Rows(0)
        .Item("CustomerID") = "NEWVAL"
        strMessage = "The RowState is " & .RowState.ToString
        strMessage &= vbCrLf & "The original value was "
        strMessage &= .Item("CustomerID", DataRowVersion.Original)
        strMessage &= vbCrLf & "The new value is "
        strMessage &= .Item("CustomerID", DataRowVersion.Current)
    End With
    MessageBox.Show(strMessage)

3. Press F5 to run the application.

4. Click Row Version.

The application displays a message box indicating the changes.

5. Close the application.

Visual C# .NET

1. In the form designer, double-click the Row Version button.

Visual Studio adds the Click event handler to the code window.

2. Add the following procedure to edit a row and display its properties:

    string strMessage;
    System.Data.DataRow dr;

    dr = this.dsMaster1.CustomerList.Rows[0];
    dr["CustomerID"] = "NEWVAL";

    strMessage = "The RowState is " + dr.RowState.ToString();
    strMessage += "\nThe original value was ";
    strMessage += dr["CustomerID", DataRowVersion.Original];
    strMessage += "\nThe new value is ";
    strMessage += dr["CustomerID", DataRowVersion.Current];

    MessageBox.Show(strMessage);

3. Press F5 to run the application.

4. Click Row Version.

The application displays a message box indicating the changes.

5. Close the application.

Constraints

Along with the DataTable’s PrimaryKey property, the Constraints collection is used to

maintain the integrity of the data within a DataTable. The System.Data.Constraint object
has only two properties, which are shown in Table 7-9.

Table 7-9: Constraint Properties

Property          Description
ConstraintName    The name of the constraint. This property is used to reference the Constraint in code.
Table             The DataTable to which the constraint belongs.

Obviously, an object that has only a name and a container is of little use when it comes

to enforcing integrity. In real applications, you will instantiate one of the objects that
inherit from Constraint: ForeignKeyConstraint or UniqueConstraint.

The properties of the ForeignKeyConstraint object are shown in Table 7-10. This

constraint represents the rules that are enforced when a parent-child relationship exists

between tables (or between rows within a single table).

Table 7-10: ForeignKeyConstraint Properties

Property            Description
AcceptRejectRule    Determines the action that should take place when the AcceptChanges method is called
Columns             The collection of child columns for the constraint
DeleteRule          The action that will take place when the row is deleted
RelatedColumns      The collection of parent columns for the constraint
RelatedTable        The parent DataTable for the constraint
Table               Overrides the Constraint.Table property to return the child DataTable for the constraint
UpdateRule          The action that will take place when the row is updated

The actions to take place to enforce integrity are maintained by three properties of the

ForeignKeyConstraint: AcceptRejectRule, DeleteRule, and UpdateRule.

The possible values of the AcceptRejectRule property are Cascade or None. The

DeleteRule and UpdateRule properties can be set to any of the values shown in Table 7-

11. Both properties have a default value of Cascade.

Table 7-11: Action Rules

Value         Description
Cascade       Delete or update the related rows
None          Take no action on the related rows
SetDefault    Set values in the related rows to their default values
SetNull       Set values in the related rows to Null
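Setting the rules is a simple assignment before the constraint is added. This sketch reuses the dtMaster/dtChild names from the exercise below, but builds its own tables so the fragment is self-contained:

```csharp
System.Data.DataSet ds = new System.Data.DataSet();
System.Data.DataTable dtMaster = ds.Tables.Add("dtMaster");
dtMaster.Columns.Add("MasterID", System.Type.GetType("System.Int32"));
System.Data.DataTable dtChild = ds.Tables.Add("dtChild");
dtChild.Columns.Add("MasterLink", System.Type.GetType("System.Int32"));

System.Data.ForeignKeyConstraint fkNew =
    new System.Data.ForeignKeyConstraint("NewFK",
        dtMaster.Columns["MasterID"], dtChild.Columns["MasterLink"]);
fkNew.DeleteRule = System.Data.Rule.SetNull;  // deleting a master row nulls MasterLink
fkNew.UpdateRule = System.Data.Rule.Cascade;  // changing MasterID propagates to children
dtChild.Constraints.Add(fkNew);
```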

Add a ForeignKeyConstraint

Visual Basic .NET

1. In the code editor, select btnForeign in the ControlName list, and then select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to create the ForeignKeyConstraint:

    Dim strMessage As String
    Dim fkNew As System.Data.ForeignKeyConstraint

    With Me.dsUntyped
        fkNew = New System.Data.ForeignKeyConstraint("NewFK", _
            .Tables("dtMaster").Columns("MasterID"), _
            .Tables("dtChild").Columns("MasterLink"))
        .Tables("dtChild").Constraints.Add(fkNew)

        strMessage = "The new constraint is called "
        strMessage &= _
            .Tables("dtChild").Constraints(0).ConstraintName.ToString
    End With

    MessageBox.Show(strMessage)

3. Press F5 to run the application.

4. Click Foreign Key.

The application adds the ForeignKeyConstraint and displays its name in a message box.

5. Close the application.

Visual C# .NET

1. In the form designer, double-click the Foreign Key button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to create the ForeignKeyConstraint:

3. string strMessage;
4. System.Data.ForeignKeyConstraint fkNew;
5. System.Data.DataSet ds = this.dsUntyped;
6.
7. fkNew = new System.Data.ForeignKeyConstraint("NewFK",
8. ds.Tables["dtMaster"].Columns["MasterID"],
9. ds.Tables["dtChild"].Columns["MasterLink"]);
10. ds.Tables["dtChild"].Constraints.Add(fkNew);
11.
12. strMessage = "The new constraint is called ";
13. strMessage +=
14. ds.Tables["dtChild"].Constraints[0].ConstraintName.ToString();
MessageBox.Show(strMessage);

15. Press F5 to run the application.

16. Click Foreign Key.

The application adds the ForeignKeyConstraint and displays its name in a

message box.


17. Close the application.

The UniqueConstraint ensures that the column or columns specified in its Columns

property are unique within the table. Its structure is much simpler than a

ForeignKeyConstraint, as shown in Table 7-12.

Table 7-12: UniqueConstraint Properties

Property Description
Columns The array of columns affected by the constraint
IsPrimaryKey Indicates whether the constraint is on the primary key
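Before walking through the exercise, a minimal console sketch (illustrative names, not the exercise project) shows what the constraint enforces: once a UniqueConstraint is in place, adding a duplicate value throws a ConstraintException.

```csharp
using System;
using System.Data;

var dt = new DataTable("dtMaster");
dt.Columns.Add("MasterValue", typeof(string));

// Duplicate MasterValue entries will now be rejected by the table.
dt.Constraints.Add(new UniqueConstraint("NewUnique", dt.Columns["MasterValue"]));

dt.Rows.Add("alpha");
bool rejected = false;
try
{
    dt.Rows.Add("alpha");    // violates NewUnique
}
catch (ConstraintException)
{
    rejected = true;         // the duplicate row was not added
}

Console.WriteLine(rejected);     // True
Console.WriteLine(dt.Rows.Count); // 1 -- only the first row survives
```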

Add a UniqueConstraint

Visual Basic .NET

1. In the code editor, select btnUnique in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to create the UniqueConstraint:

3. Dim strMessage As String
4. Dim ucNew As System.Data.UniqueConstraint
5.
6. With Me.dsUntyped.Tables("dtMaster")
7. ucNew = New System.Data.UniqueConstraint("NewUnique", _
8. .Columns("MasterValue"))
9. .Constraints.Add(ucNew)
10.
11. strMessage = "The new constraint is called "
12. strMessage &= .Constraints("NewUnique").ConstraintName.ToString
13. End With
14.
MessageBox.Show(strMessage)

15. Press F5 to run the application.

16. Click Unique.

The application adds the UniqueConstraint and displays its name in a

message box.

17. Close the application.

Visual C# .NET

1. In the form designer, double-click the Unique button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to create the UniqueConstraint:

3. string strMessage;
4. System.Data.UniqueConstraint ucNew;
5. System.Data.DataTable dt = this.dsUntyped.Tables["dtMaster"];
6.
7. ucNew = new System.Data.UniqueConstraint("NewUnique",
8. dt.Columns["MasterValue"]);
9. dt.Constraints.Add(ucNew);
10.
11. strMessage = "The new constraint is called ";
12. strMessage += dt.Constraints["NewUnique"].ConstraintName.ToString();
MessageBox.Show(strMessage);

13. Press F5 to run the application.

14. Click Unique.

The application adds the UniqueConstraint and displays its name in a

message box.


15. Close the application.

DataTable Methods

The methods supported by the DataTable are shown in Table 7-13. We’ve already used

some of these, such as the Clear method, in previous exercises. We’ll examine most of

the others in Chapter 9.

Table 7-13: DataTable Methods

Method Description
AcceptChanges Commits the pending changes to all DataRows
BeginLoadData Turns off notifications, index maintenance, and constraint enforcement while a bulk data load is being performed. Used in conjunction with the LoadDataRow and EndLoadData methods
Clear Removes all DataRows from the DataTable
Clone Copies the structure of a DataTable
Compute Performs an aggregate operation on the DataTable
Copy Copies the structure and data of a DataTable
EndLoadData Reinstates notifications, index maintenance, and constraint enforcement after a bulk data load has been performed
ImportRow Copies a DataRow, including all row values and the row state, into a DataTable
LoadDataRow Used during bulk updating of a DataTable to update or add a new DataRow
NewRow Creates a new DataRow that matches the DataTable schema
RejectChanges Rolls back all pending changes on the DataTable
Select Gets an array of DataRow objects


The Select Method

The Select method is used to filter and sort the rows of a DataTable at run time. The

Select method doesn’t affect the contents of the table. Instead, the method returns an

array of DataRows that match the criteria you specify.

Note The DataView, which we’ll examine in the following chapter, also

allows you to filter and sort data rows.
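The key point, before the exercise, is that Select is non-destructive. A small console sketch (sample rows modeled loosely on the Northwind customers) makes this visible: the returned array is filtered and sorted, but the table itself is untouched.

```csharp
using System;
using System.Data;

var dt = new DataTable("CustomerList");
dt.Columns.Add("CustomerID", typeof(string));
dt.Columns.Add("CompanyName", typeof(string));
dt.Rows.Add("ALFKI", "Alfreds Futterkiste");
dt.Rows.Add("ANATR", "Ana Trujillo Emparedados");
dt.Rows.Add("BERGS", "Berglunds snabbkop");

// Filter and sort without touching the table itself.
DataRow[] found = dt.Select("CustomerID LIKE 'A*'", "CompanyName ASC");

Console.WriteLine(found.Length);  // 2 -- ALFKI and ANATR match
Console.WriteLine(dt.Rows.Count); // 3 -- Select never removes rows
```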

Use the Select Method to Display a Subset of Rows

Visual Basic .NET

1. In the code editor, select btnSelect in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to select only those Customers whose
CustomerID begins with A, and rebind the lbClients list box to
the array of selected rows:

3. Dim drFound() As System.Data.DataRow
4. Dim dr As System.Data.DataRow
5.
6. drFound = Me.dsMaster1.CustomerList.Select("CustomerID LIKE 'A*'")
7.
8. Me.lbClients.DataSource = Nothing
9. Me.lbClients.Items.Clear()
10.
11. For Each dr In drFound
12. Me.lbClients.Items.Add(dr("CompanyName"))
13. Next
14.
Me.lbClients.Refresh()

15. Press F5 to run the application.

16. Click Select.

The application displays a subset of rows in the lbClients list box.

17. Close the application.


Visual C# .NET

1. In the form designer, double-click the Select button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to select only those Customers whose
CustomerID begins with A, and rebind the lbClients list box to
the array of selected rows:

3. System.Data.DataRow[] drFound;
4.
5. drFound = this.dsMaster1.CustomerList.Select("CustomerID LIKE 'A*'");
6.
7. this.lbClients.DataSource = null;
8. this.lbClients.Items.Clear();
9.
10. foreach (System.Data.DataRow dr in drFound)
11. {
12. this.lbClients.Items.Add(dr["CompanyName"]);
13. }
14.
this.lbClients.Refresh();

15. Press F5 to run the application.

16. Click Select.

The application displays a subset of rows in the lbClients list box.

17. Close the application.

DataRow Methods

The methods supported by the DataRow object are shown in Table 7-14. The majority of

the methods are used when editing data and we’ll look at them in detail in Chapter 9.

Table 7-14: DataRow Methods

Method Description
AcceptChanges Commits all pending changes to a DataRow
BeginEdit Begins an edit operation
CancelEdit Cancels an edit operation
Delete Deletes the row
EndEdit Ends an edit operation
GetChildRows Gets all the child rows of a DataRow
GetParentRow Gets the parent row of a DataRow based on the specified DataRelation
GetParentRows Gets the parent rows of a DataRow based on the specified DataRelation
HasVersion Indicates whether a specified version of a DataRow exists
IsNull Indicates whether the specified Column is Null
RejectChanges Rolls back all pending changes to the DataRow
SetParentRow Sets the parent row of a DataRow


The GetChildRows and GetParentRows methods of the DataRow are used to navigate

the relationships you set up using the DataSet’s Relations collection. Both methods are

overloaded, allowing you to pass either a DataRelation or a string representing the name

of the DataRelation, and, optionally, a RowState value.
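The navigation is easier to see in isolation than inside the bound form. The following console sketch (illustrative names, patterned on the exercise's CustomerList and OrderTotals tables) builds a relation and walks it with GetChildRows:

```csharp
using System;
using System.Data;

var ds = new DataSet();
DataTable cust = ds.Tables.Add("CustomerList");
cust.Columns.Add("CustomerID", typeof(string));
DataTable orders = ds.Tables.Add("OrderTotals");
orders.Columns.Add("CustomerID", typeof(string));
orders.Columns.Add("OrderID", typeof(int));

// The DataRelation is what GetChildRows navigates.
ds.Relations.Add("CustomerOrders",
    cust.Columns["CustomerID"], orders.Columns["CustomerID"]);

cust.Rows.Add("ALFKI");
cust.Rows.Add("BONAP");
orders.Rows.Add("ALFKI", 10643);
orders.Rows.Add("ALFKI", 10692);
orders.Rows.Add("BONAP", 10331);

// Either the DataRelation object or its name can be passed.
DataRow[] children = cust.Rows[0].GetChildRows("CustomerOrders");
Console.WriteLine(children.Length); // 2 -- the two ALFKI orders
```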

Use the GetChildRows Method to Retrieve Data

Visual Basic .NET

1. In the code editor, select lbClients in the ControlName list, and then

select SelectedIndexChanged in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to create a relation in dsMaster1, retrieve the

child rows of the current list box selection, and then display them in

the dgOrders data grid:

3. Dim drCurrent As System.Data.DataRow
4. Dim dsCustOrders As New System.Data.DataSet()
5. Dim drCustOrders() As System.Data.DataRow
6.
7. 'Create the relation if necessary
8. If Me.dsMaster1.Relations.Count = 0 Then
9. Me.dsMaster1.Relations.Add("CustomerOrders", _
10. Me.dsMaster1.CustomerList.CustomerIDColumn, _
11. Me.dsMaster1.OrderTotals.CustomerIDColumn)
12. End If
13.
14. drCurrent = CType(Me.lbClients.SelectedItem, System.Data.DataRowView).Row
15. dsCustOrders.Merge(drCurrent.GetChildRows("CustomerOrders"))
16.
17. Me.dgOrders.SetDataBinding(dsCustOrders, "OrderTotals")
Me.dgOrders.Refresh()

18. Press F5 to run the application.

19. Select different items in the Clients list.

The application displays the Client’s rows in the Orders data grid.

20. Close the application.


Visual C# .NET

1. In the form designer, double-click the lbClients list box.

Visual Studio adds the SelectedIndexChanged event handler to the code window.

2. Add the following code to create a relation in dsMaster1, retrieve the

child rows of the current list box selection, and then display them in

the dgOrders data grid:

3. System.Data.DataRowView drCurrent;
4. System.Data.DataSet dsCustOrders;
5.
6. dsCustOrders = new System.Data.DataSet();
7. //Create the relation if necessary
8. if (this.dsMaster1.Relations.Count == 0)
9. {
10. this.dsMaster1.Relations.Add("CustomerOrders",
11. this.dsMaster1.CustomerList.CustomerIDColumn,
12. this.dsMaster1.OrderTotals.CustomerIDColumn);
13. }
14.
15. drCurrent = (System.Data.DataRowView)this.lbClients.SelectedItem;
16.
17. dsCustOrders.Merge(drCurrent.Row.GetChildRows("CustomerOrders"));
18.
19. this.dgOrders.SetDataBinding(dsCustOrders, "OrderTotals");
this.dgOrders.Refresh();

20. Press F5 to run the application.

21. Select different items in the Clients list.

The application displays the Client’s rows in the Orders data grid.

22. Close the application.


DataTable Events

The events supported by the DataTable are shown in Table 7-15. All of the events are

used as part of data validation, and we’ll examine them in more detail in Chapter 10.

Table 7-15: DataTable Events

Event Description
ColumnChanged Raised after a DataRow item has been changed
ColumnChanging Raised before a DataRow item is changed
RowChanged Raised after a DataRow has been changed
RowChanging Raised before a DataRow is changed
RowDeleted Raised after a DataRow has been deleted
RowDeleting Raised before a DataRow is deleted

Chapter 7 Quick Reference

To Do this
Create an independent DataTable at run time Use the New constructor:
myTable = New System.Data.DataTable()
Add a DataTable to an existing DataSet Use the Add method of the DataSet's Tables collection:
myDataSet.Tables.Add(TableName)
Add a PrimaryKey constraint based on a table in the data source Use the DataAdapter's FillSchema method:
myDA.FillSchema(myTable, SchemaType.Source)
Create a calculated column Set the Expression property of the column:
myColumn.Expression = "Price * Quantity"
Add a new DataRow Create the DataRow by using the NewRow method, and then add it to the DataTable:
myRow = myTable.NewRow()
myTable.Rows.Add(myRow)
Display a subset of rows Use the DataTable's Select method:
DataRowArray = myTable.Select("Criteria")
Retrieve data related to the current DataRow Use the GetChildRows method:
myRow.GetChildRows("RelationName")

Chapter 8: The DataView

Overview

In this chapter, you'll learn how to:
§ Add a DataView to a form
§ Create a DataView at run time
§ Create calculated columns in a DataView
§ Sort DataView rows
§ Filter DataView rows
§ Search a DataView based on a primary key value

In the previous chapter, we looked at the Select method of the DataTable, which

provides a mechanism for filtering and sorting DataRows. The DataView provides

another mechanism for performing the same actions. Unlike the Select method, a

DataView is a separate object that sits on top of a DataTable.

Understanding DataViews

A DataView provides a filtered and sorted view of a single DataTable. Although the
DataView provides the same functionality as the DataTable's Select method, it has a
number of advantages. Because they are distinct objects, DataViews can be created and
configured at both design time and run time, making them easier to implement in many
situations.

Furthermore, unlike the array of DataRows returned from a Select method, DataViews

can be used as the data source for bound controls. (Remember that in the previous

chapter we had to load the DataRow array returned by the Select method into a DataSet

before we could display its contents in the data grid.)

You can create multiple DataViews for any given DataTable. In fact, every DataTable
exposes a default DataView through its DefaultView property. The properties of the
DefaultView can be set at run time, but not at design time.

The rows of a DataView, although very much like DataRows, are actually DataRowView

objects that reference DataRows. The DataRowView properties are shown in Table 8-1.

Only the Item property is also exposed by the DataRow; the other properties are unique.

Table 8-1: DataRowView Properties

Property Description
DataView The DataView to which this DataRowView belongs
IsEdit Indicates whether the DataRowView is currently being edited
IsNew Indicates whether the DataRowView is new
Item The value of a column in the DataRowView
Row The DataRow that is being viewed
RowVersion The current version of the DataRowView

DataViewManagers

Functionally, a DataViewManager is similar to a DataSet. Just as a DataSet acts as a
container for DataTables, the DataViewManager acts as a container for DataViews,
one for each DataTable in a DataSet.

The DataViews within the DataViewManager are accessed through the

DataViewSettings collection of the DataViewManager. It’s convenient to think of a

DataViewSetting existing for each DataTable in a DataSet. In reality, the

DataViewSetting isn’t physically created until (and unless) it is referenced in code.

DataViewManagers are most often used when the DataSet contains related tables

because they allow you to persist sorting and filtering criteria across calls to

GetChildRows . If you were to use individual DataViews on the child table, the sorting

and filtering criteria would need to be reset after each call. With a DataViewManager,

after the criteria have been established, the rows returned by GetChildRows will be

sorted and filtered automatically.


In Chapter 7, we saw that the DataSet has a DefaultViewManager property. In reality,

you’re actually binding to the default DataViewManager when you bind a control to a

DataSet. Under most circumstances, you can ignore this technicality, but it can be

useful for setting default sorting and filtering criteria.

Note, however, that the DataSet’s DefaultViewManager property is read-only—you can

set its properties, but you cannot create a new DataViewManager and assign it to the

DataSet as the default DataViewManager.

Creating DataViews

Because DataViews are independent objects, you can create and configure them at

design time using Microsoft Visual Studio. You can, of course, also create and configure

DataViews at run time in code.

Using Visual Studio

Visual Studio supports the design-time creation of DataViews through the DataView

control on the Data tab of the Toolbox. Like any other control with design-time support,

you simply drag the control onto a form and set its properties in the Properties window.

Create and Bind a DataView Using Visual Studio

1. Open the DataViews project from the Start menu or the Project menu.

2. Double-click DataViews.vb (or DataViews.cs if you’re using C#) in the

Solution Explorer.

Visual Studio .NET opens the form designer.

3. Drag a DataView control from the Data tab of the Toolbox to the form.

Visual Studio adds the control to the component designer.

4. In the Properties window, change the DataView’s name to dvOrders.

5. Change the Table property to dsMaster1.OrderTotals, and then

change the Sort property to OrderID.


6. Select the dgOrders data grid, and then change the DataSource

property to dvOrders.

7. Press F5 to run the application.

Visual Studio displays the information in the Orders data grid arranged

according to the values in the OrderID column.


8. Close the application.

Creating DataViews at Run Time

Like most of the objects in the .NET Framework Class Library, the DataView supports a

New constructor, which allows the DataView to be created in code at run time. The

DataView supports the two versions of the New constructor, which are shown in Table 8-

2.

Table 8-2: DataView Constructors

Constructor Description
New() Creates a new DataView
New(DataTable) Creates a new DataView and sets its Table property to the specified DataTable
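Outside the form designer, both constructors behave the same once Table is set. The console sketch below (illustrative table and column names) uses the New(DataTable) form and then applies the RowFilter and Sort properties that the exercise sets on dvNew:

```csharp
using System;
using System.Data;

var dt = new DataTable("OrderTotals");
dt.Columns.Add("CustomerID", typeof(string));
dt.Columns.Add("OrderID", typeof(int));
dt.Rows.Add("ALFKI", 10692);
dt.Rows.Add("BONAP", 10331);
dt.Rows.Add("ALFKI", 10643);

// New(DataTable) sets the Table property in one step.
var dv = new DataView(dt)
{
    RowFilter = "CustomerID = 'ALFKI'",
    Sort = "OrderID"
};

Console.WriteLine(dv.Count);         // 2 -- rows that pass the filter
Console.WriteLine(dv[0]["OrderID"]); // 10643 -- lowest OrderID first
```

The underlying table still holds all three rows; only the view is filtered.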

Create a DataView at Run Time

Visual Basic .NET

1. Double-click Create.

Visual Studio opens the code editor and adds the Click event handler

template.

2. Add the following code to the method:

3. Dim drCurrent As System.Data.DataRow
4. Dim dvNew As New System.Data.DataView()
5.
6. 'retrieve the selected row in lbClients
7. drCurrent = CType(Me.lbClients.SelectedItem, System.Data.DataRowView).Row
8.
9. 'configure the dataview
10. dvNew.Table = Me.dsMaster1.OrderTotals
11. dvNew.RowFilter = "CustomerID = '" & drCurrent(0) & "'"
12.
13. 'rebind the datagrid
Me.dgOrders.DataSource = dvNew

The code first declares a DataRow variable that will contain the item selected

in the lbClients list box, and then creates a new DataView using the default

constructor. Next drCurrent is assigned to the current selection in the list box.

The Table property of the dvNew DataView is set to the OrderTotals table,

and the RowFilter property is set to show only the orders for the selected

client. Finally the dgOrders data grid is bound to the new DataView.

14. Press F5 to run the application, click in the Clients list box, and then

click Create.

The data grid displays the orders for only the selected client.

15. Close the application.

Visual C# .NET

1. Double-click Create.

Visual Studio opens the code editor and adds the Click event handler

template and the Click event delegate.

2. Add the following code to the method:

3. System.Data.DataRowView drCurrent;
4. System.Data.DataView dvNew;
5. dvNew = new System.Data.DataView();
6.
7. //retrieve the selected row in lbClients
8. drCurrent = (System.Data.DataRowView)this.lbClients.SelectedItem;
9.
10. //configure the dataview
11. dvNew.Table = this.dsMaster1.OrderTotals;
12. dvNew.RowFilter = "CustomerID = '" + drCurrent[0] + "'";
13.
14. //rebind the datagrid
this.dgOrders.DataSource = dvNew;


The code first declares a DataRowView variable that will contain the item

selected in the lbClients list, and then creates a new DataView using the

default constructor. Next drCurrent is assigned to the current selection in the

list.

The Table property of the dvNew DataView is set to the OrderTotals table,

and the RowFilter property is set to show only the orders for the selected

client. Finally the dgOrders data grid is bound to the new DataView.

15. Press F5 to run the application, click in the Clients list, and then

click Create.

The data grid displays the orders for only the selected client.

16. Close the application.

DataView Properties

The properties exposed by the DataView object are shown in Table 8-3. The

AllowDelete, AllowEdit, and AllowNew properties determine whether the data reflected

by the DataView can be changed through the DataView. (Data can always be changed

by referencing the row in the underlying DataTable.)

Table 8-3: DataView Properties

Property Description
AllowDelete Determines whether rows in the DataView can be deleted
AllowEdit Determines whether rows in the DataView can be changed
AllowNew Determines whether rows can be added to the DataView
ApplyDefaultSort Determines whether the default sort order, determined by the underlying data source, will be used
Count The number of DataRowViews in the DataView
DataViewManager The DataViewManager to which this DataView belongs
Item(Index) The DataRowView at the specified Index in the DataView
RowFilter The expression used to filter the rows contained in the DataView
RowStateFilter The DataViewRowState used to filter the rows contained in the DataView
Sort The expression used to sort the rows contained in the DataView
Table The DataTable that is the source of rows for the DataView

The Count property does exactly what one might expect—it returns the number of

DataRows reflected in the DataView, while the DataViewManager and Table properties

serve to connect the DataView to other objects within an application.

Finally the RowFilter, RowStateFilter, and Sort properties control the DataRows that are

reflected in the DataView and how those rows are ordered. We’ll examine each of these

properties later in this chapter.

DataColumn Expressions

Expressions, technically DataColumn Expressions, are used by the RowFilter and Sort

properties of the DataView. We’ve used DataColumn Expressions in previous chapters

when we created a calculated column in a DataTable and when we set the sort and filter

expressions for the DataTable Select method. Now it’s time to examine them more

closely.


A DataColumn Expression is a string, and you can use all the normal string handling
functions to build one. For example, you can use the & concatenation operator to join
two strings into a single Expression:

myExpression = "CustomerID = '" & strCustID & "'"

Note that the value of strCustID will be surrounded by single quotation marks in the

resulting text. In building DataColumn Expressions, columns may be referred to directly

by using the ColumnName property, but any actual text values must be quoted.

In addition, certain special characters must be “escaped,” that is, wrapped in square

brackets. For example, if you had a column named Miles/Gallon, you would have to

surround the column name with brackets:

MyExpression = "[Miles/Gallon] > 10"

Tip You can find the complete list of special characters in the online

Help for the DataColumn.Expression property.

Numeric values in DataColumn Expressions require no special handling, as shown in the

previous example, but date values must be surrounded by hash marks:

MyExpression = "OrderDate > #01/01/2001#"

Important Dates in code must conform to US usage, that is,

month/day/year.

As we've seen, DataRow columns are referred to by the ColumnName property. You
can reference a column in a child DataRow by adding "Child." before the ColumnName in
the child row:

MyExpression = "Child.OrderTotal > 3000"

The syntax for referencing a parent row is identical:

MyExpression = "Parent.CustomerID = 'ALFKI'"

Parent and Child references are frequently used along with one of the aggregate
functions shown in Table 8-4. The aggregate functions can also be used directly, without
reference to Child or Parent rows.

Table 8-4: Aggregate Functions

Function Result
Sum Sum
Avg Average
Min Minimum
Max Maximum
Count Count
StDev Statistical standard deviation
Var Statistical variance

When setting the expressions for DataViews, you will frequently be comparing values.

The .NET Framework handles the usual range of operators, as shown in Table 8-5.

Table 8-5: Comparison Operators

Operator Action
AND Logical AND
OR Logical OR
NOT Logical NOT
< Less than
> Greater than
<= Less than or equal to
>= Greater than or equal to
<> Not equal
IN Determines whether the value specified is contained in a set
LIKE Inexact match using a wildcard character

The IN operator requires that the set of values to be searched be separated by commas

and surrounded by parentheses:

MyExpression = "myColumn IN ('A','B','C')"

The LIKE operator treats the characters * or % as interchangeable wildcards—both

replace zero or more characters. The wildcard characters can be used at the beginning

or end of a string, or at both ends, but cannot be contained within a string.

DataColumn Expressions also support the arithmetic operators shown in Table 8-6.

Table 8-6: Arithmetic Operators

Operator Action
+ Addition
- Subtraction
* Multiplication
/ Division
% Modulus (remainder)

The arithmetic + operator is also used for string concatenation within a DataColumn

Expression rather than the more usual & operator.

Finally DataColumn Expressions support a number of special functions, as shown in

Table 8-7.


Table 8-7: Special Functions

Function Result
Convert(Expression, Type) Converts the value returned by Expression to the specified .NET Framework Type
Len(String) The number of characters in the String
ISNULL(Expression, ReplacementValue) Determines whether Expression evaluates to Null, and if so, returns ReplacementValue
IIF(Expression, ValueIfTrue, ValueIfFalse) Returns ValueIfTrue if Expression evaluates to True; otherwise returns ValueIfFalse
SUBSTRING(Expression, Start, Length) Returns Length characters of the string returned by Expression, beginning at the one-based position specified by Start
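The same expression syntax drives calculated columns, which makes for a convenient way to try these functions. The console sketch below (column names and values are illustrative) builds one arithmetic expression and one IIF expression, evaluated per row:

```csharp
using System;
using System.Data;

var dt = new DataTable();
dt.Columns.Add("Quantity", typeof(int));
dt.Columns.Add("UnitPrice", typeof(decimal));

// Calculated columns are DataColumn Expressions evaluated for each row.
dt.Columns.Add("Total", typeof(decimal), "Quantity * UnitPrice");
dt.Columns.Add("Size", typeof(string), "IIF(Total >= 100, 'large', 'small')");

dt.Rows.Add(5, 30m);   // Total = 150
dt.Rows.Add(2, 10m);   // Total = 20

Console.WriteLine(dt.Rows[0]["Size"]); // large
Console.WriteLine(dt.Rows[1]["Size"]); // small
```

Note that a calculated column may reference another calculated column, as Size does with Total here.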

Sort Expressions

Although the DataColumn Expressions used in the Sort property can be arbitrarily

complex, in most cases they will take the form of one or more ColumnNames separated

by commas:

myDataView.Sort = "CustomerID, OrderID"

Optionally, the ColumnNames may be followed by ASC or DESC to cause the values to

be sorted in ascending or descending order, respectively. The default sort is ascending,

so the ASC keyword isn’t strictly necessary, but it can sometimes be useful to include it

for clarity.

Change the Sorting Method

Visual Basic .NET

1. In the code editor, select btnSort in the ControlName list, and then

select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to the method:


3. 'Change the sort order
4. Me.dvOrders.Sort = "EmployeeID, CustomerID, OrderID DESC"
5.
6. 'Refresh the datagrid
Me.dgOrders.Refresh()

The code sets the sort order of the dvOrders DataView to sort first by

EmployeeID, then by CustomerID, and finally by OrderID in descending order.

7. Press F5 to run the application.

8. Click Sort.

The application displays the sorted contents of the data grid.

9. Close the application.

Visual C# .NET

1. In the form designer, double-click the Create button.

Visual Studio adds the Click event handler to the code window.

2. In the code editor, add a Click event handler for the btnSort button

after the btnCreate_Click event handler that we created in the

previous exercise:

3. private void btnSort_Click (object sender, System.EventArgs e)

4. {

5.

}

6. Add the following code to the method:

7. //Change the sort order
8. this.dvOrders.Sort = "EmployeeID, CustomerID, OrderID DESC";
9.
10. //Refresh the datagrid
this.dgOrders.Refresh();

The code sets the sort order of the dvOrders DataView to sort first by

EmployeeID, then by CustomerID, and finally by OrderID in descending order.

11. Press F5 to run the application.

12. Click Sort.

The application displays the sorted contents of the data grid.


13. Close the application.

RowStateFilter

In the previous chapter, we saw that each DataRow maintains its status in its RowState

property. The DataView’s RowStateFilter property can be used to limit the

DataRowViews within the DataView to those with a certain RowState or to return values

of a given state. The possible values for the RowStateFilter property are shown in Table

8-8.

Table 8-8: DataViewRowState Values

Member Name Description
Added Only those rows that have been added
CurrentRows All current row values
Deleted Only those rows that have been deleted
ModifiedCurrent Current row values for rows that have been modified
ModifiedOriginal Original values of rows that have been modified
None No rows
OriginalRows Original values of all rows
Unchanged Only those rows that haven't been modified
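A console sketch (illustrative names, no form required) shows the filter at work: an existing row is committed as Unchanged, a second row is added through the view, and setting RowStateFilter to Added leaves only the new row visible.

```csharp
using System;
using System.Data;

var dt = new DataTable();
dt.Columns.Add("OrderID", typeof(int));
dt.Rows.Add(10643);
dt.AcceptChanges();                // the existing row is now Unchanged

var dv = new DataView(dt);
DataRowView drNew = dv.AddNew();   // add a row through the view
drNew["OrderID"] = 0;
drNew.EndEdit();                   // commit the edit; the row's state is Added

dv.RowStateFilter = DataViewRowState.Added;
Console.WriteLine(dv.Count);       // 1 -- only the new row is visible
```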

Display Only New Rows

Visual Basic .NET

1. In the code editor, select btnRowState in the ControlName list, and

then select Click in the MethodName list.

Visual Studio adds the Click event handler template.

2. Add the following code to the method:

3. Dim drNew As System.Data.DataRowView
4.
5. 'Add a new order
6. drNew = Me.dvOrders.AddNew()
7. drNew("CustomerID") = "ALFKI"
8. drNew("EmployeeID") = 1
9. drNew("OrderID") = 0
10.
11. 'Set the RowStateFilter
12. Me.dvOrders.RowStateFilter = DataViewRowState.Added
13.
14. 'Refresh the datagrid
Me.dgOrders.Refresh()

The code first creates a new DataRowView (we’ll examine the AddNew

method in the following section), and then sets the RowStateFilter to display

only new (or added) rows. Finally the dgOrders data grid is refreshed to

display the changes.

15. Press F5 to run the application, and then click Row State.

The data grid shows only the new order.

16. Close the application.


Visual C# .NET

1. In the Form Designer, double-click the Row State button.

2. Visual Studio adds the Click event handler to the code window.

3. In the code editor, add a Click event handler for the btnRowState

button after the btnSort event handler that we created in the

previous exercise:

4. private void btnRowState_Click (object sender,

System.EventArgs e)

5. {

6.

}

7. Add the following code to the method:

8. System.Data.DataRowView drNew;
9.
10. //Add a new row
11. drNew = this.dvOrders.AddNew();
12. drNew["CustomerID"] = "ALFKI";
13. drNew["EmployeeID"] = 1;
14. drNew["OrderID"] = 0;
15.
16. //Set the RowStateFilter
17. this.dvOrders.RowStateFilter = DataViewRowState.Added;
18.
19. //Refresh the datagrid
this.dgOrders.Refresh();

The code first creates a new DataRowView (we’ll examine the AddNew

method in the following section), and then sets the RowStateFilter to display

only new (or added) rows. Finally the dgOrders data grid is refreshed to

display the changes.

20. Press F5 to run the application, and then click Row State.

The data grid shows only the new order.

21. Close the application.


DataView Methods

The primary methods supported by the DataView are shown in Table 8-9. The AddNew

method adds a new DataRowView to the DataView, while the Delete method deletes the

row at the specified index.

Table 8-9: DataView Methods

Method Description
AddNew Adds a new DataRowView to the DataView
Delete Removes a DataRowView from a DataView
Find Finds the DataRowView containing the specified sort key value(s)

The Find Method

The DataView’s Find method finds one or more rows based on primary key values. If you

want to find a row based on some other column value, you must use the RowFilter

property of the DataView.

There are two overloads of the Find method, allowing you to pass either a single value or an array of values (for tables with multi-column primary keys). The Find method returns the index of the row that was found, or -1 if the value is not found in the DataView. (The related FindRows method returns an array of matching DataRowView objects.)
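Outside the chapter's sample form, the same behavior can be seen in a small standalone console sketch; the table, column names, and values here are illustrative, not the Northwind sample. Note that Find requires the DataView to be sorted on the lookup column:

```csharp
using System;
using System.Data;

class FindDemo
{
    static void Main()
    {
        // Build a small in-memory table with a primary key.
        DataTable orders = new DataTable("Orders");
        orders.Columns.Add("OrderID", typeof(int));
        orders.Columns.Add("CustomerID", typeof(string));
        orders.Rows.Add(10254, "CHOPS");
        orders.Rows.Add(10255, "RICSU");

        // Find requires the DataView to be sorted on the lookup column(s).
        DataView view = new DataView(orders);
        view.Sort = "OrderID";

        int idx = view.Find(10255);      // index of the matching row
        int missing = view.Find(99999);  // -1 when no row matches

        Console.WriteLine(view[idx]["CustomerID"]);  // RICSU
        Console.WriteLine(missing);                  // -1
    }
}
```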

Find a Row Based on Its Primary Key Value

Visual Basic .NET

1. In the code editor, select btnFind in the ControlName list, and then select Click in the MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the method:

Dim idxFound As Integer
Dim strMessage As String

idxFound = Me.dvOrders.Find(10255)

strMessage = "The OrderID is " & _
    Me.dvOrders(idxFound).Item("OrderID")
strMessage &= vbCrLf & "The CustomerID is " & _
    Me.dvOrders(idxFound).Item("CustomerID")
strMessage &= vbCrLf & "The EmployeeID is " & _
    Me.dvOrders(idxFound).Item("EmployeeID")
MessageBox.Show(strMessage)

The code uses the Find method to find Order 10255 and then displays the results in a message box.

3. Press F5 to run the application, and then click Find.

The application displays the results.

4. Close the application.

Visual C# .NET

1. In the form designer, double-click the Find button.

Visual Studio adds the Click event handler to the code window.

2. In the code editor, add a Click event handler for the btnFind button after the btnRowState event handler that we created in the previous exercise:

private void btnFind_Click(object sender, System.EventArgs e)
{

}

3. Add the following code to the method:

int idxFound;
string strMessage;

idxFound = this.dvOrders.Find(10255);

strMessage = "The OrderID is " +
    this.dvOrders[idxFound]["OrderID"];
strMessage += "\nThe CustomerID is " +
    this.dvOrders[idxFound]["CustomerID"];
strMessage += "\nThe EmployeeID is " +
    this.dvOrders[idxFound]["EmployeeID"];
MessageBox.Show(strMessage);

The code uses the Find method to find Order 10255 and then displays the results in a message box.

4. Press F5 to run the application, and then click Find.

The application displays the results.

5. Close the application.

Chapter 8 Quick Reference

To: Add a DataView to a form
Do this: Drag a DataView control from the Data tab of the Toolbox onto the form.

To: Create a DataView at run time
Do this: Use one of the New constructors. For example:
    Dim myDataView As New System.Data.DataView()

To: Sort DataView rows
Do this: Set the Sort property of the DataView. For example:
    myDataView.Sort = "CustomerID"

To: Filter DataView rows
Do this: Set the RowFilter or RowStateFilter property. For example:
    myDataView.RowStateFilter = DataViewRowState.Added

To: Find a row in a DataView
Do this: Pass the primary key value to the DataView's Find method. For example:
    idxFound = myDataView.Find(1011)

Part IV: Using the ADO.NET Objects

Chapter 9: Editing and Updating Data

Chapter 10: ADO.NET Data-Binding in Windows Forms

Chapter 11: Using ADO.NET in Windows Forms

Chapter 12: Data-Binding in Web Forms

Chapter 13: Using ADO.NET in Web Forms

Chapter 9: Editing and Updating Data

Overview

In this chapter, you’ll learn how to:

§ Use the RowState property of a DataRow

§ Retrieve a specific version of a DataRow


§ Add a row to a DataTable

§ Delete a row from a DataTable

§ Edit a DataRow

§ Temporarily suspend enforcement of constraints during updates

§ Accept changes to data

§ Reject changes to data

In the previous few chapters, we’ve examined each of the Microsoft ADO.NET objects in

turn. Starting with this chapter, we’ll look at how these objects work together to perform

specific tasks. Specifically, in this chapter, we’ll examine the process of editing and

updating data.

Understanding Editing and Updating Data

Given the disconnected architecture of ADO.NET, there are four distinct phases to the

process of editing and updating data from a data source: data retrieval, editing, updating

the data source, and finally, updating the DataSet.

First, the data is retrieved from the data source, stored in memory, and possibly

displayed to the user. This is typically done using the Fill method of a DataAdapter to fill

the tables of a DataSet, but as we’ve seen, data may also be retrieved using a

Command and a DataReader.

Next, the data is modified as required. Values can be changed, new rows can be added,

and existing rows can be deleted. Data modification can be done under programmatic

control or by the data binding mechanisms of Windows Forms and Web Forms.

We’ll be exploring how to make changes to data under programmatic control in this

chapter. In Windows Forms, the data binding architecture handles transmitting changes

from data-bound controls to the dataset. No other action is required. In Web Forms, any

data changes must of course be submitted to the server.

Roadmap We’ll examine the data binding mechanisms of Windows

Forms and Web Forms in Chapters 10 and 11.

If the changes made to the in-memory copy of the data are to be persisted, they must be

propagated to the data source. If a DataSet is used for managing the in-memory data,

the data source propagation can be done by using the Update method of the

DataAdapter. Alternatively, Command objects may be used directly to submit the

changes. (Of course, as we saw in Chapter 3, the DataAdapter uses Command objects

to submit the changes, as well.)

Finally, the DataSet can be updated to reflect the new state of the data source. This is

done by using the AcceptChanges method of the DataSet or DataTable. Both the Fill

method and the Update method of the DataAdapter call AcceptChanges automatically. If

you execute Data Commands directly, you must call AcceptChanges explicitly to update

the status of the DataSet.

Concurrency

With the disconnected methodology used by ADO.NET, there is always a chance that a

row in the data source may have been changed since the time it was loaded into the

DataSet. This is a concurrency violation.

The Update method can throw a DBConcurrencyException, which one might expect to occur only when a concurrency violation has taken place. In fact, the DBConcurrencyException is thrown whenever the number of rows affected by a Data Command is zero. This is typically due to a concurrency violation, but it's important to understand that this is not necessarily the case.


DataRow States and Versions

As we saw in Chapter 7, the DataRow maintains a RowState property that indicates

whether the row has been added, deleted, or modified. In addition, the DataTable

maintains multiple copies of each row, each reflecting a different version of the DataRow.

We’ll explore both the RowState property and row versions in this section.

RowState Properties

The RowState property of the DataRow reflects the actions that have been taken since

the DataTable was created or since the last time the AcceptChanges method was called.

The possible values for RowState, as defined by the DataRowState enumeration, are

shown in Table 9-1.

Table 9-1: DataRowStates

Value       Description
Added       The DataRow is new
Deleted     The DataRow has been deleted from the table
Detached    The DataRow has not yet been added to a table
Modified    The contents of the DataRow have been changed
Unchanged   The DataRow has not been modified

The baseline values of the rows in a DataSet are established when the AcceptChanges

method is called, either by the Fill or Update methods of the DataAdapter or explicitly by

program code. At that time, all of the DataRows have their RowState set to Unchanged.

Not surprisingly, if the value of any column of a DataRow is changed after AcceptChanges is called, its RowState is set to Modified. If new DataRows are added to the table by using the Add method of the DataTable's Rows collection, their RowState will be Added. The new rows will maintain the status of Added even if their contents are changed before the next call to AcceptChanges.

If a DataRow is deleted by using the Delete method, it isn't actually removed from the DataSet until the AcceptChanges method is called. Instead, its RowState is set to Deleted and, as we'll see, its Current values are set to Null.

DataRows don’t necessarily belong to a DataTable. These independent rows will have a

RowState of Detached until they are added to the Rows collection of a table.
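The lifecycle described above can be traced in a few lines of standalone code; the table and column names are illustrative, not the chapter's sample project:

```csharp
using System;
using System.Data;

class RowStateDemo
{
    static void Main()
    {
        DataTable t = new DataTable("EmployeeList");
        t.Columns.Add("FirstName", typeof(string));

        DataRow dr = t.NewRow();
        Console.WriteLine(dr.RowState);   // Detached - not yet in a table

        t.Rows.Add(dr);
        Console.WriteLine(dr.RowState);   // Added

        t.AcceptChanges();
        Console.WriteLine(dr.RowState);   // Unchanged - new baseline

        dr["FirstName"] = "Nancy";
        Console.WriteLine(dr.RowState);   // Modified

        dr.Delete();
        Console.WriteLine(dr.RowState);   // Deleted until AcceptChanges removes it
    }
}
```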


Row Versions

A DataTable may maintain multiple versions of any given DataRow, depending on the

actions that have been performed on it since the last time AcceptChanges was called.

The possible DataRowVersions are shown in Table 9-2.

Table 9-2: DataRowVersions

Version    Meaning
Current    The current values of each column
Default    The default values used for new rows
Original   The values set when the row was created, either by a Fill operation or by adding the row manually
Proposed   The values assigned to the columns in a row after a BeginEdit method has been called

There will always be a Current version of every row in the DataSet. The Current version

of the DataRow reflects any changes that have been made to its values since the row

was created.

Rows that existed in the DataSet when AcceptChanges was last called will have an

Original version, which contains the initial data values. Rows that are added to the

DataSet will not contain an Original version until AcceptChanges is called again.

If any of the columns of a DataTable have values assigned to their DefaultValue property, all the DataRows in the table will have a Default version, with the values determined by the DefaultValue of each column.

DataRows will have a Proposed version after a call to DataRow.BeginEdit and before

either EndEdit or CancelEdit is called. We’ll examine these methods, which are used to

temporarily suspend data constraints, in the next section.
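The distinction between the Original and Current versions can be demonstrated in a short standalone sketch (the values here are illustrative):

```csharp
using System;
using System.Data;

class VersionDemo
{
    static void Main()
    {
        DataTable t = new DataTable();
        t.Columns.Add("FirstName", typeof(string));
        t.Rows.Add("Nancy");
        t.AcceptChanges();                // establishes the Original version

        DataRow dr = t.Rows[0];
        dr["FirstName"] = "Changed";      // updates only the Current version

        Console.WriteLine(dr["FirstName", DataRowVersion.Original]); // Nancy
        Console.WriteLine(dr["FirstName", DataRowVersion.Current]);  // Changed

        // Proposed exists only between BeginEdit and EndEdit/CancelEdit.
        Console.WriteLine(dr.HasVersion(DataRowVersion.Proposed));   // False
    }
}
```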


Exploring DataRow States and Versions

The example application for this chapter displays the Original and Current values of a

DataSet based on the EmployeeList view in the Northwind sample database. Because

the display is based on the Windows Form BindingContext object, which we won’t be

examining until Part V, the code to display these values is already in place.

1. Open the Editing project from the Start page or from the File menu.

2. Double-click Editing.vb (or Editing.cs, if you’re using C#) in the

Solution Explorer.

Microsoft Visual Studio displays the Editing form in the form designer.

3. Press F5 to run the application.

4. Use the navigation buttons at the bottom of the form to move through

the DataSet.

Note that all the rows have identical Current and Original versions and that

the RowStatus is Unchanged.

5. Change the value of the First Name or Last Name text box of one of

the rows, and then click Save.

The Current version of the row is updated to reflect the name, and the

RowStatus changes to Modified.


6. Close the application.

Editing Data in a DataSet

Editing data after it has been loaded into a DataSet is a straightforward process of calling

methods and setting property values. In this chapter, we’ll concentrate on manipulating

the contents of the DataSet programmatically, leaving the discussion of using Windows

and Web Form controls to Parts V and VI, respectively.

Roadmap We’ll examine editing using data-bound controls in Parts V

and VI.

Adding a DataRow

There is no way to create a new row directly in a DataTable. Instead, a DataRow object

must be created independently and then added to the DataTable’s Rows collection.

The DataTable’s NewRow method returns a detached row with the same schema as the

table on which it is called. The values of the row can then be set, and the new row

appended to the DataTable.
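This create-then-append pattern can be sketched on its own, outside the chapter's sample form (the table schema is illustrative):

```csharp
using System;
using System.Data;

class AddRowDemo
{
    static void Main()
    {
        DataTable employees = new DataTable("EmployeeList");
        employees.Columns.Add("FirstName", typeof(string));
        employees.Columns.Add("LastName", typeof(string));

        // NewRow returns a detached row that already has the table's schema.
        DataRow drNew = employees.NewRow();
        drNew["FirstName"] = "New First";
        drNew["LastName"] = "New Last";

        // The row joins the table only when added to the Rows collection.
        employees.Rows.Add(drNew);

        Console.WriteLine(employees.Rows.Count);   // 1
        Console.WriteLine(drNew.RowState);         // Added
    }
}
```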

Add a Row to a DataTable

Visual Basic .NET

1. Double-click Add in the form designer.

Visual Studio opens the code editor and adds the Click event handler.

2. Add the following code to the procedure:

Dim drNew As System.Data.DataRow

drNew = Me.dsEmployeeList1.EmployeeList.NewRow()
drNew.Item("FirstName") = "New First"
drNew.Item("LastName") = "New Last"
Me.dsEmployeeList1.EmployeeList.Rows.Add(drNew)

The first line declares the DataRow variable that will contain the new row. Then the NewRow method is called, instantiating the variable; its fields are set; and the row is added to the Rows collection of the EmployeeList table.

3. Press F5 to run the application.

4. Click Add.

The application adds a new row.

5. Move to the last row in the DataSet by clicking the >> button.

The application displays the new row.

6. Close the application.

Visual C# .NET

1. Double-click Add in the form designer.

Visual Studio opens the code editor and adds the Click event handler.

2. Add the following code to the procedure:

dsEmployeeList.EmployeesRow drNew;

drNew = (dsEmployeeList.EmployeesRow)
    this.dsEmployeeList1.Employees.NewRow();
drNew["FirstName"] = "New First";
drNew["LastName"] = "New Last";
this.dsEmployeeList1.Employees.AddEmployeesRow(drNew);

The first line declares the DataRow variable that will contain the new row. Then the NewRow method is called, instantiating the variable; its fields are set; and the row is added to the Rows collection of the Employees table.

3. Press F5 to run the application.

4. Click Add.

The application adds a new row.

5. Move to the last row in the DataSet by clicking the >> button.

The application displays the new row.

6. Close the application.


Deleting a DataRow

The DataTable’s Rows collection supports three methods to remove DataRows, as

shown in Table 9-3. Each of these methods physically removes the DataRow from the

collection.

Table 9-3: Remove Methods

Method             Description
Clear()            Removes all rows from the DataTable
Remove(DataRow)    Removes the specified DataRow
RemoveAt(Index)    Removes the DataRow at the position specified by the integer Index

However, a row that has been physically removed by using one of these methods won’t

be deleted from the data source. If you need to delete the row from the data source as

well, you must use the Delete method of the DataRow object instead.

The Delete method physically removes the DataRow only if it was added to the

DataTable since the last time AcceptChanges was called. Otherwise, it sets the

RowState to Deleted and sets the current values to Null.
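The difference between Delete and the Remove methods can be illustrated in a standalone sketch (table and values are illustrative):

```csharp
using System;
using System.Data;

class DeleteDemo
{
    static void Main()
    {
        DataTable t = new DataTable();
        t.Columns.Add("Name", typeof(string));
        t.Rows.Add("Nancy");
        t.Rows.Add("Andrew");
        t.AcceptChanges();

        // Delete marks the row; it stays in the collection as Deleted,
        // so a later DataAdapter.Update can delete it from the data source.
        t.Rows[0].Delete();
        Console.WriteLine(t.Rows.Count);         // 2
        Console.WriteLine(t.Rows[0].RowState);   // Deleted

        // Remove physically discards the row - no Deleted state is kept,
        // so Update would never see it.
        t.Rows.Remove(t.Rows[1]);
        Console.WriteLine(t.Rows.Count);         // 1
    }
}
```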

Delete a DataRow Using the Delete method

Visual Basic .NET

1. In the code editor, select btnDelete in the ControlName list, and then select Click from the MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the procedure:

Dim dr As System.Data.DataRow

'Get the row currently displayed in the form
dr = GetRow()

'Delete the row
dr.Delete()

'Move to the next record & display
Me.BindingContext(Me.dsEmployeeList1, "EmployeeList").Position += 1
UpdateDisplay()

The GetRow and UpdateDisplay procedures, which are not intrinsic to the .NET Framework, are contained in the Utility Functions region of the code.

3. Press F5 to run the application.

4. Use the navigation buttons to display the row for Nancy Davolio.

5. Click Delete.

The application deletes the row, displays the next row, and changes the number of employees to 8.

6. Close the application.

Visual C# .NET

1. In the form designer, double-click the Delete button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to the event handler:

System.Data.DataRow dr;

//Get the row currently displayed in the form
dr = GetRow();

//Delete the row
dr.Delete();

//Move to the next record & display
this.BindingContext[this.dsEmployeeList1, "Employees"].Position += 1;
UpdateDisplay();

The GetRow and UpdateDisplay procedures, which are not intrinsic to the .NET Framework, are contained in the Utility Functions region of the code.

3. Press F5 to run the application.

4. Use the navigation buttons to display the row for Nancy Davolio.

5. Click Delete.

The application deletes the row, displays the next row, and changes the number of employees to 8.

6. Close the application.

Changing DataRow Values

Changing the value of a column in a DataRow couldn’t be simpler—just reference the

column using the Item property of the DataRow, and assign the new value to it by using

a simple assignment operator.

The Item property is overloaded, supporting the forms shown in Table 9-4. However, the

three forms of the property that specify a DataRowVersion are read-only and cannot be

used to change the values. The other three forms return the Current version of the value

and may be changed.

Table 9-4: DataRow Item Properties

Property                          Description
Item(columnName)                  Returns the value of the column with the ColumnName property identified by the columnName string
Item(dataColumn)                  Returns the value of the specified dataColumn
Item(columnIndex)                 Returns the value of the column specified by the columnIndex integer value (the Columns collection is zero-based)
Item(columnName, rowVersion)      Returns the rowVersion version of the value of the column with the ColumnName property identified by the columnName string
Item(dataColumn, rowVersion)      Returns the rowVersion version of the value of the specified dataColumn
Item(columnIndex, rowVersion)     Returns the rowVersion version of the value of the column specified by the columnIndex integer value

Edit a DataRow

Visual Basic .NET

1. In the code editor, select btnEdit in the ControlName list, and then select Click in the MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the procedure:

Dim drCurrent As System.Data.DataRow

drCurrent = GetRow()
drCurrent.Item("FirstName") = "Changed "
UpdateDisplay()

Again, the GetRow and UpdateDisplay procedures, which reference the Windows Form data binding architecture, are not intrinsic to the .NET Framework. They are in the Utility Functions region of the code.

3. Press F5 to run the application.

4. Click Edit.

The application changes the Current version of the FirstName column to Changed and changes the RowStatus to Modified.

5. Close the application.

Visual C# .NET

1. In the form designer, double-click the Edit button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to the event handler:

System.Data.DataRow drCurrent;

drCurrent = GetRow();
drCurrent["FirstName"] = "Changed ";
UpdateDisplay();

Again, the GetRow and UpdateDisplay procedures, which reference the Windows Form data binding architecture, are not intrinsic to the .NET Framework. They are in the Utility Functions region of the code.

3. Press F5 to run the application.

4. Click Edit.

The application changes the Current version of the FirstName column to Changed and changes the RowStatus to Modified.

5. Close the application.


Deferring Changes to DataRow Values

Sometimes it’s necessary to temporarily suspend validation of data until a series of edits

have been performed, either for performance reasons or because rows will temporarily

be in violation of business or integrity constraints.

BeginEdit does just that: it suspends the column and row change events until either EndEdit or CancelEdit is called. During the editing process, assignments are made to the Proposed version of the DataRow instead of to the Current version. This is the only time the Proposed version exists.

If the edit is completed by calling EndEdit, the Proposed column values are copied to the

Current version and the Proposed version of the DataRow is removed. If the edit is

completed by calling CancelEdit, the Proposed version of the DataRow is removed,

leaving the Current column values unchanged. In effect, EndEdit and CancelEdit commit and roll back the changes, respectively.
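The BeginEdit/Proposed/CancelEdit sequence can be traced in a standalone sketch (values are illustrative):

```csharp
using System;
using System.Data;

class EditDemo
{
    static void Main()
    {
        DataTable t = new DataTable();
        t.Columns.Add("FirstName", typeof(string));
        t.Rows.Add("Steven");
        t.AcceptChanges();

        DataRow dr = t.Rows[0];
        dr.BeginEdit();
        dr["FirstName"] = "Proposed Name";

        // While the edit is open, the assignment lives in the Proposed version;
        // the Current version is untouched.
        Console.WriteLine(dr["FirstName", DataRowVersion.Proposed]); // Proposed Name
        Console.WriteLine(dr["FirstName", DataRowVersion.Current]);  // Steven

        dr.CancelEdit();                     // EndEdit would have committed instead
        Console.WriteLine(dr["FirstName"]);  // Steven
        Console.WriteLine(dr.RowState);      // Unchanged
    }
}
```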

Use BeginEdit to Defer Column Changes

Visual Basic .NET

1. In the code editor, select btnDefer in the ControlName list, and then select Click in the MethodName list.

Visual Studio adds the Click event handler template to the code.

2. Add the following code to the procedure:

Dim drCurrent As System.Data.DataRow

drCurrent = GetRow()
With drCurrent
    .BeginEdit()
    .Item("FirstName") = "Proposed Name"
    MessageBox.Show(drCurrent.Item("FirstName", DataRowVersion.Proposed).ToString())
    .CancelEdit()
End With

3. Press F5 to run the application.

4. Click Defer.

The application displays Proposed Name in a message box.

5. Click OK to close the message box.

Because the edit was canceled, the Current value of the column and the RowStatus remain unchanged.

6. Close the application.

Visual C# .NET

1. In the form designer, double-click the Defer button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to the event handler:

System.Data.DataRow drCurrent;

drCurrent = GetRow();

drCurrent.BeginEdit();
drCurrent["FirstName"] = "Proposed Name";
MessageBox.Show(drCurrent["FirstName",
    System.Data.DataRowVersion.Proposed].ToString());
drCurrent.CancelEdit();

3. Press F5 to run the application.

4. Click Defer.

The application displays Proposed Name in a message box.

5. Click OK to close the message box.

Because the edit was canceled, the Current value of the column and the RowStatus remain unchanged.

6. Close the application.

Updating Data Sources

After changes have been made to the in-memory copy of the data represented by the

DataSet, they can be propagated to the data source either by executing the appropriate

Command objects against a connection or by calling the Update method of the

DataAdapter (which, of course, executes the Command objects that it references).

Using the DataAdapter’s Update Method

The System.Data.Common.DbDataAdapter, which you will recall is the DataAdapter

class from which relational database Data Providers inherit their DataAdapters, supports

a number of versions of the Update method, as shown in Table 9-5. Neither the SqlDataAdapter nor the OleDbDataAdapter adds any additional versions.

Table 9-5: DbDataAdapter Update Methods

Update Method                         Description
Update(DataSet)                       Updates the data source from a DataTable named Table in the specified DataSet
Update(dataRows)                      Updates the data source from the specified array of dataRows
Update(DataTable)                     Updates the data source from the specified DataTable
Update(dataRows, DataTableMapping)    Updates the data source from the specified array of dataRows, using the specified DataTableMapping
Update(DataSet, sourceTable)          Updates the data source from the DataTable specified in sourceTable in the specified DataSet

The Command object exposes a property called UpdatedRowSource that controls whether the DataSet will be updated using any results from executing the SQL command on the data source. The possible values for the UpdatedRowSource property are shown in Table 9-6.

Table 9-6: UpdateRowSource Values

Value                  Description
Both                   Maps both the output parameters and the first returned row to the changed row in the DataSet
FirstReturnedRecord    Maps the values in the first returned row to the changed row in the DataSet
None                   Ignores any output parameters or returned rows
OutputParameters       Maps output parameters to the changed row in the DataSet

By default, commands that are automatically generated for a DataAdapter will have their

UpdatedRowSource values set to None. Commands that are created by setting the

CommandText property, either in code or by using the Query Builder, will default to Both.

When the Update method is called, the following actions occur:

1. The DataAdapter examines the RowState of each row in the specified

DataSet or DataTable and executes the appropriate command—insert,

update, or delete.

2. The Parameters collection of the appropriate Command object will be

filled based on the SourceColumn and SourceVersion properties.

3. The RowUpdating event is raised.


4. The command is executed.

5. Depending on the value of the UpdatedRowSource property, the

DataAdapter may update the row values in the DataSet.

6. The RowUpdated event is raised.

7. AcceptChanges is called on the DataSet or DataTable.
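Step 1 of the sequence above, choosing a command based on each row's RowState, can be sketched as follows. This is a conceptual illustration only; the real DbDataAdapter also fills parameters, raises the RowUpdating/RowUpdated events, and calls AcceptChanges:

```csharp
using System;
using System.Data;

class UpdateSketch
{
    // Conceptual sketch: report which command Update would run for each row.
    static void DescribeUpdate(DataTable table)
    {
        foreach (DataRow row in table.Rows)
        {
            switch (row.RowState)
            {
                case DataRowState.Added:
                    Console.WriteLine("would execute InsertCommand");
                    break;
                case DataRowState.Deleted:
                    Console.WriteLine("would execute DeleteCommand");
                    break;
                case DataRowState.Modified:
                    Console.WriteLine("would execute UpdateCommand");
                    break;
                default:
                    break;  // Unchanged rows are skipped
            }
        }
    }

    static void Main()
    {
        DataTable t = new DataTable();
        t.Columns.Add("Name", typeof(string));
        t.Rows.Add("Nancy");
        t.AcceptChanges();
        t.Rows[0]["Name"] = "Changed";   // Modified
        t.Rows.Add("New");               // Added
        DescribeUpdate(t);               // UpdateCommand, then InsertCommand
    }
}
```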

Update a Data Source

Visual Basic .NET

1. In the code editor, select btnUpdate in the ControlName list, and then select Click in the MethodName list.

Visual Studio adds the Click event handler.

2. Add the following code to the procedure:

Me.daEmployeeList.Update(Me.dsEmployeeList1.EmployeeList)
UpdateDisplay()

3. Press F5 to run the application.

4. Type Changed after Steven in the First Name text box, and then click Save.

The application sets the Current value of the column to Steven Changed.

5. Click Update.

The application updates the data source and then resets the contents of the DataSet.

6. Close the application.


Visual C# .NET

1. In the form designer, double-click the Update button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to the event handler:

this.daEmployeeList.Update(this.dsEmployeeList1.Employees);
UpdateDisplay();

3. Press F5 to run the application.

4. Type Changed after Steven in the First Name text box, and then click Save.

The application sets the Current value of the column to Steven Changed.

5. Click Update.

The application updates the data source and then resets the contents of the DataSet.

6. Close the application.

Executing Command Objects

The DataAdapter’s Update method, although very convenient, isn’t always the best

choice for persisting changes to a data source. Sometimes, of course, you won’t be

using a DataAdapter. Sometimes you’ll be using a structure other than a DataSet to

store the data. And sometimes, in order to maintain data integrity, it will be necessary to

perform operations in a particular order. In any of these situations, you can use

Command objects to control the order in which the updates are performed.

When the DataAdapter's Update method is used to propagate changes to a data source, it will use the SourceColumn and SourceVersion properties to fill the Parameters collection. As we saw in Chapter 8, when executing a Command object directly, you must explicitly set the Parameter values.

Update a Data Source Using a Data Command

Visual Basic .NET

1. In the code editor, select btnCmd in the ControlName list, and then select Click in the MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the procedure:

Dim cmdUpdate As System.Data.SqlClient.SqlCommand
Dim drCurrent As System.Data.DataRow

cmdUpdate = Me.daEmployeeList.UpdateCommand
drCurrent = GetRow()

cmdUpdate.Parameters("@first").Value = drCurrent("FirstName")
cmdUpdate.Parameters("@last").Value = drCurrent("LastName")
cmdUpdate.Parameters("@empID").Value = drCurrent("EmployeeID")

Me.cnNorthwind.Open()
cmdUpdate.ExecuteNonQuery()
Me.cnNorthwind.Close()

This code first creates two temporary variables, and then it sets them to the Update command of the daEmployeeList DataAdapter and the row currently being displayed on the form, respectively. It then sets the three parameters in the Update command to the values of the row. Finally, the connection is opened, the command executed, and the connection closed.

3. In the code editor, select btnFill in the ControlName list, and then select Click in the MethodName list.

Visual Studio adds the Click event handler to the code.

4. Add the following code to the procedure:

Me.dsEmployeeList1.EmployeeList.Clear()
Me.daEmployeeList.Fill(Me.dsEmployeeList1.EmployeeList)
UpdateDisplay()

This code reloads the data into the DataSet from the data source, and then it updates the version and row status information of the form.

5. Press F5 to run the application.

6. In the First Name text box, change Steven Changed to Steven, and then click Save.

The application updates the Current value of the DataRow.

7. Click Command.

The application updates the data source, but because executing the command directly does not update the DataSet, the change isn't reflected.

8. Click Fill.

The application reloads the data. Note that the First Name text box has been changed.

9. Close the application.


Visual C# .NET

1. In the form designer, double-click the Command button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to the event handler:

System.Data.SqlClient.SqlCommand cmdUpdate;
System.Data.DataRow drCurrent;

cmdUpdate = this.daEmployeeList.UpdateCommand;
drCurrent = GetRow();

cmdUpdate.Parameters["@FirstName"].Value = drCurrent["FirstName"];
cmdUpdate.Parameters["@LastName"].Value = drCurrent["LastName"];
cmdUpdate.Parameters["@empID"].Value = drCurrent["EmployeeID"];

this.cnNorthwind.Open();
cmdUpdate.ExecuteNonQuery();
this.cnNorthwind.Close();

this.dsEmployeeList1.AcceptChanges();
UpdateDisplay();

This code first creates two temporary variables, and then it sets them to the Update command of the daEmployeeList DataAdapter and the row currently being displayed on the form, respectively. It then sets the three parameters in the Update command to the values of the row. Finally, the connection is opened, the command executed, and the connection closed.

3. In the form designer, double-click the Fill button.

Visual Studio adds the event handler to the code window.

4. Add the following code to the event handler:

this.dsEmployeeList1.Employees.Clear();
this.daEmployeeList.Fill(this.dsEmployeeList1.Employees);
UpdateDisplay();

This code reloads the data into the DataSet from the data source and then updates the version and row status information of the form.

5. Press F5 to run the application.

6. In the First Name text box, change Steven Changed to Steven, and then click Save.

The application updates the Current value of the DataRow.

7. Click Command.

The application updates the data source, but because executing the command directly does not update the DataSet, the change isn't reflected.

8. Click Fill.

The application reloads the data. Note that the First Name text box has been changed.

9. Close the application.


Accepting and Rejecting DataSet Changes

The final step in the process of updating data is to set a new baseline for the DataRows.

This is done by using the AcceptChanges method. The DataAdapter’s Update method

calls AcceptChanges automatically. If you execute a command directly, you must call

AcceptChanges to update the row state values.

If instead of accepting the changes made to the DataSet, you want to discard them, you

can call the RejectChanges method. RejectChanges returns the DataSet to the state it

was in the last time AcceptChanges was called, discarding all new rows, restoring

deleted rows, and returning all columns to their original values.

Important If you call AcceptChanges or RejectChanges prior to

updating the data source, you will lose the ability to persist

the changes made since the last time AcceptChanges was

called using the Update method. The DataAdapter’s Update

method uses the RowStatus property to determine which

rows to persist, and both AcceptChanges and

RejectChanges set the RowStatus of every row to

Unchanged.

Using AcceptChanges

The AcceptChanges method is supported by the DataSet, the DataTable, and the

DataRow. Under most circumstances, you need only call AcceptChanges on the DataSet

because it calls AcceptChanges for each DataTable that it contains, and the DataTable,

in turn, calls AcceptChanges for each DataRow.

When the AcceptChanges call reaches the DataRow, rows with a RowStatus of either

Added or Modified will have the Original values of each column changed to the Current

values, and their RowStatus will be set to Unchanged. Deleted rows will be removed

from the Rows collection.
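Because the DataSet classes work entirely in memory, you can watch these transitions without a database connection. The following console sketch (the table and column names are invented for illustration, not taken from the book's sample project; note that in code the row status is exposed as the RowState property) shows AcceptChanges resetting the row status and copying Current values into the Original version:

```csharp
using System;
using System.Data;

// Illustrative sketch: AcceptChanges sets a new baseline for a DataRow.
class AcceptChangesDemo
{
    static void Main()
    {
        var table = new DataTable("Employees");
        table.Columns.Add("LastName", typeof(string));

        DataRow row = table.Rows.Add("Buchanan");
        Console.WriteLine(row.RowState);              // Added

        table.AcceptChanges();                        // new baseline
        Console.WriteLine(row.RowState);              // Unchanged

        row["LastName"] = "Buchanan New";
        Console.WriteLine(row.RowState);              // Modified
        Console.WriteLine(row["LastName", DataRowVersion.Original]); // Buchanan

        table.AcceptChanges();                        // Original now matches Current
        Console.WriteLine(row["LastName", DataRowVersion.Original]); // Buchanan New
        Console.WriteLine(row.RowState);              // Unchanged
    }
}
```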

Accept Changes to a DataSet

Visual Basic .NET

1. Add the following code to the end of the btnCmd_Click procedure that
you created in the previous exercise:

Me.dsEmployeeList1.AcceptChanges()
UpdateDisplay()

2. Press F5 to run the application.

3. In the Last Name text box, type New after Buchanan, and then click
Save.

The application updates the Current value.

4. Click Command.

Because the AcceptChanges method is called, the Version and RowStatus
information is updated.

5. In the Last Name text box, change Buchanan New back to Buchanan,
and then click Save.

The application updates the Current value and RowStatus.

6. Click Accept.

The application updates the Original value and RowStatus.

7. Click Update, and then click Fill.

Because the RowStatus of the DataRow had been reset to Unchanged, no
changes were persisted to the data source.

8. Close the application.

Visual C# .NET

1. Add the following code to the end of the btnCmd_Click procedure that
you created in the previous exercise:

this.dsEmployeeList1.AcceptChanges();
UpdateDisplay();

2. Press F5 to run the application.

3. In the Last Name text box, type New after Buchanan, and then click
Save.

The application updates the Current value.

4. Click Command.

Because the AcceptChanges method is called, the Version and RowStatus
information is updated.

5. In the Last Name text box, change Buchanan New back to Buchanan,
and then click Save.

The application updates the Current value and RowStatus.

6. Click Accept.

The application updates the Original value and RowStatus.

7. Click Update, and then click Fill.

Because the RowStatus of the DataRow had been reset to Unchanged, no
changes were persisted to the data source.

8. Close the application.

Using RejectChanges

Like AcceptChanges, the RejectChanges method is supported by the DataSet,

DataTable, and DataRow objects, and each object cascades the call to the objects below

it in the hierarchy.

When the RejectChanges call reaches the DataRow, rows with a RowStatus of either
Deleted or Modified will have the Current values of each column restored to their
Original values, and their RowStatus will be set to Unchanged. Added rows will be
removed from the Rows collection.
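This behavior is easy to observe in memory. In this illustrative console sketch (the names are invented, not the book's sample project), a modified row snaps back to its Original values and a newly added row disappears:

```csharp
using System;
using System.Data;

// Illustrative sketch: RejectChanges rolls a DataTable back to its baseline.
class RejectChangesDemo
{
    static void Main()
    {
        var table = new DataTable("Employees");
        table.Columns.Add("FirstName", typeof(string));
        table.Rows.Add("Stephen");
        table.AcceptChanges();                        // baseline: one Unchanged row

        DataRow row = table.Rows[0];
        row["FirstName"] = "Reject";                  // row status becomes Modified
        table.Rows.Add("Extra");                      // row status Added

        table.RejectChanges();                        // roll back to the baseline
        Console.WriteLine(table.Rows.Count);          // 1 -- the Added row is gone
        Console.WriteLine(row["FirstName"]);          // Stephen
        Console.WriteLine(row.RowState);              // Unchanged
    }
}
```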

Reject the Changes to a DataRow

Visual Basic .NET

1. In the code editor, select btnReject in the ControlName list, and then
select Click in the MethodName list.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the procedure:

Me.dsEmployeeList1.RejectChanges()
UpdateDisplay()

3. Press F5 to run the application.

4. In the First Name text box, change Stephen to Reject, and then click
Save.

The application updates the Current value and RowStatus.

5. Click Reject.

The application returns the Current version of the row to its Original values
and then resets the RowStatus to Unchanged.

6. Close the application.

Visual C# .NET

1. In the form designer, double-click the Reject button.

Visual Studio adds the Click event handler to the code window.

2. Add the following code to the event handler:

this.dsEmployeeList1.RejectChanges();
UpdateDisplay();

3. Press F5 to run the application.

4. In the First Name text box, change Stephen to Reject, and then click
Save.

The application updates the Current value and RowStatus.

5. Click Reject.

The application returns the Current version of the row to its Original values
and then resets the RowStatus to Unchanged.

6. Close the application.

Chapter 9 Quick Reference

To: Add a row to a DataTable
Do this: Use the NewRow method of the DataTable to create the row, and then
use the Add method of the Rows collection:

newRow = myTable.NewRow()
myTable.Rows.Add(newRow)

To: Delete a row from a DataTable
Do this: Use the Delete method of the DataRow:

myRow.Delete()

To: Change the values in a DataRow
Do this: Use the DataRow's Item property:

myRow.Item("Column Name") = newValue

To: Suspend constraint enforcement
Do this: Use BeginEdit combined with either EndEdit or CancelEdit:

myRow.BeginEdit()
myRow.Item("Column Name") = newValue
myRow.EndEdit()

Or:

myRow.BeginEdit()
myRow.Item("Column Name") = newValue
myRow.CancelEdit()

To: Accept changes to data
Do this: Use the AcceptChanges method of the DataSet, DataTable, or DataRow:

myDataSet.AcceptChanges()

To: Reject changes to data
Do this: Use the RejectChanges method of the DataSet, DataTable, or DataRow:

myDataSet.RejectChanges()
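As a sketch only, the Quick Reference entries above can be exercised end to end against an in-memory DataTable; the table and column names here are invented for illustration:

```csharp
using System;
using System.Data;

// Illustrative recap of the Quick Reference: add, edit, reject, and delete rows.
class QuickReferenceDemo
{
    static void Main()
    {
        var table = new DataTable("Demo");
        table.Columns.Add("Name", typeof(string));

        // Add a row: NewRow, then Rows.Add.
        DataRow newRow = table.NewRow();
        newRow["Name"] = "First";
        table.Rows.Add(newRow);

        // Accept changes to set a new baseline.
        table.AcceptChanges();

        // Change a value through the row's indexer.
        newRow["Name"] = "Changed";

        // Reject changes to return to the baseline.
        table.RejectChanges();
        Console.WriteLine(newRow["Name"]);            // First

        // BeginEdit/CancelEdit discard an in-progress edit.
        newRow.BeginEdit();
        newRow["Name"] = "Temporary";
        newRow.CancelEdit();
        Console.WriteLine(newRow["Name"]);            // First

        // Delete a row.
        newRow.Delete();
        Console.WriteLine(newRow.RowState);           // Deleted
    }
}
```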

Chapter 10: ADO.NET Data-Binding in Windows

Forms

Overview

In this chapter, you’ll learn how to:


§ Simple-bind control properties using the Properties window

§ Simple-bind control properties using the Advanced Binding dialog box

§ Simple-bind control properties at run time

§ Complex-bind control properties using the Properties window

§ Complex-bind control properties at run time

§ Use CurrencyManager properties

§ Respond to CurrencyManager events

§ Use the Binding object’s properties

In previous chapters, we have, of course, been binding data to controls on Windows

Forms, but we haven’t really looked at the process in any detail. We’ll begin to do that in

this chapter. We’ll start by examining the underlying mechanisms used to bind Windows

Forms controls to Microsoft ADO.NET data sources. In Chapter 11, we’ll examine the

techniques used to perform some common data-binding tasks.

Understanding Data-Binding in Windows Forms

The Microsoft .NET Framework provides an extremely powerful and flexible mechanism

for binding data to properties of controls. Although in the majority of cases you will bind

to the displayed value of a control—for example, the DisplayMember property of a

ListBox control or the Text property of a TextBox control—you can bind any property of a

control to a data source.

This makes it possible, for example, to bind the background and foreground colors of a

form and the font characteristics of its controls to a row in a database table. By using this

technique, you could allow users to customize an application’s user interface without

requiring any changes to the code base.

Data Sources

Windows Forms controls can be bound to any data source, not just traditional database

tables. Technically, to qualify as a data source, an object must implement the IList,

IBindingList, or IEditableObject interface.

The IList interface, the simplest of the three, is implemented by arrays and collections.

This means that it’s possible, for example, to bind the Text property of a label to the

contents of a ListBox control’s ObjectCollection (although it’s difficult to think of a

situation in which doing so might be useful). Any object that implements both the IList

and the IComponent interfaces can be bound at design time as well as at run time.

The IBindingList interface, which is implemented by the DataView and

DataViewManager objects, supports change notification. Objects that implement this

interface raise ListChanged events to notify the application when either an item in the

list or the list itself has been changed.

Finally, the IEditableObject interface, which is implemented by the DataRowView

object, exposes the BeginEdit, EndEdit, and CancelEdit methods.

Fortunately, when you’re working within ADO.NET, you can largely ignore the details of

interface implementation. They’re really only important if you are building your own

data source objects.
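Although you can usually ignore these interfaces, it is easy to observe one of them at work. This illustrative console sketch (invented names) shows that a DataView, the object Windows Forms actually binds to, implements IBindingList and raises ListChanged when its underlying DataTable changes:

```csharp
using System;
using System.ComponentModel;
using System.Data;

// Illustrative sketch: DataView implements IBindingList and raises
// ListChanged, the change-notification mechanism data binding relies on.
class ListChangedDemo
{
    static void Main()
    {
        var table = new DataTable("Products");
        table.Columns.Add("ProductName", typeof(string));
        table.Rows.Add("Chai");
        table.AcceptChanges();

        IBindingList view = table.DefaultView;        // DataView as IBindingList
        Console.WriteLine(view.SupportsChangeNotification); // True

        ((DataView)view).ListChanged += (s, e) =>
            Console.WriteLine("ListChanged: " + e.ListChangedType);

        table.Rows.Add("Chang");                      // notifies the view of the add
        table.Rows[0]["ProductName"] = "Chai Tea";    // notifies the view of the edit
    }
}
```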

Within the .NET Framework, the actual binding of data in a Windows form is
handled by a number of objects working in conjunction, as described below.


At the highest level in the logical architecture is the BindingContext object. Any

object that inherits from the Control class can contain a BindingContext object.

In most cases, you’ll use the form’s BindingContext object, but if your form

includes a container control, such as a Panel or a GroupBox, that contains

data-bound controls, it may be easier to create a separate BindingContext

object for the container control because it saves a level of indirection when

referencing the contained controls.

The BindingContext object manages one or more BindingManagerBase

objects, one for each data source that is referenced by the form. The

BindingManagerBase is an abstract class, so instances of this object cannot be

directly instantiated. Instead, the objects managed by the BindingContext

object will actually be instances of either the PropertyManager class or the

CurrencyManager class. All of these objects are implemented in the

System.Windows.Forms namespace.

If the data source can return only a single value, the BindingManagerBase

object will be an instance of the PropertyManager class. If the data source

returns (or can return) a collection of objects, the BindingManagerBase object

will be an instance of the CurrencyManager class. ADO.NET objects will

always instantiate CurrencyManagers.

The CurrencyManager object keeps track of the position in the list and
manages the bindings to that data source. Note that the data source itself
doesn't know which item is being displayed.

ADO The CurrencyManager’s Position property maintains the current

row in a data source. ADO.NET data sources don’t support

cursors and therefore have no knowledge of the ‘current’ row. This

may at first seem awkward, but is actually a more powerful

architecture because it’s now possible to maintain multiple

‘cursors’ in a single data source.

There is a separate instance of the CurrencyManager object for each discrete

data source. If all of the controls on a form bind to a single data source, there

will be a single CurrencyManager. For example, a form that contains text

boxes displaying fields from a single table will contain a single

CurrencyManager object. However, if there are multiple data sources, as in a

form that displays master/detail information, there will be separate

CurrencyManager objects for each data source.

Windows Forms controls contain a DataBindings collection that contains the

Binding objects for that control. The Binding object, as we’ll see, specifies the

data source, the control that is being bound, and the property of the control that

will display the data for simple-bound properties.


The CurrencyManager inherits a BindingsCollection property from the

BindingManagerBase class. The BindingsCollection contains references to the

Binding objects for each control.

Binding Controls to an ADO.NET Data Source

Windows Forms controls in the .NET Framework support two different types of data

binding: simple and complex. The distinction is really quite simple. Control properties that

contain a single value are simple-bound, while properties that contain multiple values,

such as the displayed contents of list boxes and data grids, are complex-bound.

Any given control can contain both simple-bound and complex-bound attributes. For

example, the MonthCalendar control’s MaxDate property, which determines the

maximum allowable selected date, is a simple-bound property containing a single

DateTime value, while its BoldedDates property, which contains an array of dates that

are to be displayed in bold formatting, would be complex-bound.

Simple-Binding Control Properties

In the .NET Framework, any property of a control that contains a single value can be

simple-bound to a single value in a data source.

Binding can take place either at design time or at run time. In either situation, you must

specify three values: the name of the property to be bound, the data source, and a

navigation path within the data source that resolves to a single value.

The navigation path consists of a period-delimited hierarchy of names. For example, to

reference the ProductID column of the Products table, the navigation path would be

Products.ProductID.

The Microsoft Visual Studio .NET Properties window contains a Data Bindings section

that displays the properties that are most commonly data-bound. Other properties are

available through the (Advanced) section, which opens the Advanced Data Binding

dialog box. The Advanced Data Binding dialog box provides design time access to all the

simple-bound properties of the selected control.

Bind a Property Using the Properties Window

1. Open the Binding project from the Start page or by using the File

menu.

2. In the Solution Explorer, double-click Binding.vb (or Binding.cs, if

you’re using C#) to open the form.

Visual Studio displays the form in the form designer.

3. Select the tbCategoryID text box (after the Category ID label).


4. In the Properties window, expand the Data Bindings section, and then

open the drop-down list for the Text property.

5. Expand dsMaster1, expand Categories, and then select CategoryID.

Bind a Property Using the Advanced Binding Dialog Box

1. In the form designer, select the tbCategoryName text box (after the

Name label).

2. In the Properties window, expand the DataBindings section (if

necessary), and then click the Ellipsis button after the (Advanced)

property.

Visual Studio opens the Advanced Data Binding dialog box with the Text

property selected.

3. Open the drop-down list for the Text property, expand dsMaster1,
expand Categories, and then select CategoryName.

4. Click Close.

Visual Studio sets the data binding. Because Text is one of the default data-bound
properties, its value is shown in the Properties window.


When you bind a control at design time, you simply select the appropriate column from

the drop-down list in the Properties window or the Advanced Data Binding dialog box.

When you’re binding at run time, you must specify two values separately.

The .NET Framework provides a lot of flexibility in how you specify the data source and

navigation path values when creating a binding at run time. For example, both of the

following Binding objects will refer to the ProductID column of the Products table:

bndFirst = New System.Windows.Forms.Binding("Text", Me.dsMaster1, _
    "Products.ProductID")
bndSecond = New System.Windows.Forms.Binding("Text", _
    Me.dsMaster1.Products, "ProductID")

However, because the data source properties are different, the .NET Framework will

create different CurrencyManagers to manage them, and the controls on the form will not

be synchronized.

In some situations, this might be useful. For example, you might need to display two

different rows of a table on a single form, and this technique makes it easy to do so.

However, in the majority of cases, you’ll want all the controls on a form that are bound to

the same table to display information from the same row, and in order to achieve this,

you must be consistent in the way you specify the data source and navigation path

values.

Tip If you're creating a binding at run time that you want synchronized
with design-time bindings, specify only the top level of the hierarchy
as the data source:

bndFirst = New System.Windows.Forms.Binding("Text", _
    Me.dsMaster1, "Products.ProductID")

Bind a Property at Run Time

Visual Basic .NET

1. In the form designer, double-click the Simple button.

Visual Studio opens the code editor and adds the btnSimple Click event
handler.

2. Add the following lines to bind the tbCategoryDescription text box to
the Categories.Description column:

Dim newBinding As System.Windows.Forms.Binding

newBinding = New System.Windows.Forms.Binding("Text", _
    Me.dsMaster1, "Categories.Description")
Me.tbCategoryDescription.DataBindings.Add(newBinding)

This code first declares a new Binding object, and then instantiates it by
passing the property name ("Text"), data source (Me.dsMaster1), and
navigation path ("Categories.Description") to the constructor. Finally, the new
Binding object is added to the DataBindings collection of the
tbCategoryDescription control by using the Add method.

3. Press F5 to run the application.

4. Click the Simple button.

The application adds the binding and displays the value in the text box.

Roadmap We'll examine the code that implements these buttons later in
this chapter.

5. Click the Next button (">") at the bottom of the form.

The application displays the next category, along with its description.

Important If we had passed dsMaster1.Categories as the data source and
"Description" as the navigation path to the Binding's constructor,
the Description field would not display data from the current row
because Visual Studio would have created a second
CurrencyManager. When creating bindings that are to be
synchronized with design-time bindings, be sure to specify only
the DataSet as the data source.

6. Close the application.

Visual C# .NET

1. In the form designer, double-click the Simple button.

Visual Studio opens the code editor and adds the btnSimple Click event
handler.

2. Add the following lines to bind the tbCategoryDescription text box to
the Categories.Description column:

System.Windows.Forms.Binding newBinding;

newBinding = new System.Windows.Forms.Binding("Text",
    this.dsMaster1, "Categories.Description");
this.tbCategoryDescription.DataBindings.Add(newBinding);

This code first declares a new Binding object, and then instantiates it by
passing the property name ("Text"), data source (this.dsMaster1), and
navigation path ("Categories.Description") to the constructor. Finally, the new
Binding object is added to the DataBindings collection of the
tbCategoryDescription control by using the Add method.

3. Press F5 to run the application.

4. Click the Simple button.

The application adds the binding and displays the value in the text box.

Roadmap We'll examine the code that implements these buttons later in
this chapter.

5. Click the Next button (">") at the bottom of the form.

The application displays the next category, along with its description.

Important If we had passed dsMaster1.Categories as the data source and
"Description" as the navigation path to the Binding's constructor,
the Description field would not display data from the current row
because Visual Studio would have created a second
CurrencyManager. When creating bindings that are to be
synchronized with design-time bindings, be sure to specify only
the DataSet as the data source.

6. Close the application.

Complex-Binding Control Properties

Unlike simple-bound properties, which must be bound to a single value, complex-bound

control properties contain (and possibly display) multiple items. The most common

examples of complex-bound controls are, of course, the ListBox and ComboBox, but any

control property that accepts multiple values can be complex-bound.

Although the techniques can vary somewhat depending on the specific control, most

complex-bound controls are bound by setting the DataSource property directly rather

than by adding a Binding object to the DataBindings collection.

The most common complex-bound controls, the ListBox, ComboBox, and DataGrid, also

expose a DisplayMember property, which determines what will be displayed by the

control. In the case of the ListBox and ComboBox controls, the DisplayMember property

must resolve to a single value, while the DataGrid control can display multiple values for

each row (for example, all the columns of a DataTable).

Roadmap We’ll examine the use of the ValueMember property to create

look-up tables in Chapter 11.

In addition, the ListBox and ComboBox controls expose a ValueMember property, which

allows the control to display a user-friendly name while updating an underlying DataSet

with the value of a different column.

One particularly convenient possibility when using complex-bound controls is to bind to a

relationship rather than to a DataSet, which causes the items displayed in the control to

be automatically filtered. We’ll see an example of this technique in the following exercise.

Add a Complex Data-Binding Using the Properties Window

1. In the form designer, select the lbProducts ListBox.

2. In the Properties window, select DataSource, and then select

dsMaster1 from the drop-down list.


3. In the DisplayMember drop-down list, expand Categories, expand

CategoryProducts, and then select the ProductName column.

4. Press F5 to run the application.

Visual Studio displays the products in the current category.

Roadmap We’ll examine the code that implements these buttons later in

this chapter.

5. Click the Next button (“>”) at the bottom of the form.

The application displays the next category, along with its products.

6. Close the application.

Add a Complex Data-Binding at Run Time

Visual Basic .NET

1. In the form designer, double-click the Complex button.

Visual Studio opens the code editor and adds the Click event handler for the
btnComplex button.

2. Add the following code to the event handler:

Me.lbOrderDates.DataSource = Me.dvOrderDates
Me.lbOrderDates.DisplayMember = "OrderDate"

This code simply sets the DataSource and DisplayMember properties to the
OrderDate column of the dvOrderDates DataView.

3. Press F5 to run the application, and then click the Complex button.

The OrderDates list box displays the dates for the product selected in the
Products list box.

4. Select a different product to confirm that the dates that are displayed
change.

5. Close the application.

Visual C# .NET

1. In the form designer, double-click the Complex button.

Visual Studio opens the code editor and adds the Click event handler for the
btnComplex button.

2. Add the following code to the event handler:

this.lbOrderDates.DataSource = this.dvOrderDates;
this.lbOrderDates.DisplayMember = "OrderDate";

This code simply sets the DataSource and DisplayMember properties to the
OrderDate column of the dvOrderDates DataView.

3. Press F5 to run the application, and then click the Complex button.

The OrderDates list box displays the dates for the product selected in the
Products list box.

4. Select a different product to confirm that the dates that are displayed
change.

5. Close the application.

Using the BindingContext Object

As we have seen, the BindingContext object is the highest level object in the binding

hierarchy and manages the BindingManagerBase objects that control the interaction

between a data source and the controls bound to it.

The BindingContext object doesn’t expose any useful methods or events, and has only a

single property, as shown in Table 10-1. The Item property is used to index into the

BindingManagerBase collection contained in the BindingContext object. The first version,


which uses only the data source as a parameter, is used if no navigation path is

required. For example, if a DataTable is specified as the data source for a DataGrid, you

could use the following syntax to retrieve the CurrencyManager that controls that

binding:

Me.myDG.DataSource = Me.myDataSet.myTable
myCurrencyManager = Me.BindingContext(Me.myDataSet.myTable)

The second version of the Item property allows the specification of the navigation path.

However, the navigation path provided here must resolve to a list, not a single property.

For example, if a text box is bound to the Description column of a DataTable, the

following syntax would be used to retrieve the CurrencyManager that controls the

binding:

Me.myText.DataBindings.Add("Text", Me.myDataSet, "myTable.Description")
myCurrencyManager = Me.BindingContext(Me.myDataSet.myTable)

Table 10-1: BindingContext Properties

Item(DataSource): Returns the BindingManagerBase object associated with the
specified DataSource.

Item(DataSource, DataMember): Returns the BindingManagerBase object
associated with the specified DataSource and DataMember, where the
DataMember is a table or relation.

Using the CurrencyManager Object

The CurrencyManager object is fundamental to the Windows Forms data-binding

architecture. Through its properties, methods, and events, the CurrencyManager object

manages the link between a data source and the controls that display data from that

source.

CurrencyManager Properties

The properties exposed by the CurrencyManager are shown in Table 10-2. With the

exception of the Position property, they are all read-only.

Table 10-2: CurrencyManager Properties

Bindings: The collection of Binding objects being managed by the
CurrencyManager.

Count: The number of rows managed by the CurrencyManager.

Current: The value of the current object in the data source.

List: The list managed by the CurrencyManager.

Position: Gets or sets the current item in the list managed by the
CurrencyManager.

The Bindings and List properties define the relationship between the data source and the

controls bound to it. The Bindings property, which returns a BindingsCollection object,

contains the Binding object for each individual control property that is bound to the data

source. We’ll examine the Binding object later in this chapter.

The List property returns a reference to the data source that is managed by the
CurrencyManager, typed as the IList interface. To treat
the data source as its native type in code, you must explicitly cast it to that type.

As might be expected, the Count property returns the number of rows in the list managed

by the CurrencyManager. Unlike some other environments, the Count property is

immediately available—it is not necessary to move to the end of the list before the Count

property is set.

The Current property returns the value of the current row in the data source as an object.

Like the List property, if you want to treat the value returned by Current as its native type,

you must explicitly cast it.

Remember that the Current property is read-only. To change the current row in the data

source, you must use the Position property, which is the only property exposed by the

CurrencyManager that is not read-only. The Position property is an integer that

represents the zero-based index into the List property.

Use CurrencyManager Read-Only Properties

Visual Basic .NET

1. In the code editor, select btnReadOnly in the Control Name combo
box, and then select Click in the Method Name combo box.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the method:

Dim strMsg As String
Dim cm As System.Windows.Forms.CurrencyManager
Dim dsrc As System.Data.DataView

cm = Me.BindingContext(Me.dsMaster1, "Categories")
dsrc = CType(cm.List, System.Data.DataView)

strMsg = "There are " & cm.Count.ToString & " rows in "
strMsg += dsrc.Table.TableName.ToString & "."
strMsg += vbCrLf & "There are " & cm.Bindings.Count.ToString
strMsg += " controls bound to it."
MessageBox.Show(strMsg)

The first three lines declare some local variables. The fourth line sets the
variable cm to the CurrencyManager for the Categories DataTable, while the
next line assigns the variable dsrc to the data source referenced by the List
property.

Note that the value returned by List is explicitly cast to a DataView.
(Remember that although Categories is a DataTable, data binding always
occurs to the default view.)

The remaining lines display the Count and Bindings.Count properties in a
message box.

3. Press F5 to run the application.

4. Click the Read-Only button.

The application displays the CurrencyManager properties, showing two bound
controls.

5. Dismiss the dialog box, and then click the Simple button.

The application adds the binding for the Description control.

6. Click the Read-Only button.

The application displays the CurrencyManager properties, showing three
bound controls.

7. Close the application.

Visual C# .NET

1. In the form designer, double-click the Read-Only button.

Visual Studio adds the event handler to the code window.

2. Add the following code to the procedure:

string strMsg;
System.Windows.Forms.CurrencyManager cm;
System.Data.DataView dsrc;

cm = (System.Windows.Forms.CurrencyManager)
    this.BindingContext[this.dsMaster1, "Categories"];
dsrc = (System.Data.DataView) cm.List;

strMsg = "There are " + cm.Count.ToString() + " rows in ";
strMsg += dsrc.Table.TableName.ToString() + ".";
strMsg += "\nThere are " + cm.Bindings.Count.ToString();
strMsg += " controls bound to it.";
MessageBox.Show(strMsg);

The first three lines declare some local variables. The fourth line sets the
variable cm to the CurrencyManager for the Categories DataTable, while the
next line assigns the variable dsrc to the data source referenced by the List
property.

Note that the value returned by List is explicitly cast to a DataView.
(Remember that although Categories is a DataTable, data binding always
occurs to the default view.)

The remaining lines display the Count and Bindings.Count properties in a
message box.

3. Press F5 to run the application.

4. Click the Read-Only button.

The application displays the CurrencyManager properties, showing two bound
controls.

5. Dismiss the dialog box, and then click the Simple button.

The application adds the binding for the Description control.

6. Click the Read-Only button.

The application displays the CurrencyManager properties, showing three
bound controls.

7. Close the application.

Use the Position Property

Visual Basic .NET

1. Open the region labeled ‘Navigation Buttons.’

2. Add the following code to the btnFirst_Click event handler:

3. Me.BindingContext(Me.dsMaster1, “Categories”).Position = 0

UpdateDisplay()

This code sets the Position property of the CurrencyManager for the Categories

DataTable to the beginning (remember that Position is a zero-based

index), and then calls the UpdateDisplay function. UpdateDisplay, which is

Microsoft ADO.Net – Step by Step 231

contained in the Utility Functions region, simply displays ‘Category x of y’ in

the text box at the bottom of the form.

4. Add the following code to the btnPrevious_Click event handler:

With Me.BindingContext(Me.dsMaster1, "Categories")
    If .Position = 0 Then
        Beep()
    Else
        .Position -= 1
        UpdateDisplay()
    End If
End With

This code uses Microsoft Visual Basic’s With … End With structure to simplify
the reference to the CurrencyManager. Note that it checks to see whether the
Position property is already at the beginning of the list before
decrementing the value. The Position property does not throw an exception if
it is set outside the bounds of the list.

13. The remaining navigation code is already there, so press F5 to run

the application.

14. Use the navigation buttons to move through the display.

15. Close the application.

Visual C# .NET

1. Open the region labeled ‘Navigation Buttons.’

2. Add the following code to the btnFirst_Click event handler:

this.BindingContext[this.dsMaster1, "Categories"].Position = 0;
UpdateDisplay();

This code sets the Position property of the CurrencyManager for the Categories

DataTable to the beginning (remember that Position is a zero-based

index), and then calls the UpdateDisplay function. UpdateDisplay, which is

contained in the Utility Functions region, simply displays ‘Category x of y’ in

the text box at the bottom of the form.

4. Add the following code to the btnPrevious_Click event handler:

System.Windows.Forms.BindingManagerBase bmb;
bmb = (System.Windows.Forms.BindingManagerBase)
    this.BindingContext[this.dsMaster1, "Categories"];

bmb.Position -= 1;
UpdateDisplay();

10. The remaining navigation code is already there, so press F5 to run the

application.

11. Use the navigation buttons to move through the display.

12. Close the application.

CurrencyManager Methods

The public methods exposed by the CurrencyManager object are shown in Table 10-3.

Table 10-3: CurrencyManager Methods

Method             Description
AddNew             Adds a new item to the underlying list
CancelCurrentEdit  Cancels the current edit operation
EndCurrentEdit     Commits the current edit operation
Refresh            Redisplays the contents of bound controls
RemoveAt(Index)    Removes the item at the position specified by Index in the underlying list
ResumeBinding      Resumes data binding and data validation after the SuspendBinding method has been called
SuspendBinding     Temporarily suspends data binding and data validation

The data editing methods AddNew and RemoveAt, which add and remove items from

the data source, along with the CancelCurrentEdit and EndCurrentEdit methods, are for

use only within complex-bound controls. Unless you are creating a custom version of a

complex-bound control, use the DataView’s or DataRowView’s equivalent methods.
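Unless you are writing such a control, the DataView equivalents are the ones to call. The following console sketch shows AddNew, EndEdit, and Delete on a DataView standing in for the CurrencyManager methods; the table and column names are invented for illustration:

```csharp
using System;
using System.Data;

class DataViewEditing
{
    // Edits a table through its DefaultView, mirroring what
    // CurrencyManager.AddNew, EndCurrentEdit, and RemoveAt do.
    public static DataTable BuildAndEdit()
    {
        DataTable categories = new DataTable("Categories");
        categories.Columns.Add("CategoryName", typeof(string));

        DataView view = categories.DefaultView;

        // Equivalent of CurrencyManager.AddNew + EndCurrentEdit
        DataRowView first = view.AddNew();
        first["CategoryName"] = "Beverages";
        first.EndEdit();

        // Add a second row, then remove it: the equivalent
        // of CurrencyManager.RemoveAt(1)
        DataRowView second = view.AddNew();
        second["CategoryName"] = "Condiments";
        second.EndEdit();
        view.Delete(1);

        return categories;
    }

    static void Main()
    {
        DataTable t = DataViewEditing.BuildAndEdit();
        Console.WriteLine(t.Rows.Count);
    }
}
```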

Roadmap We’ll examine the SuspendBinding and ResumeBinding

methods in Chapter 11.

The SuspendBinding and ResumeBinding methods allow binding (and hence data

validation) to be temporarily suspended. As we’ll see in Chapter 11, these methods are

typically used when data validation requires that values be entered into multiple fields

before they are validated.

The Refresh method is used only with data sources that don’t support change

notification, such as collections and arrays.

CurrencyManager Events

The events exposed by the CurrencyManager are shown in Table 10-4.

Table 10-4: CurrencyManager Events

Event            Description
CurrentChanged   Occurs when the bound value changes
ItemChanged      Occurs when the current item has changed
PositionChanged  Occurs when the Position property has changed

The CurrentChanged and PositionChanged events both occur whenever the current row
in the CurrencyManager’s list changes, and both receive the standard
System.EventArgs. The ItemChanged event, by contrast, receives an argument of
the type ItemChangedEventArgs, which includes an Index property.

The ItemChanged event occurs when the underlying data is changed. Under most

circumstances, when working with ADO.NET objects, you will use the DataRow or

DataColumn Changed and Changing events because they provide greater flexibility, but

there is nothing to prevent responding to the CurrencyManager’s ItemChanged event if it

is more convenient.
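For comparison, here is a minimal console sketch of the System.Data ColumnChanged event the paragraph above recommends. The table and column names are invented, and no WinForms binding is involved:

```csharp
using System;
using System.Data;

class ColumnChangeDemo
{
    // Counts how many times ColumnChanged has fired.
    public static int ChangeCount;

    public static DataTable Build()
    {
        DataTable categories = new DataTable("Categories");
        categories.Columns.Add("CategoryName", typeof(string));
        categories.Rows.Add("Beverages");
        categories.AcceptChanges();

        // Unlike CurrencyManager.ItemChanged, the event args here
        // identify the exact row and column that changed.
        categories.ColumnChanged += (sender, e) =>
        {
            ChangeCount++;
            Console.WriteLine("{0} changed to {1}",
                e.Column.ColumnName, e.ProposedValue);
        };
        return categories;
    }

    static void Main()
    {
        DataTable t = Build();
        t.Rows[0]["CategoryName"] = "Condiments"; // raises ColumnChanged
        Console.WriteLine(ChangeCount);
    }
}
```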

Respond to an ItemChanged Event

Visual Basic .NET

1. Add the following event handler to the code editor:

Private Sub Position_Changed(ByVal sender As System.Object, _
        ByVal e As System.EventArgs)
    Dim strMsg As String

    strMsg = "Row " & (Me.BindingContext(Me.dsMaster1, _
        "Categories").Position + 1).ToString
    MessageBox.Show(strMsg)
End Sub

The code simply displays the current row number in a message box.

9. Expand the Region labeled Windows Form Designer generated code,

and add the following code to the end of the New sub to connect the

event handler to the PositionChanged event:

AddHandler Me.BindingContext(dsMaster1, "Categories").PositionChanged, _
    AddressOf Me.Position_Changed

10. Press F5 to run the application, and then click the Next button (‘>’).

The application displays a message box showing the new row number.


11. Close the application.

Visual C# .NET

1. Add the following event handler to the code editor:

private void Position_Changed(object sender, System.EventArgs e)
{
    string strMsg;

    strMsg = "Row " + (this.BindingContext[this.dsMaster1,
        "Categories"].Position + 1).ToString();
    MessageBox.Show(strMsg);
}

The code simply displays the current row number in a message box.

9. Add the code to bind the event handler to the bottom of the

frmBindings() sub:

this.BindingContext[this.dsMaster1, "Categories"].PositionChanged
    += new EventHandler(this.Position_Changed);

11. Press F5 to run the application, and then click the Next button (‘>’).

The application displays a message box showing the new row number.


12. Close the application.

Using the Binding Object

The Binding object represents the link between a simple-bound control property and the

CurrencyManager. The control’s DataBindings collection contains a Binding object for

each bound property.

Binding Object Properties

The properties exposed by the Binding object are shown in Table 10-5. All of the

properties are read-only.

Table 10-5: Binding Properties

Property            Description
BindingManagerBase  The BindingManagerBase that manages this Binding object
BindingMemberInfo   Returns information regarding this Binding object based on the DataMember specified in its constructor
Control             The control being bound
DataSource          The data source for the binding
IsBinding           Indicates whether the binding is active
PropertyName        The control’s data-bound property


The BindingManagerBase, Control, and PropertyName properties define the data

binding. The BindingManagerBase property returns the CurrencyManager or

PropertyManager that manages the Binding object, while the Control and PropertyName

properties specify the control property containing the data.

The IsBinding property indicates whether the binding is active. It returns True unless
SuspendBinding has been invoked.

The DataSource property returns the data source to which the control property is bound

as an object. Note that it returns the data source only, not the navigation path. To

retrieve the Binding object’s navigation path, you must use the BindingMemberInfo

property, a complex object whose fields are shown in Table 10-6.

Table 10-6: BindingMemberInfo Properties

Field          Description
BindingField   The data source property specified by the Binding object’s navigation path
BindingMember  The complete navigation path of the Binding object
BindingPath    The navigation path up to, but not including, the data source property specified by the Binding object’s navigation path

The BindingMember field of the BindingMemberInfo property represents the entire

navigation path of the binding, while the BindingField field represents only the final field.

The BindingPath field represents everything up to the BindingField. For example, given

the navigation path ‘Categories.CategoryProducts.ProductID,’ the BindingField is

‘ProductID,’ while the BindingPath is ‘Categories.CategoryProducts.’ Note that all three

properties return a string value, not an object reference.
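The relationship among the three fields can be sketched with plain string handling. The helper below is not how BindingMemberInfo is implemented; it merely reproduces the last-dot convention described above:

```csharp
using System;

class NavigationPathDemo
{
    // Splits a binding navigation path the way BindingMemberInfo
    // reports it: everything after the last dot is the field,
    // everything before it is the path, and the whole string is
    // the member.
    public static string[] Split(string bindingMember)
    {
        int lastDot = bindingMember.LastIndexOf('.');
        string bindingPath =
            lastDot < 0 ? "" : bindingMember.Substring(0, lastDot);
        string bindingField = bindingMember.Substring(lastDot + 1);
        return new[] { bindingMember, bindingPath, bindingField };
    }

    static void Main()
    {
        string[] parts = Split("Categories.CategoryProducts.ProductID");
        Console.WriteLine("BindingMember: " + parts[0]);
        Console.WriteLine("BindingPath: " + parts[1]);
        Console.WriteLine("BindingField: " + parts[2]);
    }
}
```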

Use the BindingMemberInfo Property

Visual Basic .NET

1. In the code editor, select btnBindings in the Control Name combo box,

and then select Click in the Method Name combo box.

Visual Studio adds the event handler template to the code.

2. Add the following code to the method:

Dim strMsg As String
Dim bmo As System.Windows.Forms.BindingMemberInfo

bmo = Me.tbCategoryID.DataBindings(0).BindingMemberInfo
strMsg = "BindingMember: " + bmo.BindingMember.ToString
strMsg += vbCrLf & "BindingPath: " + _
    bmo.BindingPath.ToString
strMsg += vbCrLf & "BindingField: " + _
    bmo.BindingField.ToString
MessageBox.Show(strMsg)

The first two lines declare local variables to be used in the method. The third

line assigns the BindingMemberInfo property of the first (and only) Binding

object in the tbCategoryID DataBindings collection to the bmo variable. The

remaining lines display the BindingMember, BindingPath, and BindingField

properties in a message box.

12. Press F5 to run the application, and then click the

BindingMemberInfo button.

The application displays the BindingMemberInfo fields in a dialog box.

13. Close the application.

Visual C# .NET

1. In the form designer, double-click the Bindings button.

Visual Studio adds the event handler to the code window.

2. Add the following code to the procedure:

string strMsg;
System.Windows.Forms.BindingMemberInfo bmo;

bmo = this.tbCategoryID.DataBindings[0].BindingMemberInfo;

strMsg = "BindingMember: " + bmo.BindingMember.ToString();
strMsg += "\nBindingPath: " + bmo.BindingPath.ToString();
strMsg += "\nBindingField: " + bmo.BindingField.ToString();
MessageBox.Show(strMsg);

11. Press F5 to run the application, and then click the Bindings button.


The application displays the BindingMemberInfo fields in a dialog box.

12. Close the application.

Binding Object Events

The events exposed by the Binding object are shown in Table 10-7. The Format and

Parse events are used to control the way data is displayed to the user. We’ll examine

both of these events in detail in Chapter 11.

Table 10-7: Binding Events

Event   Description
Format  Occurs when data is pushed from the data source to the control or pulled from the control to the data source
Parse   Occurs when data is pulled from the control to the data source

Roadmap We’ll examine the Format and Parse events in Chapter 11.

Chapter 10 Quick Reference

To simple-bind control properties at run time
Create a new Binding object, and add it to the control’s DataBindings collection:

newBinding = New Binding(<propertyString>, <dataSource>, <navigationPath>)
myControl.DataBindings.Add(newBinding)

To complex-bind control properties at run time
Set the DataSource and DisplayMember properties:

myControl.DataSource = myDataSource
myControl.DisplayMember = "field"

To use CurrencyManager properties
Obtain a reference to the CurrencyManager by specifying the data source and navigation path, and then reference its properties in the usual way:

myCM = Me.BindingContext(<dataSource>, <path>)
MessageBox.Show(myCM.Count.ToString())


Chapter 11: Using ADO.NET in Windows Forms

Overview

In this chapter, you’ll learn how to:

§ Format data using the Format and Parse events

§ Use specialized controls to simplify data entry

§ Use data relations to display related data

§ Find rows based on a DataSet’s Sort column

§ Find rows based on other criteria

§ Work with data change events

§ Work with validation events

§ Use the ErrorProvider component

In the previous chapter, we examined the objects that support Microsoft ADO.NET data

binding. In this chapter, we’ll explore using ADO.NET and Windows Forms to perform

some common tasks.

Formatting Data

The Binding object exposes two events, Format and Parse, which support formatting

data for an application. The Format event occurs whenever data is pushed from the data

source to the control, and when it is pulled from the control back to the data source, as

shown in the figure below.

The Format event is used to translate the data from its native format to the format you

want to display to the user, while the Parse event is used to translate it back to its

original format.

Both events receive a ConvertEventArgs argument, which has the properties shown in

Table 11-1. The Value property contains the actual data. When the event is triggered,

this property will contain the original data in its original format. To change the formatting,

you set this value to the new data or format within the event handler. The DesiredType

property is used when you are changing the data type of the value.

Table 11-1: ConvertEventArgs Properties

Property     Description
DesiredType  The data type of the desired value
Value        The data value

Using the Format Event

Because the Format event occurs both when data is being pushed from the data source

and when it is pulled from the control, you must be sure you know which action is taking

place before performing any action. If you change the data type of the value, you can

use the DesiredType property to perform this check.

However, if the data type remains the same, you must use a method that is external to

the event to determine which way the data is being moved. Setting the Tag property of


the control is an easy way to manage this. If you’re using the Tag property for another

purpose, you can use a form-level variable or determine the direction from the value

itself.
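The direction-tracking idea can be seen in isolation. The sketch below strips away the WinForms plumbing and keeps only the Tag-style flag logic; the class and method names are invented, and the real handlers in the exercises that follow operate on ConvertEventArgs instead:

```csharp
using System;

class FormatDirectionDemo
{
    // Stands in for the control's Tag property.
    public static string Tag = "";

    // Mirrors the Format handler: uppercase only when data is
    // being pushed to the control, i.e. not immediately after
    // a Parse has pulled it back out.
    public static string Format(string value)
    {
        if (Tag != "PARSE")
            value = value.ToUpper();
        Tag = "FORMAT";
        return value;
    }

    // Mirrors the Parse handler: always restore the stored format,
    // and flag the direction so the next Format call skips itself.
    public static string Parse(string value)
    {
        Tag = "PARSE";
        return value.ToLower();
    }

    static void Main()
    {
        string shown = Format("Beverages");  // pushed to control: "BEVERAGES"
        string stored = Parse(shown);        // pulled back: "beverages"
        string again = Format(stored);       // Tag is "PARSE", so unchanged
        Console.WriteLine(shown + " " + stored + " " + again);
    }
}
```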

Change the Format of Data Using the Format Event

Visual Basic .NET

1. In Microsoft Visual Studio .NET, open the UsingWindows project from

the Start page or by using the File menu.

2. Double-click the Master.vb form.

Visual Studio displays the form in the form designer.

3. Press F7 to display the code editor.

4. Add the following event handler to the code:

Private Sub FormatName(ByVal sender As Object, ByVal e As _
        ConvertEventArgs)
    If Me.tbCategoryName.Tag <> "PARSE" Then
        e.Value = CType(e.Value, String).ToUpper
    End If
    Me.tbCategoryName.Tag = "FORMAT"
    MessageBox.Show(e.Value, "Format")
End Sub

This code first checks the tbCategoryName text box’s Tag property to see if

the value is “PARSE.” If it isn’t “PARSE,” it translates the Value property of e

to uppercase. It then sets the Tag property to “FORMAT” and displays a

message box showing the Value property.

12. Expand the Region labeled Windows Form Designer generated

code.

13. In the New sub, after the call to UpdateDisplay(), add the code to

call the procedure:

AddHandler Me.tbCategoryName.DataBindings(0).Format, _
    AddressOf Me.FormatName

This line adds the handler to the first (and only) Binding object in the

tbCategoryName text box’s DataBindings collection.

14. Press F5 to run the application. The message box is displayed twice

before the application’s form is displayed, once when the control is

bound and a second time when the data is first pushed to the

control.

15. Close both message boxes.


16. Click the Next button (“>”).

The application displays the formatted CategoryName for the next row.

17. Close the message box.

18. Close the application.

Visual C# .NET

1. In Microsoft Visual Studio .NET, open the UsingWindows project from

the Start page or by using the File menu.

2. Double-click the Master.cs form.


Visual Studio displays the form in the form designer.

3. Press F7 to display the code editor.

4. Add the following event handler to the bottom of the class definition:

private void FormatName(object sender, ConvertEventArgs e)
{
    string eStr = (string) e.Value;

    if ((string) this.tbCategoryName.Tag != "PARSE")
        e.Value = eStr.ToUpper();
    this.tbCategoryName.Tag = "FORMAT";
    MessageBox.Show((string)e.Value, "Format");
}

This code first checks the tbCategoryName text box’s Tag property to see if

the value is “PARSE.” If it isn’t “PARSE,” it translates the Value property of e

to uppercase. It then sets the Tag property to “FORMAT” and displays a

message box showing the Value property.

13. In the frmMaster sub, after the call to UpdateDisplay(), add the code

to call the procedure:

this.tbCategoryName.DataBindings[0].Format +=
    new ConvertEventHandler(this.FormatName);

This line adds the handler to the first (and only) Binding object in the

tbCategoryName text box’s DataBindings collection.

15. Press F5 to run the application. The message box is displayed twice

before the application’s form is displayed, once when the control is

bound and a second time when the data is first pushed to the

control.

16. Close both message boxes.


17. Click the Next button (“>”).

The application displays the formatted CategoryName for the next row.

18. Close the message box.

19. Close the application.


Using the Parse Event

As we have seen, the Parse event occurs when data is being pulled from a control back

to the data source, and it is typically used to “un-format” data that has been customized

for display.

Because Parse is called only once, this “un-formatting” operation should always happen,

unlike the Format operation, which should take place only when data is being pushed to

the control. However, you do need to be careful to set up any variables or properties

required to make sure that the Format event, which will always be called after Parse,

doesn’t reformat data before it is submitted to the data source.

Restore the Original Format of Data Using the Parse Event

Visual Basic .NET

1. Add the following procedure to the bottom of the code editor:

Private Sub ParseName(ByVal sender As Object, ByVal e As _
        ConvertEventArgs)
    Me.tbCategoryName.Tag = "PARSE"
    e.Value = CType(e.Value, String).ToLower
    MessageBox.Show(e.Value, "Parse")
End Sub

Note that because the Parse event occurs only when data is being pulled

from the control, there is no need to check the Tag property.

6. Add the following handler to the New sub, after the handler from the

previous exercise:

AddHandler Me.tbCategoryName.DataBindings(0).Parse, _
    AddressOf Me.ParseName

7. Press F5 to run the application, and close both of the preliminary

Format event message boxes.

8. Add a couple of spaces after “BEVERAGES,” and then click the Next

button (“>”).

The application displays the Parse message box.

9. Close the Parse message box, and then close the application.

Important The code for this book was checked with a pre-release version of

Visual Studio .NET (build 4997). A bug in that build interfered

with the click event firing if the project had both a Parse and

Format event handler and if either of these displayed a

MessageBox.

We fully expect that this will be fixed before Visual Studio .NET is

released; however, if the project re-displays the Beverages

category, please refer to the Microsoft Press Web site for further


information.

10. Comment out the two AddHandler statements in the New sub.

(Otherwise, the message boxes will get irritating as we work through

the remaining exercises.)

Visual C# .NET

1. Add the following procedure to the bottom of the class file:

private void ParseName(object sender, ConvertEventArgs e)
{
    string eStr = (string) e.Value;

    this.tbCategoryName.Tag = "PARSE";
    e.Value = eStr.ToLower();
    MessageBox.Show((string)e.Value, "Parse");
}

Note that because the Parse event occurs only when data is being pulled

from the control, there is no need to check the Tag property.

9. Add the following handler to the New sub, after the handler from the

previous exercise:

this.tbCategoryName.DataBindings[0].Parse +=
    new ConvertEventHandler(this.ParseName);

10. Press F5 to run the application, and close both of the preliminary

Format event message boxes.

11. Add a couple of spaces after “BEVERAGES,” and then click the

Next button (“>”).

The application displays the Parse message box.

12. Close the Parse message box, and then close the application.

Important When a message box is displayed, it stops code in the

application from executing until the user clicks one of the

message box buttons. Stopping the execution of code with a

message box can cause events to fire incorrectly. For example,

the Parse and Format event handlers for this sample include a

call to MessageBox.Show. When you run the sample, add a

couple of spaces in the Name text box, and click the Next button,

you might notice that the Click event for the Next button does not

fire. To ensure that the events for this sample fire correctly, you

can comment out the calls to MessageBox.Show or replace the

calls to MessageBox.Show with Console.WriteLine or

Debug.WriteLine. Console.WriteLine or Debug.WriteLine won’t

stop code from executing and will output specified text to the

Visual Studio .NET Output window so that you can see how the

events are firing.


13. Comment out the two statements that add the

ConvertEventHandlers in the frmMaster sub. (Otherwise, the

message boxes will get irritating as we work through the remaining

exercises.)

Displaying Data in Windows Controls

The Microsoft .NET Framework supports a wide variety of controls for use on Windows

forms, and as we’ve seen, any form property can be bound, directly or indirectly, to an

ADO.NET data source.

The details of each control are unfortunately outside the scope of this book, but in this

section, we’ll examine some specific techniques for data-binding.

Simplifying Data Entry

One of the reasons that so many controls are provided, of course, is to make data entry

simpler and more accurate. TextBox controls are always an easy choice, but the time

spent choosing and implementing controls that more closely match the way the user

thinks about the data will be richly rewarded.

To take a fairly simple example, it is certainly possible to use a ComboBox containing

True and False or Yes and No to represent Boolean values, but in most circumstances,

it’s far more effective to use the CheckBox control provided by the .NET Framework.

The Checked property of the CheckBox control, which determines whether the box is

selected, can be simple-bound either at design time by using the Properties window or at

run time in code by using standard techniques.

Use the CheckBox Control for Boolean Values

1. In the Solution Explorer, double-click Controls.vb (or Controls.cs, if

you’re using C#).

Visual Studio .NET opens the form in the form designer.

2. Select the Discontinued CheckBox control.

3. In the Properties window, expand the Data Bindings section (if

necessary).

4. Select the Checked property. In the drop-down list, expand

dsMaster1,and then expand ProductsExtended and select

Discontinued.

5. Press F5 to run the application.

6. Click the Controls button.

The application displays the Controls form.


7. Move through the DataTable by pressing the Next button (“>”),

confirming that only discontinued products have the field checked.

8. Close the Controls window.

9. Close the application.

In order to simplify the database schema, many tables use artificial keys—an identity

value of some type rather than a key derived from the entity’s attributes. These artificial

keys are convenient, but they don’t typically have any meaning for users. When working

with the primary table, the artificial key can often be hidden from users or simply ignored

by them. With foreign keys, however, this is rarely the case.

Fortunately, the .NET Framework controls that inherit from the ListControl class,

including both ListBox controls and ComboBox controls, make it easy to bind the control

to one column while displaying another, even a column in a different table.

The technique is reasonably straightforward. First set the DataSource and
DisplayMember properties of the list control to the user-friendly table and column. Under
most circumstances, this won’t be the table that the form is updating. Then set the
ValueMember property to the key column of that same lookup table, and finally create a
Binding object linking the SelectedValue property to the foreign key field in the table
being updated.

For example, given the database schema shown in the figure below, if you were creating

a form to update the Relatives table, you would typically use a ComboBox control to

represent the Relationship type rather than forcing the user to remember that Type 1

means Sister, Type 2 means Father, and so on.

To implement this in the .NET Framework, you would set the ComboBox control’s

DisplayMember property to RelationshipTypes.Relationship, and then set its

ValueMember property to RelationshipTypes.RelationshipID. With these settings, the

ComboBox control will display Sister but return a SelectedValue of 1.

Once the properties have been set, either in the Properties window or in code, you must

then add a Binding object to the ComboBox control to link the SelectedValue to the

Relationship field in the Relatives table. Because SelectedValue isn’t available for data
binding at design time, you must do this in code:

[VB]

Me.RelationshipType.DataBindings.Add("SelectedValue", myDS, _
    "Relatives.Relationship")

[C#]

this.RelationshipType.DataBindings.Add("SelectedValue", myDS,
    "Relatives.Relationship");

Display Full Names in a ComboBox Control

Visual Basic .NET

1. In the form designer, select the Category combo box (cbCategory) on

the Controls form.

2. In the Properties window, select the DataSource property.

3. In the drop-down list, select dsMaster1.

4. In the Properties window, select the DisplayMember property.

5. In the drop-down list, expand Categories, and then select

CategoryName.

6. In the Properties window, select the ValueMember property.

7. In the drop-down list, expand Categories, and then select CategoryID.

8. Press F7 to open the code editor window.

9. Expand the Region labeled Windows Form Designer generated code.

10. Add the following code after the call to UpdateDisplay in the New

sub:

Me.cbCategory.DataBindings.Add("SelectedValue", Me.dsMaster1, _
    "ProductsExtended.CategoryID")

This code binds the SelectedValue property of the control to the CategoryID
column of the ProductsExtended DataTable.

11. Press F5 to run the application.

12. Click the Controls button.

The application displays the Controls form and populates the Category combo

box.

13. Close the Controls form and the application.

Visual C# .NET

1. In the form designer, select the Category combo box (cbCategory) on

the Controls form.

2. In the Properties window, select the DataSource property.

3. In the drop-down list, select dsMaster1.

4. In the Properties window, select the DisplayMember property.

5. In the drop-down list, expand Categories, and then select

CategoryName.

6. In the Properties window, select the ValueMember property.

7. In the drop-down list, expand Categories, and then select CategoryID.

8. Press F7 to open the code editor window.

9. Add the following code after the call to UpdateDisplay in the

frmControls sub:

this.cbCategory.DataBindings.Add("SelectedValue",
    this.dsMaster1, "ProductsExtended.CategoryID");

This code binds the SelectedValue property of the control to the CategoryID
column of the ProductsExtended DataTable.

11. Press F5 to run the application.

12. Click the Controls button.

The application displays the Controls form and populates the Category combo

box.

13. Close the Controls form and the application.

Numeric data is presented to the user in a text box. Unfortunately, the .NET Framework
version of the TextBox control doesn’t provide any method to constrain data entry to
numeric characters. One option is to use the NumericUpDown control instead. The user
can type directly into this control (numeric characters only) or use the up and down
arrows to set the value.

The NumericUpDown control can be simple-bound at design time or at run time by using

the standard techniques, and it allows a fine degree of control over the format of the

numbers—you can specify the number of decimal places, the increment by which the

value changes when the user clicks the up and down arrows, and the minimum and

maximum values.

Use NumericUpDown Controls

1. In the form designer, select the UnitPrice NumericUpDown control

(udPrice).

2. In the Properties window, expand the Data Bindings section, if

necessary, and then select the Value property.

3. In the drop-down list box, expand dsMaster, expand

ProductsExtended, and then select UnitPrice.

4. Press F5 to run the application.

5. Click the Controls button.

The application displays the Controls form and populates the UnitPrice

NumericUpDown control.


6. Close the Controls form and the application.

7. Close the Controls form designer and code editor.

Working with DataRelations

The data model implemented by ADO.NET, with its ability to specify multiple DataTables

and the relationships between them, makes it easy to represent relationships of arbitrary

depth on a single form.

By binding the control to a DataRelation rather than to a DataTable, the .NET Framework

will automatically handle synchronization of controls on a form.

Create a Nested ListBox

Visual Basic .NET

1. Select the code editor for Master.vb.

2. In the New sub, add the following data bindings below the two

commented AddHandler calls:

Me.lbOrders.DataSource = Me.dsMaster1
Me.lbOrders.DisplayMember = _
    "Categories.CategoriesProducts.ProductOrders.OrderDate"

4. Press F5 to run the application.

Visual Studio displays the application’s main form and populates the Orders

list box.

5. Select different products in the Products list box.

The application displays the date on which each Product was ordered.


6. Close the application.

Visual C# .NET

1. Select the code editor for Master.cs.

2. In the frmMaster sub, add the following data bindings below the two

commented ConvertEventHandlers:

this.lbOrders.DataSource = this.dsMaster1;
this.lbOrders.DisplayMember =
    "Categories.CategoriesProducts.ProductOrders.OrderDate";

6. Press F5 to run the application.

Visual Studio displays the application’s main form and populates the Orders

list box.

7. Select different products in the Products list box.

The application displays the date on which each Product was ordered.


8. Close the application.

In the previous exercise, we used two ListBox controls to represent a hierarchical
relationship in the data. The DataGrid control also supports the display of hierarchical
data, and it has the advantage of allowing multiple columns from the data source to be
displayed simultaneously. Unfortunately, because it can display only a single table at a
time, the DataGrid control forces the user to manually navigate the hierarchy, and some
users find this confusing.

Note The DataGrid is a complex control, and details of its uses are

outside the scope of this book. The following exercise walks you

through the process of displaying two related DataTables in the

DataGrid control. For more information on using this control, refer

to the Visual Studio and .NET Framework documentation.

Displaying Hierarchical Data Using the DataGrid

1. In the Solution Explorer, double-click DataGrid.vb (or DataGrid.cs, if

you are using C#).

Visual Studio opens the form in the form designer.

2. Select the dgProductOrders DataGrid.

3. In the Properties window, select the DataSource property, expand the

drop-down list, and then select dsMaster1.

4. Select the DataMember property, expand the drop-down list, expand

Categories, and then select CategoriesProducts.

5. Click the Ellipsis button after the TableStyles property.

Visual Studio displays the DataGridTableStyle Collection Editor.


6. Click the Add button.

Visual Studio adds a DataGridTableStyle.

7. Change the Name property of the DataGridTableStyle to tsProducts.

8. Select the MappingName property, expand the drop-down list, expand

Categories, and then select CategoriesProducts.


9. Click the Add button again.

Visual Studio adds a second DataGridTableStyle.

10. Change the Name property to tsOrders and the MappingName

property to Categories.CategoriesProducts.ProductOrders.

11. Click OK to close the editor.

12. Press F5 to run the application, and then click the DataGrid button.

The application displays the DataGrid form.


13. Expand one of the rows in the DataGrid.

The application displays the name of the related table.

14. Select ProductOrders.

The application displays the selected orders.


15. Click the Back button.

The application returns to the Products display.

16. Close the window, and close the application.

17. Close the DataGrid.vb (or DataGrid.cs, if you’re using C#) form.

The DataGrid control is fairly easy to bind to multiple DataTables, but because it can

display only a single table at any time, it can be confusing for the user. The TreeView

control can also represent hierarchical data, and it does so in a way that often matches

the user’s expectations more closely.


Unfortunately, the TreeView control can't be directly bound to a data source. Instead,

you must manually add the data by using the Add method of its Nodes collection. The

following exercise walks you through the process.

Displaying Hierarchical Data Using the TreeView

Visual Basic .NET

1. In the Solution Explorer, double-click TreeView.vb.

Visual Studio displays the form in the form designer.

2. Press F7 to display the code editor.

3. Add the following procedure to the bottom of the code editor:

4. Private Sub AddNodes(ByVal sender As Object, ByVal e As EventArgs)
5.     Dim dvCategory As System.Data.DataRowView
6.     Dim arrProducts() As System.Data.DataRow
7.     Dim currProduct As dsMaster.ProductsRow
8.     Dim arrOrders() As System.Data.DataRow
9.     Dim currOrder As dsMaster.OrderDatesRow
10.    Dim root As System.Windows.Forms.TreeNode
11.    With Me.tvProductOrders
12.        .BeginUpdate()
13.        .Nodes.Clear()
14.
15.        dvCategory = _
16.            Me.BindingContext(Me.dsMaster1, "Categories").Current
17.        arrProducts = _
               dvCategory.Row.GetChildRows("CategoriesProducts")
18.        For Each currProduct In arrProducts
19.            root = .Nodes.Add(currProduct.ProductName)
20.            arrOrders = currProduct.GetChildRows("ProductOrders")
21.
22.            For Each currOrder In arrOrders
23.                root.Nodes.Add(currOrder.OrderDate)
24.            Next
25.        Next currProduct
26.
27.        .EndUpdate()
28.    End With
    End Sub

29. Expand the Region labeled Windows Form Designer generated

code.

30. Add the following code below the call to UpdateDisplay in the New

sub:

31. AddHandler Me.BindingContext(Me.dsMaster1, _
32.     "Categories").PositionChanged, AddressOf _
33.     Me.AddNodes
    AddNodes(Me, New System.EventArgs())

The first line links the AddNodes procedure to the PositionChanged event so

that it will be called each time the Category changes. The second line calls

the procedure directly to set up the initial display.

34. Press F5 to run the application, and then click the TreeView button.

Visual Studio displays the TreeView form.

35. Verify that the TreeView is updated correctly by clicking the Next

button (“>”) and expanding nodes.

36. Close the TreeView form and the application.

37. Close the TreeView form designer and code editor.

Visual C# .NET

1. In the Solution Explorer, double-click TreeView.cs.

Visual Studio displays the form in the form designer.


2. Press F7 to display the code editor.

3. Add the following procedure to the bottom of the code editor:

4. private void AddNodes(object sender, System.EventArgs e)
5. {
6.     System.Data.DataRowView dvCategory;
7.     System.Data.DataRow[] arrProducts;
8.     System.Data.DataRow[] arrOrders;
9.     System.Windows.Forms.TreeNode root;
10.    System.Windows.Forms.TreeView tv;
11.
12.    tv = this.tvProductOrders;
13.
14.    tv.BeginUpdate();
15.    tv.Nodes.Clear();
16.
17.    dvCategory = (System.Data.DataRowView)
18.        this.BindingContext[this.dsMaster1, "Categories"].Current;
19.    arrProducts =
           dvCategory.Row.GetChildRows("CategoriesProducts");
20.    foreach (dsMaster.ProductsRow currProduct in arrProducts)
21.    {
22.        root = tv.Nodes.Add(currProduct.ProductName);
23.        arrOrders = currProduct.GetChildRows("ProductOrders");
24.        foreach (dsMaster.OrderDatesRow currOrder in arrOrders)
25.        {
26.            root.Nodes.Add(currOrder.OrderDate.ToString());
27.        }
28.    }
29.    tv.EndUpdate();
30. }

31. Add the following code below the call to UpdateDisplay in the

frmTreeView sub:

32. this.BindingContext[this.dsMaster1,
        "Categories"].PositionChanged +=
33.     new EventHandler(this.AddNodes);
34. System.EventArgs ea;
35. ea = new System.EventArgs();
    AddNodes(this, ea);

The first line links the AddNodes procedure to the PositionChanged event so

that it will be called each time the Category changes. The remaining lines call

the procedure directly to set up the initial display.

36. Press F5 to run the application, and then click the TreeView button.

Visual Studio displays the TreeView form.

37. Verify that the TreeView is updated correctly by clicking the Next

button (“>”) and expanding nodes.

38. Close the TreeView form and the application.

39. Close the TreeView form designer and code editor.

Finding Data

Finding a specific row in a DataTable is a common application task. Unfortunately, the

BindingContext object, which controls the data displayed by the controls on a form,

doesn't directly support a Find method. Instead, you must use either a DataView object

to find a row based on the current Sort key, or a DataTable object to find a row based

on more complex criteria.

Finding Sorted Rows

Using the DataView's Find method is straightforward, but it can be used only to find a

row based on the column (or columns) currently specified in the Sort property. If your

controls are bound to a DataView, you can reference the object directly. If you bound

the controls to a DataTable, you can use its DefaultView property to obtain a reference

without creating a new object.

Once you have a reference to a DataView, you can use the Find method, which returns

the index of the row matching the specified criteria or -1 if no matching row is found. The

index of the row in the DataView will correspond directly to the same row’s index in the

BindingContext object, so it’s a simple matter of setting the BindingContext.Position

property to the value that is returned.
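Boiled down, the pattern is just two statements. The following is a minimal C# sketch using this chapter's dsMaster1 DataSet; categoryID stands in for whatever key value you are searching for:

```csharp
// Find the position of the row whose Sort-key value matches categoryID.
System.Data.DataView dv = this.dsMaster1.Categories.DefaultView;
int idx = dv.Find(categoryID);      // -1 when no row matches

if (idx != -1)
{
    // The DataView index corresponds to the BindingContext position,
    // so assigning it makes the found row current on the form.
    this.BindingContext[this.dsMaster1, "Categories"].Position = idx;
}
```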


Find a Row Based on Its Sort Column

Visual Basic .NET

1. In the code editor for Master.vb, select btnFindCategory in the Control

Name combo box, and then select Click in the Event combo box.

Visual Studio adds an event handler to the code editor.

2. Add the following code to the event handler:

3. Dim fcForm As New frmFindCategory()
4. Dim dv As System.Data.DataView = Me.dsMaster1.Categories.DefaultView
5. Dim id As Integer
6. Dim idx As Integer
7.
8. If fcForm.ShowDialog() = DialogResult.OK Then
9.     If fcForm.GetID = 0 Then
10.    Else
11.        id = fcForm.GetID
12.        idx = dv.Find(id)
13.        If idx = -1 Then
14.            MessageBox.Show("Category " + id.ToString + _
15.                " not found", "Error")
16.        Else
17.            Me.BindingContext(Me.dsMaster1, _
18.                "Categories").Position = idx
19.        End If
20.    End If
21. End If
22. fcForm.Dispose()

After declaring some variables and displaying fcForm as a dialog box, the code

sets up an if … else statement to handle the two possible search criteria.

(We’ll complete the first section of the if statement in the following exercise.)

The variable id is set to the value of the GetID field on fcForm, and then the

code uses the Find method to locate the index of the row containing that field.

Find returns -1 if the row is not found, in which case the code displays an

error message. If the row is found, it is displayed in the Master form by setting

the BindingContext.Position property.

23. Press F5 to run the application, and click the Find Category button.


24. Type 3 in the ID field, and then click Find.

The application displays Category 3 on the Master form.

25. Close the application.

Visual C# .NET

1. In the form designer, double-click the btnFindCategory button on the

Master form.

Visual Studio adds an event handler to the code editor.

2. Add the following code to the event handler:

3. frmFindCategory fcForm = new frmFindCategory();
4. System.Data.DataView dv = this.dsMaster1.Categories.DefaultView;
5. int id;
6. int idx;
7.
8. if (fcForm.ShowDialog() == DialogResult.OK)
9.     if (fcForm.GetID == 0)
10.    {
11.    }
12.    else
13.    {
14.        id = fcForm.GetID;
15.        idx = dv.Find(id);
16.        if (idx == -1)
17.            MessageBox.Show("Category " + id.ToString() +
                   " not found", "Error");
18.        else
19.            this.BindingContext[this.dsMaster1,
20.                "Categories"].Position = idx;
21.    }
22. fcForm.Dispose();

After declaring some variables and displaying fcForm as a dialog box, the code

sets up an if … else statement to handle the two possible search criteria.

(We’ll complete the first section of the if statement in the following exercise.)

The variable id is set to the value of the GetID field on fcForm, and then the

code uses the Find method to locate the index of the row containing that field.

Find returns -1 if the row is not found, in which case the code displays an

error message. If the row is found, it is displayed in the Master form by setting

the BindingContext.Position property.

23. Press F5 to run the application, and click the Find Category button.

24. Type 3 in the ID field, and then click Find.

The application displays Category 3 on the Master form.

25. Close the application.

Finding Rows Based on Other Criteria

The DataView object’s Find method is easy to use but limited in scope. If you need to

find a row based on complex criteria, or on a single column other than the one on which

the data is sorted, you must use the DataTable’s Select method.

As we saw in Chapter 7, the Select method is easy to use, but positioning the

CurrencyManager to the correct row requires several steps. The process requires using


both the DataView and the DataTable object to perform the search, along with the

BindingContext object to display the results. In truth, the whole process is decidedly

awkward, but you’ll learn the steps by rote soon enough.

First you must execute the Select method with the required criteria against the

DataTable. Once the appropriate row is found, you obtain the Sort column value from the

array returned by the Select method and use that to perform a Find against the

DataView. Finally, you use the Position property of the BindingContext to display the

result.
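In outline, the three steps look like this. This is a C# sketch assuming the chapter's dsMaster1 DataSet, with the Categories DataView sorted on CategoryID; strCriteria is any valid filter expression:

```csharp
// 1. Search the DataTable with arbitrary criteria.
System.Data.DataRow[] found = this.dsMaster1.Categories.Select(strCriteria);

if (found.Length > 0)
{
    // 2. Read the Sort-column value from the first matching row.
    object sortKey = found[0]["CategoryID"];

    // 3. Translate it into a DataView index and reposition the form.
    int idx = this.dsMaster1.Categories.DefaultView.Find(sortKey);
    this.BindingContext[this.dsMaster1, "Categories"].Position = idx;
}
```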

Find a Row Based on an Unsorted Column

Visual Basic .NET

1. Add the following code after the line If fcForm.GetID = 0 in the

btnFindCategory_Click procedure we began in the previous

exercise:

2. Dim name As String

3. Dim dt As System.Data.DataTable = Me.dsMaster1.Categories

4. Dim dr() As System.Data.DataRow

5.

6. name = fcForm.GetName

7.

8. Try

9.     dr = dt.Select("CategoryName = '" & name & "'")
10.    id = CType(dr(0), dsMaster.CategoriesRow).CategoryID
11.    idx = dv.Find(id)
12.    Me.BindingContext(Me.dsMaster1, "Categories").Position = idx
13. Catch
14.    MessageBox.Show("Category " + name + " not found", "Error")
    End Try

This code uses the DataTable's Select method to find the specified category

name. Select returns an array of rows, so the second line uses the CType

function to convert the first row of the array—dr(0)—to a CategoriesRow

and sets id to the CategoryID. It then finds the CategoryID in the DataView

and positions the Master form to the row by setting the BindingContext.Position

property, just as in the previous exercise.

15. Press F5 to run the application, and then click the Find Category

button.

16. Type Condiments in the Name field, and then click Find.

The application displays the Condiments category in the Master form.


17. Close the application.

Visual C# .NET

1. Add the following code after the line If (fcForm.GetID == 0) in the

btnFindCategory_Click procedure we began in the previous

exercise:

2. string name;

3. System.Data.DataTable dt = this.dsMaster1.Categories;

4. dsMaster.CategoriesRow cr;

5. System.Data.DataRow[] dr;

6.

7. name = fcForm.GetName;

8.

9. try
10. {
11.     dr = dt.Select("CategoryName = '" + name + "'");
12.     cr = (dsMaster.CategoriesRow) dr[0];
13.     id = cr.CategoryID;
14.     idx = dv.Find(id);
15.     this.BindingContext[this.dsMaster1, "Categories"].Position = idx;
16. }
17. catch
18. {
19.     MessageBox.Show("Category " + name + " not found", "Error");
    }

This code uses the DataTable's Select method to find the specified category

name. Select returns an array of rows, so the code uses a cast to convert the

first row of the array—dr[0]—to a CategoriesRow, and sets id to the CategoryID.

It then finds the CategoryID in the DataView and positions the Master form to

the row by setting the BindingContext.Position property, just as in the

previous exercise.

20. Press F5 to run the application, and then click the Find Category

button.


21. Type Condiments in the Name field, and then click Find.

The application displays the Condiments category in the Master form.

22. Close the application.

Validating Data in Windows Forms

The .NET Framework supports a number of techniques for validating data entry prior to

submitting it to a data source. First, as we’ve already seen, is the use of controls that

constrain the data entry to appropriate values.

After the data has been entered, the .NET Framework exposes a series of events at both

the control and data level to allow you to trap and manage problems.

Data Change Events

Data validation is most often implemented at the data source level. This tends to be

more efficient because the validation will occur regardless of which control or controls

are used to change the data.

As we saw in Chapter 7, the DataTable object exposes six events that can be used for

data validation. In order of occurrence, they are:

§ ColumnChanging

§ ColumnChanged

§ RowChanging

§ RowChanged

§ RowDeleting

§ RowDeleted

Note If a row is being deleted, only the RowDeleting and RowDeleted

events occur.


If you are using a Typed DataSet, you can create separate event handlers for each

column in a DataTable. If you’re using an Untyped DataSet, a single event handler must

handle all the columns in a single DataRow. You can use the Column property of the

DataColumnChangeEventArgs parameter, which is passed to the event handler, to

determine which column is being changed.
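With an untyped DataSet, the single handler typically branches on the column name. The following is only a sketch; the column names are the ones used in this chapter's exercises, and the validation rules are placeholders:

```csharp
private void Categories_ColumnChanging(object sender,
    System.Data.DataColumnChangeEventArgs e)
{
    // e.Column identifies the column; e.ProposedValue holds the new value
    // before it is committed to the row.
    switch (e.Column.ColumnName)
    {
        case "CategoryName":
            if ((string)e.ProposedValue == "")
                throw new System.ArgumentException("CategoryName is required");
            break;
        case "Description":
            // validate other columns here
            break;
    }
}
```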

Respond to a ColumnChanging Event

Visual Basic .NET

1. Add the following procedure to the bottom of the code editor:

2. Private Sub Categories_ColumnChanging(ByVal sender As Object, _
3.     ByVal e As DataColumnChangeEventArgs)
4.     Dim str As String
5.
6.     str = "Column: " & e.Column.ColumnName.ToString
7.     str += vbCrLf + "New Value: " & e.ProposedValue
8.     MessageBox.Show(str, "Column Changing")
9. End Sub

10. Add the following event handler to the end of the New sub:

11. AddHandler dsMaster1.Categories.ColumnChanging, AddressOf _
    Me.Categories_ColumnChanging

12. Press F5 to run the application.

13. Change the Category Name to Beverages New, and then click the

Next button (“>”).

The application displays the column name and new value in a message box.

14. Close the message box, and then close the application.

15. Comment out the ColumnChanging event handler in the New sub.

Visual C# .NET

1. Add the following procedure to the class file:

2. private void Categories_ColumnChanging(object sender,
3.     DataColumnChangeEventArgs e)
4. {
5.     string str;
6.
7.     str = "Column: " + e.Column.ColumnName.ToString();
8.     str += "\nNew Value: " + e.ProposedValue;
9.     MessageBox.Show(str, "Column Changing");
   }

10. Add the following event handler to the end of the frmMaster sub:


11. this.dsMaster1.Categories.ColumnChanging += new
12.     DataColumnChangeEventHandler(this.Categories_ColumnChanging);

13. Press F5 to run the application.

14. Change the Category Name to Beverages New, and then click the

Next button (“>”).

The application displays the column name and new value in a message box.

15. Close the message box, and then close the application.

16. Comment out the event handler in the frmMaster sub.

The column change events are typically used for validating discrete values—for

example, if the value is within a specified range or has the correct format. For data

validation that relies on multiple column values, you can use the row change events.

Respond to a RowChanging Event

Visual Basic .NET

1. Add the following procedure to the code editor:

2. Private Sub Categories_RowChanging(ByVal sender As Object, _
3.     ByVal e As DataRowChangeEventArgs)
4.     Dim str As String
5.
6.     str = "Action: " & e.Action.ToString
7.     str += vbCrLf + "ID: " & e.Row("CategoryID")
8.     MessageBox.Show(str, "Row Changing")
9. End Sub

10. Add the following code to the end of the New sub:

11. AddHandler dsMaster1.Categories.RowChanging, AddressOf _
    Me.Categories_RowChanging

12. Press F5 to run the application.

13. Change the Category Name to New, and then click the Next

(“>”). Close the Column Changing message.

The application displays the Action and Category ID.


14. Close the message, and then close the application.

15. Comment out the RowChanging event handlers in the New sub.

Visual C# .NET

1. Add the following procedure to the code editor:

2. private void Categories_RowChanging(object sender,
3.     DataRowChangeEventArgs e)
4. {
5.     string str;
6.
7.     str = "Action: " + e.Action.ToString();
8.     str += "\nID: " + e.Row["CategoryID"];
9.     MessageBox.Show(str, "Row Changing");
   }

10. Add the following code to the end of the frmMaster sub:

11. this.dsMaster1.Categories.RowChanging += new
12.     DataRowChangeEventHandler(this.Categories_RowChanging);

13. Press F5 to run the application.

14. Change the Category Name to New, and then click the Next

(“>”). Close the Column Changing message.

The application displays the Action and Category ID.

15. Close the message, and then close the application.

16. Comment out the RowChanging event handler in the frmMaster

sub.


Control Validation Events

In addition to the DataTable events, data validation can also be triggered by individual

controls. Every control supports the following events, in order:

§ Enter

§ GotFocus

§ Leave

§ Validating

§ Validated

§ LostFocus

In addition, the CurrencyManager object supports the ItemChanged event, which is

triggered before a new row becomes current.

Respond to an ItemChanged Event

Visual Basic .NET

1. Add the following procedure to the code editor:

2. Private Sub Categories_ItemChanged(ByVal sender As Object, _
3.     ByVal e As ItemChangedEventArgs)
4.     Dim str As String
5.
6.     str = "Index into CurrencyManager List: " & e.Index.ToString
7.     MessageBox.Show(str, "Item Changed")
   End Sub

8. Add the following code to the end of the New sub:

9. AddHandler CType(Me.BindingContext(Me.dsMaster1, "Categories"), _
       CurrencyManager).ItemChanged, AddressOf Me.Categories_ItemChanged

10. Press F5 to run the application.

11. Delete the category description, and then click the Next button (“>”).

The application displays the index of the row that has been changed.

12. Close the message box, and then close the application.

13. Comment out the event handler in the New sub.

Visual C# .NET

1. Add the following procedure to the code editor:

2. private void Categories_ItemChanged(object sender,
3.     ItemChangedEventArgs e)
4. {
5.     string str;
6.
7.     str = "Index into CurrencyManager List: " + e.Index.ToString();
8.     MessageBox.Show(str, "Item Changed");
   }

9. Add the following lines to the end of the frmMaster sub:

10. CurrencyManager cm = (CurrencyManager)
11.     this.BindingContext[this.dsMaster1, "Categories"];
12. cm.ItemChanged += new
        ItemChangedEventHandler(this.Categories_ItemChanged);

13. Press F5 to run the application.

14. Delete the category description, and then click the Next button (“>”).

The application displays the index of the row that has been changed.

15. Close the message box, and then close the application.

16. Comment out the event handler in the New sub.

For purposes of data validation, the Validating and Validated events roughly correspond

to the ColumnChanging and ColumnChanged events, but they have the advantage of

occurring as soon as the user leaves the control, rather than when the BindingContext

object is repositioned.

Respond to a Validating Event

Visual Basic .NET

1. In the code editor, select tbCategoryName in the Control Name

combo box, and then select Validating in the Method combo box.

Visual Studio adds the event handler template to the code editor.

2. Add the following code to the procedure:

3. If Me.tbCategoryName.Text = "Cancel" Then
4.     MessageBox.Show("Change the Name from 'Cancel'", "Validating")
5.     e.Cancel = True
   End If

6. Press F5 to run the application.

7. Change the Category Name to Cancel, and then click the Next button

(“>”).

The application cancels the change and redisplays the original row.


8. Close the application.

Visual C# .NET

1. Add the following procedure to the class file:

2. private void Categories_Validating(object sender, CancelEventArgs e)
3. {
4.     if (this.tbCategoryName.Text == "Cancel")
5.     {
6.         MessageBox.Show("Change the Name from 'Cancel'",
7.             "Validating");
8.         e.Cancel = true;
9.     }
   }

10. Add the following lines to the frmMaster sub:

11. this.tbCategoryName.Validating +=

new CancelEventHandler(this.Categories_Validating);

12. Press F5 to run the application.

13. Change the Category Name to Cancel, and then click the Next

button (“>”).

The application cancels the change.

14. Close the application.


Using the ErrorProvider Component

In the previous exercises, we've used message boxes in response to data

validation errors. This is a common technique, but it's not a very good one from a

usability standpoint. Message boxes are disruptive, and after they are dismissed,

the error information they contained also disappears.

Fortunately, the .NET Framework provides a much better mechanism for displaying

errors to the user: the ErrorProvider component. The ErrorProvider, which can be bound

to either a specific control or a data source object, displays an error icon next to the

appropriate control. If the user places the mouse pointer over the icon, a ToolTip will

display the specified error message.

Use an ErrorProvider with a Form Control

Visual Basic .NET

1. In the code editor, select tbCategoryID in the Control Name combo

box, and then select Validating in the Method Name combo box.

Visual Studio adds the event handler template to the code editor.

2. Add the following code to the event handler:

3. If Me.tbCategoryID.Text = "Error" Then
4.     Me.epControl.SetError(Me.tbCategoryID, _
5.         "Please re-enter the CategoryID")
6.     e.Cancel = True
7. Else
8.     Me.epControl.SetError(Me.tbCategoryID, "")
   End If

9. Press F5 to run the application.

10. Change the CategoryID to Error, and then click the Next button

(“>”).

The application displays a blinking error icon after the CategoryID control.

11. Place the mouse pointer over the icon.

The application displays the ToolTip.


12. Close the application.

Visual C# .NET

1. Add the following procedure to the class module:

2. private void Categories_Error(object sender, CancelEventArgs e)
3. {
4.     if (this.tbCategoryID.Text == "Error")
5.     {
6.         this.epControl.SetError(this.tbCategoryID,
7.             "Please re-enter the CategoryID");
8.         e.Cancel = true;
9.     }
10.    else
11.    {
12.        this.epControl.SetError(this.tbCategoryID, "");
13.    }
   }

14. Add the following line to the end of the frmMaster sub:

15. this.tbCategoryID.Validating +=

new CancelEventHandler(this.Categories_Error);

16. Press F5 to run the application.

17. Change the CategoryID to Error, and then click the Next button

(“>”).

The application displays a blinking error icon after the CategoryID control.

18. Place the mouse pointer over the icon.

The application displays the ToolTip.


19. Close the application.

The previous exercise demonstrated the use of the ErrorProvider from within the

Validating event of a control. But the ErrorProvider component can also be bound to a

data source, and it can display errors for any column or row containing errors.

Binding an ErrorProvider to a data source object has the advantage of allowing multiple

errors to be displayed simultaneously—a significant improvement in system usability.

Use an ErrorProvider with a DataColumn

Visual Basic .NET

1. In the form designer, select the epDataSet ErrorProvider control.

2. In the Properties window, select the DataSource property, expand the

drop-down list, and then select dsMaster1.

3. Select the DataMember property, expand the drop-down list, and then

select Categories.

4. Double-click the btnDataSet button.

Visual Studio adds the event handler template to the code editor.

5. Add the following code to the event handler:

6. Me.dsMaster1.Categories.Rows(0).SetColumnError("Description", _
       "Error Created Here")

This code artificially creates an error condition for the Description column of

the first row in the Categories table.

7. Press F5 to run the application, and then click the DataSet Error

button.

Visual Studio displays an error icon after the Description text box.


8. Close the application.

Visual C# .NET

1. In the form designer, select the epDataSet ErrorProvider control.

2. In the Properties window, select the DataSource property, expand the

drop-down list, and then select dsMaster1.

3. Select the DataMember property, expand the drop-down list, and then

select Categories.

4. Double-click the btnDataSet button.

Visual Studio adds the event handler template to the code editor.

5. Add the following code to the event handler:

6. this.dsMaster1.Categories.Rows[0].SetColumnError("Description",
       "Error Created Here");

This code artificially creates an error condition for the Description column of

the first row in the Categories table.

7. Press F5 to run the application, and then click the DataSet Error

button.

Visual Studio displays an error icon after the Description text box.

8. Close the application.

Chapter 11 Quick Reference

To: Use the Format event
Do this: Create the event handler, changing the Value property of the
ConvertEventArgs parameter, and then bind it to the control's Format event.

To: Use the Parse event
Do this: Create the event handler, changing the Value property of the
ConvertEventArgs parameter, and then bind it to the control's Parse event.

To: Use the CheckBox control to display Boolean values in a DataTable
Do this: Bind the value of the control's Checked property.

To: Bind a ComboBox to a key value it doesn't display
Do this: Set the control's DisplayMember property to the column to be
displayed, and set the ValueMember property to the key value.

To: Create a nested ListBox
Do this: Set the ListBox's DisplayMember property to the entire hierarchy,
including the DataRelation:
    myListBox.DisplayMember = "tblParent.drRelation.tblChild.Column"

To: Display hierarchical data using the DataGrid control
Do this: In the form designer, use the DataGridTableStyle Collection Editor
(available from the TableStyles property in the Properties window) to add the
related tables to the DataGrid.

To: Display hierarchical data using the TreeView control
Do this: Use the DataRow's GetChildRows method to manually add the nodes to
the TreeView's Nodes collection:
    For Each mainRow In masterTable
        rootNode = myTreeView.Nodes.Add(mainRow.myColumn)
        childArray = mainRow.GetChildRows("myRelation")
        For Each childRow In childArray
            rootNode.Nodes.Add(childRow.myColumn)
        Next childRow
    Next mainRow

To: Find rows based on the Sort column
Do this: Use the DataView's Find method to return the position of the row:
    rowIndex = myDataView.Find(theKey)
    myBindingContext.Position = rowIndex

To: Find rows based on an unsorted column
Do this: Use the DataTable's Select method to return the row, and then use the
DataView's Find method to find its position:
    drFound = myTable.Select(strCriteria)
    rowSortKey = drFound(0).myColumn
    rowIndex = myDataView.Find(rowSortKey)
    myBindingContext.Position = rowIndex

To: Validate data at the DataTable level
Do this: Respond to one of the DataTable change events: ColumnChanging,
ColumnChanged, RowChanging, RowChanged, RowDeleting, or RowDeleted.

To: Validate data at the Control level
Do this: Respond to one of the Control validation events: Enter, GotFocus,
Leave, Validating, Validated, or LostFocus.

To: Use an ErrorProvider with a Form Control
Do this: Set the ErrorProvider's ContainerControl property to the control, and
then, if necessary, call the SetError method to display an error condition
from within the control's Validating event.

Chapter 12: Data-Binding in Web Forms

Overview

In this chapter, you’ll learn how to:

§ Simple-bind controls at design time

§ Simple-bind controls at run time

§ Display bound data on a page

§ Complex-bind controls at design time

§ Complex-bind controls at run time

§ Use the DataBinder object

§ Store a DataSet in the session state

§ Store a DataSet in the ViewState

§ Update a data source using a Command object

In the previous eleven chapters, we’ve examined the ADO.NET object model, using

examples in Windows forms. In this chapter, we’ll examine the way that Microsoft

ADO.NET interacts with Microsoft ASP.NET and Web forms.

Understanding Data-Binding in Web Forms

As part of the Microsoft .NET Framework, ADO.NET is independent of any application in

which it is deployed, whether it’s a Windows form, like the exercises in the previous

chapters, a Web form, or a middle-level business object. But the way that data is pushed

to and pulled from controls is a function of the control itself, not of ADO.NET, and the

Web form data-binding architecture is very different from anything we’ve seen so far.

The Web form data-binding architecture is based on two assumptions. The first

assumption is that the majority of data access is read-only—data is displayed to users,

but in most cases, it is not updated by them. The second assumption is that performance

and scalability, while not insignificant in the Microsoft Windows operating system, are of

critical importance when applications are deployed on the Internet.

To optimize performance for read-only data access, the .NET Framework Web form

data-binding architecture is also read-only—when you bind a control to a data source,

the data will only be pushed to the bound property; it will not be pulled back from the

control.

This doesn’t mean that it’s impossible, or even particularly difficult, to edit data by using

Web forms, but it has to be done manually. As a simple example, if you have a Windows

Form TextBox control bound to a column in a DataSet, and the user changes the value

of that TextBox, the new value will be automatically propagated to the DataSet by the

.NET Framework, and the Item, DataColumn, and DataRow change events will be

triggered.

If a TextBox control on a Web form is bound to a column in a DataSet, however, the user

must explicitly submit any changes to the server, and you must write the code to handle


the submission, both on the client and the server. After the changes reach the DataSet,

of course, the DataColumn and DataRow change events will still be triggered.

Most of this arises from the nature of the Internet itself. In a traditional Web programming

environment, a page is created, sent to the user’s browser, and then the user, the page,

and any information the page contains are forgotten. In other words, the Internet is, by

default, stateless—the state of a page is not maintained between round-trips to the

server.

ASP.NET, the part of the .NET Framework that supports Web development, supports a

number of mechanisms for maintaining state, where appropriate, on both the client and

server. We’ll examine some of these as they relate to data access later in this chapter.

In addition to being stateless, traditional Internet applications are also disconnected.

When working with older data object models, this can sometimes be a problem, but as

we’ve seen, ADO.NET itself uses a disconnected data model, so this poses no problem.

Data Sources

Like controls on Windows forms, Web form controls can be bound to any data source,

not only traditional database tables. Technically, to qualify as a Web form data source,

an object must implement the IEnumerable interface. Arrays, Collections,

DataReaders, DataSets, DataViews, and DataRows all implement the IEnumerable

interface, and any of them can be used as the data source for a Web form control.

Because the management of server resources and the resulting scalability

issues are critical in the Internet environment, the choice of data access

methods must be given careful consideration. In most cases, when data is read

into the page and then discarded, it’s better to use an ADO.NET DataReader

rather than a DataSet because a DataReader provides better performance and

conserves server memory. However, this isn’t always the case, and there are

situations in which using the DataSet is both easier and more efficient.

If, for example, you’re working with related data, the DataSet object, with its

support for DataRelations and its GetChildRows and GetParentRows methods,

is both easier to implement and more efficient because it requires fewer roundtrips

to the data source. Also, as we’ll see in Chapter 15, the DataSet provides

the mechanism for reading data from and writing data to an XML stream.

Finally, if the data will be accessed multiple times, as it is when you’re paging

through data, it can be more efficient to store a DataSet than to re-create it

each time. This isn’t always the case, however. In some situations, the memory

that is required to store a large DataSet outweighs the performance gains from

maintaining the data. Also, if the data being stored is at all volatile, you run the

risk of the stored data becoming out of sync with the primary data store.

Roadmap We’ll examine binding to DataRelations in Chapter 13.

There is one other major difference in the data-binding architectures of

Windows and Web forms: Web forms do not directly support data-binding to an

ADO.NET DataRelation object. As we saw in Chapter 11, binding to a

DataRelation provides a simple and efficient method for displaying

master/detail relationships. To perform the same function in a Web form, you

must use the DataBinder object. We’ll examine binding to DataRelations in

Chapter 13.

Binding Controls to an ADO.NET Data Source

Like controls on Windows forms, Web form controls support simple-binding virtually any

property to a single value in a data source and complex-binding control properties that


display multiple values. However, the binding mechanisms for Web forms are somewhat

different from those that we’ve seen and used with Windows forms.

Note In the Web form documentation, simple- and complex-binding are

referred to as single-value and multirecord binding.

Simple-Binding Control Properties

Web form controls can always be bound at run time. They can also be bound at design

time if the data source is available. (Because Web Forms applications tend to use Data

commands more often than DataSets, the data source is less often available at design

time.)

Unlike Windows forms, simple-bound Web form control properties don’t expose data-binding

properties. Instead, the value is explicitly retrieved and assigned to the property

at run time by using a data-binding expression.

In Microsoft Visual Studio .NET, the Properties window supports a tool for creating data-binding

expressions, or you can create them at run time. The run-time data-binding

expression is delimited by <%# and %>:

propName='<%# dataExpression %>'

The dataExpression can be any expression that resolves to a single data item—a

column of a DataRow, a property of another control on the page, or even an expression.

Note, however, that Web forms don’t support a BindingContext object or anything similar

to it, so there is no concept of a current row. You must specifically indicate which row of

a data source, such as a DataTable, will be displayed in the bound property. So, for

example, to refer to a DataColumn within a DataSet, you would need to use the following

syntax:

<%# myDataSet.myTable.DefaultView(0).myColumn %>

You can use a data-binding expression almost anywhere in a Web form page, as long as

the expression evaluates at run time to the correct data type. You can, of course, use

type-casting to coerce the value to the correct type. For example:

myTextbox.Text = <%# myDataSet.myTable.Rows.Count.ToString() %>
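The resolution step that a data-binding expression performs can be modeled in a few lines of any language: at DataBind time, each delimited expression is evaluated against the page’s namespace and the result is converted to a string. The following is only a toy sketch of that mechanism, not ASP.NET itself; the render_bindings helper and its context dictionary are illustrative inventions.

```python
import re

def render_bindings(template, context):
    """Replace each <%# expr %> in `template` with the str() of the
    expression evaluated against `context` -- a toy model of how a
    page resolves data-binding expressions when DataBind is called."""
    def resolve(match):
        expr = match.group(1).strip()
        # Evaluate against the supplied names only (no builtins).
        return str(eval(expr, {"__builtins__": {}}, context))
    return re.sub(r"<%#(.*?)%>", resolve, template)

rows = [{"CategoryName": "Beverages"}]
html = render_bindings(
    '<input value="<%# rows[0]["CategoryName"] %>">',
    {"rows": rows},
)
# html is now '<input value="Beverages">'
```

Note that, as in the real architecture, the substitution happens only when the bind step runs; the template itself carries no live connection to the data.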

Simple-Bind a Control Property at Design Time

1. Open the WebForms project from the Start page or the File menu.

2. In the Solution Explorer, double-click WebForm1.aspx.

Visual Studio displays the page in the form designer.

3. Select the tbCategoryName text box.

4. In the Properties window, select (DataBindings) and click the Ellipsis

button.


Visual Studio opens the DataBindings dialog box.

5. In the Simple Binding pane, expand

dsMaster1/Categories/DefaultView/DefaultView.[0], and select

CategoryName.

6. Click OK.

Visual Studio creates the binding.

Note You can examine the syntax of the data-binding attribute on the HTML

tab of the project. Find the tag that defines the tbCategoryName text

box.

If the data source isn’t available at design time, you can bind a control property at run

time. Although it’s possible to do this in the control tag, it’s much easier to do so by using

the DataBinding event that is raised when the DataBind method is called for the control.

Simple-Bind a Control Property at Run Time

Visual Basic .NET

1. Press F7 to display WebForm1.aspx.vb.

2. Select tbCategoryDescription in the Control Name combo box, and

then select DataBinding in the Method Name combo box.


Visual Studio adds the event handler.

3. Add the following code to the procedure:

Me.tbCategoryDescription.Text = Me.dsMaster1.Categories(0).Description

Visual C# .NET

1. Select tbCategoryDescription in the form designer.

2. In the Properties Window, click the Events button, and then doubleclick

DataBinding.

Visual Studio opens the code window and adds the event handler.

3. Add the following code to the procedure:

this.tbCategoryDescription.Text =

    this.dsMaster1.Categories[0].Description;

Just as with Windows forms, before you can display the data on your Web form, you

must explicitly load it from the data source by filling a DataAdapter or executing a Data

command. But Web forms require an additional step: You must push the data into the

control properties.

This is done by calling the DataBind method, which is implemented by all controls that

inherit from System.Web.UI.Control. A call to the DataBind method cascades to its child

controls. Thus, calling DataBind for the Page class will call the DataBind method for all

the controls contained by the Page class.

When the DataBind method is invoked for a control, either directly or by cascading, the

data expressions embedded in control tags will be resolved and the DataBinding events

for the controls will be triggered.

If you’re using a Web form to update data, you must be careful when you call the

DataBind method. Much like a DataSet’s AcceptChanges method, DataBind replaces the

values currently contained in the bound properties.
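The cascade described above can be pictured as a simple tree walk: binding a parent binds the parent first and then each of its children, raising every control’s DataBinding event along the way. This is a language-neutral sketch of the idea only; the Control class below is a stand-in, not the real System.Web.UI.Control.

```python
class Control:
    """Toy stand-in for a control tree whose DataBind call cascades."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.bound = False

    def on_data_binding(self):
        # In ASP.NET, a DataBinding handler would push a value into
        # the control's bound property at this point.
        self.bound = True

    def data_bind(self):
        self.on_data_binding()       # bind this control first...
        for child in self.children:  # ...then cascade to every child
            child.data_bind()

page = Control("Page", [Control("tbName"), Control("dgProducts")])
page.data_bind()                     # one call binds the whole tree
```

This is why a single call to the Page’s DataBind is usually enough: the walk reaches every bound control on the page.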

Display Bound Data in the Page

Visual Basic .NET

1. In the code editor, add the following code to the Page_Load event:

Me.daCategories.Fill(Me.dsMaster1.Categories)

Me.daProducts.Fill(Me.dsMaster1.Products)

Me.daOrders.Fill(Me.dsMaster1.Orders)

Me.DataBind()

This code fills the three tables in the DataSet, and then calls the DataBind method for the page, which will push the data into each of the bound controls that it contains.

2. Press F5 to run the application.

Visual Studio displays the page in the default browser.

3. Close the browser.


Visual C# .NET

1. In the code editor, add the following code to the Page_Load event:

this.daCategories.Fill(this.dsMaster1.Categories);

this.daProducts.Fill(this.dsMaster1.Products);

this.daOrders.Fill(this.dsMaster1.Orders);

this.DataBind();

2. Press F5 to run the application.

Visual Studio displays the page in the default browser.

3. Close the browser.

Complex-Binding Control Properties

The process of complex-binding Web form controls closely resembles the process for

complex-binding Windows form controls. Complex-bound controls in both environments

expose the DataSource and DataMember properties for defining the source of the data,

and Web form controls expose a DataValueField property that is equivalent to the

ValueMember property of a Windows form control.

The DataList and DataGrid controls also expose a DataKeyField property that stores the

primary key information within the data source. The DataKeyField, which populates a

DataKeyFields collection, allows you to store the primary key information without

necessarily displaying it in the control.

In addition, the ListBox, DropDownList, CheckBoxList, RadioButtonList, and HtmlSelect

controls expose a DataTextField property that defines the column to be displayed. The

DataTextField property is equivalent to the DisplayMember property of a Windows form

control.

Roadmap We’ll examine binding to DataRelations in Chapter 13.

If the DataSource property is being set to a DataSet and the DataMember property is

being set to a DataTable, you can simply set the properties directly. As we’ll see in

Chapter 13, it is also possible to bind to DataRelations, but the process is somewhat less

than straightforward.

Complex-Bind a Control at Design Time

1. Display the form designer.

2. Select the dgProducts DataGrid.

3. In the Properties window, expand the Data section (if necessary),

select the DataSource property, and then select dsMaster1 in the

drop-down list.

Note Clear the Events button if you’re working in C#.

4. Select the DataMember property, and then select Products.

5. Press F5 to run the application.


Visual Studio displays the page in the default browser, showing all the

products in the data grid.

6. Close the browser.

In this exercise, we’ll bind the lbOrders ListBox control in response to the SelectedIndexChanged event of the dgProducts DataGrid control. The SelectedIndexChanged event occurs when the user clicks one of the Orders buttons in the DataGrid because its CommandName property has been set to Select.

Complex-Bind a Control at Run Time

Visual Basic .NET

1. In the form designer, double-click the dgProducts DataGrid control.

Visual Studio adds a SelectedIndexChanged event handler to the code editor.

2. Add the following code to the procedure:

Me.dvOrders.Table = Me.dsMaster1.Orders
Me.dvOrders.RowFilter = "ProductID = " & _
    Me.dgProducts.SelectedItem.Cells(1).Text
Me.lbOrders.DataSource = Me.dvOrders
Me.lbOrders.DataTextField = "OrderDate"
Me.lbOrders.DataBind()

The code sets the RowFilter property of the dvOrders DataView to the ProductID of the row selected in the DataGrid. It then sets the DataSource and DataTextField properties of the ListBox, and then calls the DataBind method to push the data to the control.

3. Press F5 to run the application.

Visual Studio displays the page in the default browser.

4. Click the Orders button in one of the rows in the data grid.

The page displays the order dates in the list box. Note that the browser made a round-trip to the server to retrieve the data.

5. Close the browser.

Visual C# .NET

1. In the form designer, double-click the dgProducts DataGrid control.

Visual Studio adds a SelectedIndexChanged event handler to the code editor.

2. Add the following code to the procedure:

this.dvOrders.Table = this.dsMaster1.Orders;
this.dvOrders.RowFilter = "ProductID = " +
    this.dgProducts.SelectedItem.Cells[1].Text;
this.lbOrders.DataSource = this.dvOrders;
this.lbOrders.DataTextField = "OrderDate";
this.lbOrders.DataBind();

The code sets the RowFilter property of the dvOrders DataView to the ProductID of the row selected in the DataGrid. It then sets the DataSource and DataTextField properties of the ListBox, and then calls the DataBind method to push the data to the control.

3. Press F5 to run the application.

Visual Studio displays the page in the default browser.

4. Click the Orders button in one of the rows in the data grid.

The page displays the order dates in the list box. Note that the browser made a round-trip to the server to retrieve the data.

5. Close the browser.


Using the DataBinder Object

In addition to embedding data-binding expressions directly in the HTML stream, the .NET

Framework also exposes the DataBinder object, which evaluates data-binding

expressions and optionally formats the result as a string.

The DataBinder syntax is straightforward, and it can perform type conversion automatically, which greatly simplifies coding in some circumstances. This is particularly true when working with an ADO.NET object, where multiple casts are otherwise required and the

syntax is complex. However, the DataBinder object is late-bound, and like all late-bound

objects, it does incur a performance penalty, primarily due to its type conversion.

The DataBinder object is a static object, which means that it can be used without

instantiation. It can be called either from within the HTML for the page (surrounded by

<%# and %> brackets) or in code.

The DataBinder object exposes no properties or events, and only a single method, Eval.

The Eval method is overloaded to accept an optional format string, as shown in Table

12-1.

Table 12-1: Eval Methods

Method: Eval(dataSource, dataExpression)
Description: Returns the value of dataExpression in the dataSource at run time

Method: Eval(dataSource, dataExpression, formatStr)
Description: Returns the value of dataExpression in the dataSource at run time, and then formats it according to formatStr

The Eval method expects a data container object as the first parameter. When working

with ADO.NET objects, this is usually a DataSet, DataTable, or DataView object. It can

also be the Container object if the expression runs from within a List control in a

template, such as a DataList, DataGrid, or Repeater, in which case the first parameter

should always be Container.DataItem.

The second parameter of the Eval method is a string that represents the specific data

item to be returned. When working with ADO.NET objects, this parameter would typically

be the name of a DataColumn, but it can be any valid data expression.

The final, optional parameter is a format specifier identical in format to those used by the

String.Format method. If the format specifier is omitted, the Eval method returns an

object, which must be explicitly cast to the correct type.
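As a rough analogue, Eval’s behavior can be sketched as a late-bound lookup plus an optional formatting step. The eval_item helper below is an illustrative invention, not the DataBinder API; note how the unformatted call returns a raw object, mirroring the casting requirement described above, while the formatted call returns a string.

```python
def eval_item(container, expression, format_spec=None):
    """Late-bound lookup of `expression` in `container`, with an
    optional format string -- a loose analogue of DataBinder.Eval."""
    value = container[expression]         # resolved only at run time
    if format_spec is not None:
        return format_spec.format(value)  # formatted result is a string
    return value                          # raw object; caller must cast

row = {"CategoryID": 7, "UnitPrice": 19.0}
raw = eval_item(row, "CategoryID")            # 7, an int, not a string
text = eval_item(row, "UnitPrice", "{:.2f}")  # "19.00"
```

Because the lookup happens entirely at run time, a misspelled expression is not caught until the page actually binds, which is the flip side of the convenience that late binding offers.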

Use the DataBinder to Bind a Control Property

Visual Basic .NET

1. In the code editor, select tbCategoryID in the Control Name combo

box, and then select DataBinding in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following line to the procedure:

Me.tbCategoryID.Text = _
    DataBinder.Eval(Me.dsMaster1.Categories.DefaultView(0), _
    "CategoryID")

Notice that you must explicitly reference the first row of the DataTable’s DefaultView. This is because Web forms have no CurrencyManager to handle retrieving a current row from the DataSet.

3. Press F5 to run the application.

Visual Studio displays the page in the default browser with the CategoryID value.

4. Close the browser.

Visual C# .NET

1. In the form designer, select tbCategoryID, display the events in the

Properties Window, and double-click DataBinding.

Visual Studio adds the event handler to the code editor window.

2. Add the following line to the procedure:

this.tbCategoryID.Text =
    DataBinder.Eval(this.dsMaster1.Categories.DefaultView[0],
    "CategoryID").ToString();

Notice that you must explicitly reference the first row of the DataTable’s DefaultView. This is because Web forms have no CurrencyManager to handle retrieving a current row from the DataSet.

3. Press F5 to run the application.

Visual Studio displays the page in the default browser with the CategoryID value.

4. Close the browser.


Maintaining ADO.NET Object State

Because the Web form doesn’t maintain state between round-trips to the server, if you want to maintain a DataSet between the time that the page is first created and the time that the user sends it back with changes, you must do so explicitly.

You can maintain a DataSet on the server by storing it in either the Application or

Session state, or you can maintain it on the client by storing it in the Page class’s

ViewState. You can also store the DataSet in a hidden field on the page, although

because this is how the Page class implements ViewState, there’s rarely any advantage

to doing so.

Whether you maintain the data on the server or the page, you must always be aware of

concurrency issues. You’re saving round-trips to the data source, and the performance

gains can be significant, particularly if the data requires calculations. However, changes

to the data source won’t be reflected in the stored data. If the data is volatile, you must

re-create the ADO.NET objects each time in order to ensure that they reflect the most

recent changes.

Maintaining ADO.NET Objects on the Server

ASP.NET provides a number of mechanisms for maintaining state within an Internet

application. On the server side, the two easiest mechanisms to use are the Application

state and the Session state. Both state structures are dictionaries that store data as

name/value pairs. The value is stored and retrieved as an object, so you must cast it to

the correct type when you restore it.

The Application and Session states are used identically; the difference is scope. The

Application state is global to all pages and all users within the application. The Session

state is specific to a single browser session. (Please refer to the ASP.NET

documentation for additional information about Application and Session states.)
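The scope difference can be illustrated with two plain dictionaries: one shared by the whole application, and one created per browser session. This sketch is only an analogy for the ASP.NET state objects; the names and the get_session helper are invented for illustration.

```python
# One dictionary shared by every user of the application...
application = {}
# ...and one dictionary per browser session, keyed by session id.
sessions = {}

def get_session(session_id):
    """Return the state dictionary for one session, creating it on
    first use -- each browser session sees only its own entries."""
    return sessions.setdefault(session_id, {})

application["hits"] = 0
for sid in ("user-a", "user-b"):
    application["hits"] += 1          # visible to both sessions
    get_session(sid)["name"] = sid    # private to that session
```

The shared counter ends up reflecting both visitors, while each session’s entries remain invisible to the other, which is exactly the Application-versus-Session distinction in miniature.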

The IsPostBack property of the Page class, which is False the first time a Page is loaded

for a specific browser session and True thereafter, can be used in the Page_Load event

to control when the data is created and loaded.
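The fill-once-then-reuse pattern that IsPostBack enables is framework-neutral and can be sketched as follows; load_data and the plain dictionary standing in for Session state are illustrative stand-ins, not ASP.NET APIs.

```python
calls = {"fills": 0}

def load_data():
    # Stand-in for the DataAdapter.Fill calls against the database.
    calls["fills"] += 1
    return {"Categories": ["Beverages", "Condiments"]}

def page_load(session, is_post_back):
    """Fill on the first request; on post-backs, retrieve the stored
    object (and cast it, in a typed language) instead of refilling."""
    if is_post_back:
        data = session["dsMaster"]    # reuse the stored copy
    else:
        data = load_data()            # first request: hit the source
        session["dsMaster"] = data    # store for later post-backs
    return data

session = {}
first = page_load(session, is_post_back=False)   # fills once
second = page_load(session, is_post_back=True)   # no second fill
```

The expensive load runs exactly once per session, which is where the performance gain in the following exercise comes from.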

Store the DataSet in the Session State

Visual Basic .NET

1. Change the Page_Load event to store the DataSet in the Session

state:

If Me.IsPostBack Then
    Me.dsMaster1 = CType(Session("dsMaster"), DataSet)
Else
    Me.daCategories.Fill(Me.dsMaster1.Categories)
    Me.daProducts.Fill(Me.dsMaster1.Products)
    Me.daOrders.Fill(Me.dsMaster1.Orders)
    Session("dsMaster") = Me.dsMaster1
End If
Me.DataBind()

2. Press F5 to run the application.

Visual Studio displays the page in the default browser.

3. Click several items in the dgProducts data grid.

You might be able to notice a slight increase in performance.

4. Close the browser.

Visual C# .NET

1. Change the Page_Load event to store the DataSet in the Session

state:

if (this.IsPostBack)
    this.dsMaster1 = (dsMaster) Session["dsMaster"];
else
{
    this.daCategories.Fill(this.dsMaster1.Categories);
    this.daProducts.Fill(this.dsMaster1.Products);
    this.daOrders.Fill(this.dsMaster1.Orders);
    this.Session["dsMaster"] = this.dsMaster1;
}
this.DataBind();

2. Press F5 to run the application.

Visual Studio displays the page in the default browser.

3. Click several items in the dgProducts data grid.

You may be able to notice a slight increase in performance.

4. Close the browser.

Maintaining ADO.NET Objects on the Page

Storing data on the server can be convenient, but it does consume server resources

which, in turn, negatively impacts application scalability. An alternative is to store the

data on the page itself. This relieves the pressure on the server, but because the data is

passed as part of the data stream, it can increase the time required to load and post the page.


Data is stored on the page either in a custom hidden field or in the ViewState property of

a control. In theory, any control’s ViewState property can be used, but the Page class’s ViewState is the most common choice.
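Conceptually, page-level storage is just serialization into the HTML that travels with every request: the data is encoded into a hidden field on the way down and decoded on post-back. The sketch below uses JSON and base64 purely for illustration; ViewState itself uses a .NET-specific serializer, so this is an analogy, not the real format.

```python
import base64
import json

def to_hidden_field(data):
    """Serialize `data` for embedding in a hidden form field -- the
    same general mechanism the Page class uses for ViewState."""
    return base64.b64encode(json.dumps(data).encode()).decode()

def from_hidden_field(field):
    """Recover the object that the browser posted back."""
    return json.loads(base64.b64decode(field))

dataset = {"Categories": [{"CategoryID": 1, "CategoryName": "Beverages"}]}
field = to_hidden_field(dataset)     # travels down inside the page...
restored = from_hidden_field(field)  # ...and comes back on post-back
```

Because the encoded field rides along with every request and response, its size is added to the page twice per round-trip, which is why storing a large DataSet this way can slow loading and posting.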

Store the DataSet in the ViewState

Visual Basic .NET

1. Change the Page_Load event handler to store the data in the Page

class ViewState:

If Me.IsPostBack Then
    Me.dsMaster1 = CType(ViewState("dsMaster"), DataSet)
Else
    Me.daCategories.Fill(Me.dsMaster1.Categories)
    Me.daProducts.Fill(Me.dsMaster1.Products)
    Me.daOrders.Fill(Me.dsMaster1.Orders)
    ViewState("dsMaster") = Me.dsMaster1
End If
Me.DataBind()

2. Press F5 to run the application.

Visual Studio displays the page in the default browser.

3. Click several items in the dgProducts data grid.

4. Close the browser.

Visual C# .NET

1. Change the Page_Load event handler to store the data in the Page

class ViewState:

if (this.IsPostBack)
    this.dsMaster1 = (dsMaster) ViewState["dsMaster"];
else
{
    this.daCategories.Fill(this.dsMaster1.Categories);
    this.daProducts.Fill(this.dsMaster1.Products);
    this.daOrders.Fill(this.dsMaster1.Orders);
    this.ViewState["dsMaster"] = this.dsMaster1;
}
this.DataBind();

2. Press F5 to run the application.

Visual Studio displays the page in the default browser.

3. Click several items in the dgProducts data grid.

4. Close the browser.

Updating a Data Source from a Web Form

Remember that ADO.NET objects behave in exactly the same manner when they’re

instantiated in a Web form page as when they’re used in a Windows form. Because of

this, in theory, the process of updating a data source should be identical.

On one level, this is true. The actual update is performed by directly running a Data

command or by calling the Update method of a DataAdapter. But remember that a Web

form page doesn’t maintain its state and that its data-binding architecture is one-way.

Because the Web form data-binding architecture is one-way, you must explicitly push

the values returned by the page into the appropriate object. With a Windows form, after a

control property has been bound to a column in a DataTable, any changes that the user

makes to the value will be immediately and automatically reflected in the DataTable.

On a Web form, on the other hand, you must explicitly retrieve the value from the control

and update the ADO.NET object. You might, for example, use the control values to set

the parameters of a Data command or update a row in a DataTable.

Update a Data Source Using a Command Object

Visual Basic .NET

1. Change the Page_Load event code to read:

If Not IsPostBack Then
    Me.daCategories.Fill(Me.dsMaster1.Categories)
    Me.daProducts.Fill(Me.dsMaster1.Products)
    Me.daOrders.Fill(Me.dsMaster1.Orders)
    Me.DataBind()
End If

The IsPostBack property prevents the Fill and DataBind methods from being called when the page is posted back. Remember that DataBind replaces existing values.

2. In the code editor, select btnCommand in the Control Name combo box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

3. Add the following code to the event handler:

Dim cmdUpdate As System.Data.OleDb.OleDbCommand
cmdUpdate = Me.daCategories.UpdateCommand

With cmdUpdate
    .Parameters(0).Value = Me.tbCategoryName.Text
    .Parameters(1).Value = Me.tbCategoryDescription.Text
    .Parameters(2).Value = Me.tbCategoryID.Text
End With

Me.cnNorthwind.Open()
cmdUpdate.ExecuteNonQuery()
Me.cnNorthwind.Close()

The code uses the UpdateCommand of the daCategories DataAdapter to perform the update. (This is a shortcut that wouldn’t ordinarily be available.) The three parameters are set to the values of the relevant fields on the page, and then the Connection is opened, the Command is executed, and the Connection is closed.

4. Press F5 to run the application.

Visual Studio displays the page in the default browser.

5. Change the Category Name to Categories New.

6. Click Command.

The page updates the database.

7. Close the browser.

Visual C# .NET

1. Change the Page_Load event code to read:

if (IsPostBack == false)
{
    this.daCategories.Fill(this.dsMaster1.Categories);
    this.daProducts.Fill(this.dsMaster1.Products);
    this.daOrders.Fill(this.dsMaster1.Orders);
    this.DataBind();
}

The IsPostBack property prevents the Fill and DataBind methods from being called when the page is posted back. Remember that DataBind replaces existing values.

2. In the form designer, double-click btnCommand.

Visual Studio adds the event handler to the code.

3. Add the following code to the event handler:

System.Data.OleDb.OleDbCommand cmdUpdate;
cmdUpdate = this.daCategories.UpdateCommand;
cmdUpdate.Parameters[0].Value = this.tbCategoryName.Text;
cmdUpdate.Parameters[1].Value = this.tbCategoryDescription.Text;
cmdUpdate.Parameters[2].Value = this.tbCategoryID.Text;

this.cnNorthwind.Open();
cmdUpdate.ExecuteNonQuery();
this.cnNorthwind.Close();

The code uses the UpdateCommand of the daCategories DataAdapter to perform the update. (This is a shortcut that wouldn’t ordinarily be available.) The three parameters are set to the values of the relevant fields on the page, and then the Connection is opened, the Command is executed, and the Connection is closed.

4. Press F5 to run the application.

Visual Studio displays the page in the default browser.

5. Change the Category Name to Categories New.

6. Click Command.

The page updates the database.

7. Close the browser.

Chapter 12 Quick Reference

To: Simple-bind a control at design time
Do this: Use the dialog displayed when you click the Ellipsis button in the DataBindings property in the Properties Window

To: Simple-bind a control at run time
Do this: Push the data into the control in the control’s DataBinding event:

myControl.Text = myTable[0].myColumn

To: Display bound data on a page
Do this: Call the DataBind method for the Page, or individual controls:

Me.DataBind()

To: Complex-bind controls at design time
Do this: Set the DataSource and DataMember properties in the Properties Window

To: Complex-bind controls at run time
Do this: Set the DataSource, DataMember and, if applicable, the DataTextField properties of the control, and call its DataBind method

To: Use the DataBinder object
Do this: Call its Eval method, passing in the container and column values:

myControl.Text = DataBinder.Eval(myTable[0], "myColumn")

To: Store data in the Session state
Do this: Set or retrieve the DataSet based on the IsPostBack property:

If Me.IsPostBack Then
    myTable = CType(Session("myTable"), DataTable)
Else
    myDA.Fill(myTable)
    Session("myTable") = myTable
End If

To: Store data in the ViewState
Do this: Set or retrieve the DataSet based on the IsPostBack property:

If Me.IsPostBack Then
    myTable = CType(ViewState("myTable"), DataTable)
Else
    myDA.Fill(myTable)
    ViewState("myTable") = myTable
End If

Chapter 13: Using ADO.NET in Web Forms

Overview

In this chapter, you’ll learn how to:

§ Display data in a DataGrid control

§ Implement sorting in a DataGrid control

§ Display data in a DataList control

§ Display a DataList control as flowed text

§ Implement paging in a DataGrid control

§ Implement manual navigation in a Web form

§ Use validation controls to control user entry

In the previous chapter, we examined the basic data-binding architecture for Web forms.

In this chapter, we’ll examine a few common data-binding tasks in more detail.

Using Template-Based Web Controls

Microsoft ASP.NET Web Forms expose two controls that are specifically designed to

display data: the DataGrid and DataList. Both controls display the rows of a data source,

but vary in their capabilities.

Like its Windows forms equivalent, the DataGrid control displays data in a tabular format.

It provides intrinsic support for in-place editing and paging data, but it has relatively

limited formatting capabilities. The DataList control also provides intrinsic support for in-place editing, and allows for more flexible formatting.


The Microsoft .NET Framework also supports a Repeater control that allows almost

unlimited formatting capability, but it has limited support in the Design View of the Page

Designer—the majority of the formatting must be done directly in the HTML View of the

Page Designer.

All three of these controls support templates, which are sets of controls that define the

content of each section of the control. (A template is not the same as a style, which

defines appearance, rather than content.) The template sections that are available, as

well as the precise behavior of each section, differ between controls.

The DataGrid control, for example, doesn’t support an AlternatingItemTemplate, and its

ItemTemplates define the contents of a column, while the ItemTemplate for a DataList

defines the contents of a row. We’ll examine the specific templates supported by each

control later in this chapter.

All three template-based controls can contain buttons that raise events on the server. As

we’ll see, the DataGrid and DataList controls have intrinsic support for in-place editing,

and all three controls also support user-defined buttons. When a user clicks a user-defined

button, an ItemCommand event is sent to the control that contains the template.

The ItemCommand’s event argument parameter exposes the properties required to

determine which button and which item within the control triggered the event. The three

controls expose different classes of event arguments, but all three expose the same

properties, as shown in Table 13-1.

Table 13-1: ItemCommand Event Arguments

Property           Description
CommandArgument    String used as an argument for the command
CommandName        String used to determine the command to be performed
CommandSource      The button that generated the event
Item               The selected item in the containing control

The CommandArgument and CommandName properties are defined when the button is

added to the control. The CommandSource property refers to the button itself, while the

Item is the selected row in the control.
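As a rough sketch of how these properties are typically used together (the dgCategories grid name and the "Feature" CommandName are assumptions for illustration, not part of this chapter’s sample project):

```C#
// Hypothetical ItemCommand handler for a DataGrid named dgCategories.
// e.CommandName identifies which button was clicked; e.Item identifies
// the row that contains it.
private void dgCategories_ItemCommand(object source,
    System.Web.UI.WebControls.DataGridCommandEventArgs e)
{
    if (e.CommandName == "Feature")
    {
        // Recover the key of the clicked row via the DataKeys collection.
        string key = dgCategories.DataKeys[e.Item.ItemIndex].ToString();
        // ... act on the row identified by key ...
    }
}
```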

Using the DataGrid Control

As with Windows forms, the DataGrid control is bound to a data source by using the

DataSource and DataMember properties. One row will be displayed in the DataGrid for

every row in the data source. By default, a column will be displayed for each column in

the data source, but as we’ll see, this can be configured through the Property Builder.


In addition to the DataSource and DataMember properties, the DataGrid control exposes

a DataKeyField, which is roughly equivalent to the ValueMember property of the

Windows form version and can be set to the name of a column in the data source that

uniquely identifies each row. The column specified as the DataKeyField doesn’t need to

be displayed in the DataGrid. Note, however, that the DataKeyField doesn’t support

multicolumn keys.
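These same properties can also be set in code rather than in the Property Builder. A minimal code-behind sketch (the dgCategories and dvCategories names follow this chapter’s sample; treat the exact wiring as illustrative):

```C#
// Illustrative only: bind a DataGrid to a DataView and render its rows.
private void BindGrid()
{
    this.dgCategories.DataSource = this.dvCategories;
    // DataKeyField identifies each row uniquely; it need not be
    // displayed, and multi-column keys are not supported.
    this.dgCategories.DataKeyField = "CategoryID";
    this.dgCategories.DataBind();
}
```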

Add a DataGrid to a Web Form

1. Open the UsingWebForms project from the Start page or the File

menu.

2. In the Solution Explorer, double-click the DataGrid.aspx file.

Microsoft Visual Studio .NET opens the page in the form designer.

3. Select the DataGrid, and then click Property Builder in the bottom

pane of the Properties window.

Visual Studio displays the dgCategories Property Builder.

4. Set dvCategories as the DataSource and CategoryID as the

DataKeyField.


The columns displayed in the DataGrid are defined on the Columns tab of the Property

Builder. Several types of columns are available, as shown in Table 13-2.

Table 13-2: DataGrid Column Types

Column Type             Description
Bound                   A column from the data source
Button                  A button with custom functionality
Select                  An intrinsic button that allows a row to be selected
Edit, Update, Cancel    Intrinsic buttons that support in-place editing
Delete                  An intrinsic button that allows a row to be deleted
Hyperlink               Displays the data as a hyperlink
Template                Custom combinations of controls, which may be data-bound


A Bound column displays a column from the data source. You can determine whether

the column is visible and whether it is read-only in the Property Builder. The Property

Builder also allows you to specify a data formatting expression to control the way the

data is displayed.

A Button column is a user-defined control. You specify fixed text for the button or bind

the text to a column in the data source by setting its TextField property.

In addition to the generic Button column, the DataGrid exposes a set of intrinsic buttons

to support in-place editing: Edit, Update, and Cancel (which work as a set), Select, and

Delete. As we’ll see, these intrinsic buttons trigger custom server-side events rather than

the generic ItemCommand. The Select and Delete buttons can be data-bound by setting

their TextField properties.

A Hyperlink column renders its text as an HTML hyperlink (an <a> tag), allowing the user to navigate to

a different page by selecting a value in the column.

Finally, the Template column allows a fine degree of formatting control by using the

Template editor. Any of the other column types can also be converted to a Template column

by clicking the Convert This Column Into A Template Column link in the Property Builder. We’ll examine the use of Template

columns later in this chapter.

Add Data-Bound Columns to a DataGrid

1. Select Columns in the left pane of the Property Builder.

Visual Studio displays the Columns tab.

2. Clear the Create Columns Automatically At Run Time check box.

3. In the Available Columns list, expand the Button column node, choose

the Select Column type, and then click the Add button (“>”) to move it

to the Selected Columns list.


4. Delete the Text property, and then set the TextField property to

CategoryID.

5. In the Available Columns list, expand the Data Fields node (if

necessary), and then move CategoryName and Description to the

Selected Columns list.


6. Click OK.

Visual Studio configures the DataGrid columns.

7. Press F5.

Visual Studio displays the page in the default browser.

8. Close the browser.

Unlike the other two template-based controls, the DataGrid control doesn’t require you to

specify the contents of each template. Except for columns that are explicitly declared to

be Template columns, the general formatting of the DataGrid controls the contents and

the layout of each section. You can convert any column to a Template column by

clicking the Convert This Column Into A Template Column link in the Property Builder.

Template columns in the DataGrid expose the following sections:

§ HeaderTemplate

§ FooterTemplate

§ ItemTemplate

§ AlternatingItemTemplate

§ EditItemTemplate

§ Pager

The HeaderTemplate and FooterTemplate sections define the layout of the fixed top and

bottom sections of the DataGrid. The ItemTemplate and AlternatingItemTemplate

sections define the controls used to display values, while the EditItemTemplate section

defines the controls that are used to edit the values. The Pager section is used for

automatic data paging, which we’ll discuss later in this chapter.
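In HTML view, a Template column of the kind built in the following exercise might look roughly like this (the control IDs are illustrative, and the sketch assumes Current is a Boolean column):

```aspx
<asp:TemplateColumn HeaderText="Current">
  <ItemTemplate>
    <!-- Read-only display of the bound Boolean value -->
    <asp:CheckBox id="chkCurrent" runat="server" Enabled="false"
      Checked='<%# DataBinder.Eval(Container.DataItem, "Current") %>' />
  </ItemTemplate>
  <EditItemTemplate>
    <!-- Editable version shown when the row is in edit mode -->
    <asp:CheckBox id="chkCurrentEdit" runat="server"
      Checked='<%# DataBinder.Eval(Container.DataItem, "Current") %>' />
  </EditItemTemplate>
</asp:TemplateColumn>
```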

Add a Template Column to the DataGrid

1. Select the DataGrid in the form designer, and then click Property

Builder in the bottom pane of the Properties window.

Visual Studio displays the Property Builder.

2. Select Columns in the left pane of the Property Builder.

Visual Studio displays the Columns tab.

3. In the Available Columns list, expand the Data Fields node (if

necessary), and then add Current to the Selected Columns list. Use

the up and down arrows to position the Current column between the

Button column and CategoryName.

4. Click the link labeled Convert This Column Into a Template Column.

Visual Studio displays the Template column properties.


5. Click OK.

Visual Studio adds the column to the DataGrid in the form designer.

6. Right-click the DataGrid in the form designer. On the context menu,

choose Edit Template, and then on the submenu, select Columns[1]—

Current.

Visual Studio displays the Template editor.


7. Delete the label in the ItemTemplate section, and then drag a

CheckBox control from the Web Forms tab of the Toolbox onto the

ItemTemplate section.

8. Use the same procedure to replace the TextBox control in the EditItem

section with a CheckBox control.

9. Right-click the Template editor, and then choose End Template

Editing.

Visual Studio displays the column as a CheckBox in the form designer.

10. Press F5.

Visual Studio displays the page in the default browser.


11. Close the browser.

In addition to the ItemCommand event, which is raised by custom buttons (columns of

Button type), the DataGrid also exposes the events shown in Table 13-3.

Table 13-3: DataGrid Events

Event               Description
ItemCreated         Occurs when an item in the DataGrid is first created
ItemDataBound       Occurs after the item is bound to a data value
EditCommand         Occurs when the user clicks the intrinsic Edit button
DeleteCommand       Occurs when the user clicks the intrinsic Delete button
UpdateCommand       Occurs when the user clicks the intrinsic Update button
CancelCommand       Occurs when the user clicks the intrinsic Cancel button
SortCommand         Occurs when a column is sorted
PageIndexChanged    Occurs when a page index item is clicked

The ItemCreated and ItemDataBound events occur during the initial layout of the page.

They’re typically used to format data or other elements on the page. The Edit, Delete,

Update, and Cancel commands are triggered by the intrinsic in-place editing buttons.

The SortCommand event occurs when the DataGrid is set to allow sorting and the user

clicks a column heading in the DataGrid. Finally, the PageIndexChanged event occurs as

part of the automatic paging of the DataGrid. We’ll discuss this event in detail later in this

chapter.

Note The use of the intrinsic in-place editing commands is

straightforward and well-documented in the Visual Studio online

Help. We won’t be discussing them in any detail here.

Implement Sorting in a DataGrid

Visual Basic .NET

1. Select the DataGrid in the form designer, and then click Property

Builder in the bottom pane of the Properties window.

Visual Studio displays the Property Builder.

2. On the General tab, select the Allow Sorting check box.

3. Click OK.

Visual Studio displays the column headings as link buttons.


4. Press F7 to open the code editor for the page.

5. Select dgCategories in the Control Name combo box, and then select

SortCommand in the Method Name combo box.

Visual Studio adds the event handler to the code.

6. Add the following lines to the event handler:

     Me.dvCategories.Sort = e.SortExpression
     DataBind()

9. Press F5 to run the application.

Visual Studio displays the page in the default browser.

10. Click the Description column heading.

The page is displayed with the DataGrid sorted by Description.

11. Close the browser.

12. Close the code editor and the form designer.

Visual C# .NET

1. Select the DataGrid in the form designer, and then click Property

Builder in the bottom pane of the Properties window.

Visual Studio displays the Property Builder.

2. On the General tab, select the Allow Sorting check box.


3. Click OK.

Visual Studio displays the column headings as link buttons.

4. Display the DataGrid events in the Properties window, and double-click

the SortCommand event.

Visual Studio opens the code editor window and adds the event handler to the

code.

5. Add the following lines to the event handler:

     this.dvCategories.Sort = e.SortExpression;
     DataBind();

7. Press F5 to run the application.

Visual Studio displays the page in the default browser.

8. Click the Description column heading.

The page is displayed with the DataGrid sorted by Description.


9. Close the browser.

10. Close the code editor and the form designer.

Using the DataList Control

As we’ve seen, the DataGrid has a default structure. You need to use templates only

where your application requires advanced formatting. The DataList doesn’t assume any

structure and requires that you specify at least the ItemTemplate section before it can

display any data.

The DataList control is bound in the same way as the DataGrid control: by setting the

DataSource property, the DataMember property (if necessary), and, optionally, the

DataKeyField property.

The DataList control supports the following templates:

§ HeaderTemplate

§ FooterTemplate

§ ItemTemplate

§ AlternatingItemTemplate

§ SeparatorTemplate

§ SelectedItemTemplate

§ EditItemTemplate

The HeaderTemplate and FooterTemplate are identical to the corresponding templates

in the DataGrid. Unlike the DataGrid’s templates, the four item templates correspond not

to a column but to an entire row in the data source. The SeparatorTemplate

is used when the contents of the DataList are displayed as flowed text. We’ll examine

flowed text later in this chapter.

Add a DataList to a Web Form

1. In the Solution Explorer, right-click DataList.aspx, and choose Set as

Start Page.

2. Double-click the file.

Visual Studio displays the Web form in the form designer.


3. Drag a DataList control from the Web Form tab of the Toolbox onto the

form designer.

Visual Studio adds a placeholder for the DataList control.

4. In the Properties window, set the DataSource property of the DataList

to dsCategories1, and then set its DataMember property to

Categories.

5. Right-click the DataList in the form designer. On the context menu,

select Edit Template, and then on the submenu, select Item

Templates.

Visual Studio displays the Template editor.


6. Drag a Label control from the Toolbox onto the ItemTemplate section

of the Template editor.

7. In the Properties Window, select the (DataBindings) property and click

the Ellipsis button.

Visual Studio opens the DataBindings dialog box.

8. Expand the Container node and the DataItem node, and then select

CategoryName.


9. Click OK.

10. Right-click the DataList control, and then on the context menu, select

End Template Editing.

Visual Studio displays the bound item in the DataList placeholder.

11. Press F5.

Visual Studio displays the page in the default browser.

12. Close the browser.


The DataList control doesn’t presuppose a table layout, although that is the default

layout. There are two options for the layout of the data in the DataList, controlled

by the RepeatLayout property. If the RepeatLayout property is set to Table, the data

items are displayed as an HTML table. If the RepeatLayout property is set to Flow, the

items are included in-line as part of the document’s regular flow of text.

If the DataList values are displayed as a table, the RepeatDirection property controls the

way in which the table will be filled. A value of Vertical fills the table cells from top to

bottom, like a newspaper column, while setting the RepeatDirection property to

Horizontal fills the cells from left to right, like a calendar. The actual number of columns

is determined by the RepeatColumns property.
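In HTML view, the layout settings described above correspond roughly to the following markup (the control name and bound column are illustrative):

```aspx
<asp:DataList id="DataList1" runat="server"
  RepeatLayout="Flow" RepeatDirection="Vertical" RepeatColumns="3">
  <ItemTemplate>
    <!-- One item per row of the data source -->
    <%# DataBinder.Eval(Container.DataItem, "CategoryName") %>
  </ItemTemplate>
  <!-- Rendered between items when the layout is flowed -->
  <SeparatorTemplate>, </SeparatorTemplate>
</asp:DataList>
```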

Display a DataList as Flowed Text

1. Select the DataList control in the form designer.

2. In the Properties window, set the RepeatLayout property to Flow, set

the RepeatColumns property to 3, and then set the RepeatDirection

property to Vertical.

3. Right-click the DataList in the form designer, select Edit Template on

the context menu, and then on the submenu, select Separator

Template.

Visual Studio displays the Template editor.

4. Add a comma and a space to the template.

5. Right-click the Template editor, and then on the context menu, select

End Template Editing.

Visual Studio displays the data items separated by the comma and a space.

6. Increase the width of the control to about the width of a browser page.

7. Press F5.


Visual Studio displays the page in the default browser.

8. Close the browser.

9. Close the form designer.

Moving Through Data

Whenever performance and scalability are issues, it’s important to limit the amount of

data displayed on a single page. For usability reasons, you should always limit the

amount of data that is displayed, no matter what the environment—users don’t

appreciate having to wade through masses of data to find the single bit of information

they require.

One common technique in the Internet environment for limiting the amount of data on a

single Web page is to display only a fixed number of rows and allow the user to move

forward and backward through the DataSet. This technique is usually referred to as

paging.

The Web form DataGrid control provides intrinsic support for paging by using the three

methods shown in Table 13-4.

Table 13-4: DataGrid Paging Methods

Method                              Description
Default Paging/Default Navigation   Next and Previous buttons or page numbers are displayed as part of the DataGrid; the CurrentPageIndex property is updated by the DataGrid
Default Paging/Custom Navigation    Navigation buttons are outside the grid, and the CurrentPageIndex property is set manually
Custom Paging                       Navigation buttons are outside the grid, and all paging is handled within application code

The simplest method is, of course, to use the DataGrid control’s Default Paging/Default

Navigation method, but the custom options are only slightly more difficult to implement.

DataGrid paging is controlled by two of its properties. The PageSize property, which

defaults to 10, determines the number of items to display. The CurrentPageIndex

property determines the set of rows that will be displayed when the page is rendered.

Though it doesn’t control paging, the read-only property PageCount returns the total

number of pages of data in the data source.

When the user selects either one of the default navigation buttons, ASP.NET raises a

PageIndexChanged event. The event arguments parameter of this event includes a

NewPageIndex property. Rendering the new page in the DataGrid is as simple as setting

the DataGrid control’s CurrentPageIndex property to the value of NewPageIndex and

calling the DataBind method.

Implement Default Paging in a DataGrid Control

Visual Basic .NET

1. In the Solution Explorer, right-click DataGrid.aspx, and then on the

context menu, select Set as Start Page.

2. In the Solution Explorer, double-click DataGrid.aspx.

Visual Studio displays the page in the form designer.

3. Select the DataGrid, and then click Property Builder in the bottom

pane of the Properties window.

Visual Studio displays the Property Builder.

4. Select Paging in the left pane of the Property Builder.

Visual Studio displays the Paging properties.

5. Select the Allow Paging check box, and then set the Page Size

property to 5 rows.


6. Click OK.

7. Press F7 to display the code editor.

8. Select dgCategories in the Control Name combo box, and then select

PageIndexChanged in the Method Name combo box.

Visual Studio adds the event handler to the code.

9. Add the following lines to the procedure:

     Me.dgCategories.CurrentPageIndex = e.NewPageIndex
     DataBind()

11. Press F5.

Visual Studio displays the page in the default browser.

12. Click the Next (“>”) button.

Visual Studio displays the remaining 3 rows in the DataGrid.

13. Close the browser.

14. Close the code editor and the form designer.

Visual C# .NET

1. In the Solution Explorer, right-click DataGrid.aspx, and then on the

context menu, select Set as Start Page.

2. In the Solution Explorer, double-click DataGrid.aspx.

Visual Studio displays the page in the form designer.

3. Select the DataGrid, and then click Property Builder in the bottom

pane of the Properties window.

Visual Studio displays the Property Builder.


4. Select Paging in the left pane of the Property Builder.

Visual Studio displays the Paging properties.

5. Select the Allow Paging check box, and then set the Page Size

property to 5 rows.

6. Click OK.

7. Display the DataGrid events in the Properties window, and double-click

the PageIndexChanged event.

Visual Studio opens the code editor window and adds the event handler to the

code.

8. Add the following lines to the procedure:

     this.dgCategories.CurrentPageIndex = e.NewPageIndex;
     DataBind();

10. Press F5.

Visual Studio displays the page in the default browser.

11. Click the Next (“>”) button.

Visual Studio displays the remaining 3 rows in the DataGrid.


12. Close the browser.

13. Close the code editor and the form designer.

Web forms don’t implement a BindingContext property that maintains a reference to a

current position in a data source. It’s easy enough, however, to maintain a Position

property, stored either in the Session state or in the Page object’s ViewState, and handle

the data manipulation manually.

You might use this technique, for example, if you want to display only a single row on the

Web page, but allow the user to navigate through all the rows by using the same

navigation buttons that are typically available on a Windows form.

Implement Manual Navigation on a Web Form

Visual Basic .NET

1. In the Solution Explorer, right-click Position.aspx, and then select Set

as Start Page.

2. Double-click the file.

Visual Studio displays the page in the form designer.

3. Press F7 to display the code editor.

4. Add the following global declaration to the top of the class:

     Public Position As Integer

5. Add the following lines to the Page_Load Sub:

     If Me.IsPostBack Then
         Me.dsCategories1 = CType(ViewState("dsCategories"), DataSet)
         Me.Position = CType(ViewState("Position"), Integer)
     Else
         Me.daCategories.Fill(Me.dsCategories1.Categories)
         ViewState("dsCategories") = Me.dsCategories1
         ViewState("Position") = 0
     End If
     Me.DataBind()

This code is very similar to the procedure we used in Chapter 12 to store the

DataSet with the page, but we’re also storing the value of the new variable,

Position.

15. Select (Base Class Events) in the Control Name combo box, and

then select DataBinding in the Method Name combo box.

Visual Studio adds the event handler to the code.

16. Add the following lines to the procedure:

     Dim dr As DataRow

     dr = Me.dsCategories1.Categories.DefaultView(Position).Row
     Me.txtCatID.Text = DataBinder.Eval(dr, "CategoryID")
     Me.txtName.Text = DataBinder.Eval(dr, "CategoryName")
     Me.txtDescription.Text = DataBinder.Eval(dr, "Description")

The first two lines declare a local variable, dr, and set it to the row of the

Categories table specified by the Position variable. The next three bind the

value of columns in the row to the Text properties of the appropriate controls.

22. Select btnNext in the Control Name combo box, and then select

Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

23. Add the following lines to the procedure:

     If Me.Position < Me.dsCategories1.Categories.Count - 1 Then
         Me.Position += 1
         ViewState("Position") = Me.Position
         DataBind()
     End If

The code checks that the current value of Position is less than the index of the last

row in the Categories table, and if so, it increments the value and stores it in the

ViewState. (The - 1 prevents Position from moving past the last row.)

28. Select btnPrevious in the Control Name combo box, and then select

Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

29. Add the following lines to the procedure:

     If Me.Position > 0 Then
         Me.Position -= 1
         ViewState("Position") = Me.Position
         DataBind()
     End If

34. Press F5.

Visual Studio displays the page in the default browser.


35. Click the Next button.

The page displays the next category.

36. Click the Previous button.

The page displays the previous category.

37. Close the browser.

38. Close the code editor and the form designer.

Visual C# .NET

1. In the Solution Explorer, right-click Position.aspx, and then select Set

as Start Page.

2. Double-click the file.

Visual Studio displays the page in the form designer.


3. Press F7 to display the code editor.

4. Add the following global declaration to the top of the class:

public int pagePosition;

5. Add the following lines to the Page_Load method:

     if (this.IsPostBack == true)
     {
         this.dsCategories1 = (dsCategories) ViewState["dsCategories"];
         this.pagePosition = (int) ViewState["pagePosition"];
     }
     else
     {
         this.daCategories.Fill(this.dsCategories1.Categories);
         ViewState["dsCategories"] = this.dsCategories1;
         ViewState["pagePosition"] = 0;
     }
     this.DataBind();

This code is very similar to the procedure we used in Chapter 12 to store the

DataSet with the page, but we’re also storing the value of the new variable,

pagePosition.

17. In the Properties Window of the form designer, select Position from

the controls combo box. Click the Events button, and then double-click

the DataBinding event.

Visual Studio adds the event handler to the code.

18. Add the following lines to the Position_DataBinding procedure:

     DataRow dr;

     dr = this.dsCategories1.Categories.DefaultView[pagePosition].Row;
     this.txtCatID.Text = DataBinder.Eval(dr, "CategoryID").ToString();
     this.txtName.Text = (string) DataBinder.Eval(dr, "CategoryName");
     this.txtDescription.Text = (string) DataBinder.Eval(dr, "Description");


The first two lines declare a local variable, dr, and set it to the row of the

Categories table specified by the pagePosition variable. The next three bind the

value of columns in the row to the Text properties of the appropriate controls.

24. In the form designer, double-click the Next button.

Visual Studio adds the event handler to the code.

25. Add the following lines to the procedure:

     if (this.pagePosition < this.dsCategories1.Categories.Count - 1)
     {
         this.pagePosition++;
         ViewState["pagePosition"] = this.pagePosition;
         DataBind();
     }

The code checks that the current value of pagePosition is less than the index of the

last row in the Categories table, and if so, it increments the value and stores it in

the ViewState. (The - 1 prevents pagePosition from moving past the last row.)

31. In the Form Designer, double-click the Previous button.

Visual Studio adds the event handler to the code.

32. Add the following lines to the procedure:

     if (this.pagePosition > 0)
     {
         this.pagePosition--;
         ViewState["pagePosition"] = this.pagePosition;
         DataBind();
     }

38. Press F5.

Visual Studio displays the page in the default browser.

39. Click the Next button.

The page displays the next category.


40. Click the Previous button.

The page displays the previous category.

41. Close the browser.

42. Close the code editor and the form designer.

Web Form Validation

The .NET Framework supports a number of validation controls which can be used to

validate data. The Web form validation controls, which are shown in Table 13-5, are

more sophisticated than the Windows Forms ErrorProvider control, which only displays

error messages. The Web form controls perform the validation checks and display any

resulting error messages.

Table 13-5: Validation Controls

Validation Control          Description
RequiredFieldValidator      Ensures that the input control contains a value
CompareValidator            Compares the contents of the input control to a constant value or the contents of another control
RangeValidator              Checks that the contents of the input control are between the specified upper and lower bounds, which may be characters, numbers, or dates
RegularExpressionValidator  Checks that the contents of the input control match the pattern specified by a regular expression
CustomValidator             Validates the contents of the input control by using custom logic

Each validation control checks for a single condition in a single control on the page,

which is known as the input control. To check for multiple conditions, multiple validation

controls can be assigned to a single input control. This is frequently the case because all

of the controls except RequiredFieldValidator consider a blank field to be valid.

The conditions specified by the validation controls assigned to a given input control will

be combined with a logical AND—all of the conditions must be met or the control will be

considered invalid. If you need to combine validation conditions with a logical OR, you

can use a CustomValidator control to manually check the value.

If the browser supports DHTML, validation will first take place on the client, and the form

will not be submitted until all conditions are met. Whether or not validation has occurred

on the client, validation will always occur on the server when a Click event is processed.

Additionally, you can manually call a control’s Validate method to validate its contents

from code.

When the page is validated, the contents of the input control are passed to the validation

control (or controls), which tests the contents and sets its IsValid property accordingly.

If any validation check fails, that control’s IsValid property is set to false, and the Page

object’s IsValid property is also set to false. You can check for these conditions in code

and take whatever action is required.
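Because server-side validation always runs when a Click event is processed, a submit handler can guard its work with the Page object’s IsValid property. A minimal sketch (the btnSubmit handler name and the save logic are illustrative, not part of this chapter’s sample):

```C#
// Hypothetical Submit handler; runs after ASP.NET has evaluated all
// validation controls on the page.
private void btnSubmit_Click(object sender, System.EventArgs e)
{
    if (!this.Page.IsValid)
    {
        // One or more validation controls failed; their error messages
        // are already displayed, so simply return.
        return;
    }
    // ... all checks passed; safe to process the submitted data ...
}
```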

Add a RequiredFieldValidator Control to a Form

1. In the Solution Explorer, right-click Validation.aspx, and then on the

context menu, select Set as Start Page.

2. In the Solution Explorer, double-click Validation.aspx.

Visual Studio displays the page in the form designer.


3. Drag a RequiredFieldValidator control from the Web Forms tab of the

Toolbox to the right of the CategoryName TextBox control.

4. In the Properties window, set the RequiredFieldValidator control’s

ErrorMessage property to Name cannot be left blank , and then set its

ControlToValidate property to txtName.

5. Press F5.

Visual Studio displays the page in the default browser.

6. Click Submit.

The validation control displays the error message next to the text box.


7. Close the browser.

Chapter 13 Quick Reference

To: Display data in a DataGrid control
Do this: Set the DataSource, and optionally the DataKeyField, in the Property Builder

To: Control the columns displayed in a data-bound DataGrid
Do this: In the Columns section of the Property Builder, clear the Create Columns Automatically At Run Time check box, and then select the columns to be displayed

To: Implement sorting in a DataGrid control
Do this: Bind the DataGrid to a DataView, select Allow Sorting in the Property Builder, and then build an event handler for the SortCommand event:
    Me.myDataView.Sort = e.SortExpression
    DataBind()

To: Display data in a DataList control
Do this: Set the DataSource and DataMember properties of the DataList, and then specify the data binding for each control in the DataList control’s templates

To: Implement paging in a DataGrid control
Do this: Select a paging option from the Paging pane of the DataGrid control’s Property Builder

Part V: ADO.NET and XML

Chapter 14: Using the XML Designer

Chapter 15: Reading and Writing XML

Chapter 16: Using ADO in the .NET Framework

Chapter 14: Using the XML Designer

Overview

In this chapter, you’ll learn how to:

§ Create an XML schema

§ Create a Typed DataSet

§ Generate a Typed DataSet from an XML schema


§ Add DataTables to an XML DataSet schema from an existing data source

§ Create DataTables in an XML DataSet schema

§ Add keys to an XML schema

§ Add relations to an XML schema

§ Create elements

§ Create simple types

§ Create complex types

§ Create attributes

In this chapter, we’ll look at the XML Designer, the Microsoft Visual Studio .NET tool that

supports the creation of XML schemas and Microsoft ADO.NET Typed DataSets.

Understanding the XML Schemas

An XML schema is a document that defines the structure of XML data. Much like a

database schema, an XML schema can also be used to validate the contents and

structure of an XML file.

An XML schema is defined using the XML Schema Definition language (XSD). XSD is

similar in structure to HTML, but whereas HTML defines the layout of a document, XSD

defines the structure and content of the data.

Note XML schemas in the Microsoft .NET Framework conform to the

World Wide Web Consortium (W3C) recommendation, as defined

at http://www.w3.org/2001/XMLSchema. Additional schema

elements that are used to support .NET Framework objects, such

as DataSets and DataRelations, conform to the schema defined at

urn:schemas-microsoft-com:xml-msdata. (Such extensions

conform to the W3C recommendation and will simply be ignored

by XML parsers that do not support them.)

XML schemas are defined in terms of elements and attributes. Elements and attributes

are very similar, and can often be used interchangeably, although there are some

distinctions:

§ Elements can contain other items; attributes are always atomic.

§ Elements can occur multiple times in the data; attributes can occur only once.

§ By using the <xs:sequence> tag, a schema can specify that elements must

occur in the order they are specified; attributes can occur in any order.

§ Only elements can be nested within <xs:choice> tags, which specify mutually

exclusive elements (that is, one and only one of the elements can occur).

§ Attributes are restricted to built-in data types; elements can be defined using

user-defined types.

By convention, elements are used for raw data, while attributes are used for metadata;

but you can use whichever best suits your purposes.
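To make the distinction concrete, here is a small hypothetical XML fragment (the names are invented for illustration) that follows the convention: the identifier travels as an attribute, while the raw data travels as an element.

```xml
<!-- illustration only: id as an attribute (metadata), name as a nested element (data) -->
<customer id="C001">
  <name>Northwind Traders</name>
</customer>
```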

Both elements and attributes define items in terms of a type, which defines the data that

the element or attribute can validly contain. XML schemas support simple types, which

are atomic values such as string or Boolean, and complex types, which are composed of

other elements and attributes in any combination. We’ll examine types in more detail

later in this chapter.

Optionally, elements and attributes can define a name that identifies the element that is

being defined. XML element names cannot begin with a number or the letters XML, nor

can they contain spaces. Note that XML is case-sensitive, so the names MyName and

myName are considered distinct.

XML schemas are stored in text files with an XSD extension (XSD schema files). Visual

Studio provides a visual user interface for creating XML schemas, the XML Designer.

The XML tab of the XML Designer allows you to examine the contents of the XSD file

directly, while the DataSet or Schema tab provides a visual interface. Like the form

designer, the XML Designer is closely related to the XSD schema file—changes that you

make to one are reflected in the other.


Creating XML Schema and Typed DataSets

Like HTML and other markup languages descended from SGML, XML schema files are

created using tags that are delimited by angle brackets:

<tag> some text </tag>

XML schema files begin with a tag that identifies the version of XML that is being used.

.NET Framework XML schema files follow this with an <xs:schema> tag whose

targetNamespace attribute defines the namespace of all the components in this schema

and any included schemas. The <xs:schema> tag also includes references to two

namespaces—the W3C XML schema definition and the Microsoft extensions.

This standard header is created automatically by the XML Designer. If you create an

XML schema in a text editor or some other design tool, the heading has the following

structure:

<?xml version="1.0" encoding="utf-8"?>
<xs:schema targetNamespace="http://tempuri.org/XMLSchema1.xsd"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
>

The basic structure of the XML schema file created by the XML Schema Designer is:

<?xml version="1.0" encoding="utf-8"?>
<xs:schema id="myDataSet" ...>
  <xs:element name="myDataSet" msdata:IsDataSet="true">
    <xs:complexType>
      <xs:choice maxOccurs="unbounded">
      </xs:choice>
    </xs:complexType>
  </xs:element>
</xs:schema>

The first two lines are the schema heading. (The xs:schema tag contains attributes that

aren’t shown.) The next tag, <xs:element>, represents the DataSet itself. It has two

attributes: name and msdata:IsDataSet. The first attribute specifies the name of the

DataSet; the second is a Microsoft schema extension that identifies the element as a

DataSet.

The next set of tags creates a complexType. ComplexTypes, which we’ll examine in

detail in this chapter, are elements that can contain other elements and attributes. Note

that this complexType element is not assigned a name—it’s used only for structural

purposes and not referred to elsewhere in the schema.

The final set of tags creates a choice group. Groups, which we’ll also examine later in

this chapter, define how individual elements can validly occur in the XML data. The

choice group creates a mutually exclusive set. The maxOccurs=”unbounded” attribute

specifies that the data can occur any number of times within the group, but because it is

a choice group, all of the data must be the same type.

The DataTables are defined as elements within the choice group. We’ll examine their

structure later in the chapter.

Visual Studio supports the creation of XML schemas and Typed DataSets interactively.

Both types of items use the XML Designer, but an XML schema will output only an XML

schema (XSD), while the DataSet will automatically generate both the schema and a

class file defining the Typed DataSet.


Creating Schemas

Like any other project component, XML schemas are added to a project by using the

Add New Item dialog box.

Add a Schema to the XML Designer

1. In Visual Studio .NET, open the SchemaDesigner project from the

Start page or the File menu.

2. On the Project menu, choose Add New Item.

Visual Studio displays the Add New Item dialog box.

3. Select XML Schema in the Templates pane, and then click Open.

Visual Studio adds an XML schema named XMLSchema1 to the project, and

then opens the XML Designer.

4. Close the XML Designer.

Creating DataSets

In previous chapters, we have seen how to generate a Typed DataSet based on

DataAdapters that have been added to the project. It’s also possible to add a DataSet to

a project and configure it manually, using the same technique we used in the previous

exercise to add an XML schema to the project.

Add a DataSet to the XML Designer

1. On the Project menu, choose Add New Item.

Visual Studio displays the Add New Item dialog box.


2. Select DataSet in the Templates pane, and then click Open.

Visual Studio adds a Typed DataSet named Dataset1 to the project, and

opens the XML Designer.

3. Select the XML tab of the XML Designer.

Visual Studio displays the XML schema source code.

4. Close the XML Designer.

When you specify a DataSet in the Add New Item dialog box, Visual Studio automatically

generates a class file from the XML schema to define the DataSet. If you create only an

XML schema, or if you import an XML schema from another source, the Typed DataSet


won’t automatically be added; but you can create it by using the Generate DataSet

command on the XML Designer’s Schema menu.

Generate a DataSet from a Schema

1. In the Solution Explorer, double-click XMLSchema1.xsd.

Visual Studio opens the (blank) schema in the XML Designer.

2. On the Schema menu, choose Generate Dataset.

Visual Studio creates a Typed DataSet class based on the XML schema.

3. Expand XMLSchema1 to display the class file in the Solution Explorer.

You may need to click the Show All Files button on the Solution

Explorer toolbar.

4. Close the XML Designer.

Understanding Schema Properties

The XML Designer exposes two sets of properties for schemas: DataSet properties,

which are available only for DataSet schemas, and miscellaneous properties that are

defined by the W3C recommendation.

The properties exposed by the Microsoft schema extensions are shown in Table 14-1.

The IsDataSet property identifies this particular element as the root of the Typed DataSet

definition. The XML Designer will generate an error if more than one element has

IsDataSet set to true.

The CaseSensitive, dataSetName, and Locale properties map directly to their DataSet

counterparts, while the key property is used internally by the .NET Framework.

Table 14-1: Microsoft Schema Extension Properties

CaseSensitive: Controls whether the DataSet is case-sensitive. Note that this affects only the DataSet; the XML schema is always case-sensitive

dataSetName: The name of the Typed DataSet based on the XML schema

IsDataSet: Defines the element as the root of a DataSet

key: Set of unique constraints defined on the DataSet

Locale: Locale information used to compare strings in the DataSet

The Misc section of the Properties window exposes the attributes of the schema element

defined by the W3C recommendation, as shown in Table 14-2. The id,

targetNamespace, and version properties set the values of the corresponding attributes for the

schema, while the remaining properties define the behavior of other schema

components.

Table 14-2: XML Schema Properties

attributeFormDefault: Determines whether attribute names from the target namespace must be namespace-qualified

blockDefault: Sets the default value for the block attribute of elements and complex types in the schema namespace

elementFormDefault: Determines whether element names from the target namespace must be namespace-qualified

finalDefault: Sets the default value for the final attribute of elements and complex types in the schema namespace

id: The value of the element's ID attribute

import: Collection of imported schemas

include: Collection of included schemas

NameSpace: Collection of namespaces declared in the schema

targetNamespace: The target namespace of the schema

version: The value of the element's version attribute

The attributeFormDefault and elementFormDefault properties determine whether

attribute and element names, respectively, must be preceded with a namespace

identifier and a colon (for example, name="myDS:myName" as opposed to

name="myName").


The blockDefault and finalDefault properties define the default values for the block and

final attributes of elements within the namespace. We’ll examine these attributes in the

following section.

Finally, the import, include, and NameSpace properties contain collections of

namespaces that are imported, included, and declared in the schema, respectively.

Examine the Namespaces Declared in an XML Schema

1. In the Solution Explorer, double-click Dataset1.xsd.

Visual Studio opens the schema in the XML Designer.

2. In the Properties window, select Namespace, and then click the

Ellipsis button.

The XML Designer displays the XMLNamesSpace Collection Editor.

3. In the Members pane, select xs.

The XMLNamesSpace Collection Editor displays the NameSpace property

and qualifier of the W3C XSD recommendation.

Working with DataTables in the XML Designer

In the previous section, we examined the structure of tags within a DataSet schema.

Remember that we said that DataTables are defined as elements within a choice group.

The DataTable itself has the following nominal structure:

<xs:element name="myTable">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="Column1" type="xs:string" />
      <xs:element name="Column2" type="xs:boolean" />
    </xs:sequence>
  </xs:complexType>
</xs:element>

The structure is similar to the nominal structure of a schema: an element is created and

assigned the name of the table. Within the element is an unnamed complex type, and

within that is an XML group, and within that are the column elements. The XML group

used for a schema is a choice, which makes element types mutually exclusive. The

DataTable structure uses a sequence group, which ensures that the nested elements will

be in the order specified.


Adding DataTables to the XML Designer

Visual Studio supports a number of methods for creating DataTables in the XML

Designer. We’ve been using one of them, generating a DataSet based on DataAdapters

that have been added to a form, for several chapters.

You can also drag an existing table, view, or stored procedure from the Server Explorer

to the XML Designer Schema tab, or create a DataTable from scratch. As we'll see in

Chapter 15, you can also infer schemas from XML data at run time.

Add a Table or View to a Schema

1. In the XML Designer, open the Dataset1 schema (if necessary), and

then select the DataSet tab.

2. In the Server Explorer, expand the connection to the SQL Northwind

database, and then expand the Tables node.

3. Select the Categories table, and drag it onto the XML Designer.

Visual Studio adds the table to the schema.

4. Select the XML tab of the XML Designer.

Visual Studio displays the XML schema source code.

Create a Table from Scratch

1. In the XML Designer, select the DataSet tab.


2. In the XML Schema section of the Toolbox, drag an Element onto the

design surface.

Visual Studio adds a new Element to the schema.

3. The element name, element1, is selected on the design surface.

Change it to Products.

4. Click the first column of the first row of the element, and then expand

the drop-down list.

5. Select element from the drop-down list.

The XML Designer adds a nested element to the Products element.


6. Change the element name to ProductID.

Creating Keys

The XML Designer supports three different tags that pertain to entity and referential

integrity: primary keys, keyrefs, and unique keys. Primary keys guarantee uniqueness

within a DataSet. A <keyref> tag is essentially a foreign key reference and is used to

implement a one-to-many relationship. Unique keys guarantee uniqueness, but they are

not typically used for referential integrity.

Creating Primary Keys

The W3C recommendation supports the <key> tag, which specifies that the values of the

specified element must be unique, always present, and not null. The Microsoft schema

extensions add an attribute to this tag, msdata:PrimaryKey, which identifies the key as

being the primary key for the DataTable.

The scope of a key is the scope of the element that contains it. In a .NET Framework

DataSet schema, keys are defined at the DataSet level, which means that the key needs

to be unique, not just within a DataTable, but within the DataSet as a whole.

Primary keys are added to a DataTable by using the Edit Key dialog box, which is

displayed if you drag a key onto an element or choose Add Key from the Schema menu

or an element’s context menu. The Edit Key dialog box allows you to specify multiple

fields for a key, if necessary, and also specify whether the key should accept null values

or be designated as the primary key for the DataTable.


Add a Primary Key to a DataTable

1. On the Schema menu, point to Add, and then choose New Key.

Visual Studio displays the Edit Key dialog box.

2. Change the name of the key to ProductsPK, and then select the

Dataset Primary Key check box.


3. Click OK.

The XML Designer adds the primary key to the Products element.

4. Select the XML tab of the XML Designer.

The XML Designer displays the code for the new key.
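The generated markup should look roughly like the following sketch; the exact selector and field xpath values depend on your schema, and msdata:PrimaryKey is the Microsoft extension described above:

```xml
<!-- sketch: a key constraint marking ProductID as the DataTable's primary key -->
<xs:key name="ProductsPK" msdata:PrimaryKey="true">
  <xs:selector xpath=".//Products" />
  <xs:field xpath="ProductID" />
</xs:key>
```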

Creating Unique Keys

Primary keys are, as we’ve seen, required elements that must be unique within the

DataSet and cannot be null. There can be only one primary key defined for a DataTable.

Unique keys differ from primary keys in that they can allow nulls, and you can define

multiple unique keys for any given DataTable.

Unique keys are added by using the same Edit Key dialog box that is used to add

primary keys.

Add a Unique Key to a DataTable

1. Select the DataSet tab of the XML Designer.

2. Drag a key tag from the XML Schema tab of the Toolbox onto the

Categories element.

The XML Designer displays the Edit Key dialog box.

3. Change the name of the key to CategoryName.

4. Select CategoryID in the Fields pane, expand the drop-down list, and

then select the CategoryName field.


5. Click OK.

The XML Designer adds the new key to the Categories element.

6. Select the XML tab of the XML Designer.

Visual Studio displays the XML schema code.
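The unique key should appear along these lines (a sketch; the selector path depends on how the Categories table was added):

```xml
<!-- sketch: a unique constraint on the CategoryName column; no msdata:PrimaryKey, so nulls are allowed -->
<xs:unique name="CategoryName">
  <xs:selector xpath=".//Categories" />
  <xs:field xpath="CategoryName" />
</xs:unique>
```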


Creating Relations

KeyRefs are implemented as Relations in the XML Designer. A Relation translates

directly to a DataRelation within a DataSet. Relations are added to a DataSet by using

the Edit Relation dialog box, which, like the Edit Key dialog box, can be displayed by

dragging a Relation from the Toolbox or by choosing New Relation on the Schema

menu.

In addition to the basic relationship information, the Edit Relation dialog box allows you

the option of creating a foreign key constraint only. If you select this option, the DataSet

class produced from the XML schema will be slightly more efficient, but you will not be

able to use the GetChildRows and GetParentRows methods to reference related data.

In addition, the Edit Relation dialog box allows you to specify three referential integrity

rules: Update, Delete, and Accept/Reject. These rules determine what happens when

primary key rows are updated or deleted, or when changes are accepted or rejected.

The possible values for these rules are shown in Table 14-3. The Accept/Reject rule

supports only Cascade and None.

Table 14-3: Referential Integrity Rules

Cascade: Deletes or updates related rows

SetNull: Sets the foreign key values in related rows to null

SetDefault: Sets the foreign key values in related rows to their default values

None: Takes no action on related rows

Add a Relation to a DataSet

1. Select the DataSet tab of the XML Designer.


2. Select the Categories element, and then on the Schema menu, point to

Add, and choose New Relation.

The XML Designer displays the Edit Relation dialog box.

3. Change the Relation name to CategoryProducts.

4. Choose Products in the Child Element combo box.

5. Click OK.

Visual Studio adds the Relation to the XML Schema Designer.
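In the schema source, a Relation is expressed as a keyref, roughly as in this sketch. The refer attribute names the parent key constraint; "CategoriesPK" is an assumed name here, since the actual constraint name depends on your schema:

```xml
<!-- sketch: relates each Products row's CategoryID to the Categories key named by refer -->
<xs:keyref name="CategoryProducts" refer="CategoriesPK">
  <xs:selector xpath=".//Products" />
  <xs:field xpath="CategoryID" />
</xs:keyref>
```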

Working with Elements

Throughout this chapter, we’ve been talking about elements, and even creating them,

without examining them in any detail. We’ll correct that now. An element in an XML

schema DataSet is a description of an item of data.

At its simplest, an element consists only of the <xs:element> tag:

<xs:element />

However, most elements, unless they’re being used only as containers, contain a name

and type attribute:

<xs:element name="productID" type="xs:integer" />


Elements may also contain other tags. (Help states that ‘elements can contain other

elements,’ but that’s not strictly true. Specifically, ‘other elements’ doesn’t refer to

element tags.) The tags that can be nested within an element tag are:

§ <xs:annotation>

§ <xs:complexType>

§ <xs:key>

§ <xs:keyref>

§ <xs:simpleType>

§ <xs:unique>

As we saw in the previous section, the <xs:key>, <xs:keyref>, and <xs:unique> tags are

used to define constraints. The <xs:annotation> tag, as might be expected, is used to

add information to be used by applications or displayed to users.

The <xs:complexType> tag is a container tag, used to group other tags. We’ve seen it

used in the structure of both schemas and DataTables in the XML Designer. The

<xs:simpleType> tag defines a data type by specifying valid values, based on other

types. We’ll examine both of these tags in detail later in this chapter.

Element Properties

As usual, the XML Designer exposes the attributes of the <xs:element> tag as

properties. The attributes exposed by the W3C recommendation are shown in Table 14-4.

Table 14-4: XML Schema Element Properties

abstract: Indicates whether an instance of the element can appear in a document

block: Prevents elements of the specified type of derivation from being used in place of the element

default: The default value of the element

final: The type of derivation

fixed: The predetermined, unchangeable value of the element

form: The form of the element

id: The ID of the element

key: The collection of unique keys defined for this element

maxOccurs: The maximum number of times the element can occur within the containing element

minOccurs: The minimum number of times the element can occur within the containing element

name: The name of the element

nillable: Determines whether an explicit nil can be assigned to the element

ref: The name of an element declared in the namespace

substitutionGroup: The name of the element for which this element can be substituted

type: The data type of the element

The abstract, block, final, form, ref, and substitutionGroup properties pertain to the

derivation of elements from other elements. Their use is outside the scope of this book,

but they are extensively documented in online Help and other XML documentation

sources.


The name and id properties are used to identify the element. The ID attribute must be

unique within the XML schema. The name property is also shown in the visual

representation of the element.

The remaining properties define the value of the element. Of these, the most important

property is type, which defines the data type of the element. The type of an element can

be either a built-in XML type or a simple or complex type defined elsewhere in the XML

schema. Like the name property, the type property is shown in the visual display of the

element.

The default property, not surprisingly, specifies a default value if none is specified, while

the fixed property specifies a value that the element must always contain. Both of these

properties must be of the data type specified by the type attribute, and they are mutually

exclusive. The nillable property indicates whether the value can be set to a null value or

omitted.

Finally, the maxOccurs and minOccurs properties specify the maximum and minimum

number of times the element can occur, respectively. The maxOccurs property can be

set to either a non-negative integer or the string ‘unbounded,’ which indicates that there

is no limit to the number of occurrences.
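For example, a hypothetical phone element that may be omitted entirely or repeated any number of times could be declared like this:

```xml
<!-- illustration only: an optional, repeatable string element -->
<xs:element name="phone" type="xs:string" minOccurs="0" maxOccurs="unbounded" />
```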

In addition to the element attributes defined by the W3C recommendation, the Microsoft

schema extensions expose the properties shown in Table 14-5. All of these correspond

directly to their counterparts in the DataColumn object.

Table 14-5: Microsoft Schema Extension Element Properties

AutoIncrement: Determines whether the value automatically increments when a row is added

AutoIncrementSeed: Sets the starting value for an AutoIncrement element

AutoIncrementStep: Determines the step by which AutoIncrement elements are increased

Caption: Specifies the display name for an element

Expression: A DataColumn expression for the element

ReadOnly: Determines whether element values can be modified after the row has been added to the DataTable

Define the type Property of an Element

1. Select the ProductID nested element in the XML Designer, expand the

type drop-down list, and then select int.

2. In the Properties window, select the AutoIncrement property, expand

the drop-down list, and then choose true.

3. Save and close DataSet1.
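In the schema source, the exercise above produces an element declaration along these lines (a sketch; the msdata-prefixed attribute is the Microsoft extension from Table 14-5):

```xml
<!-- sketch: an int column whose value auto-increments as rows are added -->
<xs:element name="ProductID" type="xs:int" msdata:AutoIncrement="true" />
```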

Working with Types

As we’ve seen, the type property of an element defines the data type of an element or

attribute. XML schemas support two kinds of data types: simple and complex. A simple

type resolves to an atomic value, while a complex type contains other complex types,

elements, or attributes.


The W3C recommendation allows XML schemas to define user-defined types. As we’ve

seen, the nominal structure of a .NET Framework DataSet XML schema uses user-defined

complex types to define the columns of a table.

The XML Designer supports the creation of user-defined types as well. User-defined

types are useful for encapsulating business rules. For example, if a ShipMethod element

is limited to the values USPS or 2nd Day Air, a user-defined enumeration can be used to

restrict the values rather than adding another DataTable to the schema.
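Such an enumeration might be declared as follows; the type name is invented for illustration:

```xml
<!-- restricts a value to exactly two allowed strings -->
<xs:simpleType name="ShipMethodType">
  <xs:restriction base="xs:string">
    <xs:enumeration value="USPS" />
    <xs:enumeration value="2nd Day Air" />
  </xs:restriction>
</xs:simpleType>
```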

Simple Types

The XML schema recommendation supports two different kinds of simple types (primitive

and derived) and supports the creation of new, user-defined simple types. Primitive types

are the fundamental types. Examples of primi-tive types include string, float, and

Boolean. Derived types are defined by limiting the valid range of values for a primitive

type. An example of a built-in derived type is positiveInteger, which is an integer that

allows only values greater than zero.

Like derived types, user-defined simple types restrict the values of existing simple types

by limiting the valid range of values. User-defined simple types can be derived from base

types by using any of the methods shown in Table 14-6.

Table 14-6: Simple Type Derivation Methods

restriction: Restricts the range of values to a subset of those allowed by the base type

list: Defines a list of values of the base type that are valid for the type

union: Defines a type by combining the values of two or more other simple types

Of the available derivation methods, restriction is the most common. The valid range of

values of a simple type is restricted by applying facets to the type. A facet is much like an

attribute, but it specifically limits the valid range of values for a user-defined type. Table

14-7 describes the various facets available for restriction of values.

Table 14-7: Data Type Facets

enumeration: Constrains data to the specified set of values.

fractionDigits: Specifies the maximum number of decimal digits.

length: Specifies the nonNegativeInteger length of the value. The exact meaning is determined by the data type.

maxExclusive: Specifies the exclusive upper-bound value; all values must be less than this value.

maxInclusive: Specifies the inclusive upper-bound value; all values must be equal to or less than this value.

maxLength: Specifies the nonNegativeInteger maximum length of the value. The exact meaning is determined by the data type.

minExclusive: Specifies the exclusive lower-bound value; all values must be greater than this value.

minInclusive: Specifies the inclusive lower-bound value; all values must be equal to or greater than this value.

minLength: Specifies the nonNegativeInteger minimum length of the value. The exact meaning is determined by the data type.

pattern: A regular expression specifying a pattern that the value must match.

totalDigits: Specifies the nonNegativeInteger maximum number of decimal digits for the value.

whiteSpace: Specifies how white space in the value is to be handled.

Create a simpleType Using the length Facet

1. In the Solution Explorer, double-click XMLSchema1.

Visual Studio opens the schema in the XML Designer.

2. Drag a simpleType control from the XML Schema tab of the Toolbox

onto the design surface.

The XML Designer adds a simple type to the schema.

3. Change the name of the type to IDString.

4. Click the first column of the first row of the type, and then expand the

drop-down list.

5. From the drop-down list, select facet.

6. From the drop-down list in the second column, select length.


7. In the third column, type 2.

The XML Designer creates a user-defined simpleType that limits the length of

a string to two characters.

8. Select the XML tab of the XML Designer.

The XML Schema Designer displays the XML code for the simpleType

definition.
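The resulting definition should look essentially like this:

```xml
<!-- a two-character string type, derived by restriction with the length facet -->
<xs:simpleType name="IDString">
  <xs:restriction base="xs:string">
    <xs:length value="2" />
  </xs:restriction>
</xs:simpleType>
```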

Complex Types

Complex types are user-defined types that contain elements, attributes, and group

declarations. The elements of a complex type can be other complex types, allowing

infinite nesting.

We’ve already seen unnamed complex types used to define the columns of an ADO.NET

DataTable. A DataTable uses a sequence group to specify that the elements contained

within the group must occur in a particular order. The W3C XML schema

recommendation supports two other types of element groups, choice and all, as shown

in Table 14-8.


Table 14-8: Element Group Types

sequence: Elements must occur in the order specified

choice: Only one of the elements specified can occur

all: Either all of the elements specified must occur, or none of them can occur

Create a complexType Containing a Choice Group

1. Drag a complexType control from the XML Schema tab of the Toolbox

onto the design surface.

The XML Designer adds a complex type to the schema.

2. Change the name of the type to ChoiceGroup.

3. Click the first column of the first row of the type, and then expand the

drop-down list.


4. Select choice from the drop-down list.

The XML Designer adds a choice group to the type.

5. Add two elements, Value1 and Value2, to the choice group.

6. Select the XML tab of the XML Designer.

The XML Designer displays the XML code for the complex type.
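The code should resemble this sketch (the element types are shown as xs:string for illustration):

```xml
<!-- only one of Value1 or Value2 may occur in an instance document -->
<xs:complexType name="ChoiceGroup">
  <xs:choice>
    <xs:element name="Value1" type="xs:string" />
    <xs:element name="Value2" type="xs:string" />
  </xs:choice>
</xs:complexType>
```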


Working with Attributes

Attributes are similar to elements, with some restrictions. Attributes cannot contain other

tags, they cannot be used to derive simple types, and they cannot be included in element

groups. They do, however, require slightly less storage than elements, and for that

reason, they can be useful if you’re working outside the context of ADO.NET objects.

Attribute Properties

Attributes expose the same extensions to the W3C recommendation as elements. The

W3C properties exposed by attributes are shown in Table 14-9. The attribute property

set is a subset of the properties exposed by the element. Because attributes cannot be

used to derive types, the properties that control derivation are not exposed.

Table 14-9: Attribute Properties

default: The default value of the attribute

fixed: The predetermined, unchangeable value of the attribute

form: The form of the attribute, either qualified or unqualified

id: The ID of the attribute; must be unique within the document

Name: The name (NCName) of the attribute

Ref: The name of an attribute declared in the namespace

Type: The data type of the attribute

Use: Specifies how the attribute is used

Attributes expose one property, use, that is not exposed by elements. The use property

determines how the attribute can be used when it is included in elements and complex


types. The use property can be assigned to one of three values: optional, prohibited, or

required.

The meanings of optional and required are self-evident. Prohibited is used to exclude the

attribute from user-defined types based on a complex type that includes the attribute.
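In the XSD source itself, these settings appear as attributes on the declaration. For example, a string attribute that must always be supplied might be declared like this (the attribute name here is illustrative):

```xml
<xs:attribute name="companyName" type="xs:string" use="required" />
```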

Create an Attribute

1. Drag an Attribute control from the XML Schema tab of the Toolbox

onto the design surface.

The XML Designer adds an attribute to the schema.

2. Change the name of the attribute to companyName.

3. In the Properties window, set the fixed property to XML, Inc.

The attribute, which will always have the value XML, Inc., is added to the

schema.

4. Select the XML tab of the XML Designer.

Visual Studio displays the XML source code for the attribute.

Chapter 14 Quick Reference

To                                            Do this
Create an XML schema                          Choose XML Schema in the Add New Item dialog box
Create a Typed DataSet                        Choose DataSet in the Add New Item dialog box
Generate a Typed DataSet from an XML schema   Choose Generate DataSet on the Schema menu of the XML Designer
Add DataTables from an existing data source   Drag the table, view, or stored procedure from the Solution Explorer to the XML Designer
Create DataTables                             Add an element to the XML Designer, and create columns as nested elements
Add keys to an XML schema                     Select the DataTable and then choose New Key on the Schema menu, or drag a Key control from the XML Schema tab of the Toolbox onto the element
Add relations to an XML schema                Select the DataTable, and then choose New Relation on the Schema menu, or drag a Relation control from the XML Schema tab of the Toolbox onto the element
Create elements                               Drag an element control from the XML Schema tab of the Toolbox onto the design surface
Create simple types                           Drag a simpleType control from the XML Schema tab of the Toolbox onto the design surface
Create complex types                          Drag a complexType control from the XML Schema tab of the Toolbox onto the design surface
Create attributes                             Drag an attribute control from the XML Schema tab of the Toolbox onto the design surface

Chapter 15: Reading and Writing XML

Overview

In this chapter, you’ll learn how to:

§ Retrieve an XML Schema from a DataSet

§ Create a DataSet Schema using ReadXmlSchema

§ Infer the Schema of an XML Document

§ Load XML Data using ReadXml

§ Create an XML Schema using WriteXmlSchema

§ Write Data to an XML Document

§ Create a synchronized XML View of a DataSet

In the previous chapter, we looked at the XML Schema Designer, the Microsoft Visual Studio .NET tool that supports the creation of XML schemas and Typed DataSets. In this chapter, we'll look at the DataSet methods that support reading data from and writing data to an XML stream.

The Microsoft .NET Framework provides extensive support for manipulating XML, most

of which is outside the scope of this book. In this chapter, we’ll examine only the

interface between XML and Microsoft ADO.NET DataSets.


Understanding ADO.NET and XML

The .NET Framework provides a complete set of classes for manipulating XML

documents and data. The XmlReader and XmlWriter objects, and the classes that

descend from them, provide the ability to read and optionally validate XML. The

XmlDocument and XmlSchema objects and their related classes represent the XML

itself, while the XslTransform and XPathNavigator classes support XSL Transformations

(XSLT) and apply XML Path Language (XPath) queries, respectively.

In addition to providing the ability to manipulate XML data, the XML standard is

fundamental to data transfer and serialization in the .NET Framework. For the most part,

this happens behind the scenes, but we’ve already seen that ADO.NET Typed DataSets

are represented using XML schemas.

Additionally, the ADO.NET DataSet class provides direct support for reading and writing

XML data and schemas, and the XmlDataDocument provides the ability to synchronize

XML data and a relational ADO.NET DataSet, allowing you to manipulate a single set of

data using both XML and relational tools. We’ll explore these techniques in this chapter.

Using the DataSet XML Methods

As we’ve seen, the .NET Framework exposes a set of classes that allow you to

manipulate XML data directly. However, if you need to use relational operations such as

sorting, filtering, or retrieving related rows, the DataSet provides an easier mechanism.

Furthermore, the XML classes don’t support data binding, so if you intend to display the

data to users, you must use the DataSet XML methods.

Fortunately, the choice between treating any given set of data as an XML hierarchy or

relational DataSet isn’t mutually exclusive. As we’ll see later in this chapter, the

XmlDataDocument allows you to manipulate a single set of data by using either or both

sets of tools.

The GetXml and GetXmlSchema Methods

Perhaps the most straightforward of the XML methods supported by the DataSet are

GetXml and GetXmlSchema, which simply return the XML data or XSD schema as a

string value.

Retrieve a DataSet Schema Using GetXmlSchema

Visual Basic .NET

1. Open the XML project from the Start page or the File menu.

2. In the Solution Explorer, double-click GetXml.vb.

Visual Studio displays the form in the form designer.


3. Double-click Show Schema.

Visual Studio opens the code editor and adds the Click event handler.

4. Add the following code to the handler:

   Dim xmlStr As String

   xmlStr = Me.dsMaster1.GetXmlSchema()
   Me.tbResult.Text = xmlStr

5. Press F5 to run the application.

Visual Studio displays the application window.

6. Click GetXml.

Visual Studio displays the GetXml form.

7. Click Show Schema.

The application displays the DataSet schema in the text box.

8. Close the GetXml form and the application.

Visual C# .NET

1. Open the XML project from the Start page or the File menu.

2. In the Solution Explorer, double-click GetXml.cs.

Visual Studio displays the form in the form designer.


3. Double-click Show Schema.

Visual Studio opens the code editor and adds the Click event handler.

4. Add the following code to the handler:

   string xmlStr;

   xmlStr = this.dsMaster1.GetXmlSchema();
   this.tbResult.Text = xmlStr;

5. Press F5 to run the application.

Visual Studio displays the application window.

6. Click GetXml.

Visual Studio displays the GetXml form.

7. Click Show Schema.

The application displays the DataSet schema in the text box.

8. Close the GetXml form and the application.

Retrieve a DataSet’s Data Using GetXml

Visual Basic .NET

1. In the code editor, select btnData in the Control Name combo box,

and then select Click in the Method Name combo box.

Visual Studio adds the Click event handler to the code.

2. Add the following code to the handler:

   Dim xmlStr As String

   xmlStr = Me.dsMaster1.GetXml
   Me.tbResult.Text = xmlStr

3. Press F5 to run the application.

Visual Studio displays the application window.

4. Click GetXml.

Visual Studio displays the GetXml form.

5. Click Show Data.

Visual Studio displays the XML data in the text box.

6. Close the GetXml form and the application.

7. Close the GetXml form designer and code editor window.

Visual C# .NET

1. In the form designer, double-click Show Data.

Visual Studio displays the code editor window and adds the Click event

handler to the code.

2. Add the following code to the handler:

   string xmlStr;

   xmlStr = this.dsMaster1.GetXml();
   this.tbResult.Text = xmlStr;

3. Press F5 to run the application.

Visual Studio displays the application window.

4. Click GetXml.

Visual Studio displays the GetXml form.

5. Click Show Data.

Visual Studio displays the XML data in the text box.

6. Close the GetXml form and the application.

7. Close the GetXml form designer and code editor window.

The ReadXmlSchema Method

The DataSet’s ReadXmlSchema method loads a DataSet schema definition either from

the XSD schema definition or from XML. ReadXmlSchema supports four versions, as

shown in Table 15-1. You can pass the method a stream, a string identifying a file name,

a TextReader, or an XmlReader object.

Table 15-1: ReadXmlSchema Methods

Method                       Description
ReadXmlSchema(Stream)        Reads an XML schema from the specified stream
ReadXmlSchema(String)        Reads an XML schema from the file specified in the string parameter
ReadXmlSchema(TextReader)    Reads an XML schema from the specified TextReader
ReadXmlSchema(XmlReader)     Reads an XML schema from the specified XmlReader

ReadXmlSchema does not load any data; it loads only tables, columns, and constraints

(keys and relations). If the DataSet already contains schema information, new tables,

columns, and constraints will be added to the existing schema, as necessary. If an object

defined in the schema being read conflicts with the existing DataSet schema, the

ReadXmlSchema method will throw an exception.

Note If the ReadXmlSchema method is passed XML that does not

contain inline schema information, the method will infer the

schema according to the rules discussed in the following section.

Create a DataSet Schema Using ReadXmlSchema

Visual Basic .NET

1. In the Solution Explorer, double-click XML.vb.

Visual Studio displays the form in the form designer.

2. Double-click Read Schema.

Visual Studio opens the code editor and adds a Click event handler.

3. Add the following code to the handler:

   Dim newDS As New System.Data.DataSet()
   newDS.ReadXmlSchema("masterSchema.xsd")

   Me.daCategories.Fill(newDS.Tables("Categories"))
   Me.daProducts.Fill(newDS.Tables("Products"))
   SetBindings(newDS)

The first two lines declare a new DataSet and configure it by using the ReadXmlSchema method, based on the XSD schema defined in the masterSchema.xsd file, which is in the bin folder of the project directory. The remaining three lines fill the new DataSet and then call the SetBindings function, passing it the DataSet object. SetBindings, which is in the Utility Functions region of the code editor, binds the controls on the XML form to the DataSet provided.

4. Press F5 to run the application.

5. Click Read Schema.

The application displays the data from the new DataSet in the form's controls. (Note that the navigation buttons will not work because they are specifically bound to the dsMaster1 DataSet.)

6. Close the application.

Visual C# .NET

1. In the Solution Explorer, double-click XML.cs.

Visual Studio displays the form in the form designer.

2. Double-click Read Schema.

Visual Studio opens the code editor and adds a Click event handler.

3. Add the following code to the handler:

   System.Data.DataSet newDS = new System.Data.DataSet();
   newDS.ReadXmlSchema("masterSchema.xsd");

   this.daCategories.Fill(newDS.Tables["Categories"]);
   this.daProducts.Fill(newDS.Tables["Products"]);
   SetBindings(newDS);

The first two lines declare a new DataSet and configure it by using the ReadXmlSchema method, based on the XSD schema defined in the masterSchema.xsd file, which is in the Debug folder, in the bin folder of the project directory. The remaining three lines fill the new DataSet and then call the SetBindings function, passing it the DataSet object. SetBindings, which is in the Utility Functions region of the code editor, binds the controls on the XML form to the DataSet provided.

4. Press F5 to run the application.

5. Click Read Schema.

The application displays the data from the new DataSet in the form's controls. (Note that the navigation buttons will not work because they are specifically bound to the dsMaster1 DataSet.)

6. Close the application.

The InferXmlSchema Method

The DataSet’s InferXmlSchema method derives a DataSet schema from the structure of

the XML data passed to it. As shown in Table 15-2, InferXmlSchema has the same input

sources as the ReadXmlSchema method we examined in the previous section.

Additionally, the InferXmlSchema method accepts an array of strings representing the

namespaces that should be ignored when generating the DataSet schema.

Table 15-2: InferXmlSchema Methods

Method                                     Description
InferXmlSchema(Stream, namespaces())       Reads a schema from the specified stream, ignoring the namespaces identified in the namespaces string array
InferXmlSchema(File, namespaces())         Reads a schema from the file specified in the file parameter, ignoring the namespaces identified in the namespaces string array
InferXmlSchema(TextReader, namespaces())   Reads a schema from the specified TextReader, ignoring the namespaces identified in the namespaces string array
InferXmlSchema(XmlReader, namespaces())    Reads a schema from the specified XmlReader, ignoring the namespaces identified in the namespaces string array

InferXmlSchema follows a fixed set of rules when generating a DataSet schema:

§ If the root element in the XML has no attributes and no child elements

that would otherwise be inferred as columns, it is inferred as a DataSet.

Otherwise, the root element is inferred as a table.

§ Elements that have attributes are inferred as tables.

§ Elements that have child elements are inferred as tables.

§ Elements that repeat are inferred as a single table.

§ Attributes are inferred as columns.

§ Elements that have no attributes or child elements and do not repeat are

inferred as columns.

§ If elements that are inferred as tables are nested within other elements

also inferred as tables, a DataRelation is created between the two tables.

A new, primary key column named “TableName_Id” is added to both

tables and used by the DataRelation. A ForeignKeyConstraint is created

between the two tables by using the “TableName_Id” column as the

foreign key.

§ If elements that are inferred as tables contain text but have no child

elements, a new column named “TableName_Text” is created for the text

of each of the elements. If an element is inferred as a table and has text

but also has child elements, the text is ignored.

Note Only nested (hierarchical) data will result in the creation of a

DataRelation. By default, the XML that is created by the DataSet’s

WriteXml method doesn’t create nested data, so a round-trip won’t

result in the same DataSet schema. As we’ll see, however, this

can be controlled by setting the Nested property of the

DataRelation object.
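As an illustration of these rules, consider the following small, hypothetical XML document:

```xml
<Catalog>
  <Category CategoryID="1">
    <Products>
      <ProductName>Chai</ProductName>
    </Products>
  </Category>
</Catalog>
```

The root element, Catalog, has no attributes and its only child would be inferred as a table, so Catalog is inferred as the DataSet. Category has an attribute, so it is inferred as a table with a CategoryID column; Products has a child element, so it is also inferred as a table, and because it is nested within Category, a DataRelation between the two tables is created using a generated Category_Id column. ProductName, which has no attributes or child elements and does not repeat, becomes a column of the Products table.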


Infer the Schema of an XML Document

Visual Basic .NET

1. In the code editor, select btnInferSchema in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following code to the event handler:

   Dim newDS As New System.Data.DataSet()
   Dim nsStr() As String

   newDS.InferXmlSchema("dataOnly.xml", nsStr)

   Me.daCategories.Fill(newDS.Tables("Categories"))
   Me.daProducts.Fill(newDS.Tables("Products"))
   newDS.Relations.Add("CategoriesProducts", _
       newDS.Tables("Categories").Columns("CategoryID"), _
       newDS.Tables("Products").Columns("CategoryID"))
   SetBindings(newDS)

The first two lines declare DataSet and String array variables, and the third line passes them to the InferXmlSchema method. The remaining code fills the new DataSet, adds a new DataRelation to it, and then calls the SetBindings utility function that binds the XML form controls to the DataSet.

3. Press F5 to run the application.

4. Click Infer Schema.

The application displays the data in the form controls.

5. Close the application.

Visual C# .NET

1. In the form designer, double-click Infer Schema.

Visual Studio adds the event handler to the code.

2. Add the following code to the event handler:

   System.Data.DataSet newDS = new System.Data.DataSet();
   string[] nsStr = {};

   newDS.InferXmlSchema("dataonly.xml", nsStr);

   newDS.Relations.Add("CategoriesProducts",
       newDS.Tables["Categories"].Columns["CategoryID"],
       newDS.Tables["Products"].Columns["CategoryID"]);
   this.daCategories.Fill(newDS.Tables["Categories"]);
   this.daProducts.Fill(newDS.Tables["Products"]);
   SetBindings(newDS);

The first two lines declare DataSet and String array variables, while the third line passes them to the InferXmlSchema method. The remaining code adds a new DataRelation to the new DataSet, fills it, and then calls the SetBindings utility function that binds the XML form controls to the DataSet.

3. Press F5 to run the application.

4. Click Infer Schema.

The application displays the data in the form controls.

5. Close the application.

The ReadXml Method

The DataSet’s ReadXml method reads XML data into a DataSet. Optionally, it may also

create or modify the DataSet schema. As shown in Table 15-3, the ReadXml method

supports the same input sources as the other DataSet XML methods we’ve examined.

Table 15-3: ReadXml Methods

Method                             Description
ReadXml(Stream)                    Reads an XML schema and data from the specified stream
ReadXml(String)                    Reads an XML schema and data from the file specified in the string parameter
ReadXml(TextReader)                Reads an XML schema and data from the specified TextReader
ReadXml(XmlReader)                 Reads an XML schema and data from the specified XmlReader
ReadXml(Stream, XmlReadMode)       Reads an XML schema, data, or both from the specified stream, as determined by the XmlReadMode
ReadXml(String, XmlReadMode)       Reads an XML schema, data, or both from the file specified in the string parameter, as determined by the XmlReadMode
ReadXml(TextReader, XmlReadMode)   Reads an XML schema, data, or both from the specified TextReader, as determined by the XmlReadMode
ReadXml(XmlReader, XmlReadMode)    Reads an XML schema, data, or both from the specified XmlReader, as determined by the XmlReadMode

The ReadXml method exposes an optional XmlReadMode parameter that determines

how the XML is interpreted. The possible values for XmlReadMode are shown in Table

15-4.

Table 15-4: XmlReadMode Values

Value          Description
Auto           Chooses a read mode based on the contents of the XML
ReadSchema     Reads an inline schema and then loads the data, adding DataTables as necessary
IgnoreSchema   Loads data into an existing DataSet, ignoring any schema information in the XML
InferSchema    Infers a DataSet schema from the XML, ignoring any inline schema information
DiffGram       Reads DiffGram information into an existing DataSet schema
Fragment       Adds XML fragments that match the existing DataSet schema to the DataSet and ignores those that do not

Unless the ReadXml method is passed an XmlReadMode parameter of DiffGram, it does

not merge the data that it reads with existing rows in the DataSet. If a row is read with

the same primary key as an existing row, the method will throw an exception.

A DiffGram is an XML format that encapsulates the current and original versions of an

element, along with any DataRow errors. The nominal structure of a DiffGram is shown

here:

<diffgr:diffgram
    xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
    xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <ElementName>
  </ElementName>
  <diffgr:before>
  </diffgr:before>
  <diffgr:errors>
  </diffgr:errors>
</diffgr:diffgram>

In the real DiffGram, the first section (shown as <ElementName> </ElementName> in

the example) will have the name of the complexType defining the DataRow. The section

contains the current version of the contents of the DataRow. The <diffgr:before> section

contains the original version, while the <diffgr:errors> section contains error information

for the row.

In order for DiffGram to be passed as the XmlReadMode parameter, the data must be in

DiffGram format. If you need to merge XML that is written in standard XML format with

existing data, create a new DataSet and then call the DataSet.Merge method to merge

the two sets of data.
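To make the format concrete, here is a hypothetical DiffGram describing a single modified Categories row (the DataSet, element, and column names are illustrative):

```xml
<diffgr:diffgram
    xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
    xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1">
  <dsMaster>
    <Categories diffgr:id="Categories1" msdata:rowOrder="0"
        diffgr:hasChanges="modified">
      <CategoryID>1</CategoryID>
      <CategoryName>Drinks</CategoryName>
    </Categories>
  </dsMaster>
  <diffgr:before>
    <Categories diffgr:id="Categories1" msdata:rowOrder="0">
      <CategoryID>1</CategoryID>
      <CategoryName>Beverages</CategoryName>
    </Categories>
  </diffgr:before>
</diffgr:diffgram>
```

The current version of the row appears in the first section, while the <diffgr:before> section holds the original values; because this row has no errors, the <diffgr:errors> section is omitted.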

Load XML Data Using ReadXml

Visual Basic .NET

1. In the code editor, select btnReadData in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following code to the event handler:

   Dim newDS As New System.Data.DataSet()

   newDS.ReadXml("data.xml", XmlReadMode.ReadSchema)
   SetBindings(newDS)

The data.xml file contains an inline schema definition, so by passing the ReadSchema XmlReadMode parameter to the ReadXml method, the code instructs the DataSet to first create the DataSet schema and then load the data.

3. Press F5 to run the application.

4. Click Read Data.

The application displays the data retrieved from the file.

5. Close the application.

Visual C# .NET

1. In the form designer, double-click Read Data.

Visual Studio adds the event handler to the code.

2. Add the following code to the event handler:

   System.Data.DataSet newDS = new System.Data.DataSet();

   newDS.ReadXml("data.xml", XmlReadMode.ReadSchema);
   SetBindings(newDS);

The data.xml file contains an inline schema definition, so by passing the ReadSchema XmlReadMode parameter to the ReadXml method, the code instructs the DataSet to first create the DataSet schema and then load the data.

3. Press F5 to run the application.

4. Click Read Data.

The application displays the data retrieved from the file.

5. Close the application.

The WriteXmlSchema Method

As might be expected, the WriteXmlSchema method writes the schema of the DataSet,

including tables, columns, and constraints, to the specified output. The versions of the

method, which accept the same output parameters as the other XML methods, are

shown in Table 15-5.

Table 15-5: WriteXmlSchema Methods

Method                       Description
WriteXmlSchema(Stream)       Writes an XML schema to the specified stream
WriteXmlSchema(String)       Writes an XML schema to the file specified in the string parameter
WriteXmlSchema(TextWriter)   Writes an XML schema to the specified TextWriter
WriteXmlSchema(XmlWriter)    Writes an XML schema to the specified XmlWriter

Create an XML Schema Using WriteXmlSchema

Visual Basic .NET

1. In the code editor, select btnWriteSchema in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.


2. Add the following lines to the event handler:

   Me.dsMaster1.WriteXmlSchema("testSchema.xsd")
   MessageBox.Show("Finished", "WriteXmlSchema")

Because no path is passed to the method, the file will be written to the bin subdirectory of the project directory.

3. Press F5 to run the application.

4. Click Write Schema.

The application displays a message box after the file has been written.

5. Close the message box, and then close the application.

6. Open Microsoft Windows Explorer, navigate to the XML/bin project directory, right-click the testSchema.xsd file, and then select Open with Notepad.

Windows displays the schema file.

7. Close Microsoft Notepad, and return to Visual Studio.

Visual C# .NET

1. In the form designer, double-click Write Schema.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

   this.dsMaster1.WriteXmlSchema("testSchema.xsd");
   MessageBox.Show("Finished", "WriteXmlSchema");

Because no path is passed to the method, the file will be written to the bin subdirectory of the project directory.

3. Press F5 to run the application.

4. Click Write Schema.

The application displays a message box after the file has been written.

5. Close the message box, and then close the application.

6. Open Microsoft Windows Explorer, navigate to the XML/bin/Debug project directory, right-click the testSchema.xsd file, and then select Open with Notepad.

Windows displays the schema file.

7. Close Microsoft Notepad, and return to Visual Studio.

The WriteXml Method

Like the ReadXml method, the DataSet’s WriteXml method writes XML data and,

optionally, DataSet schema information, to a specified output, as shown in Table 15-6.

As we’ll see in the following section, the structure of the XML resulting from the WriteXml

method is controlled by DataSet property settings.

Table 15-6: WriteXml Methods

Method                              Description
WriteXml(Stream)                    Writes XML data to the specified stream
WriteXml(String)                    Writes XML data to the file specified in the string parameter
WriteXml(TextWriter)                Writes XML data to the specified TextWriter
WriteXml(XmlWriter)                 Writes XML data to the specified XmlWriter
WriteXml(Stream, XmlWriteMode)      Writes XML data, schema, or both to the specified stream, as determined by the XmlWriteMode
WriteXml(String, XmlWriteMode)      Writes XML data, schema, or both to the file specified in the string parameter, as determined by the XmlWriteMode
WriteXml(TextWriter, XmlWriteMode)  Writes XML data, schema, or both to the specified TextWriter, as determined by the XmlWriteMode
WriteXml(XmlWriter, XmlWriteMode)   Writes XML data, schema, or both to the specified XmlWriter, as determined by the XmlWriteMode

The valid XmlWriteMode parameters are shown in Table 15-7. The DiffGram parameter causes the output to be written in DiffGram format. If no XmlWriteMode parameter is specified, IgnoreSchema is assumed.

Table 15-7: XmlWriteMode Values

Value          Description
IgnoreSchema   Writes the data without a schema
WriteSchema    Writes the data with an inline schema
DiffGram       Writes the entire DataSet in DiffGram format

Write Data to a File in XML Format

Visual Basic .NET

1. In the code editor, select btnWriteData in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

   Me.daCategories.Fill(Me.dsMaster1.Categories)
   Me.daProducts.Fill(Me.dsMaster1.Products)

   Me.dsMaster1.WriteXml("newData.xml", XmlWriteMode.IgnoreSchema)
   MessageBox.Show("Finished", "WriteXml")

Because no path is passed to the method, the file will be written to the bin subdirectory of the project directory.

3. Press F5 to run the application.

4. Click Write Data.

The application displays a message box after the file has been written.

5. Close the message box, and then close the application.

6. Open Windows Explorer, navigate to the XML/bin project directory, and double-click the newData.xml file.

The XML file opens in Microsoft Internet Explorer.

7. Close Internet Explorer, and return to Visual Studio.

Visual C# .NET

1. In the form designer, double-click Write Data.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

   this.daCategories.Fill(this.dsMaster1.Categories);
   this.daProducts.Fill(this.dsMaster1.Products);

   this.dsMaster1.WriteXml("newData.xml", XmlWriteMode.IgnoreSchema);
   MessageBox.Show("Finished", "WriteXml");

Because no path is passed to the method, the file will be written to the bin subdirectory of the project directory.

3. Press F5 to run the application.

4. Click Write Data.

The application displays a message box after the file has been written.

5. Close the message box, and then close the application.

6. Open Windows Explorer, navigate to the XML/bin/Debug project directory, and double-click the newData.xml file.

The XML file opens in Microsoft Internet Explorer.

7. Close Internet Explorer, and return to Visual Studio.

Controlling How the XML Is Written

By default, the WriteXml method generates XML that is formatted according to the

nominal structure we examined in Chapter 14, with DataTables structured as

complexTypes and DataColumns as elements within them.

This isn't necessarily the output you want. If, for example, you want to read the data back into a DataSet, ADO.NET won't re-create the relationships correctly unless either the schema is present in the XML (an unnecessary overhead in many situations) or the related data is nested hierarchically.

In other situations, you may need to control whether individual columns are written as

elements, attributes, or simple text, or even prevent some columns from being written at

all. This might be the case, for example, if you’re interchanging data with another

application.


Using the Nested Property of the DataRelation

By convention, XML data is usually represented hierarchically—related rows are nested

inside their parent rows.

The Nested property of DataRelation causes the XML to be written so that the child rows

are nested within the parent rows.
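For example, using the Categories and Products tables from the exercises, the difference looks roughly like this (column elements abbreviated for space):

```xml
<!-- Nested = False (the default): child rows follow the parent rows -->
<dsMaster>
  <Categories>
    <CategoryID>1</CategoryID>
  </Categories>
  <Products>
    <ProductID>1</ProductID>
    <CategoryID>1</CategoryID>
  </Products>
</dsMaster>

<!-- Nested = True: each child row is written inside its parent row -->
<dsMaster>
  <Categories>
    <CategoryID>1</CategoryID>
    <Products>
      <ProductID>1</ProductID>
      <CategoryID>1</CategoryID>
    </Products>
  </Categories>
</dsMaster>
```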

Write Related Data Hierarchically

Visual Basic .NET

1. In the code editor, select btnWriteNested in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

   Me.daCategories.Fill(Me.dsMaster1.Categories)
   Me.daProducts.Fill(Me.dsMaster1.Products)

   Me.dsMaster1.Relations("CategoriesProducts").Nested = True
   Me.dsMaster1.WriteXml("nestedData.xml", XmlWriteMode.IgnoreSchema)
   MessageBox.Show("Finished", "WriteXml Nested")

The code sets the Nested property to True before writing the data to the nestedData.xml file.

3. Press F5 to run the application.

4. Click Write Nested.

The application displays a message box after the file has been written.

5. Close the message box, and then close the application.

6. Open Windows Explorer, navigate to the XML/bin project directory, and double-click the nestedData.xml file.

The XML file opens in Internet Explorer.

7. Close Internet Explorer, and return to Visual Studio.

Visual C# .NET

1. In the form designer, double-click Write Nested.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

3. this.daCategories.Fill(this.dsMaster1.Categories);

4. this.daProducts.Fill(this.dsMaster1.Products);

5.

6. this.dsMaster1.Relations[“CategoriesProducts”].Nested = true;

7. this.dsMaster1.WriteXml(“nestedData.xml”,

XmlWriteMode.IgnoreSchema);

MessageBox.Show(“Finished”, “WriteXml Nested”);

The code sets the Nested property to true before writing it to the nestData.xml

file.

8. Press F5 to run the application.

9. Click Write Nested.

The application displays a message box after the file has been written.

10. Close the message box, and then close the application.

11. Open Windows Explorer, navigate to the XML/bin/Debug project

directory, and double-click the nestedData.xml file.

The XML file opens in Internet Explorer.


12. Close Internet Explorer, and return to Visual Studio.

Using the ColumnMapping Property of the DataColumn

The DataColumn’s ColumnMapping property controls how the column will be written by

the WriteXml method. The possible values for the ColumnMapping property are shown in

Table 15-8.

Element, the default value, writes the column as a nested element within the

complexType representing the DataTable, while Attribute writes the column as one of its

attributes. These two values can be freely mixed within any given DataTable. The

Hidden value prevents the column from being written at all.

SimpleContent, which writes the column as a simple text value, cannot be combined with
columns that are written as elements or attributes, nor can it be used if a DataRelation
referencing the table has its Nested property set to true.

Table 15-8: Column MappingType Values

Value          Description
Element        The column is written as an XML element
Attribute      The column is written as an XML attribute
SimpleContent  The contents of the column are written as text
Hidden         The column will not be included in the XML output
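For instance, after the exercise below maps the three Categories columns to MappingType.Attribute, each row in attributes.xml is written along these lines (the values are illustrative, and the root and row element names depend on the DataSet and DataTable names):

```xml
<dsMaster>
  <Categories CategoryID="1"
              CategoryName="Beverages"
              Description="Soft drinks, coffees, teas" />
</dsMaster>
```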

Write Columns as Attributes

Visual Basic .NET

1. In the code editor, select btnAttributes in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.


2. Add the following lines to the event handler:

3. Me.daCategories.Fill(Me.dsMaster1.Categories)

4.

5. With Me.dsMaster1.Categories

6. .Columns("CategoryID").ColumnMapping = MappingType.Attribute

7. .Columns("CategoryName").ColumnMapping = MappingType.Attribute

8. .Columns("Description").ColumnMapping = MappingType.Attribute

9. End With

10. Me.dsMaster1.WriteXml("attributes.xml", XmlWriteMode.IgnoreSchema)

MessageBox.Show("Finished", "Write Attributes")

11. Press F5 to run the application.

12. Click Attributes.

The application displays a message box after the file has been written.

13. Close the message box, and then close the application.

14. Open Windows Explorer, navigate to the XML/bin project directory,

and double-click the attributes.xml file.

The XML file opens in Internet Explorer.

15. Close Internet Explorer, and return to Visual Studio.


Visual C# .NET

1. In the form designer, double-click Attributes.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

3. System.Data.DataTable cat = this.dsMaster1.Categories;

4. this.daCategories.Fill(cat);

5.

6. cat.Columns["CategoryID"].ColumnMapping = MappingType.Attribute;

7. cat.Columns["CategoryName"].ColumnMapping = MappingType.Attribute;

8. cat.Columns["Description"].ColumnMapping = MappingType.Attribute;

9.

10. this.dsMaster1.WriteXml("attributes.xml", XmlWriteMode.IgnoreSchema);

MessageBox.Show("Finished", "Write Attributes");

11. Press F5 to run the application.

12. Click Attributes.

The application displays a message box after the file has been written.

13. Close the message box, and then close the application.

14. Open Windows Explorer, navigate to the XML/bin project directory,

and double-click the attributes.xml file.

The XML file opens in Internet Explorer.


15. Close Internet Explorer, and return to Visual Studio.

The XmlDataDocument Object

Although the relational data model is efficient, there are times when it is convenient to

manipulate a set of data by using the tools provided by XML—the Extensible Stylesheet

Language (XSL), XSLT, and XPath.

The .NET Framework’s XmlDataDocument makes that possible. The XmlDataDocument

allows XML-structured data to be manipulated as a DataSet. It doesn’t create a new set

of data, but rather it creates a DataSet that references all or part of the XML data.

Because there’s only one set of data, changes made in one view will automatically be

reflected in the other view, and of course, memory resources are conserved because

only one copy of the data is being maintained.

Depending on the initial source of your data, you can create an XmlDataDocument

based on the schema and contents of a DataSet, or you can create a DataSet based on

the contents of an XmlDataDocument. In either case, changes made to the data stored

in one view will be reflected in the other view.

To create an XmlDataDocument based on an existing DataSet, pass the DataSet to the

XmlDataDocument constructor:

myXDD = New XmlDataDocument(myDS)

If the DataSet schema has not been established prior to creating the XmlDataDocument,

both schemas must be established manually—schema changes made to one object will

not be propagated to the other object.

Alternatively, to begin with an XML document and create a DataSet, you can use the

default XmlDataDocument constructor and then reference its DataSet property:

myXDD = New XmlDataDocument()

myDS = myXDD.DataSet

If you use this method, you must create the DataSet schema manually by adding objects

to the DataSet’s Tables collection and the DataTable’s Columns collection. In order for

the data in the XmlDataDocument to be available through the DataSet, the DataTable

and DataColumn names must match those in the XmlDataDocument. The matching is

case-sensitive.

The second method, while it requires slightly more code, provides a mechanism for

creating a partial relational view of the XML data. There is no requirement to duplicate

the entire XML schema in the DataSet. Any DataTables or DataColumns that are not in

the DataSet will simply be ignored during DataSet operations.
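As a sketch of such a partial view, suppose the XML below is loaded into an XmlDataDocument whose synchronized DataSet defines only a Categories table with a CategoryName column (the names must match case-sensitively; all names and values here are invented for illustration). The Products elements remain available in the XML view, but never surface in the relational view:

```xml
<dsMaster>
  <Categories>
    <CategoryName>Beverages</CategoryName>
  </Categories>
  <!-- not declared in the DataSet schema, so ignored during DataSet operations -->
  <Products>
    <ProductName>Chai</ProductName>
  </Products>
</dsMaster>
```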


Data can be loaded into either document at any time, before or after synchronization.

Any data changes made to one object, including adding, deleting, or changing values,

will automatically be reflected in the other object.

Create a Synchronized XML View of a DataSet

Visual Basic .NET

1. In the code editor, select btnDocument in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

3. Dim myXDD As System.Xml.XmlDataDocument

4.

5. myXDD = New System.Xml.XmlDataDocument(Me.dsMaster1)

6. myXDD.Load("dataOnly.xml")

7.

SetBindings(Me.dsMaster1)

The first line declares the XmlDataDocument variable, while the second line

synchronizes it with the dsMaster1 DataSet. The third line loads data into the

XmlDataDocument.

The final line binds the form controls to dsMaster1. Because the DataSet has

been synchronized with the myXDD XmlDataDocument, the data loaded into

myXDD will be available in dsMaster1.

8. Press F5 to run the application.

9. Click Document.

The application displays the data in the form.

10. Close the application.

Visual C# .NET

1. In the form designer, double-click Document.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

3. System.Xml.XmlDataDocument myXDD;

4.

5. myXDD = new System.Xml.XmlDataDocument(this.dsMaster1);

6. myXDD.Load("dataOnly.xml");

7.

SetBindings(this.dsMaster1);

The first line declares the XmlDataDocument variable, while the second line

synchronizes it with the dsMaster1 DataSet. The third line loads data into the

XmlDataDocument.

The final line binds the form controls to dsMaster1. Because the DataSet has

been synchronized with the myXDD XmlDataDocument, the data loaded into

myXDD will be available in dsMaster1.

8. Press F5 to run the application.

9. Click Document.

The application displays the data in the form.

10. Close the application.

Chapter 15 Quick Reference

To                                   Do this
Retrieve an XML schema from a        Use the DataSet’s GetXmlSchema method:
DataSet                              XmlSchemaString = myDataSet.GetXmlSchema()

Retrieve data from a DataSet in      Use the DataSet’s GetXml method:
XML format                           XmlDataString = myDataSet.GetXml()

Create a DataSet schema from an      Use the DataSet’s ReadXmlSchema method:
XML schema                           myDataSet.ReadXmlSchema("schema.xsd")

Infer the schema of an XML           Use the DataSet’s InferXmlSchema method:
document                             myDataSet.InferXmlSchema("data.xml", string[])

Load XML data into a DataSet         Use the DataSet’s ReadXml method:
                                     myDataSet.ReadXml("data.xml")

Create an XML schema from a          Use the DataSet’s WriteXmlSchema method:
DataSet                              myDataSet.WriteXmlSchema("schema.xsd")

Write data to an XML document        Use the DataSet’s WriteXml method:
                                     myDataSet.WriteXml("data.xml")

Create a synchronized XML view       Create an instance of an XmlDataDocument that
of a DataSet                         references the DataSet:
                                     Dim myXDD As System.Xml.XmlDataDocument
                                     myXDD = New System.Xml.XmlDataDocument(myDataSet)

Chapter 16: Using ADO in the .NET Framework

Overview

In this chapter, you’ll learn how to:

§ Establish a reference to the ADO and ADOX COM libraries

§ Create an ADO connection

§ Retrieve data from an ADO Recordset

§ Update an ADO Recordset

§ Create a database using ADOX

§ Add a table to a database using ADOX

In the previous two chapters, we examined using XML data with Microsoft ADO.NET

objects. In this chapter, we’ll look at the interface to another type of data, legacy data

objects created by using previous versions of ADO.

We’ll also examine the ADOX library, which provides the ability to create database

objects under programmatic control. This functionality is not available in ADO.NET,

although you can execute DDL statements such as CREATE TABLE on servers that

support them.
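As an example of that alternative, on a server that supports DDL you could send a statement like the following through a data command instead of using ADOX (a sketch in SQL Server syntax; the table and column names are invented):

```sql
CREATE TABLE NewTable (
    TableID  nvarchar(5)  NOT NULL PRIMARY KEY,
    Value    nvarchar(20) NULL
)
```

The trade-off is portability: this statement is tied to one server’s dialect, whereas ADOX lets the OleDb Data Provider generate the appropriate DDL.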

Understanding COM Interoperability

Maintaining interoperability with COM components was one of the design goals of the

Microsoft .NET Framework, and this achievement extends to previous versions of ADO.

By using the COM Interop functions provided by the .NET Framework, you can gain

access to all the objects, methods, and events that are exposed by any COM object

simply by establishing a reference to it. This includes previous versions of ADO and

COM objects that you’ve developed using them.

After the reference has been established, the COM objects behave just as though they

were .NET Framework classes. What happens behind the scenes, of course, is more

complicated. When a reference to any COM object, including ADO or ADOX, is declared,


the .NET Framework creates an interop assembly that handles communication between

the .NET Framework and COM.

The interop assembly handles a number of tasks, but the most important is data type

marshaling. Table 16-1 shows the type conversion performed by the interop assembly

for standard COM value types.

Table 16-1: COM Data Type Marshaling

COM Data Type                .NET Framework Type
bool                         Int32
char, small                  SByte
short                        Int16
long, int                    Int32
hyper                        Int64
unsigned char, byte          Byte
wchar_t, unsigned short      UInt16
unsigned long, unsigned int  UInt32
unsigned hyper               UInt64
float                        Single
double                       Double
VARIANT_BOOL                 Boolean
void *                       IntPtr
HRESULT                      Int16 or IntPtr
SCODE                        Int32
BSTR                         String
LPSTR                        String
LPWSTR                       String
VARIANT                      Object
DECIMAL                      Decimal
DATE                         DateTime
GUID                         Guid
CURRENCY                     Decimal
IUnknown *                   Object
IDispatch *                  Object
SAFEARRAY(type)              type[]


Using ADO in the .NET Framework

In addition to the generic COM interoperability and data type marshaling provided by the

.NET Framework for all COM objects, the .NET Framework provides specific support for

the ADO and ADOX libraries, and COM objects built using them.

This additional support includes data marshaling for core ADO data types. The .NET

Framework equivalents for core ADO types are shown in Table 16-2. Of course, after a

reference to ADO is established, complex types such as Recordset and ADO Connection

become available through the ADO component.

Table 16-2: ADO Data Type Marshaling

ADO Data Type       .NET Framework Type
adEmpty             null
adBoolean           Int16
adTinyInt           SByte
adSmallInt          Int16
adInteger           Int32
adBigInt            Int64
adUnsignedTinyInt   promoted to Int16
adUnsignedSmallInt  promoted to Int32
adUnsignedInt       promoted to Int64
adUnsignedBigInt    promoted to Decimal
adSingle            Single
adDouble            Double
adCurrency          Decimal
adDecimal           Decimal
adNumeric           Decimal
adDate              DateTime
adDBDate            DateTime
adDBTime            DateTime
adDBTimeStamp       DateTime
adFileTime          DateTime
adGUID              Guid
adError             ExternalException
adIUnknown          object
adIDispatch         object
adVariant           object
adPropVariant       object
adBinary            byte[]
adChar              string
adWChar             string
adBSTR              string
adChapter           not supported
adUserDefined       not supported
adVarNumeric        not supported

Establishing a Reference to ADO

The first step in using a previous version of ADO, or a COM component that references a

previous version, is to set a reference to the component. There are several methods for

exposing the ADO component, but the most convenient is to simply add the reference

within Microsoft Visual Studio .NET.

Add References to the ADO and ADOX Libraries

1. In Visual Studio, open the ADOInterop project from the Start page or

the File menu.

2. In the Solution Explorer, double-click ADOInterop.vb (or

ADOInterop.cs if you’re using C#).

Visual Studio displays the form in the form designer.

3. On the Project menu, select Add Reference.

Visual Studio opens the Add Reference dialog box.


4. On the COM tab, select the component named Microsoft ActiveX Data

Objects 2.1 Library, and then click Select.

5. Select the component named Microsoft ADO Ext. 2.7 for DDL and

Security, and then click Select.

6. Click OK.

Visual Studio closes the dialog box and adds the references to the project.

7. In the Solution Explorer, expand the references node.


Visual Studio displays the new references.

Creating ADO Objects

After the references to the ADO components have been established, ADO objects can

be created and their properties set just like any object exposed by the .NET Framework

class library.

Like ADO.NET, ADO uses a Connection object to represent a unique session with a data

source. The most important property of an ADO connection, just like an ADO.NET

connection, is the ConnectionString, which establishes the Data Provider, the database

information, and, if appropriate, the user information.
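For the Jet data source used in this chapter’s exercises, those pieces of information reduce to a Provider clause and a Data Source clause, with no user information required (the path is a placeholder you supply):

```
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=<path to nwind.mdb>;
```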

Create an ADO Connection

Visual Basic .NET

1. Press F7 to open the code editor.

2. Add the following procedure, specifying the complete path for the

dsStr text value:

3. Private Function create_connection() As ADODB.Connection

4. Dim dsStr As String

5. Dim dsCn As String

6. Dim cn As New ADODB.Connection()

7.

8. dsStr = "<<Specify the path to the Access nwind sample db here>>"

9. dsCn = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & _

10. dsStr & ";"

11. cn.ConnectionString = dsCn

12.

13. Return cn

14.

End Function

Visual C# .NET

1. Press F7 to open the code editor.

2. Add the following procedure, specifying the complete path for the

dsStr text value:

3. private ADODB.Connection create_connection()

4. {

5. string dsStr;

6. string dsCn;

7.

8. ADODB.Connection cn = new ADODB.Connection();

9. dsStr = "<<Specify the path to the Access nwind sample db here>>";

10. dsCn = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" +

11. dsStr + ";";

12. cn.ConnectionString = dsCn;

13.

14. return cn;

}

This function simply creates an ADO connection and returns it to the caller.

We’ll use the function to simplify creating connections in later exercises.

(ConnectionStrings can be tedious to type.)

In addition to support for ADO data types, the OleDbDataAdapter provides direct support

for ADO Recordsets by exposing the Fill method that accepts an ADO Recordset as a

parameter. There are two versions of the method, as shown in Table 16-3.

Table 16-3: OleDbDataAdapter Fill Methods

Method                               Description
Fill(DataTable, Recordset)           Adds or refreshes rows in the DataTable to match
                                     those in the Recordset
Fill(DataSet, Recordset, DataTable)  Adds or refreshes rows in the DataTable in the
                                     specified DataSet to match those in the Recordset

If the DataTable passed to the Fill method doesn’t exist in the DataSet, it is created

based on the schema of the ADO Recordset. Unless primary key information exists, the


rows in the ADO Recordset will simply be added to the DataTable. If primary key

information does exist, matching rows in the ADO Recordset will be merged with those in

the DataTable.

Retrieve Data from an ADO Recordset

Visual Basic .NET

1. In the code editor, select btnOpen in the Control Name combo box,

and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

3. Dim rs As New ADODB.Recordset()

4. Dim cnADO As ADODB.Connection

5. Dim daTemp As New OleDb.OleDbDataAdapter()

6.

7. cnADO = create_connection()

8. cnADO.Open()

9.

10. rs.Open("Select * From CategoriesByName", cnADO)

11. daTemp.Fill(Me.dsCategories1.Categories, rs)

12. cnADO.Close()

13.

SetBindings(Me.dsCategories1)

The first three lines declare an ADO Recordset, an ADO Connection, and an

OleDbDataAdapter. The next two lines call the create_connection function

that we created in the previous exercise to create the ADO Connection object,

and then open the connection.

The next three lines open the ADO Recordset, load the rows into the
DataAdapter, and then close the ADO Connection, while the final line calls a

function (in the Utility Functions region of the code editor) that binds the

form’s text boxes to the specified DataSet.

14. Press F5 to run the application.

15. Click Open ADO.

The application loads the data from ADO and displays it in the form’s text

boxes.

16. Close the application.


Visual C# .NET

1. In the form designer, double-click Open ADO.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

3. ADODB.Recordset rs = new ADODB.Recordset();

4. ADODB.Connection cnADO;

5.

6. System.Data.OleDb.OleDbDataAdapter daTemp =

7. new System.Data.OleDb.OleDbDataAdapter();

8. cnADO = create_connection();

9.

10. cnADO.Open(cnADO.ConnectionString, "", "", -1);

11. rs.Open("Select * From CategoriesByName",

12. cnADO, ADODB.CursorTypeEnum.adOpenForwardOnly,

13. ADODB.LockTypeEnum.adLockOptimistic, 1);

14. daTemp.Fill(this.dsCategories1.Categories, rs);

15.

16. cnADO.Close();

17. SetBindings(this.dsCategories1);

The first three lines declare an ADO Recordset, an ADO Connection, and an

OleDbDataAdapter. The next two lines call the create_connection function

that we created in the previous exercise to create the ADO Connection object,

and then open the connection.

The next three lines open the ADO Recordset, load the rows into the
DataAdapter, and then close the ADO Connection, while the final line calls a

function (in the Utility Functions region of the code editor) that binds the

form’s text boxes to the specified DataSet.

18. Press F5 to run the application.

19. Click Open ADO.

The application loads the data from ADO and displays it in the form’s text

boxes.

20. Close the application.

The OleDbDataAdapter’s Fill method provides a convenient mechanism for loading data

from an ADO Recordset into a .NET Framework DataTable, but unfortunately, the

communication is one-way. The .NET Framework doesn’t provide a direct method for

updating an ADO Recordset based on ADO.NET data.


Fortunately, it isn’t difficult to update an ADO data source from within the .NET

Framework—simply copy the data values from the appropriate source and use the

intrinsic ADO functions to do the update.

Update an ADO Recordset

Visual Basic .NET

1. In the code editor, select btnUpdate in the Control Name combo box,

and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

3. Dim rsADO As New ADODB.Recordset()

4. Dim cnADO As ADODB.Connection

5.

6. cnADO = create_connection()

7. cnADO.Open()

8. rsADO.ActiveConnection = cnADO

9. rsADO.Open("Select * From CategoriesByName", cnADO, _

10. ADODB.CursorTypeEnum.adOpenDynamic, _

11. ADODB.LockTypeEnum.adLockOptimistic)

12.

13. rsADO.AddNew()

14. rsADO.Fields("CategoryName").Value = "Test"

15. rsADO.Fields("Description").Value = "Description"

16. rsADO.Update()

17.

18. rsADO.Close()

19. cnADO.Close()

MessageBox.Show("Finished", "Update")

As always, the first few lines declare some local values. The next five lines

create a connection and an ADO Recordset. The next four lines use ADO’s

AddNew and Update methods to create a new row and set its values. Finally,

the Recordset and ADO Connection are closed, and a message box is

displayed.

20. Press F5 to run the application.

21. Click Update ADO.

The application adds the row to the database, and then displays a message
box telling you that the new row has been added.

22. Close the message box.

23. Click Open ADO to load the data into the form, and then click the

Last (“>|”) button to display the last row.

The application displays the new row.


24. Close the application.

25. If you have Microsoft Access, open the nwind database and confirm

that the row has been added.

Visual C# .NET

1. In the form designer, double-click Update ADO.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler:

3. ADODB.Recordset rsADO = new ADODB.Recordset();

4. ADODB.Connection cnADO;

5.

6. cnADO = create_connection();

7. cnADO.Open(cnADO.ConnectionString, "", "", -1);

8.

9. rsADO.ActiveConnection = cnADO;

10. rsADO.Open("Select * From CategoriesByName", cnADO,

11. ADODB.CursorTypeEnum.adOpenDynamic,

12. ADODB.LockTypeEnum.adLockOptimistic, -1);

13.

14. rsADO.AddNew(Type.Missing, Type.Missing);

15. rsADO.Fields[1].Value = "Test";

16. rsADO.Fields[2].Value = "Description";

17. rsADO.Update(Type.Missing, Type.Missing);

18.

19. rsADO.Close();

20. cnADO.Close();

MessageBox.Show("Finished", "Update");


As always, the first few lines declare some local values. The next five lines

create a connection and an ADO recordset. The next four lines use ADO’s

AddNew and Update methods to create a new row and set its values. Finally,

the recordset and ADO connection are closed, and a message box is

displayed.

21. Press F5 to run the application.

22. Click Update ADO.

The application adds the row to the database, and then displays a message
box telling you that the new row has been added.

23. Close the message box.

24. Click Open ADO to load the data into the form, and then click the

Last (“>|”) button to display the last row.

The application displays the new row.

25. Close the application.

26. If you have Access, open the nwind database and confirm that the

row has been added.

Using ADOX in the .NET Framework

ADOX, more formally the “Microsoft ADO Extensions for DDL and Security,” exposes an

object model that allows data source objects to be created and manipulated.

The ADOX object model is shown in the following figure. Not all data sources support all

of the objects in the model; this is determined by the specific OleDb Data Provider.


The top-level object, Catalog, equates to a specific data source. This will almost always

be a database, but specific OleDb Data Providers might expose different objects. The

Groups and Users collections control access security for those data sources that

implement it.

The Tables object represents the tables within the database. Each table contains a

Columns collection, which represents individual fields in the table; an Indexes collection,

which represents physical indexes; and a Keys collection, which is used to define

unique, primary, and foreign keys.

The Procedures collection represents stored procedures on the data source, while the

Views collection represents Views or Queries. This model doesn’t always match the

object model of the data source. For example, Microsoft Jet (the underlying data source

for Access) represents both Views and Procedures as Query objects. When mapped to

an ADOX Catalog, any query that updates or inserts rows, along with any query that

contains parameters, is mapped to a Procedure object. Queries that consist solely of

SELECT statements are mapped to Views.

Creating Database Objects Using ADOX

As we’ve seen, ADOX provides a mechanism for creating data source objects

programmatically. ADO.NET doesn’t support this functionality. You can, of course,

execute a CREATE <object> SQL statement using an ADO.NET DataCommand, but

data definition syntax varies wildly between data sources, so it will often be more

convenient to use ADOX and let the OleDb Data Provider handle the operation.

The Catalog object supports a Create method that creates a new database, while the

Tables and Columns collections support Append methods that are used to create new

schema objects.

Create a Database Using ADOX

Visual Basic .NET

1. In the code editor, select btnMakeDB in the Control Name combo box,

and then select Click in the Method Name combo box.


Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler, specifying the path to the

Sample DBs directory on your system where indicated:

3. Dim dsStr, dsCN As String

4. Dim cnADO As New ADODB.Connection()

5. Dim mdb As New ADOX.Catalog()

6.

7. dsStr = "<<specify the path to the Sample DBs directory>>" _

8. + "\test.mdb"

9. dsCN = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & dsStr & ";"

10. cnADO.ConnectionString = dsCN

11.

12. mdb.Create(dsCN)

13.

14. mdb.ActiveConnection.Close()

MessageBox.Show("Finished", "Make DB")

15. Press F5 to run the application, and then click Make DB.

The application creates a Jet database named Test in the Sample DBs
directory and then displays a message box.

16. Close the message box, and then close the application.

17. Verify that the new database has been added using Microsoft

Windows Explorer.


Visual C# .NET

1. In the form designer, double-click Make DB.

Visual Studio adds the event handler to the code.

2. Add the following lines to the event handler, specifying the path to the

Sample DBs directory on your system where indicated:

3. string dsStr, dsCN;

4. ADODB.Connection cnADO = new ADODB.Connection();

5. ADOX.Catalog mdb = new ADOX.Catalog();

6.

7. dsStr = "<<specify the path to the Sample DBs directory>>"

8. + "\\test.mdb";

9. dsCN = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dsStr + ";";

10. cnADO.ConnectionString = dsCN;

11.

12. mdb.Create(dsCN);

13.

14. MessageBox.Show("Finished", "Make DB");

15. Press F5 to run the application, and then click Make DB.

The application creates a Jet database named Test in the Sample DBs
directory and then displays a message box.

16. Close the message box, and then close the application.

17. Verify that the new database has been added using Microsoft

Windows Explorer.

Add a Table to a Database Using ADOX

Visual Basic .NET

1. In the code editor, select btnMakeTable in the Control Name combo

box, and then select Click in the Method Name combo box.

Visual Studio adds the event handler to the code.

2. Add the following code to the event handler:

3. Dim cnADO As ADODB.Connection

4. Dim mdb As New ADOX.Catalog()

5. Dim dt As New ADOX.Table()

6.

7. cnADO = create_connection()

8. cnADO.Open()

9. mdb.ActiveConnection = cnADO

10.

11. With dt

12. .Name = "New Table"

13. .Columns.Append("TableID", ADOX.DataTypeEnum.adWChar, 5)

14. .Columns.Append("Value", ADOX.DataTypeEnum.adWChar, 20)

15. .Keys.Append("PK_NewTable", ADOX.KeyTypeEnum.adKeyPrimary, _

16. "TableID")

17. End With

18. mdb.Tables.Append(dt)

19.

20. mdb.ActiveConnection.Close()

MessageBox.Show("Finished", "Make Table")

21. Press F5 to run the application, and then click Make Table.

The application adds the table to the nwind database and displays a message

box telling you that the new table has been added.

22. Close the message box, and then close the application.

23. If you have Access, open the nwind database and confirm that the

new table has been added.

Visual C# .NET

1. In the form designer, double-click Make Table.

Visual Studio adds the event handler to the code.

2. Add the following code to the event handler:

3. ADODB.Connection cnADO;

4. ADOX.Catalog mdb = new ADOX.Catalog();

5. ADOX.Table dt = new ADOX.Table();

6.

7. cnADO = create_connection();

8. cnADO.Open(cnADO.ConnectionString, "", "", -1);

9. mdb.ActiveConnection = cnADO;

10.

11. dt.Name = "New Table";

12. dt.Columns.Append("TableID", ADOX.DataTypeEnum.adWChar, 5);

13. dt.Columns.Append("Value", ADOX.DataTypeEnum.adWChar, 20);

14. dt.Keys.Append("PK_NewTable", ADOX.KeyTypeEnum.adKeyPrimary, "TableID");

15. mdb.Tables.Append(dt);

16.

17. MessageBox.Show("Finished", "Make Table");

18. Press F5 to run the application, and then click Make Table.

The application adds the table to the nwind database and displays a message

box telling you that the new table has been added.

19. Close the message box, and then close the application.

20. If you have Access, open the nwind database and confirm that the

new table has been added.

Chapter 16 Quick Reference

To                              Do this
Establish a reference to an     On the Project menu, choose Add Reference, select
ADO or ADOX library             the library from the COM tab of the Add Reference
                                dialog box, click Select, and then click OK

Create an ADO object            Reference the ADO COM library, and then use the
                                usual .NET Framework object creation commands

Load data from an ADO           Use the DataAdapter’s Fill method:
Recordset into an ADO.NET       myDataAdapter.Fill(DataTable, ADORecordset)
DataSet

Update an ADO Recordset         Open the ADO Connection and ADO Recordset, and
                                then use the AddNew or Update methods

Create a database using ADOX    Use the ADOX Catalog object’s Create method:
                                adoxCatalog.Create(connectionString)

Add a table to a database       Use the Append method of the ADOX Catalog
using ADOX                      object’s Tables collection:
                                adoxCatalog.Tables.Append(adoxTable)

List of Tables

Chapter 2: Creating Connections

Table 2-1: Connection Constructors

Table 2-2: OleDbConnection Properties

Table 2-3: SqlConnection Properties

Table 2-4: Connection Methods

Table 2-5: Connection States

Chapter 3: Data Commands and the DataReader

Table 3-1: Command Constructors

Table 3-2: Data Command Properties

Table 3-3: CommandType Values

Table 3-4: UpdatedRowSource Values

Table 3-5: Parameters Collection Methods

Table 3-6: Command Methods

Table 3-7: CommandBehavior Values

Table 3-8: DataReader Properties

Table 3-9: DataReader Methods

Table 3-10: GetType Methods

Chapter 4: The DataAdapter

Table 4-1: DataAdapter Properties

Table 4-2: MissingMappingAction Values

Table 4-3: MissingSchemaAction Values

Table 4-4: DbDataAdapter Fill Methods

Table 4-5: OleDbDataAdapter Fill Methods

Table 4-6: DbDataAdapter Update Methods

Table 4-7: RowUpdatingEventArgs Properties

Chapter 5: Transaction Processing in ADO.NET

Table 5-1: Connection BeginTransaction Methods

Table 5-2: Additional SQL BeginTransaction Methods

Table 5-3: Isolation Levels

Table 5-4: Transaction BeginTransaction Methods

Chapter 6: The DataSet

Table 6-1: DataSet Constructors

Table 6-2: DataSet Properties

Table 6-3: Primary DataSet Methods

Chapter 7: The DataTable

Table 7-1: DataTable Constructors

Table 7-2: DataSet Add Table Methods

Table 7-3: DataTable Properties

Table 7-4: DataColumn Constructors

Table 7-5: DataColumn Properties

Table 7-6: DataRow Properties

Table 7-7: Rows.Add Methods

Table 7-8: DataRowState Values

Table 7-9: Constraint Properties

Table 7-10: ForeignKeyConstraint Properties

Table 7-11: Action Rules


Table 7-12: UniqueConstraint Properties

Table 7-13: DataTable Methods

Table 7-14: DataRow Methods

Table 7-15: DataTable Events

Chapter 8: The DataView

Table 8-1: DataRowView Properties

Table 8-2: DataView Constructors

Table 8-3: DataView Properties

Table 8-4: Aggregate Functions

Table 8-5: Comparison Operators

Table 8-6: Arithmetic Operators

Table 8-7: Special Functions

Table 8-8: DataViewRowState Values

Table 8-9: DataView Methods

Chapter 9: Editing and Updating Data

Table 9-1: DataRowStates

Table 9-2: DataRowVersions

Table 9-3: Remove Methods

Table 9-4: DataRow Item Properties

Table 9-5: DbDataAdapter Update Methods

Table 9-6: UpdateRowSource Values

Chapter 10: ADO.NET Data-Binding in Windows Forms

Table 10-1: BindingContext Properties

Table 10-2: CurrencyManager Properties

Table 10-3: CurrencyManager Methods

Table 10-4: CurrencyManager Events

Table 10-5: Binding Properties

Table 10-6: BindingMemberInfo Properties

Table 10-7: Binding Events

Chapter 11: Using ADO.NET in Windows Forms

Table 11-1: ConvertEventArgs Properties

Chapter 12: Data-Binding in Web Forms

Table 12-1: Eval Methods

Chapter 13: Using ADO.NET in Web Forms

Table 13-1: ItemCommand Event Arguments

Table 13-2: DataGrid Column Types

Table 13-3: DataGrid Events

Table 13-4: DataGrid Paging Methods

Table 13-5: Validation Controls

Chapter 14: Using the XML Designer

Table 14-1: Microsoft Schema Extension Properties

Table 14-2: XML Schema Properties

Table 14-3: Referential Integrity Rules

Table 14-4: XML Schema Element Properties

Table 14-5: Microsoft Schema Extension Element Properties

Table 14-6: Simple Type Derivation Methods

Table 14-7: Data Type Facets

Table 14-8: Element Group Types

Table 14-9: Attribute Properties

Chapter 15: Reading and Writing XML

Table 15-1: ReadXmlSchema Methods

Table 15-2: InferXmlSchema Methods

Table 15-3: ReadXml Methods

Table 15-4: ReadXMLMode Values

Table 15-5: WriteXmlSchema Methods

Table 15-6: WriteXml Methods

Table 15-7: WriteXMLMode Values


Table 15-8: Column MappingType Values

Chapter 16: Using ADO in the .NET Framework

Table 16-1: COM Data Type Marshaling

Table 16-2: ADO Data Type Marshaling

Table 16-3: OleDbDataAdapter Fill Methods

List of Sidebars

Chapter 2: Creating Connections

Database References

Using Dynamic Properties

Connection Pooling

Chapter 8: The DataView

DataViewManagers

Chapter 9: Editing and Updating Data

Concurrency

Chapter 10: ADO.NET Data-Binding in Windows Forms

Data Sources

Chapter 12: Data-Binding in Web Forms

Data Sources


Dynamic Data Center Guidance for Hosting Providers


© 2009 Microsoft Corporation. All rights reserved. The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication and is subject to change at any time without notice to you. This document and its contents are provided AS IS without warranty of any kind, and should not be interpreted as an offer or commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS DOCUMENT.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

Microsoft, Active Directory, Hyper-V, Silverlight, SQL Server, Windows, Windows PowerShell, and Windows Server are trademarks of the Microsoft group of companies.

All other trademarks are property of their respective owners.

The descriptions of other companies’ products in this document, if any, are provided only as a convenience to you. Any such references should not be considered an endorsement or support by Microsoft. Microsoft cannot guarantee their accuracy, and the products may change over time. Also, the descriptions are intended as brief highlights to aid understanding, rather than as thorough coverage. For authoritative descriptions of these products, please consult their respective manufacturers.

Microsoft will not knowingly provide advice that conflicts with local, regional, or international laws; however, it is your responsibility to confirm that your implementation of any advice is in accordance with all applicable laws.


Delivering Business-Critical Solutions with SharePoint 2010



Disclaimer

The information contained in this document represents the current plans of Microsoft Corporation on the issues presented at the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication. Schedules and features contained in this document are subject to change.

Unless otherwise noted, the companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted in examples herein are fictitious. No association with any real company, organization, product, domain name, e-mail address, logo, person, place, or event is intended or should be inferred.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give any license or rights to these patents, trademarks, copyrights, or other intellectual property.

© 2011 Microsoft Corporation. All rights reserved.

Microsoft, the Microsoft logo, Access, Excel, Outlook, SharePoint, Visio, and other product names are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

All other trademarks are property of their respective owners.

 

 

Table of Contents

Who Should Read This White Paper?

Challenge: Siloed Information and Processes Limit Business Performance and Consume IT Resources

Surface LOB Data in SharePoint 2010

Business Connectivity Services

Implement, Extend, and Improve Business Processes by Building Solutions on SharePoint 2010

Business Solutions in SharePoint 2010

IT-Managed Solutions

Advanced User and Information Worker Solutions

Increase Productivity with Enterprise Search and Business Intelligence

Use the SharePoint Platform to Speed ROI and Decrease Business Risk

Conclusion

Resources

 

Who Should Read This White Paper?

Business users at every level of your organization should have access to important data and be connected to the processes that enable them to support operations. However, critical business data is often stored in disparate systems, and ad hoc processes undermine efficiency. Users call on IT to help them reach and reconcile this business data, which can divert IT resources away from the strategic work that positions IT as a business partner rather than a cost center.

This white paper is intended for Chief Information Officers, Chief Technical Officers, infrastructure managers and information system managers who want to deliver business-critical solutions while driving business value and reducing business risk. These solutions can be implemented across the organization and empower business users to get more value out of line-of-business (LOB) systems, thus extending the reach of important business data and improving business processes. This can be accomplished by connecting LOB applications to Microsoft® SharePoint® 2010 to organize and facilitate broad access to previously siloed information.

This white paper explains how to:

  • Increase access to critical backend business data by surfacing it in SharePoint 2010.
  • Enhance the effectiveness of business processes by building on SharePoint 2010 and using its platform capabilities.
  • Deliver fast return on investment (ROI) by lowering business risk, decreasing training costs and enhancing compliance.

Challenge: Siloed Information and Processes Limit Business Performance and Consume IT Resources

Most business-based IT applications are built for and deployed within vertical business functions, such as product lifecycle management (PLM) for design and engineering, customer relationship management (CRM) for sales and service, and enterprise resource planning (ERP) for finance and human resources.

Your organization is familiar with—or has even deployed—a number of these solutions, like PeopleSoft, JDEdwards, Oracle Financials or SAP. These applications support structured decision making and processes within the business function; nevertheless, silos exist between the applications and organizational functions and processes. This environment can make it difficult for business users to drive collaborative decision making that crosses business units and spans multiple business functions.

Users who need access to the data that resides in these systems make frequent requests to IT to extract the information and organize it. These requests are compounded by users’ demands for anywhere, anytime access on a variety of devices, like laptops, smartphones, and tablets. This leaves IT to solve configuration, security, and compliance concerns, while stretching an already slim IT budget.

These issues become less critical when data is surfaced through a unified platform. SharePoint 2010 can connect a broad range of users to business data that currently resides in siloed systems and is accessible only to specialized users and IT professionals. It also can empower every user—from information workers to power users and professional developers—to build solutions based on this business data, solutions that streamline processes and result in better, faster decisions by the organization.

Surface LOB Data in SharePoint 2010

SharePoint 2010 offers a variety of ways for your organization to surface information buried in siloed LOB systems, so you can quickly begin to improve processes by building solutions that span departmental and even cross-organizational boundaries. For the purpose of this paper, we will highlight Business Connectivity Services (BCS), the SharePoint 2010 technology that provides the core line-of-business connectivity capabilities. However, there are other options for connecting your LOB applications to SharePoint, including Web Services using Windows Communication Foundation (WCF), and more.

Business Connectivity Services

Business Connectivity Services in SharePoint 2010 enables connectivity to external data sources, such as databases and LOB systems (Figure 1). When your LOB systems are connected to SharePoint, users can interact with business data from within the familiar Microsoft Office and SharePoint user interface, so they do not need to learn many complex applications to get their jobs done. This also enables IT to use a unified platform across a range of LOB systems to simplify administration and support.


Figure 1: BCS architecture diagram

Specifically, Business Connectivity Services can help your organization:

  • Bring external data into SharePoint and Office, helping users to read, edit, and write LOB data in familiar tools such as Microsoft Outlook®, Excel®, and Word. Example: An organization brings inventory data from its ERP system into SharePoint to give sales the ability to update the information in real time based on order changes.
  • Address users’ collaboration needs by extending SharePoint capabilities and the Office user experience to include business data and processes. Example: A manufacturing plant foreman searches SharePoint to identify his peers in other plants so that he can reach out and discuss a machinery issue.
  • Create fast, incremental user-driven solutions like workflows and templates that position IT as a strategic asset within the organization. Example: The service department leverages a pre-built workflow for problem resolution that reduces response time and increases customer satisfaction.
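The pattern behind these capabilities, an external content type whose read and write operations map onto a backing LOB system, can be sketched schematically. The following Python model is purely illustrative and is not the SharePoint API; the class, the toy inventory data, and the method names are assumptions, though the names deliberately echo the real BDC operation stereotypes (Finder, SpecificFinder, Creator, Updater).

```python
# Schematic (non-SharePoint) model of a BCS external content type:
# operations named after the BDC method stereotypes map onto a backing store.
class ExternalContentType:
    """Illustrative stand-in for a BCS external content type."""

    def __init__(self, name, backend):
        self.name = name
        self.backend = backend  # any dict-like LOB store: key -> record

    def finder(self):                  # "Finder": list all entities
        return list(self.backend.values())

    def specific_finder(self, key):    # "SpecificFinder": read one entity
        return self.backend[key]

    def creator(self, key, record):    # "Creator": write a new entity
        self.backend[key] = record
        return record

    def updater(self, key, **fields):  # "Updater": modify an entity
        self.backend[key].update(fields)
        return self.backend[key]

# A toy ERP inventory table standing in for the LOB system.
inventory = ExternalContentType("Inventory", {
    1001: {"part": "gear", "on_hand": 40},
    1002: {"part": "shaft", "on_hand": 12},
})
inventory.updater(1002, on_hand=8)      # sales adjusts stock after an order
print(inventory.specific_finder(1002))  # → {'part': 'shaft', 'on_hand': 8}
```

In the real product, the mapping from these operations to the backend (a database, a WCF service, or a .NET assembly) is declared in a BDC model rather than coded by hand, which is what lets Office and SharePoint surfaces read and write the data without custom UI work.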

Implement, Extend, and Improve Business Processes by Building Solutions on SharePoint 2010

SharePoint 2010 allows IT to focus on executing high-priority projects that deliver strategic business advantages, while maintaining a stable infrastructure. This is because SharePoint provides the right environment for IT to meet business demands by enabling business-critical processes within and across organizational boundaries. In turn, users can become more empowered, translating into increased efficiencies for IT and improved productivity for your organization.

Certainly, user empowerment is key to keeping your organization agile and productive. You can increase organizational agility by using the SharePoint platform to help users access, visualize, and consume the business-critical data currently locked in LOB applications. Plus, IT can maintain control through centralized management and security tools, such as data storage management, backup, versioning, and records management.

It is important to remember that to achieve this level of business benefit, your organization must have a deployment plan that prioritizes business needs. By focusing on business needs, you can help to ensure that SharePoint 2010 is broadly adopted across the organization, which ultimately means that the right information can be delivered to the right people at the right time.

Business Solutions in SharePoint 2010

After connecting external systems to SharePoint 2010, you can begin to build solutions to improve the processes that are crucial to organizational success. SharePoint 2010 offers solution models to help your organization develop, improve, and extend business processes:

  • IT-managed solutions
  • Advanced user and information worker solutions

IT-Managed Solutions

When implementing or extending business-critical processes based on data from operational systems, IT most often leads the way, ensuring that the right methodologies, security models, and governance approach are applied. Nevertheless, these IT-led projects tend to be both complex and costly, which can make it hard for organizations to take on such challenges.

With SharePoint 2010, IT does not need to develop web applications from the ground up. Instead, they can use platform services to quickly create robust custom solutions. The typical application development lifecycle is a time-consuming and costly endeavor. Each application needs its own security model, workflow engine, repository for storing information, and more. SharePoint provides all of these capabilities out of the box. By building applications on top of SharePoint, your organization can get started faster and deliver value to the business more quickly.

The SharePoint partner ecosystem gives organizations access to an extensive range of solutions from independent software vendors (ISVs) as an alternative to writing custom code, thereby providing sophisticated solutions in a prebuilt package. In addition to the partner ecosystem, a community of systems integrators (SIs) can help users to plan and deploy these types of solutions.

Advanced User and Information Worker Solutions

Advanced users have a deeper understanding of the tools and technologies that IT professionals often use to develop and deploy solutions (for example, Microsoft SharePoint Designer, Microsoft Access® Services, and Microsoft Visio® Services). SharePoint integrates with design tools to give advanced users greater flexibility in building solutions, while still promoting quality assurance and allowing IT professionals to maintain control over finished products. Examples of advanced user solutions include creating a business connection from SharePoint to an LOB system and creating custom workflows to automate tedious business processes (Figure 2). To support the creation of these types of solutions, IT needs to identify advanced users and train them on self-service best practices, and establish governance to define how advanced user solutions are built and deployed.


Figure 2: Example of a workflow for an advanced user solution

 

Scenario: Advanced User Solution

Frank Zhang, a customer service representative for an engineer-to-order company, uses a SharePoint solution for order entry and change management. Business Connectivity Services provides integration with customer data, product catalogs, engineering specifications, on-hand stock availability, and pricing and discount information dynamically linked to various LOB sources, including master data repositories, engineering, and ERP.

Before implementing this process through SharePoint, Frank needed to access and assemble data from various systems to complete the order placement process. This often required exchanging multiple emails and Excel workbooks among various departments, such as Engineering, Manufacturing, and Finance. With simplified and integrated access to all needed data in his SharePoint-based workplace, Frank now typically can manage order completion on his own, with fewer errors and delays. When needed, automated workflows take care of the process across various cross-functional teams, with shared information maintained in a single workspace (such as through Excel Services), helping to eliminate the need for inefficient ad hoc communications.

Information workers can take advantage of out-of-the-box capabilities in SharePoint 2010 that increase productivity. For instance, they can leverage workflows and customizable views of their critical business data created by IT or advanced users. SharePoint 2010 also can create forms automatically based on templates within SharePoint and other Microsoft Office applications (Figure 3).


Figure 3: Example of a form for an information worker solution

Further, information workers can create lists and document libraries that allow them to collect information, collaborate on documents, and share information easily. IT is tasked with publishing templates for the most common solutions and with teaching users best practices for creating lists and collecting information. For additional information about the advantages of user-created solutions, refer to the resources at the end of this paper.

Scenario: Information Worker Solution

Nina Vietzen, a customer service representative, used Microsoft SharePoint and Word to create her own solution for tracking customer inquiries and associated documentation. Custom templates in Word allow Nina to start with a standard document, while a centralized SharePoint framework provides document versioning, document metadata, and backup and restore.

Before Nina implemented this process through SharePoint, users had to create new Word documents for each customer inquiry and store them in a file share without version control. Now, versioning, metadata and search, and central backups provide Nina and her colleagues with a time-saving solution that keeps documents safe.

Increase Productivity with Enterprise Search and Business Intelligence

SharePoint 2010 includes multiple capabilities with built-in security and manageability that IT can deploy to help improve business user productivity based on accessing and visualizing the business data. Two of these key capabilities are Search and Insights.

SharePoint Search enables cross-platform search to help business users consume and manage important business data. SharePoint 2010 Search provides an interactive, visual search experience. Visual cues help people find information quickly, while refiners let them drill into the results and discover insights.

Example: An account manager receives a customer request to adjust a custom order. Before responding to her customer, she must determine whether any of her organization’s warehouses have the items in stock to amend the order. Her ERP system is connected to SharePoint 2010, so she opens up her team portal, searches for the part, and finds that it is available in two warehouses.

SharePoint 2010 Insights provides interactive dashboards and scorecards that can help people to define and measure success: key metrics can be matched to specific strategies and then shared, tracked, and discussed. Users can create meaningful visualizations that convey the right information the first time, aggregating content from multiple sources and displaying it in a web browser in an understandable and collaborative environment. Moreover, rich interactivity allows users to analyze up-to-the-minute information and work with data quickly and easily to identify key opportunities and trends. Figure 4 shows a user's dashboard in SharePoint 2010 Insights.


Figure 4: Dashboard in SharePoint 2010 Insights

Use the SharePoint Platform to Speed ROI and Decrease Business Risk

Using SharePoint 2010 to surface business data from your LOB systems and build solutions can increase the ROI of your legacy systems, speed solutions’ time-to-market, and empower users to help themselves—all of which frees IT resources to focus on more strategic initiatives.

Business units can reduce training costs because SharePoint 2010 offers the familiar Microsoft Office experience that enables people to quickly and easily adopt SharePoint (as opposed to training users on a variety of more complex LOB applications).

SharePoint can speed time-to-market of otherwise time-consuming and resource-intensive solutions to streamline business-critical processes. In addition, powerful Search and BI capabilities provide self-service functionality, which boosts productivity, reduces costs, and increases user satisfaction.

Finally, SharePoint can help to reduce your organization’s overall risk by increasing the visibility of business-critical data. The ability to access accurate, real-time business data has a major impact on your organization. In his 2009 white paper, “Business Intelligence: A Guide for Midsize Companies,” MAS Strategies’ Founder and Principal Analyst Michael Schiff said, “All employees have the responsibility to make the best decisions possible, based upon the data available to them at that time. If their ability to analyze this data and transform it into useful information is improved, the overall quality of their decisions can be improved as well.”

When you surface all relevant data to the people who need it when they need it, you enable them to make better decisions faster. This can reduce mistakes that result from misinformation and decrease your organization’s business risk.

SharePoint also reduces risk by enhancing security, privacy, and compliance through a flexible authentication model. This authentication model can help your organization to maximize its SharePoint 2010 deployment while maintaining highly secure control over corporate assets to increase compliance.

Conclusion

This white paper has discussed extending the reach of your business-critical data across departmental and organizational boundaries to improve business-critical solutions. It also has shown the benefits of surfacing and visualizing this data in SharePoint 2010:

  • Surface LOB Data in SharePoint 2010: Identify business-critical data and the users who need it, and extend the reach of your data by connecting SharePoint to your LOB applications. SharePoint 2010 provides many ways to achieve this state, faster and easier than in previous versions and without complex, expensive custom development.
  • Implement, Extend, and Improve Business Processes: Find and visualize the information you need in SharePoint 2010. Take advantage of out-of-the-box platform capabilities like collaboration, social computing, and content management to enable the right people to access the right information at the right time. IT can design and administer solutions quickly so that users can build their own templates and workflows to connect business data to their processes.
  • Gain Additional Productivity with SharePoint 2010: SharePoint provides several capabilities, including Search and Insights, that can help organizations to improve workforce productivity and visualize their business data in real-time. These capabilities have built-in security and manageability to help ensure safe and easy use.
  • Speed ROI and Decrease Risk: Connecting SharePoint 2010 to your LOB applications can increase the ROI of these systems and decrease business risk by surfacing important data across the organization to users who need it, when they need it. Plus, out-of-the-box capabilities in SharePoint can speed the time-to-market of previously labor-intensive solutions. SharePoint also can reduce IT administrator and end user training costs by enabling users to access information through a familiar interface.

 

Resources

Learn more about the SharePoint capabilities outlined in this white paper by visiting the following:


SharePoint Deployment on Windows Azure Virtual Machines

DISCLAIMER

This document is provided “as-is.” Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it. 

Some examples are for illustration only and are fictitious. No real association is intended or inferred.

This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes.

© 2012 Microsoft Corporation. All rights reserved.

 

Table of Contents

 

Executive Summary

Who Should Read This Paper?

Why Read This Paper?

Shift to Cloud Computing

Delivery Models for Cloud Services

Windows Azure Virtual Machines

SharePoint on Windows Azure Virtual Machines

Shift in IT Focus

Faster Deployment

Scalability

Metered Usage

Flexibility

Provisioning Process

Deploying SharePoint 2010 on Windows Azure

Creating and Uploading a Virtual Hard Disk

Usage Scenarios

Scenario 1: Simple SharePoint Development and Test Environment

Scenario 2: Public-facing SharePoint Farm with Customization

Scenario 3: Scaled-out Farm for Additional BI Services

Scenario 4: Completely Customized SharePoint-based Website

Conclusion

Additional Resources

 

 

Executive Summary

Microsoft SharePoint Server 2010 provides rich deployment flexibility, which can help organizations determine the right deployment scenarios to align with their business needs and objectives. Hosted and managed in the cloud, the Windows Azure Virtual Machines offering provides complete, reliable, and available infrastructure to support various on-demand application and database workloads, such as Microsoft SQL Server and SharePoint deployments.

While Windows Azure Virtual Machines support multiple workloads, this paper focuses on SharePoint deployments. Windows Azure Virtual Machines enable organizations to create and manage their SharePoint infrastructure quickly, provisioning hosts on demand and accessing them from nearly anywhere. The offering allows full control over and management of the processors, RAM, and other resources of SharePoint virtual machines (VMs).

Windows Azure Virtual Machines mitigate the need for on-premises hardware, so organizations can turn their attention away from the high upfront cost and complexity of building and managing infrastructure at scale. This means that they can innovate, experiment, and iterate in hours, as opposed to the days or weeks that traditional deployments require.

Who Should Read This Paper?

This paper is intended for IT professionals. Furthermore, technical decision makers, such as architects and system administrators, can use this information and the provided scenarios to plan and design a virtualized SharePoint infrastructure on Windows Azure.

Why Read This Paper?

This paper explains how organizations can set up and deploy SharePoint within Windows Azure Virtual Machines. It also discusses why this type of deployment can be beneficial to organizations of many sizes.

 

Shift to Cloud Computing

According to Gartner, cloud computing is defined as a “style of computing where massively scalable IT-enabled capabilities are delivered ‘as a service’ to external customers using Internet technologies.” The significant words in this definition are scalable, service, and Internet. In short, cloud computing can be defined as IT services that are deployed and delivered over the Internet and are scalable on demand.

Undeniably, cloud computing represents a major shift happening in IT today. Yesterday, the conversation was about consolidation and cost. Today, it’s about the new class of benefits that cloud computing can deliver. It’s all about transforming the way IT serves organizations by harnessing a new breed of power. Cloud computing is fundamentally changing the world of IT, impacting every role—from service providers and system architects to developers and end users.

Research shows that agility, focus, and economics are three top drivers for cloud adoption:

  • Agility: Cloud computing can speed an organization’s ability to capitalize on new opportunities and respond to changes in business demands.
  • Focus: Cloud computing enables IT departments to cut infrastructure costs dramatically. Infrastructure is abstracted and resources are pooled, so IT runs more like a utility than a collection of complicated services and systems. Plus, IT now can be transitioned to more innovative and strategic roles.
  • Economics: Cloud computing reduces the cost of delivering IT and increases the utilization and efficiency of the data center. Delivery costs go down because with cloud computing, applications and resources become self-service, and use of those resources becomes measurable in new and precise ways. Hardware utilization also increases because infrastructure resources (storage, compute, and network) are now pooled and abstracted.

Delivery Models for Cloud Services

In simple terms, cloud computing is the abstraction of IT services. These services can range from basic infrastructure to complete applications. End users request and consume abstracted services without the need to manage (or even completely know about) what constitutes those services. Today, the industry recognizes three delivery models for cloud services, each providing a distinct trade-off between control/flexibility and total cost:

  • Infrastructure as a Service (IaaS): Virtual infrastructure that hosts virtual machines and mostly existing applications.
  • Platform as a Service (PaaS): Cloud application infrastructure that provides an on-demand application-hosting environment.
  • Software as a Service (SaaS): Cloud services model where an application is delivered over the Internet and customers pay on a per-use basis (for example, Microsoft Office 365 or Microsoft CRM Online).

Figure 1 depicts the cloud services taxonomy and how it maps to the components in an IT infrastructure. With an on-premises model, the customer is responsible for managing the entire stack—ranging from network connectivity to applications. With IaaS, the lower levels of the stack are managed by a vendor, while the customer is responsible for managing the operating system through applications. With PaaS, a platform vendor provides and manages everything from network connectivity through runtime. The customer only needs to manage applications and data. (The Windows Azure offering best fits in this model.) Finally, with SaaS, a vendor provides the applications and abstracts all services from all underlying components.

Figure 1: Cloud services taxonomy
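The management split that Figure 1 describes can be sketched as a small lookup table. This is an illustrative sketch only; the layer names and boundaries below are assumptions drawn from the paragraph above, not an official Windows Azure taxonomy.

```python
# Illustrative sketch of the delivery-model responsibility split described
# above. Layer names (bottom to top) are assumptions for illustration.
LAYERS = ["networking", "storage", "servers", "virtualization",
          "operating_system", "runtime", "applications", "data"]

# Index of the first layer the CUSTOMER manages; everything below that
# index is handled by the vendor.
FIRST_CUSTOMER_LAYER = {
    "on-premises": 0,    # customer manages the entire stack
    "iaas": 4,           # vendor manages up through virtualization
    "paas": 6,           # vendor manages up through runtime
    "saas": len(LAYERS), # vendor manages everything
}

def customer_managed(model):
    """Return the layers the customer is responsible for under a model."""
    return LAYERS[FIRST_CUSTOMER_LAYER[model]:]

print(customer_managed("iaas"))  # operating system through applications/data
print(customer_managed("paas"))  # only applications and data
```

Under this sketch, the Windows Azure Virtual Machines offering corresponds to the "iaas" row: the customer still owns the operating system, runtime, applications, and data.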


Windows Azure Virtual Machines

Windows Azure Virtual Machines introduce functionality that allows full control and management of VMs, along with extensive virtual networking. This offering can provide organizations with robust benefits, such as:

  • Management: Centrally manage VMs in the cloud with full control to configure and maintain the infrastructure.
  • Application mobility: Move virtual hard drives (VHDs) back and forth between on-premises and cloud-based environments. There is no need to rebuild applications to run in the cloud.
  • Access to Microsoft server applications: Run the same on-premises applications and infrastructure in the cloud, including Microsoft SQL Server, SharePoint Server, Windows Server, and Active Directory.

Windows Azure Virtual Machines is an easy, open, flexible, and powerful platform that allows organizations to deploy and run Windows Server and Linux VMs in minutes:

  • Easy: With Windows Azure Virtual Machines, it is easy and simple to build, migrate, deploy, and manage VMs in the cloud. Organizations can migrate workloads to Windows Azure without having to change existing code, or they can set up new VMs in Windows Azure in only a few clicks. The offering also provides assistance for new cloud application development by integrating the IaaS and PaaS functionalities of Windows Azure.
  • Open and flexible: Windows Azure is an open platform that gives organizations flexibility. They can start from a prebuilt image in the image library, or they can create and use customized and on-premises VHDs and upload them to the image library. Community and commercial versions of Linux also are available.
  • Powerful: Windows Azure is an enterprise-ready cloud platform for running applications such as SQL Server, SharePoint Server, or Active Directory in the cloud. Organizations can create hybrid on-premises and cloud solutions with VPN connectivity between the Windows Azure data center and their own networks.

SharePoint on Windows Azure Virtual Machines

SharePoint 2010 flexibly supports most of the workloads in a Windows Azure Virtual Machines deployment. Windows Azure Virtual Machines are an optimal fit for SharePoint Server for Internet Sites (FIS) and development scenarios. Likewise, core SharePoint workloads are also supported. If an organization wants to manage and control its own SharePoint 2010 implementation while capitalizing on options for virtualization in the cloud, Windows Azure Virtual Machines are ideal for deployment.

The Windows Azure Virtual Machines offering is hosted and managed in the cloud. It provides deployment flexibility and reduces cost by mitigating capital expenditures due to hardware procurement. With increased infrastructure agility, organizations can deploy SharePoint Server in hours—as opposed to days or weeks. Windows Azure Virtual Machines also enables organizations to deploy SharePoint workloads in the cloud using a “pay-as-you-go” model. As SharePoint workloads grow, an organization can rapidly expand infrastructure; then, when computing needs decline, it can return the resources that are no longer needed—thereby paying only for what is used.

Shift in IT Focus

Many organizations contract out the common components of their IT infrastructure and management, such as hardware, operating systems, security, data storage, and backup—while maintaining control of mission-critical applications, such as SharePoint Server. By delegating all non-mission-critical service layers of their IT platforms to a virtual provider, organizations can shift their IT focus to core, mission-critical SharePoint services and deliver business value with SharePoint projects, instead of spending more time on setting up infrastructure.

Faster Deployment

Supporting and deploying a large SharePoint infrastructure can hamper IT’s ability to move rapidly to support business requirements. The time that is required to build, test, and prepare SharePoint servers and farms and deploy them into a production environment can take weeks or even months, depending on the processes and constraints of the organization. Windows Azure Virtual Machines allow organizations to quickly deploy their SharePoint workloads without capital expenditures for hardware. In this way, organizations can capitalize on infrastructure agility to deploy in hours instead of days or weeks.

Scalability

Without the need to deploy, test, and prepare physical SharePoint servers and farms, organizations can expand and contract compute capacity on demand, at a moment’s notice. As SharePoint workload requirements grow, an organization can rapidly expand its infrastructure in the cloud. Likewise, when computing needs decrease, the organization can diminish resources, paying only for what it uses. Windows Azure Virtual Machines reduce upfront expenses and long-term commitments, enabling organizations to build and manage SharePoint infrastructures at scale. Again, this means that these organizations can innovate, experiment, and iterate in hours—as opposed to days and weeks with traditional deployments.
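The expand-and-contract behavior described above can be sketched as a simple capacity rule. The per-VM throughput figure and farm limits below are hypothetical placeholders for illustration, not Microsoft sizing guidance.

```python
import math

# Hypothetical sketch of the elastic capacity decision: grow the farm as
# SharePoint demand rises, shrink it when demand falls, so you pay only
# for the VMs you actually run. These constants are assumptions.
REQUESTS_PER_VM = 2000    # assumed sustained requests/sec one web VM handles
MIN_VMS, MAX_VMS = 2, 20  # a small availability floor and an upper bound

def vms_needed(requests_per_sec):
    """Front-end VM count for the current load, clamped to farm limits."""
    needed = math.ceil(requests_per_sec / REQUESTS_PER_VM)
    return max(MIN_VMS, min(MAX_VMS, needed))

# As workload grows the farm expands; as it declines, resources are returned.
for load in (1500, 9000, 50000, 3000):
    print(load, "req/s ->", vms_needed(load), "VMs")
```

The same rule evaluated on a declining load returns VMs to the pool, which is the "paying only for what it uses" behavior the paragraph describes.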

Metered Usage

Windows Azure Virtual Machines provide computing power, memory, and storage for SharePoint scenarios, with prices typically based on resource consumption. Organizations pay only for what they use, and the service provides all the capacity needed for running the SharePoint infrastructure. For more information on pricing and billing, go to Windows Azure Pricing Details. Note that there are nominal charges for storage and for data moving out of the Windows Azure cloud to an on-premises network. However, Windows Azure does not charge for uploading data.
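The pay-as-you-go model can be illustrated with a back-of-the-envelope estimate. All rates below are placeholder assumptions, not actual Windows Azure prices; consult Windows Azure Pricing Details for real figures. The sketch reflects one detail from the paragraph above: outbound data is billed, while uploads are free.

```python
# Back-of-the-envelope sketch of metered, pay-as-you-go billing.
# All three rates are hypothetical placeholders, not real prices.
VM_HOURLY_RATE   = 0.48  # assumed $/hour for one Large VM
STORAGE_GB_MONTH = 0.10  # assumed $/GB stored per month
EGRESS_PER_GB    = 0.12  # assumed $/GB leaving the data center

def monthly_cost(vm_count, hours, storage_gb, egress_gb, ingress_gb=0):
    """Estimate a month's bill: pay for what you use; ingress is free."""
    compute = vm_count * hours * VM_HOURLY_RATE
    storage = storage_gb * STORAGE_GB_MONTH
    egress  = egress_gb * EGRESS_PER_GB  # ingress_gb is deliberately unbilled
    return round(compute + storage + egress, 2)

# Three Large VMs running all month, 500 GB stored, 50 GB served out:
print(monthly_cost(vm_count=3, hours=720, storage_gb=500, egress_gb=50))
```

Because the bill is a pure function of consumption, shutting down VMs or deleting storage immediately reduces the next month's estimate, which is the core of the metered-usage argument.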

Flexibility

Windows Azure Virtual Machines provide developers with the flexibility to pick their desired language or runtime environment, with official support for .NET, Node.js, Java, and PHP. Developers also can choose their tools, with support for Microsoft Visual Studio, WebMatrix, Eclipse, and text editors. Further, Microsoft delivers a low-cost, low-risk path to the cloud and offers cost-effective, easy provisioning and deployment for cloud reporting needs—providing access to business intelligence (BI) across devices and locations. Finally, with the Windows Azure offering, users not only can move VHDs to the cloud, but also can copy a VHD back down and run it locally or through another cloud provider, as long as they have the appropriate license.

Provisioning Process

This subsection discusses the basic provisioning process in Windows Azure. The image library in Windows Azure provides the list of available preconfigured VMs. Users can publish SharePoint Server, SQL Server, Windows Server, and other ISO/VHDs to the image library. To simplify the creation of VMs, base images are created and published to the library. Authorized users can use these images to generate the desired VM. For more information, go to Create a Virtual Machine Running Windows Server 2008 R2 on the Windows Azure site. Figure 2 shows the basic steps for creating a VM using the Windows Azure Management Portal:

Figure 2: Overview of steps for creating a VM


Users also can upload a sysprepped image on the Windows Azure Management Portal. For more information, go to Creating and Uploading a Virtual Hard Disk. Figure 3 shows the basic steps for uploading an image to create a VM:

Figure 3: Overview of steps for uploading an image

Deploying SharePoint 2010 on Windows Azure

You can deploy SharePoint 2010 on Windows Azure by following these steps:

  1. Log on to the Windows Azure (Preview) Management Portal through your account.

  2. Create a VM with a base operating system: On the Windows Azure Management Portal, click +NEW, click VIRTUAL MACHINE, and then click FROM GALLERY.

  3. The VM OS Selection dialog box appears. Click Platform Images, and then select the Windows Server 2008 R2 SP1 platform image.

  4. The VM Configuration dialog box appears. Provide the following information:
  • Enter a VIRTUAL MACHINE NAME.
    • This machine name must be globally unique.
  • Leave the NEW USER NAME box as Administrator.
  • In the NEW PASSWORD box, type a strong password.
  • In the CONFIRM PASSWORD box, retype the password.
  • Select the appropriate SIZE.
    • For a production environment (SharePoint application server and database), Large (4 cores, 7 GB memory) is recommended.

  5. The VM Mode dialog box appears. Provide the following information:
  • Select Standalone Virtual Machine.
  • In the DNS NAME box, provide the first portion of a DNS name of your choice.
    • This portion completes a name in the format MyService1.cloudapp.net.
  • In the STORAGE ACCOUNT box, choose one of the following:
    • Select a storage account where the VHD file is stored.
    • Choose to have a storage account automatically created.
      • Only one storage account per region is automatically created; all other VMs created with this setting are placed in that storage account.
      • You are limited to 20 storage accounts.
      • For more information, go to Create a Storage Account in Windows Azure.
  • In the REGION/AFFINITY GROUP/VIRTUAL NETWORK box, select the region where the virtual image will be hosted.

  6. The VM Options dialog box appears. Provide the following information:
  • In the AVAILABILITY SET box, select (none).
  • Read and accept the legal terms.
  • Click the checkmark to create the VM.

  7. The VM Instances page appears. Verify that your VM was created successfully.

  8. Complete VM setup:
  • Open the VM using Remote Desktop.
  • On the Windows Azure Management Portal, select your VM, and then select the DASHBOARD page.
  • Click Connect.

  9. Build the SQL Server VM using any of the following options:
  • Create a SQL Server 2012 VM by following steps 1 to 7 above—except in step 3, use the SQL Server 2012 image instead of the Windows Server 2008 R2 SP1 image. For more information, go to Provisioning a SQL Server Virtual Machine on Windows Azure.
    • When you choose this option, the provisioning process keeps a copy of the SQL Server 2012 setup files in the C:\SQLServer_11.0_Full directory so that you can customize the installation. For example, you can convert the evaluation installation of SQL Server 2012 to a licensed version by using your license key.
  • Use the SQL Server System Preparation (SysPrep) tool to install SQL Server on the VM with the base operating system (as shown above in steps 1 to 7). For more information, go to Install SQL Server 2012 Using SysPrep.
  • Use the Command Prompt to install SQL Server. For more information, go to Install SQL Server 2012 from the Command Prompt.
  • Use supported SQL Server media and your license key to install SQL Server on the VM with the base operating system (as shown above in steps 1 to 7).

  10. Build the SharePoint farm using the following substeps:
  • Substep 1: Configure the Windows Azure subscription using script files.
  • Substep 2: Provision SharePoint servers by creating another VM with the base operating system (as shown above in steps 1 to 7), and then build a SharePoint server on this VM.
  • Substep 3: Configure SharePoint. After each SharePoint VM is in the ready state, configure SharePoint Server on each server by using one of the following options:
    • Configure SharePoint from the GUI.
    • Configure SharePoint using Windows PowerShell. For more information, go to Install SharePoint Server 2010 by Using Windows PowerShell.
      • You also can use the CodePlex project AutoSPInstaller, which consists of Windows PowerShell scripts, an XML input file, and a standard Microsoft Windows batch file. AutoSPInstaller provides a framework for a SharePoint 2010 installation script based on Windows PowerShell. For more information, go to CodePlex: AutoSPInstaller.

  11. After the script completes, connect to the VM using the VM Dashboard.

  12. Verify the SharePoint configuration: Log on to the SharePoint server, and then use Central Administration to verify the configuration.

Creating and Uploading a Virtual Hard Disk

You also can create your own images and upload them to Windows Azure as a VHD file. To create and upload a VHD file on Windows Azure, follow these steps:

  1. Create the Hyper-V-enabled image: Use Hyper-V Manager to create the Hyper-V-enabled VHD. For more information, go to Create Virtual Hard Disks.
  2. Create a storage account in Windows Azure: A storage account in Windows Azure is required to upload a VHD file that can be used for creating a VM. This account can be created using the Windows Azure Management Portal. For more information, go to Create a Storage Account in Windows Azure.
  3. Prepare the image to be uploaded: Before the image can be uploaded to Windows Azure, it must be generalized using the SysPrep command. For more information, go to How to Use SysPrep: An Introduction.
  4. Upload the image to Windows Azure: To upload an image contained in a VHD file, you must create and install a management certificate. Obtain the thumbprint of the certificate and the subscription ID. Set the connection and upload the VHD file using the CSUpload command-line tool. For more information, go to Upload the Image to Windows Azure.

 

Usage Scenarios

This section discusses some leading customer scenarios for SharePoint deployments using Windows Azure Virtual Machines. Each scenario is divided into two parts—a brief description about the scenario followed by steps for getting started.

Scenario 1: Simple SharePoint Development and Test Environment

Description

Organizations are looking for more agile ways to create SharePoint applications and set up SharePoint environments for onshore/offshore development and testing. Fundamentally, they want to shorten the time required to set up SharePoint application development projects, and decrease cost by increasing the use of their test environments. For example, an organization might want to perform on-demand load testing on SharePoint Server and execute user acceptance testing (UAT) with more concurrent users in different geographic locations. Similarly, integrating onshore/offshore teams is an increasingly important business need for many of today’s organizations.

This scenario explains how organizations can use preconfigured SharePoint farms for development and test workloads. A SharePoint deployment topology looks and feels exactly as it would in an on-premises virtualized deployment. Existing IT skills translate 1:1 to a Windows Azure Virtual Machines deployment, with the major benefit being an almost complete cost shift from capital expenditures to operational expenditures—no upfront physical server purchase is required. Organizations can eliminate the capital cost for server hardware and achieve flexibility by greatly reducing the provisioning time required to create, set up, or extend a SharePoint farm for a testing and development environment. IT can dynamically add and remove capacity to support the changing needs of testing and development. Plus, IT can focus more on delivering business value with SharePoint projects and less on managing infrastructure.

To fully utilize load-testing machines, organizations can configure SharePoint virtualized development and test machines on Windows Azure with operating system support for Windows Server 2008 R2. This enables development teams to create and test applications and easily migrate to on-premises or cloud production environments without code changes. The same frameworks and toolsets can be used on premises and in the cloud, allowing distributed team access to the same environment. Users also can access on-premises data and applications by establishing a direct VPN connection.

Getting Started

Figure 4 shows a SharePoint development and testing environment in a Windows Azure VM. To build this deployment, start by using the same on-premises SharePoint development and testing environment used to develop applications. Then, upload and deploy the applications to the Windows Azure VM for testing and development. If your organization decides to move the application back on-premises, it can do so without having to modify the application.

 

Figure 4: SharePoint development and testing environment in Windows Azure Virtual Machines


Setting Up the Scenario Environment

To implement a SharePoint development and testing environment on Windows Azure, follow these steps:

  1. Provision: First, provision a VPN connection between on-premises and Windows Azure using Windows Azure Virtual Network. (Because Active Directory is not being used here, a VPN tunnel is needed.) For more information, go to Windows Azure Virtual Network (Design Considerations and Secure Connection Scenarios). Then, use the Management Portal to provision a new VM using a stock image from the image library.
  • You can upload the on-premises SharePoint development and testing VMs to your Windows Azure storage account and reference those VMs through the image library for building the required environment.
  • You can use the SQL Server 2012 image instead of the Windows Server 2008 R2 SP1 image. For more information, go to Provisioning a SQL Server Virtual Machine on Windows Azure.
  2. Install: Install SharePoint Server, Visual Studio, and SQL Server on the VMs using a Remote Desktop connection.
  3. Develop deployment packages and scripts for applications and databases: If you plan to use an available VM from the image library, the desired on-premises applications and databases can be deployed on Windows Azure Virtual Machines:
  • Create deployment packages for the existing on-premises applications and databases using SQL Server Data Tools and Visual Studio.
  • Use these packages to deploy the applications and databases on Windows Azure Virtual Machines.
  4. Deploy SharePoint applications and databases:
  • Configure security on the Management Portal endpoint and set an inbound port in the VM’s Windows Firewall.
  • Deploy SharePoint applications and databases to Windows Azure Virtual Machines using the deployment packages and scripts created in step 3.
  • Test the deployed applications and databases.
  5. Manage VMs:
  • Monitor the VMs using the Management Portal.
  • Monitor the applications using Visual Studio and SQL Server Management Studio.
  • You also can monitor and manage the VMs using on-premises management software, like Microsoft System Center – Operations Manager.

Scenario 2: Public-facing SharePoint Farm with Customization

Description

Organizations want to create an Internet presence that is hosted in the cloud and is easily scalable based on need and demand. They also want to create partner extranet websites for collaboration and implement an easy process for distributed authoring and approval of website content. Finally, to handle increasing loads, these organizations want to provide capacity on demand to their websites.

In this scenario, SharePoint Server is used as the basis for hosting a public-facing website. It enables organizations to rapidly deploy, customize, and host their business websites on a secure, scalable cloud infrastructure. With SharePoint public-facing websites on Windows Azure, organizations can scale as traffic grows and pay only for what they use. Common tools, similar to those used on premises, can be used for content authoring, workflow, and approval with SharePoint on Windows Azure.

Further, using Windows Azure Virtual Machines, organizations can easily configure staging and production environments running on VMs. SharePoint public-facing VMs created in Windows Azure can be backed up to virtual storage. In addition, for disaster recovery purposes, the Continuous Geo-Replication feature allows organizations to automatically back up VMs operating in one data center to another data center miles away. (For more information on geo-replication, go to Introducing Geo-replication for Windows Azure Storage).

VMs in the Windows Azure infrastructure are validated and supported for working with other Microsoft products, such as SQL Server and SharePoint Server. Windows Azure and SharePoint Server are better together: Both are part of the Microsoft family and are thoroughly integrated, supported, and tested together to provide an optimal experience. Organizations also get a single point of support covering both the SharePoint application and the Windows Azure infrastructure.

Getting Started

In this scenario, more front-end web servers for SharePoint Server must be added to support extra traffic. These servers require enhanced security and Active Directory Domain Services domain controllers to support user authentication and authorization. Figure 5 shows the layout for this scenario.

Figure 5: Public-facing SharePoint farm with customization


Setting Up the Scenario Environment

To implement a public-facing SharePoint farm on Windows Azure, follow these steps:

  1. Deploy Active Directory: The fundamental requirements for deploying Active Directory on Windows Azure Virtual Machines are similar—but not identical—to deploying it on VMs (and, to some extent, physical machines) on-premises. For more information about the differences, as well as guidelines and other considerations, go to Guidelines for Deploying Active Directory on Windows Azure Virtual Machines.
  2. Provision a VM: Use the Management Portal to provision a new VM from a stock image in the image library.
  3. Deploy a SharePoint farm:
  • Use the newly provisioned VM to install SharePoint and generate a reusable image. For more information about installing SharePoint Server, go to Install and Configure SharePoint Server 2010 by Using Windows PowerShell or CodePlex: AutoSPInstaller.
  • Configure the SharePoint VM to create and connect to the SharePoint farm.
  • Use the Management Portal to configure load balancing.
    • Configure the VM endpoints, select the option to load balance traffic on an existing endpoint, and then specify the name of the load-balanced VM.
    • Add another front-end web VM to the existing SharePoint farm for extra traffic.
  4. Manage VMs:
  • Monitor the VMs using the Management Portal.
  • Monitor the SharePoint farm using Central Administration.

Scenario 3: Scaled-out Farm for Additional BI Services

Description

Business intelligence is essential to gaining key insights and making rapid, sound decisions. As organizations transition from an on-premises approach, they do not want to make changes to the BI environment while deploying existing BI applications to the cloud. They want to host reports from SQL Server Analysis Services (SSAS) or SQL Server Reporting Services (SSRS) in a highly durable and available environment, while keeping full control of the BI application—all without spending much time and budget on maintenance.

This scenario describes how organizations can use Windows Azure Virtual Machines to host mission-critical BI applications. Organizations can deploy SharePoint farms in Windows Azure Virtual Machines and scale out the application server VM’s BI components, like SSRS or Excel Services. By scaling resource-intensive components in the cloud, they can better and more easily support specialized workloads. Note that SQL Server in Windows Azure Virtual Machines performs well, as it is easy to scale SQL Server instances, ranging from small to extra-large installations. This provides elasticity, enabling organizations to dynamically provision (expand) or deprovision (shrink) BI instances based on immediate workload requirements.

Migrating existing BI applications to Windows Azure provides better scaling. With the power of SSAS, SSRS, and SharePoint Server, organizations can create powerful BI and reporting applications and dashboards that scale up or down. These applications and dashboards also can be more securely integrated with on-premises data and applications. Windows Azure ensures data center compliance with support for ISO 27001. For more information, go to the Windows Azure Trust Center.

Getting Started

To scale out the deployment of BI components, a new application server with services such as PowerPivot, Power View, Excel Services, or PerformancePoint Services must be installed. Or, SQL Server BI instances like SSAS or SSRS must be added to the existing farm to support additional query processing. The server can be added as a new Windows Azure VM with SharePoint 2010 Server or SQL Server installed. Then, the BI components can be installed, deployed, and configured on that server (Figure 6).

Figure 6: Scaled-out SharePoint farm for additional BI services

Setting Up the Scenario Environment

To scale out a BI environment on Windows Azure, follow these steps:

  1. Provision:
  • Provision a VPN connection between on-premises and Windows Azure using Windows Azure Virtual Network. For more information, go to Windows Azure Virtual Network (Design Considerations and Secure Connection Scenarios).
  • Use the Management Portal to provision a new VM from a stock image in the image library.
    • You can upload SharePoint Server or SQL Server BI workload images to the image library, and any authorized user can pick those BI component VMs to build the scaled-out environment.
  2. Install: If your organization does not have prebuilt images of SharePoint Server or SQL Server BI components, install SharePoint Server and SQL Server on the VMs using a Remote Desktop connection.
  3. Add the BI VM:
  • Configure security on the Management Portal endpoint and set an inbound port in the VM’s Windows Firewall.
  • Add the newly created BI VM to the existing SharePoint or SQL Server farm.
  4. Manage VMs:
  • Monitor the VMs using the Management Portal.
  • Monitor the SharePoint farm using Central Administration.
  • Monitor and manage the VMs using on-premises management software, like Microsoft System Center – Operations Manager.

Scenario 4: Completely Customized SharePoint-based Website

Description

Increasingly, organizations want to create fully customized SharePoint websites in the cloud. They need a highly durable and available environment that offers full control to maintain complex applications running in the cloud, but they do not want to spend a large amount of time and budget.

In this scenario, an organization can deploy its entire SharePoint farm in the cloud and dynamically scale all components to get additional capacity, or it can extend its on-premises deployment to the cloud to increase capacity and improve performance, when needed. The scenario focuses on organizations that want the full “SharePoint experience” for application development and enterprise content management. The more complex sites also can include enhanced reporting, Power View, PerformancePoint, PowerPivot, in-depth charts, and most other SharePoint site capabilities for end-to-end, full functionality.

Organizations can use Windows Azure Virtual Machines to host customized applications and associated components on a cost-effective and highly secure cloud infrastructure. They also can use on-premises Microsoft System Center as a common management tool for on-premises and cloud applications.

Getting Started

To implement a completely customized SharePoint website on Windows Azure, an organization must deploy an Active Directory domain in the cloud and provision new VMs into this domain. Then, a VM running SQL Server 2012 must be created and configured as part of a SharePoint farm. Finally, the SharePoint farm must be created, load balanced, and connected to Active Directory and SQL Server (Figure 7).

Figure 7: Completely customized SharePoint-based website


Setting Up the Scenario Environment

The following steps show how to create a customized SharePoint farm environment from prebuilt images available in the image library. Note, however, that you also can upload SharePoint farm VMs to the image library, and authorized users can choose those VMs to build the required SharePoint farm on Windows Azure.

  1. Deploy Active Directory: The fundamental requirements for deploying Active Directory on Windows Azure Virtual Machines are similar—but not identical—to deploying it on VMs (and, to some extent, physical machines) on premises. For more information about the differences, as well as guidelines and other considerations, go to Guidelines for Deploying Active Directory on Windows Azure Virtual Machines.
  2. Deploy SQL Server:
  • Use the Management Portal to provision a new VM from a stock image in the image library.
  • Configure SQL Server on the VM. For more information, go to Install SQL Server Using SysPrep.
  • Join the VM to the newly created Active Directory domain.
  3. Deploy a multiserver SharePoint farm.
  4. Manage the SharePoint farm through System Center:
  • Use the Operations Manager agent and the new Windows Azure Integration Pack to connect your on-premises System Center to Windows Azure Virtual Machines.
  • Use on-premises App Controller and Orchestrator for management functions.

 

Conclusion

Cloud computing is transforming the way IT serves organizations. This is because cloud computing can harness a new class of benefits, including dramatically decreased cost coupled with increased IT focus, agility, and flexibility. Windows Azure is leading the way in cloud computing by delivering easy, open, flexible, and powerful virtual infrastructure. Windows Azure Virtual Machines mitigate the need for hardware, so organizations can reduce cost and complexity by building infrastructure at scale—with full control and streamlined management.

Windows Azure Virtual Machines provide a full continuum of SharePoint deployments. The offering is fully supported and tested to provide an optimal experience with other Microsoft applications. As such, organizations can easily set up and deploy SharePoint Server within Windows Azure, either to provision infrastructure for a new SharePoint deployment or to expand an existing one. As business workloads grow, organizations can rapidly expand their SharePoint infrastructure. Likewise, if workload needs decline, organizations can contract resources on demand, paying only for what they use. Windows Azure Virtual Machines deliver an exceptional infrastructure for a wide range of business requirements, as shown in the four SharePoint-based scenarios discussed in this paper.

Successful deployment of SharePoint Server on Windows Azure Virtual Machines requires solid planning, especially considering the range of critical farm architecture and deployment options. The insights and best practices outlined in this paper can help to guide decisions for implementing an informed SharePoint deployment.

Additional Resources


Retailers’ Mobile and Social Commerce Strategies Will Yield Minimal Revenue

Predicts 2013: Retailers’ Mobile and Social Commerce Strategies Will Yield Minimal Revenue

30 November 2012 ID:G00231876

Analyst(s): Miriam Burt, Gale Daikoku, John Davison, Robert Hetu

Tier 1 multichannel retailers will not gain significant benefits from mobile and social commerce strategies if they fail to understand how mobile and social customer interaction points can enhance and optimize cross-channel, customer shopping processes.

Overview

Key Findings

  • Retailers will struggle to move significant numbers of consumers from cash and cards to Near Field Communication (NFC)-based mobile payments.
  • Retailers’ efforts to pursue location-based personalization offers will yield a very small rate of redemption.
  • Retailers will face a new threat to their profit margins and revenue-sharing models from emerging social shopping retailers.

Recommendations

For CIOs:

  • Ensure that you provide your customers with functionality on their mobile phones that they prefer to use, such as finding a store location or looking up stock availability, rather than NFC-based mobile payments.
  • Invest in multichannel analytical resources to help you increase revenue by defining more relevant coupon offers for all customers, while delivering personalized offers to a carefully chosen selection of customers.
  • Map your existing product catalog to the key demographics for social media to take advantage of the current low cost of entry and use Facebook commerce (F-commerce) as a testing ground for social selling.

Analysis

What You Need to Know

Tier 1 multichannel retailers are still struggling to provide the everyday “business as usual” multichannel experience that their customers desire. Organizational and technology silos continue to hamper the delivery of a consistent and contiguous cross-channel customer shopping experience. Moreover, this is exacerbated by retailers investing in hyped-up mobile and social commerce solutions, rather than focusing on delivering the customer basics. For example, in store, some of the key customer basics are stock availability, an informed and available staff, and fast check-out.

Gartner predicts that retailers will struggle to move significant numbers of consumers from cash and cards, especially if they implement market-hyped, NFC-based, mobile wallet payment solutions. Our research confirms that customers’ preferences to use their mobile phones to find a store location, compare prices, look up stock availability and receive promotions are far ahead of their preferences to use their mobile phones to order and pay.

We predict retailers’ efforts to pursue context-aware personalized offers, such as location-based offers through mobile phones, will yield a very small rate of redemption. Our research shows that consumers favor paper coupons. In the near term, paper coupons will remain the dominant form of retail offer over electronic coupon redemption.

We predict that retailers will face a new threat to their profit margins and revenue-sharing models from emerging social shopping retailers. Our research shows that, with 93% share in the U.S., Facebook could become a virtual retailer by connecting manufacturers and distributors of consumer goods directly to the consumer (see Note 1). This shift could dramatically alter the profit margin and revenue-sharing models within the retailer and supplier networks, making it even more difficult for retailers to remain competitive.


Strategic Planning Assumptions

Strategic Planning Assumption: By 2014, less than 2% of consumers globally will adopt NFC-based mobile payments.

Analysis by: Miriam Burt and John Davison

Key Findings:

A Gartner consumer survey in 3Q11 in 10 countries showed that, on average:

  • Almost two-thirds of the consumers (62%) surveyed indicated that they did not use their mobile phones to conduct any type of financial transaction using mobile payment services.
  • Sixty percent of consumers indicated that concerns about the security of personal and payment data were the biggest barriers to using their mobile phones to make mobile payments, a seven-percentage-point increase from the equivalent survey in 2010 (53%).
  • Seventy-nine percent of consumers indicated that the store is the main channel through which they were willing to make a purchase when conducting a cross-channel shopping event.

Market Implications:

This topic has been very hot in the past 12 months in all the major Tier 1 retail markets, with tremendous hype and publicity regarding solutions from a multitude of vendors of hardware, software, card payment services and, particularly, the NFC-based solutions, including those from Orange and Barclaycard, as well as the NFC-based mobile wallets from Isis and Google. Non-NFC-based solutions, such as the Starbucks mobile phone payment solution via its stored-value loyalty card and mobile bar codes, and PayPal’s mobile payment solution for Home Depot stores, have also been in the headlines.

About one-third of customers are using their mobile phones for financial transactions, although these are largely confined to functionality such as topping up prepaid mobile plans or purchasing digital products, not physical products. Moreover, this pertains to consumer usage of all types of mobile phones, not just to the usage of NFC-based mobile phones.

As part of a cross-channel shopping process, NFC-based mobile payments could address the need for speed of throughput and convenience during check-out in a store. This is important in some retail segments (such as grocery and convenience stores), but less important in other segments (such as luxury fashion).

If payment transaction fees using mobile devices are lower than traditional credit and debit cards, then there are clear savings for the retailer. However, in a 3Q11 retailer survey, Tier 1 retailers indicated that they expect the mobile channel, on average, to generate just under 2% of revenue through 2016, compared to 85% through the store and 12% through e-commerce. Hence, they do not see a robust business case for upgrading point of sale (POS) terminals to accept NFC-based, mobile contactless payments that include factors such as the cost of NFC-based POS terminal readers and the cost of merchant interchange fees.

Moreover, the speed of adoption of mobile payments will be dictated by consumers, so NFC-based payment solutions must demonstrate how they can support a secure, hassle-free, convenient and fast check-out — the latter being a key in-store customer service basic.

Current retailer trials of NFC-based stickers for promotions, the growing use of mobile coupons, the increasing use of mobile bar codes at the POS, and contactless payments using prepaid services for transportation applications (such as ticketing) may speed up the general adoption of NFC technology for mobile devices. For the most part, these are currently done through cards that customers touch on contactless readers, and do not involve NFC-enabled mobile phones in the payment process.

Benefits from NFC-based mobile payment transactions will only be gained if consumers are convinced that NFC mobile payments are secure, convenient and fast. They also need to have a compelling reason to make the switch. For example, retailers could give them incentives to choose this type of payment over others, such as tying loyalty programs into NFC-based mobile payments. In addition, a single set of standards needs to be agreed on by the banks, payment processing companies and retailers for NFC payments to succeed. We have not yet seen this consistency emerge.

Recommendations:

For CIOs:

  • Don’t let the projected rate of smartphone adoption or the hype around NFC-based contactless mobile payments drive investments in this solution.
  • Investigate how NFC can be used for nonpayment processes. For example, customers can use NFC stickers to access promotions or as a replacement for quick response (QR) code scanning.
  • Where appropriate, invest in secure, mobile POS applications in stores to enable store associates to provide the key customer basics of a fast and hassle-free check-out experience.
  • Ensure that you provide customers with functionality on their mobile phones that they prefer to use, rather than NFC-based mobile payments. This should include the capability to use their mobile phones to find a store location, compare prices, look up stock availability and receive promotions.
  • Trial mobile payments through lower-risk, stored-value payment solutions, preferably in conjunction with a loyalty solution.

Related Research:

“Hype Cycle for Retail Technologies, 2012”

“Distinguish How Consumers Want to Shop on Their Mobile Devices for Best Investment Decisions”

Strategic Planning Assumption: By 2015, less than 1% of all redeemed coupons will be location-aware offers sent by Tier 1 retailers.

Analysis by: Gale Daikoku and Robert Hetu

Key Findings:

  • Many Tier 1 retailers are pursuing personalization strategies that include the delivery of real-time offers on a customer’s mobile device while they are shopping in stores.
  • Paper coupons, including those provided directly to the customer, are the dominant format preferred by consumers.

Market Implications:

Retailers see personalization as a competitive necessity for building meaningful relationships that foster loyalty, yet many have work to do to support this level of engagement with customers. Gartner notes that nearly every Tier 1 retailer we speak with has prioritized gaining a single view of its customers as a business priority, as personalization is dependent on good knowledge and segmentation of cross-channel customers. In fact, in the next few years, we expect that just over two-thirds (70%) of leading Tier 1 retailers will have improved the quality of their customer offers or the way they develop them. These offers can be anything from promotions to coupons to personalized offers. However, many retailers will be challenged in their ability to deliver and execute location-aware offers to customers who are shopping in their stores.

Coupons are the most common form of retail offers. According to industry sources, even though coupon distribution is down overall due to more limited funding by manufacturers, paper coupons, many of which are delivered via free standing inserts (FSIs) that are mailed to a customer’s home, are expected to remain the dominant form of retail offer for some time. Gartner research shows that the vast majority of consumers prefer to use some form of paper coupon — in particular, those sent directly to the customer, delivered as a separate sheet with a sales receipt or from the retailer’s in-store mailer.

Electronic coupon distribution is an alternate, cost-efficient way to reach customers with offers. There are two types of e-coupons that are rated fairly high for potential use: those emailed to customers that can be printed out on paper and coupons that can be saved to loyalty accounts. However, customers are still getting used to these newer forms of couponing, and redemption will remain constrained in the near term, due to process and technology challenges in stores.

As mobile technology improves, customers will get more comfortable with using mobile devices as part of the shopping process. However, for retailers, communicating the perfect multichannel offer at the right moment in the shopping process, although theoretically attractive, is difficult to execute in real time (for example, sending location-based, context-aware, personalized mobile coupons in real time while customers are shopping in their stores).

Apart from customer readiness, there are customer privacy challenges around sending mobile coupons to customers’ personal mobile devices. In Gartner’s consumer survey, 39% of respondents said they had smartphones. However, when asked if they were willing to use a mobile app to receive coupons or rewards in stores, almost one-third (29%) indicated that they were not willing to use a smartphone app. Furthermore, 40% said that they were not willing to register their phones so that they could be tracked to receive offers while they were shopping in the store.

Recommendations:

For CIOs:

  • Maintain the capability and processes that support the delivery and redemption of paper coupons in stores.
  • Invest in the multichannel analytical resources to help you target a small, but lucrative, set of customers who are favorable to receiving personalized mobile offers.

Related Research:

“Hype Cycle for Retail Technologies, 2012”

“Consumer Survey Shows What’s Ahead for Retail Coupon Management”

“Personalization and Context-Aware Technology’s Impact on Multichannel Customer Loyalty”

“Marketing Service Provider Capabilities in Retail”

Strategic Planning Assumption: By 2015, a new social shopping retailer will emerge, accounting for 2% of U.S. Tier 1 retail sales.

Analysis by: Robert Hetu

Key Findings:

  • U.S. consumers in the 18- to 24-year-old demographic have the highest preference for using social media while shopping (48%), with expected continued growth in this activity as they grow older.
  • Facebook’s overwhelming share (93%) of social media provides it with an opportunity to generate a direct retail revenue stream via a virtual store approach.

Market Implications:

According to a Gartner consumer survey conducted in 3Q11, young adults aged 18 to 24 conduct the highest levels of shopping-related social activity, including the propensity to look for special offers and check product reviews on a social networking site. The depth of personal information willingly supplied on social networks provides unmatched visibility into the lives, interests and personal networks of consumers.

Retailers primarily view the customer through their interactions via shopping activity. As they seek to expand their knowledge bases, they have been experimenting with social media through various partnerships. Most Tier 1 retailers have built a relationship with Facebook as a convenient avenue to access social networks via Facebook commerce (or F-commerce), but have found little revenue.

As experienced by retailers when they were part of the Amazon platform, the extent of information flowing between partner organizations (for example, sales and inventory data at a SKU level, as well as customer transaction data) can be used by a partner as it builds its own retail strategy. Complicating this further is that the social environment is dominated by a single player that has 93% share in the U.S. As a result, the partnership with Facebook, or with other social players that currently exist or may rise as rivals of Facebook, could provide the competitive content required to quickly enable a virtual social retailer, connecting manufacturers and distributors of consumer goods directly to the consumer. This shift could dramatically alter the profit margin and revenue-sharing models within the retailer and supplier networks, making it even more difficult for retailers to remain competitive.

Amazon grew to be a top U.S. retailer by taking advantage of a specific weakness in the retail environment — namely, the slow adoption of technology-enabled shopping. It is continuing to exploit this weakness with its forays into mobile shopping and social enablement. Amazon now includes the setup of a profile with personal information, including a picture and preferences, with the ability to share previously purchased items with others to help facilitate social shopping. Customers can also create lists of products and purchase guides to support various activities (for example, photography). As a result, Amazon is far ahead of multichannel, Tier 1 retailers in its ability to meet this new threat posed by social networks, such as Facebook.

Recommendations:

For CIOs:

  • Map your existing product catalog to the key demographics for social media to take advantage of the low cost of entry to use the F-commerce platform as a testing ground for social selling.
  • Connect e-commerce and m-commerce sites with Facebook using social plug-ins and custom applications to ensure a consistent flow for the cross-channel customer.
  • Approach F-commerce with the awareness that Facebook will use it as a revenue-generating model; plan alternative approaches for social commerce for when engaging in F-commerce becomes disadvantageous.

Related Research:

“Use Facebook to Test Social Commerce Strategy”

“Information Innovation Powers Customer-Centric Merchandising”

“Hype Cycle for Retail Technologies, 2012”


A Look Back

In response to your requests, we are taking a look back at some key predictions from previous years. We have intentionally selected predictions from opposite ends of the scale — one where we were wholly or largely on target, as well as one we missed.

On Target: 2012 Prediction — By 2013, the U.K. retail market will be the world’s most advanced multichannel market.

Analysis by: Miriam Burt, Van L. Baker, Gale Daikoku, John Davison, Robert Hetu

Previously Published: “Predicts 2012: Retailers Turn to Personalized Offers Through Mobile and Social but Will Struggle With Multichannel Execution”

In 2Q11 and 3Q11, Gartner surveyed leading retailers in the U.S., Canada, the U.K., France, Germany, Brazil, Russia, India, China and Japan. The survey asked retailers to estimate the percentage of revenue coming from each of their selling channels. In this survey, U.K. retailer estimates for the percentage of e-commerce sales were higher than in any other country surveyed, with 13.22% of sales compared to the 10-country average of 9.22% of sales for this channel. The same finding was made in other channels — mail order and catalog, call center, and mobile — where the U.K. had a higher estimated percentage than the 10-country average.

The corollary of this is that the U.K. retailers surveyed anticipated a smaller percentage of their revenue through 2014 coming from brick-and-mortar stores (79.28%), compared to the 10-country average of anticipated revenue in 2014 (88.56%). Thus, the U.K. is on target to be the world’s most advanced multichannel market through 2013 and beyond.

Missed: 2012 Prediction — By year-end 2012, 90% of Tier 1 and Tier 2 retailers will have an active presence on social media sites.

Analysis by: Miriam Burt, Van L. Baker, Gale Daikoku, John Davison, Robert Hetu

Previously Published: “Predicts 2012: Retailers Turn to Personalized Offers Through Mobile and Social but Will Struggle With Multichannel Execution”

In defining active presence, the intent was to incorporate some commerce activity within the social networking activity. Facebook commerce was, at the time, being pursued by many Tier 1 retailers. Over time, they learned that selling products on social media was not a straightforward process and retreated. Presently, most Tier 1 retailers have a presence on social media. However, they have not successfully monetized it through F-commerce or other sales activities. Many even failed to be responsive to customer comments, making social media a one-way communication vehicle.

