
The Mouse Will Play


Hey all, Ned here. Mike and I start teaching Windows Server 2012 and Windows 8 DS internals this month in the US and UK and won’t be back until July. Until then, Jonathan is – I can’t believe I’m saying this – in charge of AskDS. He’ll field your questions and publish… stuff. We’ll make sure he takes his medication before replying.

If you’re in Reading, England June 10-22, first round is on me.

I didn’t say what the first round was though.

Ned “crikey” Pyle


Important Information about Remote Desktop Licensing and Security Advisory 2718704


Hi folks, Jonathan here. Dave and I wanted to share some important information with you.

By now you’ve all been made aware of the Microsoft Security Advisory that was published this past Sunday.  If you are a Terminal Services or Remote Desktop Services administrator then we have some information of which you should be aware.  These are just some extra administrative steps you’ll need to follow the next time you have to obtain license key packs, transfer license key packs, or any other task that requires your Windows Server license information to be processed by the Microsoft Product Activation Clearinghouse.  Since there’s a high probability that you’ll have to do that at some point in the future we’re doing our part to help spread the word.  Our colleagues over at the Remote Desktop Services (Terminal Services) Team blog have posted all the pertinent information. Take a look.

Follow-up to Microsoft Security Advisory 2718704: Why and How to Reactivate License Servers in Terminal Services and Remote Desktop Services

If you have any questions, feel free to post them over in the Remote Desktop Services forum.

Jonathan Stephens

RSA Key Blocking is Here!


Hello everyone. Jonathan here again with another Public Service Announcement post.

Today, Microsoft has published a new Security Advisory:

Microsoft Security Advisory (2661254): Update For Minimum Certificate Key Length

The Security Advisory and the accompanying KB article have complete information about the software update, but the key takeaway is that this update is now available on the Download Center and the Microsoft Update Catalog. In addition, Microsoft will release this software update through Microsoft Update (aka Windows Update) in October 2012. So all of you enterprise customers have two months to start testing this update to see what impact it has in your environments.

If you want information on finding weak keys in your environment then review the KB article. It describes several methods you can use. Microsoft Support has also created a PowerShell script that has been posted to the TechNet Script Center.
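If you just want a quick first look before grabbing that script, here is a minimal PowerShell sketch of the idea (my own illustration, not the Microsoft Support script; it only checks the local machine Personal store and assumes 1024 bits as the cutoff):

# Sketch: list certificates in the local machine Personal store with keys under 1024 bits
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.PublicKey.Key.KeySize -lt 1024 } |
    Select-Object Subject, NotAfter, @{ n='KeySize'; e={ $_.PublicKey.Key.KeySize } }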

Finally, a warning for those of you who use makecert.exe to create test certificates. By default, makecert.exe creates certificates that chain up to the Root Agency root CA certificate located in the Intermediate Certification Authorities store. The Root Agency CA certificate has a 512-bit public key, so once you deploy this update no certificate created with makecert.exe will be considered valid.

You should now consider makecert.exe deprecated. As a replacement, starting with Windows 7 / Windows Server 2008 R2, you can use certreq.exe to create a self-signed certificate. For example, to create a self-signed code signing certificate you can create the following .INF file:

[NewRequest]
Subject = "CN=Self Signed Cert"
KeyLength = 2048
ProviderName = "Microsoft Enhanced Cryptographic Provider v1.0"
KeySpec = "AT_SIGNATURE"
KeyUsage = "CERT_DIGITAL_SIGNATURE_KEY_USAGE"
RequestType = Cert
SMIME = False
ValidityPeriod = Years
ValidityPeriodUnits = 2

[EnhancedKeyUsageExtension]
OID = 1.3.6.1.5.5.7.3.3

The important line above is the RequestType value. That tells certreq.exe to create a self-signed certificate. Along with that value, the ValidityPeriod and ValidityPeriodUnits values allow you to specify the lifetime of the self-signed certificate.

Once you create the .INF file, run the following command:

Certreq -new selfsigned.inf selfsigned.crt

This will take your .INF file and generate a new self-signed certificate that you can use for testing.
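As an aside, if you are already running Windows 8 or Windows Server 2012, the new PKI module for Windows PowerShell gives you yet another option. A minimal sketch (the DNS name is just an example, and note that in the 2012 version of the cmdlet this produces an SSL-style test certificate rather than a code signing one):

# Sketch: create a self-signed test certificate (2048-bit RSA by default) in the local machine store
New-SelfSignedCertificate -DnsName "test.contoso.com" -CertStoreLocation Cert:\LocalMachine\My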

Ok, so this was supposed to be a short post pointing to where you need to go, but it turns out that I had some other related stuff. The important message here is go read the Security Advisory and the KB article.

Go read the Security Advisory and the KB article.

Ex pace.

Jonathan “I am the Key Master” Stephens

Updated Group Policy Search service


Mike here with an important service announcement.  In June of 2010, guest poster Kapil Mehra introduced the Group Policy Search service.  The Group Policy Search (GPS) service is a web application hosted on Windows Azure, which enables you to search for registry-based Group Policy settings used in Windows operating systems.

It’s a "plezz-shzaa" to announce that GPS version 1.1.4 is live at http://gps.cloudapp.net.  Version 1.1.4 includes registry-based policy settings from Windows 8 and Windows Server 2012, performance improvements, bug fixes, and a few little surprises.  It's the easiest way to search for a Group Policy setting. 

So, the next time you need to search for a Group Policy setting, or want to know the registry key and value name that backs a particular policy setting-- don't look for an antiquated settings spreadsheet reference.  Get your Group Policy Search on!!

And, if you act now-- we'll throw in the Group Policy Search Windows Phone 7 application-- for free! That's right, take Group Policy Search with you on the go. What an offer! Group Policy Search and Group Policy Search Windows Phone 7 application -- for one low, low price -- FREE!  Act now and you'll get free shipping.

This is Mike Stephens and "Ned Pyle" approves this message!

Let the Blogging begin…


Hello AskDS Readers. Mike here again. If you notice, Ned posted one of our first Windows Server 2012 RTM blogs a while back (Managing RID Issuance in Windows Server 2012). Yes friends, the gag order has been lifted and we are allowed to spout mountains of technical goodness about Windows Server 2012 and Windows 8.

"So much time and so little to do. Wait a minute. Strike that. Reverse it." Windows Server 2012 has many cool features that Ned and I have been waiting to share with you. Here is a 50,000-foot view of the technologies and features we are going to blog in the next few weeks and months-- in no specific order.

I'll start by highlighting some of the changes with security, PKI, authentication, and authorization. The Windows Server 2012 Certificate Services role has a few feature changes that should delight many of the certificate administrators out there. With new installation, deployment, and improved configuration-- it's probably the easiest certificate authority to configure.

Windows Server 2012 authentication is a healthy technology with a ton of technical goo just seeping at the seams, starting with the mac-daddy of them all-- Kerberos. In a few weeks, we will begin publishing the first of many installments on Kerberos changes in Windows 8/Windows Server 2012. As a teaser, the lineup includes KDC Proxy Server and the latest and greatest way to configure Kerberos Constrained Delegation-- "It really whips the lama's @#%." We'll take some exhaustive time explaining Kerberos enhancements such as Kerberos Armoring and Compound Identity. We have tons more to share in the area of authentication, including Virtual Smartcard Readers and Picture Password logon.

Advanced client security highlights features like Server Name Indication (SNI) for Windows Server 2012, Certificate Lifecycle Notification, Weak Key Protection (most of which is covered in Jonathan Stephens' latest blog, RSA Key Blocking is Here!), Implicit binding, which is the infrastructure behind the new Centralized Certificate Store IIS feature, and Client certificate hints. Advanced client security also includes a wicked-cool security enhancement to PFX files and a new PKI module for Windows PowerShell.

At some point in our publishing timeline, we'll launch into the saga of all sagas, Dynamic Access Control. We've hosted guest posts here on AskDS to introduce this radical, amazingly cool new way to perform file-based authorization. This isn't your grandfather's authorization either. Dynamic Access Control or DAC as we’ll call it, requires planning, diligence, and an understanding of many dependencies, such as Active Directory, Kerberos, and effective access. Did I mention there are many knobs you must turn to configure it? No worries though, we'll break DAC down into consumable morsels that should make it easy for everyone to understand.

The concept of claims continues by showing you how to use Windows Server 2012's Active Directory Federation Services role to leverage claims issued by Windows domain controllers. Using AD FS, you can pass-through the Windows authorization claims or transform them into well-known SAML-based claim types.

No, I'm not done yet. I'm going to introduce a well-hidden feature that hasn't received much exposure, but has been labeled "pretty cool" by many training attendees. Access Denied Assistance is a gem of a feature that is locked away within File Server Resource Manager (FSRM). It enables you to provide a SharePoint-like experience for users in Windows Explorer when they encounter an access denied or file not found error on a shared file or folder. Access Denied Assistance provides the user with a "Request Access" interface that sends an email to the share owner with details on the access requested and guidance the share owner can follow to remediate the problem. It's very slick.

Wait, there is more; this is just my list of topics to cover. Ned has a fun-bag full of Active Directory related material that he'll intermix with these topics to keep things fresh. I'm certain we'll sneak in a few extras that may not be directly related to Directory Services; however, they will help you make your Windows Server 2012 and Windows 8 experience much better. Need to run for now; this blog post just wrote checks my body can't cash.

The line above and below this were intentionally left blank using Microsoft Word 2013 Preview Edition

Mike "There's no earthly way of knowing; which direction they are going... There's no knowing where they're rowing..." Stephens

MaxTokenSize and Windows 8 and Windows Server 2012


Hello AskDS Populous, Mike here and I want to share with you some of the excellent enhancements we accomplished in Windows 8 and Windows Server 2012 around MaxTokenSize. Let’s review MaxTokenSize and its symptoms before we jump into the wonderful world of Windows 8 (say that three times fast).

Wonderful World of Windows 8
Wonderful World of Windows 8
Wonderful World of Windows 8

What is MaxTokenSize

Kerberos has been the default and preferred authentication protocol since the release of Windows 2000 Server. Over the last few years, Microsoft has made some significant investments in providing extensions to the protocol. One of those extensions to Kerberos is the Privilege Attribute Certificate, or PAC (defined in the Windows Server protocol specification MS-PAC).

Microsoft created the PAC to encapsulate authorization related information in a manner consistent with RFC 4120. The authorization information included in the PAC includes security identifiers and user profile information such as full name, home directory, and bad password count. The security identifiers (SIDs) included in the PAC represent the user's current SID, any SID history entries, and security group memberships, covering current domain groups, resource domain groups, and universal groups.

Kerberos uses a buffer to store authorization information and reports this size to applications using Kerberos for authentication. MaxTokenSize is the size of the buffer used to store authorization information. This buffer size is important because some protocols, such as RPC and HTTP, use it when they allocate memory for authentication. If the authorization data for a user attempting to authenticate is larger than the MaxTokenSize, then authentication fails for that connection using that protocol. This explains why authentication failures resulted when authenticating to IIS but not when authenticating to a folder shared on a file server. The default buffer size for Kerberos in Windows 7 and Windows Server 2008 R2 is 12k.
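To put those numbers in perspective, KB 327825 has long offered a rough estimation formula (quoted here from memory, so treat it as an approximation; d is the number of domain local groups plus universal groups outside the account domain plus SID history entries, and s is the number of global and universal groups in the account domain):

TokenSize = 1200 + 40d + 8s

For example, a user in 100 global groups and 50 domain local groups needs roughly 1200 + (40 x 50) + (8 x 100) = 4000 bytes, comfortably inside the 12k default; it takes membership in hundreds of groups before the default buffer is at risk.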

Windows 8 and Windows Server 2012

Let's face the facts of today's IT environment… authentication and authorization is not getting easier; it's becoming more complex. In the world of single sign-on and user claims, the amount of authorization data is increasing. Increasing authorization data in an infrastructure that has already had its experiences with authentication failures because a user was a member of too many groups justifies some concern for the future. Fortunately, Windows 8 and Windows Server 2012 have features to help us take proactive measures to avoid the problem.

Default MaxTokenSize

Windows 8 and Windows Server 2012 benefit from an increased MaxTokenSize of 48k. Therefore, when HTTP relies on the MaxTokenSize value for memory allocation, it allocates 48k of memory for the authentication buffer, which holds substantially more authorization information than in previous versions of Windows, where the default MaxTokenSize was only 12k.

Group Policy settings

Windows 8 and Windows Server 2012 introduce two new computer-based policy settings that help combat large service tickets, which are the cause of the MaxTokenSize dilemma. The first of these policy settings is not exactly new-- it has been in Windows for years, but only as a registry value. Use the policy setting Set maximum Kerberos SSPI context token buffer size to change the MaxTokenSize using Group Policy. Looking closely at this policy setting in the Group Policy Management Editor, you'll notice the icon for this setting is slightly different from the others around it.


This difference is attributed to the registry location the policy setting modifies when enabled or disabled. This policy setting writes to the actual MaxTokenSize registry key and value name that has been used in earlier versions of Windows:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters\MaxTokenSize

Therefore, you can use this computer-based policy setting to manage Windows 8, Windows Server 2012, and earlier versions of Windows. The catch here is that this registry location is not a managed policy location. Managed policy locations are removed and reapplied during policy refreshes to avoid persistent settings in the registry after the settings in a Group Policy object fall out of scope. That behavior does not occur with this key, as the setting applied by this policy setting is not removed during policy application. Therefore, the setting persists even if the Group Policy object providing it falls out of scope.
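If you need to make the same change outside of Group Policy, say on a one-off server, here is a minimal PowerShell sketch that writes the value directly (48k shown here; the key may not exist yet on some systems, and a reboot is the safest way to ensure everything picks up the new size):

# Sketch: set MaxTokenSize to 48k (49152 bytes) directly in the registry
$path = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters'
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
New-ItemProperty -Path $path -Name MaxTokenSize -PropertyType DWord -Value 49152 -Force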

The second policy setting is very cool and answers the question that customers always asked when they encounter a problem with MaxTokenSize: "How big is the token?" You might be one of those people that went on the crusade of a lifetime using TOKENSZ.EXE and spent countless hours trying to determine the optimal MaxTokenSize for your environment. Those days are gone.

A new KDC policy setting, Warning events for large Kerberos tickets, provides you with a way to monitor the size of Kerberos tickets issued by KDCs. When you enable this policy setting, you then must configure a ticket threshold size. The KDC uses the ticket threshold size to determine if it should write a warning event to the system event log. If the KDC issues a ticket that exceeds the ticket threshold size, then it writes a warning. This policy setting, when enabled, defaults to 12k, which is the default MaxTokenSize of previous versions of Windows.


Ideally, if you use this policy setting, then you'd likely want to set the ticket threshold value to approximately 1k less than your current MaxTokenSize. You want it lower than your current MaxTokenSize (unless you are using 12k, which is the minimum value) so you can use the warning events as a proactive measure to avoid an authentication failure due to an incorrectly sized buffer. Set the threshold too low and you'll just train yourself to ignore the Event 31 warnings because they'll become noise in the event log. Set it too high and you're likely to be blindsided with authentication failures rather than warning events.


Earlier I said that this policy setting solves your problems with fumbling with TOKENSZ and other utilities to determine MaxTokenSize-- here's how. If you examine the details of the Kerberos-Key-Distribution-Center Warning event ID 31, you'll notice that it gives you all the information you need to determine the optimal MaxTokenSize in your environment. In the following example, the user Ned is a member of over 1000 groups (he's very popular and a big deal on the Internet). When I attempted to log on as Ned using the RUNAS command, the KDC generated an Event ID 31. The event description provides you with the service principal name, the user principal name, the size of the ticket requested, and the size of the threshold. This enables you to aggregate all the event 31s and identify the maximum ticket size requested. Armed with this information, you can set the optimal MaxTokenSize for your environment.
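If you would rather mine those warnings in bulk than click through Event Viewer, a short sketch with Windows PowerShell (this assumes the KDC writes event ID 31 to the System log on the domain controller, and dc01 is a placeholder name):

# Sketch: collect all "large Kerberos ticket" warnings (event ID 31) from a DC
Get-WinEvent -ComputerName dc01 -FilterHashtable @{ LogName = 'System'; Id = 31 } |
    Select-Object TimeCreated, Message | Format-List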


KDC Resource SID Compression

Kerberos authentication inserts the security identifiers (SIDs) of the security principal, SID history, and all the groups to which the user is a member, including universal groups and groups from the resource domain. Security principals with too many group memberships greatly affect the size of the authentication data. Sometimes the authentication data is larger than the allocated size reported by Kerberos to applications, which can cause authentication failures in some applications. Because SIDs from the resource domain all share the same domain portion of the SID, these SIDs can be compressed by providing the resource domain SID only once for all SIDs in the resource domain.

Windows Server 2012 KDCs help reduce the size of the PAC by taking advantage of resource SID compression. By default, a Windows Server 2012 KDC will always compress resource SIDs. To compress resource SIDs, the KDC stores the SID of the resource domain to which the target resource is a member. Then, it inserts only the RID portion of each resource SID into the ResourceGroupIds portion of the authentication data.

Resource SID Compression reduces the size of each stored instance of a resource SID because the domain SID is stored once rather than with each instance. Without resource SID Compression, the KDC inserts all the SIDs added by the resource domain in the Extra-SID portion of the PAC structure, which is a list of SIDs.  [MS-KILE]

Interoperability

Other Kerberos implementations may not understand resource group compression and therefore are not compatible. In these scenarios, you may need to disable resource group compression to allow the Windows Server 2012 KDC to interoperate with the third-party Kerberos implementation.

Resource SID compression is on by default; however, you can disable it. You disable resource SID compression on a Windows Server 2012 KDC using the DisableResourceGroupsFields registry value under the HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kdc\Parameters registry key. This is a DWORD value. Setting it to 1 completely disables resource SID compression. The KDC reads this configuration when building a service ticket; with the value set, it does not use resource SID compression when building the ticket.
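For illustration, here is the same change scripted in PowerShell, using the registry path given above (a sketch; test against your third-party Kerberos implementation before deploying this broadly):

# Sketch: disable resource SID compression on a Windows Server 2012 KDC
$path = 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kdc\Parameters'
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
New-ItemProperty -Path $path -Name DisableResourceGroupsFields -PropertyType DWord -Value 1 -Force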

Wrap up

There's the skinny on the Kerberos enhancements included in Windows 8 and Windows Server 2012 that specifically target large service ticket and MaxTokenSize scenarios. To summarize:

  • Increased default MaxTokenSize from 12k to 48k
  • New Group Policy setting to centrally manage MaxTokenSize
  • New Group Policy setting to write warnings to the system event log when a service ticket exceeds a designated threshold
  • New resource SID compression to reduce the storage size of SIDs from the resource domain

Keep an eye out for more Windows 8 and Kerberos needful.

Mike "~Mike" Stephens

Windows Server 2012 Shell game


Here's the scenario: you just downloaded the RTM ISO for Windows Server 2012 using your handy, dandy, "wondermus" Microsoft TechNet subscription. Using Hyper-V, you create a new virtual machine, mount the ISO, and breeze through the setup screen until you are mesmerized by the Newton's cradle-like experience of the circular progress indicator.


Click…click…click…click-- installation complete; the computer reboots.

You provide Windows Server with a new administrator password. Bam: done! Windows Server 2012 presents the credential provider screen and you logon using the newly created administrator account, and then…

Holy Shell, Batman! I don't have a desktop!


Hey everyone, Mike here again to bestow some Windows Server 2012 lovin'. The previously described scenario is not hypothetical-- many have experienced it when they installed the pre-release versions of Windows Server 2012. And it is likely to resurface as we move past Windows Server 2012 general availability on September 4. If you are new to Windows Server 2012, then you're likely one of those people staring at a command prompt window on your fresh installation. The reason you are staring at a command prompt is that Windows Server 2012's installation defaults to Server Core, and in your haste to try out our latest bits, you breezed right past the option to change it.

This may be old news for some of you, but it is likely that one or more of your colleagues is going to perform the very actions that I describe here. This is actually a fortunate circumstance as it enables me to introduce a new Windows Server 2012 feature.


There were two server installation types prior to Windows Server 2012: full and core. Core servers provide a low attack surface by removing the Windows Shell and Internet Explorer completely. However, they presented quite a challenge for many Windows administrators, as Windows PowerShell and command-line utilities were the only methods available to manage the server and its roles locally (you could use most management consoles remotely).

Those same two server installation types return in Windows Server 2012; however, we have added a third installation type: Minimal Server Interface. Minimal Server Interface enables most local graphical user interface management tasks without requiring you to install the server's user interface or Internet Explorer. Minimal Server Interface is a full installation of Windows that excludes:

  • Internet Explorer
  • The Desktop
  • Windows Explorer
  • Windows 8-style application support
  • Multimedia support
  • Desktop Experience

Minimal Server Interface gives Windows administrators who are not comfortable using Windows PowerShell as their only option the benefits of a reduced attack surface and fewer reboot requirements (for example, on Patch Tuesday), while keeping GUI management available as they ramp up their Windows PowerShell skills.


"Okay, Minimal Server Interface seems cool Mike, but I'm stuck at the command prompt and I want graphical tools. Now what?" If you were running an earlier version of Windows Server, my answer would be reinstall. However, you're running Windows Server 2012; therefore, my answer is "Install the Server Graphical Shell or Install Minimal Server Interface."

Windows Server 2012 enables you to change the shell installation option after you've completed the installation. This solves the problem if you are staring at a command prompt. However, it also solves the problem if you want to keep your attack surface low, but are simply a Windows PowerShell guru in waiting. You can choose Minimal Server Interface, or you can decide to add the Server Graphical Interface for a specific task, and then remove it when you have completed that management task (understand, however, that switching between these installation options requires you to restart the server).

Another scenario solved by the ability to add the Server Graphical Shell is that not all server-based applications work correctly on Server Core, or you cannot manage them on Server Core. Windows Server 2012 enables you to try the application on Minimal Server Interface and, if that does not work, you can change the server installation to include the Graphical Shell, which is the equivalent of the Server GUI installation option during setup (the one you breezed by during the initial setup).

Removing the Server Graphical Shell and Graphical Management Tools and Infrastructure

Removing the Server shell from a GUI installation of Windows is amazingly easy. Start Server Manager, click Manage, and click Remove Roles and Features. Select the target server and then click Features. Expand User Interfaces and Infrastructure.

To reduce a Windows Server 2012 GUI installation to a Minimal Server Interface installation, clear the Server Graphical Shell checkbox and complete the wizard. To reduce a Windows Server GUI installation to a Server Core installation, clear the Server Graphical Shell and Graphical Management Tools and Infrastructure check boxes and complete the wizard.


Alternatively, you can perform these same actions using the Server Manager module for Windows PowerShell, and it is probably a good idea to learn how to do this. I'll give you two reasons why: It's wicked fast to install and remove features and roles using Windows PowerShell and you need to learn it in order to add the Server Shell on a Windows Core or Minimal Server Interface installation.

Use the following command to view a list of the Server GUI components:


Get-WindowsFeature server-gui*

Give your attention to the Name column. You use this value with the Remove-WindowsFeature and Install-WindowsFeature PowerShell cmdlets.

To remove the server graphical shell, which reduces the GUI server installation to a Minimal Server Interface installation, run:

Remove-WindowsFeature Server-Gui-Shell

To remove the Graphical Management Tools and Infrastructure, which further reduces a Minimal Server Interface installation to a Server Core installation, run:

Remove-WindowsFeature Server-Gui-Mgmt-Infra

To remove the Graphical Management Tools and Infrastructure and the Server Graphical Shell, run:

Remove-WindowsFeature Server-Gui-Shell,Server-Gui-Mgmt-Infra

Adding Server Graphical Shell and Graphical Management Tools and Infrastructure

Adding Server Shell components to a Windows Server 2012 Core installation is a tad more involved than removing them. The first thing to understand with a Server Core installation is that the actual binaries for the Server Shell do not reside on the computer. This is how a Server Core installation achieves a smaller footprint. You can determine if the binaries are present by using the Get-WindowsFeature Windows PowerShell cmdlet and viewing the Install State column. The Removed value indicates the binaries that represent the feature do not reside on the hard drive. Therefore, you need to add the binaries to the installation before you can install them. Another indicator that the binaries do not exist in the installation is the error you receive when you try to install a feature that is removed. The Install-WindowsFeature cmdlet will proceed along as if it is working and then spend a lot of time around 63-68 percent before returning an error stating that it could not add the feature.
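Before staging anything, you can check where you stand with a quick sketch (feature names as used earlier in this post):

# Sketch: check whether the GUI feature binaries are Installed, Available, or Removed
Get-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell | Select-Object Name, InstallState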


To stage Server Shell features to a Windows Core Installation

You need to get out your handy, dandy media (or ISO) to stage the binaries into the installation. Windows installation files are stored in WIM files that are located in the \sources folder of your media. There are two .WIM files on the media. The WIM you want to use for this process is INSTALL.WIM.


You use DISM.EXE to display the installation images and their indexes that are included in the WIM file. There are four images in the INSTALL.WIM file. Images with the indexes 1 and 3 are Server Core installation images for Standard and Datacenter, respectively. Images with the indexes 2 and 4 are GUI installations of Standard and Datacenter, respectively. Two of these images contain the GUI binaries and two do not. To stage these binaries to the current installation, you need to use index 2 or 4 because these images contain the Server GUI binaries. An attempt to stage the binaries using index 1 or 3 will fail.
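For example, assuming your media is mounted as drive D:, the command to list the images looks like this:

dism /get-wiminfo /wimfile:d:\sources\install.wim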

You still use the Install-WindowsFeature cmdlet to stage the binaries to the computer; however, we are going to use the -source argument to tell Install-WindowsFeature which image and index it should use to stage the Server Shell binaries. To do this, we use a special path syntax that indicates the binaries reside in a WIM file. The Windows PowerShell command should look like:

Install-WindowsFeature server-gui-mgmt-infra,server-gui-shell -source:wim:d:\sources\install.wim:4

Pay particular attention to the path supplied to the -source argument. You need to prefix the path to your installation media's install.wim file with the keyword wim: and suffix the path with :4, which represents the image index to use for the installation. You must always use an index of 2 or 4 to install the Server Shell components. The command should exhibit the same behavior as the previous one and proceeds up to about 68 percent, at which point it will stay at 68 percent for quite a bit (if it is working). Typically, if there is a problem with the syntax or the command, it will error within two minutes of spinning at 68 percent. This process stages all the graphical user interface binaries that were not installed during the initial setup, so give it a bit of time. When the command completes successfully, it should instruct you to restart the server. You can do this from Windows PowerShell by running the Restart-Computer cmdlet.


Give the next reboot more time. It is actually updating the current Windows installation, making all the other components aware the GUI is available. The server should reboot and inform you that it is configuring Windows features and is likely to spend some time at 15 percent. Be patient and give it time to complete. Windows should reach about 30 percent and then will restart.


It should return to the Configuring Windows features screen with the progress around 45 to 50 percent (these are estimates). The process should continue until 100 percent and then should show you the Press Ctrl+Alt+Delete to sign in screen.


Done

That's it. Consider yourself informed. The next time one of your colleagues gazes at their accidental Windows Server 2012 Server Core installation with that deer-in-the-headlights look, you can whip out your mad Windows PowerShell skills and turn that Server Core installation into a Minimal Server Interface or Server GUI installation in no time.

Mike

"Voilà! In view, a humble vaudevillian veteran, cast vicariously as both victim and villain by the vicissitudes of Fate. This visage, no mere veneer of vanity, is a vestige of the vox populi, now vacant, vanished. However, this valorous visitation of a by-gone vexation, stands vivified and has vowed to vanquish these venal and virulent vermin van-guarding vice and vouchsafing the violently vicious and voracious violation of volition. The only verdict is vengeance; a vendetta, held as a votive, not in vain, for the value and veracity of such shall one day vindicate the vigilant and the virtuous. Verily, this vichyssoise of verbiage veers most verbose, so let me simply add that it's my very good honor to meet you and you may call me V."

Stephens

....And knowing is half the battle!


Revenge of Y2K and Other News


Hello sports fans!

So this has been a bit of a hectic time for us, as I'm sure you can imagine. Here's just some of the things that have been going on around here.

Last week, thanks to a failure on the time servers at USNO.NAVY.MIL, many customers experienced a time rollback to CY 2000 on their Active Directory domain controllers. Our team worked closely with the folks over at Premier Field Engineering to explain the problem, document resolutions for the various issues that might arise, and describe how to inoculate your DCs against a similar problem in the future. If you were affected by this problem then you need to read this post. If you weren't affected, and want to know why, then you need to read this post. Basically, we think you need to read this post. So...here's the link to the AskPFEPlat blog.

In other news, Ned Pyle has successfully infiltrated the Product Group and has started blogging on The Storage Team blog. His first post is up, and I'm sure there will be many more to follow. If you've missed Ned's rare blend of technical savvy and sausage-like prose, and you have an interest in Microsoft's DFSR and other storage technologies, then go check him out.

Finally...you've probably noticed the lack of activity here on the AskDS blog. Truthfully, that's been the result of a confluence of events -- Ned's departure, the Holiday season here in the US, and the intense interest in Windows 8 and Windows Server 2012 (and subsequent support calls). Never fear, however! I'm pleased to say that your questions to the blog have been coming in quite steadily, so this week I'll be posting an omnibus edition of the Mail Sack. We also have one or two more posts that will go up between now and the end of the year, so there's that to look forward to. Starting with the new calendar year, we'll get back to a semi-regular posting schedule as we get settled and build our queue of posts back up.

In the mean time, if you have questions about anything you see on the blog, don't hesitate to contact us.

Jonathan "time to make the donuts" Stephens

Intermittent Mail Sack: Must Remember to Write 2013 Edition


Hi all, Jonathan here again with the latest edition of the Intermittent Mail Sack. We've had some great questions over the last few weeks, so I've got a lot of material to cover. This sack, we answer questions on DFSR hub server upgrades, AD FS sign-out behavior, Dynamic Access Control and replication, machine account password changes, cached logon credentials, DES encryption in Kerberos, and certificate renewal with CEP/CES.

Before we get started, however, I wanted to share information about a new service available to Premier customers through Microsoft Services Premier Support. Many Premier customers will be familiar with the Risk Assessment Program (RAP). Premier Support is now rolling out an online offering called the RAP as a Service (or RaaS for short). Our colleagues over on the Premier Field Engineering (PFE) blog have just posted a description of the new offering, and I encourage you to check it out. I've been working on the Active Directory RaaS offering since the early beta, and we've gotten really good feedback. Unfortunately, the offering is not yet available to non-Premier customers; look at RaaS as yet one more benefit to a Premier Support contract.

 

Now on to the Mail Sack!

Question

I'm considering upgrading my DFSR hub servers to Server 2012. Is there anything I should know before I hit the easy button and do an upgrade?

Answer

The most important thing to note is that Microsoft strongly discourages mixing Windows Server 2012 and legacy operating system DFSR. You just mentioned upgrading your hub servers, and make no mention of any branch servers. If you're going to upgrade your DFSR servers then you should upgrade all of them.

Check out Ned's post over on the FileCab blog: DFS Replication Improvements in Windows Server. Specifically, review the section that discusses Dynamic Access Control Support.

Also, there is a minor issue that has been found that we are still tracking. When you upgrade from Windows Server 2008 R2 to Windows Server 2012, the DFS Management snap-in stops working. The workaround is to just uninstall and then reinstall the DFS Management tools, which you can do with PowerShell:

Uninstall-WindowsFeature -name RSAT-DFS-Mgmt-Con
Install-WindowsFeature -name RSAT-DFS-Mgmt-Con

 

Question

From our SharePoint site, when users click on log-off then they get sent to this page: https://your_sts_server/adfs/ls/?wa=wsignout1.0.

We configured the FedAuth cookie to be session based after we did this:

$sts = Get-SPSecurityTokenServiceConfig 
$sts.UseSessionCookies = $true 
$sts.Update() 

 

The problem is, unless the user closes all their browsers, when they go to the log-in page the browser remembers their credentials. This is not acceptable because some PCs are shared by multiple people. Also, closing all browsers is not acceptable, as users run multiple web applications.

Answer

(Courtesy of Adam Conkle)

Great question! I hope the following details help you in your deployment:

Moving from a persistent cookie to a session cookie with SharePoint 2010 was the right move in this scenario in order to guarantee that closing the browser window would terminate the session with SharePoint 2010.

When you sign out via SharePoint 2010 and are redirected to the STS URL containing the query string: wa=wsignout1.0, this is what we call a WS-Federation sign-out request. This call is sufficient for signing out of the STS as well as all relying parties signed into during the session.

However, what you are experiencing is expected behavior for how Integrated Windows Authentication (IWA) works with web browsers. If your web browser client experienced either a no-prompt sign-in (using Kerberos authentication for the currently signed-in user) or an NTLM prompted sign-in (credentials provided in a Windows Authentication "401" credential prompt), then the browser will remember the Windows credentials for that host for the duration of the browser session.

If you were to collect an HTTP headers trace (Fiddler, HTTPWatch, etc.) of the current scenario, you will see that the wa=wsignout1.0 request is actually causing AD FS and SharePoint 2010 (and any other RPs involved) to clean up their session cookies (MSISAuth and FedAuth) as expected. The session is technically ending the way it should during sign-out. However, if the client keeps the current browser session open, browsing back to the SharePoint site will cause a new WS-Federation sign-in request to be sent to AD FS (wa=wsignin1.0). When the sign-in request is sent to AD FS, AD FS will attempt to collect credentials with an HTTP 401, but, this time, the browser has a set of Windows credentials ready to provide to that host.

The browser provides those Windows credentials without a prompt shown to the user, and the user is signed back into AD FS, and, thus, is signed back into SharePoint 2010. To the naked eye, it appears that sign-out is not working properly, while, in reality, the user is signing out and then signing back in again.

To conclude, this is by-design behavior for web browser clients. There are two workarounds available:

Workaround 1

Switch to forms-based authentication (FBA) for the AD FS Federation Service. The following article details this quick and easy process: AD FS 2.0: How to Change the Local Authentication Type

Workaround 2

Instruct your user base to always close their web browser when they have finished their session.

Question

Are the attributes for files and folders used by Dynamic Access Control replicated with the object? That is, using DFSR, if I replicate the file to another server which uses the same policy, will the file have the same effective permissions on it?

Answer

(Courtesy of Mike Stephens)

Let me clarify some aspects of your question as I answer each part.

When enabling Dynamic Access Control on files and folders there are multiple aspects to consider that are stored on the files and folders.

Resource Properties

Resource Properties are defined in AD and used as a template to stamp additional metadata on a file or folder that can be used during an authorization decision. That information is stored in an alternate data stream on the file or folder. This would replicate with the file, the same as the security descriptor.

Security Descriptor

The security descriptor replicates with the file or folder. Therefore, any conditional expression would replicate in the security descriptor.

All of this occurs outside of Dynamic Access Control -- it is a result of replicating the file throughout the topology, for example, if using DFSR. Central Access Policy has nothing to do with these results.

Central Access Policy

Central Access Policy is a way to distribute permissions without writing them directly to the DACL of a security descriptor. So, when a Central Access Policy is deployed to a server, the administrator must then link the policy to a folder on the file system. This linking is accomplished by inserting a special ACE in the auditing portion of the security descriptor that informs Windows that the file/folder is protected by a Central Access Policy. The permissions in the Central Access Policy are then combined with Share and NTFS permissions to create an effective permission.

If a file/folder is replicated to a server that does not have the Central Access Policy deployed to it, then the Central Access Policy is not valid on that server. The permissions would not apply.

Question

I read the post located here regarding the machine account password change in Active Directory.

Based on what I read, if I understand this correctly, the machine password change is generated by the client machine and not AD. I have been told (according to this post, inaccurately) that AD requires this password reset or the machine will be dropped from the domain.

I am a Macintosh systems administrator, and as you probably know, this issue does indeed occur on Mac systems.

I have reset the password reset interval to be various durations from fourteen days which is the default, to one day.

I have found that if I disjoin and rejoin the machine to the domain it will generate a new password and work just fine for 30 days. At that time, it will be dropped from the domain and have to be rejoined. This does not happen 100% of the time; however, it is often enough to be a problem for us, as we are a higher education institution which, in addition to our many PCs, also utilizes a substantial number of Macs. Additionally, we have a script which runs every 60 days to delete machine accounts from AD to keep it clean, so if the machine has been turned off for more than 60 days, the account no longer exists.

I know your forte is AD/Microsoft support, however I was hoping that you might be able to offer some input as to why this might fail on the Macs and if there is any solution which we could implement.

Other Mac admins have found workarounds like eliminating the need for the pw reset or exempting the macs from the script, but our security team does not want to do this.

Answer

(Courtesy of Mike Stephens)

Windows has a security policy feature named Domain member: Disable machine account password change, which determines whether the domain member periodically changes its computer account password. Typically, a Mac, Linux, or UNIX operating system uses some version of Samba to accomplish domain interoperability. I'm not familiar with these on the Mac; however, on Linux, you would use the command:

net ads changetrustpw 

 

By default, Windows machines initiate a computer password change every 30 days. You could schedule this command to run every 30 days once it completes successfully. Beyond that, basically we can only tell you how to disable the domain controller from accepting computer password changes, which we do not encourage.

Question

I recently installed a new server running Windows 2008 R2 (as a DC) and a handful of client computers running Windows 7 Pro. On a client, which is shared by two users (userA and userB), I see the following event on the Event Viewer after userA logged on.

Event ID: 45058 
Source: LsaSrv 
Level: Information 
Description: 
A logon cache entry for user userB@domain.local was the oldest entry and was removed. The timestamp of this entry was 12/14/2012 08:49:02. 

 

All is working fine. Both userA and userB are able to log on on the domain by using this computer. Do you think I have to worry about this message or can I just safely ignore it?

Fyi, our users never work offline, only online.

Answer

By default, a Windows operating system will cache 10 domain user credentials locally. When the maximum number of credentials is cached and a new domain user logs onto the system, the oldest credential is purged from its slot in order to store the newest credential. This LsaSrv informational event simply records when this activity takes place. Once the cached credential is removed, it does not imply the account cannot be authenticated by a domain controller and cached again.

The number of "slots" available to store credentials is controlled by:

Registry path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
Setting Name: CachedLogonsCount
Data Type: REG_SZ
Value: Default value = 10 decimal, max value = 50 decimal, minimum value = 1
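To check the current value on a given client, a one-line sketch:

# Sketch: read the configured cached logon count on this machine
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon').CachedLogonsCount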

Cached credentials can also be managed with group policy by configuring:

Group Policy Setting path: Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options.
Group Policy Setting: Interactive logon: Number of previous logons to cache (in case domain controller is not available)

The workstation must have connectivity with the domain, and the user must authenticate with a domain controller, in order to cache their credentials again once they have been purged from the system.

I suspect that your CachedLogonsCount value has been set to 1 on these clients, meaning that the workstation can only cache one user credential at a time.

Question

In Windows 7 and Windows Server 2008 R2, Kerberos DES encryption is disabled by default.

At what point will support for DES Kerberos encryption be removed? Does this happen in Windows 8 or Windows Server 2012, or will it happen in a future version of Windows?

Answer

DES is still available as an option on Windows 8 and Windows Server 2012, though it is disabled by default. It is too early to discuss the availability of DES in future versions of Windows right now.

There was an Advisory Memorandum published in 2005 by the Committee on National Security Systems (CNSS) where DES and all DES-based systems (3DES, DES-X) would be retired for all US Government uses by 2015. That memorandum, however, is not necessarily a binding document. It is expected that 3DES/DES-X will continue to be used in the private sector for the foreseeable future.

I'm afraid that we can't completely eliminate DES right now. All we can do is push it to the back burner in favor of newer and better algorithms like AES.

Question

I have two Issuing certification authorities in our corporate network. All our approved certificate templates are published on both issuing CAs. We would like to enable certificate renewals from Internet with our Internet-facing CEP/CES configured for certificate authentication in Certificate Renewal Mode Only. What we understand from the whitepaper is that it's not going to work when the CA that issues the certificate must be the same CA used for certificate renewal.

Answer

First, I need to correct an assumption made based on your reading of the whitepaper. There is no requirement that, when a certificate is renewed, the renewal request be sent to the same CA that issued the original certificate. This means that your clients can go to either enrollment server to renew the certificate. Here is the process for renewal:

  1. When the user attempts to renew their certificate via the MMC, Windows sends a request to the Certificate Enrollment Policy (CEP) server URL configured on the workstation. This request includes the template name of the certificate to be renewed.
  2. The CEP server queries Active Directory for a list of CAs capable of issuing certificates based on that template. This list will include the Certificate Enrollment Web Service (CES) URL associated with that CA. Each CA in your environment should have one or more instances of CES associated with it.
  3. The list of CES URLs is returned to the client. This list is unordered.
  4. The client randomly selects a URL from the list returned by the CEP server. This random selection ensures that renewal requests are spread across all returned CAs. In your case, if both CAs are configured to support the same template, then if the certificate is renewed 100 times, either with or without the same key, then that should result in a nearly 50/50 distribution between the two CAs.

The behavior is slightly different if one of your CAs goes down for some reason. In that case, should clients encounter an error when trying to renew a certificate against one of the CES URIs, the client will fail over and use the next CES URI in the list. By having multiple CAs and CES servers, you gain high availability for certificate renewal.

Other Stuff

I'm very sad that I didn't see this until after the holidays. It definitely would have been on my Christmas list. A little pricey, but totally geek-tastic.

This was also on my list, this year. Go Science!

Please do keep those questions coming. We have another post in the hopper going up later in the week, and soon I hope to have some Windows Server 2012 goodness to share with you. From all of us on the Directory Services team, have a happy and prosperous New Year!

Jonathan "13th baktun" Stephens

 

 

Friday Mail Sack: LeBron is not Jordan Edition


Hi folks, Ned here again. Today we discuss trust rules around domain names, attribute uniqueness, the fattest domains we’ve ever seen, USMT data-only migrations, kicking FRS while it’s down, and a few amusing side topics.

Scottie, don’t be that way. Go Mavs.

Question

I have two forests with different DNS names, but with duplicate NetBIOS names on the root domain. Can I create a forest (Kerberos) trust between them? What about NTLM trusts between their child domains?

Answer

You cannot create external trusts between domains with the same name or SID, nor can you create Kerberos trusts between two forests with the same name or SID. This includes both the NetBIOS and FQDN version of the name – even if using a forest trust where you might think that the NB name wouldn’t matter – it does. Here I am trying to create a trust between fabrikam.com and fabrikam.net forests – I get the super useful error:

“This operation cannot be performed on the current domain”


But if you are creating external (NTLM, legacy) trusts between two non-root domains in two forests, as long as the FQDN and NB name of those two non-root domains are unique, it will work fine. They have no transitive relationship.

So in this example:

  • You cannot create a domain trust nor a forest trust between fabrikam.com and fabrikam.net
  • You can create a domain (only) trust between left.fabrikam.com and right.fabrikam.net
  • You cannot create a domain trust between fabrikam.com and right.fabrikam.net
  • You cannot create a domain trust between fabrikam.net and left.fabrikam.com


Why don’t the last two work? Because the trust process thinks that the trust already exists due to the NetBIOS name match with the child’s parent. Arrrgh!


BUT!

You could still have serious networking problems in this scenario regardless of the trust. If there are two same-named domains physically accessible through the network from the same computer, there may be a lot of misrouted communication when people just use NetBIOS domain names. They need to make sure that no one ever has to broadcast NetBIOS to find anything – their WINS environments must be perfect in both forests and they should convert all their DFS to using DfsDnsConfig. Alternatively they could block all communication between the two root domains’ DCs, perhaps at a firewall level.

Note: I am presuming your NetBIOS domain name matches the left-most part of the FQDN name. Usually it does, but that’s not a requirement (and not possible if you are using more than 15 characters in that name).

Question

Is it possible to enforce uniqueness for the sAMAccountName attribute within the forest?

Answer

[Courtesy of Jonathan Stephens]

The Active Directory schema does not have a mechanism for enforcing uniqueness of an attribute. Those cases where Active Directory does require an attribute to be unique in either the domain (sAMAccountName) or forest (objectGUID) are enforced by other code – for example, AD Users and Computers won’t let you do it:


The only way you could actually achieve this is to have a custom user provisioning application that would perform a GC lookup for an account with a particular sAMAccountName, and would only permit creation of the new object should no existing object be found.
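A minimal sketch of what that provisioning check might look like with the Active Directory PowerShell module (the names here are examples, and port 3268 is the standard global catalog port):

# Sketch: query a global catalog for an existing sAMAccountName before creating the user
Import-Module ActiveDirectory
$gc = (Get-ADDomainController -Discover -Service GlobalCatalog).HostName[0]
$existing = Get-ADUser -Filter 'sAMAccountName -eq "saradavis"' -Server "${gc}:3268"
if ($existing) { throw "sAMAccountName is already in use somewhere in the forest" }
New-ADUser -Name 'Sara Davis' -SamAccountName 'saradavis'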

[Editor’s note: If you want to see what happens when duplicate user samaccountname entries are created, try this on for size in your test lab:

1. Enable AD Recycle Bin.
2. Create an OU called Sales.
3. Create a user called 'Sara Davis' with a logon name and pre-windows 2000 logon name of 'saradavis'.
4. Delete the user.
5. In the Users container, create a user called 'Sara Davis' with a logon name and pre-windows 2000 logon name of 'saradavis' (simulating someone trying to get that user back up and running by creating it new, like a help desk would do for a VIP in a hurry).
6. Restore the deleted 'Sara Davis' user back to her previous OU (this will work because the DN's do not match and the recreated user is not really the restored one), using:

get-adobject -filter 'samaccountname -eq "saradavis"' -includedeletedobjects | restore-adobject -targetpath "ou=sales,dc=consolidatedmessenger,dc=com"

(note the 'illegal modify operation' error).

7. Despite the above error, the user account will in fact be restored successfully and will now exist in both the Sales OU and the Users container, with the same sAMAccountName and userPrincipalName.
8. Logon as SaraDavis using the NetBIOS-style name.
9. Logoff.
10. Note in DSA.MSC how 'Sara Davis' in the Sales OU now has a 'pre-windows 2000' logon name of $DUPLICATE-<something>.
11. Note how both copies of the user have the same UPN.
12. Logon with the UPN name of saradavis@consolidatedmessenger.com and note that this attribute does not get mangled.

Fun, eh? – Ned]

Question

Which customer in the world has the most number of objects in a production AD domain?

Answer

Without naming specific companies - I have to protect their privacy - the single largest “real” domain I have ever heard of had ~8 million user objects and nearly nothing else. It was used as auth for a web system. That was back in Windows 2000 so I imagine it’s gotten much bigger since then.

I have seen two other customers (inappropriately) use AD as a quasi-SQL database, storing several hundred million objects in it as ‘transactions’ or ‘records’ of non-identity data, while using a custom schema. This scaled fine for size but not for performance, as they were constantly writing to the database (sometimes at a rate of hundreds of thousands of new objects a day) and the NTDS.DIT is - naturally - optimized for reading, not writing.  The performance overall was generally terrible as you might expect. You can also imagine that promoting a new DC took some time (one of them called about how initial replication of a GC had been running for 3 weeks; we recommended IFM, a better WAN link, and to stop doing that $%^%^&@).

For details on both recommended and finite limits, see:

Active Directory Maximum Limits - Scalability

http://technet.microsoft.com/en-us/library/active-directory-maximum-limits-scalability(WS.10).aspx

The real limit on objects created per DC is 2,147,483,393 (or 2^31 minus 255). The real limit on users/groups/computers (security principals) in a domain is 1,073,741,823 (or 2^30 - 1). If you find yourself getting close on the latter you need to open a support case immediately!

Question

Is a “Data Only” migration possible with USMT? I.e. no application settings or configuration is migrated, only files and folders.

Answer

Sure thing.

1. Generate a config file with:

scanstate.exe /genconfig:config.xml

2. Open that config.xml in Notepad, then search and replace “yes” with “no” (including the quotation marks) for all entries. Save that file. Do not delete the lines, or think that not including the config.xml has the same effect – that will lead to those rules processing normally.

3. Run your scanstate, including config.xml and NOT including migapp.xml. For example:

scanstate.exe c:\store /config:config.xml /i:migdocs.xml /v:5

Normally, your scanstate log should be jammed with entries around the registry data migration:

Processing Registry HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\AllowedDragImageExts [.ico]
Processing Registry HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\AllowedDragImageExts [.jfif]
Processing Registry HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\AllowedDragImageExts [.jpe]
Processing Registry HKCU\Control Panel\Accessibility\HighContrast
Processing Registry HKCU\Control Panel\Accessibility\HighContrast [Flags]
Processing Registry HKCU\Control Panel\Accessibility\HighContrast [High Contrast Scheme]

If you look through your log after using the steps above, none of those will appear.

You might also think that you could just rename the DLManifests and ReplacementManifests folders to get the same effect, and you’d almost be right. The problem is that Vista and Windows 7 also use the built-in %systemroot%\winsxs\manifests folders, and you certainly cannot remove those. Just go with the config.xml technique.

Question

After we migrate SYSVOL from FRS to DFSR on Windows Server 2008 R2, we still see that the FRS service is set to automatic. Is it OK to disable it?

Answer

Absolutely. Once an R2 server stops replicating SYSVOL with FRS, it cannot use that service for any other data. If you try to start the FRS service or replicate with it, the service will log events like these:

Log Name: File Replication Service
Source: NtFrs
Date: 1/6/2009 11:12:45 AM
Event ID: 13574
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: 7015-SRV-03.treyresearch.net
Description:
The File Replication Service has detected that this server is not a domain controller. Use of the File Replication Service for replication of non-SYSVOL content sets has been deprecated and therefore, the service has been stopped. The DFS Replication service is recommended for replication of folders, the SYSVOL share on domain controllers and DFS link targets.

Log Name: File Replication Service
Source: NtFrs
Date: 1/6/2009 2:16:14 PM
Event ID: 13576
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: 7015-srv-01.treyresearch.net
Description:
Replication of the content set "PUBLIC|FRS-REPLICATED-1" has been blocked because use of the File Replication Service for replication of non-SYSVOL content sets has been deprecated. The DFS Replication service is recommended for replication of folders, the SYSVOL share on domain controllers and DFS link targets.

We document this in the SYSVOL Replication Migration Guide but it’s easy to miss and a little confusing – this article applies to both R2 and Win2008, and Win2008 can still use FRS:

7. Stop and disable the FRS service on each domain controller in the domain unless you were using FRS for purposes other than SYSVOL replication. To do so, open a command prompt window and type the following commands, where <servername> is the Universal Naming Convention (UNC) path to the remote server:

Sc <servername> stop ntfrs

Sc <servername> config ntfrs start= disabled

Other stuff

Another week, another barrage of cloud.


(courtesy of the http://xkcd.com/ blog)

Finally… friggin’ Word. Play this at 720p, full screen.

 

----

Have a great weekend folks.

- Ned “waiting on yet more email from humor-impaired MS colleagues about his lack of professionalism” Pyle

Target Group Policy Preferences by Container, not by Group


Hello again AskDS readers, Mike here again. This post reflects on Group Policy Preference targeting items, specifically targeting by security groups. Targeting preference items by security groups is a bad idea. There is a better way for most environments to accomplish the same result, at a fraction of the cost.

Group Membership dependent

The world of Windows has been dependent on group membership for a long time. This dependency is driven by the way Windows authorizes access to resources. The computer or user must be a member of the group in order to access the printer or file server. Groups are and have been the bane of our existence. Nevertheless, we should not let group membership dominate all aspects of our design. One example where we can move away from using security groups is with Group Policy Preference (GPP) targeting.

Targeting by Security Group

GPP Targeting items control the scope of application for GPP items. Think of targeting items as Group Policy filtering on steroids, but they only apply to GPP items included in a Group Policy object. They introduce an additional layer of administration that provides more control over "how" GPP items apply to a specific user or computer.

image
Figure 1 - List of Group Policy Preference Targeting items

The most common scenario we see using the Security Group targeting item is with the Drive Map preference item. IT Professionals have been creating network drive mappings based on security groups since Moby Dick was a sardine-- it's what we do. The act is intuitive because we typically apply permissions to the group and add users to the group.

The problem with this is that not all applications determine group membership the same way. Also, the addition of Universal Groups and the numerous permutations of group nesting make this a complicated task. And let's not forget that some groups are implicitly added when you log on, like Domain Users, because it’s the designated primary group. Programmatically determining group membership is simple -- until it's implemented, and an implementation's performance is typically inversely proportional to its accuracy. It either takes a long time to get an accurate list, or a short time to get a somewhat accurate list.

Security Group Computer Targeting

Using GPP Security Group targeting for computers is a really bad idea. Here's why: in most circumstances, the application retrieves group memberships from a domain controller. This means network traffic from the client to the domain controller and back again. Using the network introduces latency. Latency introduces slow processing, and slow processing is the last thing you want when the computer is processing Group Policy. Also, Preference Targeting allows you to create complex targeting scenarios using Boolean operators such as AND, OR, and NOT. This is powerful stuff, and lets you combine what used to take one or more logon scripts into a single GPP item. However, the power comes at a cost. Remember that network traffic we created by making queries to the domain controller for group memberships? Well, that information is not cached; each Security Group targeting item in the GPO must perform that query again -- yes, the same one it just did. Don't hate, that's just the way it works. And this behavior does not take nested groups into account: you need to increase the number of round trips to the domain controller if you want to include groups of groups of groups, etcetera ad nauseam (trying to make my Latin word quota).

Security Group User Targeting

User Security Group targeting is not as bad as computer Security Group targeting. During user Security Group targeting, the Group Policy Preferences extension determines group membership from the user's authentication token. This process is more efficient and does not require round trips to the domain controller. One caveat with depending on group membership is the risk of the computer or user's group membership containing too many groups. Huh -- too many groups? Yes, this happens more often than many realize. Windows creates an authentication token from information in the Kerberos TGT. The Kerberos TGT has a finite amount of storage for this information. Users and computers with large group memberships (groups nested within groups…) can exhaust the finite storage available in the TGT. When this happens, the remaining group memberships are truncated, which creates the effect that the user is not a member of those groups. Groups truncated from the authentication token result in the computer or user not receiving a particular Group Policy Preference item.

You got any better ideas?

A better choice for targeting Group Policy Preference items is the Organizational Unit targeting item. It's da bomb!!! Let's look at how Organizational Unit targeting items work.

image
Figure 2 Organizational Unit Targeting item

The benefits of Organizational Unit targeting items

Organizational Unit targeting items determine OU container membership by parsing the distinguished name of the computer or user. So, they simply use string manipulation to determine which OUs are in scope for the user or computer. Furthermore, they can determine whether the computer or user is a direct member of an OU by simply looking for the first occurrence of OU immediately following the principal name in the distinguished name.
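Here is a minimal PowerShell sketch of those string tests -- the distinguished name and OU names are invented for illustration:

$dn = 'CN=7-01,OU=Workstations,OU=Sales,DC=contoso,DC=com'

# In scope of the Sales OU, anywhere up the hierarchy?
$dn -match ',OU=Sales,'

# A direct member of the Workstations OU? The first RDN after the
# principal name must be that OU.
$dn -match '^CN=[^,]+,OU=Workstations,'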

Computer Targeting using OUs

Computer Preference targeting with OUs still has to contact a domain controller. However, it’s an LDAP call, and because we are not chasing nested groups, it's quick and efficient. First, the preference client-side extension (CSE) gets the name of the computer. The CSE gets the name from the local computer, either from the environment variable or from the registry, in that order. The CSE then uses the name to look up the security identifier (SID) for the computer. Windows performs an LDAP bind to the computer object in Active Directory using the SID. The bind completes and retrieves the computer object's distinguished name. The CSE then parses the distinguished name as needed to satisfy the Organizational Unit targeting item.

User Targeting using OUs

User Preference targeting requires fewer steps because the client-side extension already knows the user's SID. The remaining work performed by the CSE is to LDAP bind to the user object using the user's SID and retrieve the distinguished name from the user object. Then, business as usual, the CSE parses the distinguished name to satisfy the Organizational Unit targeting item.

Wrap Up

So there you have it. The solution is clean and it takes full advantage of your existing Active Directory hierarchy. Alternatively, it could be the catalyst needed to start a redesign project. Understandably, this only works for Group Policy Preference items; however -- every little bit helps when consolidating the number of groups to which computers and users belong -- and it makes us a little less dependent on groups. Also, it's a better, faster, and more efficient alternative to Security Group targeting. So try it.

Update

We recently published a new article around behavior changes with Group Policy Preferences Computer Security Group Targeting.  Read more here.

- Mike "This is U.S. History; I see the globe right there" Stephens

 

 

What is the Impact of Upgrading the Domain or Forest Functional Level?


Hello all, Jonathan here again. Today, I want to address a question that we see regularly. As customers upgrade Active Directory, and they inevitably reach the point where they are ready to change the Domain or Forest Functional Level, they sometimes become anxious. Why is this necessary? What does this mean? What’s going to happen? How can this change be undone?

What Does That Button Do?

Before these questions can be properly addressed, it must first be understood exactly what purposes the Domain and Forest Functional Levels serve. Each new version of Active Directory on Windows Server incorporates new features that can only be taken advantage of when all domain controllers (DCs) in either the domain or forest have been upgraded to the same version. For example, Windows Server 2008 R2 introduces the AD Recycle Bin, a feature that allows the Administrator to restore deleted objects from Active Directory. In order to support this new feature, changes were made in the way that delete operations are performed in Active Directory, changes that are only understood and adhered to by DCs running on Windows Server 2008 R2. In mixed domains, containing both Windows Server 2008 R2 DCs as well as DCs on earlier versions of Windows, the AD Recycle Bin experience would be inconsistent, as deleted objects may or may not be recoverable depending on the DC on which the delete operation occurred. To prevent this, a mechanism is needed by which certain new features remain disabled until all DCs in the domain, or forest, have been upgraded to the minimum OS level needed to support them.

After upgrading all DCs in the domain, or forest, the Administrator is able to raise the Functional Level, and this Level acts as a flag informing the DCs, and other components as well, that certain features can now be enabled. You'll find a complete list of Active Directory features that have a dependency on the Domain or Forest Functional Level here:

Appendix of Functional Level Features
http://technet.microsoft.com/en-us/library/understanding-active-directory-functional-levels(WS.10).aspx

There are two important restrictions of the Domain or Forest Functional Level to understand, and once they are, these restrictions are obvious. Once the Functional Level has been upgraded, new DCs running on downlevel versions of Windows Server cannot be added to the domain or forest. The problems that might arise when installing downlevel DCs become pronounced with new features that change the way objects are replicated (i.e. Linked Value Replication). To prevent these issues from arising, a new DC must be at the same level, or greater, than the functional level of the domain or forest.

The second restriction, for which there is a limited exception on Windows Server 2008 R2, is that once upgraded, the Domain or Forest Functional Level cannot later be downgraded. The only purpose that having such ability would serve would be so that downlevel DCs could be added to the domain. As has already been shown, this is generally a bad idea.

Starting in Windows Server 2008 R2, however, you do have a limited ability to lower the Domain or Forest Functional Levels. The Windows Server 2008 R2 Domain or Forest Functional level can be lowered to Windows Server 2008, and no lower, if and only if none of the Active Directory features that require a Windows Server 2008 R2 Functional Level has been activated. You can find details on this behavior - and how to revert the Domain or Forest Functional Level - here.

What Happens Next?

Another common question: what impact does changing the Domain or Forest Functional Level have on enterprise applications like Exchange or Lync, or on third party applications? First, new features that rely on the Functional Level are generally limited to Active Directory itself. For example, objects may replicate in a new and different way, aiding in the efficiency of replication or increasing the capabilities of the DCs. There are exceptions that have nothing to do with Active Directory, such as allowing DFSR to replace NTFRS for SYSVOL replication, but there the dependency is on the version of the operating system. Regardless, changing the Domain or Forest Functional Level should have no impact on an application that depends on Active Directory.

Let's fall back on a metaphor. Imagine that Active Directory is just a big room. You don't actually know what is in the room, but you do know that if you pass something into the room through a slot in the locked door you will get something returned to you that you could use. When you change the Domain or Forest Functional Level, what you can pass in through that slot does not change, and what is returned to you will continue to be what you expect to see. Perhaps some new slots are added to the door, through which you can pass in different things and get back different things, but that is the extent of any change. How Active Directory actually processes the stuff you pass in to produce the stuff you get back, what happens behind that locked door, really isn't relevant to you.

If you carry this metaphor forward into the real world, if an application like Exchange uses Active Directory to store its objects, or to perform various operations, none of that functionality should be affected when the Domain or Forest Functional Level changes. In fact, if your applications are also written to take advantage of new features introduced in Active Directory, you may find that the capabilities of your applications increase when the Level changes.

The answer to the question about the impact of changing the Domain or Forest Functional Level is there should be no impact. If you still have concerns about any third party applications, then you should contact the vendor to find out if they tested the product at the proposed Level, and if so, with what result. The general expectation, however, should be that nothing will change. Besides, you do test your applications against proposed changes to your production AD, do you not? Discuss any issues with the vendor before engaging Microsoft Support.

Where’s the Undo Button?

Even after all this, however, there is a great concern about the change being irreversible, so that you must have a rollback plan just in case something unforeseen and catastrophic occurs to Active Directory. This is another common question, and there is a supported mechanism to restore the Domain or Forest Functional Level. You take a System State backup of one DC in each domain in the forest. To recover, flatten all the DCs in the forest, restore one for each domain from the backup, and then DCPROMO the rest back into their respective domains. This is a Forest Restore, and the steps are outlined in detail in the following guide:

Planning for Active Directory Forest Recovery
http://technet.microsoft.com/en-us/library/planning-active-directory-forest-recovery(WS.10).aspx

By the way, do you know how often we’ve had to help a customer perform a complete forest restore because something catastrophic happened when they raised the Domain or Forest Functional Level? Never.

Best Practices

What can be done prior to making this change to ensure that you have as few issues as possible? Actually, there are some best practices here that you can follow:

1. Verify that all DCs in the domain are, at a minimum, at the OS version to which you will raise the functional level. Yes… I know this sounds obvious, but you’d be surprised. What about that DC that you decommissioned but for which you failed to perform metadata cleanup? Yes, this does happen.
Another good one that is not so obvious is the Lost and Found container in the Configuration container. Is there an NTDS Settings object in there for some downlevel DC? If so, that will block raising the Domain Functional Level, so you’d better clean that up.

2. Verify that Active Directory is replicating properly to all DCs. The Domain and Forest Functional Levels are essentially just attributes in Active Directory. The Domain Functional Level for all domains must be properly replicated before you’ll be able to raise the Forest Functional level. This practice also addresses the question of how long one should wait to raise the Forest Functional Level after you’ve raised the Domain Functional Level for all the domains in the forest. Well…what is your end-to-end replication latency? How long does it take a change to replicate to all the DCs in the forest? Well, there’s your answer.

Best practices are covered in the following article:

322692 How to raise Active Directory domain and forest functional levels
http://support.microsoft.com/default.aspx?scid=kb;EN-US;322692

There, you’ll find some tools you can use to properly inventory your DCs, and validate your end-to-end replication.

Update: Woo, we found an app that breaks! It has a hotfix though (thanks Paolo!). Make sure you install this everywhere if you are using .NET 3.5 applications that implement the DomainMode enumeration function.

FIX: "The requested mode is invalid" error message when you run a managed application that uses the .NET Framework 3.5 SP1 or an earlier version to access a Windows Server 2008 R2 domain or forest  
http://support.microsoft.com/kb/2260240

Conclusion

To summarize, the Domain or Forest Functional Levels are flags that tell Active Directory and other Windows components that all DCs in the domain or forest are at a certain minimal level. When that occurs, new features that require a minimum OS on all DCs are enabled and can be leveraged by the Administrator. Older functionality is still supported, so any applications or services that used those functions will continue to work as before -- queries will be answered, domain or forest trusts will still be valid, and all should remain right with the world. This projection is supported by over eleven years of customer issues, not one of which involved a case where raising the Domain or Forest Functional Level was the root cause of the problem. In fact, there are only cases of a Domain or Forest Functional Level increase failing because the prerequisites had not been met; overwhelmingly, these cases end with the customer's Active Directory being successfully upgraded.

If you want to read more about Domain or Forest Functional Levels, review the following documentation:

What Are Active Directory Functional Levels?
http://technet.microsoft.com/en-us/library/cc787290(WS.10).aspx

Functional Levels Background Information
http://technet.microsoft.com/en-us/library/cc738038(WS.10).aspx

Jonathan “Con-Function Junction” Stephens

AskDS is 12,614,400,000,000,000 shakes old


It’s been four years and 591 posts since AskDS reached critical mass. You’d hope our party would look like this: 

image

But it’s more likely to be:

image

Without you, we’d be another of those sites that glow red hot, go supernova, then collapse into a white dwarf. We really appreciate your comments, questions, and occasional attaboys. Hopefully we’re good for another year of insightful commentary.

Thanks readers.

The AskDS Contributors

Improved Group Policy Preference Targeting by Computer Group Membership


Hello AskDS readers, it's Mike again talking about Group Policy Preference targeting items. I posted an article in June entitled Targeting Group Policy Preferences by Container, not by Group. This post highlighted the common problems many people encounter when targeting preferences items based on a computer's group membership, why the problem occurs, and some workarounds.

Today, I'd like to introduce a hotfix released by Microsoft that improves targeting preference items by computer group membership. The behavior before the hotfix potentially resulted in slow computer group policy application. The slowness was caused by the way Security Group targeting applies against a computer account. The targeting item makes multiple round trips to a domain controller to determine group memberships (including nested groups). The slowness is more significant when the computer applying the targeting item does not have a local domain controller and must use a domain controller across a WAN link.

You can download the hotfix for Windows 7 and Windows Server 2008 R2 through Microsoft Knowledge Base article 2561285. This hotfix changes how the Security Group targeting item calculates computer group membership. During policy application, the targeting item requests a copy of the computer's authentication token. This token is mostly identical to the token created during logon, which means it contains a list of security identifiers (SIDs) for every group of which the computer is a member, including nested groups. The targeting item performs the configured comparison against this list of SIDs in the token rather than making multiple LDAP calls to a domain controller. This aligns the behavior of computer security group targeting with that of user security group targeting, and should improve its performance.

Mike "Try, Try, Try Again" Stephens


Cluster and Stale Computer Accounts


Hi, Mike here again. Today, I want to write about a common administrative task that can lead to disaster: removing stale computer accounts from Active Directory.

Removing stale computer accounts is simply good hygiene-- it’s the brushing and flossing of Active Directory. Like tartar, computer accounts have the tendency to build up until they become a problem (difficult to identify and remove, and can lead to lengthy backup times).

Oops… my bad

Many environments separate administrative roles. The Active Directory administrator is not the Cluster Administrator. Each role holder performs their duties in a somewhat isolated manner-- the Cluster admins do their thing and the AD admins do theirs. The AD admin cares about removing stale computer accounts. The cluster admin does not… until the AD admin accidentally deletes a computer account associated with a functioning Failover Cluster because it looks like a stale account.

Unexpected deletion of a Cluster Name Object (CNO) or Virtual Computer Object (VCO) is one of the top issues worked by our engineers that support Clustering and High-Availability. Everyone does their job and boom -- clustered servers stop working because CNOs or VCOs are missing. What to do?

What's wrong here

I'll paraphrase an article posted on the Clustering and High-Availability TechNet blog that solves this scenario. Typically, domain admins key on two different attributes to determine if a computer account is stale: pwdLastSet and lastLogonTimeStamp. Domains that are not configured to the Windows Server 2003 Domain Functional Level use the pwdLastSet attribute. However, domains configured to the Windows Server 2003 Domain Functional Level or later should use the lastLogonTimeStamp attribute. What you may not know is that a Failover Cluster (CNO and VCO) does not update the lastLogonTimeStamp the same way a real computer does.

Cluster updates the lastLogonTimeStamp when it brings a clustered network name resource online. Once online, it caches the authentication token. Therefore, a clustered network name resource working in production for months will never update the lastLogonTimeStamp. This appears as a stale computer account to the AD administrator. Being a good citizen, the AD administrator deletes the stale computer account that has not logged on in months. Oops.

The Solution

There are few things that you can do to avoid this situation.

  • Use the servicePrincipalName attribute in addition to the lastLogonTimeStamp attribute when determining stale computer accounts (see the sketch after this list). If any variation of MSClusterVirtualServer appears in this attribute, then leave the computer account alone and consult with the cluster administrator.
  • Encourage the Cluster administrator to use the -CleanupAD switch (for example, on the Remove-Cluster Windows PowerShell cmdlet) to delete the computer accounts they are not using after they destroy a cluster.
  • If you are using Windows Server 2008 R2, then consider implementing the Active Directory Recycle Bin. The concept is identical to the recycle bin for the file system, but for AD objects. The following ASKDS blogs can help you evaluate if AD Recycle Bin is a good option for your environment.
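Here is a minimal PowerShell sketch of that first bullet; it assumes the Active Directory module is available, and it only reports -- it never deletes anything:

Import-Module ActiveDirectory

# Flag computer accounts whose SPNs mark them as cluster objects, so a
# stale-account sweep knows to leave them alone.
Get-ADComputer -Filter * -Properties servicePrincipalName,lastLogonTimestamp |
    Where-Object { $_.servicePrincipalName -match 'MSClusterVirtualServer' } |
    Select-Object Name,DistinguishedName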

Mike "Four out of Five AD admins recommend ASKDS" Stephens

Friday Mail Sack: Super Slo-Mo Edition


Hello folks, Ned here again with another Mail Sack. Before I get rolling though, a quick public service announcement:

Plenty of you have downloaded the Windows 8 Developer Preview and are knee-deep in the new goo. We really want your feedback, so if you have comments, please use one of the following avenues:

I recommend sticking to IT Pro features; the consumer side’s covered and the biggest value is your Administrator experience. The NDA is not off - I still cannot comment on the future of Windows 8 or tell you if we already have plans to do X with Y. This is a one-way channel from you to us (to the developers).

Cool? On to the sack. This week we discuss:

Shake it.

Question

We were chatting here about password synchronization tools that capture password changes on a DC and send the clear text password to some third party app. I consider that a security risk...but then someone asked me how the password is transmitted between a domain member workstation and a domain controller when the user performs a normal password change operation (CTRL+ALT+DEL and Change Password). I suppose the client uses some RPC connection, but it would be great if you could point me to a reference.

Answer

Windows can change passwords many ways - it depends on the OS and the component in question.

1. For the specific case of using CTRL+ALT+DEL because your password has expired or you just felt like changing your password:

If you are using a modern OS like Windows 7 with AD, the computer uses the Kerberos protocol end to end. This starts with a normal AS_REQ logon, but to a special service principal name of kadmin/changepw, as described in http://www.ietf.org/rfc/rfc3244.txt.

The computer first contacts a KDC over port 88, then communicates over port 464 to send along the special AP_REQ and AP_REP. You are still using Kerberos cryptography and sending an encrypted payload containing a KRB_PRIV message with the password. Therefore, to get to the password, you have to defeat Kerberos cryptography itself, which means defeating the crypto and defeating the key derived from the cryptographic hash of the user's original password. Which has never happened in the history of Kerberos.

image

The parsing of this kpasswd traffic is currently broken in NetMon's latest public parsers, but even when you parse it in WireShark, all you can see is the encryption type and a payload of encrypted goo. For example, here is that Windows 7 client talking to a Windows Server 2008 R2 DC, which means AES-256:

image
Aka: Insane-O-Cryption ™

On the other hand, if using a crusty OS like Windows XP, you end up using a legacy password mechanism that worked with NT 4.0 – in this case SamrUnicodeChangePasswordUser2 (http://msdn.microsoft.com/en-us/library/cc245708(v=PROT.10).aspx).

XP also supports the Kerberos change mechanism, but by default uses NTLM with CTRL+ALT+DEL password changes. Witness:

image

This uses “RPC over SMB with Named Pipes” with RPC packet privacy. You are using NTLM v2 by default (unless you set LMCompatibility unwisely) and you are still double-protected (the payload and packets), which makes it relatively safe. Definitely not as safe as Win7 though – just another reason to move forward.

image

You can disable NTLM in the domain if you have Win2008 R2 DCs and XP is smart enough to switch to using Kerberos here:

image

... but you are likely to break many other apps. Better to get rid of Windows XP.

2. A lot of administrative code uses SamrSetInformationUser2, which does not require knowing the user’s current password (http://msdn.microsoft.com/en-us/library/cc245793(v=PROT.10).aspx). For example, when you use NET USER to change a domain user’s password:

image

This invokes SamrSetInformationUser2 to set Internal4InformationNew data:

image

So, doubly protected (a cryptographically generated, key-signed hash covered by an encrypted payload). This is also “RPC over SMB using Named Pipes”:

image

The crypto for the encrypted payload is derived from a key signed using the underlying authentication protocol, seen from a previous session setup frame (negotiated as Kerberos in this case):

image

3. The legacy mechanisms to change a user password are NetUserChangePassword (http://msdn.microsoft.com/en-us/library/windows/desktop/aa370650(v=vs.85).aspx) and IADsUser::ChangePassword (http://msdn.microsoft.com/en-us/library/windows/desktop/aa746341(v=vs.85).aspx)

4. A local user password change usually involves SamrUnicodeChangePasswordUser2, SamrChangePasswordUser, or SamrOemChangePasswordUser2 (http://msdn.microsoft.com/en-us/library/cc245705(v=PROT.10).aspx).

There are other ways but those are mostly corner-case.

Note: In my examples, I am using the most up to date Netmon 3.4 parsers from http://nmparsers.codeplex.com/.

Question

If I try to remove the AD Domain Services role using ServerManager.msc, it blocks me with this message:

image

But if I remove the role using Dism.exe, it lets me continue:

image

This completely hoses the DC and it no longer boots normally. Is this a bug?

And - hypothetically speaking, of course - how would I fix this DC?

Answer

Don’t do that. :)

Not a bug, this is expected behavior. Dism.exe is a pure servicing tool; it knows nothing more of DCs than the Format command does. ServerManager and servermanagercmd.exe are the tools that know what they are doing.
Update: Although as Artem points out in the comments, we want you to use the Server Manager PowerShell and not servermanagercmd, which is on its way out.

To fix your server, pick one:

  • Boot it into DS Repair Mode with F8 and restore your system state non-authoritatively from backup (you can also perform a bare metal restore if you have that capability - no functional difference in this case). If you do not have a backup and this is your only DC, update your résumé.
  • Boot it into DS Repair Mode with F8 and use dcpromo /forceremoval to finish what you started. Then perform metadata cleanup. Then go stand in the corner and think about what you did, young man!

Question

We are getting Event ID 4740s (account lockout) for the AD Guest account throughout the day, which is raising alerts in our audit system. The Guest account is disabled, expired, and even renamed. Yet various clients keep locking out the account and creating the 4740 event. I believe I've traced it back to the occasional attempt of a local account attempting to authenticate to the domain. Any thoughts?

Answer

You'll see that when someone has set a complex password on the Guest account, using NET USER for example, rather than having it be the null default. The clients never know what the guest password is; they always assume it's null like the default -- so if you set a password on it, they will fail. Fail enough and you lock out the account (unless you turn that policy off and replace it with intrusion detection and two-factor auth). Set it back to null and you should be OK. As you suspected, there are a number of times when Guest is used as part of a "well, let's try that" algorithm:

Network access validation algorithms and examples for Windows Server 2003, Windows XP, and Windows 2000

To set it back you just use the Reset Password menu in Dsa.msc on the guest account, making sure not to set a password and clicking ok. You may have to adjust your domain password policy temporarily to allow this.
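If you prefer the command line, a rough equivalent from an elevated prompt is below -- treat it as a sketch, and note it assumes your (temporarily adjusted) password policy permits a blank password:

net user guest "" /domain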

As for why it's "locking out" even though it's disabled and renamed:

  • It has a well-known SID (S-1-5-21-domain-501) so renaming doesn’t really do anything except tick a checkbox on some auditor's clipboard
  • Disabled accounts can still lock out if you keep sending bad passwords to them. Usually no one notices though, and most people are more concerned about the "account is disabled" message they see first.

Question

What are the steps to change the "User Account" password set when the Network Device Enrollment Service (NDES) is installed?

Answer

When you first install the Network Device Enrollment Service (NDES), you have the option of setting the identity under which the application pool runs to the default application pool identity or to a specific user account. I assume that you selected the latter. The process to change the password for this user account requires two steps -- with 27 parts (not really…).

  1. First, you must reset the user account's password in Active Directory Users and Computers.

  2. Next, you must change the password configured in the application pool Advanced Settings on the NDES server.

a. In IIS manager, expand the server name node.

b. Click on Application Pools.

c. On the right, locate and highlight the SCEP application pool.

image

d. In the Action pane on the right, click on Advanced Settings....

e. Under Process Model click on Identity, then click on the … button.

image

f. In the Application Pool Identity dialog box, select Custom account and then click on Set….

g. Enter the custom application pool account name, and then set and confirm the password. Click Ok, when finished.

image

h. Click Ok, and then click Ok again.

i. Back on the Application Pools page, verify that SCEP is still highlighted. In the Action pane on the right, click on Recycle….

j. You are done.

Normally, you would have to be concerned about resetting the password for any service account to which digital certificates have been assigned. This is because resetting the password can result in the account losing access to the private keys associated with those certificates. In the case of NDES, however, the certificates used by the NDES service are actually stored in the local computer's Personal store, and the custom application pool identity only has read access to those keys. Resetting the password of the custom application pool account will have no impact on the master key used to protect the NDES private keys.

[Courtesy of Jonathan, naturally - Neditor]

Question

If I have only one domain in my forest, do I need a Global Catalog? Plenty of documents imply this is the case.

Answer

All those documents saying "multi-domain only" are mistaken. You need GCs - even in a single-domain forest - for the following:

(Update: Correction on single-domain forest logon made, thanks for catching that Yusuf! I also added a few more breakage scenarios)

  • Perversely, if you have enabled IgnoreGCFailures (http://support.microsoft.com/kb/241789): turning it on removes universal groups from the user security token when no GC is available, meaning users will log on but not be able to access resources they accessed fine previously.
  • If your users logon with UPNs and try to change their password (they can still logon in a single domain forest with UPN or NetBiosDomain\SamAccountName style logons).
  • Even if you use Universal Group Membership Caching to avoid the need for a GC in a site, that DC needs a GC to update the cache.
  • MS Exchange is deployed (all versions -- Exchange services won't even start without a GC).
  • Using the built-in Find in the shell to search AD for published shares, published DFS links, published printers, or any object picker dialog that provides the "entire directory" option will fail.
  • DPM agent installation will fail.
  • AD Web Services (aka AD Management Gateway) will fail.
  • CRM searches will fail.
  • Probably other third parties of which I'm not aware.

We stopped recommending that customers use only handfuls of GCs years ago - if you get an ADRAP or call MS support, we will recommend you make all DCs GCs, unless you have an excellent reason not to. Our BPA tool states that you should have at least one GC per AD site: http://technet.microsoft.com/en-us/library/dd723676(WS.10).aspx.
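A quick way to see where you stand today -- a sketch assuming the Active Directory PowerShell module is available:

Get-ADDomainController -Filter * | Sort-Object Site | Format-Table Name,Site,IsGlobalCatalog -AutoSize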

Question

If I use DFSR to replicate a folder containing symbolic links, will this replicate the source files or the actual symlinks? The DFSR FAQ says symlink replication is supported under certain circumstances.

Answer

The symlink replicates; however, the underlying data does not replicate just because there is a symlink. If the data is not stored within the RF, you end up with a replicated symlink to nowhere:

Server 1, replicating a folder called c:\unfiltersub. Note how the symlink points to a file that is not in the scope of replication:

image

Server 2, the symlink has replicated - but naturally, it points to an un-replicated file. Boom:

image

If the source data is itself replicated, you’re fine. There’s no real way to guarantee that though, except preventing users from creating files outside the RF by using permissions and FSRM screens. If your end users can only access the data through a share, they are in good shape. I'd imagine they are not the ones creating symlinks though. ;-)

Question

I read your post on career development. There are many memory techniques and I know everyone is different, but what do you use?

[A number of folks asked this question - Neditor]

Answer

When I was younger, it just worked - if I was interested in it, I remembered it. As I get older and burn more brain cells though, I find that my best memory techniques are:

  • Periodic skim and refresh. When I have learned something through deep reading and hands-on work, I try to skim through core topics at least once a year. For example, I force myself to scan the diagrams in all the Win2003 Technical Reference A-Z sections, and if I can’t remember what a diagram is saying, I make myself read that section in detail. I don’t let myself get too stale on anything and try to jog it often.
  • Mix up the media. When learning a topic, I read, find illustrations, and watch movies and demos. When there are no illustrations, I use Visio to make them for myself based on reading. When there are no movies, I make myself demo the topics. My brain seems to retain more info when I hit it with different styles on the same subject.
  • I teach and publicly write about things a lot. Nothing hones your memory like trying to share info with strangers, as the last thing I want is to look like a dope. It makes me prepare and check my work carefully, and that natural repetition – rather than forced “read flash cards”-style repetition – really works for me. My brain runs best under pressure.
  • Your body is not a temple (of Gozer worshipers). Something of a cliché, but I gobble vitamins, eat plenty of brain foods, and work out at least 30 minutes every morning.

I hope this helps and isn’t too general. It’s just what works for me.

Other Stuff

Have $150,000 to spend on a camera, a clever director who likes FPS gaming, and some very fit paintballers? Go make a movie better than this. Watch it multiple times.

image
Once for the chat log alone

Best all-around coverage of the Frankfurt Auto Show here, thanks to Jalopnik.

image
Want!

The supposedly 10 Coolest Death Scenes in Science Fiction History. But any list not including Hudson’s last moments in Aliens is fail.

If it’s true… holy crap! Ok, maybe it wasn’t true. Wait, HOLY CRAP!

So many awesome things combined.

Finally, my new favorite time waster is Retronaut. How can you not like a website with things like “Celebrities as Russian Generals”.

image
No, really.

Have a nice weekend folks,

- Ned “Oh you want some of this?!?!” Pyle

Friday Mail Sack: Guest Reply Edition


Hi folks, Ned here again. This week we talk:

Let's gang up.

Question

We plan to migrate our Certificate Authority from single-tier online Enterprise Root to two-tier PKI. We have an existing smart card infrastructure. TechNet docs don’t really speak to this scenario in much detail.

1. Does migration to a 2-tier CA structure require any customization?

2. Can I keep the old CA?

3. Can I create a new subordinate CA under the existing CA and take the existing CA offline?

Answer

[Provided by Jonathan Stephens, the Public Keymaster- Editor]

We covered this topic in a blog post, and it should cover many of your questions: http://blogs.technet.com/b/askds/archive/2010/08/23/moving-your-organization-from-a-single-microsoft-ca-to-a-microsoft-recommended-pki.aspx.

Aside from that post, you will also find the following information helpful: http://blogs.technet.com/b/pki/archive/2010/06/19/design-considerations-before-building-a-two-tier-pki-infrastructure.aspx.

To your questions:

  1. While you can migrate an online Enterprise Root CA to an offline Standalone Root CA, that probably isn't the best decision in this case with regard to security. Your current CA has issued all of your smart card logon certificates, which may have been fine when that was all you needed, but it certainly doesn't comply with best practices for a secure PKI. The root CA of any PKI should be long-lived (20 years, for example) and should only issue certificates to subordinate CAs. In a 2-tier hierarchy, the second tier of CAs should have much shorter validity periods (5 years) and is responsible for issuing certificates to end entities. In your case, I'd strongly consider setting up a new PKI and migrating your organization over to it. It is more work at the outset, but it is a better decision long term.
  2. You can keep the currently issued certificates working by publishing a final, long-lived CRL from the old CA. This is covered in the first blog post above. This would allow you to slowly migrate your users to smart card logon certificates issued by the new PKI as the old certificates expired. You would also need to continue to publish the old root CA certificate in the AD and in the Enterprise NTAuth store. You can see these stores using the Enterprise PKI snap-in: right-click on Enterprise PKI and select Manage AD Containers. The old root CA certificate should be listed in the NTAuthCertificates tab, and in the Certificate Authorities Container tab. Uninstalling the old CA will remove these certificates; you'll need to add them back.
  3. You can't take an Enterprise CA offline. An Enterprise CA requires access to Active Directory in order to function. You can migrate an Enterprise CA to a Standalone CA and take that offline, but, as I've said before, that really isn't the best option in this case.

Question

Are there any know issues with P2Ving ADAM/AD LDS servers?

Answer

[Provided by Kim Nichols, our resident ADLDS guru'ette - Editor]

No problems as far as we know. The same rules apply as P2V’ing DCs or other roles; make sure you clean up old drivers and decommission the physicals as soon as you are reasonably confident the virtual is working. Never let them run simultaneously. All the “I should have had a V-8” stuff.

Considering how simple it is to create an ADLDS replica, it might be faster and "cleaner" to create a new virtual machine, install and replicate ADLDS to it, then rename the guest and throw away the old physical; if ADLDS was its only role, naturally.

Question

[Provided by Fabian Müller, clever German PFE - Editor]

When using production delegation in AGPM, we can grant permissions for editing group policy objects in the production environment. But these permissions will be written to all deployed GPOs, not for specific ones. GPMC makes it easy to set “READ” and “APPLY” permissions on a GPO, but I cannot find a security filtering switch in AGPM. So how can we manage the security filtering on group policies without setting the same ACL on all deployed policies?

Answer

OK, granting “READ” and “APPLY” permissions -- that is, managing security filtering -- in AGPM is not that obvious. Do it like this in the Change Control panel of AGPM:

  • Check out the appropriate Group Policy Object and provide a brief overview of the changes to be made in the “comments” window, e.g. “Add important security filtering ACLs for group XYZ, dude!”
  • Edit the checked-out GPO

At the top of the Group Policy Management Editor, click “Action” –> “Properties”:

image

  • Change to “Security” tab and provide your settings for security filtering:

image

  • Close the Group Policy Management Editor and Check-in the policy (again with a good comment)
  • Once everything is done, you can safely “Deploy” the just-edited GPO – the security filter is now in place in production:

image

Note 1: Be aware that you won’t find any information regarding the security filtering change in the AGPM history of the edited Group Policy Object. There is nothing in the HTML reports that refers to security filtering changes. That’s why you should provide a good explanation of your changes during the “check-in” and “check-out” phases:

image

image

Note 2: Be careful with “DENY” ACEs using AGPM – they might get removed. See the following blog for more information on that topic: http://blogs.technet.com/b/grouppolicy/archive/2008/10/27/agpm-3-0-doesnt-preserve-all-the-deny-aces.aspx

Question

I have one Windows Server 2003 IIS machine with two web applications, each in its own application pool. How can I register SPNs for each application?

Answer

[This one courtesy of Rob Greene, the Abominable Authman - Editor]

There are a couple of options for you here.

  1. You could address each web site on the same server with different host names.  Then you can add the specific HTTP SPN to each application pool account as needed.
  2. You could address each web site with a unique port assignment on the web server.  Then you can add the specific HTTP SPN with the port attached like http/myweb.contoso.com:88
  3. You could use the same account to run all the application pool accounts on the same web server.
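For options 1 and 2, the registrations would look something like this setspn.exe sketch -- the host names, port, and account names here are invented:

setspn -A http/app1.contoso.com CONTOSO\svcPool1
setspn -A http/app2.contoso.com CONTOSO\svcPool2

rem Option 2: same host name, unique port per web site
setspn -A http/myweb.contoso.com:88 CONTOSO\svcPool2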

NOTE: If you choose option 1 or 2, you have to be careful about Internet Explorer behaviors. If you choose a unique host name per web site, then you will need to make sure to use HOST (A) records in DNS, or put a registry key in place on all workstations if you choose CNAME records. If you choose a unique port for each web site, you will need to put a registry key in place on all workstations so that they send the port number in the TGS SPN request.

http://blogs.technet.com/b/askds/archive/2009/06/22/internet-explorer-behaviors-with-kerberos-authentication.aspx

Question

Comparing AGPM-controlled GPOs within the same domain is no problem at all – but if the AGPM server serves more than one domain, how can I compare GPOs that are hosted in different domains using the AGPM difference report?

Answer

[Again from Fabian, who was really on a roll last week - Editor]

Since AGPM 4.0 we provide the ability to export and import Group Policy Objects using AGPM. What you have to do is:

  • To export one of the GPOs from domain 1…:

image

  • … and import the *.cab to domain 2 using the AGPM GPO import wizard (right-click on an empty area in AGPM Contents—> Controlled tab and select “New Controlled GPO…”):

image

image

  • Now you can simply compare those objects using difference report:

image

[Woo, finally some from Ned - Editor]

Question

When I use the Windows 7 (RSAT) version of AD Users and Computers to connect to certain domains, I get error "unknown user name or bad password". However, when I use the XP/2003 adminpak version, no errors for the same domain. There's no way to enter a domain or password.

Answer

ADUC in Vista/2008/7/R2 does some group membership and privilege checking when it starts that the older ADUC never did. You’ll get the logon failure message for any domain you are not a domain admin in, for example. The legacy ADUC is probably broken for that account as well – it’s just not telling you.

image

Question

I have 2 servers replicating with DFSR, and the network cable between them is disconnected. I delete a file on Server1, while the equivalent file on Server2 is modified. When the cable is re-connected, what is the expected behavior?

Answer

Last updater wins, even if it's a modification of an ostensibly deleted file. If the file was deleted first on server 1 and modified later on server 2, it would replicate back to server 1 with the modifications once the network reconnected. If it had been deleted later than the modification, that “last write” would win and the file would be deleted from the other server once the network resumed.

More info on DFSR conflict handling here http://blogs.technet.com/b/askds/archive/2010/01/05/understanding-dfsr-conflict-algorithms-and-doing-something-about-conflicts.aspx

Question

Is there any automatic way to delete stale user or computer accounts? Something you turn on in AD?

Answer

Nope, not automatically; you have to create a solution to detect the age and disable or delete stale accounts. This is a very dangerous operation - make sure you understand what you are getting yourself into. For example, report first, and only delete after a human has reviewed the list.
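Here is a sketch of that reporting step, assuming the Active Directory PowerShell module; the 180-day window is an arbitrary example, not a recommendation:

Import-Module ActiveDirectory

# Report computer accounts inactive for 180+ days - review before acting!
Search-ADAccount -AccountInactive -TimeSpan 180.00:00:00 -ComputersOnly | Select-Object Name,LastLogonDate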

Question

Whenever I try to use the PowerShell cmdlet Get-ACL against an object in AD, I always get an error like "Cannot find path ou=xxx,dc=xxx,dc=xxx because it does not exist". But it does!

Answer

After you import the ActiveDirectory module, but before you run your commands, run:

CD AD:

Get-Acl won’t work until you change to the magical “active directory drive”.
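Putting it together -- a quick sketch, with an invented OU distinguished name:

Import-Module ActiveDirectory
cd AD:
Get-Acl 'ou=Sales,dc=contoso,dc=com' | Format-List Owner,Access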

Question

I've read the Performance Tuning Guidelines for Windows Server, and I wonder if all the SMB server tuning parameters (AsyncCredits, MinCredits, MaxCredits, etc.) also work (or help) for DFSR. Also, do you know what the limit is for SMB Asynchronous Credits? The document doesn’t say.

Answer

Nope, they won’t have any effect on DFSR – it does not use SMB to replicate files. SMB is only used by the DFSMGMT.MSC if you ask it to create a replicated folder on another server during RF setup. More info here:

Configuring DFSR to a Static Port - The rest of the story - http://blogs.technet.com/b/askds/archive/2009/07/16/configuring-dfsr-to-a-static-port-the-rest-of-the-story.aspx

That AsynchronousCredits SMB value does not have a true maximum, other than the fact that it is a DWORD and cannot exceed 4,294,967,295 (i.e. 0xffffffff). Its default value on Windows Server 2008 and 2008 R2 is 512; on Vista/7, it's 64.

HOWEVER!

As KB938475 (http://support.microsoft.com/kb/938475) points out, adjusting these defaults comes at the cost of paged pool (Kernel) memory. If you were to increase these values too high, you would eventually run out of paged pool and then perhaps hang or crash your file servers. So don't go crazy here.

There is no "right" value to set - it depends on your installed memory, if you are using 32-bit versus 64-bit (if 32-bit, I would not touch this value at all), the number of clients you have connecting, their usage patterns, etc. I recommend increasing this in small doses and testing the performance - for example, doubling it to 1024 would be a fairly prudent test to start.

Other Stuff

Happy Birthday to all US Marines out there, past and present. I hope you're using Veterans Day to sleep off the hangover. I always assumed that's why they made it November 11th, not that whole WW1 thing.

Also, happy anniversary to Jonathan, who has been a Microsoft employee for 15 years. In keeping with the tradition, he had 15 pounds of M&Ms for the floor, which, in case you’re wondering, fills a salad bowl. Which around here means:

image

Two of the most awesome things ever – combined:

A great baseball story about Lou Gehrig, Kurt Russell, and a historic bat.

Off to play some Battlefield 3. No wait, Skyrim. Ah crap, I mean Call of Duty MW3. And I need to hurry up as Arkham City is coming. It's a good time to be a PC gamer. Or Xbox, if you're into that sorta thing.

 

Have a nice weekend folks,

 - Ned "and Jonathan and Kim and Fabian and Rob" Pyle

Friday Mail Sack: Best Post This Year Edition


Hi folks, Ned here and welcoming you to 2012 with a new Friday Mail Sack. Catching up from our holiday hiatus, today we talk about:

So put down that nicotine gum and get to reading!

Question

Is there an "official" stance on removing built-in admin shares (C$, ADMIN$, etc.) in Windows? I’m not sure this would make things more secure or not. Larry Osterman wrote a nice article on its origins but doesn’t give any advice.

Answer

The official stance is from the KB that states how to do it:

Generally, Microsoft recommends that you do not modify these special shared resources.

Even better, here are many things that will break if you do this:

Overview of problems that may occur when administrative shares are missing
http://support.microsoft.com/default.aspx?scid=kb;EN-US;842715

That’s not a complete list; it wasn’t updated for Vista/2008 and later. It’s so bad though that there’s no point, frankly. Removing these shares does not increase security, as only administrators can use those shares and you cannot prevent administrators from putting them back or creating equivalent custom shares.

This is one of those “don’t do it just because you can” customizations.

Question

The Windows PowerShell Get-ADDomainController cmdlet finds DCs, but doesn't return much actual attribute data about them. The examples on TechNet are not great. How do I get it to return useful info?

Answer

You have to use another cmdlet in tandem, without pipelining: Get-ADComputer. The Get-ADDomainController cmdlet is good mainly for searching. The Get-ADComputer cmdlet, on the other hand, does not accept pipeline input from Get-ADDomainController. Instead, you use a pseudo “nested function” to first find the PDC, then get data about that DC. For example (this is all one command, wrapped):

get-adcomputer (get-addomaincontroller -Discover -Service "PrimaryDC").name -property * | format-list operatingsystem,operatingsystemservicepack

When you run this, PowerShell first processes the commands within the parentheses, which finds the PDC. Then it runs get-adcomputer, using the property of “Name” returned by get-addomaincontroller. Then it passes the results through the pipeline to be formatted. So it’s 123.


Voila. Here I return the OS of the PDC, all without having any idea which server actually holds that role:

clip_image002[6]

Moreover, before the Internet clubs me like a baby seal: yes, a more efficient way to return data is to ensure that the –property list contains only those attributes desired:

image
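In case that screenshot doesn’t survive syndication: the leaner version of the command just asks for the two attributes instead of *, something like this:

get-adcomputer (get-addomaincontroller -Discover -Service "PrimaryDC").name -property operatingsystem,operatingsystemservicepack | format-list operatingsystem,operatingsystemservicepack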

Get-ADDomainController can find all sorts of interesting things via its –service argument:

PrimaryDC
GlobalCatalog
KDC
TimeService
ReliableTimeService
ADWS

The Get-ADDomain cmdlet can also find FSMO role holders and other big-picture domain information, such as the RID Master that you need to monitor.
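For example, a quick way to list the domain-wide role holders (these property names are what Get-ADDomain returns; Get-ADForest covers the two forest-wide FSMO roles):

# Domain-level FSMO role holders, including that RID Master
Get-ADDomain | Format-List PDCEmulator,RIDMaster,InfrastructureMaster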

Question

I know about Kerberos “token bloat” with user accounts that are members of too many groups. Does this also affect computers added to too many groups? What would be some practical effects of that? We want to use a lot of them in the near future for some application … stuff.

Answer

Yes, things will break. To demonstrate, I used PowerShell to create 2000 groups in my domain and added a computer named “7-01” to all of them:

image
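Since that screenshot may not survive syndication, here is a rough sketch of the lab setup I used - group names invented, and obviously do not run this anywhere you care about:

# Create 2000 groups and add computer 7-01 to every one of them (lab only!)
Import-Module ActiveDirectory
$computer = Get-ADComputer "7-01"
1..2000 | ForEach-Object {
    New-ADGroup -Name "BloatGroup$_" -GroupScope Global
    Add-ADGroupMember -Identity "BloatGroup$_" -Members $computer
}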

I then restarted the 7-01 computer. Uh oh, the System event log is not pleased. At this point, 7-01 is no longer applying computer group policy, running startup scripts, or allowing any of its services to log on remotely to DCs:

image 

Oh, and check out this gem:

image

I’m sure no one will go on a wild goose chase after seeing that message. Applications will be freaking out even more, likely with the oh-so-helpful error 0x80090350:

“The system detected a possible attempt to compromise security. Please ensure that you can contact the server that authenticated you.”

Don’t do it. MaxTokenSize is probably in your future if you do, and it has limits that you cannot design your way out of. IT uniqueness is bad.

Question

We have XP systems using two partitions (C: and D:) migrating to Windows 7 with USMT. The OS is on C: and the user profiles are on D:. We’ll use that D: partition to hold the USMT store. After migration, we’ll remove the second partition and expand the first to use the freed-up space.

When restoring via loadstate, will the user profiles end up on C: or on D:? If the profiles end up on D:, we obviously will not be able to delete the second partition, and we want to stop using one regardless.

Answer

You don’t have to do anything; it just works. Because the new profile destination is on C, USMT just slots everything in there automagically :). The profiles will be on C and nothing will be on D except the store itself and any non-profile folders*:

clip_image001
XP, before migrating

clip_image001[5]
Win7, after migrating

If users have any non-profile folders on D:, that will require a custom rerouting XML to ensure they are moved to C: during loadstate and not obliterated when D: is deleted later. Or just add a MOVE step to whatever script runs your DISKPART commands to expand the partition, as sketched below.
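For instance, a hedged sketch of that cleanup step, assuming a hypothetical non-profile folder named D:\TeamData and a diskpart answer file you already maintain:

# Rescue non-profile data from D: before the partition goes away
Move-Item -Path "D:\TeamData" -Destination "C:\TeamData"
# Then delete D: and extend C: with your existing diskpart script
diskpart.exe /s .\expand-c.txt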

Question

Should we stop the DFSR service before performing a backup or restore?

Answer

Manually stopping the DFSR service is not recommended. When backing up using the DFSR VSS Writer – which is the only supported way – replication is stopped automatically, so there’s no reason to stop the service or manually alter replication:

Event ID=1102
Severity=Informational
The DFS Replication service has temporarily stopped replication because another
application is performing a backup or restore operation. Replication will resume
after the backup or restore operation has finished.

Event ID=1104
Severity=Informational
The DFS Replication service successfully restarted replication after a backup
or restore operation.

Another bit of implied evidence – Windows Server Backup does not stop the service.

Stopping the DFSR service for extended periods leaves you open to the risk of a USN journal wrap. And what if someone/something thinks that the service being stopped is “bad” and starts it up in the middle of the backup? Probably nothing bad happens, but certainly nothing good. Why risk it?

Question

In an environment where AGPM controls all GPOs, what is the best practice when application setup routines make edits "under the hood" to GPOs, such as the Default Domain Controllers GPO? For example, Exchange setup makes changes to User Rights Assignment (SeSecurityPrivilege). Obviously, if this setup process makes such edits on the live GPO in SYSVOL, the changes will take effect, only to have those critical edits lost and overwritten the next time an admin redeploys with AGPM.

Answer

[via Fabian “Wunderbar” Müller  – Ned]

From my point of view:

1. The Default Domain and Default Domain Controllers Policies should be edited very rarely. Manual changes, as well as automated changes (e.g. by the mentioned Exchange setup), should be well known, so the workaround in 2) should be feasible.

2. After those planned changes are performed, use “import from production” to bring the production GPO into the AGPM archive so that the change is reflected in AGPM. Alternatively, you could periodically import the default policies from production, or implement a manual / human process that requires an “import from production” before any change to these policies is made using AGPM.

Not a perfect answer, but manageable.

Question

In testing the rerouting of folders, I took this example from TechNet and placed it in a separate custom.xml.  When using this custom.xml along with the other defaults (migdocs.xml and migapp.xml unchanged), the EngineeringDrafts folder is copied to %CSIDL_DESKTOP%\EngineeringDrafts, but there’s also a copy at C:\EngineeringDrafts on the destination computer.

I assume this is not expected behavior.  Is there something I’m missing?

Answer

Expected behavior, pretty well hidden though:

http://technet.microsoft.com/en-us/library/dd560751(v=WS.10).aspx

If you have an <include> rule in one component and a <locationModify> rule in another component for the same file, the file will be migrated in both places. That is, it will be included based on the <include> rule and it will be migrated based on the <locationModify> rule

That original rerouting article could state this more plainly, I think. Hardly anyone uses this relativemove operation; it’s very expensive in disk space, one of those “you can, but you shouldn’t” capabilities of USMT. The first example also has an invalid character in it (the apostrophe in “user’s” on line 12, position 91 - argh!).

Don’t just comment out those areas in migdocs though; you are then turning off most of the data migration. Instead, create a copy of the migdocs.xml and modify it to include your rerouting exceptions, then use that as your custom XML and stop including the factory migdocs.xml.

There’s an example attached to this blog post down at the bottom. Note the exclude in the System context and the include/modify in the user context:

image

image
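If the attachment ever disappears, the shape of those two pieces looks roughly like this: an exclude added to the System-context document component of your migdocs.xml copy, and a new User-context component that includes and reroutes the folder. Paths come from the TechNet example; treat this as illustrative of the pattern, not as the exact attached file:

<!-- Added inside the System-context document component of the migdocs.xml copy -->
<exclude>
  <objectSet>
    <pattern type="File">C:\EngineeringDrafts\* [*]</pattern>
  </objectSet>
</exclude>

<!-- New User-context component: include the folder and reroute it to the desktop -->
<component type="Documents" context="User">
  <displayName>EngineeringDrafts reroute</displayName>
  <role role="Data">
    <rules>
      <include>
        <objectSet>
          <pattern type="File">C:\EngineeringDrafts\* [*]</pattern>
        </objectSet>
      </include>
      <locationModify script="MigXmlHelper.RelativeMove('C:\EngineeringDrafts','%CSIDL_DESKTOP%\EngineeringDrafts')">
        <objectSet>
          <pattern type="File">C:\EngineeringDrafts\* [*]</pattern>
        </objectSet>
      </locationModify>
    </rules>
  </role>
</component>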

Don’t just modify the existing migdocs.xml and keep using it un-renamed either; that becomes a versioning nightmare down the road.

Question

I'm reading up on CAPolicy.inf files, and it looks like there is an error in the documentation that keeps being copied around. TechNet lists RenewalValidityPeriod=Years and RenewalValidityPeriodUnits=20 under the "Windows Server 2003" sample. This is the opposite of the Windows 2000 sample, and intuitively the "PeriodUnits" should be something like "Years" or "Weeks", while the "Period" would be an integer value. I see this on AskDS here and here also.

Answer

[via Jonathan “scissor fingers” Stephens  – Ned]

You're right that the two settings seem like they should be reversed, but unfortunately this is not correct. All of the *Period values can be set to Minutes, Hours, Days, Weeks, Months or Years, while all of the *PeriodUnits values should be set to some integer.

Originally, the two types of values were intended to be exactly what one intuitively believes they should be -- *PeriodUnits was to be Day, Weeks, Months, etc. while *Period was to be the integer value. Unfortunately, the two were mixed up early in the development cycle for Windows 2000 and, once the error was discovered, it was really too late to fix what is ultimately a cosmetic problem. We just decided to document the correct values for each setting. So in actuality, it is the Windows 2000 documentation that is incorrect as it was written using the original specs and did not take the switch into account. I’ll get that fixed.
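So a correct CAPolicy.inf fragment for a 20-year renewal looks like this (the two values are the ones confirmed above, under the standard [certsrv_server] section):

[certsrv_server]
; Period holds the unit word; PeriodUnits holds the number. Counterintuitive, but correct.
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=20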

Question

Is there a way to control the number, verbosity, or contents of the DFSR cluster debug logs (DfsrClus_nnnnn.log and DfsrClus_nnnnn.log.gz in %windir%\debug)?

Answer

Nope, sorry. It’s all statically defined:

  • Severity = 5
  • Max log messages per log = 10000
  • Max number of log files = 999

Question

In your previous article you say that any registry modifications should be completed with a resource restart (take the resource offline and bring it back online) instead of a direct service restart. However, the official whitepaper (on page 16) says the CA service should be restarted by using "net stop certsvc && net start certsvc".

Also, I want to clarify something about a clustered CA database backup and restore. Say the DB was damaged or destroyed, and I have a full backup of the CA DB. Before restoring, do I stop only the AD CS service resource (cluadmin.msc), or do I stop the CA service directly (net stop certsvc)?

Answer

[via Rob “there's a Squatch in These Woods” Greene  – Ned]

The CertSvc service has no idea that it belongs to a cluster.  That’s why you set up the CA as a generic service within Cluster Administrator and configure the CA registry hive there as well.

When you update the registry keys on the active CA cluster node, the cluster service is monitoring the registry key changes.  When the resource is taken offline, the cluster service makes a new copy of the registry keys so that the other node gets the update.  When you stop and start the CA service directly, the cluster service has no idea why the service stopped and started, since it was done outside of the cluster, and those registry key settings are never updated on the standby node. General guidance for clusters is to manage resource state (stop/start) within Cluster Administrator, not through Services.msc, NET STOP, SC, etc.
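On Windows Server 2008 R2 you can also manage that resource state from the FailoverClusters PowerShell module instead of clicking through cluadmin.msc; a minimal sketch, assuming your generic service resource is named "Certification Authority" (yours may be named differently):

Import-Module FailoverClusters
# Offline/online the resource so the cluster checkpoints the CA registry keys to the other node
Stop-ClusterResource -Name "Certification Authority"
Start-ClusterResource -Name "Certification Authority"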

As far as the CA Database restore: just logon to the Active CA node and run the certutil or CA MMC to perform the operation. There’s no need to touch the service manually.

Other stuff

The Microsoft Premier Field Organization has started a new blog that you should definitely be reading.

Welcome to your nightmare (Thanks Mark!)

Totally immature and therefore funny. Doubles as a gender test.

Speaking of George Lucas re-imaginings, check out this awesome shot-by-shot comparison of Raiders and 30 other previous adventure films:


Indy whipped first!

I am completely addicted to Panzer Corps; if you ever played Panzer General in the 90’s, you will be too.

Apropos throwback video gaming and even more re-imagining, here is Battlestar Galactica as a 1990’s RPG:

   
The mail sack becomes meta of meta of meta

Like Legos? Love Simon Pegg? This is for you.

Best sci-fi books of 2011, according to IO9.

What’s your New Year’s resolution? Mine is to stop swearing so much.

 

Until next time,

- Ned “$#%^&@!%^#$%^” Pyle

Friday Mail Sack: It’s a Dog’s Life Edition


Hi folks, Ned here again with some possibly interesting, occasionally entertaining, and always unsolicited Friday mail sack. This week we talk about:

Fetch!

Question

We use third-party DNS but used to have Windows DNS on domain controllers; that service has been uninstalled and all that remains are the partitions. According to KB835397, deleting the ForestDNSZones and DomainDNSZones partitions is not supported. Soon we will have removed the last few old domain controllers hosting those partitions and replaced them with Windows Server 2008 R2 DCs that never ran Windows DNS. Are we getting ourselves into trouble or making this environment unsupported?

Answer

You are supported. Don’t interpret the KB too narrowly; there’s a difference between deleting partitions that DNS is using and never creating them in the first place. If you are not using MS DNS and the zones don’t exist, nothing in Windows should care about them, and we are not aware of any problems.

This is more of a “cover our butts” article… we just don’t want you deleting partitions that you are actually using and naturally, we don’t rigorously test with non-MS DNS. That’s your job. ;-)

Question

When I run DCDIAG it returns all warning events for the system event log. I have a bunch of “expected” warnings, so this just clogs up my results. Can I change this behavior?

Answer

DCDIAG has no idea what the messages mean and has no way to control the output. You will need to suppress the events themselves in their own native fashion, if their application supports it. For example, if it’s a chatty combination domain controller/print server in a branch office that shows endless expected printer Warning messages, you’d use the steps here.

If your application cannot be controlled, there’s one (rather gross) alternative to make things cleaner though, and that’s to use the FIND command in a few pipelines to remove expected events. For example, here I always see this write cache warning when I boot this DC, and I don’t really care about it:

image

Since I don’t care about these entries, I can use pipelined FIND (with /v to drop matching lines) and narrow down the returned data. I probably don’t care about the time generated, since DCDIAG only shows the last 60 minutes, nor about the event string lines. So with that, I can use this single wrapped line in a batch file:

dcdiag /test:systemlog | find /I /v "eventid: 0x80040022" | find /I /v "the driver disabled the write cache on device" | find /i /v "event string:" | find /i /v "time generated:"

clip_image002
Whoops, I need to fix that user’s group memberships!

Voila. I still get most of the useful data and nothing about that write cache issue. Just substitute your own stuff.

See, I don’t always make you use Windows PowerShell for your pipelines. ツ

Question

If I walk into a new Windows Server 2008 AD environment cold and need to know if they are using DFSR or FRS for SYSVOL replication, what is the quickest way to tell?

Answer

Just run this DFSRMIG command:

dfsrmig.exe /getglobalstate

That tells you the current state of the SYSVOL DFSR topology and migration.

If it says:

  • “Eliminated”

… they are using DFSR for SYSVOL. It will show this message even if the domain was built from scratch with a Windows Server 2008 domain functional level or higher and never performed a migration; the tool doesn’t know how to say “they always used DFSR from day one”.

If it says:

  • “Prepared”
  • “Redirected”

… they are mid-migration and using both FRS and DFSR, favoring one or the other for SYSVOL.

If it says:

  • “Start”
  • “DFSR migration has not yet initialized”
  • “Current domain functional level is not Windows Server 2008 or above”

… they are using FRS for SYSVOL.

Question

When using the DFSR WMI namespace “root\microsoftdfs” and class “dfsrvolumeconfig”, I am seeing weird results for the volume path. On one server it’s the C: drive, but on another it just shows a wacky volume GUID. Why?

Answer

DFSR is replicating data under a mount point. You can see this with any WMI tool (surprise! here’s PowerShell) and then use mountvol.exe to confirm your theory. To wit:

image

image
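To reproduce what the screenshots show, something like the following works; I am assuming the property names VolumeGuid and VolumePath as exposed by the dfsrvolumeconfig class, so adjust if your output differs:

# Show each replicated volume's GUID and path, then eyeball the mount points
Get-WmiObject -Namespace "root\microsoftdfs" -Class dfsrvolumeconfig | Format-List VolumeGuid,VolumePath
mountvol.exe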

Question

I notice that the "dsquery user -inactive x" command returns a list of user accounts that have been inactive for x number of weeks, but not days.  I suspect that this lack of precision is related to this older AskDS post, where it is mentioned that the lastLogonTimestamp attribute is not terribly accurate. I was wondering what your thoughts on this were, and whether my only real recourse for precise auditing of inactive user accounts is parsing the Security logs of my DCs for user logon events.

Answer

Your supposition about DSQUERY is right. What's worse, that tool's inactive search does not even include users that have never logged on, so it's totally misleading. If you use the AD Administrative Center query for inactive accounts, it uses this LDAP syntax, which at least catches everyone (note that your lastLogonTimestamp UTC value would be different):

(&(objectCategory=person)(objectClass=user)(!userAccountControl:1.2.840.113556.1.4.803:=2)(|(lastLogonTimestamp<=129528216000000000)(!lastLogonTimestamp=*)))

You can lower the msDS-LogonTimeSyncInterval down to 1 day, which removes the randomization and gets you very close to that magic "exactness" (within 24 hours). But this will increase your replication load, perhaps significantly in a large environment with a lot of logon activity. Warren's blog post you mentioned describes how to do this. I’ve seen some pretty clever PowerShell techniques for this: here's one (untested, non-MS) example that could be easily adapted into native Windows AD PowerShell or just used as-is. Dmitry is a smart fella. If you do find scripts out there, make sure the author clearly understood Warren’s rules.

There is also the option - if you just care about users' interactive or runas logons and you have all Windows Vista or Windows 7 clients - to implement msDS-LastSuccessfulInteractiveLogonTime. The ups and downs of this are discussed here. That attribute is replicated normally and could be used as an LDAP query option.

Windows AD PowerShell has a nice built-in constructed property called “LastLogonDate”, which is the friendly date-time info converted from the gnarly UTC value. That might help in your scripting efforts.
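For example, a quick one-liner using that constructed property to list accounts idle for more than 90 days (never-logged-on accounts have a blank LastLogonDate, so they show up too, and remember the lastLogonTimestamp slop still applies):

Get-ADUser -Filter * -Properties LastLogonDate | Where-Object { $_.LastLogonDate -lt (Get-Date).AddDays(-90) } | Select-Object Name,LastLogonDate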

After all that, you are back to Warren's recommended use of security logs and audit collection services. Which is a good idea anyway. You don't get to be meticulous about just one aspect of security!

Question

I was reading your older blog post about setting legal notice text and had a few questions:

  1. Has Windows 7 changed to make this any easier or better?
  2. Any way to change the font or its size?
  3. Any way to embed URLs in the text so the user can see what they are agreeing to in more detail?

Answer

[Courtesy of that post’s author, Mike “DiNozzo” Stephens]

  1. No
  2. No
  3. No

:)

#3 is especially impossible. Just imagine what people would do to us if we allowed you to run Internet Explorer before you logged on!

image

 [The next few answers courtesy of Jonathan “Davros” Stephens. Note how he only ever replies with bad news… – Neditor]

Question

I have encountered the following issue with some of my users performing smart card logon from Windows XP SP3.

It seems that my users are able to logon using smart card logon even if the certificate on the user’s smart card was revoked.
Here are the tests we've performed:

  1. Verified that the CRL is accessible
  2. Smartcard logon with the working certificate
  3. Revoked the certificate + waited for the next CRL publish
  4. Verified that the new CRL is accessible and that the revoked certificate was present in the list
  5. Tested smartcard logon with the revoked certificate

We verified the presence of the following registry keys both on the client machine and on the authenticating DC:

HKEY_Local_Machine\System\CurrentControlSet\Services\KDC\CRLValidityExtensionPeriod
HKEY_Local_Machine\System\CurrentControlSet\Services\KDC\CRLTimeoutPeriod
HKEY_Local_Machine\System\CurrentControlSet\Control\LSA\Kerberos\Parameters\CRLTimeoutPeriod
HKEY_Local_Machine\System\CurrentControlSet\Control\LSA\Kerberos\Parameters\UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors

None of them were found.

Answer

First, there is an overlap built into CRL publishing. The old CRL remains valid for a time after the new CRL is published to allow clients/servers a window to download the new CRL before the old one becomes invalid. If the old CRL is still valid then it is probably being used by the DC to verify the smart card certificate.

Second, revocation of a smart card certificate is not intended to be usable as real-time access control -- not even with OCSP involved. If you want to prevent a user from logging on with the smart card, then the account should be disabled. That said, one possible hacky alternative that would take immediate effect would be to change the UPN of the user so it does not match the UPN on the smart card. With mismatched UPNs, implicit mapping of the smart card certificate to the user account would fail; the DC would have no way to determine which account it should authenticate, even assuming the smart card certificate verified successfully.

If you have Windows Server 2008 R2 DCs, you can disable the implicit mapping of smart card logon certificates to user accounts via the UPN in favor of explicit certificate mapping. That way, if a user loses his smart card and you want to make sure that that certificate cannot be used for authentication as soon as possible, remove it from the altSecurityIdentities attribute on the user object in AD. Of course, the tradeoff here is the additional management of updating user accounts before their smart cards can be used for logon.
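For reference, the explicit mapping itself is just an edit to that attribute. A hedged sketch - the user and the issuer/subject string here are invented, so build the real string from the actual certificate:

# Add an explicit Issuer+Subject mapping for a (hypothetical) user
Set-ADUser -Identity "kakers" -Add @{'altSecurityIdentities'='X509:<I>DC=com,DC=contoso,CN=ContosoCA<S>CN=Kim Akers'}
# Remove it later to block that card from mapping to the account
Set-ADUser -Identity "kakers" -Remove @{'altSecurityIdentities'='X509:<I>DC=com,DC=contoso,CN=ContosoCA<S>CN=Kim Akers'}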

Question

When using the SID cloning tools like sidhist.vbs in a Windows Server 2008 R2 domain, they always fail with error “Destination auditing must be enabled”. I verified that Account Management auditing is on as required, but then I also found that the newer Advanced Audit policy version of that setting is also on. It seems like the DSAddSIDHistory() API does not consider this new auditing sufficient? In my test environment everything works fine, but it does not use Advanced Auditing. I also found that if I set all Account Management advanced audit subcategories to enabled, it works.

Answer

It turns out that this is a known issue (it affects ADMT too). At this time, DsAddSidHistory() only works if it thinks legacy Account Management is enabled. You will either need to:

  • Remove the Advanced Auditing policy and force the destination computers to use legacy auditing by setting Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings to Disabled.
  • Set all Account Management advanced audit subcategories to enabled, as you found, which satisfies the SID cloning function.

We are making sure TechNet is updated to reflect this as well.  It’s not like Advanced Auditing is going to get less popular over time.
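If you go the second route, the switch is a single auditpol line on the destination DCs; enabling the whole Account Management category flips all of its subcategories at once:

auditpol /set /category:"Account Management" /success:enable /failure:enable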

Question

Enterprise and Datacenter editions of Windows Server support enforcing Role Separation based on the common criteria (CC) definitions.  But there doesn't seem to be any way to define the roles that you want to enforce.

CC Security Levels 1 and 2 only define two roles that need to be restricted (CA Administrator and Certificate Manager).  Auditing and Backup functions are handled by the CA administrator instead of dedicated roles.

Is there a way to enforce separation of these two roles without including the Auditor and Backup Operator roles defined in the higher CC Security Levels?

Answer

Unfortunately, there is no way to make exceptions to role separation. Basically, you have two options:

  1. Enable Role Separation and use different user accounts for each role.
  2. Do not enable Role Separation, and instead turn on CA Auditing to monitor actions taken on the CA.
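For completeness, option 1 is toggled with certutil; the value is documented, but as always, test first, and note the CA service must be restarted for it to take effect:

certutil -setreg CA\RoleSeparationEnabled 1
net stop certsvc && net start certsvc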

[Now back to Ned for the idiotic finish!]

Other Stuff

My latest favorite site is cubiclebot.com. Mainly because they lead me to things like this:


Boing boing boing

And this:


Wait for the pit!

Speaking of cool dogs and songs: Bark bark bark bark, bark bark bark-bark.

Game of Thrones season 2 is April 1st. Expect everyone to die, no matter how important or likeable their character. Thanks George!

At last, Ninja-related sticky notes.

For all the geek parents out there. My favorite is:

adorbz-ewok
For once, an Ewok does not enrage me

It was inevitable.

 

Finally: I am headed back to Chicagoland next weekend to see my family. If you are in northern Illinois and planning on eating at Slott’s Hots in Libertyville, Louie’s in Waukegan, or Leona’s in Chicago, gimme a wave. Yes, all I care about is the food. My wife only cares about the shopping; that’s why we’re on Michigan Avenue and why she cannot complain. You don’t know what it’s like living in Charlotte!! D-:

Have a nice weekend folks,

Ned “my dogs are not quite as athletic” Pyle
